Name: smithi182.front.sepia.ceph.com
Machine Type: smithi
Up: False
Locked: True
Locked Since: 2023-07-20 22:44:40.777325
Locked By: scheduled_yuriw@teuthology
OS Type: centos
OS Version: 8
Arch: x86_64
Description: Marked down by ceph-cm-ansible due to missing NVMe card 2023-07-20T22:59:57Z
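The lock state above can also be queried from the command line; a minimal sketch, assuming a teuthology checkout configured against the sepia lock server:

    # Show lock status, owner, and description for the node.
    # Assumes teuthology is installed and ~/.teuthology.yaml points at the
    # sepia lock server.
    teuthology-lock --list smithi182.front.sepia.ceph.com

    # Shorter one-line summary of the same node.
    teuthology-lock --brief smithi182.front.sepia.ceph.com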
Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7345412 2023-07-20 16:49:41 2023-07-20 18:27:51 2023-07-20 19:19:05 0:51:14 0:38:38 0:12:36 smithi main rhel 8.6 fs:workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/blogbench}} 3
Failure Reason:

Command failed on smithi138 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume getpath cephfs --group_name qa sv_0'
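This failure reason recurs across six fs:workload jobs on this page. The ceph CLI propagates the errno as its exit status, so status 2 (ENOENT) typically means the sv_0 subvolume or the qa group was not found. A minimal sketch of reproducing the check by hand on a cluster node with an admin keyring:

    # Confirm the group and subvolume exist before re-running getpath;
    # empty listings point at a failed 0-subvolume setup step.
    ceph fs subvolumegroup ls cephfs
    ceph fs subvolume ls cephfs --group_name qa

    # The failing call itself, in canonical argument order.
    ceph fs subvolume getpath cephfs sv_0 --group_name qa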

fail 7345364 2023-07-20 16:49:01 2023-07-20 17:45:31 2023-07-20 18:28:16 0:42:45 0:29:45 0:13:00 smithi main rhel 8.6 fs:workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/distributed export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason:

Command failed on smithi138 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume getpath cephfs --group_name qa sv_0'

fail 7345335 2023-07-20 16:48:35 2023-07-20 17:02:11 2023-07-20 17:45:11 0:43:00 0:33:09 0:09:51 smithi main rhel 8.6 fs:workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/distributed export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} 3
Failure Reason:

Command failed on smithi067 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume getpath cephfs --group_name qa sv_0'

dead 7345277 2023-07-20 16:12:36 2023-07-20 22:44:40 2023-07-20 23:08:22 0:23:42 0:07:11 0:16:31 smithi main centos 8.stream rados/singleton/{all/dump-stuck mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
Failure Reason:

{'Failing rest of playbook due to missing NVMe card'}
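This dead job matches the lock entry at the top of the page: ceph-cm-ansible marked smithi182 down when its NVMe card went missing. A hedged sketch of verifying the card from the node itself, assuming shell access:

    # Both commands should show an NVMe device on a healthy smithi node;
    # empty output is consistent with the ansible "missing NVMe card" failure.
    lspci | grep -i nvme
    ls /dev/nvme* 2>/dev/null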

pass 7345217 2023-07-20 16:11:46 2023-07-20 22:15:19 2023-07-20 22:44:32 0:29:13 0:21:47 0:07:26 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/dedup-io-mixed} 2
pass 7345130 2023-07-20 16:10:33 2023-07-20 21:25:53 2023-07-20 22:15:18 0:49:25 0:42:54 0:06:31 smithi main rhel 8.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 7345049 2023-07-20 16:09:27 2023-07-20 20:39:41 2023-07-20 21:25:56 0:46:15 0:33:58 0:12:17 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/redirect} 2
pass 7344932 2023-07-20 14:39:38 2023-07-20 19:20:06 2023-07-20 20:39:49 1:19:43 0:43:04 0:36:39 smithi main centos 8.stream fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-recovery} 2
pass 7344844 2023-07-20 14:38:32 2023-07-20 16:33:56 2023-07-20 17:05:55 0:31:59 0:23:07 0:08:52 smithi main rhel 8.4 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{no}} 3
pass 7344819 2023-07-20 14:38:13 2023-07-20 15:50:51 2023-07-20 16:32:11 0:41:20 0:23:50 0:17:30 smithi main centos 8.stream fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/volumes/{overrides test/misc}} 2
pass 7344770 2023-07-20 14:35:38 2023-07-20 15:49:09 1791 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7344754 2023-07-20 14:35:26 2023-07-20 14:35:39 2023-07-20 15:07:54 0:32:15 0:10:58 0:21:17 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e20180733e4b085d1abe55675eb8c828b0bfedec
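Teuthology queries shaman for prebuilt packages before installing anything, so when no build matching the sha1/distro/flavor tuple is ready, the job dies before any test runs. The same query can be issued by hand using the URL from the failure reason; an empty JSON list reproduces the error:

    # Returns a JSON list of matching builds; [] means nothing is ready
    # for this sha1 on ubuntu 22.04 x86_64 with the default flavor.
    curl -s 'https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e20180733e4b085d1abe55675eb8c828b0bfedec'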

pass 7344666 2023-07-20 08:37:50 2023-07-20 08:44:41 2023-07-20 09:28:48 0:44:07 0:33:23 0:10:44 smithi main centos 8.stream fs:mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distro$/{centos_8} workloads/cephfs-mirror-ha-workunit} 1
pass 7344532 2023-07-20 08:25:32 2023-07-20 09:50:57 2023-07-20 12:25:47 2:34:50 2:06:55 0:27:55 smithi main ubuntu 22.04 fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down pg-warn} tasks/{0-client 1-tests/fscrypt-ffsb}} 3
pass 7344509 2023-07-20 08:25:15 2023-07-20 09:23:33 2023-07-20 09:52:02 0:28:29 0:16:44 0:11:45 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/acls} 2
fail 7344483 2023-07-20 08:24:55 2023-07-20 08:25:41 2023-07-20 08:44:34 0:18:53 0:08:33 0:10:20 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed on smithi182 with status 1: 'sudo dnf -y copr enable ceph/el9'
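This job failed in setup rather than in the test itself: the run tried to enable the ceph/el9 Copr repository on a RHEL 8.6 host (note the row pairs a distro/{centos_latest} fragment with the rhel 8.6 OS columns, where the k-stock override wins), and dnf exited 1. A hedged sketch for reproducing the step by hand:

    # Re-run the failing setup step to capture dnf's error output; enabling
    # a copr fails if the project publishes no chroot for the host's release.
    sudo dnf -y copr enable ceph/el9

    # Check whether a copr repo file was written despite the error.
    dnf repolist --all | grep -i copr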

pass 7344378 2023-07-20 02:11:19 2023-07-20 02:11:58 2023-07-20 02:44:52 0:32:54 0:27:28 0:05:26 smithi main rhel 8.6 fs:mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distro$/{rhel_8} workloads/cephfs-mirror-ha-workunit} 1
fail 7344323 2023-07-20 00:40:28 2023-07-20 02:43:08 2023-07-20 03:18:08 0:35:00 0:26:05 0:08:55 smithi main rhel 8.6 fs:workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/distributed export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/iozone}} 3
Failure Reason:

Command failed on smithi173 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume getpath cephfs --group_name=qa sv_0'

fail 7344267 2023-07-20 00:39:44 2023-07-20 01:37:31 2023-07-20 02:11:45 0:34:14 0:26:34 0:07:40 smithi main rhel 8.6 fs:workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

Command failed on smithi173 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume getpath cephfs --group_name=qa sv_0'

fail 7344243 2023-07-20 00:39:24 2023-07-20 00:56:01 2023-07-20 01:38:21 0:42:20 0:34:05 0:08:15 smithi main rhel 8.6 fs:workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/fsstress}} 3
Failure Reason:

Command failed on smithi173 with status 2: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume getpath cephfs --group_name=qa sv_0'