Name:          smithi080.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2024-04-26 05:22:47.789023
Locked By:     scheduled_teuthology@teuthology
OS Type:       rhel
OS Version:    8.6
Arch:          x86_64
Description:   /home/teuthworker/archive/teuthology-2024-04-24_22:24:15-fs-reef-distro-default-smithi/7672480
Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7674585 2024-04-26 02:09:22 2024-04-26 04:32:28 2024-04-26 05:15:54 0:43:26 0:37:04 0:06:22 smithi main centos 9.stream upgrade/cephfs/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds
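This message appears to be teuthology's generic polling timeout: the task repeatedly re-checked a condition (here, part of the staggered mgr/MDS upgrade) and gave up once its retry budget was exhausted. A rough shell sketch of the semantics, with check_condition as a hypothetical stand-in for whatever the task was waiting on:

    # Poll roughly every 6 seconds; give up after 51 failed attempts (~300 seconds of waiting).
    tries=0
    until check_condition; do
        tries=$((tries + 1))
        if [ "$tries" -ge 51 ]; then
            echo "reached maximum tries (51) after waiting for 300 seconds" >&2
            exit 1
        fi
        sleep 6
    done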

pass 7674491 2024-04-26 01:29:26 2024-04-26 03:48:09 2024-04-26 04:32:19 0:44:10 0:34:51 0:09:19 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
pass 7674445 2024-04-26 01:28:36 2024-04-26 03:24:37 2024-04-26 03:48:16 0:23:39 0:14:30 0:09:09 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} 1
pass 7674360 2024-04-26 01:27:04 2024-04-26 02:46:39 2024-04-26 03:24:28 0:37:49 0:26:58 0:10:51 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/snaps-few-objects-localized} 2
pass 7674341 2024-04-26 01:26:45 2024-04-26 02:31:40 2024-04-26 02:46:30 0:14:50 0:09:31 0:05:19 smithi main centos 9.stream rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest}} 1
fail 7674232 2024-04-26 01:03:28 2024-04-26 01:07:13 2024-04-26 02:20:34 1:13:21 1:02:36 0:10:45 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
Failure Reason:

error during quiesce thrashing: Error quiescing set '71b71560': 110 (ETIMEDOUT)

pass 7674080 2024-04-25 21:33:10 2024-04-25 22:47:22 2024-04-25 23:29:21 0:41:59 0:31:12 0:10:47 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-distros/ubuntu_latest tasks/snaps-few-objects thrashosds-health} 4
pass 7674034 2024-04-25 21:32:24 2024-04-25 21:45:22 2024-04-25 22:10:00 0:24:38 0:13:16 0:11:22 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_suites_truncate_delay thrashosds-health} 4
pass 7673937 2024-04-25 21:04:58 2024-04-26 00:29:28 2024-04-26 01:07:07 0:37:39 0:27:52 0:09:47 smithi main ubuntu 22.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7673904 2024-04-25 21:04:21 2024-04-26 00:11:43 2024-04-26 00:28:40 0:16:57 0:10:23 0:06:34 smithi main centos 9.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7673868 2024-04-25 21:03:45 2024-04-25 23:56:04 2024-04-26 00:09:08 0:13:04 0:06:42 0:06:22 smithi main centos 9.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

pass 7673825 2024-04-25 21:03:04 2024-04-25 23:29:24 2024-04-25 23:57:12 0:27:48 0:17:01 0:10:47 smithi main ubuntu 22.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/libcephsqlite} 2
pass 7673734 2024-04-25 21:01:39 2024-04-25 21:06:11 2024-04-25 21:45:30 0:39:19 0:21:21 0:17:58 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/cache-snaps} 2
pass 7673649 2024-04-25 20:00:42 2024-04-25 20:31:00 2024-04-25 20:58:21 0:27:21 0:17:11 0:10:10 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/with-quiesce 2-workunit/suites/iozone}} 2
pass 7673624 2024-04-25 17:44:59 2024-04-25 18:29:17 2024-04-25 19:04:54 0:35:37 0:23:54 0:11:43 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/with-quiesce 2-workunit/suites/fsstress}} 2
fail 7673584 2024-04-25 16:55:16 2024-04-25 17:42:46 2024-04-25 18:23:48 0:41:02 0:33:35 0:07:27 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/iozone}} 3
Failure Reason:

Command failed on smithi057 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/ceph/ceph /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 3071830b3533a96301fd87d582ed5f17f0b618cd'

fail 7673565 2024-04-25 16:55:10 2024-04-25 17:13:04 2024-04-25 17:29:52 0:16:48 0:08:41 0:08:07 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/with-quiesce 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi080 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/ceph/ceph /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 3071830b3533a96301fd87d582ed5f17f0b618cd'
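This job and 7673584 above fail at the same step: cloning ceph.git and checking out 3071830b3533a96301fd87d582ed5f17f0b618cd exits with status 128, meaning git itself failed rather than the workunit. One plausible cause is the commit not being reachable from the public repository; a quick manual check (the /tmp path is only illustrative) would be:

    # Clone without a working tree and test whether the commit object exists upstream.
    git clone --no-checkout https://github.com/ceph/ceph /tmp/ceph-verify
    cd /tmp/ceph-verify
    git cat-file -e "3071830b3533a96301fd87d582ed5f17f0b618cd^{commit}" \
        && echo "commit present" \
        || echo "commit missing"

A transient network or GitHub error on the test node would produce the same exit status, so a passing check here only rules out a missing commit.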

pass 7673528 2024-04-25 14:19:25 2024-04-25 14:54:12 2024-04-25 15:31:12 0:37:00 0:24:34 0:12:26 smithi main centos 8.stream krbd/wac/wac/{bluestore-bitmap ceph/ceph clusters/fixed-3 conf tasks/wac verify/many-resets} 3
fail 7673484 2024-04-25 13:51:37 2024-04-25 13:53:42 2024-04-25 14:38:00 0:44:18 0:33:20 0:10:58 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
Failure Reason:

Command failed on smithi080 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 300 ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok --format=json config get run_dir'

fail 7673472 2024-04-25 12:18:08 2024-04-25 13:04:01 2024-04-25 13:11:02 0:07:01 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/direct_io}} 3
Failure Reason:

machine smithi107.front.sepia.ceph.com is locked by scheduled_pdonnell@teuthology, not scheduled_leonidus@teuthology
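
The job never ran: teuthology refused to use smithi107 because its lock was held by a different scheduler user than the one the run was scheduled as. When triaging this kind of conflict, the node's lock state can be inspected from the teuthology host; a sketch assuming the stock teuthology-lock CLI (exact flags may differ between versions):

    # Show the current lock record for the node named in the error.
    teuthology-lock --list smithi107.front.sepia.ceph.com
    # A stale lock is normally released by (or on behalf of) its owner, e.g.:
    # teuthology-lock --unlock --owner scheduled_pdonnell@teuthology smithi107.front.sepia.ceph.com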