Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi063.front.sepia.ceph.com smithi True True 2024-04-26 17:30:21.760897 scheduled_leonidus@teuthology centos 9 x86_64 /home/teuthworker/archive/leonidus-2024-04-26_17:24:23-fs-wip-lusov-quiescer-fixes-distro-default-smithi/7674892
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7674892 2024-04-26 17:25:22 2024-04-26 17:29:41 2024-04-26 17:53:30 0:25:06 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
pass 7674700 2024-04-26 12:10:51 2024-04-26 12:19:09 2024-04-26 13:04:59 0:45:50 0:36:50 0:09:00 smithi main ubuntu 22.04 rgw/tempest/{0-install clusters/fixed-1 frontend/beast ignore-pg-availability overrides s3tests-branch tasks/s3tests ubuntu_latest} 1
pass 7674605 2024-04-26 07:23:21 2024-04-26 07:26:46 2024-04-26 08:04:27 0:37:41 0:28:45 0:08:56 smithi main ubuntu 22.04 rgw/cloud-transition/{cluster ignore-pg-availability overrides s3tests-branch supported-random-distro$/{ubuntu_latest} tasks/cloud_transition_s3tests} 1
fail 7674579 2024-04-26 02:09:16 2024-04-26 04:29:05 2024-04-26 06:47:23 2:18:18 2:06:54 0:11:24 smithi main ubuntu 22.04 upgrade/reef-x/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-04-26T04:52:58.685584+0000 mon.a (mon.0) 177 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,b" in cluster log

pass 7674515 2024-04-26 01:29:53 2024-04-26 03:59:10 2024-04-26 04:29:18 0:30:08 0:18:47 0:11:21 smithi main ubuntu 22.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/sync-many workloads/pool-create-delete} 2
pass 7674468 2024-04-26 01:28:59 2024-04-26 03:34:38 2024-04-26 03:59:25 0:24:47 0:16:45 0:08:02 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/redirect} 2
pass 7674399 2024-04-26 01:27:45 2024-04-26 03:01:27 2024-04-26 03:36:07 0:34:40 0:22:32 0:12:08 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-pool-snaps-readproxy} 2
pass 7674202 2024-04-25 22:34:38 2024-04-26 11:52:17 2024-04-26 12:19:06 0:26:49 0:15:01 0:11:48 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/ubuntu_20.04 tasks/cfuse_workunit_suites_fsync thrashosds-health} 4
pass 7674190 2024-04-25 22:34:26 2024-04-26 11:35:39 2024-04-26 11:52:37 0:16:58 0:09:20 0:07:38 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-all-distro/centos_latest tasks/cfuse_workunit_suites_truncate_delay thrashosds-health} 4
pass 7674122 2024-04-25 22:33:17 2024-04-26 10:13:50 2024-04-26 11:37:11 1:23:21 1:11:55 0:11:26 smithi main ubuntu 20.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/ubuntu_20.04 tasks/radosbench thrashosds-health} 4
pass 7674091 2024-04-25 22:32:46 2024-04-26 09:46:23 2024-04-26 10:14:56 0:28:33 0:21:37 0:06:56 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_suites_pjd thrashosds-health} 4
pass 7674074 2024-04-25 21:33:04 2024-04-25 22:40:08 2024-04-25 23:07:43 0:27:35 0:15:19 0:12:16 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_suites_fsync thrashosds-health} 4
pass 7674042 2024-04-25 21:32:32 2024-04-25 21:51:28 2024-04-25 22:41:34 0:50:06 0:39:12 0:10:54 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_misc thrashosds-health} 4
fail 7673788 2024-04-25 21:02:29 2024-04-25 23:07:37 2024-04-26 02:59:22 3:51:45 3:44:39 0:07:06 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi063 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b22e2ebdeb24376882b7bda2a7329c8cccc2276a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
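
Status 124 is the exit code of coreutils timeout, which wraps the workunit with the 3h limit visible in the command above, so this failure is rados/test.sh overrunning its time budget rather than the script itself failing. The convention is easy to confirm locally:

    # coreutils `timeout` exits 124 when the deadline fires before the command finishes
    timeout 2 sleep 5; echo "exit status: $?"    # prints "exit status: 124"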

pass 7673736 2024-04-25 21:01:41 2024-04-25 21:16:23 2024-04-25 21:51:45 0:35:22 0:24:08 0:11:14 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest} tasks/rados_workunit_loadgen_mostlyread} 2
fail 7673633 2024-04-25 20:00:36 2024-04-25 20:07:10 2024-04-25 21:19:27 1:12:17 1:03:29 0:08:48 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
Failure Reason:

error during quiesce thrashing: Error quiescing set 'd960ac51': 110 (ETIMEDOUT)

fail 7673605 2024-04-25 17:44:53 2024-04-25 18:13:47 2024-04-25 19:14:51 1:01:04 0:51:14 0:09:50 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7797745e6d8b3ac036504dc61491d368615024da TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

fail 7673567 2024-04-25 16:55:11 2024-04-25 17:17:06 2024-04-25 18:00:35 0:43:29 0:31:42 0:11:47 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/dbench}} 3
Failure Reason:

Command failed on smithi031 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/ceph/ceph /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 3071830b3533a96301fd87d582ed5f17f0b618cd'
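
Exit status 128 is git's fatal-error code; in this step it usually indicates a transient network or GitHub fetch failure while cloning the workunits rather than a bad ref. A hedged way to rule that out by hand on the node, reusing the paths and sha1 from the failed command above:

    # re-run the clone step manually; a second attempt usually distinguishes
    # a transient fetch error from a genuinely missing commit
    rm -rf /home/ubuntu/cephtest/clone.client.0
    git clone https://github.com/ceph/ceph /home/ubuntu/cephtest/clone.client.0 \
      && git -C /home/ubuntu/cephtest/clone.client.0 checkout 3071830b3533a96301fd87d582ed5f17f0b618cd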

pass 7673540 2024-04-25 16:28:26 2024-04-25 16:38:28 2024-04-25 17:21:08 0:42:40 0:34:31 0:08:09 smithi main centos 9.stream rgw:verify/{0-install accounts$/{none} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} 2
fail 7673518 2024-04-25 13:51:47 2024-04-25 14:41:33 2024-04-25 15:36:09 0:54:36 0:34:49 0:19:47 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/direct_io}} 3
Failure Reason:

Command failed on smithi031 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 300 ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok --format=json config get run_dir'
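
The ceph CLI generally surfaces errno values as exit statuses, so status 22 here likely corresponds to EINVAL, i.e. the mon admin socket rejected the request. A minimal manual check against the same socket, with the command reproduced from the failure above minus the coverage wrappers (interpretation of the exit code is an assumption):

    # hedged: re-issue the admin-socket query by hand and inspect the status
    sudo ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config get run_dir
    echo "exit status: $?"    # 22 matches errno EINVAL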