Name: smithi039.front.sepia.ceph.com
Machine Type: smithi
Up: True
Locked: True
Locked Since: 2024-05-27 17:01:07.649378
Locked By: scheduled_teuthology@teuthology
OS Type:
OS Version:
Arch: x86_64
Description: /home/teuthworker/archive/teuthology-2024-05-21_20:16:16-rbd-main-distro-default-smithi/7718864
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7727891 2024-05-27 12:39:10 2024-05-27 12:41:44 2024-05-27 13:03:42 0:21:58 0:11:17 0:10:41 smithi fix-install-task centos 9.stream crimson-rados/singleton/{all/osd-backfill crimson-supported-all-distro/centos_latest crimson_qa_overrides objectstore/bluestore rados} 1
fail 7727808 2024-05-27 07:27:40 2024-05-27 08:52:45 2024-05-27 10:36:37 1:43:52 1:32:52 0:11:00 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

pass 7727723 2024-05-27 06:48:10 2024-05-27 10:36:21 2024-05-27 11:04:34 0:28:13 0:15:38 0:12:35 smithi main centos 9.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_pjd} 2
pass 7727516 2024-05-26 22:06:16 2024-05-27 13:03:02 2024-05-27 13:28:50 0:25:48 0:12:11 0:13:37 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7727400 2024-05-26 22:04:21 2024-05-27 11:46:17 2024-05-27 12:17:35 0:31:18 0:19:05 0:12:13 smithi main centos 8.stream rados/singleton/{all/pg-autoscaler-progress-off mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 2
pass 7727333 2024-05-26 22:03:13 2024-05-27 11:03:13 2024-05-27 11:47:24 0:44:11 0:36:41 0:07:30 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7726350 2024-05-26 10:48:31 2024-05-26 12:20:43 2024-05-26 13:02:38 0:41:55 0:31:21 0:10:34 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-quiesce/with-quiesce 6-workunit/suites/fsync-tester}} 3
pass 7726299 2024-05-26 10:46:47 2024-05-26 11:12:24 2024-05-26 12:20:33 1:08:09 0:55:42 0:12:27 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/ffsb}} 3
fail 7726260 2024-05-26 05:24:47 2024-05-26 07:24:23 2024-05-26 07:59:41 0:35:18 0:27:11 0:08:07 smithi main rhel 8.6 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{rhel_8} tasks/{0-install test/rados_python}} 3
Failure Reason:

Command failed (workunit test rados/test_python.sh) on smithi190 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=02dd4a0049517eb4889baa50cc36ffa32d7c2440 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'

pass 7726222 2024-05-26 05:18:22 2024-05-26 06:58:15 2024-05-26 07:25:25 0:27:10 0:15:51 0:11:19 smithi main ubuntu 22.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap s3tests-branch supported-all-distro/ubuntu_latest tasks/{0-install test/cfuse_workunit_suites_pjd}} 3
pass 7726170 2024-05-26 05:17:28 2024-05-26 06:16:58 2024-05-26 06:58:51 0:41:53 0:34:11 0:07:42 smithi main rhel 8.6 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap s3tests-branch supported-all-distro/rhel_8 tasks/{0-install test/rgw_s3tests}} 3
pass 7726123 2024-05-26 05:16:39 2024-05-26 05:45:45 2024-05-26 06:17:34 0:31:49 0:21:06 0:10:43 smithi main centos 8.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap s3tests-branch supported-all-distro/centos_8 tasks/{0-install test/cfuse_workunit_suites_blogbench}} 3
dead 7726004 2024-05-25 22:49:01 2024-05-26 20:42:09 2024-05-27 08:54:51 12:12:42 smithi main centos 8.stream krbd/rbd-nomount/{bluestore-bitmap clusters/fixed-3 conf install/ceph ms_mode/crc$/{crc-rxbounce} msgr-failures/many tasks/rbd_concurrent} 3
Failure Reason:

hit max job timeout

pass 7725149 2024-05-24 18:50:58 2024-05-25 13:56:07 2024-05-25 14:17:48 0:21:41 0:10:24 0:11:17 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} tasks/scrub_test} 2
pass 7725105 2024-05-24 18:50:23 2024-05-25 13:18:24 2024-05-25 13:54:10 0:35:46 0:22:56 0:12:50 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/upmap-read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/small-objects} 4
pass 7725071 2024-05-24 18:49:55 2024-05-25 12:56:48 2024-05-25 13:19:32 0:22:44 0:10:39 0:12:05 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7725031 2024-05-24 18:49:24 2024-05-25 12:26:18 2024-05-25 12:56:43 0:30:25 0:15:07 0:15:18 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-agent-small} 4
pass 7724957 2024-05-24 18:48:26 2024-05-25 11:32:41 2024-05-25 12:28:07 0:55:26 0:41:25 0:14:01 smithi main ubuntu 22.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_mon_osdmap_prune} 2
pass 7724922 2024-05-24 18:47:57 2024-05-25 11:02:33 2024-05-25 11:35:15 0:32:42 0:21:09 0:11:33 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
fail 7724775 2024-05-24 17:52:45 2024-05-24 20:39:44 2024-05-24 21:07:32 0:27:48 0:15:26 0:12:22 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-05-24T20:58:50.884105+0000 mon.smithi039 (mon.0) 229 : cluster [WRN] Health check failed: 2 hosts fail cephadm check (CEPHADM_HOST_CHECK_FAILED)" in cluster log