Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi130.front.sepia.ceph.com smithi True True 2024-04-26 23:25:50.313531 scheduled_rishabh@teuthology centos 9 x86_64 /home/teuthworker/archive/rishabh-2024-04-26_19:30:57-fs-wip-rishabh-testing-20240426.111959-testing-default-smithi/7675322
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7675322 2024-04-26 19:35:50 2024-04-26 23:23:09 2024-04-26 23:36:52 0:14:01 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/fsx}} 3
pass 7675197 2024-04-26 19:33:16 2024-04-26 20:53:56 2024-04-26 23:25:44 2:31:48 2:21:58 0:09:50 smithi main ubuntu 22.04 fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate 3-modules} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down osd pg-warn pg_health} tasks/{0-client 1-tests/fscrypt-ffsb}} 3
fail 7675159 2024-04-26 19:32:28 2024-04-26 20:04:02 2024-04-26 20:55:07 0:51:05 0:39:09 0:11:56 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsstress}} 3
Failure Reason:

Command failed on smithi102 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

pass 7675119 2024-04-26 19:31:37 2024-04-26 19:36:42 2024-04-26 20:04:05 0:27:23 0:17:26 0:09:57 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
fail 7675020 2024-04-26 18:21:41 2024-04-26 19:21:06 2024-04-26 19:35:30 0:14:24 0:05:50 0:08:34 smithi main centos 9.stream rados/thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=2-crush} 4
Failure Reason:

Command failed on smithi022 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674991 2024-04-26 18:21:08 2024-04-26 19:05:33 2024-04-26 19:20:44 0:15:11 0:06:00 0:09:11 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 4
Failure Reason:

Command failed on smithi022 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674953 2024-04-26 18:20:27 2024-04-26 18:49:36 2024-04-26 19:02:28 0:12:52 0:05:05 0:07:47 smithi main centos 9.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_latest}} 2
Failure Reason:

Command failed on smithi143 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674924 2024-04-26 17:25:31 2024-04-26 18:08:40 2024-04-26 18:44:40 0:36:00 0:26:44 0:09:16 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/with-quiesce 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi064 with status 1: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'

fail 7674845 2024-04-26 15:08:21 2024-04-26 16:01:57 2024-04-26 16:39:38 0:37:41 0:31:01 0:06:40 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2024-04-26T16:30:00.000168+0000 mon.smithi081 (mon.0) 403 : cluster [WRN] Health detail: HEALTH_WARN 1 osds down; Degraded data redundancy: 48/216 objects degraded (22.222%), 17 pgs degraded" in cluster log

pass 7674775 2024-04-26 14:13:40 2024-04-26 14:25:08 2024-04-26 14:52:46 0:27:38 0:18:04 0:09:34 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} 2
pass 7674749 2024-04-26 12:11:33 2024-04-26 12:44:32 2024-04-26 13:05:40 0:21:08 0:13:32 0:07:36 smithi main centos 9.stream rgw/upgrade/{1-install/quincy/{distro$/{centos_latest} install overrides} 2-setup 3-upgrade-sequence/rgws-then-osds cluster frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides} 2
fail 7674593 2024-04-26 02:09:30 2024-04-26 04:36:01 2024-04-26 08:23:15 3:47:14 3:37:04 0:10:10 smithi main centos 9.stream upgrade/cephfs/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi130 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b22e2ebdeb24376882b7bda2a7329c8cccc2276a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

fail 7674486 2024-04-26 01:29:20 2024-04-26 03:43:57 2024-04-26 04:31:12 0:47:15 0:40:41 0:06:34 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/mon} 1
Failure Reason:

Command failed (workunit test mon/osd-erasure-code-profile.sh) on smithi130 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ddeaa91797339a19172c3036ff48cca58c12f448 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/osd-erasure-code-profile.sh'

pass 7674413 2024-04-26 01:28:00 2024-04-26 03:08:53 2024-04-26 03:44:07 0:35:14 0:28:42 0:06:32 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 7674361 2024-04-26 01:27:05 2024-04-26 02:46:39 2024-04-26 03:08:29 0:21:50 0:11:30 0:10:20 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 7674231 2024-04-26 01:03:28 2024-04-26 01:05:02 2024-04-26 02:38:27 1:33:25 1:20:39 0:12:46 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
Failure Reason:

error during quiesce thrashing: Error quiescing set 'e3a976f4': 110 (ETIMEDOUT)

pass 7674198 2024-04-25 22:34:34 2024-04-26 11:50:05 2024-04-26 12:44:25 0:54:20 0:40:27 0:13:53 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/ubuntu_latest tasks/cfuse_workunit_misc thrashosds-health} 4
pass 7674136 2024-04-25 22:33:31 2024-04-26 10:31:18 2024-04-26 11:51:08 1:19:50 1:11:04 0:08:46 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-all-distro/rhel_8 tasks/radosbench thrashosds-health} 4
pass 7674110 2024-04-25 22:33:05 2024-04-26 10:01:53 2024-04-26 10:32:55 0:31:02 0:23:47 0:07:15 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-all-distro/centos_latest tasks/snaps-few-objects thrashosds-health} 4
pass 7674056 2024-04-25 21:32:46 2024-04-25 22:10:56 2024-04-25 22:58:58 0:48:02 0:32:23 0:15:39 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-hybrid powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_misc thrashosds-health} 4