Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi194.front.sepia.ceph.com smithi True True 2024-05-09 04:59:15.879482 scheduled_pdonnell@teuthology ubuntu 22.04 x86_64 /home/teuthworker/archive/pdonnell-2024-05-08_22:06:20-fs-wip-pdonnell-testing-20240508.183908-debug-distro-default-smithi/7699106
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7699106 2024-05-08 22:11:09 2024-05-09 04:58:15 2024-05-09 13:34:02 8:36:05 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-3-client conf/{client mds mgr mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/ior-shared-file} 5
pass 7699063 2024-05-08 22:10:25 2024-05-09 04:18:30 2024-05-09 04:59:11 0:40:41 0:29:16 0:11:25 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-quiesce/with-quiesce 6-workunit/suites/fsstress}} 3
pass 7699044 2024-05-08 22:10:05 2024-05-09 03:04:57 2024-05-09 04:19:59 1:15:02 1:05:15 0:09:47 smithi main centos 9.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/clone}} 2
pass 7698998 2024-05-08 22:09:16 2024-05-09 02:01:37 2024-05-09 03:05:57 1:04:20 0:52:25 0:11:55 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/iogen}} 3
fail 7698868 2024-05-08 22:06:58 2024-05-08 23:51:17 2024-05-09 01:58:16 2:06:59 1:53:48 0:13:11 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
Failure Reason:

error during quiesce thrashing: local variable 'mds_remote' referenced before assignment

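The failure reason above is Python's classic unbound-local error: a variable assigned only inside a conditional branch, then read unconditionally. A minimal sketch of that bug class (`pick_mds` and its inputs are hypothetical names for illustration, not the actual quiesce-thrasher code):

```python
# Hypothetical sketch of the bug class behind
# "local variable 'mds_remote' referenced before assignment".
def pick_mds(mds_daemons):
    for mds in mds_daemons:
        if mds.get("active"):
            mds_remote = mds["remote"]  # bound only when an active daemon is found
            break
    # If no daemon matched, this read raises UnboundLocalError.
    return mds_remote

try:
    pick_mds([{"active": False}])
except UnboundLocalError as exc:
    print(type(exc).__name__)  # UnboundLocalError
```

The usual fix is to initialize `mds_remote = None` before the loop (or raise explicitly when nothing matches), so the no-match path fails with a clear error instead of an UnboundLocalError.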
pass 7698200 2024-05-08 19:27:18 2024-05-08 23:35:28 2024-05-08 23:55:01 0:19:33 0:10:22 0:09:11 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
pass 7698152 2024-05-08 19:26:20 2024-05-08 22:50:40 2024-05-08 23:36:02 0:45:22 0:38:59 0:06:23 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} 3
pass 7698077 2024-05-08 19:24:51 2024-05-08 21:33:37 2024-05-08 22:50:45 1:17:08 1:11:12 0:05:56 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/dbench}} 3
pass 7698041 2024-05-08 19:24:10 2024-05-08 20:50:46 2024-05-08 21:33:31 0:42:45 0:34:54 0:07:51 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/fsync-tester}} 3
pass 7697971 2024-05-08 19:22:51 2024-05-08 19:41:04 2024-05-08 20:51:58 1:10:54 0:59:52 0:11:02 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} 4
pass 7697817 2024-05-08 16:08:10 2024-05-08 16:17:15 2024-05-08 17:07:46 0:50:31 0:40:10 0:10:21 smithi main centos 9.stream rbd/device/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/diff-continuous-krbd} 3
pass 7697777 2024-05-08 15:06:21 2024-05-08 17:06:30 2024-05-08 18:45:23 1:38:53 1:27:44 0:11:09 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} 3
pass 7697635 2024-05-08 15:03:32 2024-05-08 15:41:56 2024-05-08 16:17:21 0:35:25 0:25:12 0:10:13 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/e2e} 2
pass 7697580 2024-05-08 15:02:23 2024-05-08 15:05:40 2024-05-08 15:42:19 0:36:39 0:19:45 0:16:54 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
fail 7697268 2024-05-08 04:01:39 2024-05-08 06:09:06 2024-05-08 11:48:44 5:39:38 5:28:09 0:11:29 smithi main centos 9.stream fs:workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/kernel_untar_build}} 3
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi070 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bee299f4ac344e4c45e550d7a80abcadd7bc3d5a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

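Status 124 in the failure above is not from the workunit itself: it is the GNU coreutils `timeout` convention for a command killed after exceeding its limit. Since the command line wraps the script in `timeout 3h`, the kernel_untar_build.sh run overran three hours. A quick illustration of the convention:

```shell
# coreutils `timeout` exits with status 124 when it kills the wrapped
# command for exceeding the time limit.
timeout 0.2 sleep 1
echo "exit status: $?"  # prints: exit status: 124
```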
fail 7697223 2024-05-08 04:01:01 2024-05-08 04:51:16 2024-05-08 06:04:10 1:12:54 1:03:22 0:09:32 smithi main centos 9.stream fs:workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-quiesce/no 6-workunit/kernel_untar_build}} 3
Failure Reason:

"2024-05-08T05:38:02.423623+0000 mds.b (mds.0) 74 : cluster [WRN] Scrub error on inode 0x1000000f94d (/volumes/qa/sv_1/cb66b44a-e133-4f67-9495-062a6f8c7af8/client.0/tmp/t/linux-6.5.11/fs/ext4) see mds.b log and `damage ls` output for details" in cluster log

fail 7696841 2024-05-08 00:45:45 2024-05-08 04:20:17 2024-05-08 04:49:43 0:29:26 0:19:18 0:10:08 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_monitoring_stack_basic} 3
Failure Reason:

"2024-05-08T04:40:58.429559+0000 mon.a (mon.0) 478 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7696827 2024-05-08 00:45:29 2024-05-08 03:55:15 2024-05-08 04:14:54 0:19:39 0:06:35 0:13:04 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/reef backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi194 with status 1: 'sudo yum -y install ceph-radosgw'

pass 7696801 2024-05-07 23:05:20 2024-05-08 03:30:51 2024-05-08 03:56:53 0:26:02 0:13:56 0:12:06 smithi main ubuntu 22.04 rbd/thrash/{base/install clusters/{fixed-2 openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-comp-lz4 supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/rbd_fsx_cache_writethrough} 2
pass 7696716 2024-05-07 23:04:11 2024-05-08 01:49:16 2024-05-08 03:33:53 1:44:37 1:32:58 0:11:39 smithi main centos 9.stream rbd/mirror/{base/install clients/{mirror-extra mirror} cluster/{2-node openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} workloads/compare-mirror-images-krbd} 2