Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7682749 2024-05-01 10:03:28 2024-05-01 11:53:47 2024-05-01 12:43:44 0:49:57 0:37:37 0:12:20 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/test_o_trunc}} 3
fail 7682750 2024-05-01 10:03:29 2024-05-01 11:53:48 2024-05-01 12:48:27 0:54:39 0:44:35 0:10:04 smithi main centos 9.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host overrides/{ignorelist_health pg_health}} 2
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi028 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=148a44391d69844f26ccb96d75a4ba77e4183fe4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/dbench.sh'

pass 7682751 2024-05-01 10:03:30 2024-05-01 11:53:48 2024-05-01 12:26:28 0:32:40 0:21:02 0:11:38 smithi main ubuntu 22.04 fs/full/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-node-1-mds-1-osd conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/mgr-osd-full} 1
fail 7682752 2024-05-01 10:03:31 2024-05-01 11:53:48 2024-05-01 12:27:17 0:33:29 0:23:20 0:10:09 smithi main ubuntu 22.04 fs/mirror/{begin/{0-install 1-ceph 2-logrotate 3-modules} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/mirror} 1
Failure Reason:

Test failure: test_cephfs_mirror_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

pass 7682753 2024-05-01 10:03:32 2024-05-01 11:53:49 2024-05-01 12:29:12 0:35:23 0:26:05 0:09:18 smithi main ubuntu 22.04 fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate 3-modules} cephfs-mirror/{1-volume-create-rm 2-three-per-cluster} clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{ignorelist_health pg_health} supported-random-distro$/{ubuntu_latest} workloads/cephfs-mirror-ha-workunit} 1
fail 7682754 2024-05-01 10:03:33 2024-05-01 11:53:49 2024-05-01 12:54:38 1:00:49 0:49:36 0:11:13 smithi main ubuntu 22.04 fs/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} 1
Failure Reason:

"2024-05-01T12:23:24.382951+0000 mon.a (mon.0) 331 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 7682755 2024-05-01 10:03:35 2024-05-01 11:53:49 2024-05-01 12:34:41 0:40:52 0:30:54 0:09:58 smithi main centos 9.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate 3-modules} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/{ignorelist_health pg_health} tasks/mirror}} 1
Failure Reason:

Test failure: test_cephfs_mirror_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

pass 7682756 2024-05-01 10:03:36 2024-05-01 11:53:50 2024-05-01 14:15:36 2:21:46 2:09:20 0:12:26 smithi main centos 9.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/basic}} 2
fail 7682757 2024-05-01 10:03:37 2024-05-01 11:53:50 2024-05-01 13:08:41 1:14:51 1:05:37 0:09:14 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7682758 2024-05-01 10:03:38 2024-05-01 11:53:50 2024-05-01 12:47:36 0:53:46 0:42:16 0:11:30 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/blogbench}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7682759 2024-05-01 10:03:40 2024-05-01 11:53:51 2024-05-01 12:05:32 0:11:41 0:04:15 0:07:26 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/xfstests-dev} 2
Failure Reason:

Command failed on smithi144 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-fuse cephfs-java libcephfs_jni1 libcephfs1 librados2 librbd1 python-ceph rbd-fuse python3-cephfs cephfs-shell cephfs-top cephfs-mirror bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib'

fail 7682760 2024-05-01 10:03:41 2024-05-01 11:53:51 2024-05-01 13:05:42 1:11:51 0:59:28 0:12:23 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/ffsb}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7682761 2024-05-01 10:03:42 2024-05-01 11:53:52 2024-05-01 12:53:35 0:59:43 0:46:57 0:12:46 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7682762 2024-05-01 10:03:43 2024-05-01 11:53:52 2024-05-01 12:37:29 0:43:37 0:31:51 0:11:46 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsstress}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7682763 2024-05-01 10:03:45 2024-05-01 11:53:52 2024-05-01 12:59:21 1:05:29 0:55:26 0:10:03 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/3 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable calloc calloc _dl_check_map_versions

fail 7682764 2024-05-01 10:03:46 2024-05-01 11:53:53 2024-05-01 12:21:54 0:28:01 0:15:53 0:12:08 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/acls} 2
Failure Reason:

Command failed on smithi088 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

pass 7682765 2024-05-01 10:03:47 2024-05-01 11:53:53 2024-05-01 12:57:37 1:03:44 0:55:47 0:07:57 smithi main centos 9.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/clone}} 2
fail 7682766 2024-05-01 10:03:48 2024-05-01 11:53:54 2024-05-01 12:11:13 0:17:19 0:05:51 0:11:28 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/admin} 2
Failure Reason:

No module named 'tasks'

fail 7682767 2024-05-01 10:03:49 2024-05-01 11:58:05 2024-05-01 12:11:17 0:13:12 0:05:53 0:07:19 smithi main centos 9.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health} tasks/failover} 2
Failure Reason:

No module named 'tasks'

pass 7682768 2024-05-01 10:03:51 2024-05-01 11:58:25 2024-05-01 12:57:57 0:59:32 0:48:28 0:11:04 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} 3
pass 7682769 2024-05-01 10:03:52 2024-05-01 12:00:46 2024-05-01 12:46:50 0:46:04 0:28:09 0:17:55 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} 3
pass 7682770 2024-05-01 10:03:53 2024-05-01 12:06:57 2024-05-01 12:39:14 0:32:17 0:23:43 0:08:34 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/direct_io}} 3
fail 7682771 2024-05-01 10:03:54 2024-05-01 12:08:18 2024-05-01 15:58:09 3:49:51 3:34:20 0:15:31 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/kernel_untar_build}} 3
Failure Reason:

"2024-05-01T12:58:42.313142+0000 mds.b (mds.0) 129 : cluster [WRN] Scrub error on inode 0x10000006760 (/volumes/qa/sv_0/3e47dfcd-486c-4a7a-b5a6-c54ec02f59ad/client.0/tmp/t/linux-6.5.11/drivers/clk) see mds.b log and `damage ls` output for details" in cluster log

fail 7682772 2024-05-01 10:03:56 2024-05-01 12:08:18 2024-05-01 13:05:58 0:57:40 0:44:05 0:13:35 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7682773 2024-05-01 10:03:57 2024-05-01 12:08:19 2024-05-01 13:03:10 0:54:51 0:44:50 0:10:01 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=148a44391d69844f26ccb96d75a4ba77e4183fe4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

pass 7682774 2024-05-01 10:03:58 2024-05-01 12:08:19 2024-05-01 13:49:00 1:40:41 1:24:53 0:15:48 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/ffsb}} 3
pass 7682775 2024-05-01 10:03:59 2024-05-01 12:08:20 2024-05-01 12:42:19 0:33:59 0:26:11 0:07:48 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/fsstress}} 3
pass 7682776 2024-05-01 10:04:01 2024-05-01 12:08:20 2024-05-01 12:46:26 0:38:06 0:21:44 0:16:22 smithi main centos 9.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/misc}} 2
fail 7682777 2024-05-01 10:04:02 2024-05-01 12:08:20 2024-05-01 13:15:15 1:06:55 1:00:25 0:06:30 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/iogen}} 3
Failure Reason:

"2024-05-01T12:42:00.273147+0000 mds.b (mds.0) 86 : cluster [WRN] client.24992 does not advance its oldest_flush_tid (420348), 100000 completed flushes recorded in session" in cluster log

fail 7682778 2024-05-01 10:04:03 2024-05-01 12:08:21 2024-05-01 13:10:31 1:02:10 0:45:22 0:16:48 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7682779 2024-05-01 10:04:04 2024-05-01 12:08:21 2024-05-01 12:34:08 0:25:47 0:17:45 0:08:02 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/forward-scrub} 2
Failure Reason:

"2024-05-01T12:27:29.367950+0000 mds.d (mds.0) 1 : cluster [ERR] dir 0x10000000000 object missing on disk; some files may be lost (/dir)" in cluster log

fail 7682780 2024-05-01 10:04:06 2024-05-01 12:08:22 2024-05-01 13:12:41 1:04:19 0:49:16 0:15:03 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/postgres}} 3
Failure Reason:

The following counters failed to be set on mds daemons: {'mds.imported', 'mds.exported'}

pass 7682781 2024-05-01 10:04:07 2024-05-01 12:08:22 2024-05-01 12:52:01 0:43:39 0:29:50 0:13:49 smithi main centos 9.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/{centos_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/libcephfs/{frag test}} 2
fail 7682782 2024-05-01 10:04:08 2024-05-01 12:08:22 2024-05-01 12:34:54 0:26:32 0:14:14 0:12:18 smithi main ubuntu 22.04 fs/shell/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/ubuntu_latest mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/cephfs-shell} 2
Failure Reason:

Test failure: test_cd_with_args (tasks.cephfs.test_cephfs_shell.TestCD)

fail 7682783 2024-05-01 10:04:09 2024-05-01 12:08:23 2024-05-01 13:12:55 1:04:32 0:52:10 0:12:22 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/1 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable calloc calloc _dl_check_map_versions

pass 7682784 2024-05-01 10:04:10 2024-05-01 12:08:23 2024-05-01 13:44:25 1:36:02 1:29:10 0:06:52 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/dbench}} 3
pass 7682785 2024-05-01 10:04:12 2024-05-01 12:08:24 2024-05-01 12:42:13 0:33:49 0:27:26 0:06:23 smithi main centos 9.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/snapshot}} 2
fail 7682786 2024-05-01 10:04:13 2024-05-01 12:08:24 2024-05-01 13:12:48 1:04:24 0:39:32 0:24:52 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/ffsb}} 3
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=148a44391d69844f26ccb96d75a4ba77e4183fe4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

pass 7682787 2024-05-01 10:04:14 2024-05-01 12:24:45 2024-05-01 12:57:41 0:32:56 0:26:19 0:06:37 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/fsx}} 3
pass 7682788 2024-05-01 10:04:15 2024-05-01 12:24:45 2024-05-01 13:16:16 0:51:31 0:39:25 0:12:06 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/fs/test_o_trunc}} 3
fail 7682789 2024-05-01 10:04:17 2024-05-01 12:24:46 2024-05-01 13:19:49 0:55:03 0:43:58 0:11:05 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7682790 2024-05-01 10:04:18 2024-05-01 12:24:46 2024-05-01 14:12:23 1:47:37 1:35:39 0:11:58 smithi main centos 9.stream fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate 3-modules} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down osd pg-warn pg_health} tasks/{0-client 1-tests/fscrypt-common}} 3
Failure Reason:

Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)

pass 7682791 2024-05-01 10:04:19 2024-05-01 12:24:47 2024-05-01 14:26:20 2:01:33 1:48:27 0:13:06 smithi main ubuntu 22.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/basic}} 2
fail 7682792 2024-05-01 10:04:20 2024-05-01 12:24:47 2024-05-01 13:13:10 0:48:23 0:33:17 0:15:06 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/quiesce} 2
Failure Reason:

Test failure: test_quiesce_authpin_wait (tasks.cephfs.test_quiesce.TestQuiesceMultiRank)

fail 7682793 2024-05-01 10:04:22 2024-05-01 12:29:18 2024-05-01 13:53:23 1:24:05 1:05:15 0:18:50 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

pass 7682794 2024-05-01 10:04:23 2024-05-01 12:39:13 2024-05-01 13:46:40 1:07:27 0:57:02 0:10:25 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/postgres}} 3
pass 7682795 2024-05-01 10:04:24 2024-05-01 12:39:13 2024-05-01 14:08:40 1:29:27 1:20:33 0:08:54 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/dbench}} 3
pass 7682796 2024-05-01 10:04:25 2024-05-01 12:39:24 2024-05-01 13:03:26 0:24:02 0:12:01 0:12:01 smithi main centos 9.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/{centos_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/libcephfs_python} 2
fail 7682797 2024-05-01 10:04:26 2024-05-01 12:42:14 2024-05-01 13:37:29 0:55:15 0:44:28 0:10:47 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7682798 2024-05-01 10:04:28 2024-05-01 12:42:25 2024-05-01 13:11:29 0:29:04 0:16:19 0:12:45 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/sessionmap} 2
Failure Reason:

Test failure: test_session_reject (tasks.cephfs.test_sessionmap.TestSessionMap)

pass 7682799 2024-05-01 10:04:29 2024-05-01 12:43:45 2024-05-01 13:59:53 1:16:08 1:04:31 0:11:37 smithi main ubuntu 22.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/clone}} 2
dead 7682800 2024-05-01 10:04:30 2024-05-01 12:43:46 2024-05-02 00:56:16 12:12:30 smithi main centos 9.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health} tasks/failover} 2
Failure Reason:

hit max job timeout

pass 7682801 2024-05-01 10:04:31 2024-05-01 12:46:36 2024-05-01 13:24:49 0:38:13 0:27:44 0:10:29 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/direct_io}} 3
fail 7682802 2024-05-01 10:04:33 2024-05-01 12:46:57 2024-05-01 17:22:31 4:35:34 4:16:22 0:19:12 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/fs/misc}} 3
Failure Reason:

Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi019 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=148a44391d69844f26ccb96d75a4ba77e4183fe4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'

fail 7682803 2024-05-01 10:04:34 2024-05-01 12:54:48 2024-05-01 13:29:45 0:34:57 0:26:01 0:08:56 smithi main centos 9.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/5 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)

fail 7682804 2024-05-01 10:04:35 2024-05-01 12:54:49 2024-05-01 14:08:02 1:13:13 1:02:38 0:10:35 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

"2024-05-01T13:20:00.000329+0000 mon.a (mon.0) 685 : cluster [WRN] pg 1.0 is stuck inactive for 64s, current state unknown, last acting [5,2]" in cluster log