Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7608463 2024-03-17 23:17:48 2024-03-17 23:19:51 2024-03-18 00:17:06 0:57:15 0:47:09 0:10:06 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/blogbench}} 3
Failure Reason:

"2024-03-17T23:48:03.070702+0000 mon.a (mon.0) 210 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.79:3300/0,v1:172.21.15.79:6789/0] is down (out of quorum)" in cluster log

pass 7608464 2024-03-17 23:17:49 2024-03-17 23:20:12 2024-03-17 23:49:35 0:29:23 0:19:45 0:09:38 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
fail 7608465 2024-03-17 23:17:50 2024-03-17 23:20:12 2024-03-18 00:27:24 1:07:12 0:56:45 0:10:27 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c382027d5c66ef4e0aa1131ea3e8080c658d3184 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

fail 7608466 2024-03-17 23:17:51 2024-03-17 23:20:23 2024-03-18 00:31:58 1:11:35 0:59:01 0:12:34 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/ffsb}} 3
Failure Reason:

"2024-03-17T23:46:19.275987+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.178:3300/0,v1:172.21.15.178:6789/0] is down (out of quorum)" in cluster log

fail 7608467 2024-03-17 23:17:53 2024-03-17 23:22:04 2024-03-18 00:11:22 0:49:18 0:39:25 0:09:53 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/norstats}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7608468 2024-03-17 23:17:54 2024-03-17 23:22:04 2024-03-18 00:04:54 0:42:50 0:31:24 0:11:26 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/fsstress}} 3
Failure Reason:

"2024-03-17T23:46:24.394622+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.81:3300/0,v1:172.21.15.81:6789/0] is down (out of quorum)" in cluster log

fail 7608469 2024-03-17 23:17:55 2024-03-17 23:23:55 2024-03-18 00:04:45 0:40:50 0:32:29 0:08:21 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2024-03-17T23:54:15.126316+0000 mon.smithi063 (mon.0) 253 : cluster [WRN] Health check failed: Degraded data redundancy: 44/222 objects degraded (19.820%), 18 pgs degraded (PG_DEGRADED)" in cluster log

fail 7608470 2024-03-17 23:17:56 2024-03-17 23:23:55 2024-03-18 00:06:57 0:43:02 0:33:10 0:09:52 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/fsx}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi033 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c382027d5c66ef4e0aa1131ea3e8080c658d3184 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

fail 7608471 2024-03-17 23:17:57 2024-03-17 23:23:56 2024-03-18 00:04:54 0:40:58 0:31:54 0:09:04 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/fsync-tester}} 3
Failure Reason:

"2024-03-17T23:49:05.368057+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.83:3300/0,v1:172.21.15.83:6789/0] is down (out of quorum)" in cluster log

fail 7608472 2024-03-17 23:17:58 2024-03-17 23:23:56 2024-03-18 00:11:48 0:47:52 0:35:47 0:12:05 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/test_o_trunc}} 3
Failure Reason:

"2024-03-17T23:48:19.241927+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.113:3300/0,v1:172.21.15.113:6789/0] is down (out of quorum)" in cluster log

dead 7608473 2024-03-17 23:17:59 2024-03-17 23:23:56 2024-03-17 23:28:32 0:04:36 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
Failure Reason:

Error reimaging machines: Expected smithi040's OS to be centos 8 but found centos 9

dead 7608474 2024-03-17 23:18:00 2024-03-17 23:23:57 2024-03-17 23:36:05 0:12:08 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/iogen}} 3
Failure Reason:

SSH connection to smithi040 was lost: 'rm -f /tmp/kernel.x86_64.rpm && echo kernel-6.8.0_rc7_g3845b7ac715a-1.x86_64.rpm | wget -nv -O /tmp/kernel.x86_64.rpm --base=https://2.chacra.ceph.com/r/kernel/testing/3845b7ac715a3ff198582638d199a90e30e254da/centos/9/flavors/default/x86_64/ --input-file=-'

fail 7608475 2024-03-17 23:18:01 2024-03-17 23:23:57 2024-03-18 00:11:25 0:47:28 0:37:55 0:09:33 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/iozone}} 3
Failure Reason:

"2024-03-17T23:48:09.029211+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.64:3300/0,v1:172.21.15.64:6789/0] is down (out of quorum)" in cluster log

fail 7608476 2024-03-17 23:18:02 2024-03-17 23:23:58 2024-03-18 00:10:42 0:46:44 0:37:39 0:09:05 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7608477 2024-03-17 23:18:03 2024-03-17 23:23:58 2024-03-18 00:18:16 0:54:18 0:42:20 0:11:58 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: InvalidRead UnknownInlinedFun UnknownInlinedFun QuiesceDbManager::leader_bootstrap(std::queue<QuiesceDbPeerListing, std::deque<QuiesceDbPeerListing, std::allocator<QuiesceDbPeerListing> > >&&, std::chrono::duration<unsigned long, std::ratio<1l, 1000000000l> >&)

fail 7608478 2024-03-17 23:18:04 2024-03-17 23:23:58 2024-03-18 00:05:48 0:41:50 0:30:28 0:11:22 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/pjd}} 3
Failure Reason:

"2024-03-17T23:48:16.615702+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.114:3300/0,v1:172.21.15.114:6789/0] is down (out of quorum)" in cluster log

fail 7608479 2024-03-17 23:18:05 2024-03-17 23:23:59 2024-03-17 23:50:46 0:26:47 0:14:58 0:11:49 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v2 tasks/openfiletable} 2
Failure Reason:

Test failure: test_max_items_per_obj (tasks.cephfs.test_openfiletable.OpenFileTable)

dead 7608480 2024-03-17 23:18:06 2024-03-17 23:23:59 2024-03-17 23:30:42 0:06:43 smithi main centos 9.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
Failure Reason:

Error reimaging machines: Expected smithi135's OS to be centos 9 but found ubuntu 22.04

dead 7608481 2024-03-17 23:18:07 2024-03-17 23:24:00 2024-03-17 23:35:47 0:11:47 smithi main ubuntu 22.04 fs/mirror/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health} supported-random-distros$/{ubuntu_latest} tasks/mirror} 1
Failure Reason:

Error reimaging machines: Expected smithi135's OS to be ubuntu 22.04 but found centos 9

fail 7608482 2024-03-17 23:18:08 2024-03-17 23:24:00 2024-03-18 00:08:39 0:44:39 0:35:23 0:09:16 smithi main centos 9.stream fs/nfs/{cluster/{1-node} conf/{client mds mon osd} overrides/ignorelist_health supported-random-distros$/{centos_latest} tasks/nfs} 1
Failure Reason:

"2024-03-17T23:44:06.645269+0000 mon.a (mon.0) 487 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7608483 2024-03-17 23:18:10 2024-03-17 23:24:00 2024-03-18 00:13:49 0:49:49 0:37:32 0:12:17 smithi main centos 9.stream fs/shell/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/centos_latest mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/cephfs-shell} 2
Failure Reason:

Test failure: test_reading_conf (tasks.cephfs.test_cephfs_shell.TestShellOpts)

fail 7608484 2024-03-17 23:18:11 2024-03-17 23:24:01 2024-03-18 00:46:22 1:22:21 1:11:38 0:10:43 smithi main ubuntu 22.04 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

Command failed on smithi116 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'

fail 7608485 2024-03-17 23:18:12 2024-03-17 23:24:01 2024-03-18 00:05:15 0:41:14 0:32:08 0:09:06 smithi main centos 9.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/ignorelist_health tasks/mirror}} 1
Failure Reason:

Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)

fail 7608486 2024-03-17 23:18:13 2024-03-17 23:24:01 2024-03-18 00:07:50 0:43:49 0:33:35 0:10:14 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/direct_io}} 3
Failure Reason:

"2024-03-17T23:44:37.655863+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.162:3300/0,v1:172.21.15.162:6789/0] is down (out of quorum)" in cluster log

fail 7608487 2024-03-17 23:18:14 2024-03-17 23:24:02 2024-03-18 00:54:45 1:30:43 1:18:59 0:11:44 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/fs/misc}} 3
Failure Reason:

"2024-03-17T23:50:18.942377+0000 mon.a (mon.0) 214 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.175:3300/0,v1:172.21.15.175:6789/0] is down (out of quorum)" in cluster log

fail 7608488 2024-03-17 23:18:15 2024-03-17 23:24:02 2024-03-17 23:50:48 0:26:46 0:15:36 0:11:10 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/pool-perm} 2
Failure Reason:

Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

dead 7608489 2024-03-17 23:18:16 2024-03-17 23:24:03 2024-03-18 12:31:04 13:07:01 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

hit max job timeout

fail 7608490 2024-03-17 23:18:17 2024-03-17 23:24:03 2024-03-18 00:31:01 1:06:58 0:55:42 0:11:16 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/postgres}} 3
Failure Reason:

"2024-03-17T23:47:00.907033+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.103:3300/0,v1:172.21.15.103:6789/0] is down (out of quorum)" in cluster log

fail 7608491 2024-03-17 23:18:18 2024-03-17 23:24:24 2024-03-18 00:14:52 0:50:28 0:38:09 0:12:19 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/blogbench}} 3
Failure Reason:

"2024-03-17T23:47:48.747321+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.130:3300/0,v1:172.21.15.130:6789/0] is down (out of quorum)" in cluster log

fail 7608492 2024-03-17 23:18:19 2024-03-17 23:24:44 2024-03-17 23:35:46 0:11:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed on smithi040 with status 1: 'sudo yum upgrade -y linux-firmware'

fail 7608493 2024-03-17 23:18:20 2024-03-17 23:28:45 2024-03-17 23:42:05 0:13:20 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/ffsb}} 3
Failure Reason:

Failed to reconnect to smithi076

dead 7608494 2024-03-17 23:18:21 2024-03-17 23:28:45 2024-03-17 23:35:55 0:07:10 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Error reimaging machines: list index out of range

fail 7608495 2024-03-17 23:18:22 2024-03-17 23:30:56 2024-03-18 00:11:12 0:40:16 0:29:37 0:10:39 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/fs/norstats}} 3
Failure Reason:

"2024-03-17T23:52:56.691479+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.82:3300/0,v1:172.21.15.82:6789/0] is down (out of quorum)" in cluster log

fail 7608496 2024-03-17 23:18:23 2024-03-17 23:31:37 2024-03-18 00:19:28 0:47:51 0:32:34 0:15:17 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/fsstress}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7608497 2024-03-17 23:18:24 2024-03-17 23:32:17 2024-03-18 00:10:01 0:37:44 0:26:48 0:10:56 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/fsx}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi078 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c382027d5c66ef4e0aa1131ea3e8080c658d3184 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

fail 7608498 2024-03-17 23:18:25 2024-03-17 23:32:48 2024-03-18 00:16:00 0:43:12 0:31:50 0:11:22 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/fsync-tester}} 3
Failure Reason:

"2024-03-17T23:53:17.958131+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.163:3300/0,v1:172.21.15.163:6789/0] is down (out of quorum)" in cluster log

fail 7608499 2024-03-17 23:18:26 2024-03-17 23:34:08 2024-03-18 01:53:57 2:19:49 2:11:01 0:08:48 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/snap-schedule} 2
Failure Reason:

"2024-03-17T23:56:19.409946+0000 mon.a (mon.0) 543 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7608500 2024-03-17 23:18:27 2024-03-17 23:34:09 2024-03-18 00:24:33 0:50:24 0:37:09 0:13:15 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/fs/test_o_trunc}} 3
Failure Reason:

"2024-03-17T23:58:00.844642+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.104:3300/0,v1:172.21.15.104:6789/0] is down (out of quorum)" in cluster log

fail 7608501 2024-03-17 23:18:28 2024-03-17 23:35:49 2024-03-18 00:23:34 0:47:45 0:37:01 0:10:44 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7608502 2024-03-17 23:18:29 2024-03-17 23:35:50 2024-03-18 00:03:40 0:27:50 0:16:48 0:11:02 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/fsstress validater/lockdep} 2
Failure Reason:

Command failed on smithi196 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'

fail 7608503 2024-03-17 23:18:30 2024-03-17 23:35:50 2024-03-18 00:25:00 0:49:10 0:37:22 0:11:48 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/iogen}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7608504 2024-03-17 23:18:31 2024-03-17 23:36:01 2024-03-18 00:15:36 0:39:35 0:28:37 0:10:58 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/iozone}} 3
Failure Reason:

"2024-03-17T23:58:23.935839+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.105:3300/0,v1:172.21.15.105:6789/0] is down (out of quorum)" in cluster log

fail 7608505 2024-03-17 23:18:33 2024-03-17 23:36:01 2024-03-18 00:19:42 0:43:41 0:32:06 0:11:35 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/pjd}} 3
Failure Reason:

"2024-03-17T23:57:11.722473+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.22:3300/0,v1:172.21.15.22:6789/0] is down (out of quorum)" in cluster log

fail 7608506 2024-03-17 23:18:34 2024-03-17 23:38:22 2024-03-18 00:11:53 0:33:31 0:21:17 0:12:14 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
Failure Reason:

Command failed on smithi049 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell 4.29 deep-scrub'

fail 7608507 2024-03-17 23:18:35 2024-03-17 23:39:23 2024-03-18 00:34:20 0:54:57 0:36:30 0:18:27 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/direct_io}} 3
Failure Reason:

"2024-03-18T00:09:25.961819+0000 mon.a (mon.0) 214 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.111:3300/0,v1:172.21.15.111:6789/0] is down (out of quorum)" in cluster log

pass 7608508 2024-03-17 23:18:36 2024-03-17 23:46:44 2024-03-18 00:28:42 0:41:58 0:32:38 0:09:20 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v2 tasks/strays} 2
fail 7608509 2024-03-17 23:18:37 2024-03-17 23:47:05 2024-03-18 01:07:23 1:20:18 1:08:29 0:11:49 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7608510 2024-03-17 23:18:38 2024-03-17 23:48:05 2024-03-18 01:56:31 2:08:26 1:56:10 0:12:16 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} 3
Failure Reason:

"2024-03-18T00:10:49.774397+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.171:3300/0,v1:172.21.15.171:6789/0] is down (out of quorum)" in cluster log

fail 7608511 2024-03-17 23:18:39 2024-03-17 23:49:06 2024-03-18 00:44:54 0:55:48 0:43:48 0:12:00 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/postgres}} 3
Failure Reason:

"2024-03-18T00:17:54.017013+0000 mon.a (mon.0) 802 : cluster [WRN] Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 7608512 2024-03-17 23:18:40 2024-03-17 23:49:36 2024-03-18 00:32:43 0:43:07 0:31:26 0:11:41 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2024-03-18T00:22:53.838039+0000 mon.smithi076 (mon.0) 245 : cluster [WRN] Health check failed: Degraded data redundancy: 69/354 objects degraded (19.492%), 19 pgs degraded (PG_DEGRADED)" in cluster log

fail 7608513 2024-03-17 23:18:41 2024-03-17 23:51:27 2024-03-18 01:16:29 1:25:02 1:13:37 0:11:25 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/blogbench}} 3
Failure Reason:

"2024-03-18T00:16:25.362343+0000 mon.a (mon.0) 214 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.146:3300/0,v1:172.21.15.146:6789/0] is down (out of quorum)" in cluster log

fail 7608514 2024-03-17 23:18:42 2024-03-17 23:51:27 2024-03-18 01:04:30 1:13:03 1:01:29 0:11:34 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

"2024-03-18T00:10:52.398414+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.92:3300/0,v1:172.21.15.92:6789/0] is down (out of quorum)" in cluster log

fail 7608515 2024-03-17 23:18:43 2024-03-17 23:51:28 2024-03-18 01:10:38 1:19:10 1:08:33 0:10:37 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/ffsb}} 3
Failure Reason:

"2024-03-18T00:12:36.304255+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.150:3300/0,v1:172.21.15.150:6789/0] is down (out of quorum)" in cluster log

fail 7608516 2024-03-17 23:18:44 2024-03-17 23:51:28 2024-03-18 00:37:54 0:46:26 0:25:19 0:21:07 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/norstats}} 3
Failure Reason:

"2024-03-18T00:24:49.264873+0000 mon.a (mon.0) 578 : cluster [WRN] Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 7608517 2024-03-17 23:18:45 2024-03-18 00:06:54 2024-03-18 07:44:28 7:37:34 7:24:05 0:13:29 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v2 tasks/xfstests-dev} 2
Failure Reason:

Test failure: test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev)

fail 7608518 2024-03-17 23:18:46 2024-03-18 00:06:54 2024-03-18 00:56:03 0:49:09 0:38:11 0:10:58 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/fsstress}} 3
Failure Reason:

"2024-03-18T00:27:18.544420+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.63:3300/0,v1:172.21.15.63:6789/0] is down (out of quorum)" in cluster log

fail 7608519 2024-03-17 23:18:47 2024-03-18 00:06:55 2024-03-18 00:53:24 0:46:29 0:37:08 0:09:21 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

dead 7608520 2024-03-17 23:18:48 2024-03-18 00:06:55 2024-03-18 00:11:40 0:04:45 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsx}} 3
Failure Reason:

Error reimaging machines: Expected smithi083's OS to be centos 9 but found centos 8

dead 7608521 2024-03-17 23:18:50 2024-03-18 00:06:56 2024-03-18 00:11:45 0:04:49 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
Failure Reason:

Error reimaging machines: SSH connection to smithi145 was lost: "while [ ! -e '/.cephlab_net_configured' ]; do sleep 5; done"

fail 7608522 2024-03-17 23:18:51 2024-03-18 00:06:56 2024-03-18 01:04:24 0:57:28 0:48:03 0:09:25 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_PossiblyLost calloc __trans_list_add

fail 7608523 2024-03-17 23:18:52 2024-03-18 00:06:56 2024-03-18 00:52:33 0:45:37 0:35:35 0:10:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/fsync-tester}} 3
Failure Reason:

"2024-03-18T00:28:19.222977+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.90:3300/0,v1:172.21.15.90:6789/0] is down (out of quorum)" in cluster log

pass 7608524 2024-03-17 23:18:53 2024-03-18 00:06:57 2024-03-18 00:30:22 0:23:25 0:14:48 0:08:37 smithi main ubuntu 22.04 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
fail 7608525 2024-03-17 23:18:54 2024-03-18 00:06:57 2024-03-18 00:54:54 0:47:57 0:32:41 0:15:16 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/test_o_trunc}} 3
Failure Reason:

"2024-03-18T00:30:50.850379+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.83:3300/0,v1:172.21.15.83:6789/0] is down (out of quorum)" in cluster log

fail 7608526 2024-03-17 23:18:55 2024-03-18 00:11:58 2024-03-18 01:07:02 0:55:04 0:43:54 0:11:10 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/iogen}} 3
Failure Reason:

"2024-03-18T00:33:58.238681+0000 mon.a (mon.0) 210 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.148:3300/0,v1:172.21.15.148:6789/0] is down (out of quorum)" in cluster log

fail 7608527 2024-03-17 23:18:56 2024-03-18 00:11:59 2024-03-18 01:09:13 0:57:14 0:37:44 0:19:30 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/admin} 2
Failure Reason:

Test failure: test_idem_unaffected_root_squash (tasks.cephfs.test_admin.TestFsAuthorizeUpdate)

fail 7608528 2024-03-17 23:18:57 2024-03-18 00:22:00 2024-03-18 01:00:28 0:38:28 0:26:44 0:11:44 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/iozone}} 3
Failure Reason:

"2024-03-18T00:45:39.458554+0000 mon.a (mon.0) 214 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.22:3300/0,v1:172.21.15.22:6789/0] is down (out of quorum)" in cluster log

fail 7608529 2024-03-17 23:18:58 2024-03-18 00:22:11 2024-03-18 01:12:10 0:49:59 0:38:21 0:11:38 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/pjd}} 3
Failure Reason:

"2024-03-18T00:46:24.182179+0000 mon.a (mon.0) 217 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.120:3300/0,v1:172.21.15.120:6789/0] is down (out of quorum)" in cluster log

pass 7608530 2024-03-17 23:18:59 2024-03-18 00:22:11 2024-03-18 00:52:38 0:30:27 0:19:33 0:10:54 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
fail 7608531 2024-03-17 23:19:01 2024-03-18 00:22:12 2024-03-18 02:37:27 2:15:15 2:02:20 0:12:55 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/fs/misc}} 3
Failure Reason:

"2024-03-18T00:46:17.493372+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.94:3300/0,v1:172.21.15.94:6789/0] is down (out of quorum)" in cluster log

fail 7608532 2024-03-17 23:19:02 2024-03-18 00:22:12 2024-03-18 01:08:46 0:46:34 0:32:46 0:13:48 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2024-03-18T00:57:57.730277+0000 mon.smithi064 (mon.0) 246 : cluster [WRN] Health check failed: Reduced data availability: 34 pgs peering (PG_AVAILABILITY)" in cluster log

fail 7608533 2024-03-17 23:19:03 2024-03-18 00:22:12 2024-03-18 02:21:11 1:58:59 1:46:33 0:12:26 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7608534 2024-03-17 23:19:04 2024-03-18 00:22:13 2024-03-18 01:36:15 1:14:02 1:01:55 0:12:07 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/postgres}} 3
Failure Reason:

"2024-03-18T00:46:23.350824+0000 mon.a (mon.0) 214 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.172:3300/0,v1:172.21.15.172:6789/0] is down (out of quorum)" in cluster log

fail 7608535 2024-03-17 23:19:05 2024-03-18 00:22:13 2024-03-18 00:44:23 0:22:10 0:12:32 0:09:38 smithi main ubuntu 22.04 fs/shell/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/ubuntu_latest mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/cephfs-shell} 2
Failure Reason:

Test failure: test_cd_with_args (tasks.cephfs.test_cephfs_shell.TestCD)

fail 7608536 2024-03-17 23:19:06 2024-03-18 00:22:14 2024-03-18 01:10:59 0:48:45 0:38:10 0:10:35 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/blogbench}} 3
Failure Reason:

"2024-03-18T00:51:31.546121+0000 mon.a (mon.0) 1097 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

fail 7608537 2024-03-17 23:19:07 2024-03-18 00:22:14 2024-03-18 01:20:30 0:58:16 0:47:03 0:11:13 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi097 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c382027d5c66ef4e0aa1131ea3e8080c658d3184 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

fail 7608538 2024-03-17 23:19:08 2024-03-18 00:22:15 2024-03-18 01:12:01 0:49:46 0:37:49 0:11:57 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/fs/norstats}} 3
Failure Reason:

"2024-03-18T00:46:46.446330+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.79:3300/0,v1:172.21.15.79:6789/0] is down (out of quorum)" in cluster log

fail 7608539 2024-03-17 23:19:09 2024-03-18 00:22:15 2024-03-18 01:09:48 0:47:33 0:37:39 0:09:54 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7608540 2024-03-17 23:19:11 2024-03-18 00:22:15 2024-03-18 01:02:10 0:39:55 0:27:53 0:12:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsstress}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7608541 2024-03-17 23:19:12 2024-03-18 00:22:16 2024-03-18 01:12:55 0:50:39 0:37:29 0:13:10 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/fsx}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi073 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c382027d5c66ef4e0aa1131ea3e8080c658d3184 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

fail 7608542 2024-03-17 23:19:13 2024-03-18 00:22:16 2024-03-18 00:59:48 0:37:32 0:25:18 0:12:14 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsync-tester}} 3
Failure Reason:

"2024-03-18T00:44:40.805817+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.179:3300/0,v1:172.21.15.179:6789/0] is down (out of quorum)" in cluster log

fail 7608543 2024-03-17 23:19:14 2024-03-18 00:22:17 2024-03-18 01:12:43 0:50:26 0:38:21 0:12:05 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/fs/test_o_trunc}} 3
Failure Reason:

"2024-03-18T00:46:28.034784+0000 mon.a (mon.0) 214 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.88:3300/0,v1:172.21.15.88:6789/0] is down (out of quorum)" in cluster log

fail 7608544 2024-03-17 23:19:15 2024-03-18 00:22:17 2024-03-18 01:20:48 0:58:31 0:46:42 0:11:49 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} 3
Failure Reason:

"2024-03-18T00:42:01.490162+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.106:3300/0,v1:172.21.15.106:6789/0] is down (out of quorum)" in cluster log

fail 7608545 2024-03-17 23:19:16 2024-03-18 00:22:17 2024-03-18 01:17:34 0:55:17 0:37:26 0:17:51 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/iozone}} 3
Failure Reason:

"2024-03-18T00:52:28.629737+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.44:3300/0,v1:172.21.15.44:6789/0] is down (out of quorum)" in cluster log

fail 7608546 2024-03-17 23:19:17 2024-03-18 00:30:29 2024-03-18 01:10:31 0:40:02 0:23:28 0:16:34 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

pass 7608547 2024-03-17 23:19:19 2024-03-18 00:37:40 2024-03-18 01:05:22 0:27:42 0:16:29 0:11:13 smithi main ubuntu 22.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_pjd}} 2
fail 7608548 2024-03-17 23:19:20 2024-03-18 00:37:41 2024-03-18 01:29:45 0:52:04 0:41:07 0:10:57 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7608549 2024-03-17 23:19:21 2024-03-18 00:37:41 2024-03-18 02:24:52 1:47:11 1:34:35 0:12:36 smithi main ubuntu 22.04 fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down osd pg-warn} tasks/{0-client 1-tests/fscrypt-common}} 3
Failure Reason:

Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)

fail 7608550 2024-03-17 23:19:22 2024-03-18 00:37:42 2024-03-18 01:15:47 0:38:05 0:26:37 0:11:28 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/direct_io}} 3
Failure Reason:

"2024-03-18T00:59:23.600092+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.111:3300/0,v1:172.21.15.111:6789/0] is down (out of quorum)" in cluster log

fail 7608551 2024-03-17 23:19:23 2024-03-18 00:37:42 2024-03-18 02:57:02 2:19:20 2:08:45 0:10:35 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason:

"2024-03-18T00:56:01.040376+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.151:3300/0,v1:172.21.15.151:6789/0] is down (out of quorum)" in cluster log

dead 7608552 2024-03-17 23:19:24 2024-03-18 00:37:42 2024-03-18 00:56:22 0:18:40 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

fail 7608553 2024-03-17 23:19:25 2024-03-18 00:37:43 2024-03-18 01:59:37 1:21:54 1:10:07 0:11:47 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/kernel_untar_build}} 3
Failure Reason:

"2024-03-18T00:59:49.505082+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.176:3300/0,v1:172.21.15.176:6789/0] is down (out of quorum)" in cluster log

fail 7608554 2024-03-17 23:19:26 2024-03-18 00:37:43 2024-03-18 01:09:53 0:32:10 0:18:52 0:13:18 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/forward-scrub} 2
Failure Reason:

"2024-03-18T01:04:41.470097+0000 mds.a (mds.0) 1 : cluster [ERR] dir 0x10000000000 object missing on disk; some files may be lost (/dir)" in cluster log

fail 7608555 2024-03-17 23:19:27 2024-03-18 00:37:44 2024-03-18 01:09:43 0:31:59 0:20:46 0:11:13 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
Failure Reason:

Command failed on smithi005 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell 4.29 deep-scrub'

fail 7608556 2024-03-17 23:19:28 2024-03-18 00:37:44 2024-03-18 01:36:18 0:58:34 0:43:00 0:15:34 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/postgres}} 3
Failure Reason:

"2024-03-18T01:02:36.472079+0000 mon.a (mon.0) 210 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.137:3300/0,v1:172.21.15.137:6789/0] is down (out of quorum)" in cluster log

pass 7608557 2024-03-17 23:19:29 2024-03-18 00:42:05 2024-03-18 01:54:42 1:12:37 0:51:44 0:20:53 smithi main ubuntu 22.04 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench traceless/50pc} 2
pass 7608558 2024-03-17 23:19:30 2024-03-18 00:52:47 2024-03-18 01:15:35 0:22:48 0:13:12 0:09:36 smithi main centos 9.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_trivial_sync}} 2
fail 7608559 2024-03-17 23:19:31 2024-03-18 00:53:08 2024-03-18 01:46:28 0:53:20 0:41:42 0:11:38 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/blogbench}} 3
Failure Reason:

"2024-03-18T01:16:37.325508+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.38:3300/0,v1:172.21.15.38:6789/0] is down (out of quorum)" in cluster log

pass 7608560 2024-03-17 23:19:32 2024-03-18 00:53:08 2024-03-18 01:26:08 0:33:00 0:21:56 0:11:04 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v2 tasks/fragment} 2
fail 7608561 2024-03-17 23:19:33 2024-03-18 00:53:08 2024-03-18 01:58:54 1:05:46 0:52:16 0:13:30 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi029 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 37f51182-e4c4-11ee-95c9-87774f69a715 -e sha1=3d78f5ba07bf18880172c07d4d7b4ac00c9d30af -- bash -c 'ceph orch ps'"

fail 7608562 2024-03-17 23:19:34 2024-03-18 00:53:09 2024-03-18 02:06:11 1:13:02 1:03:00 0:10:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi055 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c382027d5c66ef4e0aa1131ea3e8080c658d3184 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

pass 7608563 2024-03-17 23:19:36 2024-03-18 00:53:09 2024-03-18 01:18:00 0:24:51 0:11:34 0:13:17 smithi main centos 9.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
fail 7608564 2024-03-17 23:19:37 2024-03-18 00:56:40 2024-03-18 02:18:45 1:22:05 1:02:01 0:20:04 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/ffsb}} 3
Failure Reason:

"2024-03-18T01:28:04.885193+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.46:3300/0,v1:172.21.15.46:6789/0] is down (out of quorum)" in cluster log

fail 7608565 2024-03-17 23:19:38 2024-03-18 01:05:32 2024-03-18 02:00:39 0:55:07 0:40:35 0:14:32 smithi main centos 9.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

pass 7608566 2024-03-17 23:19:39 2024-03-18 01:08:32 2024-03-18 01:38:15 0:29:43 0:19:28 0:10:15 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
fail 7608567 2024-03-17 23:19:40 2024-03-18 01:08:33 2024-03-18 01:51:17 0:42:44 0:32:08 0:10:36 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/fs/norstats}} 3
Failure Reason:

"2024-03-18T01:27:46.967720+0000 mon.a (mon.0) 214 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.22:3300/0,v1:172.21.15.22:6789/0] is down (out of quorum)" in cluster log

fail 7608568 2024-03-17 23:19:41 2024-03-18 01:08:33 2024-03-18 01:50:22 0:41:49 0:29:46 0:12:03 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/fsstress}} 3
Failure Reason:

"2024-03-18T01:30:35.297944+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.145:3300/0,v1:172.21.15.145:6789/0] is down (out of quorum)" in cluster log

fail 7608569 2024-03-17 23:19:42 2024-03-18 01:08:34 2024-03-18 01:51:36 0:43:02 0:31:54 0:11:08 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/fsx}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi064 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c382027d5c66ef4e0aa1131ea3e8080c658d3184 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

pass 7608570 2024-03-17 23:19:43 2024-03-18 01:08:34 2024-03-18 01:44:13 0:35:39 0:24:12 0:11:27 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/mds-full} 2
fail 7608571 2024-03-17 23:19:44 2024-03-18 01:08:35 2024-03-18 01:48:15 0:39:40 0:28:36 0:11:04 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/fsync-tester}} 3
Failure Reason:

"2024-03-18T01:30:52.117401+0000 mon.a (mon.0) 211 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.92:3300/0,v1:172.21.15.92:6789/0] is down (out of quorum)" in cluster log

pass 7608572 2024-03-17 23:19:45 2024-03-18 01:08:35 2024-03-18 01:28:15 0:19:40 0:10:05 0:09:35 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
fail 7608573 2024-03-17 23:19:47 2024-03-18 01:08:36 2024-03-18 01:53:39 0:45:03 0:32:06 0:12:57 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/fs/test_o_trunc}} 3
Failure Reason:

"2024-03-18T01:30:02.587454+0000 mon.a (mon.0) 213 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.133:3300/0,v1:172.21.15.133:6789/0] is down (out of quorum)" in cluster log

fail 7608574 2024-03-17 23:19:48 2024-03-18 01:08:36 2024-03-18 01:58:51 0:50:15 0:37:42 0:12:33 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds