Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7665366 2024-04-20 20:50:53 2024-04-20 20:51:34 2024-04-20 22:54:53 2:03:19 1:42:33 0:20:46 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/ffsb}} 3
Failure Reason:

Command failed on smithi027 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:a8c84f5997950fd7d008204839253291d52e6eda shell --fsid 3b288db4-ff5b-11ee-bc93-c7b262605968 -- ceph daemon mds.d perf dump'

pass 7665368 2024-04-20 20:50:54 2024-04-20 20:51:35 2024-04-20 22:02:57 1:11:22 0:47:47 0:23:35 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
fail 7665370 2024-04-20 20:50:55 2024-04-20 20:51:35 2024-04-20 21:43:12 0:51:37 0:29:00 0:22:37 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/fsx}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi044 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a8c84f5997950fd7d008204839253291d52e6eda TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

dead 7665373 2024-04-20 20:50:56 2024-04-20 20:51:36 2024-04-21 11:26:34 14:34:58 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/xfstests-dev} 2
Failure Reason:

hit max job timeout

fail 7665375 2024-04-20 20:50:58 2024-04-20 20:51:36 2024-04-20 21:53:43 1:02:07 0:38:33 0:23:34 smithi main centos 9.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host overrides/{ignorelist_health pg_health}} 2
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi184 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a8c84f5997950fd7d008204839253291d52e6eda TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/dbench.sh'

fail 7665377 2024-04-20 20:50:59 2024-04-20 20:51:36 2024-04-20 21:31:59 0:40:23 0:18:59 0:21:24 smithi main centos 9.stream fs/mirror/{begin/{0-install 1-ceph 2-logrotate 3-modules} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health pg_health} supported-random-distros$/{centos_latest} tasks/mirror} 1
Failure Reason:

Test failure: test_cephfs_mirror_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

fail 7665379 2024-04-20 20:51:00 2024-04-20 20:51:37 2024-04-20 21:47:46 0:56:09 0:40:43 0:15:26 smithi main centos 9.stream fs/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignorelist_health pg_health} supported-random-distros$/{centos_latest} tasks/nfs} 1
Failure Reason:

"2024-04-20T21:16:52.027571+0000 mon.a (mon.0) 327 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 7665382 2024-04-20 20:51:01 2024-04-20 20:51:37 2024-04-20 21:40:35 0:48:58 0:30:24 0:18:34 smithi main centos 9.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate 3-modules} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/{ignorelist_health pg_health} tasks/mirror}} 1
Failure Reason:

Test failure: test_cephfs_mirror_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

pass 7665384 2024-04-20 20:51:03 2024-04-20 20:51:37 2024-04-20 22:15:15 1:23:38 0:59:25 0:24:13 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
fail 7665386 2024-04-20 20:51:04 2024-04-20 20:51:38 2024-04-20 22:39:26 1:47:48 1:25:33 0:22:15 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing:

fail 7665388 2024-04-20 20:51:05 2024-04-20 20:51:38 2024-04-20 22:40:20 1:48:42 1:27:49 0:20:53 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7665390 2024-04-20 20:51:06 2024-04-20 20:51:39 2024-04-20 21:21:14 0:29:35 0:18:59 0:10:36 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/3 tasks/fsstress validater/lockdep} 2
fail 7665393 2024-04-20 20:51:07 2024-04-20 20:51:39 2024-04-20 21:41:50 0:50:11 0:31:59 0:18:12 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/admin} 2
Failure Reason:

Test failure: test_adding_multiple_caps (tasks.cephfs.test_admin.TestFsAuthorize)

pass 7665395 2024-04-20 20:51:09 2024-04-20 20:51:39 2024-04-20 21:50:05 0:58:26 0:38:37 0:19:49 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/blogbench}} 3
fail 7665396 2024-04-20 20:51:10 2024-04-20 20:51:40 2024-04-20 21:53:49 1:02:09 0:41:16 0:20:53 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/ffsb}} 3
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a8c84f5997950fd7d008204839253291d52e6eda TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

pass 7665397 2024-04-20 20:51:11 2024-04-20 20:51:40 2024-04-20 21:11:40 0:20:00 0:10:04 0:09:56 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
pass 7665398 2024-04-20 20:51:12 2024-04-20 20:51:41 2024-04-20 21:21:54 0:30:13 0:15:42 0:14:31 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/asok_dump_tree} 2
pass 7665399 2024-04-20 20:51:13 2024-04-20 20:51:41 2024-04-20 21:48:58 0:57:17 0:36:23 0:20:54 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/fs/norstats}} 3
fail 7665400 2024-04-20 20:51:15 2024-04-20 20:51:41 2024-04-20 21:53:49 1:02:08 0:39:37 0:22:31 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2024-04-20T21:40:00.000196+0000 mon.smithi028 (mon.0) 391 : cluster [WRN] Health detail: HEALTH_WARN 1 osds down; Degraded data redundancy: 20/225 objects degraded (8.889%), 4 pgs degraded" in cluster log

fail 7665401 2024-04-20 20:51:16 2024-04-20 20:51:42 2024-04-20 21:23:08 0:31:26 0:21:32 0:09:54 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/fsx}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi059 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a8c84f5997950fd7d008204839253291d52e6eda TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

fail 7665402 2024-04-20 20:51:17 2024-04-20 20:51:42 2024-04-20 22:33:20 1:41:38 1:21:55 0:19:43 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/fs/misc}} 3
Failure Reason:

error during scrub thrashing:

fail 7665403 2024-04-20 20:51:18 2024-04-20 20:51:43 2024-04-20 22:01:57 1:10:14 0:56:28 0:13:46 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/5 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable calloc calloc _dl_check_map_versions

fail 7665404 2024-04-20 20:51:19 2024-04-20 20:51:43 2024-04-20 22:40:47 1:49:04 1:27:44 0:21:20 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7665405 2024-04-20 20:51:20 2024-04-20 20:51:43 2024-04-20 21:50:13 0:58:30 0:37:40 0:20:50 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi022 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a8c84f5997950fd7d008204839253291d52e6eda TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

pass 7665406 2024-04-20 20:51:22 2024-04-20 20:51:44 2024-04-20 22:26:59 1:35:15 1:13:55 0:21:20 smithi main centos 9.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health} tasks/failover} 2
pass 7665407 2024-04-20 20:51:23 2024-04-20 20:51:44 2024-04-20 21:51:52 1:00:08 0:44:44 0:15:24 smithi main ubuntu 22.04 fs/snaps/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/workunit/snaps} 2
fail 7665408 2024-04-20 20:51:24 2024-04-20 20:51:44 2024-04-20 21:32:25 0:40:41 0:20:47 0:19:54 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/forward-scrub} 2
Failure Reason:

"2024-04-20T21:25:28.220524+0000 mds.b (mds.0) 1 : cluster [ERR] dir 0x10000000000 object missing on disk; some files may be lost (/dir)" in cluster log

pass 7665409 2024-04-20 20:51:25 2024-04-20 20:51:45 2024-04-20 21:33:34 0:41:49 0:35:29 0:06:20 smithi main centos 9.stream fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{ubuntu/{overrides ubuntu_latest}} kclient-overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
fail 7665410 2024-04-20 20:51:26 2024-04-20 20:51:45 2024-04-20 21:45:57 0:54:12 0:33:48 0:20:24 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/fsx}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi077 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a8c84f5997950fd7d008204839253291d52e6eda TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

pass 7665411 2024-04-20 20:51:28 2024-04-20 20:51:46 2024-04-20 21:58:06 1:06:20 0:43:40 0:22:40 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
pass 7665412 2024-04-20 20:51:29 2024-04-20 20:51:46 2024-04-20 21:27:12 0:35:26 0:20:37 0:14:49 smithi main ubuntu 22.04 fs/32bits/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_pjd} 2
fail 7665413 2024-04-20 20:51:30 2024-04-20 20:51:46 2024-04-20 21:59:28 1:07:42 0:45:18 0:22:24 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7665414 2024-04-20 20:51:31 2024-04-20 20:51:47 2024-04-20 21:56:28 1:04:41 0:42:50 0:21:51 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi102 with status 135: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a8c84f5997950fd7d008204839253291d52e6eda TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'

fail 7665415 2024-04-20 20:51:32 2024-04-20 20:51:47 2024-04-20 23:57:42 3:05:55 2:42:23 0:23:32 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason:

error during scrub thrashing:

pass 7665416 2024-04-20 20:51:34 2024-04-20 21:36:41 1364 smithi main centos 9.stream fs/cephadm/renamevolume/{0-start 1-rename distro/single-container-host overrides/{ignorelist_health pg_health}} 2
fail 7665417 2024-04-20 20:51:35 2024-04-20 21:32:30 1644 smithi main ubuntu 22.04 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/{ubuntu_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/libcephfs/{frag test}} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi136 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a8c84f5997950fd7d008204839253291d52e6eda TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

fail 7665418 2024-04-20 20:51:36 2024-04-20 21:18:41 839 smithi main ubuntu 22.04 fs/shell/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/ubuntu_latest mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/cephfs-shell} 2
Failure Reason:

Test failure: test_cd_with_args (tasks.cephfs.test_cephfs_shell.TestCD)

pass 7665419 2024-04-20 20:51:38 2024-04-20 21:43:36 1993 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/blogbench}} 3
fail 7665420 2024-04-20 20:51:39 2024-04-20 20:51:49 2024-04-20 22:17:02 1:25:13 1:05:47 0:19:26 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

pass 7665421 2024-04-20 20:51:40 2024-04-20 20:51:50 2024-04-20 21:52:43 1:00:53 0:39:10 0:21:43 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/ffsb}} 3
dead 7665422 2024-04-20 20:51:41 2024-04-20 20:51:50 2024-04-20 20:58:03 0:06:13 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
Failure Reason:

Error reimaging machines: Expected smithi150's OS to be centos 8 but found centos 9

dead 7665423 2024-04-20 20:51:43 2024-04-20 20:51:51 2024-04-20 20:58:03 0:06:12 smithi main ubuntu 22.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} 2
Failure Reason:

Error reimaging machines: Expected smithi150's OS to be ubuntu 22.04 but found centos 9

dead 7665424 2024-04-20 20:51:44 2024-04-20 21:02:26 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/openfiletable} 2
Failure Reason:

Error reimaging machines: Failed to reconnect to smithi150

pass 7665425 2024-04-20 20:51:45 2024-04-20 20:51:51 2024-04-20 21:32:43 0:40:52 0:30:35 0:10:17 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/norstats}} 3
fail 7665426 2024-04-20 20:51:46 2024-04-20 21:27:38 1205 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/pool-perm} 2
Failure Reason:

Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

fail 7665427 2024-04-20 20:51:48 2024-04-20 22:00:26 2687 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/fsx}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi023 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a8c84f5997950fd7d008204839253291d52e6eda TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

dead 7665428 2024-04-20 20:51:49 2024-04-20 20:56:33 2024-04-20 21:07:36 0:11:03 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Error reimaging machines: Expected smithi150's OS to be ubuntu 22.04 but found centos 9

pass 7665429 2024-04-20 20:51:50 2024-04-20 20:58:14 2024-04-20 21:38:01 0:39:47 0:31:45 0:08:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/fs/test_o_trunc}} 3
pass 7665430 2024-04-20 20:51:51 2024-04-20 20:59:14 2024-04-20 21:28:37 0:29:23 0:17:22 0:12:01 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/quota} 2
dead 7665431 2024-04-20 20:51:53 2024-04-20 21:00:45 2024-04-20 21:09:08 0:08:23 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Error reimaging machines: Failed to reconnect to smithi150

dead 7665432 2024-04-20 20:51:54 2024-04-20 21:02:35 2024-04-20 21:16:29 0:13:54 smithi main centos 9.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_fsstress}} 2
Failure Reason:

SSH connection to smithi150 was lost: 'sudo rpm -ivh --oldpackage --replacefiles --replacepkgs /tmp/kernel.x86_64.rpm'

pass 7665433 2024-04-20 20:51:55 2024-04-20 21:11:48 2024-04-20 21:43:53 0:32:05 0:18:30 0:13:35 smithi main ubuntu 22.04 fs/32bits/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_fsstress} 2
pass 7665434 2024-04-20 20:51:56 2024-04-20 21:14:29 2024-04-20 21:44:00 0:29:31 0:19:36 0:09:55 smithi main ubuntu 22.04 fs/permission/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_misc} 2
pass 7665435 2024-04-20 20:51:58 2024-04-20 21:14:30 2024-04-20 22:03:35 0:49:05 0:36:17 0:12:48 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/iozone}} 3
pass 7665436 2024-04-20 20:51:59 2024-04-20 21:16:21 2024-04-20 21:39:45 0:23:24 0:08:55 0:14:29 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/recovery-fs} 2
fail 7665437 2024-04-20 20:52:00 2024-04-20 21:20:52 2024-04-20 22:24:58 1:04:06 0:54:26 0:09:40 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/3 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable calloc calloc _dl_check_map_versions

pass 7665438 2024-04-20 20:52:01 2024-04-20 21:21:22 2024-04-20 21:45:05 0:23:43 0:14:04 0:09:39 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
pass 7665439 2024-04-20 20:52:02 2024-04-20 21:22:03 2024-04-20 21:54:42 0:32:39 0:21:13 0:11:26 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/scrub} 2
fail 7665440 2024-04-20 20:52:04 2024-04-20 21:27:14 2024-04-20 23:23:10 1:55:56 1:42:58 0:12:58 smithi main ubuntu 22.04 fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate 3-modules} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down osd pg-warn pg_health} tasks/{0-client 1-tests/fscrypt-common}} 3
Failure Reason:

Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)

pass 7665441 2024-04-20 20:52:05 2024-04-20 21:27:34 2024-04-20 22:41:59 1:14:25 1:07:03 0:07:22 smithi main centos 9.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health} tasks/failover} 2
fail 7665442 2024-04-20 20:52:06 2024-04-20 21:28:45 2024-04-20 22:05:05 0:36:20 0:25:31 0:10:49 smithi main centos 9.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/workunit/snaps} 2
Failure Reason:

Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a8c84f5997950fd7d008204839253291d52e6eda TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/untar_snap_rm.sh'

pass 7665443 2024-04-20 20:52:07 2024-04-20 21:29:55 2024-04-20 23:35:04 2:05:09 1:55:38 0:09:31 smithi main centos 9.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/basic}} 2
fail 7665444 2024-04-20 20:52:08 2024-04-20 21:29:56 2024-04-20 22:23:08 0:53:12 0:41:44 0:11:28 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7665445 2024-04-20 20:52:09 2024-04-20 21:29:56 2024-04-20 22:19:37 0:49:41 0:41:09 0:08:32 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/blogbench}} 3
fail 7665446 2024-04-20 20:52:11 2024-04-20 21:29:57 2024-04-20 21:51:21 0:21:24 0:09:55 0:11:29 smithi main centos 9.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/{centos_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/libcephfs_python} 2
Failure Reason:

Found coredumps on ubuntu@smithi115.front.sepia.ceph.com

fail 7665447 2024-04-20 20:52:12 2024-04-20 21:29:57 2024-04-20 23:23:57 1:54:00 1:40:30 0:13:30 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/ffsb}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7665448 2024-04-20 20:52:13 2024-04-20 21:32:48 2024-04-20 22:10:57 0:38:09 0:30:24 0:07:45 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/fsstress}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7665449 2024-04-20 20:52:14 2024-04-20 21:33:38 2024-04-20 22:11:44 0:38:06 0:24:46 0:13:20 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/fsx}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi047 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a8c84f5997950fd7d008204839253291d52e6eda TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

pass 7665450 2024-04-20 20:52:15 2024-04-20 21:36:49 2024-04-20 22:27:15 0:50:26 0:42:30 0:07:56 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/iogen}} 3
pass 7665451 2024-04-20 20:52:16 2024-04-20 21:38:10 2024-04-20 22:30:49 0:52:39 0:27:14 0:25:25 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3