Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7682691 2024-05-01 09:56:47 2024-05-01 10:39:49 2024-05-01 11:29:07 0:49:18 0:38:47 0:10:31 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/fs/test_o_trunc}} 3
Failure Reason:

Command failed on smithi155 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

pass 7682692 2024-05-01 09:56:48 2024-05-01 10:39:49 2024-05-01 11:42:11 1:02:22 0:51:19 0:11:03 smithi main centos 9.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host overrides/{ignorelist_health pg_health}} 2
fail 7682693 2024-05-01 09:56:49 2024-05-01 10:39:50 2024-05-01 11:21:43 0:41:53 0:32:30 0:09:23 smithi main ubuntu 22.04 fs/full/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-node-1-mds-1-osd conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/mgr-osd-full} 1
Failure Reason:

Command failed (workunit test fs/full/subvolume_clone.sh) on smithi079 with status 110: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c52319bb8a48f903d15f6fe65002319a2e2ccf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/full/subvolume_clone.sh'

fail 7682694 2024-05-01 09:56:50 2024-05-01 10:39:50 2024-05-01 11:13:34 0:33:44 0:23:07 0:10:37 smithi main ubuntu 22.04 fs/mirror/{begin/{0-install 1-ceph 2-logrotate 3-modules} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/mirror} 1
Failure Reason:

Test failure: test_cephfs_mirror_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

fail 7682695 2024-05-01 09:56:52 2024-05-01 10:39:50 2024-05-01 11:33:45 0:53:55 0:44:09 0:09:46 smithi main ubuntu 22.04 fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate 3-modules} cephfs-mirror/{1-volume-create-rm 2-three-per-cluster} clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{ignorelist_health pg_health} supported-random-distro$/{ubuntu_latest} workloads/cephfs-mirror-ha-workunit} 1
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7682696 2024-05-01 09:56:53 2024-05-01 10:39:51 2024-05-01 11:21:35 0:41:44 0:30:36 0:11:08 smithi main ubuntu 22.04 fs/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} 1
Failure Reason:

Test failure: test_export_create_with_relative_pseudo_path_and_root_directory (tasks.cephfs.test_nfs.TestNFS)

fail 7682697 2024-05-01 09:56:54 2024-05-01 10:39:51 2024-05-01 11:22:47 0:42:56 0:29:41 0:13:15 smithi main centos 9.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate 3-modules} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/{ignorelist_health pg_health} tasks/mirror}} 1
Failure Reason:

Test failure: test_cephfs_mirror_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

fail 7682698 2024-05-01 09:56:55 2024-05-01 10:39:52 2024-05-01 11:14:25 0:34:33 0:21:35 0:12:58 smithi main centos 9.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/basic}} 2
Failure Reason:

Test failure: test_volume_info_pending_subvol_deletions (tasks.cephfs.test_volumes.TestVolumes)

fail 7682699 2024-05-01 09:56:57 2024-05-01 10:41:02 2024-05-01 11:54:10 1:13:08 1:00:38 0:12:30 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7682700 2024-05-01 09:56:58 2024-05-01 10:43:03 2024-05-01 11:46:23 1:03:20 0:52:54 0:10:26 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/blogbench}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

dead 7682701 2024-05-01 09:56:59 2024-05-01 10:43:13 2024-05-01 23:03:34 12:20:21 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/xfstests-dev} 2
Failure Reason:

hit max job timeout

fail 7682702 2024-05-01 09:57:00 2024-05-01 10:43:54 2024-05-01 11:36:10 0:52:16 0:39:42 0:12:34 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/ffsb}} 3
Failure Reason:

Command failed on smithi001 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682703 2024-05-01 09:57:02 2024-05-01 10:47:25 2024-05-01 11:44:33 0:57:08 0:45:28 0:11:40 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7682704 2024-05-01 09:57:03 2024-05-01 10:48:45 2024-05-01 11:39:28 0:50:43 0:38:51 0:11:52 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsstress}} 3
Failure Reason:

Command failed on smithi098 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682705 2024-05-01 09:57:04 2024-05-01 10:50:16 2024-05-01 11:49:48 0:59:32 0:49:52 0:09:40 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/3 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable calloc calloc _dl_check_map_versions

fail 7682706 2024-05-01 09:57:05 2024-05-01 10:50:16 2024-05-01 11:17:32 0:27:16 0:16:14 0:11:02 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/acls} 2
Failure Reason:

Command failed on smithi029 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'

fail 7682707 2024-05-01 09:57:07 2024-05-01 10:50:17 2024-05-01 11:09:23 0:19:06 0:12:01 0:07:05 smithi main centos 9.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/clone}} 2
Failure Reason:

Test failure: test_clone_failure_status_in_progress_cancelled (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)

dead 7682708 2024-05-01 09:57:08 2024-05-01 10:50:17 2024-05-01 22:59:31 12:09:14 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/admin} 2
Failure Reason:

hit max job timeout

pass 7682709 2024-05-01 09:57:09 2024-05-01 10:50:18 2024-05-01 12:03:56 1:13:38 1:07:25 0:06:13 smithi main centos 9.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health} tasks/failover} 2
fail 7682710 2024-05-01 09:57:10 2024-05-01 10:50:18 2024-05-01 11:39:34 0:49:16 0:39:32 0:09:44 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/iogen}} 3
Failure Reason:

Command failed on smithi026 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682711 2024-05-01 09:57:12 2024-05-01 10:50:39 2024-05-01 11:42:07 0:51:28 0:39:46 0:11:42 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} 3
Failure Reason:

Command failed on smithi040 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682712 2024-05-01 09:57:13 2024-05-01 10:50:49 2024-05-01 11:34:02 0:43:13 0:35:53 0:07:20 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/direct_io}} 3
Failure Reason:

Command failed on smithi070 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682713 2024-05-01 09:57:14 2024-05-01 10:50:49 2024-05-01 11:42:22 0:51:33 0:41:16 0:10:17 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/kernel_untar_build}} 3
Failure Reason:

Command failed on smithi028 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682714 2024-05-01 09:57:15 2024-05-01 10:51:00 2024-05-01 11:51:20 1:00:20 0:46:05 0:14:15 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7682715 2024-05-01 09:57:16 2024-05-01 10:53:41 2024-05-01 11:45:51 0:52:10 0:38:46 0:13:24 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi046 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c52319bb8a48f903d15f6fe65002319a2e2ccf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

fail 7682716 2024-05-01 09:57:18 2024-05-01 10:56:02 2024-05-01 11:51:23 0:55:21 0:41:28 0:13:53 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/ffsb}} 3
Failure Reason:

Command failed on smithi066 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682717 2024-05-01 09:57:19 2024-05-01 10:57:32 2024-05-01 11:39:19 0:41:47 0:35:16 0:06:31 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/fsstress}} 3
Failure Reason:

Command failed on smithi007 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682718 2024-05-01 09:57:20 2024-05-01 10:58:13 2024-05-01 11:23:32 0:25:19 0:14:08 0:11:11 smithi main centos 9.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/misc}} 2
Failure Reason:

Test failure: test_connection_expiration (tasks.cephfs.test_volumes.TestMisc)

fail 7682719 2024-05-01 09:57:21 2024-05-01 10:58:23 2024-05-01 12:08:17 1:09:54 1:02:18 0:07:36 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/iogen}} 3
Failure Reason:

"2024-05-01T11:34:43.542500+0000 mds.b (mds.0) 109 : cluster [WRN] client.24832 does not advance its oldest_flush_tid (542656), 100001 completed flushes recorded in session" in cluster log

fail 7682720 2024-05-01 09:57:23 2024-05-01 10:59:14 2024-05-01 11:55:08 0:55:54 0:45:08 0:10:46 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7682721 2024-05-01 09:57:24 2024-05-01 10:59:14 2024-05-01 11:24:12 0:24:58 0:17:53 0:07:05 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/forward-scrub} 2
Failure Reason:

"2024-05-01T11:17:30.488136+0000 mds.a (mds.0) 1 : cluster [ERR] dir 0x10000000000 object missing on disk; some files may be lost (/dir)" in cluster log

fail 7682722 2024-05-01 09:57:25 2024-05-01 10:59:14 2024-05-01 11:50:42 0:51:28 0:41:19 0:10:09 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/postgres}} 3
Failure Reason:

Command failed on smithi043 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682723 2024-05-01 09:57:26 2024-05-01 10:59:25 2024-05-01 14:25:24 3:25:59 3:11:51 0:14:08 smithi main centos 9.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/{centos_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/libcephfs/{frag test}} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi116 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c52319bb8a48f903d15f6fe65002319a2e2ccf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

fail 7682724 2024-05-01 09:57:27 2024-05-01 11:04:16 2024-05-01 11:32:00 0:27:44 0:13:55 0:13:49 smithi main ubuntu 22.04 fs/shell/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/ubuntu_latest mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/cephfs-shell} 2
Failure Reason:

Test failure: test_cd_with_args (tasks.cephfs.test_cephfs_shell.TestCD)

fail 7682725 2024-05-01 09:57:29 2024-05-01 11:06:07 2024-05-01 12:14:06 1:07:59 0:54:02 0:13:57 smithi main ubuntu 22.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu/{overrides ubuntu_latest}} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/1 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable calloc calloc _dl_check_map_versions

fail 7682726 2024-05-01 09:57:30 2024-05-01 11:08:27 2024-05-01 11:53:32 0:45:05 0:35:33 0:09:32 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed on smithi037 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

dead 7682727 2024-05-01 09:57:31 2024-05-01 11:10:28 2024-05-01 23:20:21 12:09:53 smithi main centos 9.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/snapshot}} 2
Failure Reason:

hit max job timeout

fail 7682728 2024-05-01 09:57:32 2024-05-01 11:11:29 2024-05-01 12:05:15 0:53:46 0:40:43 0:13:03 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/ffsb}} 3
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi018 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c52319bb8a48f903d15f6fe65002319a2e2ccf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

fail 7682729 2024-05-01 09:57:33 2024-05-01 11:14:19 2024-05-01 12:00:05 0:45:46 0:35:22 0:10:24 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/fsx}} 3
Failure Reason:

Command failed on smithi047 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682730 2024-05-01 09:57:35 2024-05-01 11:17:10 2024-05-01 12:10:53 0:53:43 0:40:25 0:13:18 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/fs/test_o_trunc}} 3
Failure Reason:

Command failed on smithi081 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682731 2024-05-01 09:57:36 2024-05-01 11:19:11 2024-05-01 12:16:36 0:57:25 0:44:30 0:12:55 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7682732 2024-05-01 09:57:37 2024-05-01 11:21:42 2024-05-01 13:23:03 2:01:21 1:50:30 0:10:51 smithi main centos 9.stream fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate 3-modules} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down osd pg-warn pg_health} tasks/{0-client 1-tests/fscrypt-common}} 3
Failure Reason:

Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)

fail 7682733 2024-05-01 09:57:38 2024-05-01 11:21:42 2024-05-01 11:55:20 0:33:38 0:22:20 0:11:18 smithi main ubuntu 22.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/basic}} 2
Failure Reason:

Test failure: test_volume_info_pending_subvol_deletions (tasks.cephfs.test_volumes.TestVolumes)

fail 7682734 2024-05-01 09:57:39 2024-05-01 11:21:42 2024-05-01 11:46:48 0:25:06 0:14:19 0:10:47 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/quiesce} 2
Failure Reason:

Test failure: test_quiesce_find (tasks.cephfs.test_quiesce.TestQuiesce)

fail 7682735 2024-05-01 09:57:41 2024-05-01 11:21:43 2024-05-01 12:29:07 1:07:24 1:00:40 0:06:44 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7682736 2024-05-01 09:57:42 2024-05-01 11:21:44 2024-05-01 12:15:47 0:54:03 0:39:10 0:14:53 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/postgres}} 3
Failure Reason:

Command failed on smithi049 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7682737 2024-05-01 09:57:43 2024-05-01 11:25:05 2024-05-01 12:06:25 0:41:20 0:20:57 0:20:23 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

No module named 'tasks.cephfs.fuse_mount'

fail 7682738 2024-05-01 09:57:44 2024-05-01 11:37:15 2024-05-01 15:02:50 3:25:35 3:13:41 0:11:54 smithi main centos 9.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/{centos_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/libcephfs_python} 2
Failure Reason:

Command failed (workunit test fs/test_python.sh) on smithi001 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c52319bb8a48f903d15f6fe65002319a2e2ccf86 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/test_python.sh'

fail 7682739 2024-05-01 09:57:46 2024-05-01 11:37:16 2024-05-01 12:08:17 0:31:01 0:20:44 0:10:17 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

No module named 'tasks.fs'

fail 7682740 2024-05-01 09:57:47 2024-05-01 11:37:16 2024-05-01 12:02:47 0:25:31 0:16:12 0:09:19 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/sessionmap} 2
Failure Reason:

Test failure: test_session_reject (tasks.cephfs.test_sessionmap.TestSessionMap)

fail 7682741 2024-05-01 09:57:48 2024-05-01 11:37:17 2024-05-01 12:04:58 0:27:41 0:17:36 0:10:05 smithi main ubuntu 22.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/clone}} 2
Failure Reason:

Test failure: test_clone_failure_status_in_progress_cancelled (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)

pass 7682742 2024-05-01 09:57:49 2024-05-01 11:37:27 2024-05-01 13:00:32 1:23:05 1:13:51 0:09:14 smithi main centos 9.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health} tasks/failover} 2
fail 7682743 2024-05-01 09:57:50 2024-05-01 11:37:27 2024-05-01 12:08:43 0:31:16 0:21:17 0:09:59 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/direct_io}} 3
Failure Reason:

No module named 'tasks.cephfs.fuse_mount'

fail 7682744 2024-05-01 09:57:51 2024-05-01 11:37:28 2024-05-01 12:02:43 0:25:15 0:07:56 0:17:19 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/fs/misc}} 3
Failure Reason:

No module named 'tasks.cephadm'

fail 7682745 2024-05-01 09:57:53 2024-05-01 11:52:49 2024-05-01 12:09:44 0:16:55 0:06:02 0:10:53 smithi main centos 9.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/5 tasks/dbench validater/valgrind} 2
Failure Reason:

Command failed on smithi143 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-fuse cephfs-java libcephfs_jni1 libcephfs1 librados2 librbd1 python-ceph rbd-fuse ceph-debuginfo python3-cephfs cephfs-shell cephfs-top cephfs-mirror bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib'

fail 7682746 2024-05-01 09:57:54 2024-05-01 11:52:50 2024-05-01 12:07:42 0:14:52 0:06:10 0:08:42 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

Command failed on smithi016 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-fuse cephfs-java libcephfs_jni1 libcephfs1 librados2 librbd1 python-ceph rbd-fuse python3-cephfs cephfs-shell cephfs-top cephfs-mirror bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel postgresql postgresql-server postgresql-contrib'