Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 6804734 2022-04-25 07:47:48 2022-04-25 08:44:04 2022-04-25 09:07:16 0:23:12 0:17:27 0:05:45 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
fail 6804735 2022-04-25 07:47:48 2022-04-25 08:44:04 2022-04-25 09:06:23 0:22:19 0:11:01 0:11:18 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi103 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804736 2022-04-25 07:47:49 2022-04-25 08:44:54 2022-04-25 09:05:46 0:20:52 0:08:26 0:12:26 smithi master ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/alternate-pool} 2
Failure Reason:

"2022-04-25T09:03:57.946612+0000 mon.a (mon.0) 194 : cluster [WRN] Health check failed: Degraded data redundancy: 10/48 objects degraded (20.833%), 5 pgs degraded (PG_DEGRADED)" in cluster log

pass 6804737 2022-04-25 07:47:50 2022-04-25 08:45:45 2022-04-25 10:01:42 1:15:57 1:06:27 0:09:30 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} 3
fail 6804738 2022-04-25 07:47:51 2022-04-25 08:46:35 2022-04-25 09:22:22 0:35:47 0:25:13 0:10:34 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804739 2022-04-25 07:47:52 2022-04-25 08:47:16 2022-04-25 09:03:52 0:16:36 0:08:39 0:07:57 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi007 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804740 2022-04-25 07:47:52 2022-04-25 08:47:56 2022-04-25 09:23:11 0:35:15 0:23:47 0:11:28 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/pacific}} 3
Failure Reason:

Extra data: line 2 column 556 (char 556)

pass 6804741 2022-04-25 07:47:53 2022-04-25 08:49:37 2022-04-25 10:13:10 1:23:33 1:10:17 0:13:16 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
fail 6804742 2022-04-25 07:47:54 2022-04-25 08:50:47 2022-04-25 09:29:05 0:38:18 0:26:28 0:11:50 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804743 2022-04-25 07:47:55 2022-04-25 08:56:08 2022-04-25 09:15:00 0:18:52 0:08:27 0:10:25 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed on smithi016 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804744 2022-04-25 07:47:55 2022-04-25 08:56:09 2022-04-25 09:25:43 0:29:34 0:19:54 0:09:40 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/asok_dump_tree} 2
pass 6804745 2022-04-25 07:47:56 2022-04-25 08:58:39 2022-04-25 09:29:16 0:30:37 0:22:21 0:08:16 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
fail 6804746 2022-04-25 07:47:57 2022-04-25 09:00:10 2022-04-25 09:32:28 0:32:18 0:22:06 0:10:12 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804747 2022-04-25 07:47:58 2022-04-25 09:00:30 2022-04-25 09:43:28 0:42:58 0:32:31 0:10:27 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804748 2022-04-25 07:47:59 2022-04-25 09:02:51 2022-04-25 09:34:07 0:31:16 0:22:07 0:09:09 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804749 2022-04-25 07:47:59 2022-04-25 09:02:51 2022-04-25 09:18:52 0:16:01 0:08:36 0:07:25 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi019 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804750 2022-04-25 07:48:00 2022-04-25 09:04:02 2022-04-25 09:21:42 0:17:40 0:07:30 0:10:10 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/auto-repair} 2
Failure Reason:

Command failed on smithi007 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804751 2022-04-25 07:48:01 2022-04-25 09:04:02 2022-04-25 09:27:34 0:23:32 0:14:03 0:09:29 smithi master centos 8.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
fail 6804752 2022-04-25 07:48:02 2022-04-25 09:04:23 2022-04-25 09:23:04 0:18:41 0:10:21 0:08:20 smithi master centos 8.stream fs/bugs/client_trim_caps/{begin/{0-install 1-ceph 2-logrotate} centos_latest clusters/small-cluster conf/{client mds mon osd} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/trim-i24137} 1
Failure Reason:

SELinux denials found on ubuntu@smithi006.front.sepia.ceph.com: ['type=AVC msg=audit(1650877890.603:203): avc: denied { node_bind } for pid=1527 comm="ping" saddr=172.21.15.6 scontext=system_u:system_r:ping_t:s0 tcontext=system_u:object_r:node_t:s0 tclass=icmp_socket permissive=1']

pass 6804753 2022-04-25 07:48:03 2022-04-25 09:04:23 2022-04-25 10:00:23 0:56:00 0:45:28 0:10:32 smithi master centos 8.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
fail 6804754 2022-04-25 07:48:03 2022-04-25 09:04:23 2022-04-25 09:27:46 0:23:23 0:13:06 0:10:17 smithi master ubuntu 20.04 fs/full/{begin/{0-install 1-ceph 2-logrotate} clusters/1-node-1-mds-1-osd conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mgr-osd-full} 1
Failure Reason:

Command failed (workunit test fs/full/subvolume_rm.sh) on smithi192 with status 110: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9bc2ff4611c3c4e3862566d39262fbe7a8b33b2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/full/subvolume_rm.sh'

fail 6804755 2022-04-25 07:48:04 2022-04-25 09:04:33 2022-04-25 09:26:08 0:21:35 0:10:16 0:11:19 smithi master ubuntu 20.04 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_latest} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client} 2
Failure Reason:

Command failed (workunit test client/test.sh) on smithi080 with status 139: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9bc2ff4611c3c4e3862566d39262fbe7a8b33b2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/client/test.sh'

fail 6804756 2022-04-25 07:48:05 2022-04-25 09:04:34 2022-04-25 09:27:29 0:22:55 0:11:10 0:11:45 smithi master centos 8.stream fs/mirror/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distros$/{centos_8} tasks/mirror} 1
Failure Reason:

Test failure: test_add_directory_path_normalization (tasks.cephfs.test_mirroring.TestMirroring)

fail 6804757 2022-04-25 07:48:06 2022-04-25 09:05:54 2022-04-25 09:31:07 0:25:13 0:15:11 0:10:02 smithi master ubuntu 20.04 fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distro$/{ubuntu_latest} workloads/cephfs-mirror-ha-workunit} 1
Failure Reason:

Command failed (workunit test fs/cephfs_mirror_ha_gen.sh) on smithi124 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/mirror && cd -- /home/ubuntu/cephtest/mnt.1/mirror && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9bc2ff4611c3c4e3862566d39262fbe7a8b33b2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.1/qa/workunits/fs/cephfs_mirror_ha_gen.sh'

fail 6804758 2022-04-25 07:48:06 2022-04-25 09:05:55 2022-04-25 09:21:55 0:16:00 0:08:31 0:07:29 smithi master rhel 8.5 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_dbench_iozone} 2
Failure Reason:

Command failed on smithi103 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

dead 6804759 2022-04-25 07:48:07 2022-04-25 09:06:25 2022-04-25 15:50:36 6:44:11 smithi master ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
Failure Reason:

hit max job timeout

fail 6804760 2022-04-25 07:48:08 2022-04-25 09:11:56 2022-04-25 09:37:35 0:25:39 0:13:56 0:11:43 smithi master ubuntu 20.04 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/multifs-auth} 2
Failure Reason:

Test failure: test_mount_mon_and_osd_caps_present_mds_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)

pass 6804761 2022-04-25 07:48:09 2022-04-25 09:11:56 2022-04-25 09:36:22 0:24:26 0:11:56 0:12:30 smithi master ubuntu 20.04 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
fail 6804762 2022-04-25 07:48:10 2022-04-25 09:14:37 2022-04-25 09:35:46 0:21:09 0:10:52 0:10:17 smithi master ubuntu 20.04 fs/shell/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/cephfs-shell} 2
Failure Reason:

Test failure: test_cd_with_no_args (tasks.cephfs.test_cephfs_shell.TestCD)

pass 6804763 2022-04-25 07:48:11 2022-04-25 09:15:07 2022-04-25 10:22:21 1:07:14 0:56:15 0:10:59 smithi master rhel 8.5 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
pass 6804764 2022-04-25 07:48:11 2022-04-25 09:17:28 2022-04-25 10:16:14 0:58:46 0:48:27 0:10:19 smithi master ubuntu 20.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_snaptests}} 2
pass 6804765 2022-04-25 07:48:12 2022-04-25 09:18:59 2022-04-25 09:42:20 0:23:21 0:13:05 0:10:16 smithi master rhel 8.5 fs/top/{begin/{0-install 1-ceph 2-logrotate} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/whitelist_health supported-random-distros$/{rhel_8} tasks/fstop} 1
pass 6804766 2022-04-25 07:48:13 2022-04-25 09:21:39 2022-04-25 09:48:32 0:26:53 0:18:42 0:08:11 smithi master rhel 8.5 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} 2
pass 6804767 2022-04-25 07:48:14 2022-04-25 09:21:40 2022-04-25 10:03:42 0:42:02 0:30:52 0:11:10 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
fail 6804768 2022-04-25 07:48:15 2022-04-25 09:22:00 2022-04-25 09:52:12 0:30:12 0:21:36 0:08:36 smithi master centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
Failure Reason:

Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.TestMirroring)

pass 6804769 2022-04-25 07:48:15 2022-04-25 09:22:00 2022-04-25 10:21:31 0:59:31 0:50:05 0:09:26 smithi master centos 8.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/dbench validater/valgrind} 2
fail 6804770 2022-04-25 07:48:16 2022-04-25 09:22:31 2022-04-25 09:52:32 0:30:01 0:20:52 0:09:09 smithi master rhel 8.5 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/basic}} 2
Failure Reason:

Test failure: test_default_uid_gid_subvolume_group (tasks.cephfs.test_volumes.TestSubvolumeGroups)

fail 6804771 2022-04-25 07:48:17 2022-04-25 09:23:21 2022-04-25 10:09:03 0:45:42 0:33:10 0:12:32 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{no}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi025 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub start / force,recursive'

pass 6804772 2022-04-25 07:48:18 2022-04-25 09:25:42 2022-04-25 11:33:20 2:07:38 1:58:19 0:09:19 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
fail 6804773 2022-04-25 07:48:19 2022-04-25 09:25:52 2022-04-25 10:01:11 0:35:19 0:25:26 0:09:53 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804774 2022-04-25 07:48:19 2022-04-25 09:26:13 2022-04-25 10:09:19 0:43:06 0:34:38 0:08:28 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804775 2022-04-25 07:48:20 2022-04-25 09:27:43 2022-04-25 09:50:52 0:23:09 0:11:20 0:11:49 smithi master ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/backtrace} 2
Failure Reason:

Command failed on smithi006 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.0 --id 0)"

fail 6804776 2022-04-25 07:48:21 2022-04-25 09:27:53 2022-04-25 09:53:47 0:25:54 0:18:54 0:07:00 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

fail 6804777 2022-04-25 07:48:22 2022-04-25 09:29:14 2022-04-25 09:48:54 0:19:40 0:07:42 0:11:58 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi008 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804778 2022-04-25 07:48:22 2022-04-25 09:29:24 2022-04-25 10:06:07 0:36:43 0:24:58 0:11:45 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804779 2022-04-25 07:48:23 2022-04-25 09:31:15 2022-04-25 10:15:27 0:44:12 0:31:25 0:12:47 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804780 2022-04-25 07:48:24 2022-04-25 09:34:16 2022-04-25 09:50:29 0:16:13 0:08:42 0:07:31 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi016 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804781 2022-04-25 07:48:25 2022-04-25 09:35:56 2022-04-25 09:51:53 0:15:57 0:08:29 0:07:28 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/cap-flush} 2
Failure Reason:

Command failed on smithi155 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804782 2022-04-25 07:48:25 2022-04-25 09:36:27 2022-04-25 10:22:40 0:46:13 0:36:16 0:09:57 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6804783 2022-04-25 07:48:26 2022-04-25 09:37:37 2022-04-25 10:18:19 0:40:42 0:24:38 0:16:04 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804784 2022-04-25 07:48:27 2022-04-25 09:42:28 2022-04-25 11:04:19 1:21:51 1:10:56 0:10:55 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
pass 6804785 2022-04-25 07:48:28 2022-04-25 09:43:38 2022-04-25 10:33:45 0:50:07 0:35:31 0:14:36 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
pass 6804786 2022-04-25 07:48:29 2022-04-25 09:48:40 2022-04-25 10:13:53 0:25:13 0:17:59 0:07:14 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
fail 6804787 2022-04-25 07:48:29 2022-04-25 09:49:00 2022-04-25 10:09:41 0:20:41 0:08:06 0:12:35 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi093 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804788 2022-04-25 07:48:30 2022-04-25 09:50:20 2022-04-25 10:21:01 0:30:41 0:22:33 0:08:08 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-limits} 2
fail 6804789 2022-04-25 07:48:31 2022-04-25 09:50:31 2022-04-25 10:24:47 0:34:16 0:26:43 0:07:33 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804790 2022-04-25 07:48:32 2022-04-25 09:51:41 2022-04-25 10:17:33 0:25:52 0:15:07 0:10:45 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
fail 6804791 2022-04-25 07:48:33 2022-04-25 09:52:02 2022-04-25 10:07:03 0:15:01 0:08:43 0:06:18 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi007 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804792 2022-04-25 07:48:33 2022-04-25 09:52:22 2022-04-25 11:32:06 1:39:44 1:29:13 0:10:31 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
fail 6804793 2022-04-25 07:48:34 2022-04-25 09:52:33 2022-04-25 10:34:22 0:41:49 0:31:33 0:10:16 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804794 2022-04-25 07:48:35 2022-04-25 09:53:23 2022-04-25 10:10:14 0:16:51 0:09:15 0:07:36 smithi master centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_latest conf/{client mds mon osd} no-mds-cluster overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-upgrade}} 1
Failure Reason:

SELinux denials found on ubuntu@smithi071.front.sepia.ceph.com: ['type=AVC msg=audit(1650880844.362:202): avc: denied { node_bind } for pid=1545 comm="ping" saddr=172.21.15.71 scontext=system_u:system_r:ping_t:s0 tcontext=system_u:object_r:node_t:s0 tclass=icmp_socket permissive=1']

fail 6804795 2022-04-25 07:48:36 2022-04-25 09:53:24 2022-04-25 10:14:14 0:20:50 0:11:33 0:09:17 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed on smithi005 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804796 2022-04-25 07:48:37 2022-04-25 09:53:54 2022-04-25 10:14:05 0:20:11 0:07:38 0:12:33 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-readahead} 2
Failure Reason:

Command failed on smithi022 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804797 2022-04-25 07:48:37 2022-04-25 09:57:15 2022-04-25 10:26:34 0:29:19 0:20:32 0:08:47 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
fail 6804798 2022-04-25 07:48:38 2022-04-25 09:59:15 2022-04-25 10:18:22 0:19:07 0:07:10 0:11:57 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi066 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804799 2022-04-25 07:48:39 2022-04-25 10:00:26 2022-04-25 10:40:14 0:39:48 0:29:40 0:10:08 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi080 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub start / force,recursive'

fail 6804800 2022-04-25 07:48:40 2022-04-25 10:01:16 2022-04-25 10:32:52 0:31:36 0:21:37 0:09:59 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804801 2022-04-25 07:48:41 2022-04-25 10:01:47 2022-04-25 10:19:40 0:17:53 0:08:36 0:09:17 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi110 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804802 2022-04-25 07:48:41 2022-04-25 10:03:47 2022-04-25 10:39:21 0:35:34 0:29:16 0:06:18 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-recovery} 2
pass 6804803 2022-04-25 07:48:42 2022-04-25 10:03:48 2022-04-25 10:41:00 0:37:12 0:26:27 0:10:45 smithi master rhel 8.5 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} 2
fail 6804804 2022-04-25 07:48:43 2022-04-25 10:06:08 2022-04-25 10:48:36 0:42:28 0:32:15 0:10:13 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804805 2022-04-25 07:48:44 2022-04-25 10:09:09 2022-04-25 11:45:52 1:36:43 1:27:46 0:08:57 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
fail 6804806 2022-04-25 07:48:45 2022-04-25 10:09:09 2022-04-25 10:32:56 0:23:47 0:14:27 0:09:20 smithi master centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
Failure Reason:

Extra data: line 2 column 475 (char 475)

fail 6804807 2022-04-25 07:48:46 2022-04-25 10:09:30 2022-04-25 10:30:00 0:20:30 0:08:32 0:11:58 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed on smithi093 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804808 2022-04-25 07:48:46 2022-04-25 10:09:50 2022-04-25 10:33:15 0:23:25 0:11:15 0:12:10 smithi master rhel 8.5 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed on smithi174 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804809 2022-04-25 07:48:47 2022-04-25 10:12:21 2022-04-25 10:39:00 0:26:39 0:15:58 0:10:41 smithi master rhel 8.5 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed on smithi049 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.0 --id 0)"

fail 6804810 2022-04-25 07:48:48 2022-04-25 10:13:11 2022-04-25 10:55:09 0:41:58 0:31:07 0:10:51 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804811 2022-04-25 07:48:49 2022-04-25 10:14:02 2022-04-25 10:40:55 0:26:53 0:20:44 0:06:09 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
fail 6804812 2022-04-25 07:48:49 2022-04-25 10:14:12 2022-04-25 10:32:41 0:18:29 0:10:14 0:08:15 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/damage} 2
Failure Reason:

Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)

fail 6804813 2022-04-25 07:48:50 2022-04-25 10:14:23 2022-04-25 10:38:40 0:24:17 0:13:52 0:10:25 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason:

SELinux denials found on ubuntu@smithi154.front.sepia.ceph.com: ['type=AVC msg=audit(1650882202.512:191): avc: denied { node_bind } for pid=1528 comm="ping" saddr=172.21.15.154 scontext=system_u:system_r:ping_t:s0 tcontext=system_u:object_r:node_t:s0 tclass=icmp_socket permissive=1']

fail 6804814 2022-04-25 07:48:51 2022-04-25 10:15:33 2022-04-25 10:51:19 0:35:46 0:25:23 0:10:23 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804815 2022-04-25 07:48:52 2022-04-25 10:16:24 2022-04-25 10:56:51 0:40:27 0:29:21 0:11:06 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} 3
pass 6804816 2022-04-25 07:48:53 2022-04-25 10:17:34 2022-04-25 10:39:29 0:21:55 0:13:46 0:08:09 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
fail 6804817 2022-04-25 07:48:54 2022-04-25 10:18:15 2022-04-25 10:44:15 0:26:00 0:14:34 0:11:26 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/data-scan} 2
Failure Reason:

Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)

pass 6804818 2022-04-25 07:48:54 2022-04-25 10:18:25 2022-04-25 12:26:19 2:07:54 1:58:08 0:09:46 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
fail 6804819 2022-04-25 07:48:55 2022-04-25 10:18:26 2022-04-25 10:50:28 0:32:02 0:21:53 0:10:09 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804820 2022-04-25 07:48:56 2022-04-25 10:19:36 2022-04-25 10:54:12 0:34:36 0:26:52 0:07:44 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

dead 6804821 2022-04-25 07:48:57 2022-04-25 10:21:07 2022-04-25 16:59:11 6:38:04 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6804822 2022-04-25 07:48:58 2022-04-25 10:21:07 2022-04-25 10:46:43 0:25:36 0:18:50 0:06:46 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

pass 6804823 2022-04-25 07:48:59 2022-04-25 10:21:37 2022-04-25 10:43:50 0:22:13 0:10:55 0:11:18 smithi master ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/ior-shared-file} 4
fail 6804824 2022-04-25 07:48:59 2022-04-25 10:22:28 2022-04-25 10:37:39 0:15:11 0:08:46 0:06:25 smithi master rhel 8.5 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/failover} 2
Failure Reason:

Command failed on smithi047 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804825 2022-04-25 07:49:00 2022-04-25 10:22:38 2022-04-25 11:12:15 0:49:37 0:43:01 0:06:36 smithi master rhel 8.5 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
fail 6804826 2022-04-25 07:49:01 2022-04-25 10:22:49 2022-04-25 10:45:27 0:22:38 0:11:40 0:10:58 smithi master rhel 8.5 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/fsstress validater/lockdep} 2
Failure Reason:

Command failed on smithi029 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804827 2022-04-25 07:49:02 2022-04-25 10:24:29 2022-04-25 10:41:45 0:17:16 0:10:08 0:07:08 smithi master rhel 8.5 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/clone}} 2
Failure Reason:

Test failure: test_clone_failure_status_failed (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)

fail 6804828 2022-04-25 07:49:02 2022-04-25 10:24:40 2022-04-25 10:57:37 0:32:57 0:25:22 0:07:35 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804829 2022-04-25 07:49:03 2022-04-25 10:24:50 2022-04-25 10:39:31 0:14:41 0:08:33 0:06:08 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/exports} 2
Failure Reason:

Command failed on smithi085 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804830 2022-04-25 07:49:04 2022-04-25 10:24:50 2022-04-25 11:11:22 0:46:32 0:31:15 0:15:17 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804831 2022-04-25 07:49:05 2022-04-25 10:30:01 2022-04-25 11:03:19 0:33:18 0:21:44 0:11:34 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

fail 6804832 2022-04-25 07:49:06 2022-04-25 10:32:52 2022-04-25 10:47:53 0:15:01 0:08:30 0:06:31 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi082 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804833 2022-04-25 07:49:06 2022-04-25 10:33:02 2022-04-25 11:20:16 0:47:14 0:40:13 0:07:01 smithi master rhel 8.5 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_dbench_iozone} 2
pass 6804834 2022-04-25 07:49:07 2022-04-25 10:33:23 2022-04-25 11:00:59 0:27:36 0:18:34 0:09:02 smithi master centos 8.stream fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_blogbench traceless/50pc} 2
fail 6804835 2022-04-25 07:49:08 2022-04-25 10:33:53 2022-04-25 11:07:00 0:33:07 0:24:40 0:08:27 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804836 2022-04-25 07:49:09 2022-04-25 10:33:54 2022-04-25 11:13:17 0:39:23 0:28:56 0:10:27 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi105 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub status'

fail 6804837 2022-04-25 07:49:10 2022-04-25 10:34:24 2022-04-25 11:12:52 0:38:28 0:25:45 0:12:43 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804838 2022-04-25 07:49:11 2022-04-25 10:37:45 2022-04-25 11:02:22 0:24:37 0:13:20 0:11:17 smithi master ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/forward-scrub} 2
Failure Reason:

Test failure: test_backtrace_repair (tasks.cephfs.test_forward_scrub.TestForwardScrub)

pass 6804839 2022-04-25 07:49:11 2022-04-25 10:38:45 2022-04-25 11:07:01 0:28:16 0:20:24 0:07:52 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
fail 6804840 2022-04-25 07:49:12 2022-04-25 10:39:06 2022-04-25 11:12:53 0:33:47 0:24:14 0:09:33 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/pacific}} 3
Failure Reason:

Extra data: line 2 column 557 (char 557)

fail 6804841 2022-04-25 07:49:13 2022-04-25 10:39:36 2022-04-25 10:59:40 0:20:04 0:08:03 0:12:01 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi103 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804842 2022-04-25 07:49:14 2022-04-25 10:39:37 2022-04-25 11:19:14 0:39:37 0:32:11 0:07:26 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804843 2022-04-25 07:49:15 2022-04-25 10:40:17 2022-04-25 11:09:57 0:29:40 0:18:49 0:10:51 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

fail 6804844 2022-04-25 07:49:15 2022-04-25 10:40:58 2022-04-25 10:56:06 0:15:08 0:08:29 0:06:39 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi124 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804845 2022-04-25 07:49:16 2022-04-25 10:41:08 2022-04-25 10:59:01 0:17:53 0:07:30 0:10:23 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/fragment} 2
Failure Reason:

Command failed on smithi035 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804846 2022-04-25 07:49:17 2022-04-25 10:41:49 2022-04-25 11:03:04 0:21:15 0:13:19 0:07:56 smithi master rhel 8.5 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_pjd}} 2
fail 6804847 2022-04-25 07:49:18 2022-04-25 10:43:59 2022-04-25 11:24:52 0:40:53 0:32:19 0:08:34 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804848 2022-04-25 07:49:19 2022-04-25 10:44:00 2022-04-25 11:20:12 0:36:12 0:25:43 0:10:29 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804849 2022-04-25 07:49:19 2022-04-25 10:44:20 2022-04-25 11:16:28 0:32:08 0:22:03 0:10:05 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804850 2022-04-25 07:49:20 2022-04-25 10:45:30 2022-04-25 12:08:51 1:23:21 1:12:06 0:11:15 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} 3
fail 6804851 2022-04-25 07:49:21 2022-04-25 10:47:31 2022-04-25 11:02:58 0:15:27 0:08:47 0:06:40 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi082 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804852 2022-04-25 07:49:22 2022-04-25 10:48:02 2022-04-25 11:32:02 0:44:00 0:34:18 0:09:42 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 6804853 2022-04-25 07:49:22 2022-04-25 10:48:42 2022-04-25 11:12:07 0:23:25 0:16:29 0:06:56 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/journal-repair} 2
fail 6804854 2022-04-25 07:49:23 2022-04-25 10:48:42 2022-04-25 11:23:21 0:34:39 0:24:23 0:10:16 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804855 2022-04-25 07:49:24 2022-04-25 10:50:33 2022-04-25 11:18:36 0:28:03 0:16:56 0:11:07 smithi master ubuntu 20.04 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
fail 6804856 2022-04-25 07:49:25 2022-04-25 10:51:23 2022-04-25 11:16:33 0:25:10 0:13:28 0:11:42 smithi master rhel 8.5 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/ino_release_cb} 2
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_ino_release_cb'

pass 6804857 2022-04-25 07:49:26 2022-04-25 10:54:14 2022-04-25 11:20:00 0:25:46 0:15:32 0:10:14 smithi master rhel 8.5 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
fail 6804858 2022-04-25 07:49:27 2022-04-25 10:54:55 2022-04-25 11:25:57 0:31:02 0:22:29 0:08:33 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804859 2022-04-25 07:49:27 2022-04-25 10:55:15 2022-04-25 11:28:10 0:32:55 0:26:34 0:06:21 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804860 2022-04-25 07:49:28 2022-04-25 10:55:15 2022-04-25 11:15:07 0:19:52 0:13:08 0:06:44 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
fail 6804861 2022-04-25 07:49:29 2022-04-25 10:56:16 2022-04-25 17:19:07 6:22:51 6:10:50 0:12:01 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi155 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9bc2ff4611c3c4e3862566d39262fbe7a8b33b2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/untar_snap_rm.sh'

pass 6804862 2022-04-25 07:49:30 2022-04-25 10:56:56 2022-04-25 11:16:32 0:19:36 0:10:17 0:09:19 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds-flush} 2
fail 6804863 2022-04-25 07:49:31 2022-04-25 10:57:47 2022-04-25 11:30:05 0:32:18 0:22:04 0:10:14 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804864 2022-04-25 07:49:31 2022-04-25 10:59:07 2022-04-25 11:41:14 0:42:07 0:31:22 0:10:45 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804865 2022-04-25 07:49:32 2022-04-25 10:59:48 2022-04-25 11:16:13 0:16:25 0:08:27 0:07:58 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi120 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804866 2022-04-25 07:49:33 2022-04-25 11:01:08 2022-04-25 11:37:34 0:36:26 0:26:50 0:09:36 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
pass 6804867 2022-04-25 07:49:34 2022-04-25 11:02:29 2022-04-25 11:24:34 0:22:05 0:11:19 0:10:46 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
pass 6804868 2022-04-25 07:49:35 2022-04-25 11:03:09 2022-04-25 11:34:08 0:30:59 0:21:23 0:09:36 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds-full} 2
fail 6804869 2022-04-25 07:49:35 2022-04-25 11:03:09 2022-04-25 12:38:18 1:35:09 1:26:58 0:08:11 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi005 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub start / force,recursive'

fail 6804870 2022-04-25 07:49:36 2022-04-25 11:03:30 2022-04-25 11:23:48 0:20:18 0:08:43 0:11:35 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
Failure Reason:

Command failed on smithi003 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804871 2022-04-25 07:49:37 2022-04-25 11:04:20 2022-04-25 11:27:14 0:22:54 0:13:24 0:09:30 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
fail 6804872 2022-04-25 07:49:38 2022-04-25 11:07:11 2022-04-25 11:38:03 0:30:52 0:21:27 0:09:25 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804873 2022-04-25 07:49:39 2022-04-25 11:07:12 2022-04-25 11:44:31 0:37:19 0:26:23 0:10:56 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804874 2022-04-25 07:49:39 2022-04-25 11:11:33 2022-04-25 11:46:10 0:34:37 0:25:21 0:09:16 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804875 2022-04-25 07:49:40 2022-04-25 11:11:33 2022-04-25 11:29:52 0:18:19 0:11:06 0:07:13 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds_creation_retry} 2
fail 6804876 2022-04-25 07:49:41 2022-04-25 11:12:13 2022-04-25 11:27:47 0:15:34 0:08:38 0:06:56 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi134 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804877 2022-04-25 07:49:42 2022-04-25 11:12:24 2022-04-25 11:46:01 0:33:37 0:24:28 0:09:09 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804878 2022-04-25 07:49:43 2022-04-25 11:12:54 2022-04-25 11:54:03 0:41:09 0:32:35 0:08:34 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

dead 6804879 2022-04-25 07:49:44 2022-04-25 11:12:54 2022-04-25 17:50:49 6:37:55 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6804880 2022-04-25 07:49:44 2022-04-25 11:13:25 2022-04-25 11:46:16 0:32:51 0:21:51 0:11:00 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804881 2022-04-25 07:49:45 2022-04-25 11:15:15 2022-04-25 11:35:07 0:19:52 0:12:57 0:06:55 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
fail 6804882 2022-04-25 07:49:46 2022-04-25 11:16:16 2022-04-25 11:45:21 0:29:05 0:20:24 0:08:41 smithi master ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/metrics} 2
Failure Reason:

Test failure: test_delayed_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)

dead 6804883 2022-04-25 07:49:47 2022-04-25 11:16:36 2022-04-25 17:55:08 6:38:32 smithi master ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/mdtest} 5
Failure Reason:

hit max job timeout

pass 6804884 2022-04-25 07:49:48 2022-04-25 11:16:37 2022-04-25 11:55:30 0:38:53 0:28:49 0:10:04 smithi master rhel 8.5 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/multifs-auth} 2
pass 6804885 2022-04-25 07:49:48 2022-04-25 11:18:37 2022-04-25 12:10:13 0:51:36 0:42:54 0:08:42 smithi master rhel 8.5 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
pass 6804886 2022-04-25 07:49:49 2022-04-25 11:19:18 2022-04-25 11:43:38 0:24:20 0:12:46 0:11:34 smithi master ubuntu 20.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_trivial_sync}} 2
pass 6804887 2022-04-25 07:49:50 2022-04-25 11:20:08 2022-04-25 12:21:42 1:01:34 0:52:10 0:09:24 smithi master ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/dbench validater/valgrind} 2
fail 6804888 2022-04-25 07:49:51 2022-04-25 11:20:19 2022-04-25 11:40:13 0:19:54 0:09:27 0:10:27 smithi master centos 8.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/misc}} 2
Failure Reason:

Test failure: test_connection_expiration (tasks.cephfs.test_volumes.TestMisc)

pass 6804889 2022-04-25 07:49:52 2022-04-25 11:20:19 2022-04-25 12:02:24 0:42:05 0:30:43 0:11:22 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} 3
fail 6804890 2022-04-25 07:49:53 2022-04-25 11:23:30 2022-04-25 11:44:17 0:20:47 0:11:15 0:09:32 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi003 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804891 2022-04-25 07:49:53 2022-04-25 11:23:50 2022-04-25 12:35:06 1:11:16 1:01:20 0:09:56 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
fail 6804892 2022-04-25 07:49:54 2022-04-25 11:23:50 2022-04-25 11:57:45 0:33:55 0:26:38 0:07:17 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804893 2022-04-25 07:49:55 2022-04-25 11:25:01 2022-04-25 11:39:54 0:14:53 0:08:30 0:06:23 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi123 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804894 2022-04-25 07:49:56 2022-04-25 11:25:01 2022-04-25 11:50:53 0:25:52 0:15:33 0:10:19 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/multimds_misc} 2
Failure Reason:

Test failure: test_apply_tag (tasks.cephfs.test_multimds_misc.TestScrub2)

pass 6804895 2022-04-25 07:49:57 2022-04-25 11:26:02 2022-04-25 11:51:20 0:25:18 0:13:02 0:12:16 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
fail 6804896 2022-04-25 07:49:57 2022-04-25 11:27:23 2022-04-25 11:51:11 0:23:48 0:14:24 0:09:24 smithi master centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
Failure Reason:

Extra data: line 2 column 477 (char 477)

pass 6804897 2022-04-25 07:49:58 2022-04-25 11:28:13 2022-04-25 11:57:46 0:29:33 0:18:11 0:11:22 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
pass 6804898 2022-04-25 07:49:59 2022-04-25 11:28:13 2022-04-25 12:25:50 0:57:37 0:44:05 0:13:32 smithi master fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
pass 6804899 2022-04-25 07:50:00 2022-04-25 11:29:54 2022-04-25 12:28:40 0:58:46 0:46:31 0:12:15 smithi master ubuntu 20.04 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench traceless/50pc} 2
fail 6804900 2022-04-25 07:50:01 2022-04-25 11:30:14 2022-04-25 12:11:54 0:41:40 0:30:50 0:10:50 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804901 2022-04-25 07:50:01 2022-04-25 11:32:15 2022-04-25 11:52:45 0:20:30 0:12:26 0:08:04 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
fail 6804902 2022-04-25 07:50:02 2022-04-25 11:33:25 2022-04-25 11:53:15 0:19:50 0:10:48 0:09:02 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi082 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804903 2022-04-25 07:50:03 2022-04-25 11:34:16 2022-04-25 12:00:20 0:26:04 0:16:33 0:09:31 smithi master rhel 8.5 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 6804904 2022-04-25 07:50:04 2022-04-25 11:35:16 2022-04-25 11:59:01 0:23:45 0:12:09 0:11:36 smithi master centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 6804905 2022-04-25 07:50:05 2022-04-25 11:37:37 2022-04-25 11:58:41 0:21:04 0:13:59 0:07:05 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/openfiletable} 2
pass 6804906 2022-04-25 07:50:06 2022-04-25 11:37:38 2022-04-25 12:57:33 1:19:55 1:08:51 0:11:04 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
fail 6804907 2022-04-25 07:50:07 2022-04-25 11:38:08 2022-04-25 12:25:07 0:46:59 0:36:44 0:10:15 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi123 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub start / force,recursive'

fail 6804908 2022-04-25 07:50:07 2022-04-25 11:39:59 2022-04-25 12:06:08 0:26:09 0:18:57 0:07:12 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

pass 6804909 2022-04-25 07:50:08 2022-04-25 11:40:19 2022-04-25 12:05:04 0:24:45 0:11:12 0:13:33 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
fail 6804910 2022-04-25 07:50:09 2022-04-25 11:41:19 2022-04-25 12:16:39 0:35:20 0:26:28 0:08:52 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804911 2022-04-25 07:50:10 2022-04-25 11:43:40 2022-04-25 12:00:33 0:16:53 0:07:32 0:09:21 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
Failure Reason:

Command failed on smithi003 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804912 2022-04-25 07:50:11 2022-04-25 11:44:20 2022-04-25 12:08:03 0:23:43 0:11:57 0:11:46 smithi master ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/pool-perm} 2
dead 6804913 2022-04-25 07:50:11 2022-04-25 11:44:41 2022-04-25 15:27:30 3:42:49 3:32:43 0:10:06 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi183 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9bc2ff4611c3c4e3862566d39262fbe7a8b33b2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

pass 6804914 2022-04-25 07:50:12 2022-04-25 11:44:41 2022-04-25 12:04:28 0:19:47 0:13:38 0:06:09 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
fail 6804915 2022-04-25 07:50:13 2022-04-25 11:45:02 2022-04-25 12:14:03 0:29:01 0:18:03 0:10:58 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

fail 6804916 2022-04-25 07:50:14 2022-04-25 11:45:22 2022-04-25 12:03:09 0:17:47 0:07:35 0:10:12 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{yes}} 3
Failure Reason:

Command failed on smithi149 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-fuse cephfs-java libcephfs_jni1 libcephfs1 librados2 librbd1 python-ceph rbd-fuse python3-cephfs cephfs-top cephfs-mirror bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel'

fail 6804917 2022-04-25 07:50:15 2022-04-25 11:46:02 2022-04-25 12:02:51 0:16:49 0:07:41 0:09:08 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed on smithi038 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-fuse cephfs-java libcephfs_jni1 libcephfs1 librados2 librbd1 python-ceph rbd-fuse python3-cephfs cephfs-top cephfs-mirror bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel'

fail 6804918 2022-04-25 07:50:15 2022-04-25 11:46:13 2022-04-25 12:03:09 0:16:56 0:07:25 0:09:31 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/quota} 2
Failure Reason:

Command failed on smithi196 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-fuse cephfs-java libcephfs_jni1 libcephfs1 librados2 librbd1 python-ceph rbd-fuse python3-cephfs cephfs-top cephfs-mirror bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel bison flex elfutils-libelf-devel openssl-devel NetworkManager iproute util-linux libacl-devel libaio-devel libattr-devel libtool libuuid-devel xfsdump xfsprogs xfsprogs-devel libaio-devel libtool libuuid-devel xfsprogs-devel'

fail 6804919 2022-04-25 07:50:16 2022-04-25 11:46:23 2022-04-25 12:05:47 0:19:24 0:08:34 0:10:50 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi008 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804920 2022-04-25 07:50:17 2022-04-25 11:51:04 2022-04-25 12:27:28 0:36:24 0:24:52 0:11:32 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804921 2022-04-25 07:50:18 2022-04-25 11:51:25 2022-04-25 12:31:58 0:40:33 0:30:39 0:09:54 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{no}} 3
fail 6804922 2022-04-25 07:50:19 2022-04-25 11:52:55 2022-04-25 12:12:47 0:19:52 0:08:24 0:11:28 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
Failure Reason:

Command failed on smithi082 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804923 2022-04-25 07:50:19 2022-04-25 11:53:26 2022-04-25 12:13:08 0:19:42 0:12:32 0:07:10 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
fail 6804924 2022-04-25 07:50:20 2022-04-25 11:54:06 2022-04-25 12:10:30 0:16:24 0:05:50 0:10:34 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/pacific}} 3
Failure Reason:

No module named 'tasks'

fail 6804925 2022-04-25 07:50:21 2022-04-25 11:55:37 2022-04-25 12:13:01 0:17:24 0:08:39 0:08:45 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/recovery-fs} 2
Failure Reason:

Command failed on smithi134 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

dead 6804926 2022-04-25 07:50:22 2022-04-25 12:44:45 2022-04-25 19:23:33 6:38:48 smithi master rhel 8.5 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/ffsb}} 2
Failure Reason:

hit max job timeout

fail 6804927 2022-04-25 07:50:23 2022-04-25 12:44:46 2022-04-25 13:20:27 0:35:41 0:29:23 0:06:18 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804928 2022-04-25 07:50:24 2022-04-25 12:45:36 2022-04-25 13:04:01 0:18:25 0:07:04 0:11:21 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi145 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804929 2022-04-25 07:50:24 2022-04-25 12:46:46 2022-04-25 13:18:37 0:31:51 0:21:32 0:10:19 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804930 2022-04-25 07:50:25 2022-04-25 12:46:57 2022-04-25 13:37:05 0:50:08 0:40:37 0:09:31 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804931 2022-04-25 07:50:26 2022-04-25 12:47:47 2022-04-25 13:02:33 0:14:46 0:08:54 0:05:52 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi003 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804932 2022-04-25 07:50:27 2022-04-25 12:47:58 2022-04-25 13:07:47 0:19:49 0:09:11 0:10:38 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/scrub} 2
Failure Reason:

Test failure: test_scrub_checks (tasks.cephfs.test_scrub_checks.TestScrubChecks)

fail 6804933 2022-04-25 07:50:28 2022-04-25 12:48:08 2022-04-25 13:22:03 0:33:55 0:24:35 0:09:20 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804934 2022-04-25 07:50:29 2022-04-25 12:48:58 2022-04-25 13:11:29 0:22:31 0:08:36 0:13:55 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
Failure Reason:

Command failed on smithi079 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804935 2022-04-25 07:50:29 2022-04-25 12:50:09 2022-04-25 15:42:16 2:52:07 2:41:36 0:10:31 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi007 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub start / force,recursive'

pass 6804936 2022-04-25 07:50:30 2022-04-25 12:50:50 2022-04-25 13:14:09 0:23:19 0:15:30 0:07:49 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
fail 6804937 2022-04-25 07:50:31 2022-04-25 12:53:00 2022-04-25 13:27:26 0:34:26 0:22:27 0:11:59 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi122 with status 126: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9bc2ff4611c3c4e3862566d39262fbe7a8b33b2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

fail 6804938 2022-04-25 07:50:32 2022-04-25 12:56:11 2022-04-25 13:17:01 0:20:50 0:07:39 0:13:11 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi005 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804939 2022-04-25 07:50:33 2022-04-25 12:57:31 2022-04-25 13:19:05 0:21:34 0:11:11 0:10:23 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/sessionmap} 2
Failure Reason:

Test failure: test_session_evict_blocklisted (tasks.cephfs.test_sessionmap.TestSessionMap)

pass 6804940 2022-04-25 07:50:33 2022-04-25 12:57:32 2022-04-25 14:09:27 1:11:55 0:59:52 0:12:03 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
fail 6804941 2022-04-25 07:50:34 2022-04-25 12:57:42 2022-04-25 13:43:57 0:46:15 0:38:40 0:07:35 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804942 2022-04-25 07:50:35 2022-04-25 12:58:33 2022-04-25 13:13:37 0:15:04 0:08:29 0:06:35 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi174 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804943 2022-04-25 07:50:36 2022-04-25 12:58:43 2022-04-25 13:27:19 0:28:36 0:15:47 0:12:49 smithi master rhel 8.5 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 6804944 2022-04-25 07:50:37 2022-04-25 13:02:04 2022-04-25 13:28:53 0:26:49 0:18:38 0:08:11 smithi master centos 8.stream fs/cephadm/renamevolume/{0-start 1-rename distro/single-container-host overrides/whitelist_health} 2
fail 6804945 2022-04-25 07:50:38 2022-04-25 13:02:34 2022-04-25 13:29:27 0:26:53 0:15:09 0:11:44 smithi master rhel 8.5 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/libcephfs/{frag test}} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9bc2ff4611c3c4e3862566d39262fbe7a8b33b2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

fail 6804946 2022-04-25 07:50:38 2022-04-25 13:03:25 2022-04-25 13:29:43 0:26:18 0:13:14 0:13:04 smithi master ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cephfs_misc_tests} 4
Failure Reason:

Test failure: test_drop_cache_command_dead (tasks.cephfs.test_misc.TestCacheDrop)

fail 6804947 2022-04-25 07:50:39 2022-04-25 13:06:06 2022-04-25 13:25:13 0:19:07 0:07:24 0:11:43 smithi master centos 8.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/failover} 2
Failure Reason:

Command failed on smithi019 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804948 2022-04-25 07:50:40 2022-04-25 13:07:56 2022-04-25 13:35:17 0:27:21 0:16:27 0:10:54 smithi master rhel 8.5 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 6804949 2022-04-25 07:50:41 2022-04-25 13:10:17 2022-04-25 14:52:14 1:41:57 1:31:42 0:10:15 smithi master centos 8.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
pass 6804950 2022-04-25 07:50:42 2022-04-25 13:11:17 2022-04-25 13:40:59 0:29:42 0:18:42 0:11:00 smithi master ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-ec-root overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/fsstress validater/lockdep} 2
fail 6804951 2022-04-25 07:50:42 2022-04-25 13:11:38 2022-04-25 13:32:20 0:20:42 0:09:17 0:11:25 smithi master centos 8.stream fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/snapshot}} 2
Failure Reason:

Test failure: test_subvolume_group_snapshot_unsupported_status (tasks.cephfs.test_volumes.TestSubvolumeGroupSnapshots)

fail 6804952 2022-04-25 07:50:43 2022-04-25 13:13:38 2022-04-25 13:47:24 0:33:46 0:23:58 0:09:48 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804953 2022-04-25 07:50:44 2022-04-25 13:14:19 2022-04-25 14:00:18 0:45:59 0:35:17 0:10:42 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804954 2022-04-25 07:50:45 2022-04-25 13:17:09 2022-04-25 13:47:00 0:29:51 0:18:29 0:11:22 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

fail 6804955 2022-04-25 07:50:46 2022-04-25 13:17:30 2022-04-25 13:48:11 0:30:41 0:23:30 0:07:11 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snap-schedule} 2
Failure Reason:

Test failure: test_multi_snap_schedule (tasks.cephfs.test_snap_schedules.TestSnapSchedules)

pass 6804956 2022-04-25 07:50:46 2022-04-25 13:18:40 2022-04-25 13:38:25 0:19:45 0:13:29 0:06:16 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
pass 6804957 2022-04-25 07:50:47 2022-04-25 13:19:11 2022-04-25 13:57:24 0:38:13 0:27:30 0:10:43 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
fail 6804958 2022-04-25 07:50:48 2022-04-25 13:20:31 2022-04-25 13:52:55 0:32:24 0:22:04 0:10:20 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804959 2022-04-25 07:50:49 2022-04-25 13:22:12 2022-04-25 14:32:45 1:10:33 0:55:51 0:14:42 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} 3
pass 6804960 2022-04-25 07:50:50 2022-04-25 13:27:23 2022-04-25 14:37:01 1:09:38 0:57:18 0:12:20 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
pass 6804961 2022-04-25 07:50:50 2022-04-25 13:27:33 2022-04-25 13:57:29 0:29:56 0:23:18 0:06:38 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
fail 6804962 2022-04-25 07:50:51 2022-04-25 13:28:54 2022-04-25 14:07:26 0:38:32 0:27:31 0:11:01 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snapshots} 2
Failure Reason:

Test failure: test_snapclient_cache (tasks.cephfs.test_snapshots.TestSnapshots)

fail 6804963 2022-04-25 07:50:52 2022-04-25 13:29:34 2022-04-25 14:02:39 0:33:05 0:24:35 0:08:30 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804964 2022-04-25 07:50:53 2022-04-25 13:29:45 2022-04-25 14:06:53 0:37:08 0:30:32 0:06:36 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804965 2022-04-25 07:50:54 2022-04-25 13:29:45 2022-04-25 13:53:07 0:23:22 0:11:00 0:12:22 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
Failure Reason:

Command failed on smithi174 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804966 2022-04-25 07:50:54 2022-04-25 13:32:26 2022-04-25 14:31:13 0:58:47 0:48:35 0:10:12 smithi master rhel 8.5 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_dbench_iozone} 2
pass 6804967 2022-04-25 07:50:55 2022-04-25 13:35:26 2022-04-25 14:42:27 1:07:01 0:56:07 0:10:54 smithi master ubuntu 20.04 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{frag whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_ffsb traceless/50pc} 2
pass 6804968 2022-04-25 07:50:56 2022-04-25 13:37:07 2022-04-25 13:58:05 0:20:58 0:13:21 0:07:37 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
fail 6804969 2022-04-25 07:50:57 2022-04-25 13:38:27 2022-04-25 14:06:09 0:27:42 0:16:16 0:11:26 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/strays} 2
Failure Reason:

Test failure: test_hardlink_reintegration (tasks.cephfs.test_strays.TestStrays)

pass 6804970 2022-04-25 07:50:58 2022-04-25 13:41:08 2022-04-25 14:05:10 0:24:02 0:14:14 0:09:48 smithi master rhel 8.5 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/iozone}} 2
fail 6804971 2022-04-25 07:50:58 2022-04-25 13:43:59 2022-04-25 14:28:17 0:44:18 0:31:37 0:12:41 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804972 2022-04-25 07:50:59 2022-04-25 13:47:09 2022-04-25 14:04:16 0:17:07 0:08:53 0:08:14 smithi master centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_latest conf/{client mds mon osd} no-mds-cluster overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-upgrade}} 1
fail 6804973 2022-04-25 07:51:00 2022-04-25 13:47:10 2022-04-25 14:04:27 0:17:17 0:06:59 0:10:18 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi138 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804974 2022-04-25 07:51:01 2022-04-25 13:47:30 2022-04-25 14:56:14 1:08:44 0:56:14 0:12:30 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
fail 6804975 2022-04-25 07:51:02 2022-04-25 13:48:21 2022-04-25 14:45:56 0:57:35 0:43:42 0:13:53 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi078 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub status'

fail 6804976 2022-04-25 07:51:02 2022-04-25 13:53:12 2022-04-25 14:10:34 0:17:22 0:08:27 0:08:55 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi103 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804977 2022-04-25 07:51:03 2022-04-25 13:54:52 2022-04-25 14:11:20 0:16:28 0:08:47 0:07:41 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/test_journal_migration} 2
Failure Reason:

Command failed on smithi016 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804978 2022-04-25 07:51:04 2022-04-25 13:56:43 2022-04-25 14:18:09 0:21:26 0:12:07 0:09:19 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
fail 6804979 2022-04-25 07:51:05 2022-04-25 13:57:33 2022-04-25 14:18:34 0:21:01 0:11:19 0:09:42 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
Failure Reason:

Command failed on smithi124 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6804980 2022-04-25 07:51:05 2022-04-25 13:57:33 2022-04-25 14:31:54 0:34:21 0:26:54 0:07:27 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804981 2022-04-25 07:51:06 2022-04-25 13:58:14 2022-04-25 14:17:28 0:19:14 0:12:37 0:06:37 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
fail 6804982 2022-04-25 07:51:07 2022-04-25 13:58:24 2022-04-25 14:19:00 0:20:36 0:10:46 0:09:50 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi082 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804983 2022-04-25 07:51:08 2022-04-25 13:59:55 2022-04-25 14:23:23 0:23:28 0:14:03 0:09:25 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/truncate_delay} 2
pass 6804984 2022-04-25 07:51:09 2022-04-25 14:00:25 2022-04-25 14:42:55 0:42:30 0:33:54 0:08:36 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6804985 2022-04-25 07:51:09 2022-04-25 14:00:26 2022-04-25 14:36:01 0:35:35 0:24:58 0:10:37 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804986 2022-04-25 07:51:10 2022-04-25 14:00:56 2022-04-25 14:44:11 0:43:15 0:31:57 0:11:18 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804987 2022-04-25 07:51:11 2022-04-25 14:02:47 2022-04-25 14:19:40 0:16:53 0:08:45 0:08:08 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi142 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6804988 2022-04-25 07:51:12 2022-04-25 14:04:17 2022-04-25 14:25:59 0:21:42 0:11:16 0:10:26 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
pass 6804989 2022-04-25 07:51:13 2022-04-25 14:04:38 2022-04-25 14:55:20 0:50:42 0:39:11 0:11:31 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} 3
fail 6804990 2022-04-25 07:51:13 2022-04-25 14:06:18 2022-04-25 14:37:57 0:31:39 0:22:31 0:09:08 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804991 2022-04-25 07:51:14 2022-04-25 14:06:59 2022-04-25 14:28:14 0:21:15 0:12:29 0:08:46 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/workunit/dir-max-entries} 2
pass 6804992 2022-04-25 07:51:15 2022-04-25 14:06:59 2022-04-25 14:37:37 0:30:38 0:17:56 0:12:42 smithi master ubuntu 20.04 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
pass 6804993 2022-04-25 07:51:16 2022-04-25 14:07:29 2022-04-25 14:30:45 0:23:16 0:12:20 0:10:56 smithi master centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 6804994 2022-04-25 07:51:17 2022-04-25 14:09:30 2022-04-25 14:32:06 0:22:36 0:14:07 0:08:29 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
fail 6804995 2022-04-25 07:51:17 2022-04-25 14:10:41 2022-04-25 14:45:49 0:35:08 0:25:33 0:09:35 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6804996 2022-04-25 07:51:18 2022-04-25 14:11:21 2022-04-25 14:50:25 0:39:04 0:28:49 0:10:15 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6804997 2022-04-25 07:51:19 2022-04-25 14:15:22 2022-04-25 15:20:01 1:04:39 0:53:41 0:10:58 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
fail 6804998 2022-04-25 07:51:20 2022-04-25 14:15:22 2022-04-25 14:38:20 0:22:58 0:14:02 0:08:56 smithi master centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
Failure Reason:

Extra data: line 2 column 478 (char 478)

fail 6804999 2022-04-25 07:51:21 2022-04-25 14:15:23 2022-04-25 14:30:15 0:14:52 0:09:06 0:05:46 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi104 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6805000 2022-04-25 07:51:22 2022-04-25 14:15:33 2022-04-25 14:32:29 0:16:56 0:11:24 0:05:32 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/workunit/quota} 2
pass 6805001 2022-04-25 07:51:22 2022-04-25 14:15:34 2022-04-25 14:36:52 0:21:18 0:10:57 0:10:21 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
fail 6805002 2022-04-25 07:51:23 2022-04-25 14:15:34 2022-04-25 14:57:14 0:41:40 0:33:20 0:08:20 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6805003 2022-04-25 07:51:24 2022-04-25 14:15:34 2022-04-25 14:37:40 0:22:06 0:08:34 0:13:32 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
Failure Reason:

Command failed on smithi137 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6805004 2022-04-25 07:51:25 2022-04-25 14:15:35 2022-04-25 14:34:36 0:19:01 0:13:26 0:05:35 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
fail 6805005 2022-04-25 07:51:25 2022-04-25 14:15:35 2022-04-25 14:35:34 0:19:59 0:08:50 0:11:09 smithi master ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/acls} 2
Failure Reason:

"2022-04-25T14:34:08.382767+0000 mon.a (mon.0) 223 : cluster [WRN] Health check failed: Degraded data redundancy: 9/48 objects degraded (18.750%), 4 pgs degraded (PG_DEGRADED)" in cluster log

pass 6805006 2022-04-25 07:51:26 2022-04-25 14:15:35 2022-04-25 14:41:54 0:26:19 0:11:28 0:14:51 smithi master ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/ior-shared-file} 5
pass 6805007 2022-04-25 07:51:27 2022-04-25 14:15:36 2022-04-25 14:44:33 0:28:57 0:23:06 0:05:51 smithi master rhel 8.5 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/multifs-auth} 2
pass 6805008 2022-04-25 07:51:28 2022-04-25 14:15:36 2022-04-25 14:52:10 0:36:34 0:31:44 0:04:50 smithi master rhel 8.5 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
pass 6805009 2022-04-25 07:51:29 2022-04-25 14:15:37 2022-04-25 15:05:06 0:49:29 0:42:54 0:06:35 smithi master rhel 8.5 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} 2
pass 6805010 2022-04-25 07:51:29 2022-04-25 14:38:07 2022-04-25 15:45:17 1:07:10 0:59:29 0:07:41 smithi master rhel 8.5 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/dbench validater/valgrind} 2
fail 6805011 2022-04-25 07:51:30 2022-04-25 15:14:47 1679 smithi master rhel 8.5 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/basic}} 2
Failure Reason:

Test failure: test_subvolume_group_quota_mds_path_restriction_to_group_path (tasks.cephfs.test_volumes.TestSubvolumeGroups)

fail 6805012 2022-04-25 07:51:31 2022-04-25 14:40:18 2022-04-25 15:15:58 0:35:40 0:26:02 0:09:38 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{yes}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi005 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub start / force,recursive'

pass 6805013 2022-04-25 07:51:32 2022-04-25 14:40:48 2022-04-25 16:11:35 1:30:47 1:23:00 0:07:47 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
fail 6805014 2022-04-25 07:51:33 2022-04-25 14:41:59 2022-04-25 15:03:08 0:21:09 0:11:10 0:09:59 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed on smithi063 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

dead 6805015 2022-04-25 07:51:33 2022-04-25 14:41:59 2022-04-25 21:20:06 6:38:07 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6805016 2022-04-25 07:51:34 2022-04-25 14:41:59 2022-04-25 15:25:55 0:43:56 0:34:48 0:09:08 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6805017 2022-04-25 07:51:35 2022-04-25 14:43:00 2022-04-25 15:14:27 0:31:27 0:21:38 0:09:49 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6805018 2022-04-25 07:51:36 2022-04-25 14:43:20 2022-04-25 15:05:28 0:22:08 0:10:58 0:11:10 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/admin} 2
Failure Reason:

Test failure: test_add_already_in_use_metadata_pool (tasks.cephfs.test_admin.TestAddDataPool)

fail 6805019 2022-04-25 07:51:37 2022-04-25 14:44:21 2022-04-25 14:59:40 0:15:19 0:08:50 0:06:29 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi062 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6805020 2022-04-25 07:51:37 2022-04-25 14:44:41 2022-04-25 15:08:51 0:24:10 0:12:53 0:11:17 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
fail 6805021 2022-04-25 07:51:38 2022-04-25 14:45:02 2022-04-25 15:20:29 0:35:27 0:27:31 0:07:56 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6805022 2022-04-25 07:51:39 2022-04-25 14:45:52 2022-04-25 15:06:44 0:20:52 0:11:28 0:09:24 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi078 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6805023 2022-04-25 07:51:40 2022-04-25 14:46:03 2022-04-25 16:46:16 2:00:13 1:52:59 0:07:14 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
fail 6805024 2022-04-25 07:51:41 2022-04-25 14:47:03 2022-04-25 15:11:53 0:24:50 0:13:54 0:10:56 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/alternate-pool} 2
Failure Reason:

Test failure: test_rebuild_simple (tasks.cephfs.test_recovery_pool.TestRecoveryPool)

fail 6805025 2022-04-25 07:51:41 2022-04-25 14:50:24 2022-04-25 15:25:31 0:35:07 0:25:20 0:09:47 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6805026 2022-04-25 07:51:42 2022-04-25 14:50:24 2022-04-25 15:37:39 0:47:15 0:38:59 0:08:16 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
fail 6805027 2022-04-25 07:51:43 2022-04-25 14:50:35 2022-04-25 15:14:25 0:23:50 0:10:36 0:13:14 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi080 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.0 --id 0)"

fail 6805028 2022-04-25 07:51:44 2022-04-25 14:50:55 2022-04-25 15:27:08 0:36:13 0:24:55 0:11:18 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/pacific}} 3
Failure Reason:

Extra data: line 2 column 554 (char 554)

fail 6805029 2022-04-25 07:51:44 2022-04-25 14:52:16 2022-04-25 15:06:57 0:14:41 0:09:08 0:05:33 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi017 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6805030 2022-04-25 07:51:45 2022-04-25 14:52:16 2022-04-25 15:17:26 0:25:10 0:08:15 0:16:55 smithi master fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

"2022-04-25T15:15:12.392865+0000 mon.a (mon.0) 294 : cluster [WRN] Health check failed: Degraded data redundancy: 9/48 objects degraded (18.750%), 4 pgs degraded (PG_DEGRADED)" in cluster log

fail 6805031 2022-04-25 07:51:46 2022-04-25 14:53:36 2022-04-25 15:15:11 0:21:35 0:11:08 0:10:27 smithi master rhel 8.5 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} 2
Failure Reason:

Command failed on smithi137 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6805032 2022-04-25 07:51:47 2022-04-25 14:54:07 2022-04-25 16:16:38 1:22:31 1:11:26 0:11:05 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi079 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub start / force,recursive'

fail 6805033 2022-04-25 07:51:48 2022-04-25 14:55:27 2022-04-25 15:31:59 0:36:32 0:24:12 0:12:20 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6805034 2022-04-25 07:51:48 2022-04-25 14:56:18 2022-04-25 15:28:04 0:31:46 0:21:11 0:10:35 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/asok_dump_tree} 2
fail 6805035 2022-04-25 07:51:49 2022-04-25 14:56:48 2022-04-25 15:28:15 0:31:27 0:22:28 0:08:59 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6805036 2022-04-25 07:51:50 2022-04-25 14:57:19 2022-04-25 16:00:25 1:03:06 0:56:21 0:06:45 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
fail 6805037 2022-04-25 07:51:51 2022-04-25 14:57:19 2022-04-25 15:33:16 0:35:57 0:26:31 0:09:26 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6805038 2022-04-25 07:51:52 2022-04-25 15:00:00 2022-04-25 15:16:47 0:16:47 0:07:06 0:09:41 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed on smithi062 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6805039 2022-04-25 07:51:53 2022-04-25 15:00:00 2022-04-25 15:25:21 0:25:21 0:16:02 0:09:19 smithi master rhel 8.5 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
fail 6805040 2022-04-25 07:51:53 2022-04-25 15:00:21 2022-04-25 15:21:26 0:21:05 0:09:53 0:11:12 smithi master centos 8.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/libcephfs_python} 2
Failure Reason:

Command failed (workunit test fs/test_python.sh) on smithi063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9bc2ff4611c3c4e3862566d39262fbe7a8b33b2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/test_python.sh'

pass 6805041 2022-04-25 07:51:54 2022-04-25 15:02:31 2022-04-25 15:27:40 0:25:09 0:16:06 0:09:03 smithi master rhel 8.5 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
fail 6805042 2022-04-25 07:51:55 2022-04-25 15:02:42 2022-04-25 15:23:33 0:20:51 0:10:55 0:09:56 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed on smithi035 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.0 --id 0)"

pass 6805043 2022-04-25 07:51:56 2022-04-25 15:02:52 2022-04-25 15:30:36 0:27:44 0:15:05 0:12:39 smithi master ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/auto-repair} 2
pass 6805044 2022-04-25 07:51:56 2022-04-25 15:05:13 2022-04-25 15:48:08 0:42:55 0:33:26 0:09:29 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 6805045 2022-04-25 07:51:57 2022-04-25 15:05:13 2022-04-25 15:20:53 0:15:40 0:08:37 0:07:03 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi160 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6805046 2022-04-25 07:51:58 2022-04-25 15:05:33 2022-04-25 15:41:48 0:36:15 0:26:09 0:10:06 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{yes}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi017 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub start / force,recursive'

pass 6805047 2022-04-25 07:51:59 2022-04-25 15:07:04 2022-04-25 15:40:10 0:33:06 0:22:07 0:10:59 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
fail 6805048 2022-04-25 07:51:59 2022-04-25 15:08:55 2022-04-25 15:32:49 0:23:54 0:11:38 0:12:16 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi124 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6805049 2022-04-25 07:52:00 2022-04-25 15:11:45 2022-04-25 15:42:52 0:31:07 0:22:50 0:08:17 smithi master rhel 8.5 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_fsstress}} 2
fail 6805050 2022-04-25 07:52:01 2022-04-25 15:11:46 2022-04-25 15:55:58 0:44:12 0:33:46 0:10:26 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6805051 2022-04-25 07:52:02 2022-04-25 15:12:36 2022-04-25 15:30:56 0:18:20 0:10:23 0:07:57 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/backtrace} 2
pass 6805052 2022-04-25 07:52:03 2022-04-25 15:13:47 2022-04-25 16:45:30 1:31:43 1:25:11 0:06:32 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
fail 6805053 2022-04-25 07:52:03 2022-04-25 15:14:27 2022-04-25 15:33:59 0:19:32 0:08:04 0:11:28 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed on smithi089 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6805054 2022-04-25 07:52:04 2022-04-25 15:14:37 2022-04-25 15:48:01 0:33:24 0:26:41 0:06:43 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6805055 2022-04-25 07:52:05 2022-04-25 15:15:18 2022-04-25 15:46:43 0:31:25 0:21:51 0:09:34 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6805056 2022-04-25 07:52:06 2022-04-25 15:15:39 2022-04-25 15:53:02 0:37:23 0:28:09 0:09:14 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
pass 6805057 2022-04-25 07:52:07 2022-04-25 15:16:09 2022-04-25 15:35:08 0:18:59 0:10:23 0:08:36 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/cap-flush} 2
fail 6805058 2022-04-25 07:52:07 2022-04-25 15:16:09 2022-04-25 15:31:40 0:15:31 0:08:50 0:06:41 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi062 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6805059 2022-04-25 07:52:08 2022-04-25 15:16:50 2022-04-25 15:50:40 0:33:50 0:24:04 0:09:46 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6805060 2022-04-25 07:52:09 2022-04-25 15:17:30 2022-04-25 15:53:03 0:35:33 0:25:20 0:10:13 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{no}} 3
fail 6805061 2022-04-25 07:52:10 2022-04-25 15:20:31 2022-04-25 15:42:19 0:21:48 0:11:38 0:10:10 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi039 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6805062 2022-04-25 07:52:11 2022-04-25 15:20:31 2022-04-25 17:16:24 1:55:53 1:48:14 0:07:39 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
fail 6805063 2022-04-25 07:52:12 2022-04-25 15:21:02 2022-04-25 15:44:26 0:23:24 0:12:51 0:10:33 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-limits} 2
Failure Reason:

Test failure: test_client_min_caps_working_set (tasks.cephfs.test_client_limits.TestClientLimits)

fail 6805064 2022-04-25 07:52:12 2022-04-25 15:21:32 2022-04-25 15:42:51 0:21:19 0:07:50 0:13:29 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed on smithi035 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6805065 2022-04-25 07:52:13 2022-04-25 15:23:43 2022-04-25 16:08:06 0:44:23 0:34:11 0:10:12 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6805066 2022-04-25 07:52:14 2022-04-25 15:25:14 2022-04-25 15:58:46 0:33:32 0:22:16 0:11:16 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
pass 6805067 2022-04-25 07:52:15 2022-04-25 15:25:24 2022-04-25 15:51:14 0:25:50 0:14:09 0:11:41 smithi master ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/mdtest} 4
fail 6805068 2022-04-25 07:52:16 2022-04-25 15:26:05 2022-04-25 15:45:56 0:19:51 0:08:23 0:11:28 smithi master ubuntu 20.04 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/failover} 2
Failure Reason:

"2022-04-25T15:44:06.267737+0000 mon.a (mon.0) 298 : cluster [WRN] Health check failed: Degraded data redundancy: 5/48 objects degraded (10.417%), 4 pgs degraded (PG_DEGRADED)" in cluster log

pass 6805069 2022-04-25 07:52:16 2022-04-25 15:26:35 2022-04-25 16:28:42 1:02:07 0:51:09 0:10:58 smithi master ubuntu 20.04 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
fail 6805070 2022-04-25 07:52:17 2022-04-25 15:26:56 2022-04-25 15:47:46 0:20:50 0:11:34 0:09:16 smithi master rhel 8.5 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/fsstress validater/lockdep} 2
Failure Reason:

Command failed on smithi037 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6805071 2022-04-25 07:52:18 2022-04-25 15:27:16 2022-04-25 15:48:58 0:21:42 0:10:25 0:11:17 smithi master ubuntu 20.04 fs/volumes/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/clone}} 2
Failure Reason:

Test failure: test_clone_failure_status_failed (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)

fail 6805072 2022-04-25 07:52:19 2022-04-25 15:27:37 2022-04-25 15:43:01 0:15:24 0:08:42 0:06:42 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi106 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

dead 6805073 2022-04-25 07:52:20 2022-04-25 15:27:47 2022-04-25 22:06:11 6:38:24 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6805074 2022-04-25 07:52:20 2022-04-25 15:28:07 2022-04-25 16:01:21 0:33:14 0:26:54 0:06:20 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6805075 2022-04-25 07:52:21 2022-04-25 15:28:18 2022-04-25 16:01:37 0:33:19 0:21:08 0:12:11 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

fail 6805076 2022-04-25 07:52:22 2022-04-25 15:30:38 2022-04-25 15:48:04 0:17:26 0:10:21 0:07:05 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-readahead} 2
Failure Reason:

Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

fail 6805077 2022-04-25 07:52:23 2022-04-25 15:30:59 2022-04-25 15:52:41 0:21:42 0:11:32 0:10:10 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi062 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6805078 2022-04-25 07:52:24 2022-04-25 15:31:49 2022-04-25 16:37:40 1:05:51 0:57:53 0:07:58 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
fail 6805079 2022-04-25 07:52:24 2022-04-25 15:32:10 2022-04-25 16:08:26 0:36:16 0:26:32 0:09:44 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{yes}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi085 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub start / force,recursive'

fail 6805080 2022-04-25 07:52:25 2022-04-25 15:33:20 2022-04-25 15:50:14 0:16:54 0:07:00 0:09:54 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed on smithi124 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6805081 2022-04-25 07:52:26 2022-04-25 15:33:20 2022-04-25 16:07:02 0:33:42 0:24:00 0:09:42 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6805082 2022-04-25 07:52:27 2022-04-25 15:34:01 2022-04-25 16:14:22 0:40:21 0:30:57 0:09:24 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-recovery} 2
fail 6805083 2022-04-25 07:52:28 2022-04-25 15:35:11 2022-04-25 15:52:56 0:17:45 0:08:37 0:09:08 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi107 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6805084 2022-04-25 07:52:28 2022-04-25 15:37:42 2022-04-25 16:23:32 0:45:50 0:34:10 0:11:40 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{no}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

pass 6805085 2022-04-25 07:52:29 2022-04-25 15:40:13 2022-04-25 16:09:26 0:29:13 0:20:32 0:08:41 smithi master rhel 8.5 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
pass 6805086 2022-04-25 07:52:30 2022-04-25 15:40:13 2022-04-25 16:06:30 0:26:17 0:16:33 0:09:44 smithi master rhel 8.5 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
fail 6805087 2022-04-25 07:52:31 2022-04-25 15:41:54 2022-04-25 16:13:58 0:32:04 0:21:14 0:10:50 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6805088 2022-04-25 07:52:32 2022-04-25 15:42:24 2022-04-25 16:07:20 0:24:56 0:15:05 0:09:51 smithi master centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
Failure Reason:

Extra data: line 2 column 473 (char 473)

fail 6805089 2022-04-25 07:52:32 2022-04-25 15:42:25 2022-04-25 16:02:18 0:19:53 0:08:48 0:11:05 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi078 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

pass 6805090 2022-04-25 07:52:33 2022-04-25 15:42:55 2022-04-25 16:10:11 0:27:16 0:16:25 0:10:51 smithi master ubuntu 20.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_pjd}} 2
fail 6805091 2022-04-25 07:52:34 2022-04-25 15:42:55 2022-04-25 16:16:06 0:33:11 0:26:05 0:07:06 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6805092 2022-04-25 07:52:35 2022-04-25 15:43:06 2022-04-25 16:03:32 0:20:26 0:10:51 0:09:35 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/damage} 2
Failure Reason:

Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)

pass 6805093 2022-04-25 07:52:36 2022-04-25 15:44:36 2022-04-25 17:14:49 1:30:13 1:23:38 0:06:35 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
fail 6805094 2022-04-25 07:52:36 2022-04-25 15:45:27 2022-04-25 16:31:06 0:45:39 0:35:32 0:10:07 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

SELinux denials found on ubuntu@smithi059.front.sepia.ceph.com: ['type=AVC msg=audit(1650902009.662:192): avc: denied { node_bind } for pid=1478 comm="ping" saddr=172.21.15.59 scontext=system_u:system_r:ping_t:s0 tcontext=system_u:object_r:node_t:s0 tclass=icmp_socket permissive=1']

pass 6805095 2022-04-25 07:52:37 2022-04-25 15:46:07 2022-04-25 16:56:34 1:10:27 1:00:05 0:10:22 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
fail 6805096 2022-04-25 07:52:38 2022-04-25 15:47:48 2022-04-25 16:23:10 0:35:22 0:25:23 0:09:59 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason:

Timed out waiting for MDS daemons to become healthy

dead 6805097 2022-04-25 07:52:39 2022-04-25 15:48:08 2022-04-25 22:26:07 6:37:59 smithi master rhel 8.5 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

hit max job timeout

pass 6805098 2022-04-25 07:52:39 2022-04-25 15:48:09 2022-04-25 16:16:51 0:28:42 0:20:12 0:08:30 smithi master centos 8.stream fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_blogbench traceless/50pc} 2
fail 6805099 2022-04-25 07:52:40 2022-04-25 15:48:09 2022-04-25 16:03:02 0:14:53 0:08:41 0:06:12 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Command failed on smithi138 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6805100 2022-04-25 07:52:41 2022-04-25 15:48:09 2022-04-25 16:05:31 0:17:22 0:10:41 0:06:41 smithi master rhel 8.5 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/data-scan} 2
Failure Reason:

Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)

fail 6805101 2022-04-25 07:52:42 2022-04-25 15:48:40 2022-04-25 16:15:52 0:27:12 0:17:47 0:09:25 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

dead 6805102 2022-04-25 07:52:42 2022-04-25 15:48:40 2022-04-25 19:30:09 3:41:29 3:32:11 0:09:18 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi183 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9bc2ff4611c3c4e3862566d39262fbe7a8b33b2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

fail 6805103 2022-04-25 07:52:43 2022-04-25 15:49:00 2022-04-25 16:33:57 0:44:57 0:33:39 0:11:18 smithi master rhel 8.5 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} 3
Failure Reason:

Timed out waiting for MDS daemons to become healthy

fail 6805104 2022-04-25 07:52:44 2022-04-25 15:50:21 2022-04-25 16:12:01 0:21:40 0:11:18 0:10:22 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed on smithi120 with status 1: "(cd /home/ubuntu/cephtest && exec sudo nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.admin sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-fuse -f --admin-socket '/var/run/ceph/$cluster-$name.$pid.asok' /home/ubuntu/cephtest/mnt.admin --id admin --client_fs=cephfs)"

fail 6805105 2022-04-25 07:52:45 2022-04-25 15:50:51 2022-04-25 16:24:20 0:33:29 0:25:50 0:07:39 smithi master rhel 8.5 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi097 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9bc2ff4611c3c4e3862566d39262fbe7a8b33b2c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'

fail 6805106 2022-04-25 07:52:46 2022-04-25 15:51:22 2022-04-25 16:15:47 0:24:25 0:16:02 0:08:23 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/exports} 2
Failure Reason:

Test failure: test_ephemeral_pin_dist_failover (tasks.cephfs.test_exports.TestEphemeralPins)