Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
dead 6749759 2022-03-21 06:07:16 2022-03-21 06:12:31 2022-03-21 15:15:00 9:02:29 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
Failure Reason:

hit max job timeout

fail 6749760 2022-03-21 06:07:16 2022-03-21 06:19:42 2022-03-21 06:47:08 0:27:26 0:16:26 0:11:00 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/quota} 2
Failure Reason:

"2022-03-21T06:43:49.781290+0000 mon.a (mon.0) 539 : cluster [WRN] Replacing daemon mds.d as rank 0 with standby daemon mds.a" in cluster log

dead 6749761 2022-03-21 06:07:17 2022-03-21 06:19:53 2022-03-21 13:00:11 6:40:18 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
Failure Reason:

hit max job timeout

fail 6749762 2022-03-21 06:07:18 2022-03-21 06:19:53 2022-03-21 09:53:24 3:33:31 3:22:24 0:11:07 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi017 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

fail 6749763 2022-03-21 06:07:19 2022-03-21 06:19:53 2022-03-21 12:57:40 6:37:47 6:27:25 0:10:22 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{no}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi002 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

dead 6749764 2022-03-21 06:07:20 2022-03-21 06:24:54 2022-03-21 13:05:09 6:40:15 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
Failure Reason:

hit max job timeout

fail 6749765 2022-03-21 06:07:20 2022-03-21 06:25:25 2022-03-21 07:17:56 0:52:31 0:36:58 0:15:33 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
Failure Reason:

"2022-03-21T07:03:31.451976+0000 mon.a (mon.0) 331 : cluster [WRN] Replacing daemon mds.c as rank 0 with standby daemon mds.b" in cluster log

fail 6749766 2022-03-21 06:07:21 2022-03-21 06:28:56 2022-03-21 08:10:45 1:41:49 1:29:28 0:12:21 smithi master centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
Failure Reason:

Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

fail 6749767 2022-03-21 06:07:22 2022-03-21 06:31:46 2022-03-21 07:07:46 0:36:00 0:22:25 0:13:35 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi039 with status 1: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'

dead 6749768 2022-03-21 06:07:23 2022-03-21 06:38:07 2022-03-21 13:28:47 6:50:40 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

fail 6749769 2022-03-21 06:07:24 2022-03-21 06:40:58 2022-03-21 10:24:51 3:43:53 3:37:47 0:06:06 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} 3
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi043 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

fail 6749770 2022-03-21 06:07:24 2022-03-21 06:41:29 2022-03-21 07:43:46 1:02:17 0:49:22 0:12:55 smithi master rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snap-schedule} 2
Failure Reason:

"2022-03-21T07:04:23.592368+0000 mon.a (mon.0) 446 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 6749771 2022-03-21 06:07:25 2022-03-21 06:42:49 2022-03-21 11:58:58 5:16:09 5:08:10 0:07:59 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi065 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/untar_snap_rm.sh'

dead 6749772 2022-03-21 06:07:26 2022-03-21 06:43:40 2022-03-21 13:31:17 6:47:37 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

dead 6749773 2022-03-21 06:07:27 2022-03-21 06:44:00 2022-03-21 13:25:36 6:41:36 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} 3
Failure Reason:

hit max job timeout

dead 6749774 2022-03-21 06:07:28 2022-03-21 06:47:41 2022-03-21 13:28:31 6:40:50 smithi master rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snapshots} 2
Failure Reason:

hit max job timeout

fail 6749775 2022-03-21 06:07:28 2022-03-21 06:49:51 2022-03-21 07:23:52 0:34:01 0:21:56 0:12:05 smithi master ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/strays} 2
Failure Reason:

Test failure: test_dirfrag_limit_fragmented (tasks.cephfs.test_strays.TestStrays), test_dirfrag_limit_fragmented (tasks.cephfs.test_strays.TestStrays)

fail 6749776 2022-03-21 06:07:29 2022-03-21 06:52:22 2022-03-21 07:13:45 0:21:23 0:12:04 0:09:19 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} 3
Failure Reason:

Command failed on smithi164 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:d3799b8cfa5046465710eadeea2bed7bb4681409 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a9106f3c-a8e5-11ec-8c35-001a4aab830c -- ceph orch daemon add osd smithi164:vg_nvme/lv_3'

dead 6749777 2022-03-21 06:07:30 2022-03-21 06:54:33 2022-03-21 13:43:08 6:48:35 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

pass 6749778 2022-03-21 06:07:31 2022-03-21 06:55:43 2022-03-21 07:31:19 0:35:36 0:25:37 0:09:59 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
fail 6749779 2022-03-21 06:07:32 2022-03-21 06:55:43 2022-03-21 07:38:14 0:42:31 0:35:00 0:07:31 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi092 with status 1: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'

pass 6749780 2022-03-21 06:07:33 2022-03-21 06:57:14 2022-03-21 07:30:51 0:33:37 0:22:43 0:10:54 smithi master centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
dead 6749781 2022-03-21 06:07:33 2022-03-21 06:57:44 2022-03-21 13:48:42 6:50:58 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

fail 6749782 2022-03-21 06:07:34 2022-03-21 06:59:55 2022-03-21 11:45:12 4:45:17 4:29:50 0:15:27 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi039 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

pass 6749783 2022-03-21 06:07:35 2022-03-21 07:07:56 2022-03-21 09:19:30 2:11:34 1:59:55 0:11:39 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
dead 6749784 2022-03-21 06:07:36 2022-03-21 07:12:17 2022-03-21 13:59:06 6:46:49 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

dead 6749785 2022-03-21 06:07:37 2022-03-21 07:13:48 2022-03-21 13:51:43 6:37:55 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

dead 6749786 2022-03-21 06:07:37 2022-03-21 07:13:48 2022-03-21 16:14:54 9:01:06 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iogen} wsync/{no}} 3
Failure Reason:

hit max job timeout

dead 6749787 2022-03-21 06:07:38 2022-03-21 07:17:59 2022-03-21 14:11:14 6:53:15 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

pass 6749788 2022-03-21 06:07:39 2022-03-21 07:24:00 2022-03-21 08:06:56 0:42:56 0:27:42 0:15:14 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
fail 6749789 2022-03-21 06:07:40 2022-03-21 07:27:51 2022-03-21 08:10:58 0:43:07 0:35:05 0:08:02 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-dir-rename.sh) on smithi114 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-dir-rename.sh'

dead 6749790 2022-03-21 06:07:41 2022-03-21 07:29:51 2022-03-21 14:18:05 6:48:14 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

dead 6749791 2022-03-21 06:07:42 2022-03-21 07:30:52 2022-03-21 14:09:46 6:38:54 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

hit max job timeout

pass 6749792 2022-03-21 06:07:43 2022-03-21 07:31:22 2022-03-21 08:35:26 1:04:04 0:49:45 0:14:19 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
fail 6749793 2022-03-21 06:07:43 2022-03-21 07:34:23 2022-03-21 10:30:09 2:55:46 2:44:48 0:10:58 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} 3
Failure Reason:

error during scrub thrashing: reached maximum tries (30) after waiting for 900 seconds

pass 6749794 2022-03-21 06:07:44 2022-03-21 07:34:23 2022-03-21 09:32:05 1:57:42 1:50:02 0:07:40 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
dead 6749795 2022-03-21 06:07:45 2022-03-21 07:35:04 2022-03-21 14:23:47 6:48:43 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

dead 6749796 2022-03-21 06:07:46 2022-03-21 07:38:25 2022-03-21 16:42:05 9:03:40 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
Failure Reason:

hit max job timeout

fail 6749797 2022-03-21 06:07:47 2022-03-21 07:46:46 2022-03-21 08:32:00 0:45:14 0:33:33 0:11:41 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
Failure Reason:

"2022-03-21T08:18:35.150569+0000 mon.a (mon.0) 226 : cluster [WRN] Replacing daemon mds.c as rank 0 with standby daemon mds.b" in cluster log

pass 6749798 2022-03-21 06:07:48 2022-03-21 07:46:46 2022-03-21 09:29:03 1:42:17 1:16:53 0:25:24 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
dead 6749799 2022-03-21 06:07:48 2022-03-21 08:05:59 2022-03-21 14:54:28 6:48:29 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

fail 6749800 2022-03-21 06:07:49 2022-03-21 08:06:00 2022-03-21 12:02:11 3:56:11 3:48:10 0:08:01 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{yes}} 3
Failure Reason:

Command failed (workunit test fs/norstats/kernel_untar_tar.sh) on smithi037 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/norstats/kernel_untar_tar.sh'

dead 6749801 2022-03-21 06:07:50 2022-03-21 08:07:00 2022-03-21 14:46:55 6:39:55 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

dead 6749802 2022-03-21 06:07:51 2022-03-21 08:07:31 2022-03-21 14:59:10 6:51:39 smithi master centos 8.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/fsstress validater/lockdep} 2
Failure Reason:

hit max job timeout

pass 6749803 2022-03-21 06:07:52 2022-03-21 08:07:31 2022-03-21 08:48:00 0:40:29 0:26:56 0:13:33 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} 3
fail 6749804 2022-03-21 06:07:52 2022-03-21 08:11:02 2022-03-21 08:54:35 0:43:33 0:33:41 0:09:52 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi067 with status 1: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'

dead 6749805 2022-03-21 06:07:53 2022-03-21 08:13:32 2022-03-21 15:00:44 6:47:12 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

dead 6749806 2022-03-21 06:07:54 2022-03-21 08:14:03 2022-03-21 17:12:00 8:57:57 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{no}} 3
Failure Reason:

hit max job timeout

dead 6749807 2022-03-21 06:07:55 2022-03-21 08:14:03 2022-03-21 15:05:08 6:51:05 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

fail 6749808 2022-03-21 06:07:56 2022-03-21 08:17:34 2022-03-21 13:19:19 5:01:45 4:44:06 0:17:39 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
Failure Reason:

Command failed (workunit test fs/test_o_trunc.sh) on smithi125 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/test_o_trunc.sh'

fail 6749809 2022-03-21 06:07:57 2022-03-21 08:25:45 2022-03-21 08:54:19 0:28:34 0:22:44 0:05:50 smithi master rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-recovery} 2
Failure Reason:

Test failure: test_filelock_eviction (tasks.cephfs.test_client_recovery.TestClientRecovery), test_filelock_eviction (tasks.cephfs.test_client_recovery.TestClientRecovery)

dead 6749810 2022-03-21 06:07:57 2022-03-21 08:25:46 2022-03-21 15:19:51 6:54:05 smithi master centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

hit max job timeout

fail 6749811 2022-03-21 06:07:58 2022-03-21 08:32:07 2022-03-21 09:02:42 0:30:35 0:20:38 0:09:57 smithi master rhel 8.4 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/ffsb}} 2
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi159 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

dead 6749812 2022-03-21 06:07:59 2022-03-21 08:35:28 2022-03-21 15:17:51 6:42:23 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

hit max job timeout

dead 6749813 2022-03-21 06:08:00 2022-03-21 08:37:58 2022-03-21 15:21:54 6:43:56 smithi master rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/exports} 2
Failure Reason:

hit max job timeout

fail 6749814 2022-03-21 06:08:01 2022-03-21 08:37:59 2022-03-21 12:19:17 3:41:18 3:25:18 0:16:00 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} 3
Failure Reason:

Command failed (workunit test fs/misc/dirfrag.sh) on smithi114 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/dirfrag.sh'

dead 6749816 2022-03-21 06:08:01 2022-03-21 08:48:11 2022-03-21 15:25:41 6:37:30 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

dead 6749818 2022-03-21 06:08:02 2022-03-21 08:48:11 2022-03-21 15:34:06 6:45:55 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

hit max job timeout

fail 6749820 2022-03-21 06:08:03 2022-03-21 08:54:23 2022-03-21 09:39:36 0:45:13 0:34:58 0:10:15 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi067 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

pass 6749822 2022-03-21 06:08:04 2022-03-21 08:54:43 2022-03-21 09:32:06 0:37:23 0:21:45 0:15:38 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
dead 6749824 2022-03-21 06:08:04 2022-03-21 09:00:24 2022-03-21 18:05:14 9:04:50 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} 3
Failure Reason:

hit max job timeout

pass 6749826 2022-03-21 06:08:05 2022-03-21 09:00:25 2022-03-21 09:30:31 0:30:06 0:16:28 0:13:38 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
pass 6749828 2022-03-21 06:08:06 2022-03-21 09:02:45 2022-03-21 09:47:13 0:44:28 0:30:11 0:14:17 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
dead 6749830 2022-03-21 06:08:07 2022-03-21 09:05:56 2022-03-21 15:45:51 6:39:55 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

hit max job timeout

dead 6749832 2022-03-21 06:08:08 2022-03-21 09:05:57 2022-03-21 15:46:11 6:40:14 smithi master rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/journal-repair} 2
Failure Reason:

hit max job timeout

pass 6749834 2022-03-21 06:08:08 2022-03-21 09:07:47 2022-03-21 09:35:11 0:27:24 0:17:46 0:09:38 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
dead 6749836 2022-03-21 06:08:09 2022-03-21 09:07:47 2022-03-21 15:51:08 6:43:21 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} 3
Failure Reason:

hit max job timeout

dead 6749838 2022-03-21 06:08:10 2022-03-21 09:13:29 2022-03-21 15:53:22 6:39:53 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

hit max job timeout

fail 6749840 2022-03-21 06:08:11 2022-03-21 09:13:29 2022-03-21 14:17:12 5:03:43 4:48:18 0:15:25 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi036 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

fail 6749842 2022-03-21 06:08:12 2022-03-21 09:19:40 2022-03-21 09:48:53 0:29:13 0:18:15 0:10:58 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

dead 6749844 2022-03-21 06:08:12 2022-03-21 09:23:31 2022-03-21 18:19:52 8:56:21 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{yes}} 3
Failure Reason:

hit max job timeout

pass 6749846 2022-03-21 06:08:13 2022-03-21 09:23:31 2022-03-21 10:02:35 0:39:04 0:23:23 0:15:41 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
fail 6749849 2022-03-21 06:08:14 2022-03-21 09:29:13 2022-03-21 10:00:10 0:30:57 0:20:19 0:10:38 smithi master ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/metrics} 2
Failure Reason:

Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)

dead 6749851 2022-03-21 06:08:15 2022-03-21 09:30:33 2022-03-21 16:12:48 6:42:15 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

hit max job timeout

pass 6749853 2022-03-21 06:08:16 2022-03-21 09:32:14 2022-03-21 10:08:18 0:36:04 0:21:53 0:14:11 smithi master centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
dead 6749855 2022-03-21 06:08:16 2022-03-21 09:35:14 2022-03-21 16:18:20 6:43:06 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} 3
Failure Reason:

hit max job timeout

fail 6749857 2022-03-21 06:08:17 2022-03-21 09:39:45 2022-03-21 10:14:34 0:34:49 0:19:58 0:14:51 smithi master rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/openfiletable} 2
Failure Reason:

Test failure: test_max_items_per_obj (tasks.cephfs.test_openfiletable.OpenFileTable)

dead 6749859 2022-03-21 06:08:18 2022-03-21 09:47:16 2022-03-21 16:36:07 6:48:51 smithi master centos 8.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/fsstress validater/lockdep} 2
Failure Reason:

hit max job timeout

dead 6749861 2022-03-21 06:08:19 2022-03-21 09:48:57 2022-03-21 16:33:38 6:44:41 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

hit max job timeout

dead 6749863 2022-03-21 06:08:20 2022-03-21 09:53:28 2022-03-21 10:45:16 0:51:48 0:34:20 0:17:28 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi170 with status 126: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/fsstress.sh'

dead 6749865 2022-03-21 06:08:20 2022-03-21 10:00:19 2022-03-21 19:00:05 8:59:46 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} 3
Failure Reason:

hit max job timeout

fail 6749867 2022-03-21 06:08:21 2022-03-21 10:02:40 2022-03-21 10:30:49 0:28:09 0:18:20 0:09:49 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

pass 6749869 2022-03-21 06:08:22 2022-03-21 10:05:40 2022-03-21 10:41:29 0:35:49 0:22:20 0:13:29 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
dead 6749871 2022-03-21 06:08:23 2022-03-21 10:08:21 2022-03-21 16:47:37 6:39:16 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

hit max job timeout

fail 6749873 2022-03-21 06:08:23 2022-03-21 10:08:21 2022-03-21 13:56:39 3:48:18 3:24:08 0:24:10 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi043 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

dead 6749875 2022-03-21 06:08:24 2022-03-21 12:46:21 2022-03-21 19:26:50 6:40:29 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

hit max job timeout

dead 6749877 2022-03-21 06:08:25 2022-03-21 12:46:21 2022-03-21 21:40:49 8:54:28 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}} 3
Failure Reason:

hit max job timeout

pass 6749879 2022-03-21 06:08:26 2022-03-21 12:48:22 2022-03-21 13:40:13 0:51:51 0:32:55 0:18:56 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
fail 6749881 2022-03-21 06:08:27 2022-03-21 12:57:43 2022-03-21 13:48:49 0:51:06 0:45:08 0:05:58 smithi master rhel 8.4 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snap-schedule} 2
Failure Reason:

"2022-03-21T13:30:19.230602+0000 mon.a (mon.0) 1501 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 6749883 2022-03-21 06:08:27 2022-03-21 12:57:43 2022-03-21 13:43:38 0:45:55 0:32:38 0:13:17 smithi master centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
Failure Reason:

"2022-03-21T13:29:17.678092+0000 mon.a (mon.0) 291 : cluster [WRN] Replacing daemon mds.c as rank 0 with standby daemon mds.a" in cluster log

dead 6749885 2022-03-21 06:08:28 2022-03-21 12:58:24 2022-03-21 19:37:25 6:39:01 smithi master ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snapshots} 2
Failure Reason:

hit max job timeout

dead 6749887 2022-03-21 06:08:29 2022-03-21 12:58:24 2022-03-21 19:42:49 6:44:25 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

hit max job timeout

fail 6749889 2022-03-21 06:08:30 2022-03-21 13:03:05 2022-03-21 14:00:20 0:57:15 0:31:02 0:26:13 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi125 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

pass 6749891 2022-03-21 06:08:30 2022-03-21 13:19:27 2022-03-21 14:23:01 1:03:34 0:53:09 0:10:25 smithi master ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/dbench validater/valgrind} 2
dead 6749894 2022-03-21 06:08:31 2022-03-21 13:19:28 2022-03-21 20:01:24 6:41:56 smithi master centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn syntax whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

hit max job timeout

fail 6749896 2022-03-21 06:08:32 2022-03-21 13:23:39 2022-03-21 14:03:36 0:39:57 0:26:03 0:13:54 smithi master centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/strays} 2
Failure Reason:

Test failure: test_migrate_unlinked_dir (tasks.cephfs.test_strays.TestStrays)

dead 6749898 2022-03-21 06:08:33 2022-03-21 13:26:39 2022-03-21 20:06:15 6:39:36 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

hit max job timeout

fail 6749900 2022-03-21 06:08:34 2022-03-21 13:26:40 2022-03-21 13:57:47 0:31:07 0:20:19 0:10:48 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
Failure Reason:

reached maximum tries (90) after waiting for 540 seconds

pass 6749902 2022-03-21 06:08:34 2022-03-21 13:26:40 2022-03-21 13:54:33 0:27:53 0:17:42 0:10:11 smithi master ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
fail 6749904 2022-03-21 06:08:35 2022-03-21 13:26:50 2022-03-21 13:55:53 0:29:03 0:20:52 0:08:11 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-1.sh) on smithi006 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-1.sh'

dead 6749906 2022-03-21 06:08:36 2022-03-21 13:29:21 2022-03-21 20:18:07 6:48:46 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

hit max job timeout

dead 6749908 2022-03-21 06:08:37 2022-03-21 13:40:23 2022-03-21 20:33:39 6:53:16 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason:

hit max job timeout

fail 6749910 2022-03-21 06:08:38 2022-03-21 13:41:14 2022-03-21 16:32:55 2:51:41 2:38:20 0:13:21 smithi master rhel 8.4 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} 3
Failure Reason:

error during scrub thrashing: reached maximum tries (30) after waiting for 900 seconds

fail 6749912 2022-03-21 06:08:38 2022-03-21 13:43:44 2022-03-21 17:20:32 3:36:48 3:26:25 0:10:23 smithi master rhel 8.4 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-2.sh) on smithi055 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d3799b8cfa5046465710eadeea2bed7bb4681409 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-2.sh'