Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7019953 2022-09-08 06:57:58 2022-09-08 07:04:29 2022-09-08 07:50:54 0:46:25 0:33:36 0:12:49 smithi main rhel 8.6 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} 2
Failure Reason:

"1662622643.9983664 mon.a (mon.0) 410 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

fail 7019954 2022-09-08 06:57:58 2022-09-08 07:05:50 2022-09-08 07:42:01 0:36:11 0:24:45 0:11:26 smithi main fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi157 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/kernel_untar_build.sh'

fail 7019955 2022-09-08 06:57:59 2022-09-08 07:05:50 2022-09-08 10:47:56 3:42:06 3:31:19 0:10:47 smithi main rhel 8.6 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_ffsb traceless/50pc} 2
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi153 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

pass 7019956 2022-09-08 06:58:00 2022-09-08 07:06:40 2022-09-08 08:01:08 0:54:28 0:35:22 0:19:06 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/no standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/blogbench}} 3
pass 7019957 2022-09-08 06:58:01 2022-09-08 07:16:46 2022-09-08 08:06:14 0:49:28 0:30:37 0:18:51 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
pass 7019958 2022-09-08 06:58:02 2022-09-08 07:21:17 2022-09-08 07:40:25 0:19:08 0:11:05 0:08:03 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
pass 7019959 2022-09-08 06:58:03 2022-09-08 07:23:08 2022-09-08 07:49:48 0:26:40 0:18:09 0:08:31 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snapshots} 2
fail 7019960 2022-09-08 06:58:04 2022-09-08 07:24:39 2022-09-08 08:54:44 1:30:05 1:12:13 0:17:52 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/dbench}} 3
Failure Reason:

error during scrub thrashing: Command failed on smithi081 with status 13: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub status'

pass 7019961 2022-09-08 06:58:05 2022-09-08 07:31:10 2022-09-08 08:13:07 0:41:57 0:30:45 0:11:12 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/strays} 2
pass 7019962 2022-09-08 06:58:06 2022-09-08 07:31:10 2022-09-08 08:18:33 0:47:23 0:34:50 0:12:33 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7019963 2022-09-08 06:58:07 2022-09-08 07:33:11 2022-09-08 08:08:21 0:35:10 0:17:18 0:17:52 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} 5
Failure Reason:

"1662624117.95465 mon.a (mon.0) 305 : cluster [WRN] Health check failed: 3 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7019964 2022-09-08 06:58:08 2022-09-08 08:25:14 2022-09-08 09:08:26 0:43:12 0:31:53 0:11:19 smithi main rhel 8.6 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} 2
fail 7019965 2022-09-08 06:58:08 2022-09-08 08:25:24 2022-09-08 09:42:33 1:17:09 1:06:17 0:10:52 smithi main centos 8.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

"1662626542.3302948 mon.a (mon.0) 117 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 7019966 2022-09-08 06:58:09 2022-09-08 08:25:34 2022-09-08 08:51:37 0:26:03 0:14:46 0:11:17 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
fail 7019967 2022-09-08 06:58:10 2022-09-08 08:26:25 2022-09-08 08:59:21 0:32:56 0:19:43 0:13:13 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/test_journal_migration} 2
Failure Reason:

"1662627043.0628986 mon.a (mon.0) 385 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7019968 2022-09-08 06:58:11 2022-09-08 08:27:15 2022-09-08 09:02:33 0:35:18 0:21:25 0:13:53 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
fail 7019969 2022-09-08 06:58:12 2022-09-08 08:29:36 2022-09-08 15:03:59 6:34:23 6:22:24 0:11:59 smithi main ubuntu 20.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_pjd}} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi174 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/pjd.sh'

fail 7019970 2022-09-08 06:58:13 2022-09-08 08:30:06 2022-09-08 09:07:57 0:37:51 0:26:05 0:11:46 smithi main rhel 8.6 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

"1662627529.969061 mon.a (mon.0) 205 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7019971 2022-09-08 06:58:14 2022-09-08 08:30:27 2022-09-08 08:56:54 0:26:27 0:14:01 0:12:26 smithi main ubuntu 20.04 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 7019972 2022-09-08 06:58:15 2022-09-08 08:31:47 2022-09-08 08:48:42 0:16:55 0:09:43 0:07:12 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/truncate_delay} 2
pass 7019973 2022-09-08 06:58:16 2022-09-08 08:31:48 2022-09-08 09:46:51 1:15:03 0:59:14 0:15:49 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/no standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/ffsb}} 3
fail 7019974 2022-09-08 06:58:17 2022-09-08 08:33:18 2022-09-08 09:00:08 0:26:50 0:13:34 0:13:16 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/workunit/dir-max-entries} 2
Failure Reason:

"1662627331.6389616 mon.a (mon.0) 200 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

fail 7019975 2022-09-08 06:58:18 2022-09-08 08:34:39 2022-09-08 15:25:55 6:51:16 6:39:15 0:12:01 smithi main rhel 8.6 fs/verify/{begin/{0-install 1-ceph 2-logrotate} centos_8 clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/fsstress validater/valgrind} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi008 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

fail 7019976 2022-09-08 06:58:19 2022-09-08 08:34:49 2022-09-08 15:09:59 6:35:10 6:21:44 0:13:26 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi130 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

fail 7019977 2022-09-08 06:58:20 2022-09-08 08:35:00 2022-09-08 09:00:49 0:25:49 0:12:42 0:13:07 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi002 with status 127: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/daemon-base:latest-pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f737fec4-2f53-11ed-8431-001a4aab830c -- ceph orch daemon add osd smithi002:vg_nvme/lv_4'

fail 7019978 2022-09-08 06:58:21 2022-09-08 08:35:10 2022-09-08 09:09:31 0:34:21 0:21:17 0:13:04 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/workunit/quota} 2
Failure Reason:

Command failed (workunit test fs/quota/quota.sh) on smithi189 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/fs/quota/quota.sh'

fail 7019979 2022-09-08 06:58:21 2022-09-08 08:38:31 2022-09-08 09:28:42 0:50:11 0:36:31 0:13:40 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
Failure Reason:

"1662627775.7444937 mon.b (mon.1) 112 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7019980 2022-09-08 06:58:22 2022-09-08 08:45:32 2022-09-08 09:14:41 0:29:09 0:23:08 0:06:01 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/yes standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/fs/norstats}} 3
pass 7019981 2022-09-08 06:58:23 2022-09-08 08:45:33 2022-09-08 09:23:33 0:38:00 0:25:14 0:12:46 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
pass 7019982 2022-09-08 06:58:24 2022-09-08 08:46:13 2022-09-08 09:01:09 0:14:56 0:09:02 0:05:54 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/acls} 2
fail 7019983 2022-09-08 06:58:25 2022-09-08 08:46:14 2022-09-08 10:34:39 1:48:25 1:39:36 0:08:49 smithi main centos 8.stream fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_dbench_iozone} 2
Failure Reason:

"1662628296.2667868 mon.a (mon.0) 234 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

fail 7019984 2022-09-08 06:58:26 2022-09-08 08:47:24 2022-09-08 15:26:32 6:39:08 6:28:04 0:11:04 smithi main rhel 8.6 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi107 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

fail 7019985 2022-09-08 06:58:27 2022-09-08 08:48:45 2022-09-08 09:16:57 0:28:12 0:16:04 0:12:08 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/admin} 2
Failure Reason:

Test failure: test_dump_loads (tasks.cephfs.test_admin.TestAdminCommandDumpLoads)

fail 7019986 2022-09-08 06:58:28 2022-09-08 08:54:18 2022-09-08 09:55:55 1:01:37 0:45:22 0:16:15 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason:

"1662628721.2851417 mon.a (mon.0) 345 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

fail 7019987 2022-09-08 06:58:29 2022-09-08 08:54:29 2022-09-08 09:23:23 0:28:54 0:15:38 0:13:16 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi070 with status 1: "sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid aa09fbcc-2f56-11ed-8431-001a4aab830c -- bash -c 'ceph fs set cephfs allow_standby_replay true'"

pass 7019988 2022-09-08 06:58:30 2022-09-08 08:54:29 2022-09-08 09:24:42 0:30:13 0:12:21 0:17:52 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/alternate-pool} 2
fail 7019989 2022-09-08 06:58:31 2022-09-08 08:54:49 2022-09-08 09:28:01 0:33:12 0:18:16 0:14:56 smithi main ubuntu 20.04 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

"1662628820.2093868 mon.a (mon.0) 193 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7019990 2022-09-08 06:58:32 2022-09-08 08:55:00 2022-09-08 09:19:46 0:24:46 0:12:36 0:12:10 smithi main centos 8.stream fs/bugs/client_trim_caps/{begin/{0-install 1-ceph 2-logrotate} centos_latest clusters/small-cluster conf/{client mds mon osd} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/trim-i24137} 1
pass 7019991 2022-09-08 06:58:33 2022-09-08 08:55:00 2022-09-08 09:56:23 1:01:23 0:48:18 0:13:05 smithi main centos 8.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
pass 7019992 2022-09-08 06:58:34 2022-09-08 08:55:10 2022-09-08 09:34:16 0:39:06 0:25:55 0:13:11 smithi main rhel 8.6 fs/full/{begin/{0-install 1-ceph 2-logrotate} clusters/1-node-1-mds-1-osd conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mgr-osd-full} 1
fail 7019993 2022-09-08 06:58:35 2022-09-08 08:55:21 2022-09-08 12:23:41 3:28:20 3:12:17 0:16:03 smithi main ubuntu 20.04 fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client} 2
Failure Reason:

Command failed (workunit test client/test.sh) on smithi197 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/client/test.sh'

fail 7019994 2022-09-08 06:58:36 2022-09-08 08:55:51 2022-09-08 11:22:56 2:27:05 2:10:37 0:16:28 smithi main ubuntu 20.04 fs/mirror/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distros$/{ubuntu_latest} tasks/mirror} 1
Failure Reason:

"1662630967.8354058 mon.a (mon.0) 2543 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

fail 7019995 2022-09-08 06:58:37 2022-09-08 08:55:52 2022-09-08 09:39:42 0:43:50 0:28:59 0:14:51 smithi main centos 8.stream fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/three-per-cluster clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distro$/{centos_8} workloads/cephfs-mirror-ha-workunit} 1
Failure Reason:

Command failed on smithi178 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs volume rm dc --yes-i-really-mean-it'"

pass 7019996 2022-09-08 06:58:38 2022-09-08 08:56:12 2022-09-08 10:13:05 1:16:53 0:59:53 0:17:00 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
pass 7019997 2022-09-08 06:58:38 2022-09-08 08:57:03 2022-09-08 09:43:23 0:46:20 0:33:04 0:13:16 smithi main centos 8.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} 2
pass 7019998 2022-09-08 06:58:39 2022-09-08 08:57:33 2022-09-08 09:31:52 0:34:19 0:21:42 0:12:37 smithi main rhel 8.6 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 7019999 2022-09-08 06:58:40 2022-09-08 08:58:34 2022-09-08 10:01:34 1:03:00 0:46:57 0:16:03 smithi main ubuntu 20.04 fs/shell/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/cephfs-shell} 2
fail 7020000 2022-09-08 06:58:41 2022-09-08 08:59:24 2022-09-08 10:35:47 1:36:23 1:21:59 0:14:24 smithi main ubuntu 20.04 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

"1662629360.9036512 mon.a (mon.0) 303 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020001 2022-09-08 06:58:42 2022-09-08 09:00:15 2022-09-08 11:29:53 2:29:38 2:16:49 0:12:49 smithi main centos 8.stream fs/valgrind/{begin/{0-install 1-ceph 2-logrotate} centos_latest debug mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror}} 1
pass 7020002 2022-09-08 06:58:43 2022-09-08 09:00:35 2022-09-08 09:38:29 0:37:54 0:26:50 0:11:04 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/no standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/fsstress}} 3
pass 7020003 2022-09-08 06:58:44 2022-09-08 09:00:55 2022-09-08 09:18:29 0:17:34 0:10:26 0:07:08 smithi main centos 8.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_trivial_sync}} 2
pass 7020004 2022-09-08 06:58:45 2022-09-08 09:00:56 2022-09-08 09:45:41 0:44:45 0:31:08 0:13:37 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
pass 7020005 2022-09-08 06:58:46 2022-09-08 09:02:36 2022-09-08 09:29:46 0:27:10 0:19:30 0:07:40 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/asok_dump_tree} 2
pass 7020006 2022-09-08 06:58:47 2022-09-08 09:02:37 2022-09-08 09:28:06 0:25:29 0:11:47 0:13:42 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/auto-repair} 2
fail 7020007 2022-09-08 06:58:48 2022-09-08 09:02:37 2022-09-08 09:45:34 0:42:57 0:26:24 0:16:33 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
Failure Reason:

"1662629844.7370324 mon.a (mon.0) 256 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020008 2022-09-08 06:58:49 2022-09-08 09:08:08 2022-09-08 09:29:37 0:21:29 0:10:50 0:10:39 smithi main centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_latest conf/{client mds mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-pacific 1-upgrade}} 1
pass 7020009 2022-09-08 06:58:50 2022-09-08 09:08:29 2022-09-08 10:08:52 1:00:23 0:48:58 0:11:25 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/yes standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/suites/fsx}} 3
fail 7020010 2022-09-08 06:58:51 2022-09-08 09:09:39 2022-09-08 09:37:59 0:28:20 0:11:33 0:16:47 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/backtrace} 2
Failure Reason:

"1662629578.7931364 mon.a (mon.0) 231 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 7020011 2022-09-08 06:58:52 2022-09-08 09:14:50 2022-09-08 09:32:40 0:17:50 0:10:22 0:07:28 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
pass 7020012 2022-09-08 06:58:53 2022-09-08 09:15:41 2022-09-08 09:30:37 0:14:56 0:08:41 0:06:15 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/cap-flush} 2
pass 7020013 2022-09-08 06:58:54 2022-09-08 09:16:11 2022-09-08 09:40:28 0:24:17 0:11:26 0:12:51 smithi main centos 8.stream fs/upgrade/upgraded_client/from_nautilus/{bluestore-bitmap centos_latest clusters/{1-mds-1-client-micro} conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-nautilus 1-client-upgrade 2-client-sanity}} 2
fail 7020014 2022-09-08 06:58:55 2022-09-08 09:17:02 2022-09-08 10:20:53 1:03:51 0:51:05 0:12:46 smithi main ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} centos_8 clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/dbench validater/lockdep} 2
Failure Reason:

"1662630637.2493737 mon.a (mon.0) 245 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020015 2022-09-08 06:58:56 2022-09-08 09:18:32 2022-09-08 09:56:34 0:38:02 0:27:05 0:10:57 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-limits} 2
pass 7020016 2022-09-08 06:58:57 2022-09-08 09:19:23 2022-09-08 09:56:19 0:36:56 0:24:24 0:12:32 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/no standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/suites/fsync-tester}} 3
fail 7020017 2022-09-08 06:58:57 2022-09-08 09:21:24 2022-09-08 10:04:42 0:43:18 0:33:00 0:10:18 smithi main rhel 8.6 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/ffsb}} 2
Failure Reason:

"1662630544.9204528 mon.a (mon.0) 294 : cluster [WRN] Health check failed: 2 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

fail 7020018 2022-09-08 06:58:58 2022-09-08 09:21:34 2022-09-08 15:57:09 6:35:35 6:21:41 0:13:54 smithi main ubuntu 20.04 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi070 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

fail 7020019 2022-09-08 06:58:59 2022-09-08 09:23:25 2022-09-08 15:57:12 6:33:47 6:21:03 0:12:44 smithi main centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi112 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 7020020 2022-09-08 06:59:00 2022-09-08 09:23:35 2022-09-08 10:11:05 0:47:30 0:35:37 0:11:53 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7020021 2022-09-08 06:59:01 2022-09-08 09:23:35 2022-09-08 09:48:46 0:25:11 0:11:04 0:14:07 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-readahead} 2
fail 7020022 2022-09-08 06:59:02 2022-09-08 09:24:46 2022-09-08 10:13:13 0:48:27 0:38:44 0:09:43 smithi main centos 8.stream fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_dbench_iozone} 2
Failure Reason:

"1662630813.6527753 mon.a (mon.0) 325 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

dead 7020023 2022-09-08 06:59:03 2022-09-08 09:28:07 2022-09-08 21:41:42 12:13:35 smithi main ubuntu 20.04 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_blogbench traceless/50pc} 2
Failure Reason:

hit max job timeout

pass 7020024 2022-09-08 06:59:04 2022-09-08 09:28:08 2022-09-08 09:54:16 0:26:08 0:14:45 0:11:23 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
pass 7020025 2022-09-08 06:59:05 2022-09-08 09:28:48 2022-09-08 10:03:03 0:34:15 0:26:40 0:07:35 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/client-recovery} 2
pass 7020026 2022-09-08 06:59:06 2022-09-08 09:29:49 2022-09-08 09:55:38 0:25:49 0:12:29 0:13:20 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} 4
pass 7020027 2022-09-08 06:59:07 2022-09-08 09:31:59 2022-09-08 10:26:01 0:54:02 0:47:03 0:06:59 smithi main centos 8.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} 2
fail 7020028 2022-09-08 06:59:08 2022-09-08 09:32:50 2022-09-08 10:33:28 1:00:38 0:52:41 0:07:57 smithi main centos 8.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

"1662630586.7884781 mon.a (mon.0) 310 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020029 2022-09-08 06:59:09 2022-09-08 09:34:20 2022-09-08 10:09:33 0:35:13 0:21:21 0:13:52 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
pass 7020030 2022-09-08 06:59:10 2022-09-08 09:37:11 2022-09-08 10:16:05 0:38:54 0:31:07 0:07:47 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/fs/test_o_trunc}} 3
pass 7020031 2022-09-08 06:59:11 2022-09-08 09:38:32 2022-09-08 10:17:31 0:38:59 0:27:56 0:11:03 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/damage} 2
fail 7020032 2022-09-08 06:59:12 2022-09-08 09:38:33 2022-09-08 16:17:02 6:38:29 6:23:26 0:15:03 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi103 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 7020033 2022-09-08 06:59:13 2022-09-08 09:40:33 2022-09-08 10:31:42 0:51:09 0:37:20 0:13:49 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7020034 2022-09-08 06:59:14 2022-09-08 09:42:44 2022-09-08 10:25:23 0:42:39 0:29:08 0:13:31 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/data-scan} 2
pass 7020035 2022-09-08 06:59:15 2022-09-08 09:43:25 2022-09-08 10:21:10 0:37:45 0:24:52 0:12:53 smithi main rhel 8.6 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/iozone}} 2
pass 7020036 2022-09-08 06:59:15 2022-09-08 09:45:35 2022-09-08 10:30:57 0:45:22 0:35:01 0:10:21 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/no standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/suites/iogen}} 3
fail 7020037 2022-09-08 06:59:16 2022-09-08 09:45:46 2022-09-08 10:34:01 0:48:15 0:39:30 0:08:45 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/exports} 2
Failure Reason:

"1662631423.6793084 mon.a (mon.0) 851 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020038 2022-09-08 06:59:17 2022-09-08 09:46:56 2022-09-08 10:20:48 0:33:52 0:22:37 0:11:15 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
fail 7020039 2022-09-08 06:59:18 2022-09-08 09:47:17 2022-09-08 10:17:24 0:30:07 0:17:35 0:12:32 smithi main ubuntu 20.04 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

"1662631724.4405782 mon.a (mon.0) 189 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020040 2022-09-08 06:59:19 2022-09-08 09:47:17 2022-09-08 10:10:29 0:23:12 0:11:22 0:11:50 smithi main centos 8.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/ino_release_cb} 2
pass 7020041 2022-09-08 06:59:20 2022-09-08 09:48:48 2022-09-08 10:19:24 0:30:36 0:13:51 0:16:45 smithi main centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 7020042 2022-09-08 06:59:21 2022-09-08 09:54:19 2022-09-08 10:25:19 0:31:00 0:15:35 0:15:25 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/forward-scrub} 2
fail 7020043 2022-09-08 06:59:22 2022-09-08 09:55:39 2022-09-08 11:08:09 1:12:30 1:07:17 0:05:13 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
Failure Reason:

"1662631767.4216378 mon.a (mon.0) 375 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020044 2022-09-08 06:59:23 2022-09-08 09:55:40 2022-09-08 10:24:58 0:29:18 0:18:28 0:10:50 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/fragment} 2
pass 7020045 2022-09-08 06:59:24 2022-09-08 09:56:00 2022-09-08 10:43:20 0:47:20 0:35:52 0:11:28 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7020046 2022-09-08 06:59:25 2022-09-08 09:56:21 2022-09-08 11:50:06 1:53:45 1:41:31 0:12:14 smithi main ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} centos_8 clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/1 tasks/fsstress validater/valgrind} 2
Failure Reason:

"1662632958.6771314 mon.a (mon.0) 295 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020047 2022-09-08 06:59:26 2022-09-08 09:56:31 2022-09-08 10:37:51 0:41:20 0:29:35 0:11:45 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/suites/iozone}} 3
fail 7020048 2022-09-08 06:59:27 2022-09-08 09:56:41 2022-09-08 10:55:36 0:58:55 0:43:21 0:15:34 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
Failure Reason:

"1662632483.0572038 mon.a (mon.0) 10 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020049 2022-09-08 06:59:28 2022-09-08 10:01:42 2022-09-08 10:24:11 0:22:29 0:14:47 0:07:42 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/journal-repair} 2
pass 7020050 2022-09-08 06:59:28 2022-09-08 10:03:13 2022-09-08 10:55:23 0:52:10 0:35:19 0:16:51 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
pass 7020051 2022-09-08 06:59:29 2022-09-08 10:10:38 2022-09-08 10:40:39 0:30:01 0:18:32 0:11:29 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds-flush} 2
fail 7020052 2022-09-08 06:59:30 2022-09-08 10:11:09 2022-09-08 10:25:31 0:14:22 smithi main fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=a127ffb7df2a46d9c720771310f2b324a2af81f2

fail 7020053 2022-09-08 06:59:31 2022-09-08 10:13:09 2022-09-08 13:50:23 3:37:14 3:27:07 0:10:07 smithi main rhel 8.6 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench traceless/50pc} 2
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi062 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

dead 7020054 2022-09-08 06:59:32 2022-09-08 10:13:10 2022-09-08 22:39:02 12:25:52 smithi main rhel 8.6 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
Failure Reason:

hit max job timeout

fail 7020055 2022-09-08 06:59:33 2022-09-08 10:13:20 2022-09-08 10:49:40 0:36:20 0:16:43 0:19:37 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/mdtest} 5
Failure Reason:

"1662633697.2372713 mon.a (mon.0) 183 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020056 2022-09-08 06:59:34 2022-09-08 10:17:31 2022-09-08 10:55:06 0:37:35 0:23:33 0:14:02 smithi main ubuntu 20.04 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} 2
fail 7020057 2022-09-08 06:59:35 2022-09-08 10:17:32 2022-09-08 11:22:39 1:05:07 0:51:34 0:13:33 smithi main centos 8.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

"1662633446.3741665 mon.a (mon.0) 118 : cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 7020058 2022-09-08 06:59:36 2022-09-08 10:19:32 2022-09-08 10:58:26 0:38:54 0:25:08 0:13:46 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/no standby-replay subvolume/{with-no-extra-options} tasks/{0-check-counter workunit/suites/pjd}} 3
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi084 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 7020059 2022-09-08 06:59:37 2022-09-08 10:20:53 2022-09-08 11:04:38 0:43:45 0:30:22 0:13:23 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds-full} 2
pass 7020060 2022-09-08 06:59:38 2022-09-08 10:21:03 2022-09-08 11:10:45 0:49:42 0:37:50 0:11:52 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7020061 2022-09-08 06:59:39 2022-09-08 10:21:14 2022-09-08 13:19:51 2:58:37 2:43:47 0:14:50 smithi main ubuntu 20.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} 2
Failure Reason:

"1662633995.3970559 mon.a (mon.0) 309 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020062 2022-09-08 06:59:40 2022-09-08 10:24:15 2022-09-08 10:40:07 0:15:52 0:08:32 0:07:20 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/mds_creation_retry} 2
fail 7020063 2022-09-08 06:59:41 2022-09-08 10:25:05 2022-09-08 16:59:28 6:34:23 6:20:48 0:13:35 smithi main centos 8.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi061 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

fail 7020064 2022-09-08 06:59:42 2022-09-08 10:25:25 2022-09-08 16:59:13 6:33:48 6:20:38 0:13:10 smithi main centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi169 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 7020065 2022-09-08 06:59:43 2022-09-08 10:25:26 2022-09-08 11:01:02 0:35:36 0:20:57 0:14:39 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
fail 7020066 2022-09-08 06:59:44 2022-09-08 10:25:36 2022-09-08 11:17:13 0:51:37 0:37:34 0:14:03 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/metrics} 2
Failure Reason:

"1662635021.1772397 mon.a (mon.0) 1935 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 2 pgs peering (PG_AVAILABILITY)" in cluster log

pass 7020067 2022-09-08 06:59:45 2022-09-08 10:26:07 2022-09-08 10:48:06 0:21:59 0:10:18 0:11:41 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
pass 7020068 2022-09-08 06:59:46 2022-09-08 10:32:00 2022-09-08 11:01:11 0:29:11 0:21:01 0:08:10 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/yes standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/direct_io}} 3
fail 7020069 2022-09-08 06:59:46 2022-09-08 10:34:10 2022-09-08 11:06:02 0:31:52 0:18:04 0:13:48 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/multimds_misc} 2
Failure Reason:

"1662634729.5018916 mon.a (mon.0) 705 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 7020070 2022-09-08 06:59:47 2022-09-08 10:34:41 2022-09-08 11:09:02 0:34:21 0:22:35 0:11:46 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
pass 7020071 2022-09-08 06:59:48 2022-09-08 10:35:52 2022-09-08 11:01:42 0:25:50 0:14:13 0:11:37 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
pass 7020072 2022-09-08 06:59:49 2022-09-08 10:36:42 2022-09-08 10:55:10 0:18:28 0:09:45 0:08:43 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/openfiletable} 2
pass 7020073 2022-09-08 06:59:50 2022-09-08 10:37:53 2022-09-08 11:35:22 0:57:29 0:45:18 0:12:11 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/no standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/fs/misc}} 3
pass 7020074 2022-09-08 06:59:51 2022-09-08 10:40:14 2022-09-08 11:04:41 0:24:27 0:12:35 0:11:52 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/pool-perm} 2
pass 7020075 2022-09-08 06:59:52 2022-09-08 10:40:45 2022-09-08 11:28:30 0:47:45 0:35:26 0:12:19 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7020076 2022-09-08 06:59:53 2022-09-08 10:43:26 2022-09-08 11:09:03 0:25:37 0:14:19 0:11:18 smithi main centos 8.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_fsstress}} 2
Failure Reason:

"1662634901.0116212 mon.b (mon.1) 18 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

fail 7020077 2022-09-08 06:59:54 2022-09-08 10:48:07 2022-09-08 11:49:47 1:01:40 0:45:56 0:15:44 smithi main ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} centos_8 clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/dbench validater/lockdep} 2
Failure Reason:

"1662635578.8729296 mon.a (mon.0) 194 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020078 2022-09-08 06:59:55 2022-09-08 10:48:17 2022-09-08 11:15:59 0:27:42 0:13:01 0:14:41 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/quota} 2
fail 7020079 2022-09-08 06:59:56 2022-09-08 10:49:17 2022-09-08 17:26:01 6:36:44 6:20:50 0:15:54 smithi main ubuntu 20.04 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi104 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

fail 7020080 2022-09-08 06:59:57 2022-09-08 10:49:48 2022-09-08 11:45:24 0:55:36 0:41:51 0:13:45 smithi main centos 8.stream fs/cephadm/renamevolume/{0-start 1-rename distro/single-container-host overrides/ignorelist_health} 2
Failure Reason:

Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 pull'

fail 7020081 2022-09-08 06:59:58 2022-09-08 10:49:48 2022-09-08 11:16:43 0:26:55 0:13:18 0:13:37 smithi main centos 8.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs/{frag test}} 2
Failure Reason:

"1662635385.6716373 mon.a (mon.0) 121 : cluster [WRN] Health check failed: Degraded data redundancy: 1/4 objects degraded (25.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 7020082 2022-09-08 06:59:59 2022-09-08 10:49:49 2022-09-08 12:07:09 1:17:20 0:59:52 0:17:28 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cephfs_misc_tests} 4
pass 7020083 2022-09-08 07:00:00 2022-09-08 10:55:20 2022-09-08 11:56:52 1:01:32 0:50:30 0:11:02 smithi main ubuntu 20.04 fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/failover} 2
fail 7020084 2022-09-08 07:00:01 2022-09-08 10:55:30 2022-09-08 17:27:50 6:32:20 6:20:05 0:12:15 smithi main centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi066 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

fail 7020085 2022-09-08 07:00:02 2022-09-08 10:55:41 2022-09-08 11:57:45 1:02:04 0:48:40 0:13:24 smithi main ubuntu 20.04 fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi084 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'

fail 7020086 2022-09-08 07:00:03 2022-09-08 13:08:15 2022-09-08 19:51:05 6:42:50 6:27:45 0:15:05 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi090 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 7020087 2022-09-08 07:00:04 2022-09-08 13:08:15 2022-09-08 13:50:06 0:41:51 0:28:26 0:13:25 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
pass 7020088 2022-09-08 07:00:05 2022-09-08 13:08:56 2022-09-08 13:25:03 0:16:07 0:08:17 0:07:50 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/recovery-fs} 2
fail 7020089 2022-09-08 07:00:06 2022-09-08 13:09:46 2022-09-08 14:28:53 1:19:07 1:11:08 0:07:59 smithi main centos 8.stream fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_dbench_iozone} 2
Failure Reason:

"1662643673.8083873 mon.a (mon.0) 275 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

fail 7020090 2022-09-08 07:00:07 2022-09-08 13:11:07 2022-09-08 16:44:39 3:33:32 3:20:03 0:13:29 smithi main ubuntu 20.04 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_ffsb traceless/50pc} 2
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi078 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

fail 7020091 2022-09-08 07:00:08 2022-09-08 13:12:57 2022-09-08 16:36:19 3:23:22 3:11:14 0:12:08 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/default} scrub/yes standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: reached maximum tries (30) after waiting for 900 seconds

fail 7020092 2022-09-08 07:00:09 2022-09-08 13:12:58 2022-09-08 13:53:12 0:40:14 0:28:49 0:11:25 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/scrub} 2
Failure Reason:

"1662644488.6350436 mon.a (mon.0) 344 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

fail 7020093 2022-09-08 07:00:10 2022-09-08 13:14:08 2022-09-08 14:58:49 1:44:41 1:34:53 0:09:48 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason:

"1662644271.7616339 mon.a (mon.0) 1488 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020094 2022-09-08 07:00:11 2022-09-08 13:17:49 2022-09-08 13:38:34 0:20:45 0:10:46 0:09:59 smithi main centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_latest conf/{client mds mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-pacific 1-upgrade}} 1
pass 7020095 2022-09-08 07:00:12 2022-09-08 13:17:50 2022-09-08 13:53:00 0:35:10 0:23:16 0:11:54 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/sessionmap} 2
pass 7020096 2022-09-08 07:00:13 2022-09-08 13:18:00 2022-09-08 14:05:33 0:47:33 0:37:40 0:09:53 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/1 scrub/no standby-replay subvolume/{with-namespace-isolated-and-quota} tasks/{0-check-counter workunit/suites/blogbench}} 3
fail 7020097 2022-09-08 07:00:14 2022-09-08 13:18:10 2022-09-08 13:54:42 0:36:32 0:24:44 0:11:48 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

"1662644441.0481427 mon.a (mon.0) 188 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020098 2022-09-08 07:00:15 2022-09-08 13:19:31 2022-09-08 13:43:07 0:23:36 0:11:39 0:11:57 smithi main centos 8.stream fs/upgrade/upgraded_client/from_nautilus/{bluestore-bitmap centos_latest clusters/{1-mds-1-client-micro} conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-nautilus 1-client-upgrade 2-client-sanity}} 2
fail 7020099 2022-09-08 07:00:16 2022-09-08 13:19:31 2022-09-08 13:49:05 0:29:34 0:17:14 0:12:20 smithi main ubuntu 20.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_pjd}} 2
Failure Reason:

"1662644361.6864047 mon.a (mon.0) 372 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

fail 7020100 2022-09-08 07:00:17 2022-09-08 13:19:42 2022-09-08 13:42:52 0:23:10 0:12:04 0:11:06 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/pacific 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi063 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/daemon-base:latest-pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8828e502-2f7b-11ed-8431-001a4aab830c -- ceph mon dump -f json'

fail 7020101 2022-09-08 07:00:18 2022-09-08 13:19:52 2022-09-08 13:59:06 0:39:14 0:25:03 0:14:11 smithi main rhel 8.6 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

"1662645040.1051297 mon.a (mon.0) 304 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020102 2022-09-08 07:00:19 2022-09-08 13:22:03 2022-09-08 13:48:10 0:26:07 0:14:35 0:11:32 smithi main ubuntu 20.04 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 7020103 2022-09-08 07:00:20 2022-09-08 13:23:03 2022-09-08 14:12:02 0:48:59 0:37:41 0:11:18 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/snapshots} 2
pass 7020104 2022-09-08 07:00:21 2022-09-08 13:23:14 2022-09-08 15:30:34 2:07:20 2:00:45 0:06:35 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/3 replication/always} scrub/yes standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/dbench}} 3
fail 7020105 2022-09-08 07:00:22 2022-09-08 13:23:34 2022-09-08 19:58:00 6:34:26 6:23:16 0:11:10 smithi main ubuntu 20.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/no races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi119 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 7020106 2022-09-08 07:00:23 2022-09-08 13:24:15 2022-09-08 13:56:47 0:32:32 0:25:52 0:06:40 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/strays} 2
pass 7020107 2022-09-08 07:00:24 2022-09-08 13:24:25 2022-09-08 13:58:08 0:33:43 0:20:55 0:12:48 smithi main centos 8.stream fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
pass 7020108 2022-09-08 07:00:25 2022-09-08 13:24:45 2022-09-08 13:50:49 0:26:04 0:12:00 0:14:04 smithi main ubuntu 20.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/ior-shared-file} 5
pass 7020109 2022-09-08 07:00:26 2022-09-08 13:24:56 2022-09-08 13:50:25 0:25:29 0:19:04 0:06:25 smithi main centos 8.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug} tasks/multifs-auth} 2
fail 7020110 2022-09-08 07:00:27 2022-09-08 13:25:06 2022-09-08 14:06:05 0:40:59 0:34:39 0:06:20 smithi main centos 8.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

"1662644331.3293333 mon.a (mon.0) 204 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

fail 7020111 2022-09-08 07:00:28 2022-09-08 13:25:07 2022-09-08 20:07:42 6:42:35 6:29:01 0:13:34 smithi main ubuntu 20.04 fs/verify/{begin/{0-install 1-ceph 2-logrotate} centos_8 clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/5 tasks/fsstress validater/valgrind} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi197 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

fail 7020112 2022-09-08 07:00:29 2022-09-08 13:25:47 2022-09-08 13:55:15 0:29:28 0:17:43 0:11:45 smithi main ubuntu 20.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/test_journal_migration} 2
Failure Reason:

"1662644869.3763356 mon.a (mon.0) 408 : cluster [WRN] Health check failed: 1 clients failing to respond to capability release (MDS_CLIENT_LATE_RELEASE)" in cluster log

pass 7020113 2022-09-08 07:00:29 2022-09-08 13:25:58 2022-09-08 13:43:21 0:17:23 0:10:06 0:07:17 smithi main centos 8.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
pass 7020114 2022-09-08 07:00:30 2022-09-08 13:26:58 2022-09-08 14:12:15 0:45:17 0:32:53 0:12:24 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/default} scrub/no standby-replay subvolume/{no-subvolume} tasks/{0-check-counter workunit/suites/ffsb}} 3
fail 7020115 2022-09-08 07:00:31 2022-09-08 13:28:49 2022-09-08 13:55:02 0:26:13 0:12:08 0:14:05 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi039 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 200fb3ae-2f7d-11ed-8431-001a4aab830c -- ceph mon dump -f json'

fail 7020116 2022-09-08 07:00:32 2022-09-08 13:31:19 2022-09-08 13:48:24 0:17:05 smithi main fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=a127ffb7df2a46d9c720771310f2b324a2af81f2

fail 7020117 2022-09-08 07:00:33 2022-09-08 13:38:41 2022-09-08 20:18:50 6:40:09 6:22:58 0:17:11 smithi main ubuntu 20.04 fs/traceless/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi043 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 7020118 2022-09-08 07:00:34 2022-09-08 13:50:14 2022-09-08 14:19:10 0:28:56 0:19:03 0:09:53 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/truncate_delay} 2
pass 7020119 2022-09-08 07:00:35 2022-09-08 13:50:24 2022-09-08 14:15:52 0:25:28 0:12:58 0:12:30 smithi main centos 8.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_trivial_sync}} 2
pass 7020120 2022-09-08 07:00:36 2022-09-08 13:50:34 2022-09-08 14:08:27 0:17:53 0:09:57 0:07:56 smithi main centos 8.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_latest k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/workunit/dir-max-entries} 2
pass 7020121 2022-09-08 07:00:37 2022-09-08 13:50:55 2022-09-08 14:30:10 0:39:15 0:26:36 0:12:39 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_latest clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
pass 7020122 2022-09-08 07:00:38 2022-09-08 13:50:55 2022-09-08 14:36:11 0:45:16 0:31:41 0:13:35 smithi main centos 8.stream fs/workload/{0-centos_8 begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{export-check n/5 replication/always} scrub/yes standby-replay subvolume/{with-namespace-isolated} tasks/{0-check-counter workunit/fs/norstats}} 3
fail 7020123 2022-09-08 07:00:39 2022-09-08 13:53:16 2022-09-08 14:33:42 0:40:26 0:27:16 0:13:10 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/workunit/quota} 2
fail 7020124 2022-09-08 07:00:40 2022-09-08 13:54:46 2022-09-08 20:35:10 6:40:24 6:28:17 0:12:07 smithi main rhel 8.6 fs/32bits/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi027 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 7020125 2022-09-08 07:00:41 2022-09-08 13:55:07 2022-09-08 14:18:17 0:23:10 0:11:23 0:11:47 smithi main centos 8.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/libcephfs_python} 2
fail 7020126 2022-09-08 07:00:42 2022-09-08 13:55:17 2022-09-08 20:28:27 6:33:10 6:20:32 0:12:38 smithi main ubuntu 20.04 fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi032 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66b52ac0fc1f2bc1d633b5c0480f3c59f4e91139 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'