User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
pdonnell | 2021-11-10 14:59:40 | 2021-11-10 15:05:08 | 2021-11-10 23:10:01 | 8:04:53 | fs | wip-pdonnell-testing-20211109.180315 | smithi | 944ff3a | 14 | 12 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 6496367 | | 2021-11-10 14:59:46 | 2021-11-10 15:05:08 | 2021-11-10 15:30:39 | 0:25:31 | 0:11:55 | 0:13:36 | smithi | master | centos | 8.3 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-readahead} | 2 |
Failure Reason: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
fail | 6496368 | | 2021-11-10 14:59:47 | 2021-11-10 15:05:08 | 2021-11-10 18:33:25 | 3:28:17 | 3:16:16 | 0:12:01 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} | 3 |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on smithi007 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=944ff3aec61bad509f93d2743d64bd08bdc242d8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
pass | 6496369 | | 2021-11-10 14:59:48 | 2021-11-10 15:05:38 | 2021-11-10 15:29:24 | 0:23:46 | 0:12:50 | 0:10:56 | smithi | master | centos | 8.3 | fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} | 2 |
pass | 6496370 | | 2021-11-10 14:59:49 | 2021-11-10 15:05:59 | 2021-11-10 15:39:35 | 0:33:36 | 0:27:38 | 0:05:58 | smithi | master | rhel | 8.4 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/data-scan} | 2 |
fail | 6496371 | | 2021-11-10 14:59:50 | 2021-11-10 15:06:09 | 2021-11-10 17:45:51 | 2:39:42 | 2:33:19 | 0:06:23 | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 |
Failure Reason: "2021-11-10T15:36:25.971741+0000 mds.f (mds.0) 29 : cluster [WRN] client.4544 isn't responding to mclientcaps(revoke), ino 0x100000052da pending pAsxXsxFc issued pAsxXsxFxcwb, sent 300.037412 seconds ago" in cluster log
fail | 6496372 | | 2021-11-10 14:59:51 | 2021-11-10 15:06:29 | 2021-11-10 15:33:04 | 0:26:35 | 0:14:17 | 0:12:18 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{yes}} | 3 |
Failure Reason: "2021-11-10T15:26:51.546596+0000 mon.a (mon.0) 181 : cluster [WRN] Health check failed: Degraded data redundancy: 1/4 objects degraded (25.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
pass | 6496373 | | 2021-11-10 14:59:52 | 2021-11-10 15:08:40 | 2021-11-10 15:48:37 | 0:39:57 | 0:28:44 | 0:11:13 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} | 3 |
pass | 6496374 | | 2021-11-10 14:59:53 | 2021-11-10 15:09:11 | 2021-11-10 16:39:18 | 1:30:07 | 1:17:23 | 0:12:44 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{no}} | 3 |
pass | 6496375 | | 2021-11-10 14:59:54 | 2021-11-10 15:10:41 | 2021-11-10 15:40:40 | 0:29:59 | 0:21:20 | 0:08:39 | smithi | master | rhel | 8.4 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/metrics} | 2 |
pass | 6496376 | | 2021-11-10 14:59:55 | 2021-11-10 15:13:02 | 2021-11-10 16:06:51 | 0:53:49 | 0:41:14 | 0:12:35 | smithi | master | centos | 8.3 | fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/clone}} | 2 |
pass | 6496377 | | 2021-11-10 14:59:56 | 2021-11-10 15:13:22 | 2021-11-10 16:28:08 | 1:14:46 | 1:03:13 | 0:11:33 | smithi | master | ubuntu | 20.04 | fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/fs/snaps}} | 2 |
fail | 6496378 | | 2021-11-10 14:59:57 | 2021-11-10 15:14:23 | 2021-11-10 23:10:01 | 7:55:38 | 7:44:52 | 0:10:46 | smithi | master | centos | 8.3 | fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} | 2 |
Failure Reason: wait_for_recovery: failed before timeout expired
pass | 6496379 | | 2021-11-10 14:59:58 | 2021-11-10 15:14:23 | 2021-11-10 16:20:34 | 1:06:11 | 0:56:08 | 0:10:03 | smithi | master | ubuntu | 20.04 | fs/verify/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu/{latest overrides}} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/dbench validater/valgrind} | 2 |
pass | 6496380 | | 2021-11-10 14:59:59 | 2021-11-10 15:14:23 | 2021-11-10 15:44:13 | 0:29:50 | 0:13:40 | 0:16:10 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/direct_io} wsync/{no}} | 3 |
fail | 6496381 | | 2021-11-10 15:00:00 | 2021-11-10 15:18:34 | 2021-11-10 16:40:32 | 1:21:58 | 1:10:57 | 0:11:01 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} | 3 |
Failure Reason: Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi053 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=944ff3aec61bad509f93d2743d64bd08bdc242d8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'
pass | 6496382 | | 2021-11-10 15:00:01 | 2021-11-10 15:18:35 | 2021-11-10 16:16:41 | 0:58:06 | 0:48:40 | 0:09:26 | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} | 2 |
fail | 6496383 | | 2021-11-10 15:00:02 | 2021-11-10 15:20:55 | 2021-11-10 15:49:50 | 0:28:55 | 0:16:09 | 0:12:46 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} | 3 |
Failure Reason: Command failed (workunit test suites/pjd.sh) on smithi129 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=944ff3aec61bad509f93d2743d64bd08bdc242d8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
pass | 6496384 | | 2021-11-10 15:00:03 | 2021-11-10 15:21:26 | 2021-11-10 16:25:28 | 1:04:02 | 0:49:17 | 0:14:45 | smithi | master | centos | 8.3 | fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/failover} | 2 |
pass | 6496385 | | 2021-11-10 15:00:04 | 2021-11-10 15:22:16 | 2021-11-10 16:12:28 | 0:50:12 | 0:38:04 | 0:12:08 | smithi | master | centos | 8.3 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} ms_mode/{crc} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} | 3 |
pass | 6496386 | | 2021-11-10 15:00:05 | 2021-11-10 15:23:17 | 2021-11-10 16:06:29 | 0:43:12 | 0:27:54 | 0:15:18 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} | 3 |
fail | 6496387 | | 2021-11-10 15:00:06 | 2021-11-10 15:23:47 | 2021-11-10 16:54:45 | 1:30:58 | 1:20:54 | 0:10:04 | smithi | master | rhel | 8.4 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} | 3 |
Failure Reason: Command failed (workunit test suites/ffsb.sh) on smithi027 with status 135: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=944ff3aec61bad509f93d2743d64bd08bdc242d8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
fail | 6496388 | | 2021-11-10 15:00:07 | 2021-11-10 15:25:18 | 2021-11-10 15:51:35 | 0:26:17 | 0:10:33 | 0:15:44 | smithi | master | ubuntu | 20.04 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-readahead} | 2 |
Failure Reason: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
fail | 6496389 | | 2021-11-10 15:00:08 | 2021-11-10 15:25:28 | 2021-11-10 15:58:15 | 0:32:47 | 0:20:50 | 0:11:57 | smithi | master | ubuntu | 20.04 | fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-recovery} | 2 |
Failure Reason: Test failure: test_mount_after_evicted_client (tasks.cephfs.test_client_recovery.TestClientRecovery)
fail | 6496390 | | 2021-11-10 15:00:09 | 2021-11-10 15:26:18 | 2021-11-10 16:35:28 | 1:09:10 | 0:54:50 | 0:14:20 | smithi | master | ubuntu | 20.04 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} ms_mode/{legacy} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} | 3 |
Failure Reason: Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi013 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=944ff3aec61bad509f93d2743d64bd08bdc242d8 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'
fail | 6496391 | | 2021-11-10 15:00:10 | 2021-11-10 15:27:19 | 2021-11-10 16:04:56 | 0:37:37 | 0:29:06 | 0:08:31 | smithi | master | rhel | 8.4 | fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/{secure} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} | 3 |
Failure Reason: "2021-11-10T15:55:43.393631+0000 mds.c (mds.0) 56 : cluster [WRN] Scrub error on inode 0x100000001fe (/client.0/tmp/blogbench-1.0/src) see mds.c log and `damage ls` output for details" in cluster log
pass | 6496392 | | 2021-11-10 15:00:11 | 2021-11-10 15:29:30 | 2021-11-10 15:52:41 | 0:23:11 | 0:17:12 | 0:05:59 | smithi | master | rhel | 8.4 | fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8.stream} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/pjd}} | 2 |