Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6250241 2021-07-02 18:59:10 2021-07-02 19:01:08 2021-07-02 20:38:43 1:37:35 1:27:49 0:09:46 smithi master ubuntu 20.04 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/failover} 2
fail 6250242 2021-07-02 18:59:11 2021-07-02 19:01:08 2021-07-02 19:36:49 0:35:41 0:22:16 0:13:25 smithi master centos 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds-full} 2
Failure Reason:

Test failure: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)

pass 6250243 2021-07-02 18:59:12 2021-07-02 19:01:08 2021-07-02 20:40:19 1:39:11 1:29:59 0:09:12 smithi master centos 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
fail 6250244 2021-07-02 18:59:13 2021-07-02 19:01:09 2021-07-02 20:01:11 1:00:02 0:46:51 0:13:11 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi001 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=407f59d9cfc6d20bff6b77f1a87ac4cef39b3e57 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'

fail 6250245 2021-07-02 18:59:14 2021-07-02 19:01:39 2021-07-02 20:04:55 1:03:16 0:52:50 0:10:26 smithi master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} 3
Failure Reason:

Command failed on smithi059 with status 1: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'

fail 6250246 2021-07-02 18:59:15 2021-07-02 19:01:40 2021-07-02 19:27:41 0:26:01 0:18:01 0:08:00 smithi master rhel 8.4 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds-full} 2
Failure Reason:

Test failure: test_full_fsync (tasks.cephfs.test_full.TestClusterFull)

fail 6250247 2021-07-02 18:59:16 2021-07-02 19:02:30 2021-07-02 19:55:16 0:52:46 0:42:51 0:09:55 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
Failure Reason:

"2021-07-02T19:33:40.389134+0000 mds.l (mds.1) 8 : cluster [WRN] Scrub error on inode 0x20000002ebe (/client.0/tmp/blogbench-1.0/src/blogtest_in/blog-77) see mds.l log and `damage ls` output for details" in cluster log