Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6245983 2021-06-30 22:27:42 2021-06-30 22:30:40 2021-06-30 23:10:33 0:39:53 0:25:33 0:14:20 smithi master centos 8.3 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/snapshot}} 2
fail 6245984 2021-06-30 22:27:43 2021-06-30 22:30:40 2021-06-30 23:16:10 0:45:30 0:32:23 0:13:07 smithi master centos 8.3 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/basic}} 2
Failure Reason:

Test failure: test_authorize_deauthorize_legacy_subvolume (tasks.cephfs.test_volumes.TestSubvolumes)

pass 6245985 2021-06-30 22:27:44 2021-06-30 22:30:41 2021-06-30 22:50:17 0:19:36 0:12:03 0:07:33 smithi master rhel 8.4 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/cap-flush} 2
pass 6245986 2021-06-30 22:27:45 2021-06-30 22:30:41 2021-06-30 22:58:15 0:27:34 0:16:28 0:11:06 smithi master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{no}} 3
fail 6245987 2021-06-30 22:27:47 2021-06-30 22:30:42 2021-06-30 23:14:32 0:43:50 0:33:02 0:10:48 smithi master ubuntu 20.04 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/clone}} 2
Failure Reason:

Test failure: test_subvolume_snapshot_clone_retain_suid_guid (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)

fail 6245988 2021-06-30 22:27:48 2021-06-30 22:30:42 2021-06-30 23:01:27 0:30:45 0:18:34 0:12:11 smithi master centos 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/data-scan} 2
Failure Reason:

Test failure: test_parallel_execution (tasks.cephfs.test_data_scan.TestDataScan)

fail 6245989 2021-06-30 22:27:49 2021-06-30 22:30:43 2021-06-30 23:23:38 0:52:55 0:39:50 0:13:05 smithi master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} 3
Failure Reason:

Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi152 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f392a1ed50ecb2ed3af823ef9effa029ff6c14b3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'

pass 6245990 2021-06-30 22:27:50 2021-06-30 22:30:43 2021-06-30 23:05:28 0:34:45 0:25:50 0:08:55 smithi master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} 3
pass 6245991 2021-06-30 22:27:51 2021-06-30 22:30:44 2021-06-30 22:56:14 0:25:30 0:16:20 0:09:10 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} 3
pass 6245992 2021-06-30 22:27:52 2021-06-30 22:30:44 2021-06-30 22:57:33 0:26:49 0:20:55 0:05:54 smithi master rhel 8.4 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/snapshot}} 2
fail 6245993 2021-06-30 22:27:53 2021-06-30 22:30:46 2021-06-30 23:44:43 1:13:57 1:01:55 0:12:02 smithi master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

"2021-06-30T23:17:25.000686+0000 mds.i (mds.1) 52 : cluster [WRN] Scrub error on inode 0x10000007545 (/client.0/tmp/payload.1/multiple_rsync_payload.148701/firmware/dabusb/firmware.fw) see mds.i log and `damage ls` output for details" in cluster log