Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6247058 2021-07-01 06:50:22 2021-07-01 06:54:22 2021-07-01 07:29:20 0:34:58 0:19:58 0:15:00 smithi master centos 8.3 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/basic}} 2
Failure Reason: Test failure: test_subvolume_group_create_with_desired_mode (tasks.cephfs.test_volumes.TestSubvolumeGroups)

fail 6247059 2021-07-01 06:50:23 2021-07-01 06:54:23 2021-07-01 07:22:16 0:27:53 0:16:40 0:11:13 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{yes}} 3
Failure Reason: "2021-07-01T07:13:45.570825+0000 mon.a (mon.0) 152 : cluster [WRN] Health check failed: Degraded data redundancy: 1/4 objects degraded (25.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 6247060 2021-07-01 06:50:24 2021-07-01 06:54:24 2021-07-01 07:26:17 0:31:53 0:16:59 0:14:54 smithi master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{no}} 3
Failure Reason: "2021-07-01T07:20:35.399942+0000 mon.a (mon.0) 146 : cluster [WRN] Health check failed: Degraded data redundancy: 1/4 objects degraded (25.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

dead 6247061 2021-07-01 06:50:25 2021-07-01 06:54:24 2021-07-01 07:09:28 0:15:04 smithi master ubuntu 20.04 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/clone}} 2
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 6247062 2021-07-01 06:50:27 2021-07-01 06:54:25 2021-07-01 12:08:23 5:13:58 5:07:35 0:06:23 smithi master rhel 8.4 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
Failure Reason: "2021-07-01T07:19:37.721698+0000 mds.f (mds.0) 27 : cluster [WRN] client.4723 isn't responding to mclientcaps(revoke), ino 0x100000012d9 pending pAsLsXsFsc issued pAsLsXsFscb, sent 300.425862 seconds ago" in cluster log

fail 6247063 2021-07-01 06:50:28 2021-07-01 06:55:06 2021-07-01 07:37:21 0:42:15 0:27:06 0:15:09 smithi master centos 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/data-scan} 2
Failure Reason: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)

pass 6247064 2021-07-01 06:50:29 2021-07-01 06:56:16 2021-07-01 07:23:29 0:27:13 0:16:35 0:10:38 smithi master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} 3
fail 6247065 2021-07-01 06:50:30 2021-07-01 06:56:17 2021-07-01 07:24:37 0:28:20 0:17:29 0:10:51 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} 3
Failure Reason: "2021-07-01T07:15:09.862717+0000 mon.a (mon.0) 162 : cluster [WRN] Health check failed: Degraded data redundancy: 1/4 objects degraded (25.000%), 1 pg degraded (PG_DEGRADED)" in cluster log

pass 6247066 2021-07-01 06:50:31 2021-07-01 06:57:08 2021-07-01 08:50:45 1:53:37 1:44:51 0:08:46 smithi master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
pass 6247067 2021-07-01 06:50:33 2021-07-01 06:57:18 2021-07-01 07:19:19 0:22:01 0:14:51 0:07:10 smithi master rhel 8.4 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{yes}} 3
fail 6247068 2021-07-01 06:50:34 2021-07-01 06:57:49 2021-07-01 07:54:42 0:56:53 0:44:40 0:12:13 smithi master centos 8.3 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes standby-replay tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason: Command failed (workunit test fs/misc/multiple_rsync.sh) on smithi154 with status 23: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=89eb8f67424a5ebc79e4e6e54b81bc66cfdf8a07 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/multiple_rsync.sh'

pass 6247069 2021-07-01 06:50:35 2021-07-01 06:59:29 2021-07-01 08:29:57 1:30:28 1:18:55 0:11:33 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no standby-replay tasks/{0-check-counter workunit/suites/dbench} wsync/{no}} 3