Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 5856056 2021-02-04 14:03:54 2021-02-04 14:06:05 2021-02-04 15:46:42 1:40:37 1:30:13 0:10:24 gibba master rhel 8.3 fs/mixed-clients/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{frag_enable osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_dbench_iozone} 2
fail 5856057 2021-02-04 14:03:55 2021-02-04 14:06:06 2021-02-04 15:06:44 1:00:38 0:43:37 0:17:01 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/fs/misc}} 3
Failure Reason:

"2021-02-04T14:33:18.256442+0000 mds.c (mds.0) 14 : cluster [WRN] Scrub error on inode 0x10000000264 (/client.0/tmp/testdir/dir1/dir2/dir3) see mds.c log and `damage ls` output for details" in cluster log

pass 5856058 2021-02-04 14:03:56 2021-02-04 14:09:57 2021-02-04 15:07:34 0:57:37 0:45:24 0:12:13 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
pass 5856059 2021-02-04 14:03:57 2021-02-04 14:10:48 2021-02-04 15:16:37 1:05:49 0:49:12 0:16:37 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/ffsb}} 3
fail 5856060 2021-02-04 14:03:57 2021-02-04 14:13:49 2021-02-04 14:40:40 0:26:51 0:14:10 0:12:41 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on gibba009 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7b82115b49a9d68d8ec2201588f5d21ec6e4a029 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 5856061 2021-02-04 14:03:58 2021-02-04 14:13:49 2021-02-04 14:40:57 0:27:08 0:12:26 0:14:42 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester}} 3
pass 5856062 2021-02-04 14:03:59 2021-02-04 14:16:10 2021-02-04 14:58:10 0:42:00 0:27:27 0:14:33 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/admin} 2
fail 5856063 2021-02-04 14:04:00 2021-02-04 14:19:23 2021-02-04 14:50:44 0:31:21 0:14:04 0:17:17 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/pjd}} 3
Failure Reason:

"2021-02-04T14:47:20.297892+0000 mds.c (mds.0) 19 : cluster [WRN] Scrub error on inode 0x1000000034d (/client.0/tmp/tmp/fstest_b0499185b4d39e9e2f627343a96e234a) see mds.c log and `damage ls` output for details" in cluster log

pass 5856064 2021-02-04 14:04:01 2021-02-04 14:23:54 2021-02-04 14:47:20 0:23:26 0:11:38 0:11:48 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
fail 5856065 2021-02-04 14:04:01 2021-02-04 14:24:34 2021-02-04 16:48:18 2:23:44 2:09:20 0:14:24 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/blogbench}} 3
Failure Reason:

"2021-02-04T14:48:31.904943+0000 mds.e (mds.0) 14 : cluster [WRN] Scrub error on inode 0x100000001fc (/client.0/tmp/blogbench-1.0/src) see mds.e log and `damage ls` output for details" in cluster log

pass 5856066 2021-02-04 14:04:02 2021-02-04 14:25:25 2021-02-04 15:18:12 0:52:47 0:40:56 0:11:51 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
pass 5856067 2021-02-04 14:04:03 2021-02-04 14:25:25 2021-02-04 15:07:49 0:42:24 0:14:57 0:27:27 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/fsstress}} 3
pass 5856068 2021-02-04 14:04:04 2021-02-04 14:41:01 2021-02-04 16:05:45 1:24:44 1:05:30 0:19:14 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/iogen}} 3
pass 5856069 2021-02-04 14:04:05 2021-02-04 14:46:53 2021-02-04 15:11:12 0:24:19 0:13:33 0:10:46 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
pass 5856070 2021-02-04 14:04:06 2021-02-04 14:46:53 2021-02-04 15:18:11 0:31:18 0:19:04 0:12:14 gibba master rhel 8.3 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_fsstress}} 2
fail 5856071 2021-02-04 14:04:06 2021-02-04 14:47:24 2021-02-04 15:33:45 0:46:21 0:32:14 0:14:07 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/fs/misc}} 3
Failure Reason:

"2021-02-04T15:13:25.193753+0000 mds.f (mds.1) 4 : cluster [WRN] Scrub error on inode 0x10000000264 (/client.0/tmp/testdir/dir1/dir2/dir3) see mds.f log and `damage ls` output for details" in cluster log

pass 5856072 2021-02-04 14:04:07 2021-02-04 14:49:07 2021-02-04 15:33:26 0:44:19 0:29:22 0:14:57 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/ffsb}} 3
pass 5856073 2021-02-04 14:04:08 2021-02-04 14:50:47 2021-02-04 15:21:02 0:30:15 0:10:46 0:19:29 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
pass 5856074 2021-02-04 14:04:09 2021-02-04 14:58:19 2021-02-04 15:31:29 0:33:10 0:12:09 0:21:01 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester}} 3
pass 5856075 2021-02-04 14:04:10 2021-02-04 15:06:50 2021-02-04 16:00:21 0:53:31 0:40:35 0:12:56 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2