Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 5911347 2021-02-24 05:54:52 2021-02-24 06:09:05 2021-02-24 06:29:53 0:20:48 gibba master ubuntu 18.04 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911348 2021-02-24 05:54:52 2021-02-24 06:14:02 2021-02-24 06:35:53 0:21:51 0:08:42 0:13:09 gibba master fs/bugs/client_trim_caps/{begin clusters/small-cluster conf/{client mds mon osd} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/trim-i22073} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_ino_release_cb'

pass 5911349 2021-02-24 05:54:53 2021-02-24 06:16:44 2021-02-24 07:23:29 1:06:45 0:56:00 0:10:45 gibba master centos 8.2 fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/centos_latest} 2
dead 5911350 2021-02-24 05:54:54 2021-02-24 06:16:44 2021-02-24 06:45:50 0:29:06 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/acls} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911351 2021-02-24 05:54:55 2021-02-24 06:29:58 2021-02-24 06:55:08 0:25:10 0:09:34 0:15:36 gibba master ubuntu 18.04 fs/libcephfs/{begin clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{ubuntu_latest} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client} 2
pass 5911352 2021-02-24 05:54:56 2021-02-24 06:36:11 2021-02-24 07:38:56 1:02:45 0:49:09 0:13:36 gibba master centos 8.2 fs/mirror/{begin cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{whitelist_health} supported-random-distros$/{centos_8} tasks/mirror} 1
dead 5911353 2021-02-24 05:54:57 2021-02-24 06:40:03 2021-02-24 07:01:48 0:21:45 gibba master rhel 8.3 fs/mixed-clients/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{frag_enable osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_dbench_iozone} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911354 2021-02-24 05:54:58 2021-02-24 06:45:55 2021-02-24 07:17:40 0:31:45 0:17:40 0:14:05 gibba master ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cephfs_misc_tests} 4
pass 5911355 2021-02-24 05:54:59 2021-02-24 06:48:17 2021-02-24 07:42:15 0:53:58 0:38:22 0:15:36 gibba master ubuntu 18.04 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/failover} 2
pass 5911356 2021-02-24 05:55:02 2021-02-24 06:55:10 2021-02-24 07:16:40 0:21:30 0:09:18 0:12:12 gibba master ubuntu 18.04 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 5911357 2021-02-24 05:55:04 2021-02-24 06:57:33 2021-02-24 07:49:10 0:51:37 0:37:20 0:14:17 gibba master ubuntu 18.04 fs/shell/{begin clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/cephfs-shell} 2
dead 5911358 2021-02-24 05:55:06 2021-02-24 07:00:05 2021-02-24 07:17:57 0:17:52 gibba master rhel 8.3 fs/snaps/{begin clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911359 2021-02-24 05:55:07 2021-02-24 07:02:06 2021-02-24 09:49:19 2:47:13 2:24:06 0:23:07 gibba master centos 8.2 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} 2
pass 5911360 2021-02-24 05:55:08 2021-02-24 07:16:41 2021-02-24 07:36:42 0:20:01 0:10:24 0:09:37 gibba master rhel 8.3 fs/top/{begin cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/whitelist_health supported-random-distros$/{rhel_8} tasks/fstop} 1
pass 5911361 2021-02-24 05:55:10 2021-02-24 07:18:01 2021-02-24 07:48:49 0:30:48 0:23:48 0:07:00 gibba master rhel 8.3 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_blogbench traceless/50pc} 2
dead 5911362 2021-02-24 05:55:11 2021-02-24 07:18:02 2021-02-24 07:34:04 0:16:02 gibba master fs/upgrade/featureful_client/old_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-compat_client/no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911363 2021-02-24 05:55:12 2021-02-24 07:18:13 2021-02-24 08:38:22 1:20:09 1:05:44 0:14:25 gibba master centos 8.2 fs/valgrind/{begin centos_latest mirror/{cephfs-mirror/one-per-cluster clients/mirror cluster/1-node mount/fuse overrides/whitelist_health tasks/mirror} notcmalloc} 1
pass 5911364 2021-02-24 05:55:13 2021-02-24 07:23:34 2021-02-24 08:49:44 1:26:10 1:16:00 0:10:10 gibba master centos 8.2 fs/verify/{begin centos_latest clusters/1a5s-mds-1c-client conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/dbench validater/lockdep} 2
pass 5911365 2021-02-24 05:55:14 2021-02-24 07:24:55 2021-02-24 08:21:58 0:57:03 0:47:23 0:09:40 gibba master ubuntu 18.04 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/basic}} 2
dead 5911366 2021-02-24 05:55:15 2021-02-24 07:24:56 2021-02-24 07:50:11 0:25:15 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/direct_io} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911367 2021-02-24 05:55:16 2021-02-24 07:34:19 2021-02-24 08:18:07 0:43:48 0:34:08 0:09:40 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} 3
Failure Reason:

"2021-02-24T07:56:09.293387+0000 mds.f (mds.2) 1168 : cluster [WRN] Scrub error on inode 0x10000000265 (/client.0/tmp/testdir/dir1/dir2/dir3) see mds.f log and `damage ls` output for details" in cluster log

pass 5911368 2021-02-24 05:55:17 2021-02-24 07:37:12 2021-02-24 09:09:12 1:32:00 1:16:45 0:15:15 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
pass 5911369 2021-02-24 05:55:18 2021-02-24 07:42:24 2021-02-24 08:30:44 0:48:20 0:32:42 0:15:38 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} 3
pass 5911370 2021-02-24 05:55:19 2021-02-24 07:47:55 2021-02-24 08:19:44 0:31:49 0:24:18 0:07:31 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/admin} 2
pass 5911371 2021-02-24 05:55:20 2021-02-24 07:48:56 2021-02-24 08:32:03 0:43:07 0:37:04 0:06:03 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
dead 5911372 2021-02-24 05:55:21 2021-02-24 07:49:17 2021-02-24 08:06:20 0:17:03 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911373 2021-02-24 05:55:21 2021-02-24 07:50:28 2021-02-24 08:33:09 0:42:41 0:16:53 0:25:48 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
pass 5911374 2021-02-24 05:55:22 2021-02-24 08:06:31 2021-02-24 09:48:55 1:42:24 1:24:15 0:18:09 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} 3
pass 5911375 2021-02-24 05:55:23 2021-02-24 08:18:14 2021-02-24 08:42:42 0:24:28 0:12:10 0:12:18 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/alternate-pool} 2
dead 5911376 2021-02-24 05:55:24 2021-02-24 08:19:55 2021-02-24 08:38:00 0:18:05 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911377 2021-02-24 05:55:25 2021-02-24 08:22:06 2021-02-24 08:52:27 0:30:21 0:10:41 0:19:40 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
pass 5911378 2021-02-24 05:55:26 2021-02-24 08:30:48 2021-02-24 08:58:58 0:28:10 0:15:36 0:12:34 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} 3
pass 5911379 2021-02-24 05:55:27 2021-02-24 08:32:10 2021-02-24 09:04:27 0:32:17 0:19:55 0:12:22 gibba master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/asok_dump_tree} 2
pass 5911380 2021-02-24 05:55:28 2021-02-24 08:33:11 2021-02-24 08:57:12 0:24:01 0:12:24 0:11:37 gibba master ubuntu 18.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
dead 5911381 2021-02-24 05:55:29 2021-02-24 08:34:02 2021-02-24 08:54:05 0:20:03 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911382 2021-02-24 05:55:31 2021-02-24 08:38:14 2021-02-24 08:59:11 0:20:57 0:15:24 0:05:33 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
pass 5911383 2021-02-24 05:55:32 2021-02-24 08:38:24 2021-02-24 09:31:11 0:52:47 0:41:26 0:11:21 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}} 3
fail 5911384 2021-02-24 05:55:34 2021-02-24 08:40:15 2021-02-24 08:57:36 0:17:21 0:08:59 0:08:22 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/auto-repair} 2
Failure Reason:

Test failure: test_backtrace_repair (tasks.cephfs.test_auto_repair.TestMDSAutoRepair)

pass 5911385 2021-02-24 05:55:35 2021-02-24 08:42:47 2021-02-24 09:10:18 0:27:31 0:12:06 0:15:25 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} 3
fail 5911386 2021-02-24 05:55:36 2021-02-24 08:51:29 2021-02-24 10:52:19 2:00:50 1:53:43 0:07:07 gibba master rhel 8.3 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_fsstress}} 2
Failure Reason:

"2021-02-24T09:25:21.813987+0000 mds.c (mds.0) 1 : cluster [WRN] evicting unresponsive client gibba031:1 (4433), after 1066.65 seconds" in cluster log

dead 5911387 2021-02-24 05:55:37 2021-02-24 08:51:30 2021-02-24 09:10:03 0:18:33 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911388 2021-02-24 05:55:38 2021-02-24 08:54:11 2021-02-24 09:14:58 0:20:47 0:11:35 0:09:12 gibba master centos 8.2 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
fail 5911389 2021-02-24 05:55:39 2021-02-24 08:54:22 2021-02-24 09:14:10 0:19:48 0:09:38 0:10:10 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/backtrace} 2
Failure Reason:

Test failure: test_backtrace (tasks.cephfs.test_backtrace.TestBacktrace)

pass 5911390 2021-02-24 05:55:40 2021-02-24 08:57:23 2021-02-24 09:20:22 0:22:59 0:17:23 0:05:36 gibba master rhel 8.3 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 5911391 2021-02-24 05:55:40 2021-02-24 08:57:43 2021-02-24 10:26:04 1:28:21 1:16:42 0:11:39 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
fail 5911392 2021-02-24 05:55:41 2021-02-24 08:59:14 2021-02-24 10:20:26 1:21:12 1:10:45 0:10:27 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} 3
Failure Reason:

"2021-02-24T09:29:39.281034+0000 mon.a (mon.0) 346 : cluster [WRN] Health check failed: Degraded data redundancy: 28/200 objects degraded (14.000%), 2 pgs degraded (PG_DEGRADED)" in cluster log

pass 5911393 2021-02-24 05:55:43 2021-02-24 08:59:15 2021-02-24 09:52:01 0:52:46 0:38:01 0:14:45 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
dead 5911394 2021-02-24 05:55:44 2021-02-24 09:04:48 2021-02-24 09:26:07 0:21:19 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911395 2021-02-24 05:55:45 2021-02-24 09:10:09 2021-02-24 09:34:58 0:24:49 0:13:03 0:11:46 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/cap-flush} 2
pass 5911396 2021-02-24 05:55:46 2021-02-24 09:10:20 2021-02-24 09:28:54 0:18:34 0:12:38 0:05:56 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} 3
pass 5911397 2021-02-24 05:55:47 2021-02-24 09:10:21 2021-02-24 09:41:19 0:30:58 0:18:12 0:12:46 gibba master ubuntu 18.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
pass 5911398 2021-02-24 05:55:48 2021-02-24 09:14:22 2021-02-24 10:43:54 1:29:32 1:14:20 0:15:12 gibba master centos 8.2 fs/mixed-clients/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
dead 5911399 2021-02-24 05:55:49 2021-02-24 09:15:04 2021-02-24 09:42:10 0:27:06 gibba master ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/ior-shared-file} 5
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911400 2021-02-24 05:55:50 2021-02-24 09:26:17 2021-02-24 09:52:57 0:26:40 0:20:10 0:06:30 gibba master rhel 8.3 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/multifs-auth} 2
fail 5911401 2021-02-24 05:55:50 2021-02-24 09:26:18 2021-02-24 09:56:14 0:29:56 0:15:40 0:14:16 gibba master fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
Failure Reason:

Command failed on gibba004 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

pass 5911402 2021-02-24 05:55:51 2021-02-24 09:29:00 2021-02-24 10:56:12 1:27:12 1:14:36 0:12:36 gibba master centos 8.2 fs/verify/{begin centos_latest clusters/1a5s-mds-1c-client conf/{client mds mon osd} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/fsstress validater/valgrind} 2
pass 5911403 2021-02-24 05:55:53 2021-02-24 09:31:21 2021-02-24 09:57:53 0:26:32 0:12:57 0:13:35 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/direct_io} wsync/{no}} 3
pass 5911404 2021-02-24 05:55:54 2021-02-24 09:35:04 2021-02-24 09:58:23 0:23:19 0:11:04 0:12:15 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
pass 5911405 2021-02-24 05:55:56 2021-02-24 09:41:36 2021-02-24 10:11:33 0:29:57 0:16:57 0:13:00 gibba master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-limits} 2
dead 5911406 2021-02-24 05:55:57 2021-02-24 09:42:28 2021-02-24 09:58:23 0:15:55 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911407 2021-02-24 05:55:58 2021-02-24 09:42:29 2021-02-24 11:20:01 1:37:32 1:21:17 0:16:15 gibba master rhel 8.3 fs/snaps/{begin clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
pass 5911408 2021-02-24 05:55:59 2021-02-24 09:49:11 2021-02-24 11:11:57 1:22:46 1:13:27 0:09:19 gibba master centos 8.2 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench traceless/50pc} 2
fail 5911409 2021-02-24 05:56:00 2021-02-24 09:49:22 2021-02-24 10:29:08 0:39:46 0:30:36 0:09:10 gibba master rhel 8.3 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/clone}} 2
Failure Reason:

Test failure: test_subvolume_snapshot_clone_with_upgrade (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)

pass 5911410 2021-02-24 05:56:01 2021-02-24 09:52:03 2021-02-24 10:19:53 0:27:50 0:16:46 0:11:04 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
pass 5911411 2021-02-24 05:56:02 2021-02-24 09:53:04 2021-02-24 11:22:49 1:29:45 1:18:46 0:10:59 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} 3
pass 5911412 2021-02-24 05:56:02 2021-02-24 09:57:57 2021-02-24 10:25:17 0:27:20 0:14:56 0:12:24 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-readahead} 2
dead 5911413 2021-02-24 05:56:03 2021-02-24 09:58:28 2021-02-24 10:14:34 0:16:06 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} 3
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911414 2021-02-24 05:56:04 2021-02-24 09:58:38 2021-02-24 10:39:10 0:40:32 0:15:53 0:24:39 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
Failure Reason: Command failed (workunit test suites/pjd.sh) on gibba029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2426538adff197d2896080787a966758aaf9b31d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

dead 5911415 2021-02-24 05:56:05 2021-02-24 10:11:41 2021-02-24 10:30:48 0:19:07 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/suites/dbench} wsync/{no}} 3
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911416 2021-02-24 05:56:06 2021-02-24 10:14:53 2021-02-24 10:46:36 0:31:43 0:14:43 0:17:00 gibba master centos 8.2 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_pjd}} 2
pass 5911417 2021-02-24 05:56:07 2021-02-24 10:19:55 2021-02-24 10:54:28 0:34:33 0:29:32 0:05:01 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-recovery} 2
pass 5911418 2021-02-24 05:56:08 2021-02-24 10:20:05 2021-02-24 11:13:41 0:53:36 0:43:14 0:10:22 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
pass 5911419 2021-02-24 05:56:08 2021-02-24 10:20:36 2021-02-24 11:43:47 1:23:11 1:13:47 0:09:24 gibba master ubuntu 18.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
pass 5911420 2021-02-24 05:56:09 2021-02-24 10:20:37 2021-02-24 10:50:57 0:30:20 0:15:30 0:14:50 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/fs/norstats} wsync/{yes}} 3
pass 5911421 2021-02-24 05:56:10 2021-02-24 10:26:09 2021-02-24 10:47:59 0:21:50 0:13:10 0:08:40 gibba master rhel 8.3 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
dead 5911422 2021-02-24 05:56:11 2021-02-24 10:29:20 2021-02-24 10:46:43 0:17:23 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/damage} 2
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911423 2021-02-24 05:56:12 2021-02-24 10:30:51 2021-02-24 10:47:43 0:16:52 0:10:30 0:06:22 gibba master rhel 8.3 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
pass 5911424 2021-02-24 05:56:13 2021-02-24 10:31:03 2021-02-24 11:04:53 0:33:50 0:14:46 0:19:04 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/fsstress} wsync/{yes}} 3
dead 5911425 2021-02-24 05:56:13 2021-02-24 10:44:05 2021-02-24 11:02:54 0:18:49 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911426 2021-02-24 05:56:15 2021-02-24 10:46:46 2021-02-24 11:15:36 0:28:50 0:19:14 0:09:36 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}} 3
pass 5911427 2021-02-24 05:56:16 2021-02-24 10:46:57 2021-02-24 11:12:32 0:25:35 0:14:43 0:10:52 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
pass 5911428 2021-02-24 05:56:17 2021-02-24 10:47:47 2021-02-24 11:16:27 0:28:40 0:15:04 0:13:36 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} 3
pass 5911429 2021-02-24 05:56:18 2021-02-24 10:51:08 2021-02-24 11:28:21 0:37:13 0:26:13 0:11:00 gibba master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/data-scan} 2
pass 5911430 2021-02-24 05:56:19 2021-02-24 10:51:09 2021-02-24 11:13:31 0:22:22 0:11:28 0:10:54 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
pass 5911431 2021-02-24 05:56:20 2021-02-24 10:52:30 2021-02-24 11:29:08 0:36:38 0:23:15 0:13:23 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{yes}} 3
dead 5911432 2021-02-24 05:56:21 2021-02-24 10:56:21 2021-02-24 11:19:19 0:22:58 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/iogen} wsync/{no}} 3
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911433 2021-02-24 05:56:22 2021-02-24 11:03:23 2021-02-24 11:57:52 0:54:29 0:46:02 0:08:27 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/exports} 2
pass 5911434 2021-02-24 05:56:23 2021-02-24 11:05:04 2021-02-24 11:33:33 0:28:29 0:12:00 0:16:29 gibba master ubuntu 18.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
pass 5911435 2021-02-24 05:56:23 2021-02-24 11:12:06 2021-02-24 11:39:25 0:27:19 0:16:02 0:11:17 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} 3
pass 5911436 2021-02-24 05:56:24 2021-02-24 11:12:36 2021-02-24 11:32:29 0:19:53 0:12:14 0:07:39 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
pass 5911437 2021-02-24 05:56:25 2021-02-24 11:13:37 2021-02-24 11:37:10 0:23:33 0:12:58 0:10:35 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} 3
fail 5911438 2021-02-24 05:56:26 2021-02-24 11:13:48 2021-02-24 11:32:28 0:18:40 0:09:28 0:09:12 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/forward-scrub} 2
Failure Reason: Test failure: test_apply_tag (tasks.cephfs.test_forward_scrub.TestForwardScrub)

pass 5911439 2021-02-24 05:56:27 2021-02-24 11:15:39 2021-02-24 11:39:21 0:23:42 0:12:13 0:11:29 gibba master centos 8.2 fs/libcephfs/{begin clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{centos_8} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/libcephfs} 2
pass 5911440 2021-02-24 05:56:28 2021-02-24 11:16:30 2021-02-24 13:05:48 1:49:18 1:35:53 0:13:25 gibba master ubuntu 20.04 fs/mixed-clients/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{frag_enable osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_dbench_iozone} 2
pass 5911441 2021-02-24 05:56:29 2021-02-24 11:16:30 2021-02-24 11:43:25 0:26:55 0:10:30 0:16:25 gibba master ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/mdtest} 4
pass 5911442 2021-02-24 05:56:29 2021-02-24 11:20:12 2021-02-24 12:17:41 0:57:29 0:43:27 0:14:02 gibba master centos 8.2 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/failover} 2
dead 5911443 2021-02-24 05:56:30 2021-02-24 11:22:53 2021-02-24 11:38:47 0:15:54 gibba master ubuntu 20.04 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_trivial_sync}} 2
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911444 2021-02-24 05:56:31 2021-02-24 11:22:53 2021-02-24 12:06:20 0:43:27 0:25:10 0:18:17 gibba master fs/upgrade/featureful_client/old_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-compat_client/pacific}} 3
Failure Reason: Command failed on gibba015 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

pass 5911445 2021-02-24 05:56:32 2021-02-24 11:29:15 2021-02-24 12:46:12 1:16:57 1:07:12 0:09:45 gibba master centos 8.2 fs/verify/{begin centos_latest clusters/1a5s-mds-1c-client conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/dbench validater/lockdep} 2
pass 5911446 2021-02-24 05:56:33 2021-02-24 11:29:15 2021-02-24 11:55:49 0:26:34 0:11:50 0:14:44 gibba master ubuntu 18.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/direct_io} wsync/{yes}} 3
fail 5911447 2021-02-24 05:56:34 2021-02-24 11:32:37 2021-02-24 12:06:26 0:33:49 0:26:47 0:07:02 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason: SSH connection to gibba036 was lost: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,name=0,conf=/etc/ceph/ceph.conf,norbytes,nowsync'

pass 5911448 2021-02-24 05:56:35 2021-02-24 11:33:38 2021-02-24 12:44:09 1:10:31 0:58:03 0:12:28 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
dead 5911449 2021-02-24 05:56:36 2021-02-24 11:37:20 2021-02-24 11:54:42 0:17:22 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/fragment} 2
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911450 2021-02-24 05:56:36 2021-02-24 11:38:51 2021-02-24 12:38:29 0:59:38 0:49:29 0:10:09 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} 3
pass 5911451 2021-02-24 05:56:37 2021-02-24 11:39:31 2021-02-24 12:42:19 1:02:48 0:53:22 0:09:26 gibba master centos 8.2 fs/snaps/{begin clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
pass 5911452 2021-02-24 05:56:38 2021-02-24 11:39:32 2021-02-24 12:40:21 1:00:49 0:47:30 0:13:19 gibba master centos 8.2 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_ffsb traceless/50pc} 2
pass 5911453 2021-02-24 05:56:39 2021-02-24 11:43:33 2021-02-24 12:12:24 0:28:51 0:16:23 0:12:28 gibba master centos 8.2 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/misc}} 2
pass 5911454 2021-02-24 05:56:40 2021-02-24 11:43:33 2021-02-24 12:29:04 0:45:31 0:34:24 0:11:07 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
fail 5911455 2021-02-24 05:56:41 2021-02-24 11:43:54 2021-02-24 12:37:00 0:53:06 0:39:23 0:13:43 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
Failure Reason: "2021-02-24T12:04:51.864128+0000 mds.e (mds.0) 19 : cluster [WRN] Scrub error on inode 0x100000001fe (/client.0/tmp/blogbench-1.0/src) see mds.e log and `damage ls` output for details" in cluster log

dead 5911456 2021-02-24 05:56:42 2021-02-24 11:47:45 2021-02-24 12:10:40 0:22:55 gibba master centos 8.2 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911457 2021-02-24 05:56:43 2021-02-24 11:54:47 2021-02-24 12:25:10 0:30:23 0:17:44 0:12:39 gibba master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/journal-repair} 2
pass 5911458 2021-02-24 05:56:43 2021-02-24 11:55:58 2021-02-24 12:14:45 0:18:47 0:10:26 0:08:21 gibba master centos 8.2 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 5911459 2021-02-24 05:56:44 2021-02-24 11:55:58 2021-02-24 12:21:11 0:25:13 0:13:57 0:11:16 gibba master ubuntu 18.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
pass 5911460 2021-02-24 05:56:45 2021-02-24 11:57:59 2021-02-24 13:21:16 1:23:17 1:08:22 0:14:55 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} 3
pass 5911461 2021-02-24 05:56:46 2021-02-24 12:06:30 2021-02-24 12:37:11 0:30:41 0:25:15 0:05:26 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
dead 5911462 2021-02-24 05:56:47 2021-02-24 12:06:31 2021-02-24 12:26:45 0:20:14 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911463 2021-02-24 05:56:47 2021-02-24 12:10:52 2021-02-24 12:31:19 0:20:27 0:08:20 0:12:07 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds-flush} 2
Failure Reason: Test failure: test_flush (tasks.cephfs.test_flush.TestFlush)

pass 5911464 2021-02-24 05:56:48 2021-02-24 12:12:33 2021-02-24 12:44:51 0:32:18 0:16:19 0:15:59 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} 3
pass 5911465 2021-02-24 05:56:49 2021-02-24 12:18:04 2021-02-24 12:48:05 0:30:01 0:16:57 0:13:04 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
fail 5911466 2021-02-24 05:56:50 2021-02-24 12:21:15 2021-02-24 12:48:35 0:27:20 0:12:39 0:14:41 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} 3
Failure Reason: "2021-02-24T12:43:08.017833+0000 mds.f (mds.0) 19 : cluster [WRN] Scrub error on inode 0x10000001e8b (/client.0/tmp/fsstress/ltp-full-20091231/testcases/kernel) see mds.f log and `damage ls` output for details" in cluster log

dead 5911467 2021-02-24 05:56:51 2021-02-24 12:25:16 2021-02-24 12:42:50 0:17:34 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds-full} 2
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911468 2021-02-24 05:56:52 2021-02-24 12:26:57 2021-02-24 12:54:23 0:27:26 0:13:49 0:13:37 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
pass 5911469 2021-02-24 05:56:53 2021-02-24 12:29:08 2021-02-24 13:09:50 0:40:42 0:22:35 0:18:07 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
fail 5911470 2021-02-24 05:56:53 2021-02-24 12:37:10 2021-02-24 13:16:09 0:38:59 0:33:31 0:05:28 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{no}} 3
Failure Reason: SSH connection to gibba015 was lost: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,name=0,conf=/etc/ceph/ceph.conf,norbytes,nowsync'

pass 5911471 2021-02-24 05:56:54 2021-02-24 12:37:21 2021-02-24 13:28:06 0:50:45 0:41:44 0:09:01 gibba master centos 8.2 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/ffsb}} 2
pass 5911472 2021-02-24 05:56:55 2021-02-24 12:37:21 2021-02-24 12:59:19 0:21:58 0:09:54 0:12:04 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds_creation_retry} 2
pass 5911473 2021-02-24 05:56:56 2021-02-24 12:38:32 2021-02-24 13:13:10 0:34:38 0:22:07 0:12:31 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
pass 5911474 2021-02-24 05:56:57 2021-02-24 12:40:23 2021-02-24 14:38:55 1:58:32 1:48:10 0:10:22 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
pass 5911475 2021-02-24 05:56:58 2021-02-24 12:42:24 2021-02-24 13:43:27 1:01:03 0:49:32 0:11:31 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/iogen} wsync/{yes}} 3
pass 5911476 2021-02-24 05:56:59 2021-02-24 12:44:15 2021-02-24 13:12:03 0:27:48 0:16:53 0:10:55 gibba master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/metrics} 2
dead 5911477 2021-02-24 05:56:59 2021-02-24 12:44:55 2021-02-24 13:00:48 0:15:53 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911478 2021-02-24 05:57:00 2021-02-24 12:44:56 2021-02-24 13:06:54 0:21:58 0:11:59 0:09:59 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/suites/iozone} wsync/{yes}} 3
fail 5911479 2021-02-24 05:57:01 2021-02-24 12:48:07 2021-02-24 13:27:03 0:38:56 0:33:05 0:05:51 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/pjd} wsync/{no}} 3
Failure Reason:

SSH connection to gibba007 was lost: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,name=0,conf=/etc/ceph/ceph.conf,norbytes,nowsync'

pass 5911480 2021-02-24 05:57:02 2021-02-24 12:48:38 2021-02-24 13:19:16 0:30:38 0:14:16 0:16:22 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
pass 5911481 2021-02-24 05:57:03 2021-02-24 12:54:29 2021-02-24 13:16:06 0:21:37 0:10:38 0:10:59 gibba master rhel 8.3 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
dead 5911482 2021-02-24 05:57:03 2021-02-24 12:59:20 2021-02-24 13:16:44 0:17:24 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/multimds_misc} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911483 2021-02-24 05:57:04 2021-02-24 13:00:50 2021-02-24 14:24:49 1:23:59 1:13:05 0:10:54 gibba master rhel 8.3 fs/mixed-clients/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{frag_enable osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
pass 5911484 2021-02-24 05:57:05 2021-02-24 13:06:02 2021-02-24 13:41:11 0:35:09 0:17:21 0:17:48 gibba master ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
fail 5911485 2021-02-24 05:57:06 2021-02-24 13:09:53 2021-02-24 13:29:10 0:19:17 0:08:53 0:10:24 gibba master ubuntu 20.04 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/multifs-auth} 2
Failure Reason:

Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)

pass 5911486 2021-02-24 05:57:07 2021-02-24 13:09:53 2021-02-24 13:30:51 0:20:58 0:10:25 0:10:33 gibba master centos 8.2 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
fail 5911487 2021-02-24 05:57:08 2021-02-24 13:12:04 2021-02-24 13:46:22 0:34:18 0:23:34 0:10:44 gibba master fs/upgrade/featureful_client/old_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-compat_client/no}} 3
Failure Reason:

Command failed on gibba002 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

pass 5911488 2021-02-24 05:57:09 2021-02-24 13:13:15 2021-02-24 14:32:55 1:19:40 1:05:59 0:13:41 gibba master centos 8.2 fs/verify/{begin centos_latest clusters/1a5s-mds-1c-client conf/{client mds mon osd} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-ec-root overrides/{frag_enable mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/fsstress validater/valgrind} 2
pass 5911489 2021-02-24 05:57:11 2021-02-24 13:16:16 2021-02-24 13:39:13 0:22:57 0:12:54 0:10:03 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/direct_io} wsync/{no}} 3
dead 5911490 2021-02-24 05:57:12 2021-02-24 13:16:26 2021-02-24 13:32:49 0:16:23 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911491 2021-02-24 05:57:14 2021-02-24 13:16:57 2021-02-24 14:10:32 0:53:35 0:39:11 0:14:24 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
Failure Reason:

"2021-02-24T13:39:05.814714+0000 mds.c (mds.0) 24 : cluster [WRN] Scrub error on inode 0x10000000262 (/client.0/tmp/testdir/dir1) see mds.c log and `damage ls` output for details" in cluster log

pass 5911492 2021-02-24 05:57:14 2021-02-24 13:21:18 2021-02-24 13:40:26 0:19:08 0:10:05 0:09:03 gibba master ubuntu 18.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
pass 5911493 2021-02-24 05:57:15 2021-02-24 13:21:19 2021-02-24 15:33:52 2:12:33 1:56:38 0:15:55 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} 3
pass 5911494 2021-02-24 05:57:16 2021-02-24 13:27:10 2021-02-24 13:44:57 0:17:47 0:10:25 0:07:22 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/openfiletable} 2
fail 5911495 2021-02-24 05:57:17 2021-02-24 13:28:11 2021-02-24 14:03:38 0:35:27 0:26:11 0:09:16 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} 3
Failure Reason:

SSH connection to gibba016 was lost: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,name=0,conf=/etc/ceph/ceph.conf,norbytes,nowsync'

dead 5911496 2021-02-24 05:57:18 2021-02-24 13:30:51 2021-02-24 13:48:45 0:17:54 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911497 2021-02-24 05:57:19 2021-02-24 13:32:52 2021-02-24 14:58:42 1:25:50 1:08:43 0:17:07 gibba master ubuntu 20.04 fs/snaps/{begin clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
pass 5911498 2021-02-24 05:57:20 2021-02-24 13:39:24 2021-02-24 14:06:34 0:27:10 0:15:17 0:11:53 gibba master ubuntu 18.04 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress traceless/50pc} 2
pass 5911499 2021-02-24 05:57:21 2021-02-24 13:39:24 2021-02-24 14:09:46 0:30:22 0:17:41 0:12:41 gibba master ubuntu 20.04 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/snapshot}} 2
pass 5911500 2021-02-24 05:57:22 2021-02-24 13:40:35 2021-02-24 15:30:06 1:49:31 1:39:43 0:09:48 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/suites/dbench} wsync/{yes}} 3
pass 5911501 2021-02-24 05:57:23 2021-02-24 13:41:21 2021-02-24 14:06:12 0:24:51 0:12:23 0:12:28 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/pool-perm} 2
pass 5911502 2021-02-24 05:57:23 2021-02-24 13:41:22 2021-02-24 14:04:23 0:23:01 0:14:03 0:08:58 gibba master rhel 8.3 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/iozone}} 2
fail 5911503 2021-02-24 05:57:24 2021-02-24 13:43:33 2021-02-24 14:38:31 0:54:58 0:43:09 0:11:49 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
Failure Reason:

"2021-02-24T14:03:13.951075+0000 mds.f (mds.0) 19 : cluster [WRN] Scrub error on inode 0x10000000348 (/client.0/tmp/tmp) see mds.f log and `damage ls` output for details" in cluster log

pass 5911504 2021-02-24 05:57:25 2021-02-24 13:45:04 2021-02-24 14:49:07 1:04:03 0:53:11 0:10:52 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
dead 5911505 2021-02-24 05:57:26 2021-02-24 13:46:28 2021-02-24 14:04:50 0:18:22 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911506 2021-02-24 05:57:27 2021-02-24 13:48:58 2021-02-24 14:22:40 0:33:42 0:09:10 0:24:32 gibba master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/quota} 2
fail 5911507 2021-02-24 05:57:28 2021-02-24 14:03:41 2021-02-24 14:23:18 0:19:37 0:12:58 0:06:39 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/fsstress} wsync/{yes}} 3
Failure Reason:

"2021-02-24T14:19:21.330851+0000 mds.f (mds.0) 19 : cluster [WRN] Scrub error on inode 0x100000012e0 (/client.0/tmp/fsstress/ltp-full-20091231/testcases/open_posix_testsuite) see mds.f log and `damage ls` output for details" in cluster log

pass 5911508 2021-02-24 05:57:29 2021-02-24 14:04:32 2021-02-24 15:08:07 1:03:35 0:52:09 0:11:26 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
dead 5911509 2021-02-24 05:57:29 2021-02-24 14:05:02 2021-02-24 14:22:33 0:17:31 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911510 2021-02-24 05:57:30 2021-02-24 14:06:23 2021-02-24 14:23:34 0:17:11 0:09:23 0:07:48 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/scrub} 2
Failure Reason:

Test failure: test_scrub_checks (tasks.cephfs.test_scrub_checks.TestScrubChecks)

pass 5911511 2021-02-24 05:57:31 2021-02-24 14:06:54 2021-02-24 14:32:58 0:26:04 0:14:23 0:11:41 gibba master ubuntu 18.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
pass 5911512 2021-02-24 05:57:32 2021-02-24 14:09:55 2021-02-24 14:31:42 0:21:47 0:10:55 0:10:52 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{no}} 3
pass 5911513 2021-02-24 05:57:33 2021-02-24 14:10:36 2021-02-24 14:41:42 0:31:06 0:12:38 0:18:28 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
dead 5911514 2021-02-24 05:57:34 2021-02-24 14:22:56 2021-02-24 14:38:48 0:15:52 gibba master ubuntu 18.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911515 2021-02-24 05:57:34 2021-02-24 14:22:56 2021-02-24 14:48:02 0:25:06 0:14:31 0:10:35 gibba master centos 8.2 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
pass 5911516 2021-02-24 05:57:35 2021-02-24 14:23:27 2021-02-24 14:42:36 0:19:09 0:11:41 0:07:28 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/sessionmap} 2
pass 5911517 2021-02-24 05:57:36 2021-02-24 14:23:38 2021-02-24 14:43:38 0:20:00 0:10:11 0:09:49 gibba master centos 8.2 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
fail 5911518 2021-02-24 05:57:37 2021-02-24 14:24:51 2021-02-24 15:12:52 0:48:01 0:33:10 0:14:51 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/iogen} wsync/{no}} 3
Failure Reason:

SSH connection to gibba036 was lost: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,name=0,conf=/etc/ceph/ceph.conf,norbytes,nowsync'

pass 5911519 2021-02-24 05:57:38 2021-02-24 14:32:01 2021-02-24 15:03:54 0:31:53 0:16:22 0:15:31 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
pass 5911520 2021-02-24 05:57:38 2021-02-24 14:33:06 2021-02-24 15:03:58 0:30:52 0:17:30 0:13:22 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/suites/iozone} wsync/{yes}} 3
pass 5911521 2021-02-24 05:57:39 2021-02-24 14:33:06 2021-02-24 15:07:37 0:34:31 0:18:04 0:16:27 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snap-schedule} 2
dead 5911522 2021-02-24 05:57:40 2021-02-24 14:38:54 2021-02-24 14:54:47 0:15:53 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911523 2021-02-24 05:57:41 2021-02-24 14:38:54 2021-02-24 15:02:26 0:23:32 0:12:42 0:10:50 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} 3
Failure Reason:

"2021-02-24T14:57:33.757266+0000 mds.f (mds.0) 29 : cluster [WRN] Scrub error on inode 0x1000000045f (/client.0/tmp/tmp/fstest_042da8672029728d2311eef8b66b0561/fstest_a14ca18ecbc078ab4c71ecc57cf55e6b) see mds.f log and `damage ls` output for details" in cluster log

pass 5911524 2021-02-24 05:57:42 2021-02-24 14:39:15 2021-02-24 14:56:32 0:17:17 0:09:12 0:08:05 gibba master rhel 8.3 fs/libcephfs/{begin clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/{rhel_8} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/libcephfs_python} 2
pass 5911525 2021-02-24 05:57:43 2021-02-24 14:41:46 2021-02-24 16:37:35 1:55:49 1:43:35 0:12:14 gibba master centos 8.2 fs/mixed-clients/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{frag_enable osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_dbench_iozone} 2
pass 5911526 2021-02-24 05:57:44 2021-02-24 14:42:46 2021-02-24 15:07:27 0:24:41 0:07:56 0:16:45 gibba master ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/ior-shared-file} 4
pass 5911527 2021-02-24 05:57:45 2021-02-24 14:48:09 2021-02-24 15:38:30 0:50:21 0:39:01 0:11:20 gibba master ubuntu 20.04 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/failover} 2
dead 5911528 2021-02-24 05:57:45 2021-02-24 14:49:19 2021-02-24 15:10:45 0:21:26 gibba master centos 8.2 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_snaptests}} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911529 2021-02-24 05:57:46 2021-02-24 14:54:50 2021-02-24 15:21:52 0:27:02 0:14:31 0:12:31 gibba master fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
Failure Reason:

Command failed on gibba016 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

pass 5911530 2021-02-24 05:57:47 2021-02-24 14:56:38 2021-02-24 16:21:25 1:24:47 1:14:08 0:10:39 gibba master centos 8.2 fs/verify/{begin centos_latest clusters/1a5s-mds-1c-client conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/dbench validater/lockdep} 2
pass 5911531 2021-02-24 05:57:48 2021-02-24 14:58:44 2021-02-24 15:25:38 0:26:54 0:13:53 0:13:01 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/direct_io} wsync/{yes}} 3
fail 5911532 2021-02-24 05:57:49 2021-02-24 15:02:36 2021-02-24 15:35:09 0:32:33 0:19:09 0:13:24 gibba master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snapshots} 2
Failure Reason:

Test failure: test_snapclient_cache (tasks.cephfs.test_snapshots.TestSnapshots)

pass 5911533 2021-02-24 05:57:50 2021-02-24 15:03:56 2021-02-24 16:02:55 0:58:59 0:48:31 0:10:28 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/fs/misc} wsync/{no}} 3
pass 5911534 2021-02-24 05:57:50 2021-02-24 15:04:07 2021-02-24 18:04:44 3:00:37 2:48:34 0:12:03 gibba master ubuntu 18.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/snaps}} 2
fail 5911535 2021-02-24 05:57:51 2021-02-24 15:07:28 2021-02-24 15:48:26 0:40:58 0:34:16 0:06:42 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/kernel_untar_build} wsync/{no}} 3
Failure Reason:

SSH connection to gibba003 was lost: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,name=0,conf=/etc/ceph/ceph.conf,norbytes,nowsync'

pass 5911536 2021-02-24 05:57:52 2021-02-24 15:07:38 2021-02-24 15:40:53 0:33:15 0:26:04 0:07:11 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
dead 5911537 2021-02-24 05:57:53 2021-02-24 15:08:09 2021-02-24 15:26:40 0:18:31 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/strays} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911538 2021-02-24 05:57:54 2021-02-24 15:10:50 2021-02-24 16:41:42 1:30:52 1:17:56 0:12:56 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/blogbench} wsync/{no}} 3
Failure Reason:

"2021-02-24T15:34:00.128808+0000 mds.c (mds.0) 24 : cluster [WRN] Scrub error on inode 0x10000002159 (/client.0/tmp/blogbench-1.0/src/blogtest_in/blog-43) see mds.c log and `damage ls` output for details" in cluster log

pass 5911539 2021-02-24 05:57:56 2021-02-24 15:13:01 2021-02-24 15:54:38 0:41:37 0:18:51 0:22:46 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/fsstress}} 2
dead 5911540 2021-02-24 05:57:57 2021-02-24 15:25:55 2021-02-24 15:43:28 0:17:33 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/suites/dbench} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911541 2021-02-24 05:57:58 2021-02-24 15:26:45 2021-02-24 15:51:06 0:24:21 0:14:29 0:09:52 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/test_journal_migration} 2
fail 5911542 2021-02-24 05:57:59 2021-02-24 15:30:16 2021-02-24 16:48:40 1:18:24 0:59:43 0:18:41 gibba master ubuntu 18.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/ffsb} wsync/{no}} 3
Failure Reason:

"2021-02-24T15:57:47.646017+0000 mds.d (mds.0) 24 : cluster [WRN] Scrub error on inode 0x10000000348 (/client.0/tmp/tmp) see mds.d log and `damage ls` output for details" in cluster log

pass 5911543 2021-02-24 05:57:59 2021-02-24 15:33:57 2021-02-24 16:46:50 1:12:53 0:59:33 0:13:20 gibba master ubuntu 20.04 fs/snaps/{begin clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
dead 5911544 2021-02-24 05:58:00 2021-02-24 15:33:58 2021-02-24 15:35:25 0:01:27 gibba master centos 8.2 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_blogbench traceless/50pc} 2
Failure Reason:

Error reimaging machines: Expecting value: line 1 column 1 (char 0)

pass 5911545 2021-02-24 05:58:01 2021-02-24 15:35:18 2021-02-24 16:26:48 0:51:30 0:38:16 0:13:14 gibba master ubuntu 20.04 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/basic}} 2
pass 5911546 2021-02-24 05:58:02 2021-02-24 15:35:29 2021-02-24 16:00:01 0:24:32 0:10:11 0:14:21 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
dead 5911547 2021-02-24 05:58:03 2021-02-24 15:38:40 2021-02-24 15:59:22 0:20:42 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911548 2021-02-24 05:58:04 2021-02-24 15:43:31 2021-02-24 16:05:08 0:21:37 0:11:50 0:09:47 gibba master ubuntu 18.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/iozone}} 2
pass 5911549 2021-02-24 05:58:05 2021-02-24 15:43:41 2021-02-24 16:09:13 0:25:32 0:10:44 0:14:48 gibba master centos 8.2 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
pass 5911550 2021-02-24 05:58:06 2021-02-24 15:48:33 2021-02-24 16:14:00 0:25:27 0:11:11 0:14:16 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/truncate_delay} 2
pass 5911551 2021-02-24 05:58:06 2021-02-24 15:51:14 2021-02-24 16:15:41 0:24:27 0:10:40 0:13:47 gibba master centos 8.2 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
dead 5911552 2021-02-24 05:58:07 2021-02-24 15:54:55 2021-02-24 16:15:29 0:20:34 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911553 2021-02-24 05:58:08 2021-02-24 15:59:36 2021-02-24 16:16:53 0:17:17 0:11:36 0:05:41 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on gibba002 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2426538adff197d2896080787a966758aaf9b31d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 5911554 2021-02-24 05:58:09 2021-02-24 16:00:07 2021-02-24 16:38:10 0:38:03 0:25:56 0:12:07 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/suites/fsx} wsync/{no}} 3
pass 5911555 2021-02-24 05:58:10 2021-02-24 16:02:57 2021-02-24 16:31:47 0:28:50 0:18:06 0:10:44 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} 3
pass 5911556 2021-02-24 05:58:11 2021-02-24 16:03:08 2021-02-24 16:24:27 0:21:19 0:10:34 0:10:45 gibba master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/workunit/quota} 2
pass 5911557 2021-02-24 05:58:12 2021-02-24 16:03:18 2021-02-24 16:36:23 0:33:05 0:21:59 0:11:06 gibba master ubuntu 20.04 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} 2
fail 5911558 2021-02-24 05:58:13 2021-02-24 16:03:19 2021-02-24 16:48:17 0:44:58 0:33:14 0:11:44 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{no}} 3
Failure Reason:

SSH connection to gibba003 was lost: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,name=0,conf=/etc/ceph/ceph.conf,norbytes,nowsync'

pass 5911559 2021-02-24 05:58:13 2021-02-24 16:09:30 2021-02-24 17:56:45 1:47:15 1:32:51 0:14:24 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/snaps}} 2
dead 5911560 2021-02-24 05:58:15 2021-02-24 16:14:11 2021-02-24 16:31:36 0:17:25 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/iogen} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911561 2021-02-24 05:58:16 2021-02-24 16:15:42 2021-02-24 16:37:04 0:21:22 0:10:23 0:10:59 gibba master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/acls} 2
pass 5911562 2021-02-24 05:58:17 2021-02-24 16:15:52 2021-02-24 16:56:06 0:40:14 0:28:26 0:11:48 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
pass 5911563 2021-02-24 05:58:18 2021-02-24 16:17:03 2021-02-24 16:42:46 0:25:43 0:11:42 0:14:01 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/suites/iozone} wsync/{yes}} 3
pass 5911564 2021-02-24 05:58:19 2021-02-24 16:21:34 2021-02-24 16:45:47 0:24:13 0:12:05 0:12:08 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/pjd} wsync/{yes}} 3
dead 5911565 2021-02-24 05:58:20 2021-02-24 16:26:56 2021-02-24 16:47:43 0:20:47 gibba master ubuntu 18.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/admin} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911566 2021-02-24 05:58:20 2021-02-24 16:31:47 2021-02-24 16:56:55 0:25:08 0:14:54 0:10:14 gibba master ubuntu 18.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
pass 5911567 2021-02-24 05:58:21 2021-02-24 16:31:48 2021-02-24 18:03:18 1:31:30 1:18:46 0:12:44 gibba master ubuntu 20.04 fs/mixed-clients/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
pass 5911568 2021-02-24 05:58:22 2021-02-24 16:32:09 2021-02-24 17:02:44 0:30:35 0:12:31 0:18:04 gibba master ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/mdtest} 5
pass 5911569 2021-02-24 05:58:23 2021-02-24 16:37:20 2021-02-24 17:14:29 0:37:09 0:27:37 0:09:32 gibba master centos 8.2 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/multifs-auth} 2
fail 5911570 2021-02-24 05:58:24 2021-02-24 16:37:41 2021-02-24 17:11:28 0:33:47 0:23:53 0:09:54 gibba master fs/upgrade/featureful_client/old_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-compat_client/pacific}} 3
Failure Reason:

Command failed on gibba012 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json'

pass 5911571 2021-02-24 05:58:26 2021-02-24 16:38:12 2021-02-24 18:02:33 1:24:21 1:11:23 0:12:58 gibba master centos 8.2 fs/verify/{begin centos_latest clusters/1a5s-mds-1c-client conf/{client mds mon osd} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/fsstress validater/valgrind} 2
fail 5911572 2021-02-24 05:58:28 2021-02-24 16:41:53 2021-02-24 17:13:46 0:31:53 0:24:49 0:07:04 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/direct_io} wsync/{no}} 3
Failure Reason:

SSH connection to gibba006 was lost: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,name=0,conf=/etc/ceph/ceph.conf,norbytes,nowsync'

pass 5911573 2021-02-24 05:58:29 2021-02-24 16:42:55 2021-02-24 17:02:43 0:19:48 0:10:05 0:09:43 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
fail 5911574 2021-02-24 05:58:30 2021-02-24 16:45:56 2021-02-24 17:43:44 0:57:48 0:47:16 0:10:32 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/fs/misc} wsync/{yes}} 3
Failure Reason:

"2021-02-24T17:07:44.110284+0000 mds.d (mds.2) 4 : cluster [WRN] Scrub error on inode 0x10000000266 (/client.0/tmp/testdir/dir1/dir2/dir3/dir4) see mds.d log and `damage ls` output for details" in cluster log

dead 5911575 2021-02-24 05:58:32 2021-02-24 16:46:56 2021-02-24 17:03:38 0:16:42 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/alternate-pool} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

fail 5911576 2021-02-24 05:58:33 2021-02-24 16:47:47 2021-02-24 17:23:05 0:35:18 0:22:57 0:12:21 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/iozone}} 2
Failure Reason:

"2021-02-24T17:13:48.675965+0000 mon.a (mon.0) 431 : cluster [WRN] Health check failed: 1 daemons have recently crashed (RECENT_CRASH)" in cluster log

pass 5911577 2021-02-24 05:58:35 2021-02-24 16:48:27 2021-02-24 17:23:56 0:35:29 0:25:07 0:10:22 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/kernel_untar_build} wsync/{yes}} 3
fail 5911578 2021-02-24 05:58:36 2021-02-24 16:48:48 2021-02-24 17:33:26 0:44:38 0:26:36 0:18:02 gibba master ubuntu 18.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/blogbench} wsync/{yes}} 3
Failure Reason:

"2021-02-24T17:14:50.318574+0000 mds.f (mds.0) 24 : cluster [WRN] Scrub error on inode 0x100000001fa (/client.0/tmp/blogbench-1.0/man) see mds.f log and `damage ls` output for details" in cluster log

pass 5911579 2021-02-24 05:58:37 2021-02-24 16:56:09 2021-02-24 17:20:12 0:24:03 0:13:50 0:10:13 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
pass 5911580 2021-02-24 05:58:39 2021-02-24 16:57:00 2021-02-24 17:28:03 0:31:03 0:15:08 0:15:55 gibba master ubuntu 18.04 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress} 2
pass 5911581 2021-02-24 05:58:40 2021-02-24 17:02:51 2021-02-24 17:37:47 0:34:56 0:23:37 0:11:19 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/asok_dump_tree} 2
pass 5911582 2021-02-24 05:58:41 2021-02-24 17:02:51 2021-02-24 17:22:02 0:19:11 0:10:14 0:08:57 gibba master ubuntu 18.04 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_misc} 2
dead 5911583 2021-02-24 05:58:42 2021-02-24 17:02:52 2021-02-24 17:19:33 0:16:41 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/suites/dbench} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911584 2021-02-24 05:58:43 2021-02-24 17:03:42 2021-02-24 17:30:11 0:26:29 0:12:16 0:14:13 gibba master rhel 8.3 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_pjd}} 2
pass 5911585 2021-02-24 05:58:43 2021-02-24 17:11:34 2021-02-24 18:12:30 1:00:56 0:48:48 0:12:08 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/ffsb} wsync/{yes}} 3
pass 5911586 2021-02-24 05:58:44 2021-02-24 17:13:54 2021-02-24 17:35:03 0:21:09 0:09:18 0:11:51 gibba master ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/auto-repair} 2
pass 5911587 2021-02-24 05:58:45 2021-02-24 17:13:55 2021-02-24 18:47:23 1:33:28 1:23:20 0:10:08 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
dead 5911588 2021-02-24 05:58:46 2021-02-24 17:14:35 2021-02-24 17:35:38 0:21:03 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/fs/norstats} wsync/{no}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911589 2021-02-24 05:58:47 2021-02-24 17:19:47 2021-02-24 18:51:05 1:31:18 1:21:34 0:09:44 gibba master centos 8.2 fs/snaps/{begin clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/workunit/snaps} 2
pass 5911590 2021-02-24 05:58:48 2021-02-24 17:20:17 2021-02-24 18:27:16 1:06:59 0:55:48 0:11:11 gibba master ubuntu 18.04 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench traceless/50pc} 2
pass 5911591 2021-02-24 05:58:49 2021-02-24 17:22:08 2021-02-24 18:03:52 0:41:44 0:31:45 0:09:59 gibba master centos 8.2 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/clone}} 2
fail 5911592 2021-02-24 05:58:50 2021-02-24 17:23:08 2021-02-24 17:52:50 0:29:42 0:16:06 0:13:36 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/fsstress} wsync/{no}} 3
Failure Reason:

"2021-02-24T17:44:09.655273+0000 mds.c (mds.0) 24 : cluster [WRN] Scrub error on inode 0x10000001409 (/client.0/tmp/fsstress/ltp-full-20091231/testcases/open_posix_testsuite/conformance/interfaces) see mds.c log and `damage ls` output for details" in cluster log

pass 5911593 2021-02-24 05:58:51 2021-02-24 17:23:59 2021-02-24 18:02:56 0:38:57 0:29:02 0:09:55 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
pass 5911594 2021-02-24 05:58:52 2021-02-24 17:28:10 2021-02-24 17:49:31 0:21:21 0:08:36 0:12:45 gibba master ubuntu 18.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/backtrace} 2
pass 5911595 2021-02-24 05:58:53 2021-02-24 17:30:20 2021-02-24 17:54:19 0:23:59 0:15:38 0:08:21 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no tasks/{0-check-counter workunit/suites/fsx} wsync/{yes}} 3
pass 5911596 2021-02-24 05:58:55 2021-02-24 17:33:41 2021-02-24 18:02:02 0:28:21 0:17:20 0:11:01 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
dead 5911597 2021-02-24 05:58:57 2021-02-24 17:35:12 2021-02-24 17:51:44 0:16:32 gibba master centos 8.2 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{yes}} 3
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911598 2021-02-24 05:58:58 2021-02-24 17:35:53 2021-02-24 17:52:39 0:16:46 0:09:25 0:07:21 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/cap-flush} 2
pass 5911599 2021-02-24 05:59:00 2021-02-24 17:37:53 2021-02-24 18:03:02 0:25:09 0:09:39 0:15:30 gibba master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
pass 5911600 2021-02-24 05:59:01 2021-02-24 17:43:54 2021-02-24 18:20:45 0:36:51 0:20:03 0:16:48 gibba master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/no tasks/{0-check-counter workunit/fs/test_o_trunc} wsync/{yes}} 3
dead 5911601 2021-02-24 05:59:03 2021-02-24 17:49:36 2021-02-24 18:07:48 0:18:12 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/iogen} wsync/{no}} 3
Failure Reason: Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 5911602 2021-02-24 05:59:06 2021-02-24 17:51:56 2021-02-24 18:17:35 0:25:39 0:13:57 0:11:42 gibba master centos 8.2 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
pass 5911603 2021-02-24 05:59:07 2021-02-24 17:52:47 2021-02-24 18:23:37 0:30:50 0:19:24 0:11:26 gibba master centos 8.2 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/client-limits} 2
fail 5911604 2021-02-24 05:59:09 2021-02-24 17:52:58 2021-02-24 18:25:16 0:32:18 0:25:00 0:07:18 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/no tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} 3
Failure Reason: SSH connection to gibba002 was lost: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage nsenter --net=/var/run/netns/ceph-ns--home-ubuntu-cephtest-mnt.0 /bin/mount -t ceph :/ /home/ubuntu/cephtest/mnt.0 -v -o norequire_active_mds,name=0,conf=/etc/ceph/ceph.conf,norbytes,nowsync'
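Two of the dead jobs above failed with "Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds", i.e. the reimaging step polls until it succeeds, gives up after a fixed number of attempts, and also enforces an overall waiting window. A minimal, purely illustrative sketch of that bounded retry-with-timeout pattern (hypothetical names and defaults; this is not teuthology's actual reimaging code):

```python
import time


def wait_with_retries(check, max_tries=60, timeout=900, interval=15):
    """Poll `check()` until it returns True.

    Gives up after `max_tries` attempts or once `timeout` seconds have
    elapsed, whichever comes first -- mirroring the "reached maximum
    tries (60) after waiting for 900 seconds" failure mode in the log
    above. All names and defaults here are illustrative assumptions.
    """
    start = time.monotonic()
    for attempt in range(1, max_tries + 1):
        if check():
            return attempt  # number of attempts it took
        if time.monotonic() - start >= timeout:
            break  # overall waiting window exhausted
        time.sleep(interval)
    raise RuntimeError(
        f"reached maximum tries ({max_tries}) "
        f"after waiting for {timeout} seconds"
    )
```

With these assumed defaults, 60 tries at 15-second intervals spans the same 900-second window the log reports, so either limit can be the one that fires.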