Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6029455 2021-04-08 18:41:17 2021-04-08 18:42:12 2021-04-08 19:47:27 1:05:15 0:45:33 0:19:42 gibba master rhel 8.3 fs/mixed-clients/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{frag_enable osd-asserts whitelist_health whitelist_wrongly_marked_down} tasks/kernel_cfuse_workunits_dbench_iozone} 2
fail 6029456 2021-04-08 18:41:18 2021-04-08 18:42:13 2021-04-08 19:49:28 1:07:15 0:46:55 0:20:20 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/fs/misc}} 3
Failure Reason:

"2021-04-08T19:16:35.767811+0000 mds.c (mds.2) 4 : cluster [WRN] Scrub error on inode 0x10000000264 (/client.0/tmp/testdir/dir1/dir2/dir3) see mds.c log and `damage ls` output for details" in cluster log

pass 6029457 2021-04-08 18:41:19 2021-04-08 18:42:13 2021-04-08 19:47:28 1:05:15 0:46:06 0:19:09 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/ffsb}} 2
pass 6029458 2021-04-08 18:41:20 2021-04-08 18:42:14 2021-04-08 19:53:28 1:11:14 0:51:10 0:20:04 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/ffsb}} 3
pass 6029459 2021-04-08 18:41:21 2021-04-08 18:42:15 2021-04-08 19:19:05 0:36:50 0:18:25 0:18:25 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/osd 2-workunit/suites/pjd}} 2
pass 6029460 2021-04-08 18:41:22 2021-04-08 18:42:15 2021-04-08 19:19:24 0:37:09 0:15:55 0:21:14 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester}} 3
pass 6029461 2021-04-08 18:41:23 2021-04-08 18:42:16 2021-04-08 19:32:24 0:50:08 0:29:56 0:20:12 gibba master rhel 8.3 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/admin} 2
fail 6029462 2021-04-08 18:41:24 2021-04-08 18:42:17 2021-04-08 19:18:55 0:36:38 0:17:14 0:19:24 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/pjd}} 3
Failure Reason:

"2021-04-08T19:16:34.257701+0000 mds.e (mds.0) 29 : cluster [WRN] Scrub error on inode 0x100000001f7 (/client.0/tmp/pjd-fstest-20090130-RC) see mds.e log and `damage ls` output for details" in cluster log

pass 6029463 2021-04-08 18:41:25 2021-04-08 18:42:17 2021-04-08 19:14:24 0:32:07 0:14:20 0:17:47 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/osd 2-workunit/fs/trivial_sync}} 2
dead 6029464 2021-04-08 18:41:26 2021-04-08 18:42:18 2021-04-08 19:34:00 0:51:42 0:31:12 0:20:30 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/blogbench}} 3
Failure Reason:

SSH connection to gibba003 was lost: 'sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0'

pass 6029465 2021-04-08 18:41:27 2021-04-08 18:42:19 2021-04-08 19:47:23 1:05:04 0:45:44 0:19:20 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/ffsb}} 2
pass 6029466 2021-04-08 18:41:28 2021-04-08 18:42:19 2021-04-08 19:22:24 0:40:05 0:18:38 0:21:27 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/fsstress}} 3
dead 6029467 2021-04-08 18:41:29 2021-04-08 18:42:20 2021-04-08 18:54:27 0:12:07 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/iogen}} 3
Failure Reason:

Error reimaging machines: Failed to power on gibba018

pass 6029468 2021-04-08 18:41:30 2021-04-08 18:53:23 2021-04-08 19:23:25 0:30:02 0:16:43 0:13:19 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/pjd}} 2
pass 6029469 2021-04-08 18:41:31 2021-04-08 18:54:34 2021-04-08 19:31:05 0:36:31 0:19:45 0:16:46 gibba master rhel 8.3 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag_enable multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_suites_fsstress}} 2
fail 6029470 2021-04-08 18:41:32 2021-04-08 19:00:07 2021-04-08 19:59:36 0:59:29 0:30:35 0:28:54 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/fs/misc}} 3
Failure Reason:

"2021-04-08T19:41:01.073582+0000 mds.c (mds.0) 14 : cluster [WRN] Scrub error on inode 0x10000000262 (/client.0/tmp/testdir/dir1) see mds.c log and `damage ls` output for details" in cluster log

pass 6029471 2021-04-08 18:41:33 2021-04-08 19:14:44 2021-04-08 19:58:02 0:43:18 0:24:23 0:18:55 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/suites/ffsb}} 3
pass 6029472 2021-04-08 18:41:34 2021-04-08 19:19:00 2021-04-08 19:46:00 0:27:00 0:13:32 0:13:28 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mds 2-workunit/fs/trivial_sync}} 2
pass 6029473 2021-04-08 18:41:36 2021-04-08 19:19:11 2021-04-08 19:48:54 0:29:43 0:14:37 0:15:06 gibba master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/suites/fsync-tester}} 3
dead 6029474 2021-04-08 18:41:37 2021-04-08 19:19:31 2021-04-08 20:06:57 0:47:26 0:33:10 0:14:16 gibba master rhel 8.3 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
Failure Reason:

SSH connection to gibba017 was lost: 'sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0'