Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 5834082 2021-01-28 02:03:09 2021-01-28 02:05:23 2021-01-28 02:59:20 0:53:57 0:38:07 0:15:50 smithi master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes tasks/{0-check-counter workunit/fs/misc}} 3
Failure Reason:

"2021-01-28T02:33:17.752312+0000 mds.c (mds.0) 44 : cluster [WRN] Scrub error on inode 0x1000000788a (/client.0/tmp/payload.1/multiple_rsync_payload.146433/grub) see mds.c log and `damage ls` output for details" in cluster log

dead 5834084 2021-01-28 02:03:11 2021-01-28 02:05:23 2021-01-28 14:07:52 12:02:29 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/iogen}} 3
dead 5834086 2021-01-28 02:03:13 2021-01-28 02:05:33 2021-01-28 03:41:34 1:36:01 1:09:12 0:26:49 smithi master ubuntu 20.04 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes tasks/{0-check-counter workunit/fs/misc}} 3
Failure Reason:

[Errno 113] No route to host

fail 5834088 2021-01-28 02:03:15 2021-01-28 02:05:48 2021-01-28 03:15:48 1:10:00 1:01:06 0:08:54 smithi master rhel 8.3 fs/workload/{begin clusters/1a5s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/5 scrub/yes tasks/{0-check-counter workunit/suites/blogbench}} 3
Failure Reason:

"2021-01-28T02:22:52.048007+0000 mds.c (mds.0) 14 : cluster [WRN] Scrub error on inode 0x10000000643 (/client.0/tmp/blogbench-1.0/src/blogtest_in/blog-5) see mds.c log and `damage ls` output for details" in cluster log

pass 5834090 2021-01-28 02:03:17 2021-01-28 02:09:57 2021-01-28 02:39:55 0:29:58 0:16:05 0:13:53 smithi master ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mon 2-workunit/suites/pjd}} 2
fail 5834092 2021-01-28 02:03:19 2021-01-28 02:12:38 2021-01-28 03:50:38 1:38:00 1:23:06 0:14:54 smithi master centos 8.2 fs/verify/{begin centos_latest clusters/1a5s-mds-1c-client conf/{client mds mon osd} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable mon-debug session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 tasks/fsstress validater/valgrind} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi134 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=164b0f632a5fe3384fa7cbb10929c65ed7cc0f12 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'