Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 5250472 2020-07-23 07:50:06 2020-07-23 07:50:12 2020-07-23 08:44:13 0:54:01 0:34:40 0:19:21 smithi master ubuntu 18.04 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250473 2020-07-23 07:50:07 2020-07-23 07:50:12 2020-07-23 08:16:12 0:26:00 0:11:06 0:14:54 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/acls-fuse-client} 2
pass 5250474 2020-07-23 07:50:08 2020-07-23 07:50:13 2020-07-23 09:32:15 1:42:02 1:29:55 0:12:07 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_kernel_untar_build} 2
pass 5250475 2020-07-23 07:50:09 2020-07-23 07:50:14 2020-07-23 08:10:14 0:20:00 0:09:24 0:10:36 smithi master fs/bugs/client_trim_caps/{begin clusters/small-cluster conf/{client mds mon osd} objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/trim-i22073} 1
pass 5250476 2020-07-23 07:50:10 2020-07-23 07:50:31 2020-07-23 09:24:33 1:34:02 0:19:01 1:15:01 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cephfs_misc_tests} 4
pass 5250477 2020-07-23 07:50:11 2020-07-23 07:50:40 2020-07-23 08:48:41 0:58:01 0:45:08 0:12:53 smithi master rhel 8.1 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/failover} 2
fail 5250478 2020-07-23 07:50:12 2020-07-23 07:51:38 2020-07-23 08:49:38 0:58:00 0:36:28 0:21:32 smithi master rhel 8.1 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_misc} 2
Failure Reason:

Command failed (workunit test fs/misc/acl.sh) on smithi200 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/acl.sh'

fail 5250479 2020-07-23 07:50:13 2020-07-23 07:52:14 2020-07-23 08:36:15 0:44:01 0:32:53 0:11:08 smithi master centos 8.1 fs/snaps/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-0.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-0.sh'

fail 5250480 2020-07-23 07:50:14 2020-07-23 07:54:17 2020-07-23 08:40:18 0:46:01 0:22:55 0:23:06 smithi master rhel 8.1 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-1.sh) on smithi125 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-1.sh'

pass 5250481 2020-07-23 07:50:15 2020-07-23 07:54:18 2020-07-23 08:30:18 0:36:00 0:24:34 0:11:26 smithi master rhel 8.1 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_blogbench traceless/50pc} 2
dead 5250482 2020-07-23 07:50:16 2020-07-23 07:54:35 2020-07-23 19:57:12 12:02:37 smithi master fs/upgrade/featureful_client/old_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-compat_client/no}} 3
pass 5250483 2020-07-23 07:50:17 2020-07-23 07:55:16 2020-07-23 08:57:16 1:02:00 0:53:14 0:08:46 smithi master centos 8.1 fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench validater/lockdep} 2
pass 5250484 2020-07-23 07:50:18 2020-07-23 07:56:18 2020-07-23 09:08:20 1:12:02 0:52:18 0:19:44 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_misc} 2
pass 5250485 2020-07-23 07:50:19 2020-07-23 07:57:43 2020-07-23 08:31:43 0:34:00 0:18:02 0:15:58 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/admin} 2
pass 5250486 2020-07-23 07:50:20 2020-07-23 07:58:11 2020-07-23 08:38:12 0:40:01 0:24:40 0:15:21 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_misc_test_o_trunc} 2
pass 5250487 2020-07-23 07:50:21 2020-07-23 07:58:21 2020-07-23 08:32:21 0:34:00 0:18:17 0:15:43 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_norstats} 2
pass 5250488 2020-07-23 07:50:22 2020-07-23 07:58:26 2020-07-23 08:36:26 0:38:00 0:14:50 0:23:10 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/alternate-pool} 2
pass 5250489 2020-07-23 07:50:23 2020-07-23 07:58:31 2020-07-23 08:34:31 0:36:00 0:21:23 0:14:37 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/filestore-xfs omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_blogbench} 2
fail 5250490 2020-07-23 07:50:24 2020-07-23 07:58:31 2020-07-23 08:34:31 0:36:00 0:23:37 0:12:23 smithi master ubuntu 18.04 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi172 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250491 2020-07-23 07:50:25 2020-07-23 07:59:51 2020-07-23 08:55:52 0:56:01 0:45:38 0:10:23 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_dbench} 2
pass 5250492 2020-07-23 07:50:26 2020-07-23 08:00:11 2020-07-23 08:30:11 0:30:00 0:21:01 0:08:59 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/asok_dump_tree} 2
pass 5250493 2020-07-23 07:50:27 2020-07-23 08:00:16 2020-07-23 08:54:17 0:54:01 0:47:53 0:06:08 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_ffsb} 2
fail 5250494 2020-07-23 07:50:28 2020-07-23 08:02:22 2020-07-23 09:14:23 1:12:01 0:36:33 0:35:28 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250495 2020-07-23 07:50:29 2020-07-23 08:02:22 2020-07-23 08:24:22 0:22:00 0:10:46 0:11:14 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/auto-repair} 2
pass 5250496 2020-07-23 07:50:30 2020-07-23 08:02:22 2020-07-23 08:54:23 0:52:01 0:29:44 0:22:17 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_fsx} 2
fail 5250497 2020-07-23 07:50:31 2020-07-23 08:02:26 2020-07-23 08:26:26 0:24:00 0:15:51 0:08:09 smithi master rhel 8.1 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Found coredumps on ubuntu@smithi157.front.sepia.ceph.com

pass 5250498 2020-07-23 07:50:32 2020-07-23 08:02:36 2020-07-23 08:24:35 0:21:59 0:14:56 0:07:03 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/backtrace} 2
pass 5250499 2020-07-23 07:50:33 2020-07-23 08:04:00 2020-07-23 08:34:00 0:30:00 0:11:51 0:18:09 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/filestore-xfs omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsync} 2
pass 5250500 2020-07-23 07:50:34 2020-07-23 08:04:16 2020-07-23 08:52:16 0:48:00 0:30:36 0:17:24 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_iogen} 2
fail 5250501 2020-07-23 07:50:36 2020-07-23 08:04:22 2020-07-23 08:24:21 0:19:59 0:12:08 0:07:51 smithi master centos 8.1 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

"2020-07-23T08:19:53.491931+0000 mon.a (mon.0) 250 : cluster [WRN] Replacing daemon mds.b as rank 0 with standby daemon mds.a" in cluster log

pass 5250502 2020-07-23 07:50:37 2020-07-23 08:06:26 2020-07-23 08:32:26 0:26:00 0:10:31 0:15:29 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cap-flush} 2
pass 5250503 2020-07-23 07:50:37 2020-07-23 08:06:26 2020-07-23 09:46:28 1:40:02 0:11:33 1:28:29 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/ior-shared-file} 5
fail 5250504 2020-07-23 07:50:38 2020-07-23 08:06:26 2020-07-23 09:00:27 0:54:01 0:35:00 0:19:01 smithi master ubuntu 18.04 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 5250505 2020-07-23 07:50:39 2020-07-23 08:08:22 2020-07-23 08:32:22 0:24:00 0:17:25 0:06:35 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_iozone} 2
pass 5250506 2020-07-23 07:50:40 2020-07-23 08:08:22 2020-07-23 08:52:23 0:44:01 0:12:15 0:31:46 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_pjd} 2
pass 5250507 2020-07-23 07:50:41 2020-07-23 08:08:22 2020-07-23 08:40:22 0:32:00 0:10:42 0:21:18 smithi master centos 8.1 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_trivial_sync} 2
fail 5250508 2020-07-23 07:50:42 2020-07-23 08:08:36 2020-07-23 08:56:36 0:48:00 0:33:55 0:14:05 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cephfs-shell} 2
Failure Reason:

"2020-07-23T08:28:18.232412+0000 mon.b (mon.0) 599 : cluster [WRN] Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster log

pass 5250509 2020-07-23 07:50:43 2020-07-23 08:10:31 2020-07-23 08:28:31 0:18:00 0:10:30 0:07:30 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_truncate_delay} 2
pass 5250510 2020-07-23 07:50:44 2020-07-23 08:10:31 2020-07-23 08:32:31 0:22:00 0:13:42 0:08:18 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/filestore-xfs omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_trivial_sync} 2
pass 5250511 2020-07-23 07:50:45 2020-07-23 08:10:32 2020-07-23 08:44:31 0:33:59 0:18:34 0:15:25 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cephfs_scrub_tests} 2
fail 5250512 2020-07-23 07:50:46 2020-07-23 08:10:47 2020-07-23 11:34:53 3:24:06 3:11:32 0:12:34 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/libcephfs_interface_tests} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi136 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

pass 5250513 2020-07-23 07:50:47 2020-07-23 08:12:23 2020-07-23 09:54:25 1:42:02 1:17:45 0:24:17 smithi master ubuntu 18.04 fs/upgrade/volumes/import-legacy/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-verify} ubuntu_18.04} 2
fail 5250514 2020-07-23 07:50:48 2020-07-23 08:12:23 2020-07-23 09:00:23 0:48:00 0:37:32 0:10:28 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_quota} 2
Failure Reason:

Command failed (workunit test fs/quota/quota.sh) on smithi203 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/quota/quota.sh'

pass 5250515 2020-07-23 07:50:49 2020-07-23 08:12:23 2020-07-23 10:12:26 2:00:03 1:40:19 0:19:44 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_kernel_untar_build} 2
fail 5250516 2020-07-23 07:50:50 2020-07-23 08:16:23 2020-07-23 08:42:22 0:25:59 0:18:52 0:07:07 smithi master centos 8.1 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-0.sh) on smithi151 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-0.sh'

pass 5250517 2020-07-23 07:50:51 2020-07-23 08:16:23 2020-07-23 09:28:24 1:12:01 1:03:31 0:08:30 smithi master rhel 8.1 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_dbench traceless/50pc} 2
fail 5250518 2020-07-23 07:50:52 2020-07-23 08:16:23 2020-07-23 09:18:24 1:02:01 0:55:43 0:06:18 smithi master centos 8.1 fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress validater/valgrind} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi028 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250519 2020-07-23 07:50:52 2020-07-23 08:16:23 2020-07-23 09:18:24 1:02:01 0:47:17 0:14:44 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_misc} 2
pass 5250520 2020-07-23 07:50:53 2020-07-23 08:16:32 2020-07-23 08:44:32 0:28:00 0:18:20 0:09:40 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/client-limits} 2
pass 5250521 2020-07-23 07:50:54 2020-07-23 08:18:22 2020-07-23 08:50:22 0:32:00 0:24:01 0:07:59 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_misc_test_o_trunc} 2
pass 5250522 2020-07-23 07:50:55 2020-07-23 08:18:22 2020-07-23 08:42:22 0:24:00 0:17:06 0:06:54 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/filestore-xfs omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_norstats} 2
pass 5250523 2020-07-23 07:50:56 2020-07-23 08:18:23 2020-07-23 08:34:23 0:16:00 0:11:09 0:04:51 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/client-readahad} 2
pass 5250524 2020-07-23 07:50:58 2020-07-23 08:18:27 2020-07-23 08:52:27 0:34:00 0:24:49 0:09:11 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_blogbench} 2
fail 5250525 2020-07-23 07:50:58 2020-07-23 08:20:20 2020-07-23 08:54:20 0:34:00 0:24:00 0:10:00 smithi master centos 8.1 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250526 2020-07-23 07:50:59 2020-07-23 08:20:20 2020-07-23 09:24:21 1:04:01 0:51:03 0:12:58 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_dbench} 2
fail 5250527 2020-07-23 07:51:00 2020-07-23 08:20:21 2020-07-23 09:10:22 0:50:01 0:34:21 0:15:40 smithi master ubuntu 18.04 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250528 2020-07-23 07:51:01 2020-07-23 08:20:21 2020-07-23 09:16:22 0:56:01 0:39:19 0:16:42 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/client-recovery} 2
pass 5250529 2020-07-23 07:51:02 2020-07-23 08:20:22 2020-07-23 09:08:22 0:48:00 0:12:40 0:35:20 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/mdtest} 4
pass 5250530 2020-07-23 07:51:03 2020-07-23 08:20:24 2020-07-23 09:14:24 0:54:00 0:43:33 0:10:27 smithi master ubuntu 18.04 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/failover} 2
fail 5250531 2020-07-23 07:51:04 2020-07-23 08:21:00 2020-07-23 09:07:01 0:46:01 0:32:28 0:13:33 smithi master ubuntu 18.04 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_misc} 2
Failure Reason:

Command failed (workunit test fs/misc/acl.sh) on smithi106 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/acl.sh'

fail 5250532 2020-07-23 07:51:05 2020-07-23 08:22:32 2020-07-23 09:04:32 0:42:00 0:33:09 0:08:51 smithi master centos 8.1 fs/snaps/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-0.sh) on smithi022 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-0.sh'

pass 5250533 2020-07-23 07:51:06 2020-07-23 08:22:32 2020-07-23 09:04:32 0:42:00 0:32:25 0:09:35 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_ffsb} 2
fail 5250534 2020-07-23 07:51:07 2020-07-23 08:24:26 2020-07-23 09:12:26 0:48:00 0:35:06 0:12:54 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

fail 5250535 2020-07-23 07:51:08 2020-07-23 08:24:26 2020-07-23 09:04:26 0:40:00 0:19:11 0:20:49 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/damage} 2
Failure Reason:

"2020-07-23T08:58:49.637610+0000 mon.b (mon.0) 1587 : cluster [WRN] Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster log

pass 5250536 2020-07-23 07:51:09 2020-07-23 08:24:26 2020-07-23 08:56:26 0:32:00 0:17:21 0:14:39 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/filestore-xfs omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_fsx} 2
fail 5250537 2020-07-23 07:51:10 2020-07-23 08:24:37 2020-07-23 09:08:37 0:44:00 0:25:01 0:18:59 smithi master rhel 8.1 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi061 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

fail 5250538 2020-07-23 07:51:11 2020-07-23 08:26:36 2020-07-23 08:58:36 0:32:00 0:25:48 0:06:12 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/data-scan} 2
Failure Reason:

"2020-07-23T08:53:04.507651+0000 mon.b (mon.0) 1383 : cluster [WRN] Replacing daemon mds.b as rank 0 with standby daemon mds.a" in cluster log

pass 5250539 2020-07-23 07:51:12 2020-07-23 08:26:37 2020-07-23 09:02:36 0:35:59 0:13:28 0:22:31 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsync} 2
pass 5250540 2020-07-23 07:51:13 2020-07-23 08:26:36 2020-07-23 09:18:37 0:52:01 0:40:32 0:11:29 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_iogen} 2
pass 5250541 2020-07-23 07:51:14 2020-07-23 08:27:58 2020-07-23 08:53:58 0:26:00 0:13:32 0:12:28 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/forward-scrub} 2
pass 5250542 2020-07-23 07:51:15 2020-07-23 08:28:05 2020-07-23 08:52:04 0:23:59 0:14:33 0:09:26 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_iozone} 2
fail 5250543 2020-07-23 07:51:16 2020-07-23 08:28:32 2020-07-23 08:50:32 0:22:00 0:12:44 0:09:16 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

"2020-07-23T08:44:53.707641+0000 mon.b (mon.0) 150 : cluster [WRN] Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster log

pass 5250544 2020-07-23 07:51:17 2020-07-23 08:30:34 2020-07-23 08:54:34 0:24:00 0:14:39 0:09:21 smithi master rhel 8.1 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_trivial_sync} 2
fail 5250545 2020-07-23 07:51:18 2020-07-23 08:30:34 2020-07-23 09:16:34 0:46:00 0:37:41 0:08:19 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/fragment} 2
Failure Reason:

Test failure: test_split_straydir (tasks.cephfs.test_fragment.TestFragmentation)

pass 5250546 2020-07-23 07:51:19 2020-07-23 08:30:34 2020-07-23 08:48:33 0:17:59 0:10:25 0:07:34 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/filestore-xfs omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_truncate_delay} 2
pass 5250547 2020-07-23 07:51:20 2020-07-23 08:32:02 2020-07-23 08:52:01 0:19:59 0:10:28 0:09:31 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_trivial_sync} 2
fail 5250548 2020-07-23 07:51:21 2020-07-23 08:32:22 2020-07-23 08:58:22 0:26:00 0:14:31 0:11:29 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/journal-repair} 2
Failure Reason:

Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair)

dead 5250549 2020-07-23 07:51:22 2020-07-23 08:32:24 2020-07-23 20:35:01 12:02:37 smithi master fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
fail 5250550 2020-07-23 07:51:23 2020-07-23 08:32:27 2020-07-23 11:56:33 3:24:06 3:11:43 0:12:23 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/libcephfs_interface_tests} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi122 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

fail 5250551 2020-07-23 07:51:24 2020-07-23 08:32:28 2020-07-23 09:04:28 0:32:00 0:12:32 0:19:28 smithi master ubuntu 18.04 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

"2020-07-23T08:59:48.406445+0000 mon.a (mon.0) 172 : cluster [WRN] Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster log

fail 5250552 2020-07-23 07:51:25 2020-07-23 08:32:32 2020-07-23 08:54:32 0:22:00 0:14:33 0:07:27 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/libcephfs_python} 2
Failure Reason:

"2020-07-23T08:48:01.332354+0000 mon.b (mon.0) 149 : cluster [WRN] Replacing daemon mds.b as rank 0 with standby daemon mds.a" in cluster log

pass 5250553 2020-07-23 07:51:26 2020-07-23 08:34:07 2020-07-23 10:06:09 1:32:02 1:25:57 0:06:05 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_kernel_untar_build} 2
pass 5250554 2020-07-23 07:51:27 2020-07-23 08:34:08 2020-07-23 11:06:11 2:32:03 0:19:25 2:12:38 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
fail 5250555 2020-07-23 07:51:28 2020-07-23 08:34:12 2020-07-23 09:20:13 0:46:01 0:38:47 0:07:14 smithi master rhel 8.1 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi060 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

fail 5250556 2020-07-23 07:51:29 2020-07-23 08:34:24 2020-07-23 09:20:24 0:46:00 0:19:12 0:26:48 smithi master ubuntu 18.04 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snap-rm-diff.sh) on smithi158 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snap-rm-diff.sh'

pass 5250557 2020-07-23 07:51:30 2020-07-23 08:34:33 2020-07-23 09:26:33 0:52:00 0:42:10 0:09:50 smithi master ubuntu 18.04 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_ffsb traceless/50pc} 2
pass 5250558 2020-07-23 07:51:31 2020-07-23 08:34:33 2020-07-23 09:34:34 1:00:01 0:53:35 0:06:26 smithi master centos 8.1 fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench validater/lockdep} 2
pass 5250559 2020-07-23 07:51:32 2020-07-23 08:36:33 2020-07-23 09:40:34 1:04:01 0:51:56 0:12:05 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_misc} 2
pass 5250560 2020-07-23 07:51:33 2020-07-23 08:36:33 2020-07-23 08:58:33 0:22:00 0:10:33 0:11:27 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/mds-flush} 2
pass 5250561 2020-07-23 07:51:34 2020-07-23 08:38:29 2020-07-23 09:10:29 0:32:00 0:20:07 0:11:53 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/filestore-xfs omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_misc_test_o_trunc} 2
pass 5250562 2020-07-23 07:51:35 2020-07-23 08:38:29 2020-07-23 09:02:29 0:24:00 0:13:35 0:10:25 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_norstats} 2
pass 5250563 2020-07-23 07:51:36 2020-07-23 08:40:34 2020-07-23 09:12:34 0:32:00 0:22:08 0:09:52 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/mds-full} 2
pass 5250564 2020-07-23 07:51:37 2020-07-23 08:40:34 2020-07-23 09:12:34 0:32:00 0:19:17 0:12:43 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_blogbench} 2
fail 5250565 2020-07-23 07:51:38 2020-07-23 08:42:35 2020-07-23 09:22:35 0:40:00 0:23:30 0:16:30 smithi master ubuntu 18.04 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi014 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250566 2020-07-23 07:51:39 2020-07-23 08:42:35 2020-07-23 09:46:36 1:04:01 0:46:22 0:17:39 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_dbench} 2
pass 5250567 2020-07-23 07:51:40 2020-07-23 08:42:35 2020-07-23 09:08:35 0:26:00 0:11:06 0:14:54 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/mds_creation_retry} 2
pass 5250568 2020-07-23 07:51:41 2020-07-23 08:44:28 2020-07-23 09:42:29 0:58:01 0:47:51 0:10:10 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_ffsb} 2
fail 5250569 2020-07-23 07:51:42 2020-07-23 08:44:33 2020-07-23 09:14:33 0:30:00 0:21:37 0:08:23 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/filestore-xfs omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi103 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250570 2020-07-23 07:51:43 2020-07-23 08:44:34 2020-07-23 09:02:33 0:17:59 0:11:26 0:06:33 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/openfiletable} 2
pass 5250571 2020-07-23 07:51:44 2020-07-23 08:46:21 2020-07-23 09:16:21 0:30:00 0:22:02 0:07:58 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_fsx} 2
fail 5250572 2020-07-23 07:51:45 2020-07-23 08:46:21 2020-07-23 09:18:21 0:32:00 0:19:58 0:12:02 smithi master ubuntu 18.04 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi200 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 5250573 2020-07-23 07:51:46 2020-07-23 08:46:46 2020-07-23 09:10:45 0:23:59 0:11:26 0:12:33 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/pool-perm} 2
pass 5250574 2020-07-23 07:51:47 2020-07-23 08:48:22 2020-07-23 09:20:22 0:32:00 0:24:36 0:07:24 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_fsync} 2
pass 5250575 2020-07-23 07:51:48 2020-07-23 08:48:35 2020-07-23 09:34:35 0:46:00 0:37:32 0:08:28 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_iogen} 2
fail 5250576 2020-07-23 07:51:49 2020-07-23 08:48:43 2020-07-23 09:16:43 0:28:00 0:21:54 0:06:06 smithi master rhel 8.1 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/filestore-xfs overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi081 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

fail 5250577 2020-07-23 07:51:50 2020-07-23 08:49:57 2020-07-23 09:13:56 0:23:59 0:10:17 0:13:42 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/quota} 2
Failure Reason:

Test failure: test_remote_update_df (tasks.cephfs.test_quota.TestQuota)

pass 5250578 2020-07-23 07:51:51 2020-07-23 08:50:23 2020-07-23 09:36:24 0:46:01 0:10:00 0:36:01 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/ior-shared-file} 4
pass 5250579 2020-07-23 07:51:52 2020-07-23 08:50:36 2020-07-23 09:42:36 0:52:00 0:46:15 0:05:45 smithi master rhel 8.1 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/failover} 2
fail 5250580 2020-07-23 07:51:53 2020-07-23 08:52:18 2020-07-23 09:36:18 0:44:00 0:36:02 0:07:58 smithi master rhel 8.1 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_misc} 2
Failure Reason:

Command failed (workunit test fs/misc/acl.sh) on smithi171 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/acl.sh'

fail 5250581 2020-07-23 07:51:54 2020-07-23 08:52:18 2020-07-23 09:46:19 0:54:01 0:32:51 0:21:10 smithi master ubuntu 18.04 fs/snaps/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-0.sh) on smithi072 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-0.sh'

pass 5250582 2020-07-23 07:51:55 2020-07-23 08:52:18 2020-07-23 09:16:18 0:24:00 0:13:25 0:10:35 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_iozone} 2
fail 5250583 2020-07-23 07:51:56 2020-07-23 08:52:25 2020-07-23 09:38:24 0:45:59 0:33:58 0:12:01 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/filestore-xfs omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi132 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 5250584 2020-07-23 07:51:56 2020-07-23 08:52:28 2020-07-23 09:14:28 0:22:00 0:11:35 0:10:25 smithi master ubuntu 18.04 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_trivial_sync} 2
fail 5250585 2020-07-23 07:51:57 2020-07-23 08:54:14 2020-07-23 09:18:14 0:24:00 0:15:37 0:08:23 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/sessionmap/{sessionmap}} 2
Failure Reason:

Test failure: test_session_evict_blacklisted (tasks.cephfs.test_sessionmap.TestSessionMap)

pass 5250586 2020-07-23 07:51:58 2020-07-23 08:54:18 2020-07-23 09:14:17 0:19:59 0:13:52 0:06:07 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_truncate_delay} 2
pass 5250587 2020-07-23 07:51:59 2020-07-23 08:54:21 2020-07-23 09:30:21 0:36:00 0:22:10 0:13:50 smithi master fs/upgrade/featureful_client/old_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-compat_client/pacific}} 3
pass 5250588 2020-07-23 07:52:00 2020-07-23 08:54:24 2020-07-23 09:12:24 0:18:00 0:10:21 0:07:39 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_trivial_sync} 2
fail 5250589 2020-07-23 07:52:01 2020-07-23 08:54:32 2020-07-23 09:14:31 0:19:59 0:13:30 0:06:29 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/strays} 2
Failure Reason:

Test failure: test_hardlink_reintegration (tasks.cephfs.test_strays.TestStrays)

fail 5250590 2020-07-23 07:52:02 2020-07-23 08:54:33 2020-07-23 12:12:39 3:18:06 3:11:46 0:06:20 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/libcephfs_interface_tests} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi142 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

fail 5250591 2020-07-23 07:52:03 2020-07-23 08:54:36 2020-07-23 09:30:35 0:35:59 0:21:50 0:14:09 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/test_journal_migration} 2
Failure Reason:

Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)

pass 5250592 2020-07-23 07:52:04 2020-07-23 08:56:10 2020-07-23 10:50:13 1:54:03 1:41:20 0:12:43 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_kernel_untar_build} 2
fail 5250593 2020-07-23 07:52:05 2020-07-23 08:56:10 2020-07-23 09:26:10 0:30:00 0:19:18 0:10:42 smithi master centos 8.1 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-0.sh) on smithi029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-0.sh'

fail 5250594 2020-07-23 07:52:06 2020-07-23 08:56:27 2020-07-23 09:44:28 0:48:01 0:35:30 0:12:31 smithi master ubuntu 18.04 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress traceless/50pc} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi092 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

fail 5250595 2020-07-23 07:52:07 2020-07-23 08:56:38 2020-07-23 10:02:38 1:06:00 0:56:57 0:09:03 smithi master centos 8.1 fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress validater/valgrind} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi203 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250596 2020-07-23 07:52:08 2020-07-23 08:57:38 2020-07-23 09:47:39 0:50:01 0:42:43 0:07:18 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/filestore-xfs omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_misc} 2
fail 5250597 2020-07-23 07:52:09 2020-07-23 08:58:23 2020-07-23 09:18:23 0:20:00 0:12:02 0:07:58 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/volume-client/{task/test/{test}}} 2
Failure Reason:

Test failure: test_21501 (tasks.cephfs.test_volume_client.TestVolumeClient)

pass 5250598 2020-07-23 07:52:10 2020-07-23 08:58:34 2020-07-23 09:30:34 0:32:00 0:20:42 0:11:18 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_misc_test_o_trunc} 2
pass 5250599 2020-07-23 07:52:11 2020-07-23 08:58:52 2020-07-23 09:22:52 0:24:00 0:17:48 0:06:12 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_norstats} 2
fail 5250600 2020-07-23 07:52:12 2020-07-23 09:00:41 2020-07-23 09:56:41 0:56:00 0:40:07 0:15:53 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/volumes} 2
Failure Reason:

Test failure: test_subvolume_pin_export (tasks.cephfs.test_volumes.TestVolumes)

pass 5250601 2020-07-23 07:52:12 2020-07-23 09:00:41 2020-07-23 09:30:41 0:30:00 0:21:19 0:08:41 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_blogbench} 2
fail 5250602 2020-07-23 07:52:13 2020-07-23 09:00:41 2020-07-23 09:40:41 0:40:00 0:23:44 0:16:16 smithi master ubuntu 18.04 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi019 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250603 2020-07-23 07:52:14 2020-07-23 09:00:41 2020-07-23 10:08:42 1:08:01 0:50:13 0:17:48 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_dbench} 2
fail 5250604 2020-07-23 07:52:15 2020-07-23 09:02:48 2020-07-23 09:48:48 0:46:00 0:34:16 0:11:44 smithi master ubuntu 18.04 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi163 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 5250605 2020-07-23 07:52:16 2020-07-23 09:02:48 2020-07-23 09:22:47 0:19:59 0:11:52 0:08:07 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/acls-fuse-client} 2
pass 5250606 2020-07-23 07:52:17 2020-07-23 09:02:48 2020-07-23 09:54:48 0:52:00 0:19:57 0:32:03 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/mdtest} 5
fail 5250607 2020-07-23 07:52:18 2020-07-23 09:04:44 2020-07-23 09:50:45 0:46:01 0:34:29 0:11:32 smithi master ubuntu 18.04 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi188 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 5250608 2020-07-23 07:52:18 2020-07-23 09:04:44 2020-07-23 09:46:45 0:42:01 0:32:56 0:09:05 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/filestore-xfs omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_ffsb} 2
fail 5250609 2020-07-23 07:52:19 2020-07-23 09:04:45 2020-07-23 09:48:45 0:44:00 0:34:06 0:09:54 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi205 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250610 2020-07-23 07:52:20 2020-07-23 09:04:44 2020-07-23 09:30:44 0:26:00 0:18:01 0:07:59 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/admin} 2
pass 5250611 2020-07-23 07:52:21 2020-07-23 09:07:18 2020-07-23 09:57:19 0:50:01 0:34:06 0:15:55 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsx} 2
pass 5250612 2020-07-23 07:52:22 2020-07-23 09:08:06 2020-07-23 09:32:05 0:23:59 0:16:48 0:07:11 smithi master rhel 8.1 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_pjd} 2
pass 5250613 2020-07-23 07:52:23 2020-07-23 09:08:36 2020-07-23 09:24:36 0:16:00 0:10:43 0:05:17 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/alternate-pool} 2
pass 5250614 2020-07-23 07:52:24 2020-07-23 09:08:36 2020-07-23 09:32:36 0:24:00 0:13:16 0:10:44 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsync} 2
pass 5250615 2020-07-23 07:52:25 2020-07-23 09:08:36 2020-07-23 09:50:37 0:42:01 0:31:49 0:10:12 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_iogen} 2
pass 5250616 2020-07-23 07:52:26 2020-07-23 09:08:38 2020-07-23 09:38:38 0:30:00 0:21:14 0:08:46 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/asok_dump_tree} 2
pass 5250617 2020-07-23 07:52:26 2020-07-23 09:10:39 2020-07-23 09:36:38 0:25:59 0:18:34 0:07:25 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/filestore-xfs omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_iozone} 2
pass 5250618 2020-07-23 07:52:27 2020-07-23 09:10:39 2020-07-23 09:30:38 0:19:59 0:11:32 0:08:27 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_pjd} 2
pass 5250619 2020-07-23 07:52:28 2020-07-23 09:10:46 2020-07-23 09:28:46 0:18:00 0:11:12 0:06:48 smithi master centos 8.1 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_trivial_sync} 2
pass 5250620 2020-07-23 07:52:29 2020-07-23 09:12:45 2020-07-23 09:34:45 0:22:00 0:14:45 0:07:15 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/auto-repair} 2
dead 5250621 2020-07-23 07:52:30 2020-07-23 09:12:46 2020-07-23 21:15:23 12:02:37 smithi master fs/upgrade/featureful_client/old_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-compat_client/no}} 3
pass 5250622 2020-07-23 07:52:31 2020-07-23 09:12:46 2020-07-23 09:34:45 0:21:59 0:14:10 0:07:49 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_truncate_delay} 2
pass 5250623 2020-07-23 07:52:32 2020-07-23 09:12:46 2020-07-23 09:30:45 0:17:59 0:10:05 0:07:54 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_trivial_sync} 2
pass 5250624 2020-07-23 07:52:33 2020-07-23 09:14:16 2020-07-23 09:36:16 0:22:00 0:14:26 0:07:34 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/backtrace} 2
fail 5250625 2020-07-23 07:52:34 2020-07-23 09:14:19 2020-07-23 10:20:20 1:06:01 0:12:32 0:53:29 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/libcephfs_interface_tests} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi197 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

fail 5250626 2020-07-23 07:52:35 2020-07-23 09:14:25 2020-07-23 10:00:25 0:46:00 0:34:58 0:11:02 smithi master ubuntu 18.04 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi141 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250627 2020-07-23 07:52:35 2020-07-23 09:14:26 2020-07-23 09:34:25 0:19:59 0:10:15 0:09:44 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cap-flush} 2
pass 5250628 2020-07-23 07:52:36 2020-07-23 09:14:29 2020-07-23 10:50:31 1:36:02 1:26:22 0:09:40 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/filestore-xfs omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_kernel_untar_build} 2
pass 5250629 2020-07-23 07:52:37 2020-07-23 09:14:33 2020-07-23 10:04:33 0:50:00 0:18:43 0:31:17 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cephfs_misc_tests} 4
pass 5250630 2020-07-23 07:52:38 2020-07-23 09:14:34 2020-07-23 10:08:35 0:54:01 0:43:19 0:10:42 smithi master ubuntu 18.04 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/failover} 2
fail 5250631 2020-07-23 07:52:39 2020-07-23 09:16:36 2020-07-23 10:00:36 0:44:00 0:37:21 0:06:39 smithi master rhel 8.1 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_misc} 2
Failure Reason:

Command failed (workunit test fs/misc/acl.sh) on smithi057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/acl.sh'

fail 5250632 2020-07-23 07:52:40 2020-07-23 09:16:36 2020-07-23 10:00:36 0:44:00 0:36:43 0:07:17 smithi master rhel 8.1 fs/snaps/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-0.sh) on smithi200 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-0.sh'

fail 5250633 2020-07-23 07:52:41 2020-07-23 09:16:36 2020-07-23 09:44:36 0:28:00 0:18:55 0:09:05 smithi master centos 8.1 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-0.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-0.sh'

pass 5250634 2020-07-23 07:52:41 2020-07-23 09:16:37 2020-07-23 09:56:37 0:40:00 0:20:37 0:19:23 smithi master centos 8.1 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_blogbench traceless/50pc} 2
pass 5250635 2020-07-23 07:52:42 2020-07-23 09:16:37 2020-07-23 10:16:37 1:00:00 0:50:21 0:09:39 smithi master centos 8.1 fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/filestore-xfs overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench validater/lockdep} 2
pass 5250636 2020-07-23 07:52:43 2020-07-23 09:16:44 2020-07-23 10:14:45 0:58:01 0:46:53 0:11:08 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_misc} 2
fail 5250637 2020-07-23 07:52:44 2020-07-23 09:18:34 2020-07-23 10:04:34 0:46:00 0:34:02 0:11:58 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cephfs-shell} 2
Failure Reason:

"2020-07-23T09:37:26.811313+0000 mon.b (mon.0) 586 : cluster [WRN] Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster log

pass 5250638 2020-07-23 07:52:45 2020-07-23 09:18:34 2020-07-23 09:52:34 0:34:00 0:20:40 0:13:20 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_misc_test_o_trunc} 2
pass 5250639 2020-07-23 07:52:46 2020-07-23 09:18:34 2020-07-23 09:48:33 0:29:59 0:17:24 0:12:35 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_norstats} 2
pass 5250640 2020-07-23 07:52:47 2020-07-23 09:18:34 2020-07-23 09:46:34 0:28:00 0:19:33 0:08:27 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cephfs_scrub_tests} 2
pass 5250641 2020-07-23 07:52:48 2020-07-23 09:18:34 2020-07-23 09:54:34 0:36:00 0:19:33 0:16:27 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_blogbench} 2
fail 5250642 2020-07-23 07:52:49 2020-07-23 09:18:38 2020-07-23 09:52:38 0:34:00 0:23:46 0:10:14 smithi master centos 8.1 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250643 2020-07-23 07:52:50 2020-07-23 09:20:32 2020-07-23 10:44:33 1:24:01 0:54:55 0:29:06 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/filestore-xfs omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_dbench} 2
fail 5250644 2020-07-23 07:52:51 2020-07-23 09:20:32 2020-07-23 10:04:32 0:44:00 0:33:49 0:10:11 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_quota} 2
Failure Reason:

Command failed (workunit test fs/quota/quota.sh) on smithi036 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/quota/quota.sh'

pass 5250645 2020-07-23 07:52:52 2020-07-23 09:20:32 2020-07-23 10:18:33 0:58:01 0:41:13 0:16:48 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_ffsb} 2
fail 5250646 2020-07-23 07:52:53 2020-07-23 09:22:25 2020-07-23 10:12:26 0:50:01 0:35:25 0:14:36 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi129 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250647 2020-07-23 07:52:54 2020-07-23 09:22:37 2020-07-23 09:56:38 0:34:01 0:18:18 0:15:43 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/client-limits} 2
pass 5250648 2020-07-23 07:52:55 2020-07-23 09:22:49 2020-07-23 10:04:49 0:42:00 0:24:31 0:17:29 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsx} 2
fail 5250649 2020-07-23 07:52:55 2020-07-23 09:22:53 2020-07-23 10:04:54 0:42:01 0:24:11 0:17:50 smithi master rhel 8.1 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed on smithi159 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early osd dump --format=json'

pass 5250650 2020-07-23 07:52:56 2020-07-23 09:23:58 2020-07-23 09:49:57 0:25:59 0:14:06 0:11:53 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/client-readahad} 2
pass 5250651 2020-07-23 07:52:57 2020-07-23 09:24:22 2020-07-23 10:00:22 0:36:00 0:20:25 0:15:35 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsync} 2
pass 5250652 2020-07-23 07:52:58 2020-07-23 09:24:34 2020-07-23 10:08:34 0:44:00 0:28:39 0:15:21 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/filestore-xfs omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_iogen} 2
pass 5250653 2020-07-23 07:52:59 2020-07-23 09:24:37 2020-07-23 09:46:37 0:22:00 0:11:34 0:10:26 smithi master centos 8.1 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_pjd} 2
pass 5250654 2020-07-23 07:53:00 2020-07-23 09:26:30 2020-07-23 10:14:31 0:48:01 0:34:51 0:13:10 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/client-recovery} 2
pass 5250655 2020-07-23 07:53:01 2020-07-23 09:26:38 2020-07-23 10:02:38 0:36:00 0:11:14 0:24:46 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/ior-shared-file} 5
fail 5250656 2020-07-23 07:53:02 2020-07-23 09:28:41 2020-07-23 10:10:41 0:42:00 0:34:45 0:07:15 smithi master centos 8.1 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi137 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 5250657 2020-07-23 07:53:03 2020-07-23 09:28:47 2020-07-23 09:52:47 0:24:00 0:16:36 0:07:24 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_iozone} 2
pass 5250658 2020-07-23 07:53:04 2020-07-23 09:30:40 2020-07-23 09:50:38 0:19:58 0:12:00 0:07:58 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_pjd} 2
pass 5250659 2020-07-23 07:53:05 2020-07-23 09:30:40 2020-07-23 09:52:39 0:21:59 0:11:25 0:10:34 smithi master ubuntu 18.04 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_trivial_sync} 2
dead 5250660 2020-07-23 07:53:06 2020-07-23 09:30:40 2020-07-23 21:33:15 12:02:35 smithi master fs/upgrade/featureful_client/upgraded_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-client-upgrade 4-compat_client 5-client-sanity}} 3
fail 5250661 2020-07-23 07:53:06 2020-07-23 09:30:40 2020-07-23 10:02:39 0:31:59 0:22:55 0:09:04 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/damage} 2
Failure Reason:

"2020-07-23T09:56:11.839877+0000 mon.a (mon.0) 2000 : cluster [WRN] Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster log

pass 5250662 2020-07-23 07:53:07 2020-07-23 09:30:42 2020-07-23 09:48:42 0:18:00 0:10:28 0:07:32 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_truncate_delay} 2
pass 5250663 2020-07-23 07:53:08 2020-07-23 09:30:46 2020-07-23 09:54:45 0:23:59 0:13:52 0:10:07 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_trivial_sync} 2
fail 5250664 2020-07-23 07:53:09 2020-07-23 09:30:47 2020-07-23 10:36:48 1:06:01 0:40:30 0:25:31 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/data-scan} 2
Failure Reason:

Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)

fail 5250665 2020-07-23 07:53:10 2020-07-23 09:32:22 2020-07-23 12:54:28 3:22:06 3:11:31 0:10:35 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/filestore-xfs omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/libcephfs_interface_tests} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi038 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

pass 5250666 2020-07-23 07:53:11 2020-07-23 09:32:22 2020-07-23 09:56:22 0:24:00 0:13:32 0:10:28 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/forward-scrub} 2
pass 5250667 2020-07-23 07:53:12 2020-07-23 09:32:37 2020-07-23 11:38:40 2:06:03 1:43:18 0:22:45 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_kernel_untar_build} 2
fail 5250668 2020-07-23 07:53:13 2020-07-23 09:34:44 2020-07-23 10:06:44 0:32:00 0:22:55 0:09:05 smithi master rhel 8.1 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-0.sh) on smithi169 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-0.sh'

pass 5250669 2020-07-23 07:53:13 2020-07-23 09:34:44 2020-07-23 10:36:45 1:02:01 0:53:45 0:08:16 smithi master centos 8.1 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_dbench traceless/50pc} 2
fail 5250670 2020-07-23 07:53:14 2020-07-23 09:34:44 2020-07-23 10:44:45 1:10:01 0:54:09 0:15:52 smithi master centos 8.1 fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_fsstress validater/valgrind} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250671 2020-07-23 07:53:15 2020-07-23 09:34:47 2020-07-23 10:42:48 1:08:01 0:52:49 0:15:12 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_misc} 2
fail 5250672 2020-07-23 07:53:16 2020-07-23 09:34:47 2020-07-23 10:24:47 0:50:00 0:41:36 0:08:24 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/fragment} 2
Failure Reason:

Test failure: test_split_straydir (tasks.cephfs.test_fragment.TestFragmentation)

pass 5250673 2020-07-23 07:53:17 2020-07-23 09:36:34 2020-07-23 10:04:33 0:27:59 0:20:39 0:07:20 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_misc_test_o_trunc} 2
pass 5250674 2020-07-23 07:53:18 2020-07-23 09:36:34 2020-07-23 10:00:33 0:23:59 0:17:51 0:06:08 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_norstats} 2
fail 5250675 2020-07-23 07:53:19 2020-07-23 09:36:34 2020-07-23 10:08:34 0:32:00 0:15:09 0:16:51 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/journal-repair} 2
Failure Reason:

Test failure: test_reset (tasks.cephfs.test_journal_repair.TestJournalRepair)

pass 5250676 2020-07-23 07:53:20 2020-07-23 09:36:40 2020-07-23 10:06:39 0:29:59 0:20:18 0:09:41 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/filestore-xfs omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_blogbench} 2
fail 5250677 2020-07-23 07:53:21 2020-07-23 09:38:42 2020-07-23 10:28:42 0:50:00 0:23:10 0:26:50 smithi master ubuntu 18.04 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi094 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250678 2020-07-23 07:53:22 2020-07-23 09:38:42 2020-07-23 10:52:43 1:14:01 0:54:03 0:19:58 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_dbench} 2
fail 5250679 2020-07-23 07:53:23 2020-07-23 09:38:47 2020-07-23 10:22:47 0:44:00 0:34:44 0:09:16 smithi master centos 8.1 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi140 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

fail 5250680 2020-07-23 07:53:24 2020-07-23 09:39:14 2020-07-23 10:01:13 0:21:59 0:14:47 0:07:12 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/libcephfs_python} 2
Failure Reason:

"2020-07-23T09:55:06.119768+0000 mon.a (mon.0) 163 : cluster [WRN] Replacing daemon mds.b as rank 0 with standby daemon mds.a" in cluster log

pass 5250681 2020-07-23 07:53:24 2020-07-23 09:40:53 2020-07-23 10:54:54 1:14:01 0:13:33 1:00:28 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/mdtest} 4
pass 5250682 2020-07-23 07:53:25 2020-07-23 09:40:53 2020-07-23 10:46:53 1:06:00 0:43:11 0:22:49 smithi master ubuntu 18.04 fs/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} mount/fuse objectstore-ec/filestore-xfs overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/failover} 2
fail 5250683 2020-07-23 07:53:26 2020-07-23 09:41:51 2020-07-23 10:25:51 0:44:00 0:32:43 0:11:17 smithi master centos 8.1 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_misc} 2
Failure Reason:

Command failed (workunit test fs/misc/acl.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/misc/acl.sh'

fail 5250684 2020-07-23 07:53:27 2020-07-23 09:42:51 2020-07-23 10:30:51 0:48:00 0:36:13 0:11:47 smithi master rhel 8.1 fs/snaps/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-0.sh) on smithi148 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-0.sh'

pass 5250685 2020-07-23 07:53:28 2020-07-23 09:42:51 2020-07-23 10:48:52 1:06:01 0:57:03 0:08:58 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_ffsb} 2
fail 5250686 2020-07-23 07:53:29 2020-07-23 09:44:47 2020-07-23 10:30:47 0:46:00 0:35:07 0:10:53 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi086 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250687 2020-07-23 07:53:30 2020-07-23 09:44:47 2020-07-23 10:06:46 0:21:59 0:14:34 0:07:25 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/mds-flush} 2
pass 5250688 2020-07-23 07:53:31 2020-07-23 09:46:37 2020-07-23 10:28:37 0:42:00 0:31:13 0:10:47 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsx} 2
pass 5250689 2020-07-23 07:53:32 2020-07-23 09:46:37 2020-07-23 10:12:37 0:26:00 0:17:03 0:08:57 smithi master rhel 8.1 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_pjd} 2
pass 5250690 2020-07-23 07:53:33 2020-07-23 09:46:38 2020-07-23 10:18:37 0:31:59 0:21:52 0:10:07 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/mds-full} 2
pass 5250691 2020-07-23 07:53:34 2020-07-23 09:46:38 2020-07-23 10:16:37 0:29:59 0:12:30 0:17:29 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/filestore-xfs omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsync} 2
pass 5250692 2020-07-23 07:53:35 2020-07-23 09:46:38 2020-07-23 10:28:38 0:42:00 0:31:00 0:11:00 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_iogen} 2
pass 5250693 2020-07-23 07:53:36 2020-07-23 09:46:46 2020-07-23 10:02:46 0:16:00 0:09:38 0:06:22 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/mds_creation_retry} 2
pass 5250694 2020-07-23 07:53:37 2020-07-23 09:47:56 2020-07-23 10:11:56 0:24:00 0:16:57 0:07:03 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_iozone} 2
pass 5250695 2020-07-23 07:53:37 2020-07-23 09:48:35 2020-07-23 10:22:35 0:34:00 0:23:10 0:10:50 smithi master fs/upgrade/featureful_client/old_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/no pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-nautilus 1-client 2-upgrade 3-compat_client/pacific}} 3
fail 5250696 2020-07-23 07:53:38 2020-07-23 09:48:43 2020-07-23 10:08:43 0:20:00 0:12:31 0:07:29 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

"2020-07-23T10:03:49.342352+0000 mon.b (mon.0) 147 : cluster [WRN] Replacing daemon mds.a as rank 0 with standby daemon mds.b" in cluster log

pass 5250697 2020-07-23 07:53:39 2020-07-23 09:48:46 2020-07-23 10:10:46 0:22:00 0:14:55 0:07:05 smithi master rhel 8.1 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_trivial_sync} 2
pass 5250698 2020-07-23 07:53:40 2020-07-23 09:48:49 2020-07-23 10:20:50 0:32:01 0:15:25 0:16:36 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/openfiletable} 2
pass 5250699 2020-07-23 07:53:41 2020-07-23 09:50:16 2020-07-23 10:08:15 0:17:59 0:10:40 0:07:19 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_truncate_delay} 2
pass 5250700 2020-07-23 07:53:42 2020-07-23 09:50:38 2020-07-23 10:10:38 0:20:00 0:10:30 0:09:30 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/filestore-xfs omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_trivial_sync} 2
pass 5250701 2020-07-23 07:53:43 2020-07-23 09:50:40 2020-07-23 10:14:40 0:24:00 0:14:59 0:09:01 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/pool-perm} 2
fail 5250702 2020-07-23 07:53:44 2020-07-23 09:50:46 2020-07-23 13:12:53 3:22:07 3:11:42 0:10:25 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/libcephfs_interface_tests} 2
Failure Reason:

Command failed (workunit test libcephfs/test.sh) on smithi192 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

pass 5250703 2020-07-23 07:53:45 2020-07-23 09:52:55 2020-07-23 10:12:54 0:19:59 0:11:47 0:08:12 smithi master centos 8.1 fs/32bits/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/filestore-xfs overrides/{faked-ino frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_pjd} 2
fail 5250704 2020-07-23 07:53:46 2020-07-23 09:52:55 2020-07-23 10:16:55 0:24:00 0:11:08 0:12:52 smithi master centos 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/quota} 2
Failure Reason:

Test failure: test_remote_update_df (tasks.cephfs.test_quota.TestQuota)

pass 5250705 2020-07-23 07:53:47 2020-07-23 09:52:55 2020-07-23 11:36:58 1:44:03 1:34:47 0:09:16 smithi master centos 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_kernel_untar_build} 2
pass 5250706 2020-07-23 07:53:47 2020-07-23 09:52:55 2020-07-23 11:28:57 1:36:02 0:18:53 1:17:09 smithi master ubuntu 18.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
fail 5250707 2020-07-23 07:53:48 2020-07-23 09:54:43 2020-07-23 10:36:43 0:42:00 0:33:31 0:08:29 smithi master centos 8.1 fs/permission/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi121 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

fail 5250708 2020-07-23 07:53:49 2020-07-23 09:54:43 2020-07-23 10:24:43 0:30:00 0:18:34 0:11:26 smithi master ubuntu 18.04 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/filestore-xfs overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_snaptests} 2
Failure Reason:

Command failed (workunit test fs/snaps/snap-rm-diff.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snap-rm-diff.sh'

pass 5250709 2020-07-23 07:53:50 2020-07-23 09:54:47 2020-07-23 11:02:48 1:08:01 0:53:18 0:14:43 smithi master rhel 8.1 fs/traceless/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_ffsb traceless/50pc} 2
pass 5250710 2020-07-23 07:53:51 2020-07-23 09:54:50 2020-07-23 10:56:51 1:02:01 0:54:02 0:07:59 smithi master centos 8.1 fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable mon-debug whitelist_health whitelist_wrongly_marked_down} tasks/cfuse_workunit_suites_dbench validater/lockdep} 2
pass 5250711 2020-07-23 07:53:52 2020-07-23 09:56:42 2020-07-23 10:48:42 0:52:00 0:41:56 0:10:04 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_misc} 2
fail 5250712 2020-07-23 07:53:53 2020-07-23 09:56:42 2020-07-23 10:22:41 0:25:59 0:15:40 0:10:19 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/sessionmap/{sessionmap}} 2
Failure Reason:

Test failure: test_session_evict_blacklisted (tasks.cephfs.test_sessionmap.TestSessionMap)

pass 5250713 2020-07-23 07:53:54 2020-07-23 09:56:42 2020-07-23 10:30:42 0:34:00 0:24:31 0:09:29 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_misc_test_o_trunc} 2
pass 5250714 2020-07-23 07:53:54 2020-07-23 09:56:44 2020-07-23 10:26:43 0:29:59 0:13:33 0:16:26 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/filestore-xfs omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_norstats} 2
fail 5250715 2020-07-23 07:53:55 2020-07-23 09:57:20 2020-07-23 10:23:20 0:26:00 0:13:34 0:12:26 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/strays} 2
Failure Reason:

Test failure: test_hardlink_reintegration (tasks.cephfs.test_strays.TestStrays)

pass 5250716 2020-07-23 07:53:56 2020-07-23 10:00:41 2020-07-23 10:34:41 0:34:00 0:25:06 0:08:54 smithi master rhel 8.1 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/cfuse_workunit_suites_blogbench} 2
fail 5250717 2020-07-23 07:53:57 2020-07-23 10:00:41 2020-07-23 10:32:41 0:32:00 0:22:56 0:09:04 smithi master centos 8.1 fs/thrash/{begin ceph-thrash/mon clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{centos_8} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

pass 5250718 2020-07-23 07:53:58 2020-07-23 10:00:41 2020-07-23 11:00:42 1:00:01 0:49:15 0:10:46 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_dbench} 2
fail 5250719 2020-07-23 07:53:59 2020-07-23 10:00:42 2020-07-23 10:34:41 0:33:59 0:25:02 0:08:57 smithi master rhel 8.1 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{rhel_8} tasks/test_journal_migration} 2
Failure Reason:

Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)

pass 5250720 2020-07-23 07:54:00 2020-07-23 10:00:41 2020-07-23 10:50:42 0:50:01 0:38:53 0:11:08 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_ffsb} 2
fail 5250721 2020-07-23 07:54:01 2020-07-23 10:01:15 2020-07-23 10:47:16 0:46:01 0:35:02 0:10:59 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/yes mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsstress} 2
Failure Reason:

Command failed (workunit test suites/fsstress.sh) on smithi036 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=067d3bb511499788cc774a2a17330529188c080a TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsstress.sh'

fail 5250722 2020-07-23 07:54:01 2020-07-23 10:02:58 2020-07-23 10:26:57 0:23:59 0:12:10 0:11:49 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-ec-root overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/volume-client/{task/test/{test}}} 2
Failure Reason:

Test failure: test_21501 (tasks.cephfs.test_volume_client.TestVolumeClient)

pass 5250723 2020-07-23 07:54:02 2020-07-23 10:02:59 2020-07-23 10:34:58 0:31:59 0:20:24 0:11:35 smithi master ubuntu 18.04 fs/basic_workload/{begin clusters/fixed-2-ucephfs conf/{client mds mon osd} inline/no mount/fuse objectstore-ec/filestore-xfs omap_limit/10000 overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_fsx} 2
fail 5250724 2020-07-23 07:54:03 2020-07-23 10:02:59 2020-07-23 10:28:58 0:25:59 0:13:24 0:12:35 smithi master ubuntu 18.04 fs/thrash/{begin ceph-thrash/mds clusters/1-mds-1-client-coloc conf/{client mds mon osd} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag_enable whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Found coredumps on ubuntu@smithi191.front.sepia.ceph.com

fail 5250725 2020-07-23 07:54:04 2020-07-23 10:02:59 2020-07-23 10:58:59 0:56:00 0:40:18 0:15:42 smithi master ubuntu 18.04 fs/basic_functional/{begin clusters/1-mds-4-client-coloc conf/{client mds mon osd} mount/fuse objectstore/bluestore-bitmap overrides/{frag_enable no_client_pidfile whitelist_health whitelist_wrongly_marked_down} supported-random-distros$/{ubuntu_latest} tasks/volumes} 2
Failure Reason:

Test failure: test_subvolume_pin_export (tasks.cephfs.test_volumes.TestVolumes)