Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 2757253 2018-07-09 03:28:29 2018-07-09 03:29:03 2018-07-09 04:05:02 0:35:59 0:12:59 0:23:00 smithi master centos 7.4 fs/basic_functional/{begin.yaml clusters/1-mds-4-client-coloc.yaml conf/{client.yaml mds.yaml} mount/fuse.yaml objectstore/bluestore-ec-root.yaml overrides/{frag_enable.yaml no_client_pidfile.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/alternate-pool.yaml} 2
Failure Reason:

Test failure: test_rebuild_simple (tasks.cephfs.test_recovery_pool.TestRecoveryPool)
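Failures in tasks.cephfs.* tests like this one can usually be reproduced outside teuthology by running the single test against a local vstart cluster with the qa/tasks/vstart_runner.py harness. A minimal sketch, assuming an upstream ceph.git checkout and the standard build-directory layout (exact paths and flags may differ per checkout and are not taken from this run):

    # Minimal sketch, assuming a dev build of Ceph and the upstream qa/ layout.
    cd ceph/build
    ../src/vstart.sh -n -d            # throwaway local dev cluster
    python ../qa/tasks/vstart_runner.py \
        tasks.cephfs.test_recovery_pool.TestRecoveryPool.test_rebuild_simple
    ../src/stop.sh                    # tear the local cluster down afterwards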

fail 2757255 2018-07-09 03:28:30 2018-07-09 03:29:03 2018-07-09 04:11:03 0:42:00 0:21:50 0:20:10 smithi master ubuntu 18.04 fs/basic_workload/{begin.yaml clusters/fixed-2-ucephfs.yaml conf/{client.yaml mds.yaml} inline/no.yaml mount/fuse.yaml objectstore-ec/bluestore-comp-ec-root.yaml omap_limit/10.yaml overrides/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/cfuse_workunit_kernel_untar_build.yaml} 2
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi199 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-pdonnell-testing-20180704.202326 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'
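The kernel_untar_build.sh workunit exits non-zero when the kernel untar or build step fails, and the command above can be replayed by hand against any CephFS client mount to dig out the underlying error. A rough sketch, assuming a hypothetical ceph-fuse mount at /mnt/cephfs and a local clone of ceph.git (both paths are illustrative, not values from this run):

    # Hand-run approximation of the failing workunit.
    mkdir -p /mnt/cephfs/client.0/tmp
    cd /mnt/cephfs/client.0/tmp
    timeout 3h ~/ceph/qa/workunits/kernel_untar_build.sh
    echo $?                           # a non-zero status here mirrors the teuthology failure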

fail 2757257 2018-07-09 03:28:31 2018-07-09 03:29:06 2018-07-09 03:53:05 0:23:59 0:10:57 0:13:02 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml multimds/no.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/no.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

Command failed on smithi091 with status 1: 'mkdir -- /home/ubuntu/cephtest/mnt.0'
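mkdir returns exit status 1 when the target already exists, so a stale mount directory left behind by an earlier step is one plausible (unconfirmed) reading of the recurring 'mkdir -- /home/ubuntu/cephtest/mnt.0' failures in this run. A tiny demonstration of the exit status:

    # mkdir fails with status 1 if the directory is already present.
    mkdir /tmp/mnt.0      # succeeds
    mkdir /tmp/mnt.0      # "mkdir: cannot create directory '/tmp/mnt.0': File exists"
    echo $?               # prints 1
    rmdir /tmp/mnt.0      # clean up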

fail 2757259 2018-07-09 03:28:32 2018-07-09 03:29:10 2018-07-09 03:53:10 0:24:00 0:12:47 0:11:13 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml multimds/yes.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/yes.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

"2018-07-09 03:47:24.723275 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi040: (4273), after waiting 45 seconds during MDS startup" in cluster log

fail 2757261 2018-07-09 03:28:33 2018-07-09 03:30:53 2018-07-09 04:12:53 0:42:00 0:22:02 0:19:58 smithi master ubuntu 18.04 fs/basic_workload/{begin.yaml clusters/fixed-2-ucephfs.yaml conf/{client.yaml mds.yaml} inline/no.yaml mount/fuse.yaml objectstore-ec/bluestore-ec-root.yaml omap_limit/10.yaml overrides/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/cfuse_workunit_kernel_untar_build.yaml} 2
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi023 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-pdonnell-testing-20180704.202326 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

fail 2757263 2018-07-09 03:28:33 2018-07-09 03:30:53 2018-07-09 04:24:53 0:54:00 0:11:42 0:42:18 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml multimds/no.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/no.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

Command failed on smithi031 with status 1: 'mkdir -- /home/ubuntu/cephtest/mnt.0'

fail 2757265 2018-07-09 03:28:34 2018-07-09 03:30:56 2018-07-09 04:06:56 0:36:00 0:12:34 0:23:26 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml multimds/yes.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/yes.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

"2018-07-09 04:01:02.963053 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi115: (4278), after waiting 45 seconds during MDS startup" in cluster log

fail 2757267 2018-07-09 03:28:35 2018-07-09 03:31:02 2018-07-09 03:53:02 0:22:00 0:10:45 0:11:15 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml multimds/no.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/no.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

Command failed on smithi057 with status 1: 'mkdir -- /home/ubuntu/cephtest/mnt.0'

fail 2757269 2018-07-09 03:28:36 2018-07-09 03:32:55 2018-07-09 04:32:55 1:00:00 0:12:41 0:47:19 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml multimds/yes.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/yes.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

"2018-07-09 04:28:15.769159 mds.b (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi029: (4275), after waiting 45 seconds during MDS startup" in cluster log

pass 2757271 2018-07-09 03:28:37 2018-07-09 03:32:55 2018-07-09 04:00:54 0:27:59 0:17:00 0:10:59 smithi master ubuntu 16.04 fs/basic_functional/{begin.yaml clusters/1-mds-4-client-coloc.yaml conf/{client.yaml mds.yaml} mount/fuse.yaml objectstore/bluestore.yaml overrides/{frag_enable.yaml no_client_pidfile.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/journal-repair.yaml} 2
fail 2757273 2018-07-09 03:28:38 2018-07-09 03:32:59 2018-07-09 05:37:01 2:04:02 1:44:25 0:19:37 smithi master rhel 7.5 fs/thrash/{begin.yaml ceph-thrash/default.yaml clusters/1-mds-1-client-coloc.yaml conf/{client.yaml mds.yaml} mount/fuse.yaml msgr-failures/none.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{rhel_latest.yaml} tasks/cfuse_workunit_snaptests.yaml} 2
Failure Reason:

"2018-07-09 04:03:14.501923 mon.b (mon.0) 317 : cluster [WRN] Health check failed: 2 slow ops, oldest one blocked for 31 sec, mon.c has slow ops (SLOW_OPS)" in cluster log

fail 2757275 2018-07-09 03:28:39 2018-07-09 03:33:11 2018-07-09 04:03:11 0:30:00 0:10:55 0:19:05 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml multimds/no.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/no.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

Command failed on smithi031 with status 1: 'mkdir -- /home/ubuntu/cephtest/mnt.0'

fail 2757277 2018-07-09 03:28:40 2018-07-09 03:33:30 2018-07-09 04:57:31 1:24:01 0:12:48 1:11:13 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml multimds/yes.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/yes.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

"2018-07-09 04:51:50.359542 mds.c (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi185: (4253), after waiting 45 seconds during MDS startup" in cluster log

fail 2757279 2018-07-09 03:28:41 2018-07-09 03:33:36 2018-07-09 04:11:36 0:38:00 0:21:14 0:16:46 smithi master ubuntu 18.04 fs/basic_workload/{begin.yaml clusters/fixed-2-ucephfs.yaml conf/{client.yaml mds.yaml} inline/no.yaml mount/fuse.yaml objectstore-ec/bluestore.yaml omap_limit/10000.yaml overrides/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/cfuse_workunit_kernel_untar_build.yaml} 2
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi024 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-pdonnell-testing-20180704.202326 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

fail 2757281 2018-07-09 03:28:42 2018-07-09 03:33:42 2018-07-09 06:19:45 2:46:03 2:32:23 0:13:40 smithi master centos 7.4 fs/thrash/{begin.yaml ceph-thrash/default.yaml clusters/1-mds-1-client-coloc.yaml conf/{client.yaml mds.yaml} mount/fuse.yaml msgr-failures/none.yaml objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/cfuse_workunit_snaptests.yaml} 2
Failure Reason:

"2018-07-09 05:09:03.339390 mon.b (mon.0) 1132 : cluster [WRN] Health check failed: 1 slow ops, oldest one blocked for 30 sec, mon.c has slow ops (SLOW_OPS)" in cluster log

fail 2757283 2018-07-09 03:28:43 2018-07-09 03:35:00 2018-07-09 04:37:00 1:02:00 0:11:03 0:50:57 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml multimds/no.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/no.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

Command failed on smithi040 with status 1: 'mkdir -- /home/ubuntu/cephtest/mnt.0'

fail 2757285 2018-07-09 03:28:43 2018-07-09 03:36:58 2018-07-09 04:14:58 0:38:00 0:13:02 0:24:58 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml multimds/yes.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/yes.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

"2018-07-09 04:09:25.528767 mds.b (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi057: (4273), after waiting 45 seconds during MDS startup" in cluster log

fail 2757287 2018-07-09 03:28:44 2018-07-09 03:36:58 2018-07-09 04:16:58 0:40:00 0:11:38 0:28:22 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml multimds/yes.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/no.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

Command failed on smithi182 with status 1: 'mkdir -- /home/ubuntu/cephtest/mnt.0'

fail 2757289 2018-07-09 03:28:45 2018-07-09 03:39:13 2018-07-09 04:09:13 0:30:00 0:12:31 0:17:29 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml multimds/no.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/yes.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

"2018-07-09 04:02:59.318613 mds.b (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi005: (4281), after waiting 45 seconds during MDS startup" in cluster log

pass 2757291 2018-07-09 03:28:46 2018-07-09 03:41:07 2018-07-09 04:55:07 1:14:00 0:56:19 0:17:41 smithi master centos 7.4 fs/basic_workload/{begin.yaml clusters/fixed-2-ucephfs.yaml conf/{client.yaml mds.yaml} inline/no.yaml mount/fuse.yaml objectstore-ec/bluestore-ec-root.yaml omap_limit/10000.yaml overrides/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/cfuse_workunit_misc.yaml} 2
fail 2757294 2018-07-09 03:28:47 2018-07-09 03:41:07 2018-07-09 06:13:10 2:32:03 2:16:14 0:15:49 smithi master ubuntu 16.04 fs/thrash/{begin.yaml ceph-thrash/default.yaml clusters/1-mds-1-client-coloc.yaml conf/{client.yaml mds.yaml} mount/fuse.yaml msgr-failures/osd-mds-delay.yaml objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_16.04.yaml} tasks/cfuse_workunit_snaptests.yaml} 2
Failure Reason:

"2018-07-09 04:04:47.027231 mon.a (mon.0) 363 : cluster [WRN] Health check failed: 1 slow ops, oldest one blocked for 33 sec, mon.c has slow ops (SLOW_OPS)" in cluster log

fail 2757295 2018-07-09 03:28:48 2018-07-09 03:41:07 2018-07-09 04:07:07 0:26:00 0:11:30 0:14:30 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml multimds/yes.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/no.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

Command failed on smithi149 with status 1: 'mkdir -- /home/ubuntu/cephtest/mnt.0'

fail 2757298 2018-07-09 03:28:49 2018-07-09 03:41:22 2018-07-09 04:15:21 0:33:59 0:12:18 0:21:41 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml multimds/no.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/yes.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

"2018-07-09 04:09:36.141642 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi040: (4282), after waiting 45 seconds during MDS startup" in cluster log

fail 2757299 2018-07-09 03:28:50 2018-07-09 03:42:45 2018-07-09 04:16:44 0:33:59 0:23:30 0:10:29 smithi master ubuntu 18.04 fs/basic_workload/{begin.yaml clusters/fixed-2-ucephfs.yaml conf/{client.yaml mds.yaml} inline/yes.yaml mount/fuse.yaml objectstore-ec/filestore-xfs.yaml omap_limit/10.yaml overrides/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/cfuse_workunit_kernel_untar_build.yaml} 2
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi184 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-pdonnell-testing-20180704.202326 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

fail 2757302 2018-07-09 03:28:51 2018-07-09 03:42:45 2018-07-09 04:06:44 0:23:59 0:10:53 0:13:06 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml multimds/yes.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/no.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

Command failed on smithi122 with status 1: 'mkdir -- /home/ubuntu/cephtest/mnt.0'

fail 2757303 2018-07-09 03:28:52 2018-07-09 03:42:46 2018-07-09 04:38:46 0:56:00 0:13:20 0:42:40 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml multimds/no.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/yes.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

"2018-07-09 04:33:36.025419 mds.b (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi014: (4262), after waiting 45 seconds during MDS startup" in cluster log

fail 2757305 2018-07-09 03:28:53 2018-07-09 03:43:02 2018-07-09 04:05:01 0:21:59 0:10:23 0:11:36 smithi master centos 7.4 fs/basic_functional/{begin.yaml clusters/1-mds-4-client-coloc.yaml conf/{client.yaml mds.yaml} mount/fuse.yaml objectstore/bluestore.yaml overrides/{frag_enable.yaml no_client_pidfile.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/libcephfs_java/libcephfs_java.yaml} 2
Failure Reason:

Command failed (workunit test libcephfs-java/test.sh) on smithi035 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-pdonnell-testing-20180704.202326 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs-java/test.sh'

fail 2757308 2018-07-09 03:28:54 2018-07-09 03:43:02 2018-07-09 04:11:01 0:27:59 0:11:20 0:16:39 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-comp.yaml overrides/{frag_enable.yaml multimds/yes.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/no.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

Command failed on smithi029 with status 1: 'mkdir -- /home/ubuntu/cephtest/mnt.0'

fail 2757310 2018-07-09 03:28:54 2018-07-09 03:43:18 2018-07-09 04:11:18 0:28:00 0:12:38 0:15:22 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml multimds/no.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/yes.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

"2018-07-09 04:05:43.536050 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi066: (4268), after waiting 45 seconds during MDS startup" in cluster log

fail 2757312 2018-07-09 03:28:56 2018-07-09 03:44:52 2018-07-09 04:16:52 0:32:00 0:11:37 0:20:23 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/bluestore.yaml overrides/{frag_enable.yaml multimds/yes.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/no.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

Command failed on smithi105 with status 1: 'mkdir -- /home/ubuntu/cephtest/mnt.0'

fail 2757314 2018-07-09 03:28:56 2018-07-09 03:44:57 2018-07-09 04:14:56 0:29:59 0:17:11 0:12:48 smithi master centos 7.4 fs/basic_functional/{begin.yaml clusters/1-mds-4-client-coloc.yaml conf/{client.yaml mds.yaml} mount/fuse.yaml objectstore/bluestore.yaml overrides/{frag_enable.yaml no_client_pidfile.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{centos_latest.yaml} tasks/strays.yaml} 2
Failure Reason:

Test failure: test_files_throttle (tasks.cephfs.test_strays.TestStrays)

fail 2757316 2018-07-09 03:28:57 2018-07-09 03:45:18 2018-07-09 04:19:17 0:33:59 0:22:20 0:11:39 smithi master ubuntu 18.04 fs/basic_workload/{begin.yaml clusters/fixed-2-ucephfs.yaml conf/{client.yaml mds.yaml} inline/yes.yaml mount/fuse.yaml objectstore-ec/filestore-xfs.yaml omap_limit/10000.yaml overrides/{frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} supported-random-distros$/{ubuntu_latest.yaml} tasks/cfuse_workunit_kernel_untar_build.yaml} 2
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi118 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-pdonnell-testing-20180704.202326 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

fail 2757318 2018-07-09 03:28:58 2018-07-09 03:45:37 2018-07-09 04:37:37 0:52:00 0:12:13 0:39:47 smithi master fs/upgrade/snaps/{clusters/3-mds.yaml conf/{client.yaml mds.yaml} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml multimds/no.yaml whitelist_health.yaml whitelist_rstat.yaml whitelist_wrongly_marked_down.yaml} tasks/{0-luminous.yaml 1-client.yaml 2-upgrade.yaml 3-sanity.yaml 4-client-upgrade/yes.yaml 5-client-sanity.yaml 6-snap-upgrade.yaml 7-client-sanity.yaml}} 3
Failure Reason:

"2018-07-09 04:31:17.107312 mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi057: (4239), after waiting 45 seconds during MDS startup" in cluster log