| Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | Description | Nodes |
|--------|--------|--------|---------|---------|---------|----------|------------|---------|-------------------|-------------|-------|
| fail | 2188352 | 2018-02-14 20:59:01 | 2018-02-14 21:00:51 | 2018-02-15 00:30:59 | 3:30:08 | 0:50:34 | 2:39:34 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs-workunit-kernel-untar-build.yaml} | 4 |
| pass | 2188353 | 2018-02-14 20:59:02 | 2018-02-14 21:00:51 | 2018-02-14 21:52:51 | 0:52:00 | 0:20:35 | 0:31:25 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_misc.yaml} | 4 |
| pass | 2188354 | 2018-02-14 20:59:03 | 2018-02-14 21:00:51 | 2018-02-14 21:48:51 | 0:48:00 | 0:28:05 | 0:19:55 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_suites_blogbench.yaml} | 4 |
| pass | 2188355 | 2018-02-14 20:59:04 | 2018-02-14 21:00:51 | 2018-02-15 00:25:02 | 3:24:11 | 0:31:35 | 2:52:36 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_suites_dbench.yaml} | 4 |
| pass | 2188356 | 2018-02-14 20:59:04 | 2018-02-14 21:03:35 | 2018-02-14 22:45:41 | 1:42:06 | 0:46:58 | 0:55:08 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_suites_ffsb.yaml} | 4 |
| pass | 2188357 | 2018-02-14 20:59:05 | 2018-02-14 21:03:49 | 2018-02-15 00:25:58 | 3:22:09 | 0:24:01 | 2:58:08 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_suites_fsstress.yaml} | 4 |
| fail | 2188358 | 2018-02-14 20:59:06 | 2018-02-14 21:04:44 | 2018-02-15 02:07:16 | 5:02:32 | 1:39:22 | 3:23:10 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_suites_iozone.yaml} | 4 |
| dead | 2188359 | 2018-02-14 20:59:07 | 2018-02-14 21:06:16 | 2018-02-15 09:08:56 | 12:02:40 | | | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs-workunit-kernel-untar-build.yaml} | |
| pass | 2188360 | 2018-02-14 20:59:07 | 2018-02-14 21:10:21 | 2018-02-15 00:12:26 | 3:02:05 | 0:19:32 | 2:42:33 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_misc.yaml} | 4 |
| pass | 2188361 | 2018-02-14 20:59:08 | 2018-02-14 21:13:52 | 2018-02-15 00:29:58 | 3:16:06 | 0:30:02 | 2:46:04 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_suites_blogbench.yaml} | 4 |
| pass | 2188362 | 2018-02-14 20:59:09 | 2018-02-14 21:14:07 | 2018-02-14 22:18:07 | 1:04:00 | 0:30:43 | 0:33:17 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_suites_dbench.yaml} | 4 |
| fail | 2188363 | 2018-02-14 20:59:10 | 2018-02-14 21:15:41 | 2018-02-15 00:37:49 | 3:22:08 | 0:44:10 | 2:37:58 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_suites_ffsb.yaml} | 4 |
| pass | 2188364 | 2018-02-14 20:59:10 | 2018-02-14 21:17:08 | 2018-02-15 02:57:31 | 5:40:23 | 0:24:11 | 5:16:12 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_suites_fsstress.yaml} | 4 |
| fail | 2188365 | 2018-02-14 20:59:11 | 2018-02-14 21:24:41 | 2018-02-15 01:42:52 | 4:18:11 | 1:32:23 | 2:45:48 | ovh | master | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_suites_iozone.yaml} | 4 |

Failure Reasons

- 2188352: "2018-02-14 23:59:29.433830 mon.a mon.0 158.69.93.34:6789/0 127 : cluster [WRN] MDS health message (mds.0): Behind on trimming (62/30)" in cluster log
- 2188358: "2018-02-15 00:56:30.646240 mon.a mon.0 158.69.65.101:6789/0 184 : cluster [WRN] Health check failed: 1 nearfull osd(s) (OSD_NEARFULL)" in cluster log
- 2188363: Command failed (workunit test suites/ffsb.sh) on ovh087 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/ffsb.sh'
- 2188365: "2018-02-15 00:39:41.511954 mon.b mon.0 158.69.64.10:6789/0 168 : cluster [WRN] Health check failed: 1 nearfull osd(s) (OSD_NEARFULL)" in cluster log
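Several of the failures above are not command errors but log-scan hits: teuthology fails a job when the run's cluster log contains a [WRN] or [ERR] line that the suite has not explicitly allowed. The sketch below illustrates that pattern with plain grep; the log format is taken from the failure reasons above, but the function name and file handling are illustrative assumptions, not teuthology's actual implementation.

```shell
#!/usr/bin/env sh
# Illustrative sketch of the "... in cluster log" failure mode seen above:
# fail the job if any [WRN] or [ERR] line appears in the cluster log.
# (Real teuthology also applies a per-suite ignorelist; omitted here.)

scan_cluster_log() {
    # $1: path to a cluster log; prints offending lines, exits 0 if any found.
    grep -E '\[(WRN|ERR)\]' "$1"
}

log=$(mktemp)
cat > "$log" <<'EOF'
2018-02-15 00:39:41.511954 mon.b mon.0 158.69.64.10:6789/0 168 : cluster [WRN] Health check failed: 1 nearfull osd(s) (OSD_NEARFULL)
2018-02-15 00:45:00.000000 mon.b mon.0 158.69.64.10:6789/0 169 : cluster [INF] overall HEALTH_OK
EOF

if scan_cluster_log "$log"; then
    echo "job would be marked fail"
fi
rm -f "$log"
```

Run against the sample log, the function prints only the [WRN] line (the [INF] line passes), which is exactly why jobs 2188352, 2188358, and 2188365 were marked fail despite their workloads completing.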