Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail | 2684497 | | 2018-06-20 05:25:54 | 2018-06-23 21:30:40 | 2018-06-23 23:26:41 | 1:56:01 | 0:13:40 | 1:42:21 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs-workunit-kernel-untar-build.yaml} | 4
Failure Reason: "2018-06-23 22:00:09.116345 mon.a mon.0 158.69.92.181:6789/0 58 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail | 2684498 | | 2018-06-20 05:25:55 | 2018-06-23 21:44:38 | 2018-06-23 23:42:39 | 1:58:01 | 0:13:54 | 1:44:07 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_misc.yaml} | 4
Failure Reason: Command failed on ovh055 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'

fail | 2684499 | | 2018-06-20 05:25:56 | 2018-06-23 21:48:14 | 2018-06-23 23:38:15 | 1:50:01 | 0:14:55 | 1:35:06 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_suites_blogbench.yaml} | 4
Failure Reason: Command failed on ovh062 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2'

fail | 2684500 | | 2018-06-20 05:25:57 | 2018-06-23 22:09:19 | 2018-06-24 00:13:21 | 2:04:02 | 0:14:40 | 1:49:22 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_suites_dbench.yaml} | 4
Failure Reason: Command failed on ovh067 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 4'

fail | 2684501 | | 2018-06-20 05:25:58 | 2018-06-23 22:09:41 | 2018-06-24 00:47:44 | 2:38:03 | 0:13:56 | 2:24:07 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_suites_ffsb.yaml} | 4
Failure Reason: Command failed on ovh003 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'

fail | 2684502 | | 2018-06-20 05:25:59 | 2018-06-23 22:22:57 | 2018-06-24 00:52:59 | 2:30:02 | 0:14:06 | 2:15:56 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_suites_fsstress.yaml} | 4
Failure Reason: Command failed on ovh082 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'

fail | 2684503 | | 2018-06-20 05:26:00 | 2018-06-23 22:36:07 | 2018-06-24 01:08:09 | 2:32:02 | 0:13:25 | 2:18:37 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_suites_iozone.yaml} | 4
Failure Reason: Command failed on ovh097 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'

fail | 2684504 | | 2018-06-20 05:26:01 | 2018-06-23 22:38:47 | 2018-06-24 01:02:54 | 2:24:07 | 0:13:34 | 2:10:33 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs-workunit-kernel-untar-build.yaml} | 4
Failure Reason: Command failed on ovh073 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'

fail | 2684505 | | 2018-06-20 05:26:02 | 2018-06-23 22:50:43 | 2018-06-24 00:34:44 | 1:44:01 | 0:15:36 | 1:28:25 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_misc.yaml} | 4
Failure Reason: Command failed on ovh055 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2'

fail | 2684506 | | 2018-06-20 05:26:03 | 2018-06-23 23:12:24 | 2018-06-24 01:28:26 | 2:16:02 | 0:14:46 | 2:01:16 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_suites_blogbench.yaml} | 4
Failure Reason: Command failed on ovh062 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'

dead | 2684507 | | 2018-06-20 05:26:04 | 2018-06-23 23:22:55 | 2018-06-24 11:25:14 | 12:02:19 | | | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_suites_dbench.yaml} |
fail | 2684508 | | 2018-06-20 05:26:05 | 2018-06-23 23:26:11 | 2018-06-24 01:16:13 | 1:50:02 | 0:14:01 | 1:36:01 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_suites_ffsb.yaml} | 4
Failure Reason: "2018-06-23 23:48:43.834397 mon.a mon.0 158.69.93.188:6789/0 114 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log

fail | 2684509 | | 2018-06-20 05:26:06 | 2018-06-23 23:26:44 | 2018-06-24 02:26:51 | 3:00:07 | 0:13:52 | 2:46:15 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v3.yaml tasks/nfs_workunit_suites_fsstress.yaml} | 4
Failure Reason: Command failed on ovh084 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'

fail | 2684510 | | 2018-06-20 05:26:07 | 2018-06-23 23:28:46 | 2018-06-24 02:02:50 | 2:34:04 | 0:14:05 | 2:19:59 | ovh | master | | | knfs/basic/{ceph/base.yaml clusters/extra-client.yaml mount/v4.yaml tasks/nfs_workunit_suites_iozone.yaml} | 4
Failure Reason: Command failed on ovh017 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2'