Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 2793309 2018-07-18 05:15:57 2018-07-20 21:26:50 2018-07-20 21:44:49 0:17:59 0:12:51 0:05:08 ovh master krbd/basic/{ceph/ceph.yaml clusters/fixed-1.yaml conf.yaml tasks/krbd_blkroset.yaml} 1
Failure Reason:

Command failed on ovh054 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'

pass 2793311 2018-07-18 05:15:58 2018-07-20 21:29:03 2018-07-20 22:23:03 0:54:00 0:40:33 0:13:27 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/few.yaml tasks/rbd_fio.yaml} 3
pass 2793312 2018-07-18 05:15:58 2018-07-20 21:35:01 2018-07-20 23:17:02 1:42:01 1:23:34 0:18:27 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/krbd_data_pool.yaml} 3
dead 2793314 2018-07-18 05:15:59 2018-07-20 21:37:11 2018-07-21 09:55:15 12:18:04 ovh master krbd/singleton/{conf.yaml msgr-failures/few.yaml tasks/rbd_xfstests.yaml} 4
pass 2793316 2018-07-18 05:16:00 2018-07-20 21:45:00 2018-07-20 23:13:01 1:28:01 0:57:25 0:30:36 ovh master krbd/thrash/{ceph/ceph.yaml clusters/fixed-3.yaml conf.yaml thrashers/backoff.yaml thrashosds-health.yaml workloads/rbd_fio.yaml} 3
pass 2793318 2018-07-18 05:16:01 2018-07-20 21:51:57 2018-07-20 22:13:57 0:22:00 0:13:35 0:08:25 ovh master krbd/unmap/{ceph/ceph.yaml clusters/separate-client.yaml conf.yaml filestore-xfs.yaml kernels/pre-single-major.yaml tasks/unmap.yaml} 2
fail 2793320 2018-07-18 05:16:02 2018-07-20 22:14:11 2018-07-20 22:30:10 0:15:59 0:11:25 0:04:34 ovh master krbd/wac/sysfs/{ceph/ceph.yaml clusters/fixed-1.yaml conf.yaml tasks/stable_pages_required.yaml} 1
Failure Reason:

Command failed on ovh038 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0'

pass 2793322 2018-07-18 05:16:02 2018-07-20 22:17:13 2018-07-20 22:51:13 0:34:00 0:19:28 0:14:32 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/krbd_exclusive_option.yaml} 3
pass 2793324 2018-07-18 05:16:03 2018-07-20 22:24:11 2018-07-20 23:46:12 1:22:01 1:07:00 0:15:01 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/many.yaml tasks/rbd_workunit_kernel_untar_build.yaml} 3
pass 2793326 2018-07-18 05:16:04 2018-07-20 22:30:22 2018-07-20 23:24:22 0:54:00 0:24:54 0:29:06 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/krbd_fallocate.yaml} 3
pass 2793328 2018-07-18 05:16:05 2018-07-21 02:13:47 2018-07-21 03:11:48 0:58:01 0:41:15 0:16:46 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/few.yaml tasks/rbd_workunit_suites_dbench.yaml} 3
pass 2793330 2018-07-18 05:16:06 2018-07-21 02:20:43 2018-07-21 02:50:43 0:30:00 0:19:58 0:10:02 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/krbd_latest_osdmap_on_map.yaml} 3
fail 2793332 2018-07-18 05:16:07 2018-07-21 02:22:20 2018-07-21 03:42:21 1:20:01 1:06:25 0:13:36 ovh master krbd/thrash/{ceph/ceph.yaml clusters/fixed-3.yaml conf.yaml thrashers/mon-thrasher.yaml thrashosds-health.yaml workloads/rbd_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2018-07-21 02:47:39.469392 mon.b mon.1 158.69.87.161:6789/0 19 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" in cluster log

fail 2793334 2018-07-18 05:16:07 2018-07-21 02:23:32 2018-07-21 06:27:35 4:04:03 3:50:41 0:13:22 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/many.yaml tasks/rbd_workunit_suites_ffsb.yaml} 3
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on ovh016 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=mimic TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'

pass 2793336 2018-07-18 05:16:08 2018-07-21 03:15:24 2018-07-21 04:07:24 0:52:00 0:23:15 0:28:45 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/rbd_concurrent.yaml} 3
fail 2793338 2018-07-18 05:16:09 2018-07-21 03:21:10 2018-07-21 03:41:09 0:19:59 0:11:59 0:08:00 ovh master krbd/basic/{ceph/ceph.yaml clusters/fixed-1.yaml conf.yaml tasks/krbd_huge_image.yaml} 1
Failure Reason:

Command failed on ovh004 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0'

fail 2793340 2018-07-18 05:16:10 2018-07-21 03:36:02 2018-07-21 07:24:06 3:48:04 3:15:53 0:32:11 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/rbd_huge_tickets.yaml} 3
Failure Reason:

Command failed (workunit test rbd/huge-tickets.sh) on ovh051 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=mimic TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/huge-tickets.sh'

pass 2793342 2018-07-18 05:16:11 2018-07-21 03:41:20 2018-07-21 04:11:20 0:30:00 0:17:17 0:12:43 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/few.yaml tasks/rbd_workunit_suites_fsstress.yaml} 3
pass 2793344 2018-07-18 05:16:11 2018-07-21 03:43:52 2018-07-21 04:17:51 0:33:59 0:16:51 0:17:08 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/rbd_image_read.yaml} 3
pass 2793346 2018-07-18 05:16:12 2018-07-21 03:59:31 2018-07-21 05:21:32 1:22:01 1:08:06 0:13:55 ovh master krbd/thrash/{ceph/ceph.yaml clusters/fixed-3.yaml conf.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rbd_fio.yaml} 3
pass 2793348 2018-07-18 05:16:13 2018-07-21 04:07:26 2018-07-21 04:55:26 0:48:00 0:28:06 0:19:54 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/many.yaml tasks/rbd_workunit_suites_fsstress_ext4.yaml} 3
pass 2793350 2018-07-18 05:16:14 2018-07-21 04:17:54 2018-07-21 05:09:54 0:52:00 0:15:57 0:36:03 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/rbd_kernel.yaml} 3
pass 2793352 2018-07-18 05:16:15 2018-07-21 04:25:33 2018-07-21 05:11:33 0:46:00 0:31:43 0:14:17 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/few.yaml tasks/rbd_workunit_suites_fsx.yaml} 3
pass 2793354 2018-07-18 05:16:15 2018-07-21 04:31:03 2018-07-21 05:27:03 0:56:00 0:27:28 0:28:32 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/rbd_kfsx.yaml} 3
pass 2793356 2018-07-18 05:16:16 2018-07-21 04:46:12 2018-07-21 05:14:12 0:28:00 0:13:51 0:14:09 ovh master krbd/unmap/{ceph/ceph.yaml clusters/separate-client.yaml conf.yaml filestore-xfs.yaml kernels/single-major-off.yaml tasks/unmap.yaml} 2
pass 2793357 2018-07-18 05:16:17 2018-07-21 04:53:32 2018-07-21 05:53:32 1:00:00 0:30:02 0:29:58 ovh master krbd/wac/wac/{ceph/ceph.yaml clusters/fixed-3.yaml conf.yaml tasks/wac.yaml verify/many-resets.yaml} 3
pass 2793359 2018-07-18 05:16:18 2018-07-21 04:57:53 2018-07-21 05:25:52 0:27:59 0:15:16 0:12:43 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/rbd_map_snapshot_io.yaml} 3
pass 2793361 2018-07-18 05:16:18 2018-07-21 05:01:18 2018-07-21 06:35:19 1:34:01 1:16:28 0:17:33 ovh master krbd/thrash/{ceph/ceph.yaml clusters/fixed-3.yaml conf.yaml thrashers/upmap.yaml thrashosds-health.yaml workloads/rbd_workunit_suites_ffsb.yaml} 3
fail 2793363 2018-07-18 05:16:19 2018-07-21 10:05:52 2018-07-21 14:29:56 4:24:04 3:48:23 0:35:41 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/many.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
Failure Reason:

Command failed (workunit test suites/iozone.sh) on ovh069 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=mimic TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/iozone.sh'

fail 2793365 2018-07-18 05:16:20 2018-07-21 10:06:19 2018-07-21 10:22:18 0:15:59 0:12:37 0:03:22 ovh master krbd/basic/{ceph/ceph.yaml clusters/fixed-1.yaml conf.yaml tasks/krbd_msgr_segments.yaml} 1
Failure Reason:

Command failed on ovh056 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0'

fail 2793367 2018-07-18 05:16:21 2018-07-21 10:11:55 2018-07-21 10:45:55 0:34:00 0:14:29 0:19:31 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/rbd_map_unmap.yaml} 3
Failure Reason:

"2018-07-21 10:43:16.402798 mon.b mon.0 158.69.65.47:6789/0 148 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log

fail 2793369 2018-07-18 05:16:22 2018-07-21 10:15:55 2018-07-21 10:43:54 0:27:59 0:14:32 0:13:27 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/few.yaml tasks/rbd_workunit_trivial_sync.yaml} 3
Failure Reason:

"2018-07-21 10:41:36.644620 mon.a mon.0 158.69.65.35:6789/0 143 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log

fail 2793371 2018-07-18 05:16:22 2018-07-21 10:19:41 2018-07-21 10:45:40 0:25:59 0:14:03 0:11:56 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/rbd_simple_big.yaml} 3
Failure Reason:

"2018-07-21 10:42:38.794094 mon.b mon.0 158.69.65.5:6789/0 154 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 2793373 2018-07-18 05:16:23 2018-07-21 10:21:57 2018-07-21 10:51:57 0:30:00 0:13:52 0:16:08 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/many.yaml tasks/rbd_fio.yaml} 3
Failure Reason:

"2018-07-21 10:48:54.619440 mon.a mon.0 158.69.65.67:6789/0 95 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 2793375 2018-07-18 05:16:24 2018-07-21 10:24:00 2018-07-21 14:38:04 4:14:04 3:55:28 0:18:36 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/krbd_data_pool.yaml} 3
Failure Reason:

Command failed (workunit test rbd/krbd_data_pool.sh) on ovh079 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=mimic TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/krbd_data_pool.sh'

fail 2793377 2018-07-18 05:16:25 2018-07-21 10:27:43 2018-07-21 12:09:44 1:42:01 0:14:11 1:27:50 ovh master krbd/singleton/{conf.yaml msgr-failures/many.yaml tasks/rbd_xfstests.yaml} 4
Failure Reason:

"2018-07-21 11:36:31.521214 mon.a mon.0 158.69.66.25:6789/0 94 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 2793379 2018-07-18 05:16:25 2018-07-21 10:34:27 2018-07-21 12:34:28 2:00:01 1:23:08 0:36:53 ovh master krbd/thrash/{ceph/ceph.yaml clusters/fixed-3.yaml conf.yaml thrashers/backoff.yaml thrashosds-health.yaml workloads/rbd_workunit_suites_ffsb.yaml} 3
fail 2793382 2018-07-18 05:16:26 2018-07-21 10:44:14 2018-07-21 11:24:14 0:40:00 0:14:09 0:25:51 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/krbd_exclusive_option.yaml} 3
Failure Reason:

"2018-07-21 11:05:14.921755 mon.a mon.0 158.69.66.123:6789/0 72 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 2793383 2018-07-18 05:16:27 2018-07-21 10:45:53 2018-07-21 11:27:52 0:41:59 0:13:30 0:28:29 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/few.yaml tasks/rbd_workunit_kernel_untar_build.yaml} 3
Failure Reason:

"2018-07-21 11:08:50.730055 mon.a mon.0 158.69.66.155:6789/0 132 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log

fail 2793384 2018-07-18 05:16:28 2018-07-21 10:45:56 2018-07-21 11:11:55 0:25:59 0:13:38 0:12:21 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/krbd_fallocate.yaml} 3
Failure Reason:

"2018-07-21 11:09:55.726282 mon.a mon.0 158.69.66.151:6789/0 137 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 2793385 2018-07-18 05:16:28 2018-07-21 10:49:59 2018-07-21 11:19:58 0:29:59 0:11:07 0:18:52 ovh master krbd/basic/{ceph/ceph.yaml clusters/fixed-1.yaml conf.yaml tasks/krbd_parent_overlap.yaml} 1
Failure Reason:

Command failed on ovh008 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0'

fail 2793386 2018-07-18 05:16:29 2018-07-21 10:52:05 2018-07-21 11:16:04 0:23:59 0:14:15 0:09:44 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/many.yaml tasks/rbd_workunit_suites_dbench.yaml} 3
Failure Reason:

"2018-07-21 11:13:51.804954 mon.b mon.0 158.69.66.160:6789/0 102 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 2793387 2018-07-18 05:16:30 2018-07-21 10:52:05 2018-07-21 11:26:04 0:33:59 0:13:55 0:20:04 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/krbd_latest_osdmap_on_map.yaml} 3
Failure Reason:

"2018-07-21 11:15:36.573037 mon.a mon.0 158.69.66.167:6789/0 96 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 2793388 2018-07-18 05:16:31 2018-07-21 10:54:05 2018-07-21 11:44:05 0:50:00 0:13:42 0:36:18 ovh master krbd/thrash/{ceph/ceph.yaml clusters/fixed-3.yaml conf.yaml thrashers/mon-thrasher.yaml thrashosds-health.yaml workloads/rbd_fio.yaml} 3
Failure Reason:

Command failed on ovh083 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'

fail 2793389 2018-07-18 05:16:32 2018-07-21 10:56:01 2018-07-21 11:46:01 0:50:00 0:13:16 0:36:44 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/few.yaml tasks/rbd_workunit_suites_ffsb.yaml} 3
Failure Reason:

"2018-07-21 11:19:44.971226 mon.a mon.0 158.69.66.182:6789/0 109 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log

fail 2793390 2018-07-18 05:16:32 2018-07-21 11:01:33 2018-07-21 11:49:32 0:47:59 0:14:36 0:33:23 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/rbd_concurrent.yaml} 3
Failure Reason:

Command failed on ovh006 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'

fail 2793391 2018-07-18 05:16:33 2018-07-21 11:01:48 2018-07-21 11:23:47 0:21:59 0:12:34 0:09:25 ovh master krbd/unmap/{ceph/ceph.yaml clusters/separate-client.yaml conf.yaml filestore-xfs.yaml kernels/single-major-on.yaml tasks/unmap.yaml} 2
Failure Reason:

Command failed on ovh017 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2'

pass 2793392 2018-07-18 05:16:34 2018-07-21 11:06:00 2018-07-21 12:02:00 0:56:00 0:21:13 0:34:47 ovh master krbd/wac/wac/{ceph/ceph.yaml clusters/fixed-3.yaml conf.yaml tasks/wac.yaml verify/no-resets.yaml} 3
fail 2793393 2018-07-18 05:16:35 2018-07-21 11:09:42 2018-07-21 12:09:43 1:00:01 0:14:12 0:45:49 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/rbd_huge_tickets.yaml} 3
Failure Reason:

"2018-07-21 11:44:05.091871 mon.b mon.0 158.69.66.3:6789/0 86 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 2793394 2018-07-18 05:16:35 2018-07-21 11:11:49 2018-07-21 12:01:49 0:50:00 0:13:11 0:36:49 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/many.yaml tasks/rbd_workunit_suites_fsstress.yaml} 3
Failure Reason:

Command failed on ovh072 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'

fail 2793395 2018-07-18 05:16:36 2018-07-21 11:11:57 2018-07-21 11:55:56 0:43:59 0:14:25 0:29:34 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/rbd_image_read.yaml} 3
Failure Reason:

Command failed on ovh054 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'

fail 2793396 2018-07-18 05:16:37 2018-07-21 11:16:06 2018-07-21 12:20:06 1:04:00 0:13:36 0:50:24 ovh master krbd/thrash/{ceph/ceph.yaml clusters/fixed-3.yaml conf.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rbd_workunit_suites_ffsb.yaml} 3
Failure Reason:

Command failed on ovh091 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'

fail 2793397 2018-07-18 05:16:38 2018-07-21 11:20:12 2018-07-21 12:00:12 0:40:00 0:14:15 0:25:45 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/few.yaml tasks/rbd_workunit_suites_fsstress_ext4.yaml} 3
Failure Reason:

"2018-07-21 11:42:07.310839 mon.a mon.0 158.69.66.27:6789/0 129 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)" in cluster log

fail 2793398 2018-07-18 05:16:38 2018-07-21 11:22:06 2018-07-21 12:32:06 1:10:00 0:13:56 0:56:04 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/rbd_kernel.yaml} 3
Failure Reason:

"2018-07-21 11:42:14.701818 mon.a mon.0 158.69.66.33:6789/0 181 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log

fail 2793399 2018-07-18 05:16:39 2018-07-21 11:23:59 2018-07-21 11:39:59 0:16:00 0:11:12 0:04:48 ovh master krbd/basic/{ceph/ceph.yaml clusters/fixed-1.yaml conf.yaml tasks/krbd_whole_object_discard.yaml} 1
Failure Reason:

Command failed on ovh010 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0'

pass 2793400 2018-07-18 05:16:40 2018-07-21 11:24:15 2018-07-21 13:40:17 2:16:02 0:46:45 1:29:17 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/many.yaml tasks/rbd_workunit_suites_fsx.yaml} 3
fail 2793401 2018-07-18 05:16:41 2018-07-21 11:26:07 2018-07-21 12:06:06 0:39:59 0:15:08 0:24:51 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/rbd_kfsx.yaml} 3
Failure Reason:

"2018-07-21 11:54:41.044396 mon.a mon.0 158.69.66.73:6789/0 118 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log

fail 2793402 2018-07-18 05:16:41 2018-07-21 11:28:00 2018-07-21 11:50:00 0:22:00 0:13:52 0:08:08 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/rbd_map_snapshot_io.yaml} 3
Failure Reason:

"2018-07-21 11:48:05.864350 mon.b mon.0 158.69.66.7:6789/0 73 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 2793403 2018-07-18 05:16:42 2018-07-21 11:28:01 2018-07-21 12:22:00 0:53:59 0:13:54 0:40:05 ovh master krbd/thrash/{ceph/ceph.yaml clusters/fixed-3.yaml conf.yaml thrashers/upmap.yaml thrashosds-health.yaml workloads/rbd_fio.yaml} 3
Failure Reason:

Command failed on ovh090 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3'

pass 2793404 2018-07-18 05:16:43 2018-07-21 11:30:20 2018-07-21 14:04:22 2:34:02 1:47:40 0:46:22 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/few.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
fail 2793405 2018-07-18 05:16:44 2018-07-21 11:38:00 2018-07-21 12:50:00 1:12:00 0:13:26 0:58:34 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/many.yaml tasks/rbd_map_unmap.yaml} 3
Failure Reason:

"2018-07-21 12:09:07.552064 mon.a mon.0 158.69.67.100:6789/0 140 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 2793406 2018-07-18 05:16:44 2018-07-21 11:38:07 2018-07-21 12:16:07 0:38:00 0:13:49 0:24:11 ovh master krbd/rbd/{clusters/fixed-3.yaml conf.yaml msgr-failures/many.yaml tasks/rbd_workunit_trivial_sync.yaml} 3
Failure Reason:

"2018-07-21 11:58:27.023407 mon.b mon.0 158.69.66.8:6789/0 123 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 2793407 2018-07-18 05:16:45 2018-07-21 11:40:11 2018-07-21 12:40:11 1:00:00 0:15:05 0:44:55 ovh master krbd/rbd-nomount/{clusters/fixed-3.yaml conf.yaml install/ceph.yaml msgr-failures/few.yaml tasks/rbd_simple_big.yaml} 3
Failure Reason:

"2018-07-21 12:13:59.328382 mon.a mon.0 158.69.67.117:6789/0 138 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log