Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version
fail 3574487 2019-02-11 07:02:17 2019-02-11 08:31:11 2019-02-11 09:01:11 0:30:00 0:11:41 0:18:19 ovh master ubuntu 16.04
Failure Reason: failed during ceph-deploy cmd: disk zap ovh053:/dev/sdb , ec=1

pass 3574488 2019-02-11 07:02:18 2019-02-11 08:31:39 2019-02-11 09:13:39 0:42:00 0:23:27 0:18:33 ovh master centos 7.4
fail 3574489 2019-02-11 07:02:19 2019-02-11 08:31:50 2019-02-11 09:55:51 1:24:01 0:23:04 1:00:57 ovh master centos 7.5
Failure Reason: ceph-deploy: Failed during gather keys

pass 3574490 2019-02-11 07:02:19 2019-02-11 08:31:51 2019-02-11 09:17:51 0:46:00 0:30:22 0:15:38 ovh master centos 7.4
pass 3574491 2019-02-11 07:02:20 2019-02-11 08:35:58 2019-02-11 10:17:59 1:42:01 1:28:21 0:13:40 ovh master centos 7.4
pass 3574492 2019-02-11 07:02:21 2019-02-11 08:39:56 2019-02-11 09:27:56 0:48:00 0:19:31 0:28:29 ovh master centos 7.4
pass 3574493 2019-02-11 07:02:21 2019-02-11 08:52:11 2019-02-11 09:32:11 0:40:00 0:19:34 0:20:26 ovh master centos 7.4
pass 3574494 2019-02-11 07:02:22 2019-02-11 08:56:37 2019-02-11 10:02:38 1:06:01 0:45:21 0:20:40 ovh master centos 7.4
pass 3574495 2019-02-11 07:02:23 2019-02-11 09:01:23 2019-02-11 09:57:23 0:56:00 0:27:57 0:28:03 ovh master centos 7.4
fail 3574496 2019-02-11 07:02:23 2019-02-11 09:08:01 2019-02-11 09:14:00 0:05:59 ovh master centos 7.4
Failure Reason: [Errno None] Unable to connect to port 22 on 158.69.69.113

fail 3574497 2019-02-11 07:02:24 2019-02-11 09:13:51 2019-02-11 09:51:51 0:38:00 0:23:47 0:14:13 ovh master centos 7.4
Failure Reason: Command failed (workunit test libcephfs/test.sh) on ovh016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b4fa47390d79cc835be6e19e65b2bd6cf29f5173 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'

fail 3574498 2019-02-11 07:02:25 2019-02-11 09:14:01 2019-02-11 12:50:04 3:36:03 3:19:15 0:16:48 ovh master centos 7.4
Failure Reason: Command failed (workunit test rados/test.sh) on ovh042 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b4fa47390d79cc835be6e19e65b2bd6cf29f5173 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 3574499 2019-02-11 07:02:25 2019-02-11 09:17:53 2019-02-11 13:01:56 3:44:03 3:23:50 0:20:13 ovh master centos 7.4
Failure Reason: Command failed (workunit test rados/test.sh) on ovh068 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b4fa47390d79cc835be6e19e65b2bd6cf29f5173 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 3574500 2019-02-11 07:02:26 2019-02-11 09:28:09 2019-02-11 11:20:10 1:52:01 1:17:41 0:34:20 ovh master centos 7.4
pass 3574501 2019-02-11 07:02:27 2019-02-11 09:31:46 2019-02-11 10:47:47 1:16:01 0:56:38 0:19:23 ovh master centos 7.4
fail 3574502 2019-02-11 07:02:28 2019-02-11 09:32:12 2019-02-11 10:08:11 0:35:59 0:18:38 0:17:21 ovh master ubuntu 16.04
Failure Reason: ceph-deploy: Failed to zap osds

pass 3574503 2019-02-11 07:02:28 2019-02-11 09:51:53 2019-02-11 10:25:53 0:34:00 0:19:50 0:14:10 ovh master centos 7.4
pass 3574504 2019-02-11 07:02:29 2019-02-11 09:55:53 2019-02-11 10:43:53 0:48:00 0:27:17 0:20:43 ovh master centos 7.4
pass 3574505 2019-02-11 07:02:30 2019-02-11 09:57:25 2019-02-11 10:47:25 0:50:00 0:21:12 0:28:48 ovh master centos 7.4
pass 3574506 2019-02-11 07:02:30 2019-02-11 10:02:50 2019-02-11 10:54:50 0:52:00 0:31:07 0:20:53 ovh master centos 7.4
pass 3574507 2019-02-11 07:02:31 2019-02-11 10:08:23 2019-02-11 11:20:24 1:12:01 0:52:40 0:19:21 ovh master centos 7.4
pass 3574508 2019-02-11 07:02:32 2019-02-11 10:18:02 2019-02-11 10:52:01 0:33:59 0:17:44 0:16:15 ovh master centos 7.4
pass 3574509 2019-02-11 07:02:32 2019-02-11 10:25:55 2019-02-11 11:03:55 0:38:00 0:21:25 0:16:35 ovh master centos 7.4
pass 3574510 2019-02-11 07:02:33 2019-02-11 10:43:58 2019-02-11 11:29:57 0:45:59 0:27:46 0:18:13 ovh master centos 7.4
fail 3574511 2019-02-11 07:02:34 2019-02-11 10:47:26 2019-02-11 12:39:27 1:52:01 1:24:42 0:27:19 ovh master centos 7.4
Failure Reason: "2019-02-11 11:28:29.072604 mon.b (mon.0) 107 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 3574512 2019-02-11 07:02:34 2019-02-11 10:47:48 2019-02-11 12:05:48 1:18:00 1:03:05 0:14:55 ovh master centos 7.4
pass 3574513 2019-02-11 07:02:35 2019-02-11 10:52:04 2019-02-11 11:50:04 0:58:00 0:42:49 0:15:11 ovh master centos 7.4
pass 3574514 2019-02-11 07:02:36 2019-02-11 10:55:00 2019-02-11 11:35:00 0:40:00 0:21:00 0:19:00 ovh master centos 7.4