User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-05-08 04:05:04 | 2017-05-08 17:46:46 | 2017-05-08 20:35:46 | 2:49:00 | ceph-ansible | master | vps | 6152fd9 | 1 | 5 |
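For quick triage, a summary row like the one above can be tallied mechanically. The sketch below is illustrative only: it assumes the pipe-separated layout of this report (the row text is copied verbatim from it) and reads the trailing Pass/Fail counts.

```python
# Tally pass/fail from the pipe-separated run summary row above.
row = ("teuthology | 2017-05-08 04:05:04 | 2017-05-08 17:46:46 | "
       "2017-05-08 20:35:46 | 2:49:00 | ceph-ansible | master | vps | "
       "6152fd9 | 1 | 5")
cells = [c.strip() for c in row.split("|")]
passed, failed = int(cells[-2]), int(cells[-1])  # last two columns: Pass, Fail
total = passed + failed
print(f"{failed}/{total} jobs failed ({failed / total:.0%})")  # → 5/6 jobs failed (83%)
```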
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1113372 | 2017-05-08 04:06:00 | 2017-05-08 17:46:46 | 2017-05-08 20:30:50 | 2:44:04 | 0:18:47 | 2:25:17 | vps | master | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |

Failure Reason: 'check health' reached maximum tries (6) after waiting for 90 seconds
fail | 1113373 | 2017-05-08 04:06:01 | 2017-05-08 17:47:19 | 2017-05-08 18:47:21 | 1:00:02 | 0:16:08 | 0:43:54 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 |

Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm155 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
fail | 1113374 | 2017-05-08 04:06:01 | 2017-05-08 17:49:40 | 2017-05-08 19:53:42 | 2:04:02 | 0:17:40 | 1:46:22 | vps | master | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3 |

Failure Reason: 'check health' reached maximum tries (6) after waiting for 90 seconds
pass | 1113375 | 2017-05-08 04:06:02 | 2017-05-08 17:49:40 | 2017-05-08 18:29:40 | 0:40:00 | 0:17:29 | 0:22:31 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |
fail | 1113376 | 2017-05-08 04:06:02 | 2017-05-08 17:51:42 | 2017-05-08 19:45:45 | 1:54:03 | 0:19:18 | 1:34:45 | vps | master | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 |

Failure Reason: 'check health' reached maximum tries (6) after waiting for 90 seconds
fail | 1113377 | 2017-05-08 04:06:03 | 2017-05-08 17:51:42 | 2017-05-08 20:35:46 | 2:44:04 | | | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3 |

Failure Reason: Could not reconnect to ubuntu@vpm065.front.sepia.ceph.com
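Three of the five failures are the same 'check health' timeout, i.e. a bounded health poll being exhausted. A minimal sketch of that pattern follows; the function name and defaults are illustrative assumptions (chosen so that 6 tries at a 15-second interval add up to the 90 seconds in the failure message), not the actual teuthology implementation.

```python
import time

def wait_for_health(check, max_tries=6, interval=15):
    """Poll `check` until it returns True.

    `check` is any zero-argument callable returning a bool. After each
    failed try we sleep `interval` seconds, so exhausting max_tries=6
    at interval=15 means 90 seconds of waiting, as in the failure
    messages above. (Illustrative sketch, not the teuthology API.)
    """
    for attempt in range(1, max_tries + 1):
        if check():
            return attempt  # number of tries it took
        time.sleep(interval)
    raise RuntimeError(
        f"'check health' reached maximum tries ({max_tries}) "
        f"after waiting for {max_tries * interval} seconds")
```

In a real run the probe would inspect cluster health (e.g. by parsing `ceph health` output); exhausting the retry budget and raising is what surfaces as the Failure Reason lines above.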