| User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
|---|---|---|---|---|---|---|---|---|---|---|
| teuthology | 2017-05-29 04:05:02 | 2017-05-29 04:08:14 | 2017-05-29 05:26:25 | 1:18:11 | ceph-ansible | master | vps | 0d3e6a9 | 1 | 5 |
| Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| fail | 1241070 | 2017-05-29 04:05:34 | 2017-05-29 04:08:14 | 2017-05-29 04:56:15 | 0:48:01 | 0:20:49 | 0:27:12 | vps | master | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |
| fail | 1241071 | 2017-05-29 04:05:39 | 2017-05-29 04:10:27 | 2017-05-29 05:08:25 | 0:57:58 | 0:16:03 | 0:41:55 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 |
| fail | 1241072 | 2017-05-29 04:05:40 | 2017-05-29 04:10:27 | 2017-05-29 05:26:25 | 1:15:58 | 0:19:29 | 0:56:29 | vps | master | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3 |
| pass | 1241073 | 2017-05-29 04:05:41 | 2017-05-29 04:12:20 | 2017-05-29 04:52:20 | 0:40:00 | 0:15:30 | 0:24:30 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |
| fail | 1241074 | 2017-05-29 04:05:41 | 2017-05-29 04:16:33 | 2017-05-29 05:08:34 | 0:52:01 | 0:19:36 | 0:32:25 | vps | master | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 |
| fail | 1241075 | 2017-05-29 04:05:42 | 2017-05-29 04:18:34 | 2017-05-29 05:04:35 | 0:46:01 | | | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3 |

Failure Reasons:

- 1241070: 'check health' reached maximum tries (6) after waiting for 90 seconds
- 1241071: Command failed (workunit test cls/test_cls_hello.sh) on vpm133 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
- 1241072: 'check health' reached maximum tries (6) after waiting for 90 seconds
- 1241074: Command failed on vpm157 with status 2: 'cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install --upgrade pip ; pip install setuptools>=11.3 ansible==2.2.1 ; ansible-playbook -vv -i inven.yml site.yml'
- 1241075: Could not reconnect to ubuntu@vpm173.front.sepia.ceph.com
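
Jobs 1241070 and 1241072 failed at the same step: the ceph-ansible smoke task gave up waiting for the newly deployed cluster to report a healthy state. The sketch below is a rough illustration of that kind of check, assuming a poll of `ceph health` every 15 seconds for 6 tries (6 × 15 s matches the 90 seconds in the failure reason); the function name and parameters are illustrative, not the actual teuthology implementation.

```python
import subprocess
import time

def wait_until_healthy(tries=6, delay=15):
    """Poll `ceph health` until the cluster reports HEALTH_OK.

    Hypothetical sketch of the check behind the
    "'check health' reached maximum tries (6) after waiting for 90 seconds"
    failures above; tries/delay are assumptions, not the real task parameters.
    """
    for attempt in range(tries):
        out = subprocess.run(
            ["ceph", "health"], capture_output=True, text=True
        ).stdout.strip()
        if out.startswith("HEALTH_OK"):
            return True
        time.sleep(delay)
    raise RuntimeError(
        "'check health' reached maximum tries (%d) after waiting for %d seconds"
        % (tries, tries * delay)
    )
```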