User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-05-23 05:15:02 | 2017-05-23 07:18:16 | 2017-05-23 08:41:05 | 1:22:49 | ceph-ansible | kraken | vps | ae0eab5 | 5 | 3 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 1220247 | | 2017-05-23 05:15:44 | 2017-05-23 07:06:20 | 2017-05-23 08:04:21 | 0:58:01 | 0:18:58 | 0:39:03 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |
fail | 1220250 | | 2017-05-23 05:15:45 | 2017-05-23 07:07:04 | 2017-05-23 07:47:04 | 0:40:00 | 0:16:41 | 0:23:19 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 |
Failure Reason:

`Command failed (workunit test cls/test_cls_hello.sh) on vpm029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'`
pass | 1220253 | | 2017-05-23 05:15:46 | 2017-05-23 07:13:57 | 2017-05-23 07:55:57 | 0:42:00 | 0:21:35 | 0:20:25 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_cli_tests.yaml} | 3 |
fail | 1220257 | | 2017-05-23 05:15:47 | 2017-05-23 07:16:08 | 2017-05-23 07:32:08 | 0:16:00 | | | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3 |
Failure Reason:

`Could not reconnect to ubuntu@vpm183.front.sepia.ceph.com`
pass | 1220261 | | 2017-05-23 05:15:48 | 2017-05-23 07:18:16 | 2017-05-23 08:28:14 | 1:09:58 | 0:18:22 | 0:51:36 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |
fail | 1220265 | | 2017-05-23 05:15:49 | 2017-05-23 07:19:04 | 2017-05-23 08:41:05 | 1:22:01 | 0:18:11 | 1:03:50 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 |
Failure Reason:

`Command failed (workunit test cls/test_cls_hello.sh) on vpm077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'`
pass | 1220269 | | 2017-05-23 05:15:50 | 2017-05-23 07:19:35 | 2017-05-23 08:27:36 | 1:08:01 | 0:17:58 | 0:50:03 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_cli_tests.yaml} | 3 |
pass | 1220273 | | 2017-05-23 05:15:51 | 2017-05-23 07:20:51 | 2017-05-23 08:02:51 | 0:42:00 | 0:20:22 | 0:21:38 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3 |