User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail
---|---|---|---|---|---|---|---|---|---|---
teuthology | 2017-06-24 05:15:03 | 2017-06-24 08:24:22 | 2017-06-24 10:06:27 | 1:42:05 | ceph-ansible | kraken | vps | 2f5c65b | 5 | 3
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
pass | 1322731 | | 2017-06-24 05:15:40 | 2017-06-24 07:53:26 | 2017-06-24 09:19:26 | 1:26:00 | 0:20:50 | 1:05:10 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3
fail | 1322734 | | 2017-06-24 05:15:41 | 2017-06-24 08:14:15 | 2017-06-24 09:54:15 | 1:40:00 | 0:17:29 | 1:22:31 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3
pass | 1322738 | | 2017-06-24 05:15:42 | 2017-06-24 08:18:36 | 2017-06-24 09:00:36 | 0:42:00 | 0:21:02 | 0:20:58 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_cli_tests.yaml} | 3
fail | 1322742 | | 2017-06-24 05:15:42 | 2017-06-24 08:24:21 | 2017-06-24 08:44:20 | 0:19:59 | | | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3
pass | 1322746 | | 2017-06-24 05:15:43 | 2017-06-24 08:24:22 | 2017-06-24 09:54:21 | 1:29:59 | 0:16:05 | 1:13:54 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3
fail | 1322750 | | 2017-06-24 05:15:44 | 2017-06-24 08:26:26 | 2017-06-24 10:06:27 | 1:40:01 | 0:20:18 | 1:19:43 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3
pass | 1322754 | | 2017-06-24 05:15:45 | 2017-06-24 08:30:02 | 2017-06-24 09:30:01 | 0:59:59 | 0:16:33 | 0:43:26 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_cli_tests.yaml} | 3
pass | 1322758 | | 2017-06-24 05:15:45 | 2017-06-24 08:31:19 | 2017-06-24 09:23:17 | 0:51:58 | 0:21:42 | 0:30:16 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3

Failure Reasons:

- 1322734: Command failed (workunit test cls/test_cls_hello.sh) on vpm195 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
- 1322742: Could not reconnect to ubuntu@vpm031.front.sepia.ceph.com
- 1322750: Command failed (workunit test cls/test_cls_hello.sh) on vpm117 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'