User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-06-17 05:15:01 | 2017-06-17 08:24:22 | 2017-06-17 10:34:56 | 2:10:34 | ceph-ansible | kraken | vps | ae0eab5 | 5 | 3 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 1296717 | | 2017-06-17 05:15:31 | 2017-06-17 07:54:42 | 2017-06-17 08:30:42 | 0:36:00 | 0:19:55 | 0:16:05 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |
fail | 1296721 | | 2017-06-17 05:15:31 | 2017-06-17 08:00:39 | 2017-06-17 08:38:37 | 0:37:58 | 0:15:16 | 0:22:42 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm153 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
pass | 1296726 | | 2017-06-17 05:15:32 | 2017-06-17 08:04:16 | 2017-06-17 08:44:16 | 0:40:00 | 0:19:37 | 0:20:23 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_cli_tests.yaml} | 3 |
pass | 1296729 | | 2017-06-17 05:15:33 | 2017-06-17 08:04:22 | 2017-06-17 10:30:24 | 2:26:02 | 0:15:16 | 2:10:46 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3 |
fail | 1296734 | | 2017-06-17 05:15:33 | 2017-06-17 08:14:35 | 2017-06-17 10:32:37 | 2:18:02 | | | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |
Failure Reason:
Could not reconnect to ubuntu@vpm019.front.sepia.ceph.com
fail | 1296738 | | 2017-06-17 05:15:34 | 2017-06-17 08:16:55 | 2017-06-17 10:34:56 | 2:18:02 | 0:19:04 | 1:58:57 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm183 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
pass | 1296741 | | 2017-06-17 05:15:34 | 2017-06-17 08:16:55 | 2017-06-17 10:26:55 | 2:10:00 | 0:16:17 | 1:53:43 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_cli_tests.yaml} | 3 |
pass | 1296745 | | 2017-06-17 05:15:35 | 2017-06-17 08:24:22 | 2017-06-17 10:30:24 | 2:06:02 | 0:20:53 | 1:45:09 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3 |