User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-06-03 05:15:02 | 2017-06-03 10:32:23 | 2017-06-03 12:56:43 | 2:24:20 | ceph-ansible | kraken | vps | ae0eab5 | 5 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---
pass | 1258845 | 2017-06-03 05:15:33 | 2017-06-03 09:56:22 | 2017-06-03 12:50:26 | 2:54:04 | 0:21:13 | 2:32:51 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |
fail | 1258849 | 2017-06-03 05:15:34 | 2017-06-03 10:08:23 | 2017-06-03 10:42:23 | 0:34:00 | 0:14:29 | 0:19:31 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 |
Failure Reason:
> Command failed (workunit test cls/test_cls_hello.sh) on vpm159 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
pass | 1258852 | 2017-06-03 05:15:34 | 2017-06-03 10:20:56 | 2017-06-03 11:50:57 | 1:30:01 | 0:24:55 | 1:05:06 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_cli_tests.yaml} | 3 |
pass | 1258856 | 2017-06-03 05:15:35 | 2017-06-03 10:24:47 | 2017-06-03 12:36:49 | 2:12:02 | 0:15:34 | 1:56:28 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3 |
pass | 1258859 | 2017-06-03 05:15:35 | 2017-06-03 10:28:48 | 2017-06-03 12:54:51 | 2:26:03 | 0:14:54 | 2:11:09 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |
fail | 1258863 | 2017-06-03 05:15:36 | 2017-06-03 10:32:23 | 2017-06-03 11:22:23 | 0:50:00 | 0:21:03 | 0:28:57 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 |
Failure Reason:
> Command failed (workunit test cls/test_cls_hello.sh) on vpm159 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
fail | 1258868 | 2017-06-03 05:15:37 | 2017-06-03 10:32:23 | 2017-06-03 12:20:24 | 1:48:01 | | | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_cli_tests.yaml} | 3 |
Failure Reason:
> Could not reconnect to ubuntu@vpm197.front.sepia.ceph.com
pass | 1258871 | 2017-06-03 05:15:37 | 2017-06-03 10:36:40 | 2017-06-03 12:56:43 | 2:20:03 | 0:20:26 | 1:59:37 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3 |