User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
vasu | 2017-08-22 20:37:53 | 2017-08-22 20:39:12 | 2017-08-22 21:53:02 | 1:13:50 | ceph-ansible | master | vps | 85b6367 | 6 | 12 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 1551625 | 2017-08-22 20:38:22 | 2017-08-22 20:39:15 | 2017-08-22 21:19:01 | 0:39:46 | 0:27:00 | 0:12:46 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/ceph-admin-commands.yaml} | 3 |
fail | 1551626 | 2017-08-22 20:38:23 | 2017-08-22 20:39:23 | 2017-08-22 21:19:03 | 0:39:40 | | | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/cls.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm069.front.sepia.ceph.com
pass | 1551627 | 2017-08-22 20:38:24 | 2017-08-22 20:39:12 | 2017-08-22 21:43:08 | 1:03:56 | 0:31:15 | 0:32:41 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/rbd_import_export.yaml} | 3 |
fail | 1551628 | 2017-08-22 20:38:24 | 2017-08-22 20:39:12 | 2017-08-22 21:09:02 | 0:29:50 | 0:20:44 | 0:09:06 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/ceph-admin-commands.yaml} | 3 |
Failure Reason: Command failed on vpm089 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
fail | 1551629 | 2017-08-22 20:38:25 | 2017-08-22 20:39:11 | 2017-08-22 21:21:10 | 0:41:59 | 0:30:44 | 0:11:15 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/cls.yaml} | 3 |
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm127 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
fail | 1551630 | 2017-08-22 20:38:26 | 2017-08-22 20:39:15 | 2017-08-22 21:09:16 | 0:30:01 | 0:22:11 | 0:07:50 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/rbd_import_export.yaml} | 3 |
Failure Reason: Command failed on vpm177 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
pass | 1551631 | 2017-08-22 20:38:27 | 2017-08-22 20:39:11 | 2017-08-22 21:25:01 | 0:45:50 | 0:33:47 | 0:12:03 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/ceph-admin-commands.yaml} | 3 |
fail | 1551632 | 2017-08-22 20:38:27 | 2017-08-22 20:39:16 | 2017-08-22 20:53:07 | 0:13:51 | | | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/cls.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm025.front.sepia.ceph.com
pass | 1551633 | 2017-08-22 20:38:28 | 2017-08-22 20:39:12 | 2017-08-22 21:53:02 | 1:13:50 | 0:58:34 | 0:15:16 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/rbd_import_export.yaml} | 3 |
fail | 1551634 | 2017-08-22 20:38:29 | 2017-08-22 20:39:12 | 2017-08-22 21:07:14 | 0:28:02 | 0:19:49 | 0:08:13 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/ceph-admin-commands.yaml} | 3 |
Failure Reason: Command failed on vpm037 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
fail | 1551635 | 2017-08-22 20:38:30 | 2017-08-22 20:39:12 | 2017-08-22 21:27:07 | 0:47:55 | 0:37:15 | 0:10:40 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/cls.yaml} | 3 |
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm103 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
fail | 1551636 | 2017-08-22 20:38:30 | 2017-08-22 20:39:12 | 2017-08-22 21:07:08 | 0:27:56 | 0:19:55 | 0:08:01 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/rbd_import_export.yaml} | 3 |
Failure Reason: Command failed on vpm163 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
pass | 1551637 | 2017-08-22 20:38:31 | 2017-08-22 20:39:11 | 2017-08-22 21:21:09 | 0:41:58 | 0:28:23 | 0:13:35 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/ceph-admin-commands.yaml} | 3 |
fail | 1551638 | 2017-08-22 20:38:32 | 2017-08-22 20:39:14 | 2017-08-22 21:05:09 | 0:25:55 | 0:18:16 | 0:07:39 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/cls.yaml} | 3 |
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm137 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
pass | 1551639 | 2017-08-22 20:38:33 | 2017-08-22 20:39:12 | 2017-08-22 21:15:09 | 0:35:57 | 0:24:39 | 0:11:18 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/rbd_import_export.yaml} | 3 |
fail | 1551640 | 2017-08-22 20:38:34 | 2017-08-22 20:39:11 | 2017-08-22 21:15:09 | 0:35:58 | 0:27:56 | 0:08:02 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/ceph-admin-commands.yaml} | 3 |
Failure Reason: Command failed on vpm029 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
fail | 1551641 | 2017-08-22 20:38:35 | 2017-08-22 20:39:17 | 2017-08-22 21:25:10 | 0:45:53 | 0:34:27 | 0:11:26 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/cls.yaml} | 3 |
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
fail | 1551642 | 2017-08-22 20:38:35 | 2017-08-22 20:39:16 | 2017-08-22 21:09:05 | 0:29:49 | 0:21:41 | 0:08:08 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/rbd_import_export.yaml} | 3 |
Failure Reason: Command failed on vpm199 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'