User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
vasu | 2017-08-23 18:29:28 | 2017-08-23 18:30:30 | 2017-08-23 19:44:22 | 1:13:52 | ceph-ansible | master | vps | 47ffcb1 | 6 | 12 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 1555059 | 2017-08-23 18:29:46 | 2017-08-23 18:30:30 | 2017-08-23 19:04:20 | 0:33:50 | 0:22:01 | 0:11:49 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/ceph-admin-commands.yaml} | 3 | |
fail | 1555060 | 2017-08-23 18:29:47 | 2017-08-23 18:30:30 | 2017-08-23 19:44:22 | 1:13:52 | 0:16:07 | 0:57:45 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
pass | 1555061 | 2017-08-23 18:29:48 | 2017-08-23 18:30:30 | 2017-08-23 19:08:20 | 0:37:50 | 0:25:04 | 0:12:46 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/rbd_import_export.yaml} | 3 | |
pass | 1555062 | 2017-08-23 18:29:48 | 2017-08-23 18:30:30 | 2017-08-23 19:32:22 | 1:01:52 | 0:18:13 | 0:43:39 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/ceph-admin-commands.yaml} | 3 | |
fail | 1555063 | 2017-08-23 18:29:49 | 2017-08-23 18:30:30 | 2017-08-23 19:08:21 | 0:37:51 | 0:24:22 | 0:13:29 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm155 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
pass | 1555064 | 2017-08-23 18:29:50 | 2017-08-23 18:30:30 | 2017-08-23 19:24:22 | 0:53:52 | 0:18:16 | 0:35:36 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/rbd_import_export.yaml} | 3 | |
fail | 1555065 | 2017-08-23 18:29:50 | 2017-08-23 18:30:30 | 2017-08-23 19:16:21 | 0:45:51 | 0:15:14 | 0:30:37 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/ceph-admin-commands.yaml} | 3 | |
Failure Reason: Command failed on vpm165 with status 2: 'cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install --upgrade pip ; pip install selinux ; pip install setuptools>=11.3 ansible==2.2.1 ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -i inven.yml site.yml'
fail | 1555066 | 2017-08-23 18:29:51 | 2017-08-23 18:30:30 | 2017-08-23 19:22:21 | 0:51:51 | 0:18:08 | 0:33:43 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm197 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
pass | 1555067 | 2017-08-23 18:29:52 | 2017-08-23 18:30:30 | 2017-08-23 19:08:21 | 0:37:51 | 0:24:06 | 0:13:45 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/rbd_import_export.yaml} | 3 | |
fail | 1555068 | 2017-08-23 18:29:52 | 2017-08-23 18:30:30 | 2017-08-23 19:36:21 | 1:05:51 | | | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/ceph-admin-commands.yaml} | 3 |
Failure Reason: failed to install new kernel version within timeout
fail | 1555069 | 2017-08-23 18:29:53 | 2017-08-23 18:30:30 | 2017-08-23 19:04:21 | 0:33:51 | 0:21:39 | 0:12:12 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm171 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
fail | 1555070 | 2017-08-23 18:29:54 | 2017-08-23 18:30:30 | 2017-08-23 19:08:21 | 0:37:51 | | | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/rbd_import_export.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm195.front.sepia.ceph.com
pass | 1555071 | 2017-08-23 18:29:54 | 2017-08-23 18:30:30 | 2017-08-23 19:08:21 | 0:37:51 | 0:24:25 | 0:13:26 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/ceph-admin-commands.yaml} | 3 | |
fail | 1555072 | 2017-08-23 18:29:55 | 2017-08-23 18:30:30 | 2017-08-23 19:30:21 | 0:59:51 | 0:17:05 | 0:42:46 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
fail | 1555073 | 2017-08-23 18:29:56 | 2017-08-23 18:30:30 | 2017-08-23 19:20:21 | 0:49:51 | 0:14:13 | 0:35:38 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/rbd_import_export.yaml} | 3 | |
Failure Reason: Command failed on vpm167 with status 2: 'cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install --upgrade pip ; pip install selinux ; pip install setuptools>=11.3 ansible==2.2.1 ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -i inven.yml site.yml'
fail | 1555074 | 2017-08-23 18:29:56 | 2017-08-23 18:30:30 | 2017-08-23 18:58:21 | 0:27:51 | 0:17:24 | 0:10:27 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/ceph-admin-commands.yaml} | 3 | |
Failure Reason: Command failed on vpm121 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
fail | 1555075 | 2017-08-23 18:29:57 | 2017-08-23 18:30:30 | 2017-08-23 19:06:20 | 0:35:50 | 0:25:02 | 0:10:48 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
fail | 1555076 | 2017-08-23 18:29:58 | 2017-08-23 18:30:30 | 2017-08-23 18:58:21 | 0:27:51 | 0:17:25 | 0:10:26 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/rbd_import_export.yaml} | 3 | |
Failure Reason: Command failed on vpm101 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'