User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
vasu | 2017-08-28 23:21:53 | 2017-08-28 23:22:47 | 2017-08-29 00:54:40 | 1:31:53 | ceph-ansible | luminous | vps | 9df9e82 | 6 | 12 |
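The Runtime column in the summary above is simply Updated minus Started. A minimal sketch, using the timestamps from this run:

```python
from datetime import datetime

# Parse the Started and Updated timestamps from the summary row above.
fmt = "%Y-%m-%d %H:%M:%S"
started = datetime.strptime("2017-08-28 23:22:47", fmt)
updated = datetime.strptime("2017-08-29 00:54:40", fmt)

runtime = updated - started
print(runtime)  # prints: 1:31:53, matching the Runtime column
```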
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 1575077 | 2017-08-28 23:22:01 | 2017-08-28 23:22:33 | 2017-08-29 00:00:26 | 0:37:53 | 0:26:10 | 0:11:43 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/ceph-admin-commands.yaml} | 3 | |
fail | 1575078 | 2017-08-28 23:22:02 | 2017-08-28 23:23:15 | 2017-08-29 00:09:11 | 0:45:56 | 0:26:59 | 0:18:57 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm041 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
pass | 1575079 | 2017-08-28 23:22:03 | 2017-08-28 23:22:34 | 2017-08-29 00:00:26 | 0:37:52 | 0:27:17 | 0:10:35 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/rbd_import_export.yaml} | 3 | |
fail | 1575080 | 2017-08-28 23:22:03 | 2017-08-28 23:23:11 | 2017-08-28 23:57:11 | 0:34:00 | 0:21:39 | 0:12:21 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/ceph-admin-commands.yaml} | 3 | |
Failure Reason:
Command failed on vpm097 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
fail | 1575081 | 2017-08-28 23:22:04 | 2017-08-28 23:23:08 | 2017-08-28 23:57:23 | 0:34:15 | 0:24:49 | 0:09:26 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm173 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
fail | 1575082 | 2017-08-28 23:22:05 | 2017-08-28 23:22:32 | 2017-08-29 00:16:26 | 0:53:54 | 0:41:12 | 0:12:42 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/rbd_import_export.yaml} | 3 | |
Failure Reason:
Command failed on vpm167 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
pass | 1575083 | 2017-08-28 23:22:05 | 2017-08-28 23:22:45 | 2017-08-28 23:58:47 | 0:36:02 | 0:24:20 | 0:11:42 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/ceph-admin-commands.yaml} | 3 | |
fail | 1575084 | 2017-08-28 23:22:06 | 2017-08-28 23:23:02 | 2017-08-29 00:11:07 | 0:48:05 | 0:26:28 | 0:21:37 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm095 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
pass | 1575085 | 2017-08-28 23:22:07 | 2017-08-28 23:23:16 | 2017-08-29 00:01:12 | 0:37:56 | 0:26:28 | 0:11:28 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/rbd_import_export.yaml} | 3 | |
fail | 1575086 | 2017-08-28 23:22:07 | 2017-08-28 23:22:33 | 2017-08-28 23:58:26 | 0:35:53 | 0:23:30 | 0:12:23 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/ceph-admin-commands.yaml} | 3 | |
Failure Reason:
Command failed on vpm165 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
fail | 1575087 | 2017-08-28 23:22:08 | 2017-08-28 23:23:14 | 2017-08-29 00:03:13 | 0:39:59 | 0:29:40 | 0:10:19 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
fail | 1575088 | 2017-08-28 23:22:09 | 2017-08-28 23:22:52 | 2017-08-29 00:54:40 | 1:31:48 | 1:11:59 | 0:19:49 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/rbd_import_export.yaml} | 3 | |
Failure Reason:
Command failed on vpm143 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
pass | 1575089 | 2017-08-28 23:22:09 | 2017-08-28 23:23:15 | 2017-08-29 00:01:11 | 0:37:56 | 0:25:51 | 0:12:05 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/ceph-admin-commands.yaml} | 3 | |
fail | 1575090 | 2017-08-28 23:22:10 | 2017-08-28 23:23:29 | 2017-08-29 00:00:05 | 0:36:36 | 0:17:28 | 0:19:08 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
pass | 1575091 | 2017-08-28 23:22:11 | 2017-08-28 23:22:39 | 2017-08-28 23:58:39 | 0:36:00 | 0:24:21 | 0:11:39 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/rbd_import_export.yaml} | 3 | |
fail | 1575092 | 2017-08-28 23:22:11 | 2017-08-28 23:22:47 | 2017-08-28 23:54:39 | 0:31:52 | 0:17:58 | 0:13:54 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/ceph-admin-commands.yaml} | 3 | |
Failure Reason:
Command failed on vpm019 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
fail | 1575093 | 2017-08-28 23:22:12 | 2017-08-28 23:22:32 | 2017-08-29 00:00:26 | 0:37:54 | 0:26:44 | 0:11:10 | vps | wip-ansible-fixes | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/cls.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm133 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-ansbile-fixes TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'
fail | 1575094 | 2017-08-28 23:22:12 | 2017-08-28 23:22:46 | 2017-08-29 00:10:40 | 0:47:54 | 0:26:11 | 0:21:43 | vps | wip-ansible-fixes | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-ceph/ceph_ansible.yaml 3-config/bluestore_with_dmcrypt.yaml 4-tasks/rbd_import_export.yaml} | 3 | |
Failure Reason:
Command failed on vpm121 with status 250: 'cd ~/ceph-ansible ; source venv/bin/activate ; ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook -vv -e ireallymeanit=yes -i inven.yml infrastructure-playbooks/purge-cluster.yml'
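Tallying the Status column of the job rows above reproduces the 6 pass / 12 fail counts in the run summary. A minimal sketch (the `statuses` list is transcribed by hand from jobs 1575077 through 1575094, in order):

```python
from collections import Counter

# Status of each job row above, jobs 1575077 .. 1575094 in order.
statuses = [
    "pass", "fail", "pass", "fail", "fail", "fail",
    "pass", "fail", "pass", "fail", "fail", "fail",
    "pass", "fail", "pass", "fail", "fail", "fail",
]

counts = Counter(statuses)
print(counts["pass"], counts["fail"])  # prints: 6 12, matching the suite summary
```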