User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2016-08-02 06:06:02 | 2016-08-02 06:13:47 | 2016-08-02 21:47:04 | 15:33:17 | ceph-ansible | master | vps | 98b9ed1 | 18 | 6 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 346428 | 2016-08-02 06:06:49 | 2016-08-02 06:13:47 | 2016-08-02 12:44:54 | 6:31:07 | 3:01:09 | 3:29:58 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 3 | |
Failure Reason:
Command failed with status 1: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_Cwvx_r --limit vpm165.front.sepia.ceph.com,vpm159.front.sepia.ceph.com,vpm121.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_XadgaD' |
fail | 346429 | 2016-08-02 06:06:50 | 2016-08-02 06:13:47 | 2016-08-02 13:17:00 | 7:03:13 | 6:34:24 | 0:28:49 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm011 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=98b9ed1d7ce05b0079b8d8351d2142ae05874b3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/cls/test_cls_hello.sh' |
dead | 346430 | 2016-08-02 06:06:52 | 2016-08-02 06:13:36 | 2016-08-02 18:17:14 | 12:03:38 | — | — | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 3 |
fail | 346431 | 2016-08-02 06:06:53 | 2016-08-02 06:25:03 | 2016-08-02 10:12:24 | 3:47:21 | 3:29:59 | 0:17:22 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 1 | |
Failure Reason:
'NoneType' object has no attribute 'getpeername' |
dead | 346432 | 2016-08-02 06:06:55 | 2016-08-02 06:40:59 | 2016-08-02 18:42:48 | 12:01:49 | — | — | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/cls.yaml} | — |
fail | 346433 | 2016-08-02 06:06:56 | 2016-08-02 06:48:51 | 2016-08-02 13:26:36 | 6:37:45 | 6:32:15 | 0:05:30 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 1 | |
Failure Reason:
Command failed (workunit test rbd/import_export.sh) on vpm131 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=98b9ed1d7ce05b0079b8d8351d2142ae05874b3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh' |
fail | 346434 | 2016-08-02 06:06:58 | 2016-08-02 06:57:14 | 2016-08-02 12:59:03 | 6:01:49 | 4:51:24 | 1:10:25 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 3 | |
Failure Reason:
Command failed with status 1: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_6W5pKm --limit vpm001.front.sepia.ceph.com,vpm091.front.sepia.ceph.com,vpm127.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook__1QG4F' |
fail | 346435 | 2016-08-02 06:06:59 | 2016-08-02 06:57:19 | 2016-08-02 14:07:07 | 7:09:48 | 6:47:53 | 0:21:55 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm135 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=98b9ed1d7ce05b0079b8d8351d2142ae05874b3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/cls/test_cls_hello.sh' |
dead | 346436 | 2016-08-02 06:07:01 | 2016-08-02 06:57:16 | 2016-08-02 18:59:21 | 12:02:05 | — | — | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | — |
fail | 346437 | 2016-08-02 06:07:02 | 2016-08-02 07:17:09 | 2016-08-02 14:13:00 | 6:55:51 | 6:51:53 | 0:03:58 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 1 | |
Failure Reason:
Command failed (workunit test ceph-tests/ceph-admin-commands.sh) on vpm095 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=98b9ed1d7ce05b0079b8d8351d2142ae05874b3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/ceph-tests/ceph-admin-commands.sh' |
fail | 346438 | 2016-08-02 06:07:04 | 2016-08-02 07:17:16 | 2016-08-02 19:13:10 | 11:55:54 | 6:16:11 | 5:39:43 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 3 | |
Failure Reason:
Command failed with status 1: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_VebeDi --limit vpm115.front.sepia.ceph.com,vpm077.front.sepia.ceph.com,vpm149.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_O8_zbS' |
fail | 346439 | 2016-08-02 06:07:05 | 2016-08-02 07:18:03 | 2016-08-02 14:17:46 | 6:59:43 | 6:54:54 | 0:04:49 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 1 | |
Failure Reason:
Command failed (workunit test rbd/import_export.sh) on vpm129 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=98b9ed1d7ce05b0079b8d8351d2142ae05874b3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh' |
fail | 346440 | 2016-08-02 06:07:07 | 2016-08-02 07:25:50 | 2016-08-02 14:27:43 | 7:01:53 | 6:57:51 | 0:04:02 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 1 | |
Failure Reason:
Command failed (workunit test ceph-tests/ceph-admin-commands.sh) on vpm153 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=98b9ed1d7ce05b0079b8d8351d2142ae05874b3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/ceph-tests/ceph-admin-commands.sh' |
dead | 346441 | 2016-08-02 06:07:08 | 2016-08-02 08:10:21 | 2016-08-02 20:12:17 | 12:01:56 | — | — | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/cls.yaml} | — |
fail | 346442 | 2016-08-02 06:07:10 | 2016-08-02 08:10:13 | 2016-08-02 09:09:53 | 0:59:40 | 0:32:22 | 0:27:18 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 1 | |
Failure Reason:
{'vpm139.front.sepia.ceph.com': {'msg': 'SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh', 'unreachable': True, 'changed': False}} |
dead | 346443 | 2016-08-02 06:07:11 | 2016-08-02 08:43:42 | 2016-08-02 20:45:33 | 12:01:51 | — | — | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | — |
fail | 346444 | 2016-08-02 06:07:13 | 2016-08-02 08:53:07 | 2016-08-02 11:34:50 | 2:41:43 | 2:37:36 | 0:04:07 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 1 | |
Failure Reason:
Command failed with status 1: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_2LWYO4 --limit vpm079.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_D8Wv1j' |
fail | 346445 | 2016-08-02 06:07:15 | 2016-08-02 08:53:14 | 2016-08-02 15:12:59 | 6:19:45 | 5:04:38 | 1:15:07 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 3 | |
Failure Reason:
Command failed with status 1: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": true, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_HjWuuj --limit vpm029.front.sepia.ceph.com,vpm043.front.sepia.ceph.com,vpm049.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_NYLcuG' |
fail | 346446 | 2016-08-02 06:07:16 | 2016-08-02 09:01:37 | 2016-08-02 11:31:14 | 2:29:37 | 2:25:21 | 0:04:16 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 1 | |
Failure Reason:
Command failed with status 1: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_5J0WTv --limit vpm119.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_eVBXin' |
fail | 346447 | 2016-08-02 06:07:18 | 2016-08-02 09:09:51 | 2016-08-02 16:35:41 | 7:25:50 | 5:25:58 | 1:59:52 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 3 | |
Failure Reason:
Command failed with status 1: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": true, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_QTq4xn --limit vpm123.front.sepia.ceph.com,vpm055.front.sepia.ceph.com,vpm177.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_XFMY5O' |
fail | 346448 | 2016-08-02 06:07:19 | 2016-08-02 09:10:25 | 2016-08-02 16:12:11 | 7:01:46 | 6:58:00 | 0:03:46 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 1 | |
Failure Reason:
Command failed (workunit test rbd/import_export.sh) on vpm003 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=98b9ed1d7ce05b0079b8d8351d2142ae05874b3e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh' |
dead | 346449 | 2016-08-02 06:07:21 | 2016-08-02 09:45:11 | 2016-08-02 21:47:04 | 12:01:53 | — | — | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | — |
fail | 346450 | 2016-08-02 06:07:22 | 2016-08-02 09:45:12 | 2016-08-02 12:24:35 | 2:39:23 | 2:30:11 | 0:09:12 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 1 | |
Failure Reason:
Command failed with status 1: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_BTeB93 --limit vpm041.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_G013AS' |
fail | 346451 | 2016-08-02 06:07:24 | 2016-08-02 09:45:56 | 2016-08-02 15:23:27 | 5:37:31 | 5:15:19 | 0:22:12 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 3 | |
Failure Reason:
Command failed with status 1: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": true, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_XQqS9q --limit vpm047.front.sepia.ceph.com,vpm063.front.sepia.ceph.com,vpm053.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_rpghLo' |