User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2016-07-19 06:06:02 | 2016-07-19 06:10:46 | 2016-07-19 10:36:40 | 4:25:54 | ceph-ansible | master | vps | d1f681a | 3 | 21 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 322302 | 2016-07-19 06:06:43 | 2016-07-19 06:10:44 | 2016-07-19 08:02:26 | 1:51:42 | 1:10:22 | 0:41:20 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 3 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_2JlG_c --limit vpm007.front.sepia.ceph.com,vpm021.front.sepia.ceph.com,vpm087.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_ZVmlxt' |
fail | 322303 | 2016-07-19 06:06:45 | 2016-07-19 06:10:42 | 2016-07-19 10:10:29 | 3:59:47 | 3:50:20 | 0:09:27 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm085 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d1f681a2741ac805d1a21087023147dddd218940 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/cls/test_cls_hello.sh' |
pass | 322304 | 2016-07-19 06:06:46 | 2016-07-19 06:10:49 | 2016-07-19 09:54:34 | 3:43:45 | 1:18:44 | 2:25:01 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 3 | |
fail | 322305 | 2016-07-19 06:06:48 | 2016-07-19 06:10:48 | 2016-07-19 10:10:35 | 3:59:47 | 3:49:59 | 0:09:48 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 1 | |
Failure Reason:
Command failed (workunit test ceph-tests/ceph-admin-commands.sh) on vpm127 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d1f681a2741ac805d1a21087023147dddd218940 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/ceph-tests/ceph-admin-commands.sh' |
fail | 322306 | 2016-07-19 06:06:49 | 2016-07-19 06:10:49 | 2016-07-19 09:24:34 | 3:13:45 | 1:05:59 | 2:07:46 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 3 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_bQXaQm --limit vpm033.front.sepia.ceph.com,vpm025.front.sepia.ceph.com,vpm165.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_6CT6po' |
fail | 322307 | 2016-07-19 06:06:50 | 2016-07-19 06:10:52 | 2016-07-19 10:08:39 | 3:57:47 | 3:49:45 | 0:08:02 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 1 | |
Failure Reason:
Command failed (workunit test rbd/import_export.sh) on vpm155 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d1f681a2741ac805d1a21087023147dddd218940 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh' |
fail | 322308 | 2016-07-19 06:06:52 | 2016-07-19 06:10:50 | 2016-07-19 07:54:32 | 1:43:42 | 1:08:45 | 0:34:57 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 3 | |
Failure Reason:
Command failed with status 3: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_nf24ev --limit vpm103.front.sepia.ceph.com,vpm019.front.sepia.ceph.com,vpm047.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_oHkcca' |
fail | 322309 | 2016-07-19 06:06:53 | 2016-07-19 06:10:43 | 2016-07-19 10:28:30 | 4:17:47 | 3:59:08 | 0:18:39 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm123 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d1f681a2741ac805d1a21087023147dddd218940 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/cls/test_cls_hello.sh' |
pass | 322310 | 2016-07-19 06:06:55 | 2016-07-19 06:10:47 | 2016-07-19 09:32:32 | 3:21:45 | 1:01:14 | 2:20:31 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 3 | |
fail | 322311 | 2016-07-19 06:06:56 | 2016-07-19 06:10:45 | 2016-07-19 10:16:32 | 4:05:47 | 3:58:21 | 0:07:26 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 1 | |
Failure Reason:
Command failed (workunit test ceph-tests/ceph-admin-commands.sh) on vpm145 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d1f681a2741ac805d1a21087023147dddd218940 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/ceph-tests/ceph-admin-commands.sh' |
fail | 322312 | 2016-07-19 06:06:58 | 2016-07-19 06:10:52 | 2016-07-19 09:10:35 | 2:59:43 | 1:01:47 | 1:57:56 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 3 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_ExUg10 --limit vpm021.front.sepia.ceph.com,vpm087.front.sepia.ceph.com,vpm007.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_3Q5_5c' |
fail | 322313 | 2016-07-19 06:06:59 | 2016-07-19 06:10:46 | 2016-07-19 10:18:33 | 4:07:47 | 3:59:36 | 0:08:11 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 1 | |
Failure Reason:
Command failed (workunit test rbd/import_export.sh) on vpm071 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d1f681a2741ac805d1a21087023147dddd218940 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh' |
fail | 322314 | 2016-07-19 06:07:01 | 2016-07-19 06:10:53 | 2016-07-19 07:28:32 | 1:17:39 | 0:58:46 | 0:18:53 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 1 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_g9Jkll --limit vpm031.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_KjxAJT' |
fail | 322315 | 2016-07-19 06:07:02 | 2016-07-19 06:10:45 | 2016-07-19 09:26:29 | 3:15:44 | 0:57:54 | 2:17:50 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 3 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": true, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_cHDRfS --limit vpm037.front.sepia.ceph.com,vpm129.front.sepia.ceph.com,vpm003.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_6jL_cP' |
fail | 322316 | 2016-07-19 06:07:04 | 2016-07-19 06:10:51 | 2016-07-19 07:16:32 | 1:05:41 | 0:55:33 | 0:10:08 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 1 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_UdB_Mx --limit vpm001.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_EDSNM4' |
fail | 322317 | 2016-07-19 06:07:05 | 2016-07-19 06:10:44 | 2016-07-19 07:56:25 | 1:45:41 | 1:09:38 | 0:36:03 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 3 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": true, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_5VCp8G --limit vpm169.front.sepia.ceph.com,vpm107.front.sepia.ceph.com,vpm045.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_jcuaNW' |
fail | 322318 | 2016-07-19 06:07:07 | 2016-07-19 06:10:54 | 2016-07-19 10:36:40 | 4:25:46 | 4:00:10 | 0:25:36 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 1 | |
Failure Reason:
Command failed (workunit test cls/test_cls_hello.sh) on vpm121 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=d1f681a2741ac805d1a21087023147dddd218940 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/cls/test_cls_hello.sh' |
pass | 322319 | 2016-07-19 06:07:08 | 2016-07-19 06:10:47 | 2016-07-19 08:42:30 | 2:31:43 | 1:06:34 | 1:25:09 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 3 | |
fail | 322320 | 2016-07-19 06:07:10 | 2016-07-19 06:10:47 | 2016-07-19 07:04:27 | 0:53:40 | 0:46:44 | 0:06:56 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 1 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_Bp3tu8 --limit vpm067.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_NG1vWp' |
fail | 322321 | 2016-07-19 06:07:11 | 2016-07-19 06:11:20 | 2016-07-19 08:12:59 | 2:01:39 | 1:20:43 | 0:40:56 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 3 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": true, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_z9LRb6 --limit vpm037.front.sepia.ceph.com,vpm033.front.sepia.ceph.com,vpm165.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_aGgGe_' |
fail | 322322 | 2016-07-19 06:07:13 | 2016-07-19 06:19:33 | 2016-07-19 07:19:13 | 0:59:40 | 0:49:19 | 0:10:21 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 1 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_CsPKA2 --limit vpm101.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_j2qyPv' |
fail | 322323 | 2016-07-19 06:07:14 | 2016-07-19 06:19:33 | 2016-07-19 08:35:16 | 2:15:43 | 1:19:10 | 0:56:33 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/ceph-admin-commands.yaml} | 3 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": true, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_838Oer --limit vpm055.front.sepia.ceph.com,vpm077.front.sepia.ceph.com,vpm141.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_Vl40fD' |
fail | 322324 | 2016-07-19 06:07:16 | 2016-07-19 06:33:19 | 2016-07-19 07:28:56 | 0:55:37 | 0:50:42 | 0:04:55 | vps | master | ubuntu | 14.04 | ceph-ansible/smoke/{0-clusters/single_mon_osd.yaml 1-distros/ubuntu_14.04.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/from_teuthology.yaml journal/collocated.yaml} 5-tests/cls.yaml} | 1 | |
Failure Reason:
Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": false, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_ekzBvl --limit vpm193.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook_Im8TNz' |
fail | 322325 | 2016-07-19 06:07:17 | 2016-07-19 06:33:19 | 2016-07-19 09:29:00 | 2:55:41 | 1:12:23 | 1:43:18 | vps | master | centos | 7.2 | ceph-ansible/smoke/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-setup/ceph_ansible.yaml 3-common/{os_tuning/vm_friendly.yaml source/upstream_stable.yaml} 4-osd/{devices/osd_auto_discovery.yaml journal/collocated.yaml} 5-tests/rbd_import_export.yaml} | 3 | |
Failure Reason:
Command failed with status 3: 'ansible-playbook -v --extra-vars \'{"ceph_origin": "upstream", "ceph_stable": true, "ceph_test": true, "ansible_ssh_user": "ubuntu", "journal_collocation": true, "osd_auto_discovery": true, "os_tuning_params": "[{\\"name\\": \\"kernel.pid_max\\", \\"value\\": 4194303},{\\"name\\": \\"fs.file-max\\", \\"value\\": 26234859}]", "journal_size": 1024}\' -i /tmp/teuth_ansible_hosts_1ccPRz --limit vpm051.front.sepia.ceph.com,vpm049.front.sepia.ceph.com,vpm097.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-ansible_master/teuth_ansible_playbook__1Ghk6' |