User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2015-07-13 00:05:02 | 2015-07-13 13:37:45 | 2015-07-13 22:06:31 | 8:28:46 | upgrade:giant-x | next | vps | — | 65 | 1 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 970914 | 2015-07-13 00:05:13 | 2015-07-13 12:23:31 | 2015-07-13 16:03:47 | 3:40:16 | 0:15:03 | 3:25:13 | vps | master | rhel | 7.0 | upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm155 with status 1: 'sudo yum install ceph-devel -y'
fail | 970916 | 2015-07-13 00:05:14 | 2015-07-13 12:24:42 | 2015-07-13 12:56:43 | 0:32:01 | 0:14:42 | 0:17:19 | vps | master | centos | 6.5 | upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ansible_ssh_user": "ubuntu"}\' -i /etc/ansible/hosts --limit vpm041.front.sepia.ceph.com,vpm045.front.sepia.ceph.com,vpm119.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-cm-ansible_master/cephlab.yml'
fail | 970918 | 2015-07-13 00:05:15 | 2015-07-13 12:25:21 | 2015-07-13 13:31:25 | 1:06:04 | 0:23:09 | 0:42:55 | vps | master | centos | 6.5 | upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_6.5.yaml} | 3 | |
Failure Reason: Administratively prohibited
fail | 970920 | 2015-07-13 00:05:17 | 2015-07-13 12:25:25 | 2015-07-13 14:07:32 | 1:42:07 | 0:23:47 | 1:18:20 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm045 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/dea73c1dddbb8d2349ec1e972734aa6b38d59e46/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 970922 | 2015-07-13 00:05:18 | 2015-07-13 12:26:03 | 2015-07-13 13:04:05 | 0:38:02 | 0:11:45 | 0:26:17 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-sequential.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm189 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
fail | 970924 | 2015-07-13 00:05:19 | 2015-07-13 12:27:20 | 2015-07-13 14:53:31 | 2:26:11 | 1:06:10 | 1:20:01 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm082 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 970926 | 2015-07-13 00:05:20 | 2015-07-13 12:30:46 | 2015-07-13 15:08:58 | 2:38:12 | 1:21:10 | 1:17:02 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 970928 | 2015-07-13 00:05:22 | 2015-07-13 12:30:57 | 2015-07-13 15:35:11 | 3:04:14 | 0:09:03 | 2:55:11 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ansible_ssh_user": "ubuntu"}\' -i /etc/ansible/hosts --limit vpm146.front.sepia.ceph.com,vpm124.front.sepia.ceph.com,vpm153.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-cm-ansible_master/cephlab.yml'
fail | 970930 | 2015-07-13 00:05:23 | 2015-07-13 12:35:24 | 2015-07-13 13:59:29 | 1:24:05 | 1:06:25 | 0:17:40 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm177 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 970932 | 2015-07-13 00:05:24 | 2015-07-13 12:35:33 | 2015-07-13 15:27:46 | 2:52:13 | 0:10:28 | 2:41:45 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ansible_ssh_user": "ubuntu"}\' -i /etc/ansible/hosts --limit vpm054.front.sepia.ceph.com,vpm130.front.sepia.ceph.com,vpm082.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-cm-ansible_master/cephlab.yml'
fail | 970934 | 2015-07-13 00:05:25 | 2015-07-13 12:36:04 | 2015-07-13 14:42:13 | 2:06:09 | 0:22:36 | 1:43:33 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm010 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
fail | 970936 | 2015-07-13 00:05:27 | 2015-07-13 12:36:20 | 2015-07-13 13:02:21 | 0:26:01 | 0:15:12 | 0:10:49 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-sequential.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ansible_ssh_user": "ubuntu"}\' -i /etc/ansible/hosts --limit vpm097.front.sepia.ceph.com,vpm198.front.sepia.ceph.com,vpm126.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-cm-ansible_master/cephlab.yml'
fail | 970938 | 2015-07-13 00:05:28 | 2015-07-13 12:38:45 | 2015-07-13 13:56:49 | 1:18:04 | 1:03:30 | 0:14:34 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 970940 | 2015-07-13 00:05:29 | 2015-07-13 12:40:06 | 2015-07-13 14:14:12 | 1:34:06 | 1:24:33 | 0:09:33 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm104 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 970942 | 2015-07-13 00:05:30 | 2015-07-13 12:41:41 | 2015-07-13 14:21:48 | 1:40:07 | 0:21:35 | 1:18:32 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm054 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool base'
fail | 970944 | 2015-07-13 00:05:32 | 2015-07-13 12:42:37 | 2015-07-13 13:16:38 | 0:34:01 | 0:21:15 | 0:12:46 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm098 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/dea73c1dddbb8d2349ec1e972734aa6b38d59e46/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 970946 | 2015-07-13 00:05:33 | 2015-07-13 12:46:48 | 2015-07-13 14:10:53 | 1:24:05 | 1:00:30 | 0:23:35 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm008 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 970948 | 2015-07-13 00:05:34 | 2015-07-13 12:48:42 | 2015-07-13 13:30:44 | 0:42:02 | 0:10:46 | 0:31:16 | vps | master | debian | 7.0 | upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Administratively prohibited
fail | 970950 | 2015-07-13 00:05:35 | 2015-07-13 12:49:42 | 2015-07-13 14:47:50 | 1:58:08 | 0:09:11 | 1:48:57 | vps | master | debian | 7.0 | upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/debian_7.0.yaml} | 3 | |
Failure Reason: Administratively prohibited
fail | 970952 | 2015-07-13 00:05:37 | 2015-07-13 12:50:17 | 2015-07-13 16:08:32 | 3:18:15 | 0:19:05 | 2:59:10 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm192 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
fail | 970954 | 2015-07-13 00:05:38 | 2015-07-13 12:50:59 | 2015-07-13 14:39:06 | 1:48:07 | 0:21:26 | 1:26:41 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-sequential.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm119 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
fail | 970956 | 2015-07-13 00:05:39 | 2015-07-13 12:51:22 | 2015-07-13 14:37:29 | 1:46:07 | 0:19:16 | 1:26:51 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm106 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/dea73c1dddbb8d2349ec1e972734aa6b38d59e46/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 970958 | 2015-07-13 00:05:40 | 2015-07-13 12:51:41 | 2015-07-13 14:47:49 | 1:56:08 | 1:01:20 | 0:54:48 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 970960 | 2015-07-13 00:05:42 | 2015-07-13 12:55:42 | 2015-07-13 13:43:45 | 0:48:03 | 0:20:27 | 0:27:36 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm003 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool base'
fail | 970962 | 2015-07-13 00:05:43 | 2015-07-13 12:56:49 | 2015-07-13 15:45:01 | 2:48:12 | 1:12:26 | 1:35:46 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 970964 | 2015-07-13 00:05:44 | 2015-07-13 12:56:58 | 2015-07-13 14:53:06 | 1:56:08 | 0:19:48 | 1:36:20 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm016 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/dea73c1dddbb8d2349ec1e972734aa6b38d59e46/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 970966 | 2015-07-13 00:05:45 | 2015-07-13 12:58:10 | 2015-07-13 13:22:11 | 0:24:01 | 0:10:09 | 0:13:52 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm149 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
fail | 970968 | 2015-07-13 00:05:47 | 2015-07-13 13:00:16 | 2015-07-13 15:08:25 | 2:08:09 | 0:19:00 | 1:49:09 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-sequential.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm035 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
fail | 970970 | 2015-07-13 00:05:48 | 2015-07-13 13:01:44 | 2015-07-13 16:17:58 | 3:16:14 | 1:14:58 | 2:01:16 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm079 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 970972 | 2015-07-13 00:05:49 | 2015-07-13 13:02:26 | 2015-07-13 14:50:34 | 1:48:08 | 0:20:43 | 1:27:25 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm070 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/dea73c1dddbb8d2349ec1e972734aa6b38d59e46/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 970974 | 2015-07-13 00:05:50 | 2015-07-13 13:02:32 | 2015-07-13 13:24:32 | 0:22:00 | 0:07:50 | 0:14:10 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm079 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool base'
fail | 970975 | 2015-07-13 00:05:52 | 2015-07-13 13:04:10 | 2015-07-13 16:32:26 | 3:28:16 | 1:08:46 | 2:19:30 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm117 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 970976 | 2015-07-13 00:05:53 | 2015-07-13 13:08:53 | 2015-07-13 14:47:00 | 1:38:07 | 1:20:21 | 0:17:46 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 970977 | 2015-07-13 00:05:54 | 2015-07-13 13:09:40 | 2015-07-13 15:43:51 | 2:34:11 | 0:23:46 | 2:10:25 | vps | master | ubuntu | 14.04 | upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Administratively prohibited
fail | 970978 | 2015-07-13 00:05:55 | 2015-07-13 13:10:47 | 2015-07-13 14:18:51 | 1:08:04 | 0:20:39 | 0:47:25 | vps | master | ubuntu | 12.04 | upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Administratively prohibited
fail | 970979 | 2015-07-13 00:05:56 | 2015-07-13 13:14:48 | 2015-07-13 13:40:48 | 0:26:00 | 0:19:05 | 0:06:55 | vps | master | ubuntu | 12.04 | upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Administratively prohibited
fail | 970980 | 2015-07-13 00:05:58 | 2015-07-13 13:15:51 | 2015-07-13 15:07:59 | 1:52:08 | 0:21:23 | 1:30:45 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm106 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/dea73c1dddbb8d2349ec1e972734aa6b38d59e46/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 970981 | 2015-07-13 00:05:59 | 2015-07-13 13:16:44 | 2015-07-13 13:42:46 | 0:26:02 | 0:10:33 | 0:15:29 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-sequential.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm191 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
fail | 970982 | 2015-07-13 00:06:00 | 2015-07-13 13:16:52 | 2015-07-13 16:35:07 | 3:18:15 | 1:11:23 | 2:06:52 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 970983 | 2015-07-13 00:06:01 | 2015-07-13 13:19:20 | 2015-07-13 14:57:23 | 1:38:03 | 1:24:31 | 0:13:32 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm079 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 970984 | 2015-07-13 00:06:02 | 2015-07-13 13:20:46 | 2015-07-13 14:46:48 | 1:26:02 | 0:20:05 | 1:05:57 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm166 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/dea73c1dddbb8d2349ec1e972734aa6b38d59e46/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 970985 | 2015-07-13 00:06:03 | 2015-07-13 13:21:03 | 2015-07-13 14:59:09 | 1:38:06 | 0:54:26 | 0:43:40 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm162 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 970986 | 2015-07-13 00:06:05 | 2015-07-13 13:22:17 | 2015-07-13 15:42:27 | 2:20:10 | 1:08:09 | 1:12:01 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm120 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 970987 | 2015-07-13 00:06:06 | 2015-07-13 13:22:48 | 2015-07-13 17:23:07 | 4:00:19 | 0:23:16 | 3:37:03 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm147 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
fail | 970988 | 2015-07-13 00:06:07 | 2015-07-13 13:24:37 | 2015-07-13 14:00:39 | 0:36:02 | 0:24:23 | 0:11:39 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-sequential.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm193 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/dea73c1dddbb8d2349ec1e972734aa6b38d59e46/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 970989 | 2015-07-13 00:06:08 | 2015-07-13 13:28:28 | 2015-07-13 14:36:32 | 1:08:04 | 1:03:27 | 0:04:37 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm014 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 970990 | 2015-07-13 00:06:09 | 2015-07-13 13:30:34 | 2015-07-13 15:00:40 | 1:30:06 | 1:12:53 | 0:17:13 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 970991 | 2015-07-13 00:06:11 | 2015-07-13 13:30:49 | 2015-07-13 16:07:01 | 2:36:12 | 0:21:37 | 2:14:35 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm172 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool base'
fail | 970992 | 2015-07-13 00:06:12 | 2015-07-13 13:31:31 | 2015-07-13 15:37:40 | 2:06:09 | 0:11:30 | 1:54:39 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ansible_ssh_user": "ubuntu"}\' -i /etc/ansible/hosts --limit vpm050.front.sepia.ceph.com,vpm042.front.sepia.ceph.com,vpm069.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-cm-ansible_master/cephlab.yml'
fail | 970993 | 2015-07-13 00:06:13 | 2015-07-13 13:37:45 | 2015-07-13 17:38:03 | 4:00:18 | 1:00:42 | 2:59:36 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm182 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 970994 | 2015-07-13 00:06:14 | 2015-07-13 13:41:36 | 2015-07-13 14:11:37 | 0:30:01 | 0:23:18 | 0:06:43 | vps | master | ubuntu | 14.04 | upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Administratively prohibited
fail | 970995 | 2015-07-13 00:06:15 | 2015-07-13 13:42:51 | 2015-07-13 16:31:04 | 2:48:13 | 0:24:15 | 2:23:58 | vps | master | ubuntu | 14.04 | upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Administratively prohibited
fail | 970996 | 2015-07-13 00:06:16 | 2015-07-13 13:43:04 | 2015-07-13 15:43:13 | 2:00:09 | 0:20:18 | 1:39:51 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm072 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
fail | 970997 | 2015-07-13 00:06:18 | 2015-07-13 13:43:10 | 2015-07-13 15:57:20 | 2:14:10 | 0:23:59 | 1:50:11 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-sequential.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm035 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
fail | 970998 | 2015-07-13 00:06:19 | 2015-07-13 13:43:29 | 2015-07-13 14:27:31 | 0:44:02 | 0:19:26 | 0:24:36 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm176 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/dea73c1dddbb8d2349ec1e972734aa6b38d59e46/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 970999 | 2015-07-13 00:06:20 | 2015-07-13 13:43:50 | 2015-07-13 16:18:02 | 2:34:12 | 1:11:23 | 1:22:49 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm154 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 971000 | 2015-07-13 00:06:21 | 2015-07-13 13:44:48 | 2015-07-13 14:26:51 | 0:42:03 | 0:20:28 | 0:21:35 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm071 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool base'
fail | 971001 | 2015-07-13 00:06:22 | 2015-07-13 13:46:59 | 2015-07-13 15:17:05 | 1:30:06 | 1:11:15 | 0:18:51 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm182 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 971002 | 2015-07-13 00:06:24 | 2015-07-13 13:53:02 | 2015-07-13 14:33:01 | 0:39:59 | 0:19:48 | 0:20:11 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_python.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm113 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/dea73c1dddbb8d2349ec1e972734aa6b38d59e46/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 971003 | 2015-07-13 00:06:25 | 2015-07-13 13:53:04 | 2015-07-13 15:11:09 | 1:18:05 | 0:08:32 | 1:09:33 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm008 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
fail | 971004 | 2015-07-13 00:06:26 | 2015-07-13 13:54:11 | 2015-07-13 14:28:12 | 0:34:01 | 0:21:55 | 0:12:06 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-sequential.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm097 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_0'
dead | 971005 | 2015-07-13 00:06:27 | 2015-07-13 13:57:56 | 2015-07-13 22:06:31 | 8:08:35 | — | — | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | — |
fail | 971006 | 2015-07-13 00:06:28 | 2015-07-13 13:58:16 | 2015-07-13 15:36:22 | 1:38:06 | 0:06:52 | 1:31:14 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed with status 2: 'ansible-playbook -v --extra-vars \'{"ansible_ssh_user": "ubuntu"}\' -i /etc/ansible/hosts --limit vpm135.front.sepia.ceph.com,vpm136.front.sepia.ceph.com,vpm065.front.sepia.ceph.com /var/lib/teuthworker/src/ceph-cm-ansible_master/cephlab.yml'
fail | 971007 | 2015-07-13 00:06:29 | 2015-07-13 14:01:07 | 2015-07-13 14:57:11 | 0:56:04 | 0:09:01 | 0:47:03 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_cache-pool-snaps.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm012 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool base'
fail | 971008 | 2015-07-13 00:06:31 | 2015-07-13 14:01:08 | 2015-07-13 18:25:28 | 4:24:20 | 1:13:36 | 3:10:44 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm053 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'
fail | 971009 | 2015-07-13 00:06:32 | 2015-07-13 14:01:08 | 2015-07-13 16:03:17 | 2:02:09 | 1:20:35 | 0:41:34 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed (workunit test rados/test-upgrade-v9.0.1.sh) on vpm014 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test-upgrade-v9.0.1.sh'