User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail |
---|---|---|---|---|---|---|---|---|---|
teuthology | 2015-06-26 00:10:03 | 2015-06-26 01:01:54 | 2015-06-26 06:08:42 | 5:06:48 | upgrade:hammer-x | next | vps | — | 30 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 949343 | 2015-06-26 00:10:56 | 2015-06-26 00:55:27 | 2015-06-26 04:47:46 | 3:52:19 | 0:17:29 | 3:34:50 | vps | master | rhel | 7.0 | upgrade:hammer-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm107 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_1'
fail | 949344 | 2015-06-26 00:10:57 | 2015-06-26 00:57:14 | 2015-06-26 02:39:21 | 1:42:07 | 0:12:44 | 1:29:23 | vps | master | centos | 6.5 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm167 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/555da2a15b3efce8b76851037330c4b82271cef7/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 949345 | 2015-06-26 00:10:58 | 2015-06-26 00:57:57 | 2015-06-26 04:46:18 | 3:48:21 | 0:12:40 | 3:35:41 | vps | master | centos | 6.5 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm185 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/555da2a15b3efce8b76851037330c4b82271cef7/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 949346 | 2015-06-26 00:11:02 | 2015-06-26 01:01:28 | 2015-06-26 03:01:36 | 2:00:08 | | | vps | master | centos | 6.5 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 |
Failure Reason: Command failed (run chef solo-from-scratch) on vpm074 with status 1: "wget -q -O- 'http://git.ceph.com/?p=ceph-qa-chef.git;a=blob_plain;f=solo/solo-from-scratch;hb=HEAD' | sh"
fail | 949347 | 2015-06-26 00:11:05 | 2015-06-26 01:01:54 | 2015-06-26 03:24:04 | 2:22:10 | 2:15:08 | 0:07:02 | vps | master | debian | 7.0 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm120 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'
fail | 949348 | 2015-06-26 00:11:06 | 2015-06-26 01:04:10 | 2015-06-26 02:34:13 | 1:30:03 | 1:17:11 | 0:12:52 | vps | master | debian | 7.0 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm053 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_3'
fail | 949349 | 2015-06-26 00:11:08 | 2015-06-26 01:05:03 | 2015-06-26 03:25:13 | 2:20:10 | 0:20:15 | 1:59:55 | vps | master | debian | 7.0 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm035 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_1'
fail | 949350 | 2015-06-26 00:11:09 | 2015-06-26 01:06:43 | 2015-06-26 04:53:13 | 3:46:30 | 0:12:36 | 3:33:54 | vps | master | rhel | 6.4 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.4.yaml} | 3 | |
Failure Reason: Command failed on vpm138 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/555da2a15b3efce8b76851037330c4b82271cef7/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 949351 | 2015-06-26 00:11:10 | 2015-06-26 01:09:43 | 2015-06-26 01:49:45 | 0:40:02 | 0:16:06 | 0:23:56 | vps | master | rhel | 6.5 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm107 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/555da2a15b3efce8b76851037330c4b82271cef7/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 949352 | 2015-06-26 00:11:12 | 2015-06-26 01:10:03 | 2015-06-26 02:14:06 | 1:04:03 | 0:14:15 | 0:49:48 | vps | master | rhel | 6.4 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.4.yaml} | 3 | |
Failure Reason: Command failed on vpm034 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/555da2a15b3efce8b76851037330c4b82271cef7/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 949353 | 2015-06-26 00:11:13 | 2015-06-26 01:10:12 | 2015-06-26 02:16:16 | 1:06:04 | 0:16:58 | 0:49:06 | vps | master | rhel | 6.4 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_6.4.yaml} | 3 | |
Failure Reason: Command failed on vpm057 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/555da2a15b3efce8b76851037330c4b82271cef7/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 949354 | 2015-06-26 00:11:14 | 2015-06-26 01:13:13 | 2015-06-26 04:37:38 | 3:24:25 | 2:08:16 | 1:16:09 | vps | master | rhel | 7.0 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm008 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'
fail | 949355 | 2015-06-26 00:11:16 | 2015-06-26 01:14:49 | 2015-06-26 04:51:12 | 3:36:23 | 1:04:35 | 2:31:48 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 949356 | 2015-06-26 00:11:17 | 2015-06-26 01:15:27 | 2015-06-26 01:55:30 | 0:40:03 | 0:15:38 | 0:24:25 | vps | master | rhel | 6.5 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm161 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/555da2a15b3efce8b76851037330c4b82271cef7/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 949357 | 2015-06-26 00:11:18 | 2015-06-26 01:18:00 | 2015-06-26 01:50:01 | 0:32:01 | 0:15:34 | 0:16:27 | vps | master | rhel | 6.5 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm034 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/555da2a15b3efce8b76851037330c4b82271cef7/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 949358 | 2015-06-26 00:11:21 | 2015-06-26 01:20:41 | 2015-06-26 05:13:12 | 3:52:31 | 0:13:28 | 3:39:03 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm012 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_1'
fail | 949359 | 2015-06-26 00:11:27 | 2015-06-26 01:24:24 | 2015-06-26 05:46:51 | 4:22:27 | 1:57:14 | 2:25:13 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm097 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'
fail | 949360 | 2015-06-26 00:11:29 | 2015-06-26 01:25:12 | 2015-06-26 01:51:13 | 0:26:01 | 0:11:05 | 0:14:56 | vps | master | centos | 6.5 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm190 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/555da2a15b3efce8b76851037330c4b82271cef7/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 949361 | 2015-06-26 00:11:30 | 2015-06-26 01:28:13 | 2015-06-26 03:30:25 | 2:02:12 | 1:32:39 | 0:29:33 | vps | master | rhel | 7.0 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm060 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_3'
fail | 949362 | 2015-06-26 00:11:32 | 2015-06-26 01:29:56 | 2015-06-26 02:21:59 | 0:52:03 | 0:17:39 | 0:34:24 | vps | master | rhel | 7.0 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm197 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_1'
fail | 949363 | 2015-06-26 00:11:33 | 2015-06-26 01:30:05 | 2015-06-26 06:08:42 | 4:38:37 | 1:55:37 | 2:43:00 | vps | master | debian | 7.0 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm053 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'
fail | 949364 | 2015-06-26 00:11:34 | 2015-06-26 01:30:22 | 2015-06-26 01:56:23 | 0:26:01 | 0:15:57 | 0:10:04 | vps | master | rhel | 6.4 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.4.yaml} | 3 | |
Failure Reason: Command failed on vpm115 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/555da2a15b3efce8b76851037330c4b82271cef7/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 949365 | 2015-06-26 00:11:35 | 2015-06-26 01:31:30 | 2015-06-26 04:15:43 | 2:44:13 | 1:20:34 | 1:23:39 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm045 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_3'
fail | 949366 | 2015-06-26 00:11:37 | 2015-06-26 01:31:38 | 2015-06-26 03:29:50 | 1:58:12 | 0:12:48 | 1:45:24 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm072 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_1'
fail | 949367 | 2015-06-26 00:11:38 | 2015-06-26 01:31:47 | 2015-06-26 03:13:54 | 1:42:07 | 0:12:52 | 1:29:15 | vps | master | rhel | 6.5 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm098 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/555da2a15b3efce8b76851037330c4b82271cef7/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 949368 | 2015-06-26 00:11:39 | 2015-06-26 01:33:45 | 2015-06-26 02:49:50 | 1:16:05 | 1:00:15 | 0:15:50 | vps | master | rhel | 7.0 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 949369 | 2015-06-26 00:11:41 | 2015-06-26 01:34:44 | 2015-06-26 03:32:59 | 1:58:15 | 1:31:39 | 0:26:36 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm058 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_3'
fail | 949370 | 2015-06-26 00:11:42 | 2015-06-26 01:36:01 | 2015-06-26 02:44:05 | 1:08:04 | 0:14:16 | 0:53:48 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm045 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_1'
fail | 949371 | 2015-06-26 00:11:43 | 2015-06-26 01:38:46 | 2015-06-26 03:56:57 | 2:18:11 | 2:01:53 | 0:16:18 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm172 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'
fail | 949372 | 2015-06-26 00:11:44 | 2015-06-26 01:42:38 | 2015-06-26 04:29:00 | 2:46:22 | 0:59:59 | 1:46:23 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds