User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2015-06-24 00:10:06 | 2015-06-24 05:24:09 | 2015-06-25 17:36:18 | 1 day, 12:12:09 | upgrade:hammer-x | next | vps | — | 29 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 946702 | 2015-06-24 00:10:54 | 2015-06-24 05:21:04 | 2015-06-24 06:53:10 | 1:32:06 | 0:17:40 | 1:14:26 | vps | master | rhel | 7.0 | upgrade:hammer-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm068 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_1'
fail | 946704 | 2015-06-24 00:10:56 | 2015-06-24 05:21:05 | 2015-06-24 06:43:13 | 1:22:08 | 0:12:36 | 1:09:32 | vps | master | centos | 6.5 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm193 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 946706 | 2015-06-24 00:10:58 | 2015-06-24 05:22:34 | 2015-06-24 07:06:39 | 1:44:05 | 0:10:29 | 1:33:36 | vps | master | centos | 6.5 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm180 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 946708 | 2015-06-24 00:10:59 | 2015-06-24 05:22:45 | 2015-06-24 05:50:46 | 0:28:01 | 0:11:29 | 0:16:32 | vps | master | centos | 6.5 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm176 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 946710 | 2015-06-24 00:11:00 | 2015-06-24 05:22:53 | 2015-06-24 06:59:00 | 1:36:07 | 1:02:39 | 0:33:28 | vps | master | debian | 7.0 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 946711 | 2015-06-24 00:11:02 | 2015-06-24 05:23:14 | 2015-06-24 05:51:15 | 0:28:01 | 0:15:25 | 0:12:36 | vps | master | debian | 7.0 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed (workunit test rbd/import_export.sh) on vpm080 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'
fail | 946712 | 2015-06-24 00:11:03 | 2015-06-24 05:24:09 | 2015-06-24 05:44:09 | 0:20:00 | 0:14:09 | 0:05:51 | vps | master | debian | 7.0 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm156 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_1'
fail | 946713 | 2015-06-24 00:11:04 | 2015-06-24 05:24:49 | 2015-06-24 05:56:51 | 0:32:02 | 0:20:12 | 0:11:50 | vps | master | rhel | 6.4 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.4.yaml} | 3 | |
Failure Reason: Command failed on vpm046 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 946714 | 2015-06-24 00:11:06 | 2015-06-24 05:26:03 | 2015-06-24 06:22:06 | 0:56:03 | 0:15:55 | 0:40:08 | vps | master | rhel | 6.5 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm099 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 946715 | 2015-06-24 00:11:07 | 2015-06-24 05:26:34 | 2015-06-24 06:18:37 | 0:52:03 | 0:16:28 | 0:35:35 | vps | master | rhel | 6.4 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.4.yaml} | 3 | |
Failure Reason: Command failed on vpm195 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 946716 | 2015-06-24 00:11:08 | 2015-06-24 05:29:12 | 2015-06-24 06:01:10 | 0:31:58 | 0:15:44 | 0:16:14 | vps | master | rhel | 6.4 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_6.4.yaml} | 3 | |
Failure Reason: Command failed on vpm070 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 946717 | 2015-06-24 00:11:10 | 2015-06-24 05:29:25 | 2015-06-24 08:27:38 | 2:58:13 | 2:00:43 | 0:57:30 | vps | master | rhel | 7.0 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm016 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'
fail | 946718 | 2015-06-24 00:11:11 | 2015-06-24 05:29:32 | 2015-06-24 08:15:44 | 2:46:12 | 1:55:05 | 0:51:07 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm035 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'
fail | 946719 | 2015-06-24 00:11:12 | 2015-06-24 05:31:50 | 2015-06-24 06:05:51 | 0:34:01 | 0:20:13 | 0:13:48 | vps | master | rhel | 6.5 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm045 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 946720 | 2015-06-24 00:11:14 | 2015-06-24 05:32:19 | 2015-06-24 06:06:20 | 0:34:01 | 0:18:37 | 0:15:24 | vps | master | rhel | 6.5 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm006 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
dead | 946721 | 2015-06-24 00:11:15 | 2015-06-24 05:33:22 | 2015-06-25 17:36:18 | 1 day, 12:02:56 | | | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/ubuntu_14.04.yaml} | — | |
fail | 946722 | 2015-06-24 00:11:16 | 2015-06-24 05:33:30 | 2015-06-24 07:55:41 | 2:22:11 | 2:01:34 | 0:20:37 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm104 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'
fail | 946723 | 2015-06-24 00:11:18 | 2015-06-24 05:33:33 | 2015-06-24 06:23:36 | 0:50:03 | 0:11:51 | 0:38:12 | vps | master | centos | 6.5 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm013 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 946724 | 2015-06-24 00:11:19 | 2015-06-24 05:36:26 | 2015-06-24 09:28:43 | 3:52:17 | 3:21:04 | 0:31:13 | vps | master | rhel | 7.0 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_7.0.yaml} | 3 | |
Failure Reason: Command failed (workunit test rbd/test_librbd.sh) on vpm050 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd.sh'
fail | 946725 | 2015-06-24 00:11:20 | 2015-06-24 05:38:25 | 2015-06-24 06:36:28 | 0:58:03 | 0:18:52 | 0:39:11 | vps | master | rhel | 7.0 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm003 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_1'
fail | 946726 | 2015-06-24 00:11:21 | 2015-06-24 05:38:25 | 2015-06-24 08:06:36 | 2:28:11 | 2:02:29 | 0:25:42 | vps | master | debian | 7.0 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm146 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'
fail | 946727 | 2015-06-24 00:11:23 | 2015-06-24 05:38:43 | 2015-06-24 06:44:46 | 1:06:03 | 0:13:48 | 0:52:15 | vps | master | rhel | 6.4 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.4.yaml} | 3 | |
Failure Reason: Command failed on vpm186 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 946728 | 2015-06-24 00:11:24 | 2015-06-24 05:38:55 | 2015-06-24 07:23:02 | 1:44:07 | 1:09:08 | 0:34:59 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm189 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_3'
fail | 946729 | 2015-06-24 00:11:26 | 2015-06-24 05:38:52 | 2015-06-24 06:16:54 | 0:38:02 | 0:13:02 | 0:25:00 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm156 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_1'
fail | 946730 | 2015-06-24 00:11:27 | 2015-06-24 05:42:00 | 2015-06-24 06:41:59 | 0:59:59 | 0:16:20 | 0:43:39 | vps | master | rhel | 6.5 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.5.yaml} | 3 | |
Failure Reason: Command failed on vpm056 with status 1: 'sudo rpm -Uv http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/sha1/b806b00a7984db92306541a0a2699beb029032da/noarch/ceph-release-1-0.el6.noarch.rpm'
fail | 946731 | 2015-06-24 00:11:28 | 2015-06-24 05:42:00 | 2015-06-24 08:04:06 | 2:22:06 | 1:50:59 | 0:31:07 | vps | master | rhel | 7.0 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml} | 3 | |
Failure Reason: Command failed on vpm070 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'
fail | 946732 | 2015-06-24 00:11:29 | 2015-06-24 05:42:24 | 2015-06-24 07:44:33 | 2:02:09 | 1:03:05 | 0:59:04 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm191 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_3'
fail | 946733 | 2015-06-24 00:11:31 | 2015-06-24 05:45:05 | 2015-06-24 06:41:05 | 0:56:00 | 0:23:39 | 0:32:21 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm046 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op append_excl 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op write_excl 0 --op rmattr 25 --op append 50 --op delete 50 --pool unique_pool_1'
fail | 946734 | 2015-06-24 00:11:32 | 2015-06-24 05:46:54 | 2015-06-24 08:05:05 | 2:18:11 | 1:59:16 | 0:18:55 | vps | master | ubuntu | 12.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason: Command failed on vpm058 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'
fail | 946735 | 2015-06-24 00:11:33 | 2015-06-24 05:50:22 | 2015-06-24 08:08:32 | 2:18:10 | 1:55:11 | 0:22:59 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason: Command failed on vpm153 with status 1: 'CEPH_CLIENT_ID=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_1'