User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2015-01-19 01:05:01 | 2015-01-19 03:02:35 | 2015-01-19 15:39:04 | 12:36:29 | upgrade:giant-x | next | vps | — | 12 | 18 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 709427 | 2015-01-19 01:05:23 | 2015-01-19 02:57:42 | 2015-01-19 07:48:06 | 4:50:24 | 1:44:53 | 3:05:31 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason:
Command failed on vpm085 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709429 | 2015-01-19 01:05:24 | 2015-01-19 03:02:35 | 2015-01-19 05:54:53 | 2:52:18 | 1:37:10 | 1:15:08 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason:
Command failed on vpm101 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709431 | 2015-01-19 01:05:24 | 2015-01-19 03:05:10 | 2015-01-19 08:33:37 | 5:28:27 | 1:49:25 | 3:39:02 | vps | master | rhel | 6.4 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.4.yaml} | 3 | |
Failure Reason:
Command failed on vpm072 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709433 | 2015-01-19 01:05:24 | 2015-01-19 03:54:45 | 2015-01-19 06:22:57 | 2:28:12 | 1:53:09 | 0:35:03 | vps | master | rhel | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.5.yaml} | 3 | |
Failure Reason:
Command failed on vpm155 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709435 | 2015-01-19 01:05:25 | 2015-01-19 03:58:00 | 2015-01-19 11:40:39 | 7:42:39 | 1:43:31 | 5:59:08 | vps | master | rhel | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml} | 3 | |
Failure Reason:
Command failed on vpm185 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709437 | 2015-01-19 01:05:25 | 2015-01-19 04:02:23 | 2015-01-19 07:30:40 | 3:28:17 | 1:45:00 | 1:43:17 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason:
Command failed on vpm124 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709439 | 2015-01-19 01:05:26 | 2015-01-19 04:08:13 | 2015-01-19 06:16:23 | 2:08:10 | 1:36:00 | 0:32:10 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Command failed on vpm199 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709441 | 2015-01-19 01:05:26 | 2015-01-19 04:09:31 | 2015-01-19 08:39:53 | 4:30:22 | 2:03:38 | 2:26:44 | vps | master | centos | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml} | 3 | |
Failure Reason:
Command failed on vpm191 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709443 | 2015-01-19 01:05:26 | 2015-01-19 04:13:33 | 2015-01-19 13:48:19 | 9:34:46 | 1:40:10 | 7:54:36 | vps | master | debian | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/debian_7.0.yaml} | 3 | |
Failure Reason:
Command failed on vpm083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709445 | 2015-01-19 01:05:27 | 2015-01-19 04:13:59 | 2015-01-19 06:36:10 | 2:22:11 | 2:01:33 | 0:20:38 | vps | master | rhel | 6.4 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.4.yaml} | 3 | |
Failure Reason:
Command failed on vpm167 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709446 | 2015-01-19 01:05:27 | 2015-01-19 04:17:29 | 2015-01-19 14:10:18 | 9:52:49 | 1:49:24 | 8:03:25 | vps | master | rhel | 6.5 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_6.5.yaml} | 3 | |
Failure Reason:
Command failed on vpm199 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709447 | 2015-01-19 01:05:28 | 2015-01-19 04:17:45 | 2015-01-19 08:08:03 | 3:50:18 | 1:48:38 | 2:01:40 | vps | master | rhel | 7.0 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml} | 3 | |
Failure Reason:
Command failed on vpm053 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709448 | 2015-01-19 01:05:28 | 2015-01-19 04:19:58 | 2015-01-19 11:44:35 | 7:24:37 | 1:35:41 | 5:48:56 | vps | master | ubuntu | 12.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_12.04.yaml} | 3 | |
Failure Reason:
Command failed on vpm128 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
fail | 709449 | 2015-01-19 01:05:28 | 2015-01-19 04:20:12 | 2015-01-19 06:16:21 | 1:56:09 | 1:36:21 | 0:19:48 | vps | master | ubuntu | 14.04 | upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Command failed on vpm044 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=896c8899ac28eb0403bfaa20454f3756f3705c51 TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/rados/test.sh'
pass | 709450 | 2015-01-19 01:05:29 | 2015-01-19 04:20:14 | 2015-01-19 09:42:41 | 5:22:27 | 2:29:45 | 2:52:42 | vps | master | centos | 6.5 | upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_6.5.yaml} | 3 | |
pass | 709451 | 2015-01-19 01:05:29 | 2015-01-19 04:23:08 | 2015-01-19 09:03:30 | 4:40:22 | 4:33:55 | 0:06:27 | vps | master | debian | 7.0 | upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/debian_7.0.yaml} | 3 | |
pass | 709452 | 2015-01-19 01:05:29 | 2015-01-19 04:23:08 | 2015-01-19 08:31:28 | 4:08:20 | 2:30:28 | 1:37:52 | vps | master | rhel | 6.4 | upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.4.yaml} | 3 | |
pass | 709453 | 2015-01-19 01:05:30 | 2015-01-19 04:23:27 | 2015-01-19 12:00:09 | 7:36:42 | 3:08:39 | 4:28:03 | vps | master | rhel | 6.5 | upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.5.yaml} | 3 | |
fail | 709454 | 2015-01-19 01:05:30 | 2015-01-19 04:24:08 | 2015-01-19 15:39:04 | 11:14:56 | 3:08:27 | 8:06:29 | vps | master | rhel | 7.0 | upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_7.0.yaml} | 3 | |
Failure Reason:
'2015-01-19T08:01:01.514994-05:00 vpm113 crond[23063]: (root) INFO (Job execution of per-minute job scheduled for 08:00 delayed into subsequent minute 08:01. Skipping job run.) ' in syslog
pass | 709455 | 2015-01-19 01:05:30 | 2015-01-19 04:25:01 | 2015-01-19 08:35:21 | 4:10:20 | 3:43:09 | 0:27:11 | vps | master | ubuntu | 12.04 | upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_12.04.yaml} | 3 | |
fail | 709456 | 2015-01-19 01:05:31 | 2015-01-19 04:26:13 | 2015-01-19 09:10:36 | 4:44:23 | 0:16:19 | 4:28:04 | vps | master | ubuntu | 14.04 | upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Command failed on vpm072 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" RBD_CREATE_ARGS=--new-format adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/import_export.sh'
fail | 709457 | 2015-01-19 01:05:31 | 2015-01-19 04:26:42 | 2015-01-19 05:26:47 | 1:00:05 | 0:34:58 | 0:25:07 | vps | master | centos | 6.5 | upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_6.5.yaml} | 3 | |
Failure Reason:
Command failed on vpm089 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'
pass | 709458 | 2015-01-19 01:05:32 | 2015-01-19 04:27:00 | 2015-01-19 10:41:31 | 6:14:31 | 1:19:33 | 4:54:58 | vps | master | debian | 7.0 | upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/debian_7.0.yaml} | 3 | |
pass | 709459 | 2015-01-19 01:05:32 | 2015-01-19 04:27:08 | 2015-01-19 08:15:26 | 3:48:18 | 1:41:33 | 2:06:45 | vps | master | rhel | 6.4 | upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_6.4.yaml} | 3 | |
pass | 709460 | 2015-01-19 01:05:32 | 2015-01-19 04:27:13 | 2015-01-19 08:03:30 | 3:36:17 | 1:26:04 | 2:10:13 | vps | master | rhel | 6.5 | upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_6.5.yaml} | 3 | |
pass | 709461 | 2015-01-19 01:05:33 | 2015-01-19 04:27:24 | 2015-01-19 10:27:53 | 6:00:29 | 1:38:51 | 4:21:38 | vps | master | rhel | 7.0 | upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_7.0.yaml} | 3 | |
pass | 709462 | 2015-01-19 01:05:33 | 2015-01-19 04:35:40 | 2015-01-19 07:51:56 | 3:16:16 | 1:19:47 | 1:56:29 | vps | master | ubuntu | 12.04 | upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_12.04.yaml} | 3 | |
pass | 709463 | 2015-01-19 01:05:33 | 2015-01-19 04:36:33 | 2015-01-19 06:38:43 | 2:02:10 | 1:42:49 | 0:19:21 | vps | master | ubuntu | 14.04 | upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 | |
pass | 709464 | 2015-01-19 01:05:34 | 2015-01-19 04:37:02 | 2015-01-19 13:25:46 | 8:48:44 | 1:18:08 | 7:30:36 | vps | master | rhel | 7.0 | upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml} | 3 | |
fail | 709465 | 2015-01-19 01:05:34 | 2015-01-19 04:41:11 | 2015-01-19 06:11:23 | 1:30:12 | 0:28:03 | 1:02:09 | vps | master | ubuntu | 14.04 | upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Command failed on vpm107 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'