User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2016-03-14 09:10:02 | 2016-03-14 09:10:50 | 2016-03-14 21:17:20 | 12:06:30 | upgrade:hammer-x | jewel | vps | — | 3 | 8 | 7 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 58345 | | 2016-03-14 09:10:38 | 2016-03-14 09:10:50 | 2016-03-14 09:54:51 | 0:44:01 | 0:41:04 | 0:02:57 | vps | master | centos | 7.2 | upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-infernalis.yaml 4-after.yaml distros/centos_7.2.yaml} | 1 | |
fail | 58346 | | 2016-03-14 09:10:39 | 2016-03-14 09:11:33 | 2016-03-14 10:09:34 | 0:58:01 | 0:51:36 | 0:06:25 | vps | master | centos | 7.2 | upgrade:hammer-x/parallel/{0-tz-eastern.yaml 4-infernalis.yaml 0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 | Command failed on vpm065 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd dump --format=json' |
dead | 58347 | | 2016-03-14 09:10:40 | 2016-03-14 09:11:01 | 2016-03-14 21:13:31 | 12:02:30 | | | vps | master | centos | 7.2 | upgrade:hammer-x/stress-split/{0-tz-eastern.yaml 0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_7.2.yaml} | 3 | |
dead | 58348 | | 2016-03-14 09:10:40 | 2016-03-14 09:12:35 | 2016-03-14 21:15:05 | 12:02:30 | | | vps | master | centos | 7.2 | upgrade:hammer-x/stress-split-erasure-code/{0-tz-eastern.yaml 0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-no-shec.yaml distros/centos_7.2.yaml} | 3 | |
dead | 58349 | | 2016-03-14 09:10:41 | 2016-03-14 09:12:45 | 2016-03-14 21:15:21 | 12:02:36 | | | vps | master | | | upgrade:hammer-x/stress-split-erasure-code-x86_64/{0-tz-eastern.yaml 0-x86_64.yaml 0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml} | 3 | |
fail | 58350 | | 2016-03-14 09:10:42 | 2016-03-14 09:13:27 | 2016-03-14 10:15:28 | 1:02:01 | 0:54:38 | 0:07:23 | vps | master | centos | 7.2 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-ec-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/centos_7.2.yaml} | 3 | 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds |
pass | 58351 | | 2016-03-14 09:10:43 | 2016-03-14 09:16:05 | 2016-03-14 09:52:05 | 0:36:00 | 0:13:55 | 0:22:05 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/v0-94-4-stop/{ignore.yaml v0-94-4-stop.yaml distros/centos_7.2.yaml distros/ubuntu_14.04.yaml} | 2 | |
fail | 58352 | | 2016-03-14 09:10:44 | 2016-03-14 09:11:10 | 2016-03-14 10:01:11 | 0:50:01 | 0:43:24 | 0:06:37 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-tz-eastern.yaml 4-infernalis.yaml 0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds |
dead | 58353 | | 2016-03-14 09:10:45 | 2016-03-14 09:14:43 | 2016-03-14 21:17:20 | 12:02:37 | | | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code/{0-tz-eastern.yaml 0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 | |
fail | 58354 | | 2016-03-14 09:10:46 | 2016-03-14 09:13:06 | 2016-03-14 10:03:07 | 0:50:01 | 0:41:09 | 0:08:52 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-replicated-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/ubuntu_14.04.yaml} | 3 | 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds |
pass | 58355 | | 2016-03-14 09:10:46 | 2016-03-14 09:15:28 | 2016-03-14 09:55:29 | 0:40:01 | 0:36:09 | 0:03:52 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-infernalis.yaml 4-after.yaml distros/ubuntu_14.04.yaml} | 1 | |
fail | 58356 | | 2016-03-14 09:10:47 | 2016-03-14 09:15:39 | 2016-03-14 10:13:40 | 0:58:01 | 0:52:19 | 0:05:42 | vps | master | centos | 7.2 | upgrade:hammer-x/parallel/{0-tz-eastern.yaml 4-infernalis.yaml 0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 | 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds |
dead | 58357 | | 2016-03-14 09:10:48 | 2016-03-14 09:14:32 | 2016-03-14 21:17:02 | 12:02:30 | | | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split/{0-tz-eastern.yaml 0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 | |
dead | 58358 | | 2016-03-14 09:10:49 | 2016-03-14 09:13:16 | 2016-03-14 21:15:56 | 12:02:40 | | | vps | master | centos | 7.2 | upgrade:hammer-x/stress-split-erasure-code/{0-tz-eastern.yaml 0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_7.2.yaml} | 3 | |
fail | 58359 | | 2016-03-14 09:10:50 | 2016-03-14 09:11:54 | 2016-03-14 10:09:55 | 0:58:01 | 0:51:22 | 0:06:39 | vps | master | centos | 7.2 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-replicated-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/centos_7.2.yaml} | 3 | 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds |
fail | 58360 | | 2016-03-14 09:10:50 | 2016-03-14 09:12:15 | 2016-03-14 09:36:14 | 0:23:59 | 0:18:13 | 0:05:46 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-tz-eastern.yaml 4-infernalis.yaml 0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | Command failed (workunit test cls/test_cls_refcount.sh) on vpm146 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/cls/test_cls_refcount.sh' |
dead | 58362 | | 2016-03-14 09:10:51 | 2016-03-14 09:12:04 | 2016-03-14 21:14:35 | 12:02:31 | | | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code/{0-tz-eastern.yaml 0-cluster/{openstack.yaml start.yaml} 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-no-shec.yaml distros/ubuntu_14.04.yaml} | 3 | |
fail | 58364 | | 2016-03-14 09:10:52 | 2016-03-14 09:13:58 | 2016-03-14 09:59:59 | 0:46:01 | 0:39:43 | 0:06:18 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-ec-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/ubuntu_14.04.yaml} | 3 | 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds |