User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail |
---|---|---|---|---|---|---|---|---|---|
yuriw | 2016-10-11 01:08:08 | 2016-10-11 01:08:16 | 2016-10-11 02:52:14 | 1:43:58 | upgrade:jewel-x | wip-sage-testing | vps | ecf5cd9 | 11 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 467349 | 2016-10-11 01:08:10 | 2016-10-11 01:08:12 | 2016-10-11 02:52:14 | 1:44:02 | 1:35:00 | 0:09:02 | vps | master | centos | 7.2 | upgrade:jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 467350 | 2016-10-11 01:08:11 | 2016-10-11 01:08:12 | 2016-10-11 01:54:13 | 0:46:01 | 0:38:09 | 0:07:52 | vps | master | centos | 7.2 | upgrade:jewel-x/point-to-point-x/{point-to-point-upgrade.yaml distros/centos_7.2.yaml} | 3 |
Failure Reason: Command failed (workunit test cls/test_cls_rbd.sh) on vpm015 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rbd.sh'
fail | 467351 | 2016-10-11 01:08:12 | 2016-10-11 01:08:13 | 2016-10-11 01:40:13 | 0:32:00 | 0:22:51 | 0:09:09 | vps | master | centos | 7.2 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_7.2.yaml} | 3 |
Failure Reason: Command crashed: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
fail | 467352 | 2016-10-11 01:08:12 | 2016-10-11 01:08:13 | 2016-10-11 01:38:14 | 0:30:01 | 0:20:23 | 0:09:38 | vps | master | centos | 7.2 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_7.2.yaml} | 3 |
Failure Reason: Command crashed: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
fail | 467353 | 2016-10-11 01:08:13 | 2016-10-11 01:08:14 | 2016-10-11 01:30:14 | 0:22:00 | 0:16:08 | 0:05:52 | vps | master | | | upgrade:jewel-x/stress-split-erasure-code-x86_64/{0-x86_64.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 3 |
Failure Reason: Command crashed: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
fail | 467354 | 2016-10-11 01:08:14 | 2016-10-11 01:08:15 | 2016-10-11 01:36:15 | 0:28:00 | 0:21:51 | 0:06:09 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: Command crashed: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph health'
fail | 467355 | 2016-10-11 01:08:15 | 2016-10-11 01:08:16 | 2016-10-11 01:46:16 | 0:38:00 | 0:28:12 | 0:09:48 | vps | master | centos | 7.2 | upgrade:jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 |
Failure Reason: Command crashed: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
fail | 467356 | 2016-10-11 01:08:15 | 2016-10-11 01:08:16 | 2016-10-11 01:52:17 | 0:44:01 | 0:37:43 | 0:06:18 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{point-to-point-upgrade.yaml distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: Command failed (workunit test cls/test_cls_rbd.sh) on vpm131 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rbd.sh'
fail | 467357 | 2016-10-11 01:08:16 | 2016-10-11 01:08:17 | 2016-10-11 01:30:17 | 0:22:00 | 0:16:03 | 0:05:57 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: Command crashed: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
fail | 467358 | 2016-10-11 01:08:17 | 2016-10-11 01:08:18 | 2016-10-11 01:32:18 | 0:24:00 | 0:15:56 | 0:08:04 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: Command crashed: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json'
fail | 467359 | 2016-10-11 01:08:17 | 2016-10-11 01:08:18 | 2016-10-11 02:42:21 | 1:34:03 | 1:27:41 | 0:06:22 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{kraken.yaml 0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
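
For reference, the two 'wait_until_healthy' failures (467349, 467359) report 150 tries over 900 seconds, i.e. a 6-second poll interval. Below is a minimal sketch of that style of health poll; the function name, defaults, and the plain `ceph health` invocation are illustrative assumptions, not teuthology's actual implementation.

```python
import subprocess
import time

def wait_until_healthy(tries=150, delay=6):
    """Poll 'ceph health' until HEALTH_OK or the retry budget runs out.

    Hypothetical sketch: 150 tries at a 6-second interval = 900 seconds,
    matching the numbers in the failure reasons above.
    """
    for attempt in range(1, tries + 1):
        # Ask the cluster for its health summary.
        out = subprocess.run(
            ['ceph', '--cluster', 'ceph', 'health'],
            capture_output=True, text=True,
        ).stdout.strip()
        if out.startswith('HEALTH_OK'):
            return
        time.sleep(delay)
    raise RuntimeError(
        f"'wait_until_healthy' reached maximum tries ({tries}) "
        f"after waiting for {tries * delay} seconds"
    )
```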