User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail |
---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-04-10 01:15:34 | 2017-04-10 01:17:31 | 2017-04-10 04:39:36 | 3:22:05 | upgrade:hammer-x | jewel | vps | a64d3e4 | 10 | 7 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1006199 | 2017-04-10 01:17:02 | 2017-04-10 01:17:31 | 2017-04-10 04:35:35 | 3:18:04 | 3:15:04 | 0:03:00 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml ubuntu_14.04.yaml} | 1 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on vpm035 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
fail | 1006200 | 2017-04-10 01:17:03 | 2017-04-10 01:17:32 | 2017-04-10 02:47:32 | 1:30:00 | 1:20:30 | 0:09:30 | vps | master | centos | 7.3 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-jewel.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_numops.sh) on vpm195 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_numops.sh' |
fail | 1006201 | 2017-04-10 01:17:04 | 2017-04-10 01:17:32 | 2017-04-10 04:39:36 | 3:22:04 | 3:13:10 | 0:08:54 | vps | master | centos | 7.3 | upgrade:hammer-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_7.3.yaml} | 3 | |
Failure Reason:
Command failed (workunit test rbd/test_librbd_python.sh) on vpm135 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh' |
pass | 1006202 | 2017-04-10 01:17:04 | 2017-04-10 01:17:31 | 2017-04-10 02:47:32 | 1:30:01 | 1:19:37 | 0:10:24 | vps | master | centos | 7.3 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-no-shec.yaml 6-next-mon/monb.yaml 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_7.3.yaml} | 3 | |
pass | 1006203 | 2017-04-10 01:17:05 | 2017-04-10 01:17:32 | 2017-04-10 02:59:33 | 1:42:01 | 1:33:52 | 0:08:09 | vps | master | | | upgrade:hammer-x/stress-split-erasure-code-x86_64/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 0-x86_64.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml} | 3 | |
pass | 1006204 | 2017-04-10 01:17:06 | 2017-04-10 01:17:32 | 2017-04-10 02:33:32 | 1:16:00 | 1:05:37 | 0:10:23 | vps | master | centos | 7.3 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-ec-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/centos_7.3.yaml} | 3 | |
pass | 1006205 | 2017-04-10 01:17:06 | 2017-04-10 01:17:32 | 2017-04-10 01:39:31 | 0:21:59 | 0:16:19 | 0:05:40 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/v0-94-4-stop/{distros/centos_7.3.yaml distros/ubuntu_14.04.yaml ignore.yaml v0-94-4-stop.yaml} | 2 | |
fail | 1006206 | 2017-04-10 01:17:07 | 2017-04-10 01:17:32 | 2017-04-10 02:41:33 | 1:24:01 | 1:14:02 | 0:09:59 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-osd-mds-mon.yaml 4-jewel.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_numops.sh) on vpm005 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_numops.sh' |
pass | 1006207 | 2017-04-10 01:17:08 | 2017-04-10 01:17:31 | 2017-04-10 03:01:33 | 1:44:02 | 1:35:04 | 0:08:58 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 | |
pass | 1006208 | 2017-04-10 01:17:08 | 2017-04-10 01:17:32 | 2017-04-10 02:13:32 | 0:56:00 | 0:47:42 | 0:08:18 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-replicated-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/ubuntu_14.04.yaml} | 3 | |
fail | 1006209 | 2017-04-10 01:17:09 | 2017-04-10 01:17:31 | 2017-04-10 03:05:33 | 1:48:02 | 1:20:29 | 0:27:33 | vps | master | centos | 7.3 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-osd-mds-mon.yaml 4-jewel.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.3.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_numops.sh) on vpm021 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_numops.sh' |
fail | 1006210 | 2017-04-10 01:17:10 | 2017-04-10 01:17:32 | 2017-04-10 02:51:33 | 1:34:01 | 1:26:28 | 0:07:33 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Thrasher instance has no attribute 'ceph_objectstore_tool' |
pass | 1006211 | 2017-04-10 01:17:10 | 2017-04-10 01:17:32 | 2017-04-10 03:23:34 | 2:06:02 | 1:55:02 | 0:11:00 | vps | master | centos | 7.3 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_7.3.yaml} | 3 | |
pass | 1006212 | 2017-04-10 01:17:11 | 2017-04-10 01:17:32 | 2017-04-10 02:21:32 | 1:04:00 | 0:53:31 | 0:10:29 | vps | master | centos | 7.3 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-replicated-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/centos_7.3.yaml} | 3 | |
fail | 1006213 | 2017-04-10 01:17:12 | 2017-04-10 01:17:31 | 2017-04-10 02:35:37 | 1:18:06 | 1:09:36 | 0:08:30 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-jewel.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 | |
Failure Reason:
Command failed (workunit test cls/test_cls_numops.sh) on vpm073 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_numops.sh' |
pass | 1006214 | 2017-04-10 01:17:12 | 2017-04-10 01:17:31 | 2017-04-10 02:33:32 | 1:16:01 | 1:09:08 | 0:06:53 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-no-shec.yaml 6-next-mon/monb.yaml 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 | |
pass | 1006215 | 2017-04-10 01:17:13 | 2017-04-10 01:17:32 | 2017-04-10 02:25:32 | 1:08:00 | 1:00:22 | 0:07:38 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-ec-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/ubuntu_14.04.yaml} | 3 |
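A note on reading the exit statuses in the failure reasons above: the workunits are wrapped in `timeout 3h`, and coreutils `timeout(1)` exits 124 when the wrapped command exceeds its limit (job 1006199), while a shell reports 127 when the command to run cannot be found (the `test_cls_numops.sh` failures). A minimal illustration, with placeholder commands standing in for the workunit scripts:

```shell
# Exit status 124: timeout(1) killed the command after its time limit,
# as with 'timeout 3h ... rados/test.sh' in job 1006199.
timeout 1 sleep 5
echo "status after timeout: $?"    # 124

# Exit status 127: the shell could not find the command to execute,
# as with the test_cls_numops.sh failures (status 127).
sh -c 'definitely_not_a_real_command' 2>/dev/null
echo "status for missing command: $?"    # 127
```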