User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-01-09 18:15:02 | 2017-01-09 18:15:35 | 2017-01-10 06:18:49 | 12:03:14 | upgrade:hammer-x | jewel | vps | 5b402f8 | 6 | 10 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 702762 | 2017-01-09 18:15:26 | 2017-01-09 18:15:33 | 2017-01-09 18:27:32 | 0:11:59 | 0:08:18 | 0:03:41 | vps | master | centos | 7.2 | upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml distros/centos_7.2.yaml} | 1 |
Failure Reason: Command failed on vpm147 with status 1: "sudo yum -y install '' ceph-radosgw"
fail | 702763 | 2017-01-09 18:15:26 | 2017-01-09 18:15:34 | 2017-01-09 20:13:35 | 1:58:01 | 1:48:30 | 0:09:31 | vps | master | centos | 7.2 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-jewel.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/test_librbd_python.sh) on vpm127 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
fail | 702764 | 2017-01-09 18:15:27 | 2017-01-09 18:15:34 | 2017-01-09 21:31:37 | 3:16:03 | 3:06:30 | 0:09:33 | vps | master | centos | 7.2 | upgrade:hammer-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_7.2.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/test_librbd_python.sh) on vpm119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
fail | 702765 | 2017-01-09 18:15:28 | 2017-01-09 18:15:34 | 2017-01-09 19:27:34 | 1:12:00 | 1:03:32 | 0:08:28 | vps | master | centos | 7.2 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-no-shec.yaml 6-next-mon/monb.yaml 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_7.2.yaml} | 3 |
Failure Reason: Command failed on vpm027 with status 1: 'sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-5 --journal-path /var/lib/ceph/osd/ceph-5/journal --log-file=/var/log/ceph/objectstore_tool.\\$pid.log --op remove --pgid 0.20'
pass | 702766 | 2017-01-09 18:15:29 | 2017-01-09 18:15:33 | 2017-01-09 19:51:34 | 1:36:01 | 1:29:17 | 0:06:44 | vps | master | | | upgrade:hammer-x/stress-split-erasure-code-x86_64/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 0-x86_64.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml} | 3 |
pass | 702767 | 2017-01-09 18:15:30 | 2017-01-09 18:15:33 | 2017-01-09 19:25:34 | 1:10:01 | 1:00:56 | 0:09:05 | vps | master | centos | 7.2 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-ec-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/centos_7.2.yaml} | 3 |
fail | 702768 | 2017-01-09 18:15:30 | 2017-01-09 18:15:34 | 2017-01-09 18:33:33 | 0:17:59 | 0:11:36 | 0:06:23 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/v0-94-4-stop/{distros/centos_7.2.yaml distros/ubuntu_14.04.yaml ignore.yaml v0-94-4-stop.yaml} | 2 |
Failure Reason: Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F14.04%2Fx86_64&sha1=95cefea9fd9ab740263bf8bb4796fd864d9afe2b
fail | 702769 | 2017-01-09 18:15:31 | 2017-01-09 18:15:33 | 2017-01-09 19:59:35 | 1:44:02 | 1:36:22 | 0:07:40 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-osd-mds-mon.yaml 4-jewel.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/test_librbd_python.sh) on vpm083 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
dead | 702770 | 2017-01-09 18:15:32 | 2017-01-09 18:15:34 | 2017-01-10 05:17:47 | 11:02:13 | 10:54:17 | 0:07:56 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: SSH connection to vpm053 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 0 --op rmattr 25 --op append 100 --op delete 50 --pool unique_pool_1'
pass | 702771 | 2017-01-09 18:15:33 | 2017-01-09 18:15:34 | 2017-01-09 19:21:35 | 1:06:01 | 0:58:55 | 0:07:06 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-replicated-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/ubuntu_14.04.yaml} | 3 |
pass | 702772 | 2017-01-09 18:15:33 | 2017-01-09 18:15:35 | 2017-01-09 19:25:35 | 1:10:00 | 1:05:59 | 0:04:01 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/f-h-x-offline/{0-install.yaml 1-pre.yaml 2-upgrade.yaml 3-jewel.yaml 4-after.yaml distros/ubuntu_14.04.yaml} | 1 |
fail | 702773 | 2017-01-09 18:15:34 | 2017-01-09 18:15:36 | 2017-01-09 20:13:37 | 1:58:01 | 1:48:44 | 0:09:17 | vps | master | centos | 7.2 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-osd-mds-mon.yaml 4-jewel.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_7.2.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/test_librbd_python.sh) on vpm197 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
fail | 702774 | 2017-01-09 18:15:35 | 2017-01-09 18:15:37 | 2017-01-09 21:21:40 | 3:06:03 | 2:58:45 | 0:07:18 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/test_librbd_python.sh) on vpm039 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
pass | 702775 | 2017-01-09 18:15:36 | 2017-01-09 18:15:38 | 2017-01-09 20:07:39 | 1:52:01 | 1:42:11 | 0:09:50 | vps | master | centos | 7.2 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_7.2.yaml} | 3 |
pass | 702776 | 2017-01-09 18:15:37 | 2017-01-09 18:15:38 | 2017-01-09 19:19:38 | 1:04:00 | 0:55:35 | 0:08:25 | vps | master | centos | 7.2 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-replicated-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/centos_7.2.yaml} | 3 |
fail | 702777 | 2017-01-09 18:15:37 | 2017-01-09 18:15:39 | 2017-01-09 20:09:40 | 1:54:01 | 1:46:54 | 0:07:07 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/parallel/{0-cluster/start.yaml 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-jewel.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: Command failed (workunit test rbd/test_librbd_python.sh) on vpm067 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=hammer TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.0/rbd/test_librbd_python.sh'
dead | 702778 | 2017-01-09 18:15:38 | 2017-01-09 18:15:39 | 2017-01-10 06:18:49 | 12:03:10 | | | vps | master | ubuntu | 14.04 | upgrade:hammer-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-no-shec.yaml 6-next-mon/monb.yaml 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_14.04.yaml} | 3 |
fail | 702779 | 2017-01-09 18:15:39 | 2017-01-09 18:15:40 | 2017-01-09 19:09:40 | 0:54:00 | 0:46:57 | 0:07:03 | vps | master | ubuntu | 14.04 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-ec-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
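The Pass/Fail/Dead counts in the run summary (6 pass, 10 fail, 2 dead) can be re-derived from the job rows above. A minimal sketch, assuming the report is available as plain pipe-separated text and that the first cell of each job row is its status:

```python
from collections import Counter

def tally_statuses(report: str) -> Counter:
    """Count teuthology job rows by status (pass/fail/dead).

    A job row is any line whose first pipe-separated cell is a known
    status; 'Failure Reason' lines and headers are ignored.
    """
    counts = Counter()
    for line in report.splitlines():
        first = line.split("|", 1)[0].strip()
        if first in ("pass", "fail", "dead"):
            counts[first] += 1
    return counts

# Example with abbreviated rows in the same shape as the report:
sample = """\
fail | 702762 | 2017-01-09 18:15:26 | vps | master | 1 |
Failure Reason: Command failed on vpm147 with status 1
pass | 702766 | 2017-01-09 18:15:29 | vps | master | 3 |
dead | 702770 | 2017-01-09 18:15:32 | vps | master | 3 |
"""
print(tally_statuses(sample))  # Counter({'fail': 1, 'pass': 1, 'dead': 1})
```

Run against the full table, the tally should match the summary row; a mismatch would indicate a truncated or garbled paste.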