User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|
smithfarm | 2017-04-16 18:32:18 | 2017-04-16 18:32:33 | 2017-04-16 19:44:34 | 1:12:01 | upgrade:jewel-x | wip-kraken-backports | vps | 91ecccf | 4 | 1 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
fail | 1033425 | | 2017-04-16 18:32:23 | 2017-04-16 18:32:33 | 2017-04-16 19:44:34 | 1:12:01 | 1:01:56 | 0:10:05 | vps | master | centos | 7.3 | upgrade:jewel-x/point-to-point-x/{distros/centos_latest.yaml point-to-point-upgrade.yaml} | 3
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
dead | 1033426 | | 2017-04-16 18:32:23 | 2017-04-16 18:32:33 | 2017-04-16 18:36:33 | 0:04:00 | | | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml kraken.yaml} | —
Failure Reason: 'downburst create' reached maximum tries (3) after waiting for 180 seconds
fail | 1033427 | | 2017-04-16 18:32:24 | 2017-04-16 18:32:33 | 2017-04-16 19:08:33 | 0:36:00 | 0:28:37 | 0:07:23 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_14.04.yaml point-to-point-upgrade.yaml} | 3
Failure Reason: Command failed (workunit test rados/test.sh) on vpm115 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CLS_RBD_GTEST_FILTER=\'*:-TestClsRbd.mirror_image\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/rados/test.sh'
fail | 1033428 | | 2017-04-16 18:32:25 | 2017-04-16 18:32:33 | 2017-04-16 18:48:33 | 0:16:00 | 0:08:09 | 0:07:51 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_latest.yaml point-to-point-upgrade.yaml} | 3
Failure Reason: Command failed on vpm011 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph-mds=10.2.0-1xenial rbd-fuse=10.2.0-1xenial librbd1=10.2.0-1xenial ceph-fuse=10.2.0-1xenial python-ceph=10.2.0-1xenial ceph-common=10.2.0-1xenial libcephfs-java=10.2.0-1xenial ceph=10.2.0-1xenial libcephfs-jni=10.2.0-1xenial ceph-test=10.2.0-1xenial radosgw=10.2.0-1xenial librados2=10.2.0-1xenial'
fail | 1033429 | | 2017-04-16 18:32:25 | 2017-04-16 18:32:33 | 2017-04-16 18:44:33 | 0:12:00 | | | vps | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_latest.yaml} | 3
Failure Reason: failed to install new kernel version within timeout
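For jobs that ran to completion, the Runtime column is the sum of the Duration and In Waiting columns; the two interrupted jobs (1033426 and 1033429) report only a Runtime. A quick sanity check of that relationship, using the job IDs and times from the table above (`parse_hms` is just a local helper for the dashboard's `H:MM:SS` format):

```python
from datetime import timedelta

def parse_hms(s):
    """Parse a 'H:MM:SS' time string into a timedelta."""
    h, m, sec = (int(x) for x in s.split(":"))
    return timedelta(hours=h, minutes=m, seconds=sec)

# (Runtime, Duration, In Waiting) for the three completed fail jobs
jobs = {
    1033425: ("1:12:01", "1:01:56", "0:10:05"),
    1033427: ("0:36:00", "0:28:37", "0:07:23"),
    1033428: ("0:16:00", "0:08:09", "0:07:51"),
}

for job_id, (runtime, duration, in_waiting) in jobs.items():
    assert parse_hms(runtime) == parse_hms(duration) + parse_hms(in_waiting), job_id
```

All three rows satisfy the check, so the run's 1:12:01 total runtime is dominated by job 1033425's execution time rather than queue wait.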