Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
vpm053.front.sepia.ceph.com | vps | False | False | | | centos | 7.3 | x86_64 | None |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 1277177 | 2017-06-11 02:25:50 | 2017-06-11 02:26:05 | 2017-06-11 03:58:07 | 1:32:02 | 1:21:52 | 0:10:10 | vps | master | centos | 7.3 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/bluestore.yaml} | 3 |
Failure Reason: SSH connection to vpm053 was lost: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0' | | | | | | | | | | | | | |
pass | 1276225 | 2017-06-10 05:57:00 | 2017-06-10 09:19:02 | 2017-06-10 09:45:02 | 0:26:00 | 0:15:58 | 0:10:02 | vps | master | centos | 7.3 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 1275923 | 2017-06-10 05:15:38 | 2017-06-10 08:10:37 | 2017-06-10 09:04:38 | 0:54:01 | 0:19:42 | 0:34:19 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 |
Failure Reason: Command failed (workunit test cls/test_cls_hello.sh) on vpm195 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh' | | | | | | | | | | | | | |
pass | 1275903 | 2017-06-10 05:15:35 | 2017-06-10 07:58:31 | 2017-06-10 08:30:31 | 0:32:00 | 0:20:40 | 0:11:20 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |
fail | 1274973 | 2017-06-10 05:02:02 | 2017-06-10 05:46:26 | 2017-06-10 06:24:26 | 0:38:00 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm053.front.sepia.ceph.com | | | | | | | | | | | | | |
pass | 1274936 | 2017-06-10 04:21:43 | 2017-06-10 06:43:06 | 2017-06-10 11:23:12 | 4:40:06 | 1:23:39 | 3:16:27 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_14.04.yaml point-to-point-upgrade.yaml} | 3 |
pass | 1274932 | 2017-06-10 04:21:41 | 2017-06-10 06:26:49 | 2017-06-10 07:50:50 | 1:24:01 | 0:51:28 | 0:32:33 | vps | master | centos | 7.3 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_latest.yaml} | 3 |
pass | 1274911 | 2017-06-10 03:59:35 | 2017-06-10 04:47:43 | 2017-06-10 05:21:42 | 0:33:59 | 0:11:32 | 0:22:27 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 1274827 | 2017-06-10 03:26:04 | 2017-06-10 03:26:17 | 2017-06-10 06:12:19 | 2:46:02 | 0:37:52 | 2:08:10 | vps | master | ubuntu | 14.04 | upgrade:hammer-jewel-x/parallel/{0-cluster/start.yaml 1-hammer-jewel-install/hammer-jewel.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 3.5-finish.yaml 4-jewel.yaml 5-hammer-jewel-x-upgrade/hammer-jewel-x.yaml 6-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 7-upgrade-sequence/upgrade-all.yaml 8-kraken.yaml 9-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/ubuntu_14.04.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds | | | | | | | | | | | | | |
fail | 1274822 | 2017-06-10 03:26:01 | 2017-06-10 03:26:17 | 2017-06-10 06:38:20 | 3:12:03 | 0:08:10 | 3:03:53 | vps | master | ubuntu | 16.04 | upgrade:hammer-jewel-x/parallel/{0-cluster/start.yaml 1-hammer-jewel-install/hammer-jewel.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 3.5-finish.yaml 4-jewel.yaml 5-hammer-jewel-x-upgrade/hammer-jewel-x.yaml 6-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 7-upgrade-sequence/upgrade-all.yaml 8-kraken.yaml 9-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/ubuntu_latest.yaml} | 3 |
Failure Reason: Command failed on vpm073 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph-mds=0.94.10-1xenial rbd-fuse=0.94.10-1xenial librbd1=0.94.10-1xenial ceph-fuse=0.94.10-1xenial python-ceph=0.94.10-1xenial ceph-common=0.94.10-1xenial libcephfs-java=0.94.10-1xenial ceph=0.94.10-1xenial libcephfs-jni=0.94.10-1xenial ceph-test=0.94.10-1xenial radosgw=0.94.10-1xenial librados2=0.94.10-1xenial' | | | | | | | | | | | | | |
fail | 1274207 | 2017-06-10 02:25:43 | 2017-06-10 02:25:48 | 2017-06-10 05:01:51 | 2:36:03 | 2:28:31 | 0:07:32 | vps | master | ubuntu | 14.04 | upgrade:kraken-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml objectstore/bluestore.yaml} | 3 |
Failure Reason: need more than 0 values to unpack | | | | | | | | | | | | | |
pass | 1272737 | 2017-06-09 17:43:23 | 2017-06-09 17:44:13 | 2017-06-09 19:02:13 | 1:18:00 | 1:07:39 | 0:10:21 | vps | master | centos | 7.3 | upgrade:kraken-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml} | 3 |
pass | 1272307 | 2017-06-09 05:00:45 | 2017-06-09 05:00:58 | 2017-06-09 07:11:00 | 2:10:02 | 2:03:38 | 0:06:24 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} | 3 |
dead | 1269418 | 2017-06-08 02:26:54 | 2017-06-08 02:27:34 | 2017-06-08 14:30:02 | 12:02:28 | | | vps | master | ubuntu | 14.04 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml objectstore/bluestore.yaml} | 3 |
pass | 1267690 | 2017-06-07 11:11:34 | 2017-06-07 11:20:19 | 2017-06-07 12:34:19 | 1:14:00 | 0:35:07 | 0:38:53 | vps | master | centos | 7.3 | ceph-disk/basic/{distros/centos_latest.yaml tasks/ceph-disk.yaml} | 2 |
pass | 1267559 | 2017-06-07 07:31:17 | 2017-06-07 07:37:29 | 2017-06-07 08:15:29 | 0:38:00 | 0:31:24 | 0:06:36 | vps | master | centos | 7.3 | ceph-disk/basic/{distros/centos_latest.yaml tasks/ceph-disk.yaml} | 2 |
pass | 1267160 | 2017-06-07 03:48:28 | 2017-06-07 13:25:47 | 2017-06-07 15:29:48 | 2:04:01 | 1:20:01 | 0:44:00 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |
fail | 1267155 | 2017-06-07 03:48:25 | 2017-06-07 12:30:18 | 2017-06-07 17:22:24 | 4:52:06 | 1:04:13 | 3:47:53 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds | | | | | | | | | | | | | |
fail | 1267129 | 2017-06-07 03:48:07 | 2017-06-07 08:59:21 | 2017-06-07 09:47:22 | 0:48:01 | 0:36:53 | 0:11:08 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds | | | | | | | | | | | | | |
pass | 1267128 | 2017-06-07 03:48:06 | 2017-06-07 08:21:11 | 2017-06-07 14:07:19 | 5:46:08 | 1:23:54 | 4:22:14 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/rados_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml} | 3 |