Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
vpm063.front.sepia.ceph.com | vps | False | False | | | ubuntu | 16.04 | x86_64 | None |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 1252032 | | 2017-06-01 05:56:29 | 2017-06-01 09:56:00 | 2017-06-01 10:22:00 | 0:26:00 | 0:11:40 | 0:14:20 | vps | master | ubuntu | 14.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_14.04.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
pass | 1252013 | | 2017-06-01 05:56:24 | 2017-06-01 09:33:42 | 2017-06-01 10:03:39 | 0:29:57 | 0:10:26 | 0:19:31 | vps | master | ubuntu | 14.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_14.04.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 1250718 | | 2017-06-01 05:02:18 | 2017-06-01 07:37:20 | 2017-06-01 09:49:22 | 2:12:02 | 2:03:15 | 0:08:47 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 |

Failure Reason: need more than 0 values to unpack

fail | 1250680 | | 2017-06-01 05:02:10 | 2017-06-01 05:59:23 | 2017-06-01 07:41:23 | 1:42:00 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} | 3 |

Failure Reason: Could not reconnect to ubuntu@vpm083.front.sepia.ceph.com

pass | 1250652 | | 2017-06-01 05:02:05 | 2017-06-01 05:13:16 | 2017-06-01 12:55:27 | 7:42:11 | 2:24:10 | 5:18:01 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} | 3 |
pass | 1250527 | | 2017-06-01 04:21:30 | 2017-06-01 04:21:42 | 2017-06-01 07:27:44 | 3:06:02 | 2:23:42 | 0:42:20 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml kraken.yaml} | 3 |
fail | 1249714 | | 2017-06-01 02:26:39 | 2017-06-01 02:26:57 | 2017-06-01 04:49:03 | 2:22:06 | 2:15:23 | 0:06:43 | vps | master | ubuntu | 16.04 | upgrade:kraken-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml} | 3 |

Failure Reason: need more than 0 values to unpack

fail | 1247931 | | 2017-05-31 05:01:49 | 2017-05-31 07:49:49 | 2017-05-31 10:23:52 | 2:34:03 | 2:00:55 | 0:33:08 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 |

Failure Reason: need more than 0 values to unpack

pass | 1247871 | | 2017-05-31 05:01:35 | 2017-05-31 05:21:16 | 2017-05-31 08:17:19 | 2:56:03 | 2:28:21 | 0:27:42 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 |
fail | 1247849 | | 2017-05-31 05:01:30 | 2017-05-31 05:02:07 | 2017-05-31 05:42:08 | 0:40:01 | 0:37:32 | 0:02:29 | vps | master | ubuntu | 16.04 | smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} | 1 |

Failure Reason: Command failed on vpm063 with status 32: 'sudo umount /dev/vdb1'

fail | 1246581 | | 2017-05-31 02:27:38 | 2017-05-31 02:28:05 | 2017-05-31 05:02:07 | 2:34:02 | 2:27:16 | 0:06:46 | vps | master | ubuntu | 14.04 | upgrade:kraken-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml objectstore/bluestore.yaml} | 3 |

Failure Reason: need more than 0 values to unpack

pass | 1244329 | | 2017-05-30 05:55:34 | 2017-05-30 15:15:24 | 2017-05-30 15:33:24 | 0:18:00 | 0:10:39 | 0:07:21 | vps | master | ubuntu | 14.04 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_14.04.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 1244001 | | 2017-05-30 05:15:32 | 2017-05-30 05:15:45 | 2017-05-30 05:57:45 | 0:42:00 | 0:11:46 | 0:30:14 | vps | master | centos | 7.2 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.2.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |

Failure Reason: Command failed on vpm009 with status 2: 'cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install --upgrade pip ; pip install setuptools>=11.3 ansible==2.2.1 ; ansible-playbook -vv -i inven.yml site.yml'

pass | 1243051 | | 2017-05-30 05:04:18 | 2017-05-30 05:04:38 | 2017-05-30 08:40:42 | 3:36:04 | 2:40:14 | 0:55:50 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3 |
fail | 1242762 | | 2017-05-30 03:28:12 | 2017-05-30 03:28:25 | 2017-05-30 03:42:24 | 0:13:59 | 0:08:03 | 0:05:56 | vps | master | | | upgrade:hammer-jewel-x/tiering/{0-cluster/start.yaml 1-install-hammer-and-upgrade-to-jewel/hammer-to-jewel.yaml 2-setup-cache-tiering/{0-create-base-tier/create-ec-pool.yaml 1-create-cache-tier.yaml} 3-upgrade.yaml} | 3 |

Failure Reason: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",)

fail | 1242280 | | 2017-05-30 02:25:47 | 2017-05-30 02:25:48 | 2017-05-30 02:45:48 | 0:20:00 | 0:10:52 | 0:09:08 | vps | master | centos | 7.3 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/bluestore.yaml} | 3 |

Failure Reason: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",)

pass | 1241332 | | 2017-05-29 05:03:35 | 2017-05-29 05:41:51 | 2017-05-29 06:37:51 | 0:56:00 | 0:34:45 | 0:21:15 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} | 3 |
fail | 1241324 | | 2017-05-29 05:03:29 | 2017-05-29 05:08:26 | 2017-05-29 06:58:28 | 1:50:02 | 0:17:12 | 1:32:50 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3 |

Failure Reason: Command failed (workunit test cls/test_cls_sdk.sh) on vpm063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'

pass | 1241315 | | 2017-05-29 05:03:23 | 2017-05-29 05:03:33 | 2017-05-29 05:59:39 | 0:56:06 | 0:46:22 | 0:09:44 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 |
fail | 1241310 | | 2017-05-29 05:03:20 | 2017-05-29 05:03:33 | 2017-05-29 07:39:41 | 2:36:08 | | | vps | master | centos | 7.3 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 |

Failure Reason: Could not reconnect to ubuntu@vpm123.front.sepia.ceph.com