Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
vpm071.front.sepia.ceph.com | vps | False | False | | | centos | 7.4 | x86_64 | VPSHOST repurposed as testnode |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 2174040 | | 2018-02-08 23:07:36 | 2018-02-08 23:29:44 | 2018-02-08 23:55:44 | 0:26:00 | 0:20:51 | 0:05:09 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/bluestore.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 2173956 | | 2018-02-08 23:07:09 | 2018-02-08 23:07:41 | 2018-02-08 23:29:41 | 0:22:00 | 0:12:21 | 0:09:39 | vps | master | centos | 7.4 | ceph-deploy/ceph-volume/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml tasks/rbd_import_export.yaml} | 4 |
Failure Reason: {'vpm091.front.sepia.ceph.com': {'msg': 'All items completed', 'failed': True, 'changed': False}, 'vpm175.front.sepia.ceph.com': {'msg': 'All items completed', 'failed': True, 'changed': False}, 'vpm071.front.sepia.ceph.com': {'msg': 'All items completed', 'failed': True, 'changed': False}, 'vpm043.front.sepia.ceph.com': {'msg': 'All items completed', 'failed': True, 'changed': False}}
fail | 2117838 | | 2018-01-27 05:55:45 | 2018-01-27 05:55:46 | 2018-01-27 06:17:46 | 0:22:00 | 0:15:33 | 0:06:27 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: ceph-deploy: Failed to zap osds
pass | 2116744 | | 2018-01-27 03:59:51 | 2018-01-27 03:59:53 | 2018-01-27 04:27:52 | 0:27:59 | 0:21:30 | 0:06:29 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 2109019 | | 2018-01-25 05:55:46 | 2018-01-25 05:55:48 | 2018-01-25 06:21:47 | 0:25:59 | 0:19:04 | 0:06:55 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/bluestore.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: ceph-deploy: Failed to zap osds
fail | 2103418 | | 2018-01-23 05:55:55 | 2018-01-23 05:56:00 | 2018-01-23 06:24:00 | 0:28:00 | 0:21:15 | 0:06:45 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/bluestore.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: ceph-deploy: Failed to zap osds
pass | 2098364 | | 2018-01-22 03:59:40 | 2018-01-22 03:59:41 | 2018-01-22 04:25:41 | 0:26:00 | 0:19:44 | 0:06:16 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 2094687 | | 2018-01-20 05:55:31 | 2018-01-20 05:55:46 | 2018-01-20 06:19:45 | 0:23:59 | 0:17:41 | 0:06:18 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: ceph-deploy: Failed to zap osds
pass | 2093620 | | 2018-01-20 04:00:01 | 2018-01-20 04:10:19 | 2018-01-20 04:38:19 | 0:28:00 | 0:13:08 | 0:14:52 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
pass | 2093592 | | 2018-01-20 03:59:42 | 2018-01-20 04:00:00 | 2018-01-20 04:19:59 | 0:19:59 | 0:14:18 | 0:05:41 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
dead | 2084245 | | 2018-01-17 23:22:00 | 2018-01-17 23:22:01 | 2018-01-17 23:42:00 | 0:19:59 | | | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
pass | 2043145 | | 2018-01-08 18:03:57 | 2018-01-08 18:22:07 | 2018-01-08 18:42:06 | 0:19:59 | 0:14:25 | 0:05:34 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
pass | 2041884 | | 2018-01-08 04:23:34 | 2018-01-08 07:05:53 | 2018-01-08 15:58:04 | 8:52:11 | 1:22:07 | 7:30:04 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-workload.yaml 6-luminous-with-mgr.yaml 6.5-crush-compat.yaml 7-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} 8-jewel-workload.yaml distros/ubuntu_14.04.yaml} | 4 |
dead | 2040656 | | 2018-01-08 02:25:35 | 2018-01-08 02:25:39 | 2018-01-08 14:28:10 | 12:02:31 | | | vps | master | centos | 7.4 | upgrade:luminous-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml} | 3 |
pass | 2038846 | | 2018-01-07 05:00:37 | 2018-01-07 05:49:56 | 2018-01-07 06:59:57 | 1:10:01 | 0:21:20 | 0:48:41 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3 |
pass | 2038844 | | 2018-01-07 05:00:37 | 2018-01-07 05:49:47 | 2018-01-07 07:49:49 | 2:00:02 | 0:46:35 | 1:13:27 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 |
dead | 2038303 | | 2018-01-07 04:20:41 | 2018-01-07 04:42:06 | 2018-01-07 16:44:23 | 12:02:17 | | | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 |
fail | 2038277 | | 2018-01-07 04:20:35 | 2018-01-07 04:20:36 | 2018-01-07 05:46:37 | 1:26:01 | 0:56:22 | 0:29:39 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_14.04.yaml point-to-point-upgrade.yaml} | 3 |
Failure Reason: Command failed (workunit test cls/test_cls_refcount.sh) on vpm159 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CLS_RBD_GTEST_FILTER=\'*:-TestClsRbd.mirror_image\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cls/test_cls_refcount.sh'
fail | 2038260 | | 2018-01-07 04:20:31 | 2018-01-07 04:20:34 | 2018-01-07 06:32:35 | 2:12:01 | 0:40:04 | 1:31:57 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 2038011 | | 2018-01-07 04:15:25 | 2018-01-07 04:15:36 | 2018-01-07 04:41:36 | 0:26:00 | 0:17:23 | 0:08:37 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |