Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
vpm159.front.sepia.ceph.com | vps | True | False | | | centos | 7.4 | x86_64 | None |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 2173974 | | 2018-02-08 23:07:14 | 2018-02-08 23:07:40 | 2018-02-08 23:35:40 | 0:28:00 | 0:21:58 | 0:06:02 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 2117850 | | 2018-01-27 05:55:53 | 2018-01-27 06:11:52 | 2018-01-27 06:29:51 | 0:17:59 | 0:13:05 | 0:04:54 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: ceph-deploy: Failed to zap osds
fail | 2117834 | | 2018-01-27 05:55:42 | 2018-01-27 05:55:43 | 2018-01-27 06:11:43 | 0:16:00 | 0:11:21 | 0:04:39 | vps | master | ubuntu | 14.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_14.04.yaml objectstore/bluestore.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: Command failed on vpm087 with status 1: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
pass | 2116727 | | 2018-01-27 03:59:40 | 2018-01-27 03:59:51 | 2018-01-27 04:27:50 | 0:27:59 | 0:21:39 | 0:06:20 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/ceph_deploy_dmcrypt.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 2109024 | | 2018-01-25 05:55:49 | 2018-01-25 06:07:52 | 2018-01-25 06:23:51 | 0:15:59 | 0:10:56 | 0:05:03 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/ceph_deploy_dmcrypt.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: Command failed on vpm039 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
fail | 2109003 | | 2018-01-25 05:55:35 | 2018-01-25 05:55:46 | 2018-01-25 06:07:45 | 0:11:59 | | | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: Could not reconnect to ubuntu@vpm159.front.sepia.ceph.com
fail | 2103415 | | 2018-01-23 05:55:53 | 2018-01-23 05:55:57 | 2018-01-23 06:21:57 | 0:26:00 | 0:19:23 | 0:06:37 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/ceph_deploy_dmcrypt.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: ceph-deploy: Failed to zap osds
pass | 2098362 | | 2018-01-22 03:59:39 | 2018-01-22 03:59:40 | 2018-01-22 04:23:40 | 0:24:00 | 0:18:12 | 0:05:48 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/ceph_deploy_dmcrypt.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 2094721 | | 2018-01-20 05:55:53 | 2018-01-20 06:14:04 | 2018-01-20 06:30:03 | 0:15:59 | 0:08:42 | 0:07:17 | vps | master | ubuntu | 14.04 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_14.04.yaml objectstore/bluestore.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: Command failed on vpm101 with status 1: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
fail | 2094697 | | 2018-01-20 05:55:37 | 2018-01-20 05:55:46 | 2018-01-20 06:15:44 | 0:19:58 | 0:10:53 | 0:09:05 | vps | master | ubuntu | 14.04 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_14.04.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: Command failed on vpm101 with status 1: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'
fail | 2093593 | | 2018-01-20 03:59:43 | 2018-01-20 03:59:59 | 2018-01-20 04:29:58 | 0:29:59 | 0:24:27 | 0:05:32 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/bluestore.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: ceph-deploy: Failed to install ceph-test
dead | 2084235 | | 2018-01-17 23:21:53 | 2018-01-17 23:21:57 | 2018-01-17 23:41:56 | 0:19:59 | | | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
pass | 2043152 | | 2018-01-08 18:04:01 | 2018-01-08 18:26:06 | 2018-01-08 19:00:06 | 0:34:00 | 0:22:03 | 0:11:57 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/bluestore.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 2043141 | | 2018-01-08 18:03:54 | 2018-01-08 18:04:05 | 2018-01-08 18:28:03 | 0:23:58 | 0:17:44 | 0:06:14 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
Failure Reason: Command failed on vpm121 with status 1: 'sudo tar cz -f /tmp/tmpXkXlxz -C /var/lib/ceph/mon -- .'
fail | 2041801 | | 2018-01-08 04:05:31 | 2018-01-08 04:45:50 | 2018-01-08 14:40:02 | 9:54:12 | | | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/4-node.yaml 1-distros/ubuntu_latest.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/ceph-admin-commands.yaml} | 4 |
Failure Reason: Could not reconnect to ubuntu@vpm141.front.sepia.ceph.com
dead | 2040651 | | 2018-01-08 02:25:32 | 2018-01-08 02:25:39 | 2018-01-08 14:28:10 | 12:02:31 | | | vps | master | centos | 7.4 | upgrade:luminous-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-luminous-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 7-final-workload.yaml distros/centos_latest.yaml objectstore/bluestore.yaml thrashosds-health.yaml} | 3 |
pass | 2038846 | | 2018-01-07 05:00:37 | 2018-01-07 05:49:56 | 2018-01-07 06:59:57 | 1:10:01 | 0:21:20 | 0:48:41 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3 |
pass | 2038844 | | 2018-01-07 05:00:37 | 2018-01-07 05:49:47 | 2018-01-07 07:49:49 | 2:00:02 | 0:46:35 | 1:13:27 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 |
dead | 2038303 | | 2018-01-07 04:20:41 | 2018-01-07 04:42:06 | 2018-01-07 16:44:23 | 12:02:17 | | | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/rados_loadgenbig.yaml 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_16.04.yaml} | 3 |
fail | 2038277 | | 2018-01-07 04:20:35 | 2018-01-07 04:20:36 | 2018-01-07 05:46:37 | 1:26:01 | 0:56:22 | 0:29:39 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_14.04.yaml point-to-point-upgrade.yaml} | 3 |
Failure Reason: Command failed (workunit test cls/test_cls_refcount.sh) on vpm159 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CLS_RBD_GTEST_FILTER=\'*:-TestClsRbd.mirror_image\' adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/cls/test_cls_refcount.sh'