| Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
|---|---|---|---|---|---|---|---|---|---|
| vpm077.front.sepia.ceph.com | vps | False | False | | | centos | 7.4 | x86_64 | None |
| Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| pass | 2173988 | | 2018-02-08 23:07:19 | 2018-02-08 23:07:45 | 2018-02-08 23:35:46 | 0:28:01 | 0:22:49 | 0:05:12 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
| fail | 2117841 | | 2018-01-27 05:55:47 | 2018-01-27 05:55:48 | 2018-01-27 06:17:47 | 0:21:59 | 0:15:27 | 0:06:32 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
| pass | 2116747 | | 2018-01-27 03:59:53 | 2018-01-27 03:59:54 | 2018-01-27 04:39:54 | 0:40:00 | 0:18:22 | 0:21:38 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/bluestore.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
| fail | 2116743 | | 2018-01-27 03:59:50 | 2018-01-27 03:59:53 | 2018-01-27 04:15:51 | 0:15:58 | 0:10:11 | 0:05:47 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
| fail | 2103420 | | 2018-01-23 05:55:57 | 2018-01-23 05:56:04 | 2018-01-23 06:20:04 | 0:24:00 | 0:16:52 | 0:07:08 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
| pass | 2098368 | | 2018-01-22 03:59:43 | 2018-01-22 03:59:44 | 2018-01-22 04:25:43 | 0:25:59 | 0:19:53 | 0:06:06 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
| fail | 2094705 | | 2018-01-20 05:55:42 | 2018-01-20 05:55:46 | 2018-01-20 06:19:45 | 0:23:59 | 0:17:41 | 0:06:18 | vps | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
| fail | 2093609 | | 2018-01-20 03:59:53 | 2018-01-20 04:00:01 | 2018-01-20 04:32:00 | 0:31:59 | 0:20:35 | 0:11:24 | vps | master | centos | 7.4 | ceph-deploy/ceph-volume/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/centos_latest.yaml tasks/rbd_import_export.yaml} | 4 |
| dead | 2084248 | | 2018-01-17 23:22:02 | 2018-01-17 23:22:03 | 2018-01-17 23:42:02 | 0:19:59 | | | vps | master | centos | 7.4 | ceph-deploy/ceph-volume/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/centos_latest.yaml tasks/rbd_import_export.yaml} | 4 |
| pass | 2043131 | | 2018-01-08 18:03:48 | 2018-01-08 18:04:08 | 2018-01-08 18:44:07 | 0:39:59 | 0:12:58 | 0:27:01 | vps | master | ubuntu | 14.04 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_14.04.yaml objectstore/bluestore.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
| fail | 2043122 | | 2018-01-08 18:03:42 | 2018-01-08 18:04:06 | 2018-01-08 18:24:04 | 0:19:58 | 0:15:01 | 0:04:57 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
| pass | 2042043 | | 2018-01-08 05:00:45 | 2018-01-08 05:36:03 | 2018-01-08 06:36:03 | 1:00:00 | 0:23:09 | 0:36:51 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} | 3 |
| pass | 2042009 | | 2018-01-08 05:00:35 | 2018-01-08 05:06:01 | 2018-01-08 05:40:01 | 0:34:00 | 0:20:50 | 0:13:10 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} | 3 |
| pass | 2041994 | | 2018-01-08 04:24:02 | 2018-01-08 11:02:26 | 2018-01-08 14:44:30 | 3:42:04 | 1:33:34 | 2:08:30 | vps | master | centos | 7.4 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-workload.yaml 6-luminous-with-mgr.yaml 6.5-crush-compat.yaml 7-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} 8-jewel-workload.yaml distros/centos_latest.yaml} | 4 |
| pass | 2041985 | | 2018-01-08 04:24:00 | 2018-01-08 10:46:02 | 2018-01-08 12:54:04 | 2:08:02 | 1:47:32 | 0:20:30 | vps | master | centos | 7.4 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-workload.yaml 6-luminous-with-mgr.yaml 6.5-crush-compat.yaml 7-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} 8-jewel-workload.yaml distros/centos_latest.yaml} | 4 |
| fail | 2041827 | | 2018-01-08 04:05:47 | 2018-01-08 06:35:50 | 2018-01-08 10:15:55 | 3:40:05 | 3:21:01 | 0:19:04 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_latest.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_on.yaml 4-tasks/rest.yaml} | 3 |
| fail | 2041821 | | 2018-01-08 04:05:44 | 2018-01-08 06:18:05 | 2018-01-08 06:48:04 | 0:29:59 | | | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_latest.yaml 2-ceph/ceph_ansible.yaml 3-config/dmcrypt_off.yaml 4-tasks/rest.yaml} | 3 |
| pass | 2041786 | | 2018-01-08 03:59:41 | 2018-01-08 04:23:34 | 2018-01-08 05:13:34 | 0:50:00 | 0:13:20 | 0:36:40 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
| pass | 2041782 | | 2018-01-08 03:59:38 | 2018-01-08 04:05:49 | 2018-01-08 04:55:50 | 0:50:01 | 0:12:41 | 0:37:20 | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |
| fail | 2040658 | | 2018-01-08 02:25:37 | 2018-01-08 02:25:39 | 2018-01-08 04:37:44 | 2:12:05 | 2:04:59 | 0:07:06 | vps | master | ubuntu | 16.04 | upgrade:luminous-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-ceph-install/luminous.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml objectstore/bluestore.yaml thrashosds-health.yaml} | 3 |

Failure Reasons:

- 2117841: ceph-deploy: Failed to zap osds
- 2116743: Command failed on vpm077 with status 5: `sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target`
- 2103420: ceph-deploy: Failed to zap osds
- 2094705: ceph-deploy: Failed to zap osds
- 2093609: Command failed on vpm157 with status 5: `sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target`
- 2043122: Command failed on vpm157 with status 1: `sudo tar cz -f /tmp/tmpZxwqu2 -C /var/lib/ceph/mon -- .`
- 2041827: Command failed (workunit test rest/test-restful.sh) on vpm067 with status 124: `mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rest/test-restful.sh`
- 2041821: Could not reconnect to ubuntu@vpm077.front.sepia.ceph.com
- 2040658: Command failed (workunit test rbd/test_librbd.sh) on vpm089 with status 1: `mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd.sh`
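Two of the failed jobs above end in the same shell fallback chain (`sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target`): teuthology tries the Upstart, SysV init, and systemd ways of stopping Ceph in turn, and `||` runs each candidate only when the previous one exited non-zero, so the whole chain fails only when every init system refused. A minimal sketch of the idiom, using harmless stand-in commands (assumptions: `false` stands in for the two unavailable init systems, `echo` for the one that works) so it runs without Ceph or root:

```shell
#!/bin/sh
# '||' short-circuits: the right-hand command runs only if the left-hand
# one exited non-zero. The chain's exit status is that of the last command
# actually attempted -- here the final echo, so the function succeeds.
stop_all_services() {
    false \
        || false \
        || echo "stopped via the last fallback"
}
stop_all_services
```

In the teuthology failures, status 5 means even the final `systemctl stop` candidate failed, which is why the whole task errored out.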