Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
vpm193.front.sepia.ceph.com | vps | True | True | 2021-11-10 20:06:23.144145 | choffman@fedora | | | x86_64 | None |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 1249691 | | 2017-06-01 02:26:24 | 2017-06-01 02:26:57 | 2017-06-01 14:29:42 | 12:02:45 | | | vps | master | centos | 7.3 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/bluestore.yaml} | 3 |
fail | 1249060 | | 2017-06-01 01:41:47 | 2017-06-01 01:41:50 | 2017-06-01 01:57:48 | 0:15:58 | 0:13:39 | 0:02:19 | vps | master | | | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml} | 1 |

Failure Reason: "2017-06-01 01:52:58.637361 mon.0 172.21.2.193:6789/0 169 : cluster [WRN] MDS health message (mds.0): MDS in read-only mode" in cluster log

pass | 1247919 | | 2017-05-31 05:01:46 | 2017-05-31 07:28:48 | 2017-05-31 09:52:50 | 2:24:02 | 2:07:40 | 0:16:22 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 |
pass | 1247916 | | 2017-05-31 05:01:45 | 2017-05-31 07:12:20 | 2017-05-31 12:04:27 | 4:52:07 | 2:03:12 | 2:48:55 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} | 3 |
dead | 1247859 | | 2017-05-31 05:01:32 | 2017-05-31 05:08:08 | 2017-05-31 07:38:12 | 2:30:04 | 2:17:29 | 0:12:35 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} | 3 |

Failure Reason: [Errno 110] Connection timed out

fail | 1246577 | | 2017-05-31 02:27:35 | 2017-05-31 02:28:03 | 2017-05-31 05:14:06 | 2:46:03 | 2:35:52 | 0:10:11 | vps | master | centos | 7.3 | upgrade:kraken-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml} | 3 |

Failure Reason: need more than 0 values to unpack

fail | 1244344 | | 2017-05-30 05:55:39 | 2017-05-30 15:15:35 | 2017-05-30 15:33:35 | 0:18:00 | | | vps | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/ceph_deploy_dmcrypt.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 |

Failure Reason: Could not reconnect to ubuntu@vpm137.front.sepia.ceph.com

pass | 1243061 | | 2017-05-30 05:04:25 | 2017-05-30 05:04:38 | 2017-05-30 07:40:40 | 2:36:02 | 2:19:51 | 0:16:11 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} | 3 |
fail | 1243056 | | 2017-05-30 05:04:21 | 2017-05-30 05:04:36 | 2017-05-30 05:14:35 | 0:09:59 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} | 3 |

Failure Reason: Could not reconnect to ubuntu@vpm139.front.sepia.ceph.com

fail | 1242285 | | 2017-05-30 02:25:51 | 2017-05-30 02:26:05 | 2017-05-30 04:50:07 | 2:24:02 | 2:16:44 | 0:07:18 | vps | master | ubuntu | 14.04 | upgrade:kraken-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml objectstore/filestore-xfs.yaml} | 3 |

Failure Reason: Command failed on vpm193 with status 22: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph osd require-osd-release luminous'"

fail | 1241070 | | 2017-05-29 04:05:34 | 2017-05-29 04:08:14 | 2017-05-29 04:56:15 | 0:48:01 | 0:20:49 | 0:27:12 | vps | master | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3 |

Failure Reason: 'check health' reached maximum tries (6) after waiting for 90 seconds

pass | 1240998 | | 2017-05-29 03:59:28 | 2017-05-29 04:00:20 | 2017-05-29 04:26:19 | 0:25:59 | 0:19:09 | 0:06:50 | vps | master | centos | 7.3 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
fail | 1240891 | | 2017-05-29 03:26:05 | 2017-05-29 03:26:06 | 2017-05-29 03:42:05 | 0:15:59 | 0:09:17 | 0:06:42 | vps | master | ubuntu | 16.04 | upgrade:hammer-jewel-x/parallel/{0-cluster/start.yaml 1-hammer-jewel-install/hammer-jewel.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 3.5-finish.yaml 4-jewel.yaml 5-hammer-jewel-x-upgrade/hammer-jewel-x.yaml 6-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 7-upgrade-sequence/upgrade-all.yaml 8-kraken.yaml 9-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/ubuntu_latest.yaml} | 3 |

Failure Reason: Command failed on vpm195 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph-mds=0.94.10-1xenial rbd-fuse=0.94.10-1xenial librbd1=0.94.10-1xenial ceph-fuse=0.94.10-1xenial python-ceph=0.94.10-1xenial ceph-common=0.94.10-1xenial libcephfs-java=0.94.10-1xenial ceph=0.94.10-1xenial libcephfs-jni=0.94.10-1xenial ceph-test=0.94.10-1xenial radosgw=0.94.10-1xenial librados2=0.94.10-1xenial'

fail | 1239497 | | 2017-05-29 02:27:32 | 2017-05-29 02:27:42 | 2017-05-29 02:53:42 | 0:26:00 | 0:15:53 | 0:10:07 | vps | master | centos | 7.3 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/bluestore.yaml} | 3 |

Failure Reason: Command failed on vpm193 with status 237: 'sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap'

pass | 1238186 | | 2017-05-28 04:22:04 | 2017-05-28 04:22:09 | 2017-05-28 07:16:12 | 2:54:03 | 2:46:22 | 0:07:41 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml} | 3 |
fail | 1238132 | | 2017-05-28 03:29:04 | 2017-05-28 03:29:12 | 2017-05-28 04:13:12 | 0:44:00 | 0:35:03 | 0:08:57 | vps | master | ubuntu | 14.04 | upgrade:hammer-jewel-x/parallel/{0-cluster/start.yaml 1-hammer-jewel-install/hammer-jewel.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 3.5-finish.yaml 4-jewel.yaml 5-hammer-jewel-x-upgrade/hammer-jewel-x.yaml 6-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 7-upgrade-sequence/upgrade-by-daemon.yaml 8-kraken.yaml 9-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/ubuntu_14.04.yaml} | 3 |

Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds

fail | 1237654 | | 2017-05-28 02:26:31 | 2017-05-28 02:26:33 | 2017-05-28 02:48:33 | 0:22:00 | 0:09:35 | 0:12:25 | vps | master | centos | 7.3 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml} | 3 |

Failure Reason: while scanning a plain scalar in "/tmp/teuth_ansible_failures_iyRgvP", line 1, column 165 found unexpected ':' in "/tmp/teuth_ansible_failures_iyRgvP", line 1, column 180 Please check http://pyyaml.org/wiki/YAMLColonInFlowContext for details.

pass | 1237134 | | 2017-05-27 05:57:26 | 2017-05-27 07:33:02 | 2017-05-27 07:55:02 | 0:22:00 | 0:10:45 | 0:11:15 | vps | master | ubuntu | 14.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_14.04.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 |
pass | 1236837 | | 2017-05-27 05:16:06 | 2017-05-27 06:57:49 | 2017-05-27 07:35:49 | 0:38:00 | 0:15:51 | 0:22:09 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_cli_tests.yaml} | 3 |
pass | 1236432 | | 2017-05-27 05:10:47 | 2017-05-27 06:39:00 | 2017-05-27 07:13:00 | 0:34:00 | 0:23:59 | 0:10:01 | vps | master | ubuntu | 14.04 | ceph-disk/basic/{distros/ubuntu_14.04.yaml tasks/ceph-disk.yaml} | 2 |