Name: vpm147.front.sepia.ceph.com
Machine Type: vps
Up: False
Locked: False
Locked Since:
Locked By:
OS Type: ubuntu
OS Version: 16.04
Arch: x86_64
Description: None
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 1365394 2017-07-06 02:26:03 2017-07-06 02:26:04 2017-07-06 14:28:37 12:02:33 vps master ubuntu 16.04 upgrade:kraken-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml} 3
pass 1363263 2017-07-05 05:01:31 2017-07-05 07:08:28 2017-07-05 11:44:35 4:36:07 2:14:04 2:22:03 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
pass 1363236 2017-07-05 05:01:25 2017-07-05 05:46:33 2017-07-05 09:26:37 3:40:04 2:12:43 1:27:21 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
fail 1363197 2017-07-05 05:01:15 2017-07-05 05:08:13 2017-07-05 07:04:16 1:56:03 1:46:05 0:09:58 vps master ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

failed during ceph-deploy cmd: osd prepare vpm147:/dev/vdb , ec=1
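
The "ec=1" is the exit status of the underlying ceph-deploy call on the target node. A minimal sketch of reproducing that step by hand, assuming the pre-luminous HOST:DISK syntax these runs used and that /dev/vdb is the node's spare data disk:

    # Prepare /dev/vdb on vpm147 as an OSD (old ceph-deploy HOST:DISK form).
    # A non-zero exit status here is what the job reports as ec=1.
    ceph-deploy osd prepare vpm147:/dev/vdb
    echo "exit code: $?"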

pass 1363157 2017-07-05 04:50:33 2017-07-05 04:57:08 2017-07-05 05:17:08 0:20:00 0:11:28 0:08:32 vps master ubuntu 14.04 ceph-deploy/basic/{ceph-deploy-overrides/disable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_14.04.yaml tasks/ceph-admin-commands.yaml} 2
pass 1362881 2017-07-05 04:10:33 2017-07-05 04:11:08 2017-07-05 04:39:08 0:28:00 0:22:11 0:05:49 vps master ubuntu 14.04 ceph-disk/basic/{distros/ubuntu_14.04.yaml tasks/ceph-disk.yaml} 2
fail 1362381 2017-07-05 03:29:36 2017-07-05 03:30:01 2017-07-05 05:00:02 1:30:01 0:12:06 1:17:55 vps master centos 7.3 upgrade:hammer-jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-hammer-install-and-upgrade-to-jewel/hammer-to-jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml} 3
Failure Reason:

Command failed on vpm147 with status 22: 'sudo ceph osd create 7afd5fb9-939a-4e7c-ab49-82d44e686ddc 3'
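
Status 22 is EINVAL from the monitor. The failing call is the pre-luminous "ceph osd create <uuid> <id>" form, presumably issued while the thrasher recreated an OSD under a fixed id. A sketch; the cause named in the comment is an assumption, not something this log confirms:

    # Recreate an OSD entry by UUID, requesting the specific id 3.
    # Exit status 22 (EINVAL) means the monitor rejected the arguments;
    # one plausible cause is id 3 already existing under a different UUID.
    sudo ceph osd create 7afd5fb9-939a-4e7c-ab49-82d44e686ddc 3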

pass 1361887 2017-07-05 02:27:18 2017-07-05 02:27:20 2017-07-05 04:09:21 1:42:01 0:54:36 0:47:25 vps master ubuntu 16.04 upgrade:kraken-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml} 3
fail 1361871 2017-07-05 02:27:07 2017-07-05 02:27:08 2017-07-05 02:57:08 0:30:00 0:22:17 0:07:43 vps master ubuntu 14.04 upgrade:kraken-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload.yaml distros/ubuntu_14.04.yaml objectstore/filestore-xfs.yaml} 3
Failure Reason:

reached maximum tries (50) after waiting for 300 seconds
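
This is teuthology's bounded wait giving up: 50 tries at a 6-second interval accounts for the 300 seconds. A shell sketch of the same pattern, where check_ready is a hypothetical stand-in for whatever condition the task was actually polling:

    # Bounded wait: 50 tries, 6 s apart = 300 s, matching the message above.
    # check_ready is hypothetical; the real polled condition varies by task.
    for try in $(seq 1 50); do
        check_ready && exit 0
        sleep 6
    done
    echo "reached maximum tries (50) after waiting for 300 seconds" >&2
    exit 1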

pass 1360809 2017-07-04 15:05:55 2017-07-04 15:06:02 2017-07-04 15:24:03 0:18:01 0:11:22 0:06:39 vps master ubuntu 14.04 upgrade:client-upgrade/infernalis-client-x/rbd/{0-cluster/start.yaml 1-install/infernalis-client-x.yaml 2-workload/rbd_notification_tests.yaml distros/ubuntu_14.04.yaml} 2
pass 1359213 2017-07-04 05:15:42 2017-07-04 07:02:36 2017-07-04 07:54:36 0:52:00 0:16:31 0:35:29 vps master ubuntu 16.04 ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} 3
fail 1358157 2017-07-04 05:02:29 2017-07-04 05:20:06 2017-07-04 07:30:15 2:10:09 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm037.front.sepia.ceph.com
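
This reason usually means teuthology could not get SSH back to a node after a mid-job reboot. A sketch of the kind of reconnect probe involved; the ssh options shown are assumptions, not the framework's exact invocation:

    # Probe SSH to the rebooted node; the job is marked failed when checks
    # like this never succeed within the allowed window.
    ssh -o ConnectTimeout=10 -o BatchMode=yes \
        ubuntu@vpm037.front.sepia.ceph.com true && echo reachable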

pass 1358124 2017-07-04 05:02:21 2017-07-04 05:02:37 2017-07-04 10:12:43 5:10:06 2:12:10 2:57:56 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
fail 1358015 2017-07-04 04:22:04 2017-07-04 04:22:25 2017-07-04 07:20:28 2:58:03 2:14:48 0:43:15 vps master ubuntu 14.04 upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml kraken.yaml} 3
Failure Reason:

Command failed on vpm015 with status 1: "SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.1.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'"

fail 1357445 2017-07-04 02:26:47 2017-07-04 02:26:48 2017-07-04 05:00:52 2:34:04 2:23:20 0:10:44 vps master centos 7.3 upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml} 3
Failure Reason:

'default_idle_timeout'
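
A failure reason that is nothing but a quoted name is usually the string form of an unhandled Python KeyError escaping the task, here presumably a lookup of the rgw default_idle_timeout config key. The formatting is easy to confirm:

    # str() of a bare KeyError is just the repr of the missing key, which
    # is why the entire failure reason reads as 'default_idle_timeout'.
    python -c "print(str(KeyError('default_idle_timeout')))"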

pass 1357036 2017-07-03 21:10:20 2017-07-03 21:10:24 2017-07-03 21:26:24 0:16:00 0:10:45 0:05:15 vps master ubuntu 14.04 upgrade:client-upgrade/infernalis-client-x/basic/{0-cluster/start.yaml 1-install/infernalis-client-x.yaml 2-workload/rbd_cli_import_export.yaml distros/ubuntu_14.04.yaml} 2
pass 1355632 2017-07-03 05:02:08 2017-07-03 05:52:38 2017-07-03 09:38:43 3:46:05 2:34:11 1:11:54 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
fail 1355598 2017-07-03 05:02:00 2017-07-03 05:14:01 2017-07-03 12:08:06 6:54:05 1:36:56 5:17:09 vps master centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes
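
The task polls cluster health until it reports HEALTH_OK and gives up after 15 minutes; job 1355510 at the bottom of this list fails the same check. An equivalent wait sketched in shell, with the 10-second poll interval being an assumption:

    # Wait up to 15 minutes (900 s) for the cluster to settle.
    timeout 900 bash -c 'until ceph health | grep -q HEALTH_OK; do sleep 10; done' ||
        echo "ceph health was unable to get HEALTH_OK after waiting 15 minutes"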

fail 1355532 2017-07-03 04:21:12 2017-07-03 10:28:32 2017-07-03 14:38:37 4:10:05 2:14:50 1:55:15 vps master ubuntu 16.04 upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml kraken.yaml} 3
Failure Reason:

Command failed on vpm115 with status 1: "SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.1.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'"

fail 1355510 2017-07-03 03:59:45 2017-07-03 09:37:01 2017-07-03 10:13:00 0:35:59 0:28:35 0:07:24 vps master ubuntu 16.04 ceph-deploy/basic/{ceph-deploy-overrides/ceph_deploy_dmcrypt.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} 2
Failure Reason:

ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes