Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
vpm051.front.sepia.ceph.com | vps | False | False | | | ubuntu | 16.04 | x86_64 | None
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine Type | Teuthology Branch | OS Type | OS Version | Description | Nodes
dead | 1277186 | 2017-06-11 02:25:55 | 2017-06-11 02:26:06 | 2017-06-11 14:28:37 | 12:02:31 | | | vps | master | ubuntu | 16.04 | upgrade:kraken-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml} | 3
dead | 1274210 | 2017-06-10 02:25:45 | 2017-06-10 02:25:48 | 2017-06-10 14:28:11 | 12:02:23 | | | vps | master | ubuntu | 14.04 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml objectstore/bluestore.yaml} | 3
fail | 1272740 | 2017-06-09 17:43:25 | 2017-06-09 17:44:12 | 2017-06-09 21:00:15 | 3:16:03 | 2:34:15 | 0:41:48 | vps | master | ubuntu | 14.04 | upgrade:kraken-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml objectstore/bluestore.yaml} | 3
Failure Reason: need more than 0 values to unpack
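Several jobs in this run fail with "need more than 0 values to unpack". That is the `ValueError` message Python 2 produces when sequence unpacking is attempted on an empty iterable (Python 3 phrases the same error as "not enough values to unpack"). A minimal sketch of the failure mode, not taken from the teuthology code itself:

```python
# Reproduce the "values to unpack" ValueError seen in the failure reasons.
# Python 2 renders it as "need more than 0 values to unpack";
# Python 3 as "not enough values to unpack (expected 2, got 0)".
try:
    first, second = []  # unpacking an empty sequence
except ValueError as exc:
    print(exc)
```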

fail | 1272734 | 2017-06-09 17:43:21 | 2017-06-09 17:44:11 | 2017-06-09 17:56:10 | 0:11:59 | | | vps | master | ubuntu | 16.04 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml} | 3
Failure Reason: Could not reconnect to ubuntu@vpm149.front.sepia.ceph.com

pass | 1272289 | 2017-06-09 05:00:35 | 2017-06-09 05:00:59 | 2017-06-09 09:01:03 | 4:00:04 | 1:38:36 | 2:21:28 | vps | master | centos | 7.3 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4
pass | 1272278 | 2017-06-09 04:20:38 | 2017-06-09 04:20:49 | 2017-06-09 07:06:51 | 2:46:02 | 2:38:04 | 0:07:58 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3
fail | 1272244 | 2017-06-09 03:25:38 | 2017-06-09 03:25:54 | 2017-06-09 03:41:53 | 0:15:59 | 0:07:53 | 0:08:06 | vps | master | | | upgrade:hammer-jewel-x/tiering/{0-cluster/start.yaml 1-install-hammer-and-upgrade-to-jewel/hammer-to-jewel.yaml 2-setup-cache-tiering/{0-create-base-tier/create-ec-pool.yaml 1-create-cache-tier.yaml} 3-upgrade.yaml} | 3
Failure Reason: Command failed on vpm153 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph-mds=0.94.10-1xenial rbd-fuse=0.94.10-1xenial librbd1=0.94.10-1xenial ceph-fuse=0.94.10-1xenial python-ceph=0.94.10-1xenial ceph-common=0.94.10-1xenial libcephfs-java=0.94.10-1xenial ceph=0.94.10-1xenial libcephfs-jni=0.94.10-1xenial ceph-test=0.94.10-1xenial radosgw=0.94.10-1xenial librados2=0.94.10-1xenial'

fail | 1272190 | 2017-06-09 01:16:54 | 2017-06-09 01:17:18 | 2017-06-09 02:09:17 | 0:51:59 | 0:41:34 | 0:10:25 | vps | master | centos | 7.3 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-replicated-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/centos_7.3.yaml} | 3
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds

pass | 1270071 | 2017-06-08 05:10:37 | 2017-06-08 09:05:27 | 2017-06-08 10:03:27 | 0:58:00 | 0:30:53 | 0:27:07 | vps | master | centos | 7.3 | ceph-disk/basic/{distros/centos_latest.yaml tasks/ceph-disk.yaml} | 2
pass | 1269516 | 2017-06-08 05:01:46 | 2017-06-08 06:21:19 | 2017-06-08 12:23:27 | 6:02:08 | 2:12:45 | 3:49:23 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} | 3
pass | 1269505 | 2017-06-08 05:01:40 | 2017-06-08 05:27:13 | 2017-06-08 09:27:17 | 4:00:04 | 2:12:18 | 1:47:46 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} | 3
fail | 1269503 | 2017-06-08 05:01:39 | 2017-06-08 05:17:12 | 2017-06-08 07:09:14 | 1:52:02 | 1:11:40 | 0:40:22 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3
Failure Reason: Command failed on vpm051 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove'

fail | 1269468 | 2017-06-08 03:27:01 | 2017-06-08 03:27:10 | 2017-06-08 05:53:12 | 2:26:02 | 0:35:03 | 1:50:59 | vps | master | ubuntu | 14.04 | upgrade:hammer-jewel-x/parallel/{0-cluster/start.yaml 1-hammer-jewel-install/hammer-jewel.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 3.5-finish.yaml 4-jewel.yaml 5-hammer-jewel-x-upgrade/hammer-jewel-x.yaml 6-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 7-upgrade-sequence/upgrade-by-daemon.yaml 8-kraken.yaml 9-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/ubuntu_14.04.yaml} | 3
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds

fail | 1269403 | 2017-06-08 02:26:46 | 2017-06-08 02:27:32 | 2017-06-08 05:11:35 | 2:44:03 | 2:31:53 | 0:12:10 | vps | master | centos | 7.3 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml} | 3
Failure Reason: need more than 0 values to unpack

fail | 1267208 | 2017-06-07 05:02:20 | 2017-06-07 08:02:00 | 2017-06-07 12:30:07 | 4:28:07 | 1:48:55 | 2:39:12 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} | 3
Failure Reason: need more than 0 values to unpack

pass | 1267205 | 2017-06-07 05:02:18 | 2017-06-07 07:43:16 | 2017-06-07 16:07:26 | 8:24:10 | 2:07:36 | 6:16:34 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} | 3
pass | 1267174 | 2017-06-07 04:21:29 | 2017-06-07 15:01:32 | 2017-06-07 18:45:36 | 3:44:04 | 2:29:49 | 1:14:15 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml} | 3
pass | 1267144 | 2017-06-07 03:48:17 | 2017-06-07 10:31:26 | 2017-06-07 13:55:30 | 3:24:04 | 1:21:18 | 2:02:46 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml} | 3
pass | 1267127 | 2017-06-07 03:48:06 | 2017-06-07 08:20:18 | 2017-06-07 10:38:17 | 2:17:59 | 1:28:39 | 0:49:20 | vps | master | centos | 7.3 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.5-final-scrub.yaml 2-workload/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml} | 3
fail | 1267116 | 2017-06-07 03:47:57 | 2017-06-07 04:42:02 | 2017-06-07 09:04:07 | 4:22:05 | 4:03:42 | 0:18:23 | vps | master | centos | 7.3 | upgrade:jewel-x/point-to-point-x/{distros/centos_7.3.yaml point-to-point-upgrade.yaml} | 3
Failure Reason: Command failed (workunit test rados/test-upgrade-v11.0.0.sh) on vpm051 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=jewel TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test-upgrade-v11.0.0.sh'
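The status 124 in the last failure is the exit code GNU coreutils `timeout` uses to signal that the wrapped command (here the `test-upgrade-v11.0.0.sh` workunit run under `timeout 3h`) was killed for exceeding its time limit, as opposed to failing on its own. A small sketch of detecting that case from Python, assuming a Linux host with coreutils installed:

```python
import subprocess

# GNU `timeout` exits with status 124 when the wrapped command is
# killed for running past its limit (here: `sleep 5` limited to 0.2s).
result = subprocess.run(["timeout", "0.2", "sleep", "5"])
if result.returncode == 124:
    print("command timed out")
```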