Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
vpm151.front.sepia.ceph.com | vps | False | False | | | ubuntu | 14.04 | x86_64 | None |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine Type | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 1365409 | | 2017-07-06 02:26:08 | 2017-07-06 02:26:09 | 2017-07-06 03:28:09 | 1:02:00 | 0:54:51 | 0:07:09 | vps | master | ubuntu | 14.04 | upgrade:kraken-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml objectstore/bluestore.yaml} | 3 | timed out waiting for admin_socket to appear after osd.5 restart |
dead | 1361875 | | 2017-07-05 02:27:10 | 2017-07-05 02:27:11 | 2017-07-05 14:29:38 | 12:02:27 | | | vps | master | ubuntu | 16.04 | upgrade:kraken-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml} | 3 | |
pass | 1360810 | | 2017-07-04 15:05:56 | 2017-07-04 15:06:02 | 2017-07-04 15:28:03 | 0:22:01 | 0:14:06 | 0:07:55 | vps | master | ubuntu | 14.04 | upgrade:client-upgrade/hammer-client-x/basic/{0-cluster/start.yaml 1-install/hammer-client-x.yaml 2-workload/rbd_api_tests.yaml distros/ubuntu_14.04.yaml} | 3 | |
pass | 1358815 | | 2017-07-04 05:10:37 | 2017-07-04 06:21:47 | 2017-07-04 07:03:47 | 0:42:00 | 0:33:59 | 0:08:01 | vps | master | centos | 7.3 | ceph-disk/basic/{distros/centos_latest.yaml tasks/ceph-disk.yaml} | 2 | |
pass | 1358102 | | 2017-07-04 05:02:15 | 2017-07-04 05:02:36 | 2017-07-04 09:36:42 | 4:34:06 | 2:25:56 | 2:08:10 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} | 3 | |
dead | 1357453 | | 2017-07-04 02:26:52 | 2017-07-04 02:26:53 | 2017-07-04 06:20:58 | 3:54:05 | 3:46:23 | 0:07:42 | vps | master | ubuntu | 14.04 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml objectstore/filestore-xfs.yaml} | 3 | Command failed (workunit test rados/load-gen-big.sh) on vpm151 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-big.sh' |
fail | 1355663 | | 2017-07-03 05:02:15 | 2017-07-03 06:42:09 | 2017-07-03 08:34:10 | 1:52:01 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 | Could not reconnect to ubuntu@vpm151.front.sepia.ceph.com |
dead | 1355541 | | 2017-07-03 04:21:18 | 2017-07-03 10:54:27 | 2017-07-03 12:54:29 | 2:00:02 | 1:28:58 | 0:31:04 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml kraken.yaml} | 3 | SSH connection to vpm151 was lost: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0' |
pass | 1355529 | | 2017-07-03 04:21:10 | 2017-07-03 10:15:06 | 2017-07-03 11:17:06 | 1:02:00 | 0:48:47 | 0:13:13 | vps | master | centos | 7.3 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/centos_latest.yaml} | 3 | |
fail | 1355500 | | 2017-07-03 03:59:39 | 2017-07-03 09:00:20 | 2017-07-03 10:12:21 | 1:12:01 | 0:26:20 | 0:45:41 | vps | master | ubuntu | 14.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_14.04.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
fail | 1355496 | | 2017-07-03 03:59:37 | 2017-07-03 08:46:52 | 2017-07-03 09:36:52 | 0:50:00 | 0:30:22 | 0:19:38 | vps | master | centos | 7.3 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 2 | ceph health was unable to get 'HEALTH_OK' after waiting 15 minutes |
fail | 1355443 | | 2017-07-03 03:29:40 | 2017-07-03 08:03:19 | 2017-07-03 08:53:19 | 0:50:00 | 0:11:01 | 0:38:59 | vps | master | centos | 7.3 | upgrade:hammer-jewel-x/parallel/{0-cluster/start.yaml 1-hammer-jewel-install/hammer-jewel.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-osd-mds-mon.yaml 3.5-finish.yaml 4-jewel.yaml 5-hammer-jewel-x-upgrade/hammer-jewel-x.yaml 6-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 7-upgrade-sequence/upgrade-all.yaml 8-luminous.yaml 9-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/centos_latest.yaml} | 3 | Command failed on vpm117 with status 22: 'sudo ceph osd create 9902ca31-788e-42dc-8c93-56758a2ca2cc 2' |
fail | 1355367 | | 2017-07-03 03:26:09 | 2017-07-03 04:35:16 | 2017-07-03 08:23:18 | 3:48:02 | 0:08:17 | 3:39:45 | vps | master | ubuntu | 16.04 | upgrade:hammer-jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-hammer-install-and-upgrade-to-jewel/hammer-to-jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml} | 3 | Command failed on vpm151 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph-mds=0.94.10-1xenial rbd-fuse=0.94.10-1xenial librbd1=0.94.10-1xenial ceph-fuse=0.94.10-1xenial python-ceph=0.94.10-1xenial ceph-common=0.94.10-1xenial libcephfs-java=0.94.10-1xenial ceph=0.94.10-1xenial libcephfs-jni=0.94.10-1xenial ceph-test=0.94.10-1xenial radosgw=0.94.10-1xenial librados2=0.94.10-1xenial' |
fail | 1354378 | | 2017-07-03 02:26:57 | 2017-07-03 02:27:53 | 2017-07-03 08:07:57 | 5:40:04 | 2:35:12 | 3:04:52 | vps | master | centos | 7.3 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml} | 3 | 'default_idle_timeout' |
pass | 1353945 | | 2017-07-03 01:16:44 | 2017-07-03 01:16:53 | 2017-07-03 05:20:52 | 4:03:59 | 3:54:12 | 0:09:47 | vps | master | centos | 7.3 | upgrade:hammer-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 0-tz-eastern.yaml 1-hammer-install/hammer.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-finish-upgrade/last-osds-and-monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_7.3.yaml} | 3 | |
pass | 1353782 | | 2017-07-02 05:02:09 | 2017-07-02 06:10:04 | 2017-07-02 09:58:08 | 3:48:04 | 2:33:14 | 1:14:50 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 | |
fail | 1353738 | | 2017-07-02 05:02:00 | 2017-07-02 05:18:11 | 2017-07-02 05:30:08 | 0:11:57 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} | 3 | Could not reconnect to ubuntu@vpm149.front.sepia.ceph.com |
pass | 1353621 | | 2017-07-02 04:21:09 | 2017-07-02 04:41:45 | 2017-07-02 07:05:47 | 2:24:02 | 1:28:21 | 0:55:41 | vps | master | ubuntu | 14.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_14.04.yaml point-to-point-upgrade.yaml} | 3 | |
fail | 1352935 | | 2017-07-02 03:29:35 | 2017-07-02 03:29:44 | 2017-07-02 05:19:45 | 1:50:01 | 0:12:20 | 1:37:41 | vps | master | centos | 7.3 | upgrade:hammer-jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-hammer-install-and-upgrade-to-jewel/hammer-to-jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml} | 3 | Command failed on vpm081 with status 22: 'sudo ceph osd create ff6dfb24-ab0d-4bc3-9b1c-8ddae94f6d5d 0' |
dead | 1352448 | | 2017-07-02 02:27:04 | 2017-07-02 02:27:51 | 2017-07-02 04:57:53 | 2:30:02 | 2:21:10 | 0:08:52 | vps | master | ubuntu | 14.04 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml objectstore/bluestore.yaml} | 3 | [Errno 113] No route to host |