Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
---|---|---|---|---|---|---|---|---|---
vpm049.front.sepia.ceph.com | vps | False | False | | | ubuntu | 16.04 | x86_64 | None
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
pass | 1122834 | | 2017-05-11 05:16:04 | 2017-05-11 11:40:08 | 2017-05-11 14:50:12 | 3:10:04 | 0:15:39 | 2:54:25 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3
fail | 1121643 | | 2017-05-11 04:22:41 | 2017-05-11 10:06:23 | 2017-05-11 15:04:29 | 4:58:06 | | | vps | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_latest.yaml} | 3
dead | 1121107 | | 2017-05-11 02:26:07 | 2017-05-11 02:26:18 | 2017-05-11 14:28:48 | 12:02:30 | | | vps | master | ubuntu | 16.04 | upgrade:kraken-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml} | 3
fail | 1120780 | | 2017-05-10 20:55:36 | 2017-05-10 20:55:54 | 2017-05-10 21:21:52 | 0:25:58 | 0:15:55 | 0:10:03 | vps | master | ubuntu | 16.04 | smoke/basic/{clusters/fixed3.yaml config/config.yaml tasks/rados_api_tests.yaml} | 4
pass | 1120289 | | 2017-05-10 05:02:38 | 2017-05-10 09:36:01 | 2017-05-10 15:54:08 | 6:18:07 | 0:23:41 | 5:54:26 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} | 3
pass | 1120273 | | 2017-05-10 05:02:34 | 2017-05-10 09:01:05 | 2017-05-10 14:57:13 | 5:56:08 | 0:24:11 | 5:31:57 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} | 3
fail | 1120261 | | 2017-05-10 05:02:31 | 2017-05-10 08:36:07 | 2017-05-10 15:24:15 | 6:48:08 | 0:17:22 | 6:30:46 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3
pass | 1120256 | | 2017-05-10 05:02:30 | 2017-05-10 08:24:12 | 2017-05-10 10:04:14 | 1:40:02 | 0:58:04 | 0:41:58 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} | 3
fail | 1120244 | | 2017-05-10 05:02:27 | 2017-05-10 07:40:03 | 2017-05-10 08:34:03 | 0:54:00 | 0:18:30 | 0:35:30 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} | 3
pass | 1120238 | | 2017-05-10 05:02:25 | 2017-05-10 06:26:02 | 2017-05-10 10:32:10 | 4:06:08 | 0:18:24 | 3:47:44 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3
pass | 1120235 | | 2017-05-10 05:02:25 | 2017-05-10 06:10:22 | 2017-05-10 08:12:25 | 2:02:03 | 0:45:35 | 1:16:28 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3
pass | 1119985 | | 2017-05-10 04:16:12 | 2017-05-10 10:00:51 | 2017-05-10 16:21:02 | 6:20:11 | 0:15:54 | 6:04:17 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3
pass | 1119983 | | 2017-05-10 04:16:11 | 2017-05-10 04:51:00 | 2017-05-10 08:59:05 | 4:08:05 | 0:18:48 | 3:49:17 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3
pass | 1119900 | | 2017-05-10 03:27:46 | 2017-05-10 03:27:47 | 2017-05-10 07:19:53 | 3:52:06 | 3:16:36 | 0:35:30 | vps | master | centos | 7.3 | upgrade:hammer-jewel-x/parallel/{0-cluster/start.yaml 1-hammer-jewel-install/hammer-jewel.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-osd-mds-mon.yaml 3.5-finish.yaml 4-jewel.yaml 5-hammer-jewel-x-upgrade/hammer-jewel-x.yaml 6-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 7-upgrade-sequence/upgrade-by-daemon.yaml 8-kraken.yaml 9-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/centos_latest.yaml} | 3
dead | 1118837 | | 2017-05-10 02:26:07 | 2017-05-10 02:26:14 | 2017-05-10 14:28:34 | 12:02:20 | | | vps | master | ubuntu | 16.04 | upgrade:kraken-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-ec-workload.yaml 5-finish-upgrade.yaml 6-luminous-with-mgr.yaml 7-final-workload.yaml distros/ubuntu_latest.yaml objectstore/bluestore.yaml} | 3
dead | 1116262 | | 2017-05-09 05:03:43 | 2017-05-09 15:30:26 | 2017-05-10 03:33:11 | 12:02:45 | | | vps | master | | | smoke/basic/{bluestore.yaml clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} | 3
pass | 1116228 | | 2017-05-09 05:03:28 | 2017-05-09 14:30:06 | 2017-05-09 16:00:08 | 1:30:02 | 0:26:50 | 1:03:12 | vps | master | | | smoke/basic/{bluestore.yaml clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/filestore-xfs.yaml tasks/rbd_api_tests.yaml} | 3
pass | 1116134 | | 2017-05-09 05:03:06 | 2017-05-09 11:19:50 | 2017-05-09 13:45:52 | 2:26:02 | 1:04:13 | 1:21:49 | vps | master | | | smoke/basic/{bluestore.yaml clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} | 3
pass | 1116072 | | 2017-05-09 05:02:51 | 2017-05-09 08:20:32 | 2017-05-09 09:06:33 | 0:46:01 | 0:28:09 | 0:17:52 | vps | master | | | smoke/basic/{bluestore.yaml clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore-comp.yaml tasks/rados_workunit_loadgen_mix.yaml} | 3
fail | 1116063 | | 2017-05-09 05:02:49 | 2017-05-09 08:07:55 | 2017-05-09 08:33:54 | 0:25:59 | 0:15:43 | 0:10:16 | vps | master | | | smoke/basic/{bluestore.yaml clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3

Failure Reasons:

- 1121643 (fail): Could not reconnect to ubuntu@vpm049.front.sepia.ceph.com
- 1120780 (fail): managers
- 1120261 (fail): Command failed (workunit test cls/test_cls_sdk.sh) on vpm049 with status 1: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'`
- 1120244 (fail): Command failed (workunit test libcephfs/test.sh) on vpm049 with status 1: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/libcephfs/test.sh'`
- 1116063 (fail): Command failed (workunit test cls/test_cls_sdk.sh) on vpm049 with status 1: `'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'`
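The three time columns are related: in every row that reports all of Runtime, Duration, and In Waiting, Runtime is the sum of the other two (e.g. 0:15:39 + 2:54:25 = 3:10:04 for job 1122834). A minimal sketch checking this against a few rows sampled from the table above; the column semantics are inferred from the data itself, not from pulpito documentation:

```python
from datetime import timedelta

def parse_hms(s):
    """Parse an H:MM:SS time string from the table into a timedelta."""
    h, m, sec = (int(part) for part in s.split(":"))
    return timedelta(hours=h, minutes=m, seconds=sec)

# (job id, runtime, duration, in-waiting) taken from rows above
# that report all three time columns
rows = [
    ("1122834", "3:10:04", "0:15:39", "2:54:25"),
    ("1120780", "0:25:58", "0:15:55", "0:10:03"),
    ("1120289", "6:18:07", "0:23:41", "5:54:26"),
    ("1119900", "3:52:06", "3:16:36", "0:35:30"),
]

for job, runtime, duration, waiting in rows:
    # Runtime should equal Duration + In Waiting for every sampled job
    assert parse_hms(runtime) == parse_hms(duration) + parse_hms(waiting), job
```

Rows without a Duration or In Waiting value (the dead jobs and the reconnect failure) only report total Runtime, so the check skips them.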