Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
vpm149.front.sepia.ceph.com | vps | False | False | | | ubuntu | 16.04 | x86_64 | None |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Failure Reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dead | 1365388 | | 2017-07-06 02:25:59 | 2017-07-06 02:26:00 | 2017-07-06 14:28:33 | 12:02:33 | | | vps | master | ubuntu | 16.04 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml objectstore/bluestore.yaml} | 3 | |
fail | 1364882 | | 2017-07-05 23:04:40 | 2017-07-06 00:51:28 | 2017-07-06 01:23:28 | 0:32:00 | 0:24:51 | 0:07:09 | vps | master | centos | 7.3 | rgw/hadoop-s3a/s3a-hadoop-v28.yaml | 4 | Command failed on vpm129 with status 1: `cd /home/ubuntu/cephtest/hadoop/hadoop-tools/hadoop-aws/ && /home/ubuntu/cephtest/apache-maven-3.3.9/bin/mvn -Dit.test=ITestS3A* -Dparallel-tests -Dscale -Dfs.s3a.scale.test.huge.filesize=128M verify` |
pass | 1363567 | | 2017-07-05 10:50:44 | 2017-07-05 10:50:52 | 2017-07-05 11:20:53 | 0:30:01 | 0:24:13 | 0:05:48 | vps | master | ubuntu | 14.04 | ceph-disk/basic/{distros/ubuntu_14.04.yaml tasks/ceph-disk.yaml} | 2 | |
fail | 1363254 | | 2017-07-05 05:01:29 | 2017-07-05 06:39:26 | 2017-07-05 07:51:27 | 1:12:01 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} | 3 | Could not reconnect to ubuntu@vpm149.front.sepia.ceph.com |
pass | 1363248 | | 2017-07-05 05:01:28 | 2017-07-05 06:10:05 | 2017-07-05 10:10:10 | 4:00:05 | 2:13:43 | 1:46:22 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} | 3 | |
fail | 1363075 | | 2017-07-05 04:21:16 | 2017-07-05 04:56:03 | 2017-07-05 07:42:12 | 2:46:09 | 2:17:25 | 0:28:44 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml kraken.yaml} | 3 | Command failed on vpm149 with status 1: `SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.1.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'` |
fail | 1361865 | | 2017-07-05 02:27:03 | 2017-07-05 02:27:04 | 2017-07-05 05:19:08 | 2:52:04 | 2:37:52 | 0:14:12 | vps | master | centos | 7.3 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_latest.yaml objectstore/bluestore.yaml} | 3 | `'default_idle_timeout'` |
pass | 1360805 | | 2017-07-04 15:05:52 | 2017-07-04 15:06:02 | 2017-07-04 15:26:03 | 0:20:01 | 0:12:22 | 0:07:39 | vps | master | ubuntu | 14.04 | upgrade:client-upgrade/firefly-client-x/basic/{0-cluster/start.yaml 1-install/firefly-client-x.yaml 2-workload/rbd_cli_import_export.yaml distros/ubuntu_14.04.yaml} | 3 | |
pass | 1359819 | | 2017-07-04 10:39:56 | 2017-07-04 10:40:02 | 2017-07-04 11:16:03 | 0:36:01 | 0:30:52 | 0:05:09 | vps | master | ubuntu | 16.04 | ceph-disk/basic/{distros/ubuntu_latest.yaml tasks/ceph-disk.yaml} | 2 | |
pass | 1359812 | | 2017-07-04 09:49:22 | 2017-07-04 09:49:24 | 2017-07-04 10:29:24 | 0:40:00 | 0:31:09 | 0:08:51 | vps | master | centos | 7.3 | ceph-disk/basic/{distros/centos_latest.yaml tasks/ceph-disk.yaml} | 2 | |
pass | 1358163 | | 2017-07-04 05:02:31 | 2017-07-04 05:28:32 | 2017-07-04 09:34:37 | 4:06:05 | 2:37:03 | 1:29:02 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} | 3 | |
fail | 1358126 | | 2017-07-04 05:02:22 | 2017-07-04 05:02:37 | 2017-07-04 06:52:39 | 1:50:02 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} | 3 | Could not reconnect to ubuntu@vpm149.front.sepia.ceph.com |
fail | 1358017 | | 2017-07-04 04:22:06 | 2017-07-04 04:22:25 | 2017-07-04 06:44:32 | 2:22:07 | 2:04:43 | 0:17:24 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_latest.yaml} | 3 | Command failed on vpm149 with status 1: `SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.0.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'` |
fail | 1357967 | | 2017-07-04 03:29:43 | 2017-07-04 03:29:44 | 2017-07-04 04:33:45 | 1:04:01 | 0:08:02 | 0:55:59 | vps | master | ubuntu | 16.04 | upgrade:hammer-jewel-x/parallel/{0-cluster/start.yaml 1-hammer-jewel-install/hammer-jewel.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-osd-mds-mon.yaml 3.5-finish.yaml 4-jewel.yaml 5-hammer-jewel-x-upgrade/hammer-jewel-x.yaml 6-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 7-upgrade-sequence/upgrade-all.yaml 8-luminous.yaml 9-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/ubuntu_latest.yaml} | 3 | Command failed on vpm177 with status 100: `sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph-mds=0.94.10-1xenial rbd-fuse=0.94.10-1xenial librbd1=0.94.10-1xenial ceph-fuse=0.94.10-1xenial python-ceph=0.94.10-1xenial ceph-common=0.94.10-1xenial libcephfs-java=0.94.10-1xenial ceph=0.94.10-1xenial libcephfs-jni=0.94.10-1xenial ceph-test=0.94.10-1xenial radosgw=0.94.10-1xenial librados2=0.94.10-1xenial` |
fail | 1357942 | | 2017-07-04 03:29:37 | 2017-07-04 03:29:38 | 2017-07-04 04:19:38 | 0:50:00 | 0:08:37 | 0:41:23 | vps | master | ubuntu | 16.04 | upgrade:hammer-jewel-x/parallel/{0-cluster/start.yaml 1-hammer-jewel-install/hammer-jewel.yaml 2-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 3.5-finish.yaml 4-jewel.yaml 5-hammer-jewel-x-upgrade/hammer-jewel-x.yaml 6-workload/{ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 7-upgrade-sequence/upgrade-all.yaml 8-luminous.yaml 9-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_s3tests.yaml} distros/ubuntu_latest.yaml} | 3 | Command failed on vpm149 with status 100: `sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph-mds=0.94.10-1xenial rbd-fuse=0.94.10-1xenial librbd1=0.94.10-1xenial ceph-fuse=0.94.10-1xenial python-ceph=0.94.10-1xenial ceph-common=0.94.10-1xenial libcephfs-java=0.94.10-1xenial ceph=0.94.10-1xenial libcephfs-jni=0.94.10-1xenial ceph-test=0.94.10-1xenial radosgw=0.94.10-1xenial librados2=0.94.10-1xenial` |
fail | 1357441 | | 2017-07-04 02:26:44 | 2017-07-04 02:26:45 | 2017-07-04 03:58:47 | 1:32:02 | 1:23:28 | 0:08:34 | vps | master | ubuntu | 14.04 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous-with-mgr.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml objectstore/filestore-xfs.yaml} | 3 | Command failed (workunit test cls/test_cls_rbd.sh) on vpm149 with status 139: `mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rbd.sh` |
fail | 1355675 | | 2017-07-03 05:02:17 | 2017-07-03 07:06:06 | 2017-07-03 11:14:10 | 4:08:04 | 2:19:34 | 1:48:30 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 | `'default_idle_timeout'` |
fail | 1355642 | | 2017-07-03 05:02:10 | 2017-07-03 06:07:30 | 2017-07-03 08:49:33 | 2:42:03 | 2:26:46 | 0:15:17 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3 | Command failed (workunit test cls/test_cls_sdk.sh) on vpm149 with status 1: `mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh` |
dead | 1355531 | | 2017-07-03 04:21:11 | 2017-07-03 10:18:26 | 2017-07-03 22:20:45 | 12:02:19 | | | vps | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-kraken.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml kraken.yaml} | 3 | |
fail | 1355523 | | 2017-07-03 04:05:42 | 2017-07-03 10:13:02 | 2017-07-03 11:49:03 | 1:36:01 | 0:23:05 | 1:12:56 | vps | master | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3 | Command failed (workunit test cls/test_cls_hello.sh) on vpm043 with status 1: `mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh` |