Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
---|---|---|---|---|---|---|---|---|---
vpm185.front.sepia.ceph.com | vps | False | False | | | ubuntu | 14.04 | x86_64 | None
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
dead | 942276 | | 2017-03-25 02:25:44 | 2017-03-25 02:25:45 | 2017-03-25 03:43:46 | 1:18:01 | 1:10:56 | 0:07:05 | vps | master | ubuntu | 14.04 | upgrade:kraken-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/ubuntu_14.04.yaml objectstore/filestore.yaml} | 3
pass | 941231 | | 2017-03-24 21:49:52 | 2017-03-24 21:49:53 | 2017-03-24 22:17:53 | 0:28:00 | 0:21:44 | 0:06:16 | vps | master | ubuntu | 14.04 | ceph-disk/basic/{distros/ubuntu_14.04.yaml tasks/ceph-disk.yaml} | 2
fail | 938476 | | 2017-03-24 02:29:13 | 2017-03-24 02:29:14 | 2017-03-24 06:37:19 | 4:08:05 | 3:57:34 | 0:10:31 | vps | master | centos | 7.3 | upgrade:kraken-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml objectstore/bluestore.yaml} | 3
pass | 938321 | | 2017-03-24 01:16:42 | 2017-03-24 01:17:14 | 2017-03-24 02:29:15 | 1:12:01 | 1:02:55 | 0:09:06 | vps | master | centos | 7.3 | upgrade:hammer-x/tiering/{0-cluster/start.yaml 1-hammer-install/hammer.yaml 2-setup-cache-tiering/{0-create-base-tier/create-ec-pool.yaml 1-create-cache-tier/create-cache-tier.yaml} 3-upgrade/upgrade.yaml 4-finish-upgrade/flip-success.yaml distros/centos_7.3.yaml} | 3
fail | 936753 | | 2017-03-23 05:15:30 | 2017-03-23 08:46:11 | 2017-03-23 09:34:12 | 0:48:01 | 0:09:23 | 0:38:38 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/rbd_import_export.yaml} | 3
pass | 936559 | | 2017-03-23 05:10:22 | 2017-03-23 08:33:52 | 2017-03-23 09:17:52 | 0:44:00 | 0:34:25 | 0:09:35 | vps | master | ubuntu | 14.04 | ceph-disk/basic/{distros/ubuntu_14.04.yaml tasks/ceph-disk.yaml} | 2
pass | 936216 | | 2017-03-23 05:02:22 | 2017-03-23 07:55:01 | 2017-03-23 08:41:02 | 0:46:01 | 0:20:19 | 0:25:42 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} fs/btrfs.yaml tasks/rgw_ec_s3tests.yaml} | 3
pass | 936197 | | 2017-03-23 05:02:15 | 2017-03-23 07:21:56 | 2017-03-23 10:54:06 | 3:32:10 | 0:58:28 | 2:33:42 | vps | master | ubuntu | 16.04 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distro/ubuntu_16.04.yaml tasks/systemd.yaml} | 4
dead | 936195 | | 2017-03-23 05:02:14 | 2017-03-23 07:17:01 | 2017-03-23 08:15:02 | 0:58:01 | 0:25:08 | 0:32:53 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} fs/btrfs.yaml tasks/rados_cache_snaps.yaml} | 3
pass | 936179 | | 2017-03-23 05:02:08 | 2017-03-23 06:28:06 | 2017-03-23 07:46:07 | 1:18:01 | 0:28:11 | 0:49:50 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} fs/btrfs.yaml tasks/kclient_workunit_direct_io.yaml} | 3
fail | 936144 | | 2017-03-23 04:22:03 | 2017-03-23 08:16:00 | 2017-03-23 09:50:01 | 1:34:01 | 0:08:38 | 1:25:23 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/point-to-point-x/{distros/ubuntu_latest.yaml point-to-point-upgrade.yaml} | 3
dead | 934603 | | 2017-03-23 02:25:33 | 2017-03-23 02:25:41 | 2017-03-23 07:11:48 | 4:46:07 | 4:39:44 | 0:06:23 | vps | master | ubuntu | 14.04 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml objectstore/filestore.yaml} | 3
dead | 930129 | | 2017-03-22 02:25:39 | 2017-03-22 02:25:40 | 2017-03-22 02:57:40 | 0:32:00 | 0:24:00 | 0:08:00 | vps | master | ubuntu | 16.04 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml objectstore/bluestore.yaml} | 3
pass | 925830 | | 2017-03-21 02:25:45 | 2017-03-21 02:25:46 | 2017-03-21 06:17:52 | 3:52:06 | 3:43:49 | 0:08:17 | vps | master | ubuntu | 16.04 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_latest.yaml objectstore/filestore.yaml} | 3
pass | 922154 | | 2017-03-20 02:25:39 | 2017-03-20 02:25:41 | 2017-03-20 07:19:48 | 4:54:07 | 4:46:03 | 0:08:04 | vps | master | ubuntu | 14.04 | upgrade:kraken-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-kraken-install/kraken.yaml 2-workload/{blogbench.yaml ec-rados-default.yaml rados_api.yaml rados_loadgenbig.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/ubuntu_14.04.yaml objectstore/filestore.yaml} | 3
fail | 921652 | | 2017-03-19 05:02:55 | 2017-03-19 07:06:03 | 2017-03-19 09:36:06 | 2:30:03 | | | vps | master | ubuntu | 16.04 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distro/ubuntu_16.04.yaml tasks/systemd.yaml} | 4
pass | 921574 | | 2017-03-19 04:50:58 | 2017-03-19 08:42:18 | 2017-03-19 09:24:18 | 0:42:00 | 0:36:49 | 0:05:11 | vps | master | ubuntu | 14.04 | ceph-deploy/basic/{ceph-deploy-overrides/ceph_deploy_dmcrypt.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_14.04.yaml tasks/ceph-admin-commands.yaml} | 2
pass | 921505 | | 2017-03-19 04:24:41 | 2017-03-19 08:26:52 | 2017-03-19 10:18:54 | 1:52:02 | 0:39:10 | 1:12:52 | vps | master | ubuntu | 16.04 | upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/ubuntu_latest.yaml} | 3
pass | 921420 | | 2017-03-19 04:15:36 | 2017-03-19 04:15:48 | 2017-03-19 08:43:53 | 4:28:05 | 0:39:03 | 3:49:02 | vps | master | centos | 7.3 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/centos_7.3.yaml 2-config/ceph_ansible.yaml 3-tasks/cls.yaml} | 3
pass | 921419 | | 2017-03-19 04:15:36 | 2017-03-19 04:15:48 | 2017-03-19 07:29:51 | 3:14:03 | 0:39:58 | 2:34:05 | vps | master | ubuntu | 16.04 | ceph-ansible/smoke/basic/{0-clusters/3-node.yaml 1-distros/ubuntu_16.04.yaml 2-config/ceph_ansible.yaml 3-tasks/ceph-admin-commands.yaml} | 3

Failure Reasons (by Job ID):

- 942276 (dead): failed to recover before timeout expired
- 938476 (fail): Command failed on vpm021 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone git://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd -- /home/ubuntu/cephtest/clone.client.0 && git checkout kraken'
- 936753 (fail): Command failed on vpm143 with status 2: "cd ~/ceph-ansible ; virtualenv venv ; source venv/bin/activate ; pip install --upgrade pip ; pip install 'setuptools>=11.3' ansible==2.2.1 ; ansible-playbook -vv -i inven.yml site.yml"
- 936195 (dead): SSH connection to vpm185 was lost: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=12.0.0-1652-g950abe1-1trusty ceph-mds=12.0.0-1652-g950abe1-1trusty ceph-mgr=12.0.0-1652-g950abe1-1trusty ceph-common=12.0.0-1652-g950abe1-1trusty ceph-fuse=12.0.0-1652-g950abe1-1trusty ceph-test=12.0.0-1652-g950abe1-1trusty radosgw=12.0.0-1652-g950abe1-1trusty python-ceph=12.0.0-1652-g950abe1-1trusty libcephfs2=12.0.0-1652-g950abe1-1trusty libcephfs-dev=12.0.0-1652-g950abe1-1trusty libcephfs-java=12.0.0-1652-g950abe1-1trusty libcephfs-jni=12.0.0-1652-g950abe1-1trusty librados2=12.0.0-1652-g950abe1-1trusty librbd1=12.0.0-1652-g950abe1-1trusty rbd-fuse=12.0.0-1652-g950abe1-1trusty'
- 936144 (fail): Command failed on vpm185 with status 100: u'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph-mds=10.2.0-1xenial rbd-fuse=10.2.0-1xenial librbd1=10.2.0-1xenial ceph-fuse=10.2.0-1xenial python-ceph=10.2.0-1xenial ceph-common=10.2.0-1xenial libcephfs-java=10.2.0-1xenial ceph=10.2.0-1xenial libcephfs-jni=10.2.0-1xenial ceph-test=10.2.0-1xenial radosgw=10.2.0-1xenial librados2=10.2.0-1xenial'
- 934603 (dead): Command failed (workunit test rbd/test_librbd_python.sh) on vpm007 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=kraken TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/test_librbd_python.sh'
- 930129 (dead): RepresenterError: cannot represent an object: libboost-thread1.58.0 (raised from yaml/representer.py in teuthology's virtualenv while serializing the job summary; the truncated traceback consists of repeated represent_data/represent_mapping/represent_sequence frames ending in represent_undefined at representer.py line 249)
- 921652 (fail): Could not reconnect to ubuntu@vpm123.front.sepia.ceph.com