User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-05-21 05:00:26 | 2017-05-21 05:02:37 | 2017-05-21 18:27:45 | 13:25:08 | smoke | master | vps | e3319da | 13 | 14 | 1 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1204146 | 2017-05-21 05:02:35 | 2017-05-21 05:02:36 | 2017-05-21 05:20:36 | 0:18:00 | 0:14:46 | 0:03:14 | vps | master | ubuntu | 16.04 | smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} | 1 |
Failure Reason: Command failed on vpm045 with status 32: 'sudo umount /dev/vdb1'
pass | 1204150 | 2017-05-21 05:02:36 | 2017-05-21 05:02:37 | 2017-05-21 06:24:38 | 1:22:01 | 0:23:22 | 0:58:39 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} | 3 |
pass | 1204151 | 2017-05-21 05:02:36 | 2017-05-21 05:15:59 | 2017-05-21 08:16:02 | 3:00:03 | 0:38:36 | 2:21:27 | vps | master | centos | 7.3 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 |
pass | 1204154 | 2017-05-21 05:02:37 | 2017-05-21 05:20:50 | 2017-05-21 07:18:51 | 1:58:01 | 0:21:36 | 1:36:25 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} | 3 |
fail | 1204157 | 2017-05-21 05:02:38 | 2017-05-21 05:31:06 | 2017-05-21 06:57:07 | 1:26:01 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm131.front.sepia.ceph.com
fail | 1204160 | 2017-05-21 05:02:38 | 2017-05-21 05:31:06 | 2017-05-21 06:49:07 | 1:18:01 | 0:09:03 | 1:08:58 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} | 3 |
Failure Reason: {'vpm173.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)', 'E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?'], 'changed': False, '_ansible_no_log': False, 'stdout': '', 'cache_updated': False, 'failed': True, 'stderr': 'E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?\n', 'invocation': {'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': None, 'force': False, 'name': 'ntp', 'package': ['ntp'], 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': None, 'update_cache': None, 'deb': None, 'only_upgrade': False, 'cache_valid_time': 0, 'default_release': None, 'install_recommends': None}}, 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'ntp\'\' failed: E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?\n', 'stdout_lines': [], 'cache_update_time': 1495348875}}
fail | 1204163 | 2017-05-21 05:02:39 | 2017-05-21 05:38:00 | 2017-05-21 06:36:00 | 0:58:00 | 0:16:56 | 0:41:04 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} | 3 |
Failure Reason: Command failed on vpm173 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 172.21.2.113:6789,172.21.2.77:6790,172.21.2.77:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'
fail | 1204166 | 2017-05-21 05:02:40 | 2017-05-21 05:46:00 | 2017-05-21 07:32:01 | 1:46:01 | 0:18:58 | 1:27:03 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 |
Failure Reason: Command failed on vpm123 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 172.21.2.193:6789,172.21.2.51:6790,172.21.2.51:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'
fail | 1204169 | 2017-05-21 05:02:40 | 2017-05-21 05:49:06 | 2017-05-21 06:47:06 | 0:58:00 | 0:16:38 | 0:41:22 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3 |
Failure Reason: Command failed on vpm093 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 172.21.2.131:6789,172.21.2.161:6790,172.21.2.161:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'
fail | 1204172 | 2017-05-21 05:02:41 | 2017-05-21 05:55:07 | 2017-05-21 06:51:07 | 0:56:00 | 0:17:36 | 0:38:24 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} | 3 |
Failure Reason: Command failed on vpm137 with status 110: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 172.21.2.65:6789,172.21.2.135:6790,172.21.2.135:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'
pass | 1204175 | 2017-05-21 05:02:42 | 2017-05-21 05:57:06 | 2017-05-21 06:31:06 | 0:34:00 | 0:24:30 | 0:09:30 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} | 3 |
fail | 1204178 | 2017-05-21 05:02:43 | 2017-05-21 06:01:03 | 2017-05-21 10:19:08 | 4:18:05 | 3:17:25 | 1:00:40 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} | 3 |
Failure Reason: Command failed (workunit test rados/test.sh) on vpm021 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 1204181 | 2017-05-21 05:02:43 | 2017-05-21 06:07:08 | 2017-05-21 06:45:08 | 0:38:00 | 0:30:39 | 0:07:21 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} | 3 |
pass | 1204184 | 2017-05-21 05:02:44 | 2017-05-21 06:11:06 | 2017-05-21 07:37:07 | 1:26:01 | 0:40:34 | 0:45:27 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} | 3 |
fail | 1204187 | 2017-05-21 05:02:45 | 2017-05-21 06:15:01 | 2017-05-21 06:25:00 | 0:09:59 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm031.front.sepia.ceph.com
fail | 1204190 | 2017-05-21 05:02:45 | 2017-05-21 06:15:49 | 2017-05-21 07:53:50 | 1:38:01 | 0:29:29 | 1:08:32 | vps | master | ubuntu | 16.04 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 |
Failure Reason: Command failed on vpm101 with status 1: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage sudo ceph --cluster ceph health'
fail | 1204193 | 2017-05-21 05:02:46 | 2017-05-21 06:24:43 | 2017-05-21 06:36:43 | 0:12:00 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm055.front.sepia.ceph.com
dead | 1204196 | 2017-05-21 05:02:47 | 2017-05-21 06:25:17 | 2017-05-21 18:27:45 | 12:02:28 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} | 3 |
pass | 1204199 | 2017-05-21 05:02:47 | 2017-05-21 06:30:02 | 2017-05-21 07:12:02 | 0:42:00 | 0:21:03 | 0:20:57 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} | 3 |
pass | 1204202 | 2017-05-21 05:02:48 | 2017-05-21 06:31:14 | 2017-05-21 07:21:14 | 0:50:00 | 0:30:54 | 0:19:06 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} | 3 |
fail | 1204205 | 2017-05-21 05:02:49 | 2017-05-21 06:36:21 | 2017-05-21 06:52:20 | 0:15:59 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm031.front.sepia.ceph.com
pass | 1204208 | 2017-05-21 05:02:49 | 2017-05-21 06:36:44 | 2017-05-21 07:14:44 | 0:38:00 | 0:17:14 | 0:20:46 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} | 3 |
pass | 1204211 | 2017-05-21 05:02:50 | 2017-05-21 06:44:49 | 2017-05-21 07:28:49 | 0:44:00 | 0:19:45 | 0:24:15 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} | 3 |
pass | 1204214 | 2017-05-21 05:02:51 | 2017-05-21 06:45:09 | 2017-05-21 07:35:09 | 0:50:00 | 0:28:00 | 0:22:00 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 |
fail | 1204217 | 2017-05-21 05:02:51 | 2017-05-21 06:46:48 | 2017-05-21 07:10:48 | 0:24:00 | 0:17:20 | 0:06:40 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} | 3 |
Failure Reason: Command failed on vpm085 with status 110: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rbd --user 0 -p rbd map testimage.client.0 && while test '!' -e /dev/rbd/rbd/testimage.client.0 ; do sleep 1 ; done"
pass | 1204220 | 2017-05-21 05:02:52 | 2017-05-21 06:47:07 | 2017-05-21 07:17:07 | 0:30:00 | 0:24:19 | 0:05:41 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} | 3 |
pass | 1204223 | 2017-05-21 05:02:53 | 2017-05-21 06:48:48 | 2017-05-21 07:46:48 | 0:58:00 | 0:25:58 | 0:32:02 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} | 3 |
fail | 1204225 | 2017-05-21 05:02:53 | 2017-05-21 06:48:48 | 2017-05-21 07:20:48 | 0:32:00 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm183.front.sepia.ceph.com