User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-08-06 05:00:15 | 2017-08-06 05:23:35 | 2017-08-06 17:26:01 | 12:02:26 | smoke | master | vps | dbb4c1c | 9 | 18 | 1 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1489846 | | 2017-08-06 05:01:34 | 2017-08-06 05:03:03 | 2017-08-06 06:05:03 | 1:02:00 | 0:55:19 | 0:06:41 | vps | master | ubuntu | 16.04 | smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} | 1 |
Failure Reason: 'check health' reached maximum tries (6) after waiting for 60 seconds
pass | 1489850 | | 2017-08-06 05:01:35 | 2017-08-06 05:03:42 | 2017-08-06 07:13:44 | 2:10:02 | 1:59:31 | 0:10:31 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} | 3 |
fail | 1489853 | | 2017-08-06 05:01:36 | 2017-08-06 05:06:39 | 2017-08-06 13:30:51 | 8:24:12 | 1:42:14 | 6:41:58 | vps | master | centos | 7.3 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 |
Failure Reason: Command failed (workunit test rados/load-gen-mix.sh) on vpm087 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'
fail | 1489856 | | 2017-08-06 05:01:36 | 2017-08-06 05:06:53 | 2017-08-06 05:20:47 | 0:13:54 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm067.front.sepia.ceph.com
pass | 1489859 | | 2017-08-06 05:01:37 | 2017-08-06 05:07:31 | 2017-08-06 07:43:36 | 2:36:05 | 2:27:36 | 0:08:29 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} | 3 |
pass | 1489862 | | 2017-08-06 05:01:37 | 2017-08-06 05:08:35 | 2017-08-06 08:02:37 | 2:54:02 | 2:35:14 | 0:18:48 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} | 3 |
pass | 1489865 | | 2017-08-06 05:01:38 | 2017-08-06 05:08:43 | 2017-08-06 07:16:45 | 2:08:02 | 1:49:31 | 0:18:31 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} | 3 |
pass | 1489868 | | 2017-08-06 05:01:39 | 2017-08-06 05:08:46 | 2017-08-06 10:04:51 | 4:56:05 | 2:28:08 | 2:27:57 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 |
pass | 1489871 | | 2017-08-06 05:01:39 | 2017-08-06 05:08:56 | 2017-08-06 08:22:57 | 3:14:01 | 2:08:47 | 1:05:14 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3 |
fail | 1489874 | | 2017-08-06 05:01:40 | 2017-08-06 05:09:31 | 2017-08-06 07:17:32 | 2:08:01 | 1:44:16 | 0:23:45 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} | 3 |
Failure Reason: {'vpm025.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)', 'E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?'], 'changed': False, '_ansible_no_log': False, 'stdout': '', 'cache_updated': False, 'failed': True, 'stderr': 'E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?\n', 'invocation': {'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': None, 'force': False, 'name': 'ntp', 'package': ['ntp'], 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': None, 'update_cache': None, 'deb': None, 'only_upgrade': False, 'cache_valid_time': 0, 'default_release': None, 'install_recommends': None}}, 'msg': '\'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install \'ntp\'\' failed: E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?\n', 'stdout_lines': [], 'cache_update_time': 1501999610}}
pass | 1489877 | | 2017-08-06 05:01:40 | 2017-08-06 05:10:40 | 2017-08-06 07:22:38 | 2:11:58 | 1:52:41 | 0:19:17 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} | 3 |
fail | 1489880 | | 2017-08-06 05:01:41 | 2017-08-06 05:12:41 | 2017-08-06 07:52:42 | 2:40:01 | 2:04:10 | 0:35:51 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} | 3 |
Failure Reason: "2017-08-06 07:30:40.421978 mon.b mon.0 172.21.2.73:6789/0 151 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,a (MON_DOWN)" in cluster log
fail | 1489883 | | 2017-08-06 05:01:42 | 2017-08-06 05:16:35 | 2017-08-06 08:18:36 | 3:02:01 | 2:10:01 | 0:52:00 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} | 3 |
Failure Reason: "2017-08-06 07:54:22.962611 mon.b mon.0 172.21.2.21:6789/0 106 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
fail | 1489886 | | 2017-08-06 05:01:42 | 2017-08-06 05:18:48 | 2017-08-06 09:22:44 | 4:03:56 | 2:29:43 | 1:34:13 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} | 3 |
Failure Reason: "2017-08-06 08:43:32.474061 mon.b mon.0 172.21.2.15:6789/0 126 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
fail | 1489889 | | 2017-08-06 05:01:43 | 2017-08-06 05:21:03 | 2017-08-06 10:19:07 | 4:58:04 | 2:53:44 | 2:04:20 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} | 3 |
Failure Reason: "2017-08-06 09:31:35.403943 mon.a mon.0 172.21.2.31:6789/0 179 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
dead | 1489892 | | 2017-08-06 05:01:44 | 2017-08-06 05:23:35 | 2017-08-06 17:26:01 | 12:02:26 | | | vps | master | ubuntu | 16.04 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 |
fail | 1489895 | | 2017-08-06 05:01:45 | 2017-08-06 05:26:49 | 2017-08-06 10:46:54 | 5:20:05 | 2:16:44 | 3:03:21 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3 |
Failure Reason: "2017-08-06 10:20:03.388062 mon.b mon.0 172.21.2.45:6789/0 155 : cluster [WRN] Health check failed: application not enabled on 1 pool(s) (POOL_APP_NOT_ENABLED)" in cluster log
fail | 1489898 | | 2017-08-06 05:01:45 | 2017-08-06 05:28:52 | 2017-08-06 09:16:55 | 3:48:03 | 2:18:27 | 1:29:36 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} | 3 |
Failure Reason: "2017-08-06 08:53:02.814229 mon.a mon.0 172.21.2.133:6789/0 116 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
fail | 1489901 | | 2017-08-06 05:01:46 | 2017-08-06 05:41:17 | 2017-08-06 08:19:18 | 2:38:01 | 1:29:35 | 1:08:26 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} | 3 |
Failure Reason: {'vpm029.front.sepia.ceph.com': {'_ansible_parsed': True, 'stderr_lines': ['E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)', 'E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?'], 'cmd': 'apt-get update', '_ansible_no_log': False, 'stdout': 'Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease\nGet:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]\nGet:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]\nGet:4 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]\nFetched 306 kB in 3s (99.9 kB/s)\n', 'changed': False, 'failed': True, 'stderr': 'E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?\n', 'rc': 100, 'invocation': {'module_args': {'dpkg_options': 'force-confdef,force-confold', 'autoremove': None, 'force': False, 'package': None, 'purge': False, 'allow_unauthenticated': False, 'state': 'present', 'upgrade': None, 'update_cache': True, 'deb': None, 'only_upgrade': False, 'cache_valid_time': 0, 'default_release': None, 'install_recommends': None}}, 'stdout_lines': ['Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease', 'Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]', 'Get:3 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]', 'Get:4 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]', 'Fetched 306 kB in 3s (99.9 kB/s)'], 'msg': 'E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)\nE: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?'}}
pass | 1489904 | | 2017-08-06 05:01:46 | 2017-08-06 05:59:23 | 2017-08-06 10:05:26 | 4:06:03 | 2:15:06 | 1:50:57 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} | 3 |
fail | 1489907 | | 2017-08-06 05:01:47 | 2017-08-06 06:01:23 | 2017-08-06 09:31:26 | 3:30:03 | 2:18:38 | 1:11:25 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} | 3 |
Failure Reason: "2017-08-06 09:14:04.250509 mon.b mon.0 172.21.2.37:6789/0 187 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log
pass | 1489910 | | 2017-08-06 05:01:48 | 2017-08-06 06:01:24 | 2017-08-06 09:33:27 | 3:32:03 | 2:06:56 | 1:25:07 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} | 3 |
fail | 1489913 | | 2017-08-06 05:01:48 | 2017-08-06 06:05:08 | 2017-08-06 09:31:09 | 3:26:01 | 2:28:45 | 0:57:16 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} | 3 |
Failure Reason: "2017-08-06 09:05:45.721933 mon.b mon.0 172.21.2.127:6789/0 296 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log
fail | 1489916 | | 2017-08-06 05:01:49 | 2017-08-06 06:45:32 | 2017-08-06 08:13:30 | 1:27:58 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm089.front.sepia.ceph.com
fail | 1489919 | | 2017-08-06 05:01:50 | 2017-08-06 06:45:42 | 2017-08-06 09:45:44 | 3:00:02 | 2:28:44 | 0:31:18 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} | 3 |
Failure Reason: "2017-08-06 09:14:18.550764 mon.b mon.0 172.21.2.57:6789/0 163 : cluster [WRN] Health check failed: application not enabled on 1 pool(s) (POOL_APP_NOT_ENABLED)" in cluster log
fail | 1489922 | | 2017-08-06 05:01:50 | 2017-08-06 06:49:47 | 2017-08-06 07:35:49 | 0:46:02 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm161.front.sepia.ceph.com
fail | 1489925 | | 2017-08-06 05:01:51 | 2017-08-06 06:54:56 | 2017-08-06 09:26:52 | 2:31:56 | | | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} | 3 |
Failure Reason: Could not reconnect to ubuntu@vpm025.front.sepia.ceph.com
fail | 1489928 | | 2017-08-06 05:01:51 | 2017-08-06 07:05:47 | 2017-08-06 09:27:51 | 2:22:04 | 2:07:05 | 0:14:59 | vps | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 |
Failure Reason: 'default_idle_timeout'