Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 1402268 2017-07-15 05:01:16 2017-07-15 05:01:29 2017-07-15 06:33:27 1:31:58 1:27:01 0:04:57 vps master ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

'check health' reached maximum tries (6) after waiting for 60 seconds

fail 1402269 2017-07-15 05:01:16 2017-07-15 05:03:46 2017-07-15 07:53:48 2:50:02 2:00:10 0:49:52 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
Failure Reason:

"2017-07-15 07:42:12.299522 mon.b mon.0 172.21.2.23:6789/0 244 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive (PG_AVAILABILITY)" in cluster log

fail 1402270 2017-07-15 05:01:17 2017-07-15 05:05:26 2017-07-15 09:35:30 4:30:04 1:26:00 3:04:04 vps master centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm087 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

pass 1402271 2017-07-15 05:01:18 2017-07-15 05:05:29 2017-07-15 07:49:28 2:43:59 2:22:43 0:21:16 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} 3
pass 1402272 2017-07-15 05:01:19 2017-07-15 05:07:30 2017-07-15 07:49:31 2:42:01 2:32:10 0:09:51 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} 3
fail 1402273 2017-07-15 05:01:19 2017-07-15 05:07:32 2017-07-15 05:27:31 0:19:59 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm139.front.sepia.ceph.com

pass 1402274 2017-07-15 05:01:20 2017-07-15 05:09:31 2017-07-15 07:17:33 2:08:02 1:47:01 0:21:01 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
pass 1402275 2017-07-15 05:01:21 2017-07-15 05:09:33 2017-07-15 07:49:34 2:40:01 2:21:13 0:18:48 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
fail 1402276 2017-07-15 05:01:21 2017-07-15 05:09:32 2017-07-15 05:23:30 0:13:58 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm083.front.sepia.ceph.com

pass 1402277 2017-07-15 05:01:22 2017-07-15 05:11:24 2017-07-15 10:19:29 5:08:05 2:23:27 2:44:38 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
pass 1402280 2017-07-15 05:01:23 2017-07-15 05:11:25 2017-07-15 07:35:27 2:24:02 1:50:54 0:33:08 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} 3
fail 1402283 2017-07-15 05:01:23 2017-07-15 05:13:39 2017-07-15 09:35:43 4:22:04 2:01:29 2:20:35 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
Failure Reason:

"2017-07-15 09:16:06.789281 mon.a mon.0 172.21.2.21:6789/0 176 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

fail 1402286 2017-07-15 05:01:24 2017-07-15 05:13:40 2017-07-15 07:35:40 2:22:00 2:14:49 0:07:11 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} 3
Failure Reason:

"2017-07-15 07:08:10.726300 mon.b mon.0 172.21.2.41:6789/0 228 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1402289 2017-07-15 05:01:25 2017-07-15 05:15:35 2017-07-15 07:41:37 2:26:02 2:11:47 0:14:15 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
Failure Reason:

"2017-07-15 07:17:42.359833 mon.b mon.0 172.21.2.77:6789/0 1542 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 1402292 2017-07-15 05:01:25 2017-07-15 05:15:38 2017-07-15 10:11:42 4:56:04 2:14:28 2:41:36 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} 3
Failure Reason:

ceph-objectstore-tool: exp list-pgs failure with status 1

fail 1402295 2017-07-15 05:01:26 2017-07-15 05:21:35 2017-07-15 10:49:40 5:28:05 2:31:25 2:56:40 vps master ubuntu 16.04 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm137 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

fail 1402298 2017-07-15 05:01:27 2017-07-15 05:21:35 2017-07-15 08:25:36 3:04:01 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm141.front.sepia.ceph.com

fail 1402301 2017-07-15 05:01:27 2017-07-15 05:23:31 2017-07-15 08:05:32 2:42:01 2:13:22 0:28:39 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} 3
Failure Reason:

"2017-07-15 07:46:14.453183 mon.b mon.0 172.21.2.59:6789/0 145 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1402304 2017-07-15 05:01:28 2017-07-15 05:23:31 2017-07-15 07:39:32 2:16:01 2:05:57 0:10:04 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} 3
Failure Reason:

"2017-07-15 07:25:39.183137 mon.a mon.0 172.21.2.99:6789/0 349 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1402307 2017-07-15 05:01:29 2017-07-15 05:23:32 2017-07-15 05:47:31 0:23:59 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm153.front.sepia.ceph.com

fail 1402309 2017-07-15 05:01:29 2017-07-15 05:23:37 2017-07-15 07:43:38 2:20:01 2:06:03 0:13:58 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} 3
Failure Reason:

"2017-07-15 07:27:09.536504 mon.b mon.0 172.21.2.65:6789/0 333 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

pass 1402312 2017-07-15 05:01:30 2017-07-15 05:23:36 2017-07-15 07:29:38 2:06:02 1:51:25 0:14:37 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} 3
fail 1402315 2017-07-15 05:01:31 2017-07-15 05:23:38 2017-07-15 09:19:43 3:56:05 2:06:33 1:49:32 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

ceph-objectstore-tool: exp list-pgs failure with status 1

pass 1402318 2017-07-15 05:01:31 2017-07-15 05:27:37 2017-07-15 07:41:38 2:14:01 1:59:49 0:14:12 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} 3
pass 1402321 2017-07-15 05:01:32 2017-07-15 05:31:31 2017-07-15 10:35:34 5:04:03 2:53:11 2:10:52 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
fail 1402325 2017-07-15 05:01:33 2017-07-15 05:31:29 2017-07-15 09:21:33 3:50:04 1:48:55 2:01:09 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} 3
Failure Reason:

'default_idle_timeout'

fail 1402327 2017-07-15 05:01:33 2017-07-15 05:35:21 2017-07-15 09:21:25 3:46:04 1:35:24 2:10:40 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} 3
Failure Reason:

'default_idle_timeout'

fail 1402331 2017-07-15 05:01:34 2017-07-15 05:35:28 2017-07-15 11:27:34 5:52:06 1:52:14 3:59:52 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} 3
Failure Reason:

'default_idle_timeout'