Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 1597209 2017-09-05 05:01:56 2017-09-05 05:05:54 2017-09-05 05:21:53 0:15:59 0:12:35 0:03:24 vps master ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

'check health' reached maximum tries (6) after waiting for 60 seconds
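
The 'check health' step keeps polling cluster health until it reports HEALTH_OK or the retry budget runs out; here it gave up after 6 tries over 60 seconds. A minimal sketch of an equivalent manual check with the standard ceph CLI (the retry loop below only mirrors the 6-try / 60-second budget reported above, it is not the teuthology code):

    # Poll health up to 6 times, 10 seconds apart.
    for i in $(seq 1 6); do
        status=$(ceph health)              # prints HEALTH_OK / HEALTH_WARN ... / HEALTH_ERR ...
        echo "try $i: $status"
        [ "$status" = "HEALTH_OK" ] && exit 0
        sleep 10
    done
    ceph health detail                     # list the failing checks before giving up
    exit 1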

pass 1597213 2017-09-05 05:01:57 2017-09-05 05:05:54 2017-09-05 05:35:53 0:29:59 0:22:48 0:07:11 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
fail 1597215 2017-09-05 05:01:58 2017-09-05 05:07:04 2017-09-05 10:53:11 5:46:07 0:26:05 5:20:02 vps master centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm175 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

pass 1597218 2017-09-05 05:01:58 2017-09-05 05:07:04 2017-09-05 05:37:04 0:30:00 0:23:05 0:06:55 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} 3
pass 1597221 2017-09-05 05:01:59 2017-09-05 05:10:58 2017-09-05 06:18:59 1:08:01 0:35:35 0:32:26 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} 3
pass 1597224 2017-09-05 05:01:59 2017-09-05 05:10:58 2017-09-05 05:44:58 0:34:00 0:21:27 0:12:33 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} 3
fail 1597228 2017-09-05 05:02:00 2017-09-05 05:10:58 2017-09-05 05:44:58 0:34:00 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm009.front.sepia.ceph.com

fail 1597230 2017-09-05 05:02:01 2017-09-05 05:17:03 2017-09-05 09:53:08 4:36:05 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm177.front.sepia.ceph.com

pass 1597234 2017-09-05 05:02:01 2017-09-05 05:21:55 2017-09-05 05:59:55 0:38:00 0:25:23 0:12:37 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1597237 2017-09-05 05:02:02 2017-09-05 05:25:15 2017-09-05 06:19:15 0:54:00 0:27:50 0:26:10 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
pass 1597239 2017-09-05 05:02:03 2017-09-05 05:31:07 2017-09-05 07:21:08 1:50:01 0:26:44 1:23:17 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} 3
fail 1597242 2017-09-05 05:02:04 2017-09-05 05:35:56 2017-09-05 10:46:01 5:10:05 0:34:18 4:35:47 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
Failure Reason:

"2017-09-05 10:26:05.305079 mon.b mon.0 172.21.2.1:6789/0 164 : cluster [WRN] Health check failed: 1/3 mons down, quorum b,a (MON_DOWN)" in cluster log

fail 1597245 2017-09-05 05:02:04 2017-09-05 05:37:09 2017-09-05 07:55:11 2:18:02 0:41:00 1:37:02 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} 3
Failure Reason:

"2017-09-05 07:27:34.834618 mon.a mon.0 172.21.2.73:6789/0 149 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1597248 2017-09-05 05:02:05 2017-09-05 05:37:09 2017-09-05 07:07:10 1:30:01 0:54:50 0:35:11 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
Failure Reason:

"2017-09-05 06:48:45.021138 mon.b mon.0 172.21.2.59:6789/0 923 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

dead 1597251 2017-09-05 05:02:06 2017-09-05 05:41:18 2017-09-05 06:37:18 0:56:00 0:49:03 0:06:57 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} 3
Failure Reason:

[Errno 113] No route to host

fail 1597255 2017-09-05 05:02:06 2017-09-05 05:43:17 2017-09-05 10:47:23 5:04:06 0:27:08 4:36:58 vps master ubuntu 16.04 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

fail 1597258 2017-09-05 05:02:07 2017-09-05 05:45:00 2017-09-05 07:27:02 1:42:02 0:20:08 1:21:54 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} 3
Failure Reason:

Command failed (workunit test cls/test_cls_sdk.sh) on vpm007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'

fail 1597261 2017-09-05 05:02:08 2017-09-05 05:45:00 2017-09-05 06:57:01 1:12:01 0:32:19 0:39:42 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} 3
Failure Reason:

"2017-09-05 06:38:12.461848 mon.b mon.0 172.21.2.167:6789/0 131 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1597263 2017-09-05 05:02:08 2017-09-05 05:55:24 2017-09-05 06:51:25 0:56:01 0:28:38 0:27:23 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} 3
Failure Reason:

"2017-09-05 06:42:50.488663 mon.a mon.0 172.21.2.1:6789/0 171 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1597266 2017-09-05 05:02:09 2017-09-05 05:59:57 2017-09-05 07:31:58 1:32:01 0:32:33 0:59:28 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} 3
fail 1597269 2017-09-05 05:02:10 2017-09-05 06:05:13 2017-09-05 06:55:13 0:50:00 0:30:42 0:19:18 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} 3
Failure Reason:

"2017-09-05 06:44:14.248642 mon.b mon.0 172.21.2.9:6789/0 207 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

pass 1597272 2017-09-05 05:02:10 2017-09-05 06:17:56 2017-09-05 08:09:58 1:52:02 0:31:43 1:20:19 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} 3
fail 1597275 2017-09-05 05:02:11 2017-09-05 06:19:13 2017-09-05 07:17:12 0:57:59 0:33:09 0:24:50 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

"2017-09-05 07:02:02.343334 mon.a mon.0 172.21.2.67:6789/0 159 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1597278 2017-09-05 05:02:12 2017-09-05 06:19:16 2017-09-05 07:19:16 1:00:00 0:33:48 0:26:12 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} 3
pass 1597281 2017-09-05 05:02:12 2017-09-05 06:39:14 2017-09-05 07:35:13 0:55:59 0:40:53 0:15:06 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
fail 1597284 2017-09-05 05:02:13 2017-09-05 06:39:14 2017-09-05 07:29:12 0:49:58 0:24:25 0:25:33 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} 3
Failure Reason:

Command failed (s3 tests against rgw) on vpm097 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"

fail 1597287 2017-09-05 05:02:14 2017-09-05 06:41:12 2017-09-05 08:29:12 1:48:00 0:30:48 1:17:12 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} 3
Failure Reason:

Command failed (s3 tests against rgw) on vpm137 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"

pass 1597290 2017-09-05 05:02:15 2017-09-05 06:50:24 2017-09-05 07:50:22 0:59:58 0:19:56 0:40:02 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} 3