Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 1600409 2017-09-06 05:01:45 2017-09-06 05:06:50 2017-09-06 05:26:49 0:19:59 0:15:27 0:04:32 vps master ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

'check health' reached maximum tries (6) after waiting for 60 seconds

pass 1600411 2017-09-06 05:01:45 2017-09-06 05:15:10 2017-09-06 05:53:09 0:37:59 0:26:51 0:11:08 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
fail 1600414 2017-09-06 05:01:46 2017-09-06 05:18:43 2017-09-06 10:30:46 5:12:03 0:28:03 4:44:00 vps master centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm195 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

pass 1600417 2017-09-06 05:01:47 2017-09-06 05:20:40 2017-09-06 06:00:39 0:39:59 0:22:52 0:17:07 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} 3
pass 1600420 2017-09-06 05:01:48 2017-09-06 05:22:40 2017-09-06 08:14:42 2:52:02 0:38:35 2:13:27 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} 3
pass 1600423 2017-09-06 05:01:48 2017-09-06 05:22:40 2017-09-06 09:44:45 4:22:05 0:21:19 4:00:46 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} 3
fail 1600426 2017-09-06 05:01:49 2017-09-06 05:22:41 2017-09-06 06:24:41 1:02:00 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
Failure Reason:

Command failed on vpm195 with status 2: 'sudo dpkg -i /tmp/linux-image.deb'

pass 1600429 2017-09-06 05:01:50 2017-09-06 05:22:48 2017-09-06 06:52:48 1:30:00 0:53:49 0:36:11 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
pass 1600432 2017-09-06 05:01:50 2017-09-06 05:24:50 2017-09-06 07:12:50 1:48:00 0:20:07 1:27:53 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1600434 2017-09-06 05:01:51 2017-09-06 05:24:52 2017-09-06 07:54:51 2:29:59 0:57:32 1:32:27 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
pass 1600438 2017-09-06 05:01:52 2017-09-06 05:27:01 2017-09-06 07:41:03 2:14:02 0:22:18 1:51:44 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} 3
fail 1600441 2017-09-06 05:01:52 2017-09-06 05:32:43 2017-09-06 06:44:43 1:12:00 0:35:45 0:36:15 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
Failure Reason:

"2017-09-06 06:22:55.388869 mon.a mon.0 172.21.2.113:6789/0 194 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

fail 1600444 2017-09-06 05:01:53 2017-09-06 05:43:09 2017-09-06 06:43:08 0:59:59 0:35:19 0:24:40 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} 3
Failure Reason:

"2017-09-06 06:23:09.438474 mon.b mon.0 172.21.2.97:6789/0 136 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1600447 2017-09-06 05:01:54 2017-09-06 05:53:28 2017-09-06 07:45:29 1:52:01 0:41:22 1:10:39 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
Failure Reason:

"2017-09-06 07:18:17.534438 mon.a mon.0 172.21.2.73:6789/0 150 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1600450 2017-09-06 05:01:55 2017-09-06 05:58:41 2017-09-06 07:02:42 1:04:01 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm033.front.sepia.ceph.com

fail 1600454 2017-09-06 05:01:56 2017-09-06 06:00:41 2017-09-06 09:16:44 3:16:03 0:27:54 2:48:09 vps master ubuntu 16.04 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm137 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

fail 1600457 2017-09-06 05:01:56 2017-09-06 06:00:42 2017-09-06 06:36:41 0:35:59 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm021.front.sepia.ceph.com

fail 1600460 2017-09-06 05:01:57 2017-09-06 06:16:59 2017-09-06 07:36:58 1:19:59 0:35:31 0:44:28 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} 3
Failure Reason:

"2017-09-06 07:17:19.391355 mon.a mon.0 172.21.2.37:6789/0 129 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1600463 2017-09-06 05:01:58 2017-09-06 06:24:45 2017-09-06 07:18:44 0:53:59 0:28:14 0:25:45 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} 3
Failure Reason:

"2017-09-06 07:09:27.961956 mon.b mon.0 172.21.2.97:6789/0 126 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1600466 2017-09-06 05:01:59 2017-09-06 06:28:40 2017-09-06 07:38:40 1:10:00 0:35:24 0:34:36 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} 3
fail 1600469 2017-09-06 05:02:00 2017-09-06 06:37:22 2017-09-06 07:51:21 1:13:59 0:56:43 0:17:16 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} 3
Failure Reason:

"2017-09-06 07:09:57.838063 mon.b mon.0 172.21.2.127:6789/0 224 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

pass 1600472 2017-09-06 05:02:00 2017-09-06 06:43:30 2017-09-06 07:31:14 0:47:44 0:23:05 0:24:39 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} 3
fail 1600475 2017-09-06 05:02:01 2017-09-06 06:44:47 2017-09-06 07:40:48 0:56:01 0:22:48 0:33:13 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

"2017-09-06 07:31:00.161504 mon.b mon.0 172.21.2.23:6789/0 138 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1600478 2017-09-06 05:02:02 2017-09-06 06:48:05 2017-09-06 07:30:02 0:41:57 0:28:35 0:13:22 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} 3
pass 1600481 2017-09-06 05:02:02 2017-09-06 06:49:54 2017-09-06 08:41:53 1:51:59 0:40:05 1:11:54 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
fail 1600484 2017-09-06 05:02:03 2017-09-06 06:53:04 2017-09-06 09:17:03 2:23:59 0:30:23 1:53:36 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} 3
Failure Reason:

Command failed (s3 tests against rgw) on vpm169 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"

fail 1600487 2017-09-06 05:02:04 2017-09-06 06:54:14 2017-09-06 08:06:14 1:12:00 0:27:59 0:44:01 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} 3
Failure Reason:

Command failed (s3 tests against rgw) on vpm045 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"

fail 1600490 2017-09-06 05:02:05 2017-09-06 06:54:41 2017-09-06 07:10:42 0:16:01 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm087.front.sepia.ceph.com