Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 1544557 2017-08-20 05:00:58 2017-08-20 05:04:47 2017-08-20 05:26:46 0:21:59 0:13:33 0:08:26 vps master ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

'check health' reached maximum tries (6) after waiting for 60 seconds

pass 1544560 2017-08-20 05:00:58 2017-08-20 05:06:50 2017-08-20 06:16:51 1:10:01 0:25:57 0:44:04 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
fail 1544563 2017-08-20 05:00:59 2017-08-20 05:13:00 2017-08-20 16:31:16 11:18:16 0:28:52 10:49:24 vps master centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm009 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

pass 1544566 2017-08-20 05:01:00 2017-08-20 05:15:01 2017-08-20 06:33:01 1:18:00 0:24:29 0:53:31 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} 3
fail 1544569 2017-08-20 05:01:00 2017-08-20 05:19:59 2017-08-20 06:03:58 0:43:59 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm183.front.sepia.ceph.com

pass 1544573 2017-08-20 05:01:01 2017-08-20 05:20:41 2017-08-20 08:36:44 3:16:03 2:53:37 0:22:26 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} 3
pass 1544575 2017-08-20 05:01:02 2017-08-20 05:20:52 2017-08-20 06:02:51 0:41:59 0:18:05 0:23:54 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
pass 1544580 2017-08-20 05:01:02 2017-08-20 05:22:30 2017-08-20 08:12:35 2:50:05 0:49:19 2:00:46 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
pass 1544581 2017-08-20 05:01:03 2017-08-20 05:26:47 2017-08-20 06:22:45 0:55:58 0:24:50 0:31:08 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
fail 1544584 2017-08-20 05:01:04 2017-08-20 05:26:46 2017-08-20 06:22:45 0:55:59 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm091.front.sepia.ceph.com

pass 1544588 2017-08-20 05:01:04 2017-08-20 05:26:48 2017-08-20 06:58:48 1:32:00 0:22:03 1:09:57 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} 3
fail 1544591 2017-08-20 05:01:05 2017-08-20 05:34:59 2017-08-20 06:26:57 0:51:58 0:40:03 0:11:55 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
Failure Reason:

"2017-08-20 06:04:07.829184 mon.a mon.0 172.21.2.37:6789/0 189 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

fail 1544594 2017-08-20 05:01:05 2017-08-20 05:35:37 2017-08-20 06:33:36 0:57:59 0:46:41 0:11:18 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} 3
Failure Reason:

"2017-08-20 06:06:09.213001 mon.b mon.0 172.21.2.117:6789/0 243 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 1544597 2017-08-20 05:01:06 2017-08-20 05:36:38 2017-08-20 06:38:36 1:01:58 0:45:58 0:16:00 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
Failure Reason:

"2017-08-20 06:06:37.334803 mon.a mon.0 172.21.2.19:6789/0 152 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1544600 2017-08-20 05:01:07 2017-08-20 05:38:07 2017-08-20 06:56:07 1:18:00 0:51:39 0:26:21 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} 3
Failure Reason:

"2017-08-20 06:15:38.121040 mon.a mon.0 172.21.2.99:6789/0 117 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1544604 2017-08-20 05:01:07 2017-08-20 05:38:12 2017-08-20 11:16:13 5:38:01 vps master ubuntu 16.04 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Could not reconnect to ubuntu@vpm069.front.sepia.ceph.com

fail 1544607 2017-08-20 05:01:08 2017-08-20 05:40:52 2017-08-20 06:34:51 0:53:59 0:19:00 0:34:59 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} 3
Failure Reason:

Command failed (workunit test cls/test_cls_sdk.sh) on vpm007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'

fail 1544609 2017-08-20 05:01:08 2017-08-20 05:41:02 2017-08-20 07:07:07 1:26:05 0:30:39 0:55:26 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} 3
Failure Reason:

"2017-08-20 06:50:07.760179 mon.b mon.0 172.21.2.31:6789/0 166 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1544612 2017-08-20 05:01:09 2017-08-20 05:42:46 2017-08-20 06:28:47 0:46:01 0:26:06 0:19:55 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} 3
Failure Reason:

"2017-08-20 06:20:12.188207 mon.b mon.0 172.21.2.77:6789/0 225 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1544615 2017-08-20 05:01:10 2017-08-20 05:42:50 2017-08-20 07:20:48 1:37:58 0:36:12 1:01:46 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} 3
Failure Reason:

"2017-08-20 07:16:05.439926 mon.a mon.0 172.21.2.15:6789/0 133 : cluster [WRN] daemon mds.a is not responding, replacing it as rank 0 with standby daemon mds.a-s" in cluster log

fail 1544618 2017-08-20 05:01:10 2017-08-20 05:46:13 2017-08-20 07:22:14 1:36:01 0:37:27 0:58:34 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} 3
Failure Reason:

"2017-08-20 07:09:41.856868 mon.a mon.0 172.21.2.73:6789/0 499 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

fail 1544621 2017-08-20 05:01:11 2017-08-20 05:50:05 2017-08-20 07:20:02 1:29:57 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm035.front.sepia.ceph.com

fail 1544625 2017-08-20 05:01:12 2017-08-20 05:50:08 2017-08-20 07:00:06 1:09:58 0:28:24 0:41:34 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

"2017-08-20 06:43:44.115199 mon.b mon.0 172.21.2.93:6789/0 169 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1544628 2017-08-20 05:01:12 2017-08-20 05:51:16 2017-08-20 06:53:16 1:02:00 0:39:53 0:22:07 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} 3
pass 1544631 2017-08-20 05:01:13 2017-08-20 05:52:35 2017-08-20 07:06:36 1:14:01 0:34:23 0:39:38 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
fail 1544634 2017-08-20 05:01:14 2017-08-20 05:57:42 2017-08-20 06:39:39 0:41:57 0:23:30 0:18:27 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} 3
Failure Reason:

'default_idle_timeout'

fail 1544637 2017-08-20 05:01:14 2017-08-20 06:00:26 2017-08-20 06:40:15 0:39:49 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm085.front.sepia.ceph.com

fail 1544640 2017-08-20 05:01:15 2017-08-20 06:02:56 2017-08-20 07:16:57 1:14:01 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm059.front.sepia.ceph.com