Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 1534896 2017-08-17 05:01:37 2017-08-17 05:05:33 2017-08-17 05:25:31 0:19:58 0:13:11 0:06:47 vps master ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

'check health' reached maximum tries (6) after waiting for 60 seconds

fail 1534899 2017-08-17 05:01:37 2017-08-17 05:06:49 2017-08-17 05:30:47 0:23:58 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm023.front.sepia.ceph.com

fail 1534902 2017-08-17 05:01:38 2017-08-17 05:09:28 2017-08-17 13:09:36 8:00:08 0:34:26 7:25:42 vps master centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm023 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

fail 1534905 2017-08-17 05:01:39 2017-08-17 05:10:50 2017-08-17 05:34:47 0:23:57 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm181.front.sepia.ceph.com

pass 1534908 2017-08-17 05:01:39 2017-08-17 05:18:41 2017-08-17 06:32:41 1:14:00 0:54:11 0:19:49 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} 3
pass 1534911 2017-08-17 05:01:40 2017-08-17 05:18:46 2017-08-17 09:06:44 3:47:58 2:51:36 0:56:22 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} 3
pass 1534914 2017-08-17 05:01:41 2017-08-17 05:22:48 2017-08-17 06:06:45 0:43:57 0:22:11 0:21:46 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
pass 1534917 2017-08-17 05:01:42 2017-08-17 05:22:46 2017-08-17 06:30:45 1:07:59 0:54:24 0:13:35 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
pass 1534920 2017-08-17 05:01:42 2017-08-17 05:25:48 2017-08-17 06:23:43 0:57:55 0:22:31 0:35:24 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
fail 1534922 2017-08-17 05:01:43 2017-08-17 05:26:37 2017-08-17 05:42:36 0:15:59 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm087.front.sepia.ceph.com

pass 1534926 2017-08-17 05:01:44 2017-08-17 05:30:56 2017-08-17 06:58:56 1:28:00 0:21:18 1:06:42 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} 3
fail 1534929 2017-08-17 05:01:45 2017-08-17 05:32:45 2017-08-17 06:38:45 1:06:00 0:51:36 0:14:24 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
Failure Reason:

"2017-08-17 06:17:48.403293 mon.a mon.0 172.21.2.31:6789/0 168 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

fail 1534932 2017-08-17 05:01:45 2017-08-17 05:34:38 2017-08-17 06:52:38 1:18:00 0:46:56 0:31:04 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} 3
Failure Reason:

"2017-08-17 06:21:03.618207 mon.a mon.0 172.21.2.37:6789/0 116 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1534936 2017-08-17 05:01:46 2017-08-17 05:35:03 2017-08-17 07:37:05 2:02:02 0:53:18 1:08:44 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
Failure Reason:

"2017-08-17 06:55:41.914929 mon.a mon.0 172.21.2.45:6789/0 183 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1534939 2017-08-17 05:01:47 2017-08-17 05:38:42 2017-08-17 05:52:39 0:13:57 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm033.front.sepia.ceph.com

fail 1534942 2017-08-17 05:01:48 2017-08-17 05:42:45 2017-08-17 07:40:41 1:57:56 0:31:26 1:26:30 vps master ubuntu 16.04 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

fail 1534945 2017-08-17 05:01:48 2017-08-17 05:42:42 2017-08-17 06:34:41 0:51:59 0:19:41 0:32:18 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} 3
Failure Reason:

Command failed (workunit test cls/test_cls_sdk.sh) on vpm123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'

fail 1534948 2017-08-17 05:01:49 2017-08-17 05:46:48 2017-08-17 07:58:47 2:11:59 0:51:43 1:20:16 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} 3
Failure Reason:

"2017-08-17 07:20:31.741559 mon.b mon.0 172.21.2.1:6789/0 162 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1534951 2017-08-17 05:01:50 2017-08-17 05:46:55 2017-08-17 06:58:46 1:11:51 0:39:54 0:31:57 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} 3
Failure Reason:

"2017-08-17 06:50:37.202417 mon.b mon.0 172.21.2.15:6789/0 131 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1534954 2017-08-17 05:01:50 2017-08-17 05:52:53 2017-08-17 07:00:54 1:08:01 0:39:21 0:28:40 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} 3
fail 1534957 2017-08-17 05:01:51 2017-08-17 05:57:41 2017-08-17 06:45:41 0:48:00 0:31:06 0:16:54 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} 3
Failure Reason:

"2017-08-17 06:33:57.390877 mon.a mon.0 172.21.2.91:6789/0 504 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

pass 1534960 2017-08-17 05:01:52 2017-08-17 05:58:41 2017-08-17 06:38:40 0:39:59 0:30:07 0:09:52 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} 3
fail 1534963 2017-08-17 05:01:52 2017-08-17 06:01:39 2017-08-17 07:11:38 1:09:59 0:53:41 0:16:18 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

"2017-08-17 06:29:58.353531 mon.b mon.0 172.21.2.29:6789/0 171 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1534966 2017-08-17 05:01:53 2017-08-17 06:02:19 2017-08-17 09:48:28 3:46:09 0:27:53 3:18:16 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} 3
pass 1534969 2017-08-17 05:01:54 2017-08-17 06:04:15 2017-08-17 07:02:03 0:57:48 0:34:53 0:22:55 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
fail 1534972 2017-08-17 05:01:54 2017-08-17 06:05:44 2017-08-17 07:07:40 1:01:56 0:27:44 0:34:12 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} 3
Failure Reason:

'default_idle_timeout'

fail 1534975 2017-08-17 05:01:55 2017-08-17 06:07:13 2017-08-17 07:01:16 0:54:03 0:18:49 0:35:14 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} 3
Failure Reason:

'default_idle_timeout'

fail 1534978 2017-08-17 05:01:56 2017-08-17 06:09:50 2017-08-17 07:19:47 1:09:57 0:17:35 0:52:22 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} 3
Failure Reason:

'default_idle_timeout'