Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 1405830 2017-07-16 05:01:55 2017-07-16 05:03:53 2017-07-16 06:41:55 1:38:02 1:35:09 0:02:53 vps master ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

'check health' reached maximum tries (6) after waiting for 60 seconds

fail 1405834 2017-07-16 05:01:56 2017-07-16 05:07:31 2017-07-16 10:41:37 5:34:06 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm025.front.sepia.ceph.com

fail 1405837 2017-07-16 05:01:56 2017-07-16 05:11:49 2017-07-16 17:02:03 11:50:14 1:34:26 10:15:48 vps master centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm141 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

pass 1405840 2017-07-16 05:01:57 2017-07-16 05:12:48 2017-07-16 07:26:47 2:13:59 2:04:16 0:09:43 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} 3
fail 1405843 2017-07-16 05:01:58 2017-07-16 05:16:46 2017-07-16 07:56:48 2:40:02 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm085.front.sepia.ceph.com

pass 1405846 2017-07-16 05:01:58 2017-07-16 05:17:23 2017-07-16 09:09:27 3:52:04 2:19:36 1:32:28 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} 3
pass 1405849 2017-07-16 05:01:59 2017-07-16 05:17:49 2017-07-16 08:37:52 3:20:03 1:53:51 1:26:12 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
pass 1405852 2017-07-16 05:02:00 2017-07-16 05:21:33 2017-07-16 12:35:44 7:14:11 2:26:48 4:47:23 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
pass 1405855 2017-07-16 05:02:00 2017-07-16 05:21:34 2017-07-16 07:33:38 2:12:04 2:00:13 0:11:51 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
pass 1405858 2017-07-16 05:02:01 2017-07-16 05:27:38 2017-07-16 08:09:41 2:42:03 2:22:02 0:20:01 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
pass 1405861 2017-07-16 05:02:02 2017-07-16 05:27:46 2017-07-16 08:27:48 3:00:02 1:56:53 1:03:09 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} 3
fail 1405864 2017-07-16 05:02:02 2017-07-16 05:33:35 2017-07-16 08:23:39 2:50:04 2:14:49 0:35:15 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
Failure Reason:

"2017-07-16 08:06:22.833322 mon.a mon.0 172.21.2.99:6789/0 4 : cluster [WRN] overall HEALTH_WARN 1 cache pools are missing hit_sets; 1 pools have pg_num > pgp_num; 1/3 mons down, quorum b,c" in cluster log

fail 1405867 2017-07-16 05:02:03 2017-07-16 05:42:09 2017-07-16 09:46:14 4:04:05 2:05:10 1:58:55 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} 3
Failure Reason:

"2017-07-16 09:21:24.159753 mon.b mon.0 172.21.2.5:6789/0 728 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 1405870 2017-07-16 05:02:04 2017-07-16 06:04:00 2017-07-16 09:58:01 3:54:01 2:17:41 1:36:20 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
Failure Reason:

"2017-07-16 09:34:12.856739 mon.a mon.0 172.21.2.19:6789/0 151 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1405873 2017-07-16 05:02:05 2017-07-16 06:12:02 2017-07-16 10:56:07 4:44:05 2:07:33 2:36:32 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} 3
Failure Reason:

ceph-objectstore-tool: exp list-pgs failure with status 1

fail 1405876 2017-07-16 05:02:05 2017-07-16 06:24:14 2017-07-16 17:44:24 11:20:10 2:52:03 8:28:07 vps master ubuntu 16.04 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

fail 1405880 2017-07-16 05:02:06 2017-07-16 06:37:09 2017-07-16 09:43:10 3:06:01 2:03:03 1:02:58 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} 3
Failure Reason:

Command failed (workunit test cls/test_cls_sdk.sh) on vpm135 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'

fail 1405883 2017-07-16 05:02:07 2017-07-16 06:42:17 2017-07-16 09:22:20 2:40:03 2:07:03 0:33:00 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} 3
Failure Reason:

"2017-07-16 09:03:40.379480 mon.a mon.0 172.21.2.11:6789/0 225 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1405886 2017-07-16 05:02:08 2017-07-16 06:45:00 2017-07-16 11:25:05 4:40:05 1:56:02 2:44:03 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} 3
Failure Reason:

"2017-07-16 11:16:54.938449 mon.a mon.0 172.21.2.11:6789/0 336 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1405889 2017-07-16 05:02:08 2017-07-16 07:10:32 2017-07-16 08:42:33 1:32:01 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm071.front.sepia.ceph.com

fail 1405892 2017-07-16 05:02:09 2017-07-16 07:27:57 2017-07-16 10:03:58 2:36:01 2:14:37 0:21:24 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} 3
Failure Reason:

"2017-07-16 09:45:22.140553 mon.a mon.0 172.21.2.163:6789/0 660 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

pass 1405895 2017-07-16 05:02:10 2017-07-16 07:33:46 2017-07-16 10:19:46 2:46:00 1:51:08 0:54:52 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} 3
fail 1405898 2017-07-16 05:02:10 2017-07-16 07:33:47 2017-07-16 12:11:49 4:38:02 2:07:13 2:30:49 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

ceph-objectstore-tool: exp list-pgs failure with status 1

pass 1405901 2017-07-16 05:02:11 2017-07-16 07:33:57 2017-07-16 11:06:04 3:32:07 2:04:03 1:28:04 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} 3
pass 1405905 2017-07-16 05:02:13 2017-07-16 07:40:29 2017-07-16 14:50:36 7:10:07 2:46:48 4:23:19 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
fail 1405907 2017-07-16 05:02:14 2017-07-16 07:43:39 2017-07-16 10:19:41 2:36:02 1:52:18 0:43:44 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} 3
Failure Reason:

'default_idle_timeout'

fail 1405910 2017-07-16 05:02:14 2017-07-16 07:52:37 2017-07-16 11:42:41 3:50:04 1:50:10 1:59:54 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} 3
Failure Reason:

'default_idle_timeout'

fail 1405914 2017-07-16 05:02:16 2017-07-16 07:52:39 2017-07-16 10:02:39 2:10:00 1:55:08 0:14:52 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} 3
Failure Reason:

'default_idle_timeout'