Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 1547580 2017-08-21 05:01:36 2017-08-21 05:04:49 2017-08-21 05:22:47 0:17:58 0:13:33 0:04:25 vps master ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

'check health' reached maximum tries (6) after waiting for 60 seconds

pass 1547583 2017-08-21 05:01:37 2017-08-21 05:06:55 2017-08-21 05:48:54 0:41:59 0:27:58 0:14:01 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
fail 1547586 2017-08-21 05:01:38 2017-08-21 05:08:52 2017-08-21 08:38:54 3:30:02 0:30:19 2:59:43 vps master centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

pass 1547589 2017-08-21 05:01:38 2017-08-21 05:09:04 2017-08-21 06:21:02 1:11:58 0:36:07 0:35:51 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} 3
pass 1547592 2017-08-21 05:01:39 2017-08-21 05:09:28 2017-08-21 06:41:28 1:32:00 0:44:46 0:47:14 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} 3
fail 1547595 2017-08-21 05:01:40 2017-08-21 05:09:39 2017-08-21 07:03:39 1:54:00 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm139.front.sepia.ceph.com

pass 1547598 2017-08-21 05:01:40 2017-08-21 05:12:58 2017-08-21 06:57:01 1:44:03 0:24:23 1:19:40 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
pass 1547601 2017-08-21 05:01:41 2017-08-21 05:14:44 2017-08-21 06:28:44 1:14:00 0:52:51 0:21:09 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
pass 1547604 2017-08-21 05:01:42 2017-08-21 05:19:04 2017-08-21 05:55:03 0:35:59 0:24:26 0:11:33 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
fail 1547607 2017-08-21 05:01:43 2017-08-21 05:20:46 2017-08-21 05:40:45 0:19:59 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm143.front.sepia.ceph.com

pass 1547610 2017-08-21 05:01:44 2017-08-21 05:23:00 2017-08-21 05:59:43 0:36:43 0:20:24 0:16:19 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} 3
fail 1547613 2017-08-21 05:01:45 2017-08-21 05:24:00 2017-08-21 07:12:02 1:48:02 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm083.front.sepia.ceph.com

fail 1547617 2017-08-21 05:01:46 2017-08-21 05:26:57 2017-08-21 06:32:57 1:06:00 0:38:42 0:27:18 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} 3
Failure Reason:

"2017-08-21 06:07:31.197440 mon.b mon.0 172.21.2.1:6789/0 109 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1547620 2017-08-21 05:01:46 2017-08-21 05:29:04 2017-08-21 07:46:59 2:17:55 0:51:18 1:26:37 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
Failure Reason:

"2017-08-21 07:23:33.793186 mon.b mon.0 172.21.2.163:6789/0 1198 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 1547623 2017-08-21 05:01:47 2017-08-21 05:29:47 2017-08-21 06:41:42 1:11:55 1:00:20 0:11:35 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} 3
Failure Reason:

"2017-08-21 06:03:23.786912 mon.b mon.0 172.21.2.163:6789/0 186 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1547626 2017-08-21 05:01:48 2017-08-21 05:30:47 2017-08-21 12:07:05 6:36:18 0:25:09 6:11:09 vps master ubuntu 16.04 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on vpm025 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

fail 1547629 2017-08-21 05:01:49 2017-08-21 05:32:46 2017-08-21 06:54:46 1:22:00 0:22:57 0:59:03 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} 3
Failure Reason:

Command failed (workunit test cls/test_cls_sdk.sh) on vpm137 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'

fail 1547633 2017-08-21 05:01:50 2017-08-21 05:37:24 2017-08-21 07:35:28 1:58:04 0:45:39 1:12:25 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} 3
Failure Reason:

"2017-08-21 07:05:14.251073 mon.a mon.0 172.21.2.45:6789/0 183 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1547636 2017-08-21 05:01:50 2017-08-21 05:40:59 2017-08-21 06:50:59 1:10:00 0:23:27 0:46:33 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} 3
Failure Reason:

"2017-08-21 06:42:36.501014 mon.b mon.0 172.21.2.21:6789/0 222 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1547639 2017-08-21 05:01:51 2017-08-21 05:46:56 2017-08-21 07:58:57 2:12:01 0:36:07 1:35:54 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} 3
fail 1547642 2017-08-21 05:01:52 2017-08-21 05:48:48 2017-08-21 08:52:48 3:04:00 0:26:20 2:37:40 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} 3
Failure Reason:

"2017-08-21 08:41:12.716868 mon.a mon.0 172.21.2.77:6789/0 492 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

pass 1547645 2017-08-21 05:01:53 2017-08-21 05:48:58 2017-08-21 06:28:57 0:39:59 0:24:14 0:15:45 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} 3
fail 1547648 2017-08-21 05:01:53 2017-08-21 05:55:38 2017-08-21 08:11:38 2:16:00 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

Could not reconnect to ubuntu@vpm009.front.sepia.ceph.com

pass 1547651 2017-08-21 05:01:54 2017-08-21 05:56:51 2017-08-21 07:02:47 1:05:56 0:29:23 0:36:33 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} 3
pass 1547654 2017-08-21 05:01:55 2017-08-21 06:00:26 2017-08-21 07:56:39 1:56:13 0:35:20 1:20:53 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
fail 1547657 2017-08-21 05:01:56 2017-08-21 06:22:29 2017-08-21 08:56:30 2:34:01 0:19:35 2:14:26 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} 3
Failure Reason:

'default_idle_timeout'

fail 1547660 2017-08-21 05:01:56 2017-08-21 06:24:57 2017-08-21 06:50:57 0:26:00 0:16:53 0:09:07 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} 3
Failure Reason:

'default_idle_timeout'

fail 1547663 2017-08-21 05:01:57 2017-08-21 06:29:18 2017-08-21 07:31:11 1:01:53 0:18:36 0:43:17 vps master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} 3
Failure Reason:

'default_idle_timeout'