Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 1610375 2017-09-08 19:37:02 2017-09-08 19:38:26 2017-09-08 19:56:27 0:18:01 0:10:55 0:07:06 ovh master ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

Command failed on ovh066 with status 32: 'sudo umount /dev/sdb1'

pass 1610376 2017-09-08 19:37:03 2017-09-08 19:38:28 2017-09-08 20:14:28 0:36:00 0:22:33 0:13:27 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
fail 1610377 2017-09-08 19:37:03 2017-09-08 19:38:28 2017-09-08 20:42:29 1:04:01 0:28:50 0:35:11 ovh master centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed on ovh052 with status 1: 'sudo ceph-create-keys --cluster ceph --id ovh052'

pass 1610378 2017-09-08 19:37:04 2017-09-08 19:38:27 2017-09-08 20:26:28 0:48:01 0:32:37 0:15:24 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} 3
dead 1610379 2017-09-08 19:37:05 2017-09-08 19:38:29 2017-09-09 08:11:50 12:33:21 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} 3
pass 1610380 2017-09-08 19:37:05 2017-09-08 19:38:28 2017-09-08 20:42:32 1:04:04 0:48:43 0:15:21 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} 3
fail 1610381 2017-09-08 19:37:06 2017-09-08 19:38:29 2017-09-08 20:06:29 0:28:00 0:14:37 0:13:23 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
Failure Reason:

Command failed on ovh050 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.90.200:6789,158.69.90.148:6790,158.69.90.148:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

dead 1610382 2017-09-08 19:37:07 2017-09-09 02:11:45 2017-09-09 07:40:43 5:28:58 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml}
fail 1610383 2017-09-08 19:37:07 2017-09-08 19:38:26 2017-09-08 20:24:28 0:46:02 0:23:53 0:22:09 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
Failure Reason:

Command failed on ovh093 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.90.244:6789,158.69.90.64:6790,158.69.90.64:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

fail 1610384 2017-09-08 19:37:08 2017-09-08 19:38:30 2017-09-08 20:14:29 0:35:59 0:15:35 0:20:24 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
Failure Reason:

Command failed on ovh057 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.90.153:6789,158.69.90.196:6790,158.69.90.196:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

pass 1610385 2017-09-08 19:37:08 2017-09-08 19:38:29 2017-09-08 20:10:28 0:31:59 0:16:37 0:15:22 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} 3
fail 1610386 2017-09-08 19:37:09 2017-09-08 19:38:27 2017-09-08 20:26:28 0:48:01 0:35:09 0:12:52 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
Failure Reason:

"2017-09-08 19:59:49.671203 mon.a mon.0 158.69.90.23:6789/0 194 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

fail 1610387 2017-09-08 19:37:10 2017-09-08 19:38:29 2017-09-09 03:02:37 7:24:08 6:38:27 0:45:41 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} 3
Failure Reason:

"2017-09-08 20:04:11.772025 mon.a mon.0 158.69.90.137:6789/0 145 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1610388 2017-09-08 19:37:11 2017-09-08 19:38:27 2017-09-08 20:40:28 1:02:01 0:39:08 0:22:53 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
Failure Reason:

"2017-09-08 20:18:31.119159 mon.a mon.0 158.69.90.7:6789/0 880 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 1610389 2017-09-08 19:37:11 2017-09-08 19:38:29 2017-09-09 03:14:37 7:36:08 6:22:10 1:13:58 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} 3
Failure Reason:

"2017-09-08 20:08:56.788438 mon.b mon.0 158.69.90.69:6789/0 111 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1610390 2017-09-08 19:37:12 2017-09-08 19:39:38 2017-09-08 21:10:29 1:30:51 0:33:46 0:57:05 ovh master ubuntu 16.04 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
fail 1610391 2017-09-08 19:37:12 2017-09-08 19:38:26 2017-09-08 20:14:28 0:36:02 0:21:26 0:14:36 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} 3
Failure Reason:

Command failed (workunit test cls/test_cls_sdk.sh) on ovh033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'

fail 1610392 2017-09-08 19:37:13 2017-09-08 19:38:28 2017-09-08 20:28:32 0:50:04 0:35:51 0:14:13 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} 3
Failure Reason:

"2017-09-08 20:02:43.647001 mon.b mon.0 158.69.90.168:6789/0 138 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1610393 2017-09-08 19:37:14 2017-09-08 19:38:30 2017-09-08 20:10:28 0:31:58 0:18:59 0:12:59 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} 3
Failure Reason:

"2017-09-08 20:03:10.025929 mon.b mon.0 158.69.90.140:6789/0 150 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1610394 2017-09-08 19:37:15 2017-09-08 19:38:31 2017-09-08 20:26:31 0:48:00 0:34:23 0:13:37 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} 3
fail 1610395 2017-09-08 19:37:15 2017-09-08 19:38:29 2017-09-08 20:24:28 0:45:59 0:30:20 0:15:39 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} 3
Failure Reason:

"2017-09-08 20:06:39.123713 mon.a mon.0 158.69.90.37:6789/0 194 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

pass 1610396 2017-09-08 19:37:16 2017-09-08 19:38:29 2017-09-08 20:10:32 0:32:03 0:15:55 0:16:08 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} 3
fail 1610397 2017-09-08 19:37:17 2017-09-08 19:38:29 2017-09-08 20:26:28 0:47:59 0:35:00 0:12:59 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

"2017-09-08 20:05:10.276009 mon.a mon.0 158.69.90.2:6789/0 143 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1610398 2017-09-08 19:37:17 2017-09-08 19:38:27 2017-09-08 20:24:28 0:46:01 0:27:25 0:18:36 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} 3
fail 1610399 2017-09-08 19:37:18 2017-09-08 19:38:29 2017-09-08 20:08:29 0:30:00 0:16:37 0:13:23 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
Failure Reason:

Command failed on ovh094 with status 110: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rbd --user 0 -p rbd map testimage.client.0 && while test '!' -e /dev/rbd/rbd/testimage.client.0 ; do sleep 1 ; done"

fail 1610400 2017-09-08 19:37:19 2017-09-08 19:38:28 2017-09-08 20:24:28 0:46:00 0:21:49 0:24:11 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} 3
Failure Reason:

Command failed (s3 tests against rgw) on ovh041 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"

fail 1610401 2017-09-08 19:37:19 2017-09-08 19:38:28 2017-09-08 20:24:28 0:46:00 0:25:16 0:20:44 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} 3
Failure Reason:

Command failed (s3 tests against rgw) on ovh006 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"

pass 1610402 2017-09-08 19:37:20 2017-09-08 19:38:27 2017-09-08 20:26:28 0:48:01 0:21:12 0:26:49 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} 3
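
The 28 result rows above each begin with a status keyword (pass, fail, or dead). A minimal sketch of tallying those statuses in Python; the sample rows below are abbreviated stand-ins for the full lines above, not a complete copy of the run:

```python
from collections import Counter

# Abbreviated sample rows; each real row starts with its status keyword,
# followed by the job ID and timestamps, as in the table above.
rows = [
    "fail 1610375 2017-09-08 19:37:02 ovh master ubuntu 16.04 smoke/1node/...",
    "pass 1610376 2017-09-08 19:37:03 ovh master smoke/basic/...",
    "dead 1610379 2017-09-08 19:37:05 ovh master smoke/basic/...",
]

def tally(lines):
    """Count result rows by their leading status keyword."""
    statuses = Counter()
    for line in lines:
        first = line.split(None, 1)[0]
        if first in ("pass", "fail", "dead"):
            statuses[first] += 1
    return statuses

print(tally(rows))
```

Applied to the full table, the same loop yields the run's overall pass/fail/dead breakdown.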