Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 1624507 2017-09-12 23:21:25 2017-09-12 23:21:42 2017-09-12 23:39:41 0:17:59 0:11:10 0:06:49 ovh master ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

Command failed on ovh076 with status 32: 'sudo umount /dev/sdb1'

pass 1624508 2017-09-12 23:21:26 2017-09-12 23:21:45 2017-09-12 23:57:44 0:35:59 0:20:30 0:15:29 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
fail 1624509 2017-09-12 23:21:26 2017-09-12 23:21:44 2017-09-13 00:13:43 0:51:59 0:28:46 0:23:13 ovh master centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed on ovh035 with status 1: 'sudo ceph-create-keys --cluster ceph --id ovh035'

pass 1624510 2017-09-12 23:21:27 2017-09-12 23:21:44 2017-09-13 00:01:43 0:39:59 0:23:19 0:16:40 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} 3
dead 1624511 2017-09-12 23:21:28 2017-09-12 23:21:43 2017-09-13 11:24:21 12:02:38 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} 3
pass 1624512 2017-09-12 23:21:28 2017-09-12 23:21:45 2017-09-12 23:53:43 0:31:58 0:16:53 0:15:05 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} 3
fail 1624513 2017-09-12 23:21:29 2017-09-12 23:21:43 2017-09-12 23:51:42 0:29:59 0:15:20 0:14:39 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
Failure Reason:

Command failed on ovh064 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.85.158:6789,158.69.85.128:6790,158.69.85.128:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

fail 1624514 2017-09-12 23:21:30 2017-09-12 23:21:43 2017-09-12 23:51:42 0:29:59 0:15:14 0:14:45 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
Failure Reason:

Command failed on ovh091 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.85.123:6789,158.69.84.41:6790,158.69.84.41:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

fail 1624515 2017-09-12 23:21:30 2017-09-12 23:21:43 2017-09-12 23:49:42 0:27:59 0:15:15 0:12:44 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
Failure Reason:

Command failed on ovh056 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.85.112:6789,158.69.84.64:6790,158.69.84.64:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

fail 1624516 2017-09-12 23:21:31 2017-09-12 23:21:45 2017-09-12 23:51:44 0:29:59 0:16:33 0:13:26 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
Failure Reason:

Command failed on ovh038 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.85.143:6789,158.69.84.62:6790,158.69.84.62:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

pass 1624517 2017-09-12 23:21:32 2017-09-12 23:21:43 2017-09-12 23:59:42 0:37:59 0:13:59 0:24:00 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} 3
fail 1624518 2017-09-12 23:21:33 2017-09-12 23:21:44 2017-09-13 00:03:42 0:41:58 0:28:38 0:13:20 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
Failure Reason:

"2017-09-12 23:46:45.583658 mon.a mon.1 158.69.84.77:6789/0 148 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

fail 1624519 2017-09-12 23:21:33 2017-09-12 23:21:44 2017-09-13 00:03:43 0:41:59 0:29:37 0:12:22 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} 3
Failure Reason:

"2017-09-12 23:46:23.263497 mon.a mon.0 158.69.84.78:6789/0 150 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1624520 2017-09-12 23:21:34 2017-09-12 23:21:45 2017-09-13 00:07:44 0:45:59 0:32:11 0:13:48 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
Failure Reason:

"2017-09-12 23:46:57.021000 mon.a mon.0 158.69.84.63:6789/0 125 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1624521 2017-09-12 23:21:35 2017-09-12 23:21:45 2017-09-13 00:21:43 0:59:58 0:47:48 0:12:10 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} 3
Failure Reason:

"2017-09-12 23:50:56.335522 mon.a mon.0 158.69.84.46:6789/0 640 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

pass 1624522 2017-09-12 23:21:35 2017-09-12 23:21:44 2017-09-13 00:19:43 0:57:59 0:41:33 0:16:26 ovh master ubuntu 16.04 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
fail 1624523 2017-09-12 23:21:36 2017-09-12 23:21:44 2017-09-12 23:53:42 0:31:58 0:17:44 0:14:14 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} 3
Failure Reason:

Command failed (workunit test cls/test_cls_sdk.sh) on ovh043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-smoke-whitelist TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'

fail 1624524 2017-09-12 23:21:37 2017-09-12 23:21:45 2017-09-13 00:03:44 0:41:59 0:27:50 0:14:09 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} 3
Failure Reason:

"2017-09-12 23:47:11.385390 mon.a mon.0 158.69.84.247:6789/0 141 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1624525 2017-09-12 23:21:37 2017-09-12 23:21:45 2017-09-12 23:55:44 0:33:59 0:19:23 0:14:36 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} 3
Failure Reason:

"2017-09-12 23:49:07.509084 mon.b mon.0 158.69.84.250:6789/0 117 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1624526 2017-09-12 23:21:38 2017-09-12 23:21:45 2017-09-13 00:01:45 0:40:00 0:27:41 0:12:19 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} 3
fail 1624527 2017-09-12 23:21:39 2017-09-12 23:21:43 2017-09-12 23:59:42 0:37:59 0:24:33 0:13:26 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} 3
Failure Reason:

"2017-09-12 23:48:49.329219 mon.a mon.0 158.69.85.0:6789/0 194 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

pass 1624528 2017-09-12 23:21:39 2017-09-12 23:21:44 2017-09-12 23:53:43 0:31:59 0:16:21 0:15:38 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} 3
fail 1624529 2017-09-12 23:21:40 2017-09-12 23:21:45 2017-09-13 00:01:44 0:39:59 0:25:39 0:14:20 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

"2017-09-12 23:53:29.646048 mon.a mon.0 158.69.85.144:6789/0 769 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

pass 1624530 2017-09-12 23:21:41 2017-09-12 23:21:44 2017-09-13 00:01:43 0:39:59 0:27:13 0:12:46 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} 3
fail 1624531 2017-09-12 23:21:41 2017-09-12 23:21:44 2017-09-12 23:49:43 0:27:59 0:15:19 0:12:40 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
Failure Reason:

Command failed on ovh015 with status 110: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rbd --user 0 -p rbd map testimage.client.0 && while test '!' -e /dev/rbd/rbd/testimage.client.0 ; do sleep 1 ; done"

fail 1624532 2017-09-12 23:21:43 2017-09-12 23:21:45 2017-09-12 23:53:44 0:31:59 0:17:11 0:14:48 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} 3
Failure Reason:

Command failed on ovh052 with status 128: 'git clone -b ceph-wip-smoke-whitelist git://git.ceph.com/git/s3-tests.git /home/ubuntu/cephtest/s3-tests'

fail 1624533 2017-09-12 23:21:43 2017-09-12 23:21:46 2017-09-12 23:51:45 0:29:59 0:16:41 0:13:18 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} 3
Failure Reason:

Command failed on ovh047 with status 128: 'git clone -b ceph-wip-smoke-whitelist git://git.ceph.com/git/s3-tests.git /home/ubuntu/cephtest/s3-tests'

pass 1624534 2017-09-12 23:21:45 2017-09-12 23:21:46 2017-09-12 23:55:46 0:34:00 0:19:11 0:14:49 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} 3