Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 1610403 2017-09-08 19:38:22 2017-09-08 19:39:04 2017-09-08 19:57:03 0:17:59 0:11:38 0:06:21 ovh master ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

'check health' reached maximum tries (6) after waiting for 60 seconds
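
The 'check health' step polls cluster health and gives up after a fixed number of tries. A minimal sketch of an equivalent polling loop; the 6-try/60-second cadence is inferred from the message above, and the real task's internals may differ:

    # Poll cluster health, roughly as the failed 'check health' step does.
    for i in $(seq 1 6); do
        if sudo ceph health | grep -q HEALTH_OK; then
            echo "cluster healthy"
            break
        fi
        sleep 10
    done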

pass 1610404 2017-09-08 19:38:22 2017-09-08 19:39:04 2017-09-08 20:15:03 0:35:59 0:22:30 0:13:29 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
fail 1610405 2017-09-08 19:38:23 2017-09-08 19:39:05 2017-09-08 21:13:05 1:34:00 0:28:15 1:05:45 ovh master centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed on ovh027 with status 1: 'sudo ceph-create-keys --cluster ceph --id ovh027'
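
ceph-create-keys waits for the local monitor to join quorum before generating the admin and bootstrap keys, so a status-1 failure here usually suggests the mon never formed quorum. A diagnostic sketch, assuming the mon id matches the hostname as in the failing command:

    # Query the monitor's view of quorum via its admin socket, then retry.
    sudo ceph daemon mon.ovh027 mon_status
    sudo ceph-create-keys --cluster ceph --id ovh027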

pass 1610406 2017-09-08 19:38:24 2017-09-08 19:39:04 2017-09-08 20:19:03 0:39:59 0:23:54 0:16:05 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} 3
dead 1610407 2017-09-08 19:38:24 2017-09-08 19:39:05 2017-09-09 08:12:32 12:33:27 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} 3
pass 1610408 2017-09-08 19:38:25 2017-09-08 19:39:06 2017-09-08 21:05:05 1:25:59 0:28:12 0:57:47 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} 3
fail 1610409 2017-09-08 19:38:26 2017-09-08 19:39:04 2017-09-08 21:05:05 1:26:01 0:11:50 1:14:11 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
Failure Reason:

Command failed on ovh050 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.92.154:6789,158.69.92.160:6790,158.69.92.160:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'
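
Status 22 is EINVAL. The three kclient jobs below fail with the same command shape on other hosts. Issued by hand, the mount looks like the sketch below (addresses and secret path taken from the log); one plausible cause of EINVAL is a kernel too old to recognize the norequire_active_mds option, so dropping that option is a quick way to test the theory:

    # Kernel CephFS mount equivalent to the failing command.
    sudo mount -t ceph 158.69.92.154:6789,158.69.92.160:6790,158.69.92.160:6789:/ \
        /home/ubuntu/cephtest/mnt.0 \
        -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds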

fail 1610410 2017-09-08 19:38:27 2017-09-08 19:39:05 2017-09-08 20:07:04 0:27:59 0:16:15 0:11:44 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
Failure Reason:

Command failed on ovh047 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.90.216:6789,158.69.90.249:6790,158.69.90.249:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

fail 1610411 2017-09-08 19:38:28 2017-09-08 19:39:05 2017-09-08 20:39:05 1:00:00 0:13:14 0:46:46 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
Failure Reason:

Command failed on ovh049 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.91.187:6789,158.69.91.206:6790,158.69.91.206:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

fail 1610412 2017-09-08 19:38:29 2017-09-08 19:39:04 2017-09-08 20:07:04 0:28:00 0:13:46 0:14:14 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
Failure Reason:

Command failed on ovh096 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.90.18:6789,158.69.90.38:6790,158.69.90.38:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

pass 1610413 2017-09-08 19:38:31 2017-09-08 19:39:05 2017-09-08 20:37:05 0:58:00 0:16:09 0:41:51 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} 3
fail 1610414 2017-09-08 19:38:31 2017-09-08 19:39:06 2017-09-08 20:49:06 1:10:00 0:27:11 0:42:49 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
Failure Reason:

"2017-09-08 20:32:35.724256 mon.a mon.0 158.69.91.181:6789/0 158 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

fail 1610415 2017-09-08 19:38:32 2017-09-08 19:39:07 2017-09-08 20:59:08 1:20:01 0:42:53 0:37:08 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} 3
Failure Reason:

"2017-09-08 20:26:41.838070 mon.a mon.0 158.69.91.133:6789/0 120 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1610416 2017-09-08 19:38:33 2017-09-08 19:39:06 2017-09-08 20:59:06 1:20:00 0:33:40 0:46:20 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
Failure Reason:

"2017-09-08 20:38:19.192183 mon.a mon.0 158.69.91.183:6789/0 496 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 1610417 2017-09-08 19:38:35 2017-09-08 19:39:05 2017-09-09 03:15:14 7:36:09 6:53:40 0:42:29 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} 3
Failure Reason:

"2017-09-08 20:09:16.859789 mon.b mon.0 158.69.90.71:6789/0 163 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1610418 2017-09-08 19:38:36 2017-09-08 19:39:06 2017-09-08 21:17:06 1:38:00 0:22:03 1:15:57 ovh master ubuntu 16.04 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on ovh016 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'
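
Workunits are plain shell scripts from the ceph qa tree, so a failing one can be rerun outside teuthology. A rough reproduction sketch, assuming a reachable Luminous cluster and a client.0 keyring:

    # Fetch the qa tree at the branch under test and run the workunit directly.
    git clone -b luminous https://github.com/ceph/ceph.git
    cd ceph/qa/workunits
    CEPH_ARGS="--cluster ceph" CEPH_ID=0 timeout 3h ./rados/load-gen-mix.sh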

fail 1610419 2017-09-08 19:38:36 2017-09-08 19:56:39 2017-09-08 20:38:39 0:42:00 0:22:30 0:19:30 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} 3
Failure Reason:

Command failed (workunit test cls/test_cls_sdk.sh) on ovh057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'

fail 1610420 2017-09-08 19:38:37 2017-09-08 19:57:04 2017-09-08 20:45:05 0:48:01 0:27:20 0:20:41 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} 3
Failure Reason:

"2017-09-08 20:26:36.975912 mon.b mon.0 158.69.91.122:6789/0 175 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1610421 2017-09-08 19:38:38 2017-09-08 20:06:40 2017-09-08 20:38:39 0:31:59 0:19:55 0:12:04 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} 3
Failure Reason:

"2017-09-08 20:27:08.229589 mon.b mon.0 158.69.91.144:6789/0 246 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1610422 2017-09-08 19:38:38 2017-09-08 20:07:05 2017-09-08 20:45:05 0:38:00 0:24:39 0:13:21 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} 3
fail 1610423 2017-09-08 19:38:39 2017-09-08 20:07:05 2017-09-08 20:39:05 0:32:00 0:22:03 0:09:57 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} 3
Failure Reason:

"2017-09-08 20:27:51.238218 mon.b mon.0 158.69.91.141:6789/0 184 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

pass 1610424 2017-09-08 19:38:40 2017-09-08 20:08:41 2017-09-08 20:46:40 0:37:59 0:12:37 0:25:22 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} 3
fail 1610425 2017-09-08 19:38:40 2017-09-08 20:10:30 2017-09-08 20:54:30 0:44:00 0:20:27 0:23:33 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

"2017-09-08 20:44:13.646339 mon.b mon.0 158.69.91.35:6789/0 168 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1610426 2017-09-08 19:38:41 2017-09-08 20:10:30 2017-09-08 20:56:30 0:46:00 0:23:11 0:22:49 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} 3
fail 1610427 2017-09-08 19:38:41 2017-09-08 20:10:34 2017-09-08 20:42:33 0:31:59 0:14:07 0:17:52 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
Failure Reason:

Command failed on ovh044 with status 110: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rbd --user 0 -p rbd map testimage.client.0 && while test '!' -e /dev/rbd/rbd/testimage.client.0 ; do sleep 1 ; done"
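
Status 110 is ETIMEDOUT, i.e. the krbd map itself timed out rather than the wait loop. A sketch of the same pattern with the log's unbounded wait made bounded:

    # Map the image, then wait (bounded) for the udev-created device node.
    sudo rbd --user 0 -p rbd map testimage.client.0
    timeout 60 sh -c 'while [ ! -e /dev/rbd/rbd/testimage.client.0 ]; do sleep 1; done'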

fail 1610428 2017-09-08 19:38:42 2017-09-08 20:14:30 2017-09-08 20:50:30 0:36:00 0:14:24 0:21:36 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} 3
Failure Reason:

Command failed on ovh041 with status 128: 'git clone -b ceph-luminous git://git.ceph.com/git/s3-tests.git /home/ubuntu/cephtest/s3-tests'
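
git clone exits 128 when the remote or branch cannot be fetched; both rgw s3tests jobs here fail cloning the ceph-luminous branch of s3-tests from git.ceph.com. A quick existence check, with the URL and branch taken from the log:

    # Confirm the branch exists on the remote before cloning.
    git ls-remote git://git.ceph.com/git/s3-tests.git refs/heads/ceph-luminous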

fail 1610429 2017-09-08 19:38:43 2017-09-08 20:14:30 2017-09-08 20:48:30 0:34:00 0:13:37 0:20:23 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} 3
Failure Reason:

Command failed on ovh005 with status 128: 'git clone -b ceph-luminous git://git.ceph.com/git/s3-tests.git /home/ubuntu/cephtest/s3-tests'

pass 1610430 2017-09-08 19:38:43 2017-09-08 20:14:31 2017-09-08 20:54:31 0:40:00 0:16:01 0:23:59 ovh master smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} 3