Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 1733811 2017-10-13 20:52:43 2017-10-13 20:52:57 2017-10-13 21:10:56 0:17:59 0:12:30 0:05:29 ovh wip-daemon-helper-systemd ubuntu 16.04 smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} 1
Failure Reason:

'check health' reached maximum tries (6) after waiting for 60 seconds

pass 1733812 2017-10-13 20:52:44 2017-10-13 20:52:58 2017-10-13 21:32:56 0:39:58 0:18:28 0:21:30 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} 3
fail 1733813 2017-10-13 20:52:44 2017-10-13 20:52:58 2017-10-13 22:04:57 1:11:59 0:29:32 0:42:27 ovh wip-daemon-helper-systemd centos 7.3 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed on ovh016 with status 1: 'sudo ceph-create-keys --cluster ceph --id ovh016'

pass 1733814 2017-10-13 20:52:45 2017-10-13 20:52:58 2017-10-13 21:30:57 0:37:59 0:22:40 0:15:19 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} 3
dead 1733815 2017-10-13 20:52:45 2017-10-13 20:52:58 2017-10-14 08:55:29 12:02:31 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} 3
pass 1733816 2017-10-13 20:52:46 2017-10-13 20:52:58 2017-10-13 21:44:57 0:51:59 0:15:19 0:36:40 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} 3
fail 1733817 2017-10-13 20:52:47 2017-10-13 20:52:58 2017-10-13 21:20:57 0:27:59 0:14:51 0:13:08 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} 3
Failure Reason:

Command failed on ovh071 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.91.220:6789,158.69.91.133:6790,158.69.91.133:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

fail 1733818 2017-10-13 20:52:47 2017-10-13 20:52:58 2017-10-13 21:22:56 0:29:58 0:16:18 0:13:40 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} 3
Failure Reason:

Command failed on ovh069 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.91.212:6789,158.69.90.96:6790,158.69.90.96:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

fail 1733819 2017-10-13 20:52:48 2017-10-13 20:52:57 2017-10-13 21:22:56 0:29:59 0:16:40 0:13:19 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} 3
Failure Reason:

Command failed on ovh014 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.91.10:6789,158.69.91.205:6790,158.69.91.205:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

fail 1733820 2017-10-13 20:52:49 2017-10-13 20:52:57 2017-10-13 21:20:56 0:27:59 0:15:14 0:12:45 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} 3
Failure Reason:

Command failed on ovh085 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 158.69.90.98:6789,158.69.91.22:6790,158.69.91.22:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'

pass 1733821 2017-10-13 20:52:49 2017-10-13 20:52:58 2017-10-13 21:42:58 0:50:00 0:14:14 0:35:46 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} 3
fail 1733822 2017-10-13 20:52:50 2017-10-13 20:52:57 2017-10-13 21:34:57 0:42:00 0:29:47 0:12:13 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} 3
Failure Reason:

"2017-10-13 21:18:53.902943 mon.a mon.0 158.69.90.69:6789/0 4 : cluster [WRN] overall HEALTH_WARN 1 cache pools are missing hit_sets; 1/3 mons down, quorum b,c" in cluster log

fail 1733823 2017-10-13 20:52:50 2017-10-13 20:52:58 2017-10-13 21:34:58 0:42:00 0:29:20 0:12:40 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} 3
Failure Reason:

"2017-10-13 21:16:39.478542 mon.a mon.0 158.69.91.100:6789/0 122 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1733824 2017-10-13 20:52:51 2017-10-13 20:52:58 2017-10-13 21:40:57 0:47:59 0:34:55 0:13:04 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} 3
Failure Reason:

"2017-10-13 21:29:12.959474 mon.a mon.0 158.69.90.81:6789/0 2360 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 1733825 2017-10-13 20:52:52 2017-10-13 20:52:58 2017-10-13 22:06:58 1:14:00 1:00:54 0:13:06 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} 3
Failure Reason:

"2017-10-13 21:19:20.360448 mon.b mon.0 158.69.90.79:6789/0 136 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 1733826 2017-10-13 20:52:52 2017-10-13 20:52:58 2017-10-13 21:32:57 0:39:59 0:23:36 0:16:23 ovh wip-daemon-helper-systemd ubuntu 16.04 smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} 4
Failure Reason:

Command failed (workunit test rados/load-gen-mix.sh) on ovh063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/load-gen-mix.sh'

fail 1733827 2017-10-13 20:52:53 2017-10-13 20:52:58 2017-10-13 21:44:57 0:51:59 0:16:52 0:35:07 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} 3
Failure Reason:

Command failed (workunit test cls/test_cls_sdk.sh) on ovh085 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=luminous TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'

fail 1733828 2017-10-13 20:52:54 2017-10-13 20:52:58 2017-10-13 21:34:57 0:41:59 0:29:34 0:12:25 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} 3
Failure Reason:

"2017-10-13 21:19:03.935943 mon.a mon.0 158.69.91.127:6789/0 174 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

fail 1733829 2017-10-13 20:52:54 2017-10-13 20:52:58 2017-10-13 21:24:56 0:31:58 0:19:54 0:12:04 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} 3
Failure Reason:

"2017-10-13 21:18:10.403347 mon.a mon.0 158.69.90.75:6789/0 239 : cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1733830 2017-10-13 20:52:55 2017-10-13 20:52:58 2017-10-13 21:52:58 1:00:00 0:25:11 0:34:49 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} 3
fail 1733831 2017-10-13 20:52:56 2017-10-13 20:52:58 2017-10-13 21:34:57 0:41:59 0:21:02 0:20:57 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} 3
Failure Reason:

"2017-10-13 21:25:00.816695 mon.b mon.0 158.69.91.37:6789/0 490 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

pass 1733832 2017-10-13 20:52:57 2017-10-13 20:52:58 2017-10-13 21:22:58 0:30:00 0:16:48 0:13:12 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} 3
fail 1733833 2017-10-13 20:52:57 2017-10-13 20:52:59 2017-10-13 21:36:59 0:44:00 0:21:01 0:22:59 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} 3
Failure Reason:

"2017-10-13 21:25:40.336050 mon.a mon.0 158.69.91.35:6789/0 199 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log

pass 1733834 2017-10-13 20:52:58 2017-10-13 20:52:59 2017-10-13 21:32:59 0:40:00 0:26:53 0:13:07 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} 3
fail 1733835 2017-10-13 20:52:59 2017-10-13 20:53:00 2017-10-13 21:23:00 0:30:00 0:18:40 0:11:20 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} 3
Failure Reason:

Command failed on ovh096 with status 110: "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rbd --user 0 -p rbd map testimage.client.0 && while test '!' -e /dev/rbd/rbd/testimage.client.0 ; do sleep 1 ; done"

fail 1733836 2017-10-13 20:52:59 2017-10-13 20:53:00 2017-10-13 21:37:00 0:44:00 0:22:51 0:21:09 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} 3
Failure Reason:

Command failed (s3 tests against rgw) on ovh051 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"

fail 1733837 2017-10-13 20:53:00 2017-10-13 20:53:01 2017-10-13 21:53:01 1:00:00 0:22:22 0:37:38 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} 3
Failure Reason:

Command failed (s3 tests against rgw) on ovh071 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"

pass 1733838 2017-10-13 20:53:01 2017-10-13 20:53:02 2017-10-13 21:49:02 0:56:00 0:16:36 0:39:24 ovh wip-daemon-helper-systemd smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} 3