User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2017-09-13 05:02:01 | 2017-09-13 05:25:31 | 2017-09-13 17:15:46 | 11:50:15 | smoke | master | ovh | 52d09e8 | 13 | 13 | 2 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1627346 | | 2017-09-13 05:02:15 | 2017-09-13 05:12:47 | 2017-09-13 05:28:47 | 0:16:00 | 0:12:17 | 0:03:43 | ovh | master | ubuntu | 16.04 | smoke/1node/{clusters/{fixed-1.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/ceph-deploy.yaml} | 1 |
Failure Reason: Command failed on ovh045 with status 32: 'sudo umount /dev/sdb1'
pass | 1627347 | | 2017-09-13 05:02:15 | 2017-09-13 05:25:31 | 2017-09-13 06:17:31 | 0:52:00 | 0:21:58 | 0:30:02 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_blogbench.yaml} | 3 |
fail | 1627348 | | 2017-09-13 05:02:16 | 2017-09-13 05:28:57 | 2017-09-13 08:41:00 | 3:12:03 | 0:31:34 | 2:40:29 | ovh | master | centos | 7.3 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/centos_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 |
Failure Reason: Command failed on ovh096 with status 1: 'sudo ceph-create-keys --cluster ceph --id ovh096'
pass | 1627349 | | 2017-09-13 05:02:16 | 2017-09-13 05:43:07 | 2017-09-13 07:13:08 | 1:30:01 | 0:28:03 | 1:01:58 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_fsstress.yaml} | 3 |
dead | 1627350 | | 2017-09-13 05:02:17 | 2017-09-13 05:45:08 | 2017-09-13 17:15:23 | 11:30:15 | | | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_iozone.yaml} | 3 |
pass | 1627351 | | 2017-09-13 05:02:18 | 2017-09-13 05:47:06 | 2017-09-13 06:47:06 | 1:00:00 | 0:16:58 | 0:43:02 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/cfuse_workunit_suites_pjd.yaml} | 3 |
pass | 1627352 | | 2017-09-13 05:02:18 | 2017-09-13 05:47:05 | 2017-09-13 06:17:05 | 0:30:00 | 0:16:54 | 0:13:06 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_direct_io.yaml} | 3 |
pass | 1627353 | | 2017-09-13 05:02:19 | 2017-09-13 05:47:06 | 2017-09-13 06:59:06 | 1:12:00 | 0:43:34 | 0:28:26 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_dbench.yaml} | 3 |
pass | 1627354 | | 2017-09-13 05:02:20 | 2017-09-13 05:47:06 | 2017-09-13 06:25:05 | 0:37:59 | 0:22:45 | 0:15:14 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_fsstress.yaml} | 3 |
pass | 1627355 | | 2017-09-13 05:02:20 | 2017-09-13 05:47:05 | 2017-09-13 06:49:06 | 1:02:01 | 0:15:41 | 0:46:20 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/kclient_workunit_suites_pjd.yaml} | 3 |
pass | 1627356 | | 2017-09-13 05:02:21 | 2017-09-13 05:47:28 | 2017-09-13 06:25:28 | 0:38:00 | 0:16:42 | 0:21:18 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/libcephfs_interface_tests.yaml} | 3 |
fail | 1627357 | | 2017-09-13 05:02:22 | 2017-09-13 05:49:06 | 2017-09-13 07:13:07 | 1:24:01 | 0:30:06 | 0:53:55 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/mon_thrash.yaml} | 3 |
Failure Reason: "2017-09-13 06:56:26.671127 mon.a mon.1 158.69.94.179:6789/0 67 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
fail | 1627358 | | 2017-09-13 05:02:22 | 2017-09-13 05:51:06 | 2017-09-13 06:53:07 | 1:02:01 | 0:28:42 | 0:33:19 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_api_tests.yaml} | 3 |
Failure Reason: "2017-09-13 06:33:48.863105 mon.b mon.0 158.69.93.29:6789/0 120 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
fail | 1627359 | | 2017-09-13 05:02:23 | 2017-09-13 05:51:23 | 2017-09-13 06:33:23 | 0:42:00 | 0:32:37 | 0:09:23 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_bench.yaml} | 3 |
Failure Reason: "2017-09-13 06:15:04.441857 mon.b mon.0 158.69.92.90:6789/0 515 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log
fail | 1627360 | | 2017-09-13 05:02:23 | 2017-09-13 05:53:07 | 2017-09-13 07:25:08 | 1:32:01 | 0:50:28 | 0:41:33 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cache_snaps.yaml} | 3 |
Failure Reason: "2017-09-13 06:52:36.098524 mon.b mon.0 158.69.94.120:6789/0 136 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
pass | 1627361 | | 2017-09-13 05:02:24 | 2017-09-13 05:53:07 | 2017-09-13 07:33:09 | 1:40:02 | 0:36:39 | 1:03:23 | ovh | master | ubuntu | 16.04 | smoke/systemd/{clusters/{fixed-4.yaml openstack.yaml} distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml tasks/systemd.yaml} | 4 |
fail | 1627362 | | 2017-09-13 05:02:25 | 2017-09-13 05:57:32 | 2017-09-13 06:37:32 | 0:40:00 | 0:16:38 | 0:23:22 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_cls_all.yaml} | 3 |
Failure Reason: Command failed (workunit test cls/test_cls_sdk.sh) on ovh041 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=master TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_sdk.sh'
fail | 1627363 | | 2017-09-13 05:02:25 | 2017-09-13 05:59:34 | 2017-09-13 06:37:34 | 0:38:00 | 0:28:16 | 0:09:44 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_ec_snaps.yaml} | 3 |
Failure Reason: "2017-09-13 06:19:08.011394 mon.b mon.0 158.69.93.123:6789/0 127 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
fail | 1627364 | | 2017-09-13 05:02:26 | 2017-09-13 06:01:36 | 2017-09-13 06:39:36 | 0:38:00 | 0:18:19 | 0:19:41 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_python.yaml} | 3 |
Failure Reason: "2017-09-13 06:33:01.514099 mon.b mon.0 158.69.93.38:6789/0 155 : cluster [WRN] Health check failed: application not enabled on 1 pool(s) (POOL_APP_NOT_ENABLED)" in cluster log
pass | 1627365 | | 2017-09-13 05:02:27 | 2017-09-13 06:07:39 | 2017-09-13 07:15:39 | 1:08:00 | 0:27:25 | 0:40:35 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rados_workunit_loadgen_mix.yaml} | 3 |
fail | 1627366 | | 2017-09-13 05:02:27 | 2017-09-13 06:13:22 | 2017-09-13 07:41:23 | 1:28:01 | 0:23:21 | 1:04:40 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_api_tests.yaml} | 3 |
Failure Reason: "2017-09-13 07:28:46.028946 mon.b mon.0 158.69.95.16:6789/0 212 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log
pass | 1627367 | | 2017-09-13 05:02:28 | 2017-09-13 06:15:27 | 2017-09-13 06:51:27 | 0:36:00 | 0:19:55 | 0:16:05 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_cli_import_export.yaml} | 3 |
fail | 1627368 | | 2017-09-13 05:02:29 | 2017-09-13 06:15:39 | 2017-09-13 07:09:39 | 0:54:00 | 0:25:40 | 0:28:20 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_fsx.yaml} | 3 |
Failure Reason: "2017-09-13 06:57:47.122887 mon.a mon.0 158.69.94.18:6789/0 119 : cluster [WRN] Health check failed: noscrub flag(s) set (OSDMAP_FLAGS)" in cluster log
pass | 1627369 | | 2017-09-13 05:02:29 | 2017-09-13 06:17:15 | 2017-09-13 07:27:16 | 1:10:01 | 0:26:23 | 0:43:38 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_python_api_tests.yaml} | 3 |
dead | 1627370 | | 2017-09-13 05:02:30 | 2017-09-13 06:17:32 | 2017-09-13 17:15:46 | 10:58:14 | | | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rbd_workunit_suites_iozone.yaml} | 3 |
fail | 1627371 | | 2017-09-13 05:02:31 | 2017-09-13 06:21:15 | 2017-09-13 07:49:16 | 1:28:01 | 0:24:23 | 1:03:38 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_ec_s3tests.yaml} | 3 |
Failure Reason: Command failed (s3 tests against rgw) on ovh068 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"
fail | 1627372 | | 2017-09-13 05:02:31 | 2017-09-13 06:25:12 | 2017-09-13 07:49:13 | 1:24:01 | 0:24:49 | 0:59:12 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_s3tests.yaml} | 3 |
Failure Reason: Command failed (s3 tests against rgw) on ovh044 with status 1: "S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto.cfg /home/ubuntu/cephtest/s3-tests/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/s3-tests -v -a '!fails_on_rgw,!lifecycle'"
pass | 1627373 | | 2017-09-13 05:02:32 | 2017-09-13 06:25:12 | 2017-09-13 07:31:13 | 1:06:01 | 0:18:26 | 0:47:35 | ovh | master | | | smoke/basic/{clusters/{fixed-3-cephfs.yaml openstack.yaml} objectstore/bluestore.yaml tasks/rgw_swift.yaml} | 3 |