Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 1447435 2017-07-26 14:52:56 2017-07-26 16:10:36 2017-07-26 16:34:35 0:23:59 0:23:25 0:00:34 smithi master rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
pass 1447436 2017-07-26 14:52:57 2017-07-26 16:10:36 2017-07-26 16:44:36 0:34:00 0:32:10 0:01:50 smithi master rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
pass 1447437 2017-07-26 14:52:57 2017-07-26 16:11:01 2017-07-26 16:37:01 0:26:00 0:22:20 0:03:40 smithi master rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
pass 1447438 2017-07-26 14:52:58 2017-07-26 16:11:11 2017-07-26 16:37:11 0:26:00 0:23:05 0:02:55 smithi master rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} 2
pass 1447439 2017-07-26 14:52:59 2017-07-26 16:12:15 2017-07-26 16:40:15 0:28:00 0:26:23 0:01:37 smithi master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
pass 1447440 2017-07-26 14:52:59 2017-07-26 16:12:15 2017-07-26 16:20:15 0:08:00 0:06:51 0:01:09 smithi master rados/singleton/{all/mon-seesaw.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
pass 1447441 2017-07-26 14:53:00 2017-07-26 16:12:24 2017-07-26 16:40:24 0:28:00 0:25:47 0:02:13 smithi master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} 1
fail 1447442 2017-07-26 14:53:01 2017-07-26 16:12:46 2017-07-26 16:20:45 0:07:59 0:04:41 0:03:18 smithi master rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
Failure Reason:

configuration must contain a dictionary of clients
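Note: teuthology reports this error when a task's configuration supplies its clients as something other than a mapping of client roles. A minimal sketch of the expected shape, assuming a workunit-style task (the role name and script path here are illustrative, not taken from this suite):

tasks:
- workunit:
    clients:
      client.0:              # clients must be a dictionary keyed by role
      - rados/test.sh        # illustrative workunit script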

pass 1447443 2017-07-26 14:53:02 2017-07-26 16:12:46 2017-07-26 16:38:46 0:26:00 0:25:26 0:00:34 smithi master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} 1
pass 1447444 2017-07-26 14:53:02 2017-07-26 16:13:06 2017-07-26 16:37:06 0:24:00 0:23:19 0:00:41 smithi master rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} 1
fail 1447445 2017-07-26 14:53:03 2017-07-26 16:13:48 2017-07-26 16:51:48 0:38:00 0:31:42 0:06:18 smithi master centos rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/none.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} 2
Failure Reason:

"2017-07-26 16:33:03.728135 mon.b mon.0 172.21.15.38:6789/0 1017 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive (PG_AVAILABILITY)" in cluster log