User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sage | 2017-06-27 04:00:46 | 2017-06-27 04:46:10 | 2017-06-27 17:51:42 | 13:05:32 | rados | wip-health | smithi | b8af48e | 13 | 193 | 6 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 1329975 | 2017-06-27 04:01:25 | 2017-06-27 04:46:10 | 2017-06-27 05:34:10 | 0:48:00 | 0:40:04 | 0:07:56 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason: "2017-06-27 04:59:38.376644 mon.a mon.0 172.21.15.45:6789/0 1502 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 2 pgs stuck unclean" in cluster log
fail | 1329976 | 2017-06-27 04:01:25 | 2017-06-27 04:46:10 | 2017-06-27 05:16:10 | 0:30:00 | 0:26:40 | 0:03:20 | smithi | master | centos | 7.3 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-btrfs.yaml rados.yaml supported/centos_latest.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: "2017-06-27 05:02:49.189886 mon.b mon.0 172.21.15.129:6789/0 4934 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 14 pgs stuck inactive" in cluster log
fail | 1329977 | 2017-06-27 04:01:26 | 2017-06-27 04:46:10 | 2017-06-27 05:14:10 | 0:28:00 | 0:24:16 | 0:03:44 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason: "2017-06-27 04:51:37.607432 mon.a mon.0 172.21.15.2:6789/0 20 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1329978 | 2017-06-27 04:01:27 | 2017-06-27 04:46:21 | 2017-06-27 05:02:20 | 0:15:59 | 0:08:54 | 0:07:05 | smithi | master | rados/singleton/{all/divergent_priors.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 | |||
Failure Reason: "2017-06-27 04:52:48.124558 mon.a mon.0 172.21.15.83:6789/0 79 : cluster [WRN] HEALTH_WARN OSD_FLAGS: noout flag(s) set" in cluster log
fail | 1329979 | 2017-06-27 04:01:27 | 2017-06-27 04:46:41 | 2017-06-27 05:08:40 | 0:21:59 | 0:14:15 | 0:07:44 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |||
Failure Reason: "2017-06-27 04:54:10.978213 mon.b mon.0 172.21.15.80:6789/0 22 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1329980 | 2017-06-27 04:01:28 | 2017-06-27 04:47:44 | 2017-06-27 05:33:44 | 0:46:00 | 0:34:27 | 0:11:33 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |
Failure Reason: "2017-06-27 05:04:36.395880 mon.b mon.0 172.21.15.155:6789/0 1290 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 6 pgs incomplete" in cluster log
fail | 1329981 | 2017-06-27 04:01:29 | 2017-06-27 04:48:10 | 2017-06-27 05:34:10 | 0:46:00 | 0:27:40 | 0:18:20 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 1329982 | 2017-06-27 04:01:29 | 2017-06-27 04:48:11 | 2017-06-27 05:18:11 | 0:30:00 | 0:23:25 | 0:06:35 | smithi | master | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:01:37.277949 mon.a mon.0 172.21.15.130:6789/0 713 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 8 pgs incomplete" in cluster log
fail | 1329983 | 2017-06-27 04:01:30 | 2017-06-27 04:49:32 | 2017-06-27 07:07:34 | 2:18:02 | 0:27:40 | 1:50:22 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
fail | 1329984 | 2017-06-27 04:01:31 | 2017-06-27 04:50:10 | 2017-06-27 05:08:10 | 0:18:00 | 0:09:15 | 0:08:45 | smithi | master | rados/basic-luminous/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} objectstore/bluestore-comp.yaml rados.yaml scrub_test.yaml} | 2 | |||
Failure Reason: "2017-06-27 04:58:01.131872 mon.a mon.0 172.21.15.100:6789/0 123 : cluster [ERR] HEALTH_ERR OSD_SCRUB_ERRORS: 2 scrub errors" in cluster log
fail | 1329985 | 2017-06-27 04:01:31 | 2017-06-27 04:50:46 | 2017-06-27 05:04:46 | 0:14:00 | 0:10:32 | 0:03:28 | smithi | master | rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/bluestore-comp.yaml tasks/failover.yaml} | 2 | |||
Failure Reason: "2017-06-27 04:56:17.966216 mon.b mon.0 172.21.15.35:6789/0 25 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1329986 | 2017-06-27 04:01:32 | 2017-06-27 04:51:08 | 2017-06-27 05:03:07 | 0:11:59 | 0:07:25 | 0:04:34 | smithi | master | rados/objectstore/alloc-hint.yaml | 1 | |||
Failure Reason: "2017-06-27 04:54:48.605534 mon.a mon.0 172.21.15.62:6789/0 59 : cluster [WRN] HEALTH_WARN PG_PEERING: 8 pgs peering" in cluster log
fail | 1329987 | 2017-06-27 04:01:32 | 2017-06-27 04:52:05 | 2017-06-27 05:06:06 | 0:14:01 | 0:07:18 | 0:06:43 | smithi | master | rados/rest/mgr-restful.yaml | 1 | |||
Failure Reason: "2017-06-27 04:57:58.734631 mon.a mon.0 172.21.15.88:6789/0 57 : cluster [WRN] HEALTH_WARN PG_PEERING: 8 pgs peering" in cluster log
fail | 1329988 | 2017-06-27 04:01:33 | 2017-06-27 04:52:05 | 2017-06-27 05:08:04 | 0:15:59 | 0:12:39 | 0:03:20 | smithi | master | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 | |||
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi025 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-health TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 1329989 | 2017-06-27 04:01:34 | 2017-06-27 04:52:05 | 2017-06-27 05:06:05 | 0:14:00 | 0:11:00 | 0:03:00 | smithi | master | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml} | 1 | |||
Failure Reason: "2017-06-27 04:59:17.794209 mon.a mon.0 172.21.15.31:6789/0 162 : cluster [ERR] HEALTH_ERR OSD_FULL: 1 full osd(s)" in cluster log
fail | 1329990 | 2017-06-27 04:01:34 | 2017-06-27 04:52:22 | 2017-06-27 05:10:21 | 0:17:59 | 0:06:43 | 0:11:16 | smithi | master | rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-luminous.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} thrashosds-health.yaml} | 3 | |||
Failure Reason: Command failed on smithi010 with status 22: 'sudo ceph osd new 0b1e14aa-8b86-45cb-8d42-34bc9ca3c001 3'
fail | 1329991 | 2017-06-27 04:01:35 | 2017-06-27 04:52:40 | 2017-06-27 05:32:40 | 0:40:00 | 0:35:20 | 0:04:40 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:17:46.071742 mon.b mon.0 172.21.15.21:6789/0 6134 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log
fail | 1329992 | 2017-06-27 04:01:36 | 2017-06-27 04:54:07 | 2017-06-27 05:23:26 | 0:29:19 | 0:20:43 | 0:08:36 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason: "2017-06-27 05:03:31.889292 mon.a mon.0 172.21.15.68:6789/0 585 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 3 pgs incomplete" in cluster log
fail | 1329993 | 2017-06-27 04:01:36 | 2017-06-27 04:54:07 | 2017-06-27 05:12:21 | 0:18:14 | 0:11:57 | 0:06:17 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_python.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:01:56.446061 mon.b mon.0 172.21.15.5:6789/0 21 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1329994 | 2017-06-27 04:01:37 | 2017-06-27 04:54:07 | 2017-06-27 05:06:07 | 0:12:00 | 0:08:17 | 0:03:43 | smithi | master | rados/singleton/{all/divergent_priors2.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
Failure Reason: "2017-06-27 05:00:12.924953 mon.a mon.0 172.21.15.9:6789/0 73 : cluster [WRN] HEALTH_WARN OSD_FLAGS: noout flag(s) set" in cluster log
fail | 1329995 | 2017-06-27 04:01:38 | 2017-06-27 04:54:07 | 2017-06-27 05:22:13 | 0:28:06 | 0:21:39 | 0:06:27 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:07:52.718902 mon.a mon.0 172.21.15.55:6789/0 2219 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 1 pgs stuck unclean" in cluster log
fail | 1329996 | 2017-06-27 04:01:38 | 2017-06-27 04:54:09 | 2017-06-27 05:14:09 | 0:20:00 | 0:15:06 | 0:04:54 | smithi | master | ubuntu | 14.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/sync-many.yaml workloads/pool-create-delete.yaml} | 2 | |
Failure Reason: "2017-06-27 05:00:04.722962 mon.b mon.0 172.21.15.163:6789/0 93 : cluster [WRN] HEALTH_WARN TOO_FEW_PGS: too few PGs per OSD (1 < min 2)" in cluster log
fail | 1329997 | 2017-06-27 04:01:39 | 2017-06-27 04:54:46 | 2017-06-27 05:20:46 | 0:26:00 | 0:18:26 | 0:07:34 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:04:46.595412 mon.a mon.0 172.21.15.150:6789/0 24 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1329998 | 2017-06-27 04:01:39 | 2017-06-27 04:57:44 | 2017-06-27 05:37:28 | 0:39:44 | 0:31:56 | 0:07:48 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:13:13.856161 mon.b mon.0 172.21.15.143:6789/0 1673 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 2 pgs stuck unclean" in cluster log
pass | 1329999 | 2017-06-27 04:01:40 | 2017-06-27 04:57:44 | 2017-06-27 05:09:25 | 0:11:41 | 0:06:45 | 0:04:56 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/dump-stuck.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
fail | 1330000 | 2017-06-27 04:01:41 | 2017-06-27 04:57:44 | 2017-06-27 05:17:25 | 0:19:41 | 0:11:56 | 0:07:45 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason: "2017-06-27 05:04:11.534970 mon.b mon.0 172.21.15.47:6789/0 22 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1330001 | 2017-06-27 04:01:41 | 2017-06-27 04:57:44 | 2017-06-27 05:15:27 | 0:17:43 | 0:10:25 | 0:07:18 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:03:52.944688 mon.b mon.0 172.21.15.157:6789/0 19 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1330002 | 2017-06-27 04:01:42 | 2017-06-27 04:58:18 | 2017-06-27 05:16:18 | 0:18:00 | 0:12:18 | 0:05:42 | smithi | master | rados/thrash-luminous/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/redirect_set_object.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:06:01.156724 mon.a mon.0 172.21.15.19:6789/0 24 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1330003 | 2017-06-27 04:01:43 | 2017-06-27 04:58:18 | 2017-06-27 05:56:18 | 0:58:00 | 0:52:10 | 0:05:50 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:14:31.749619 mon.b mon.0 172.21.15.22:6789/0 1543 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 1 pgs stuck unclean" in cluster log
fail | 1330004 | 2017-06-27 04:01:43 | 2017-06-27 05:00:21 | 2017-06-27 05:12:19 | 0:11:58 | 0:07:23 | 0:04:35 | smithi | master | rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml} | 1 | |||
Failure Reason: "2017-06-27 05:06:16.874026 mon.a mon.0 172.21.15.29:6789/0 134 : cluster [WRN] HEALTH_WARN OSD_CACHE_NO_HIT_SET: 1 cache pools are missing hit_sets" in cluster log
fail | 1330005 | 2017-06-27 04:01:44 | 2017-06-27 05:00:21 | 2017-06-27 05:16:20 | 0:15:59 | 0:09:35 | 0:06:24 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:05:00.917426 mon.a mon.0 172.21.15.65:6789/0 18 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1330006 | 2017-06-27 04:01:45 | 2017-06-27 05:01:14 | 2017-06-27 05:21:14 | 0:20:00 | 0:14:04 | 0:05:56 | smithi | master | ubuntu | 14.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/rados_stress_watch.yaml} | 2 | |
Failure Reason: "2017-06-27 05:06:30.937164 mon.a mon.0 172.21.15.62:6789/0 18 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1330007 | 2017-06-27 04:01:45 | 2017-06-27 05:01:55 | 2017-06-27 05:31:56 | 0:30:01 | 0:26:45 | 0:03:16 | smithi | master | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | |||
Failure Reason: "2017-06-27 05:08:04.142928 mon.a mon.0 172.21.15.18:6789/0 157 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 16 pgs incomplete" in cluster log
fail | 1330008 | 2017-06-27 04:01:46 | 2017-06-27 05:02:00 | 2017-06-27 05:28:00 | 0:26:00 | 0:22:30 | 0:03:30 | smithi | master | rados/objectstore/ceph_objectstore_tool.yaml | 1 | |||
Failure Reason: "2017-06-27 05:14:05.740095 mon.a mon.0 172.21.15.144:6789/0 311 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 2 pgs stuck unclean" in cluster log
fail | 1330009 | 2017-06-27 04:01:47 | 2017-06-27 05:02:01 | 2017-06-27 05:44:01 | 0:42:00 | 0:35:00 | 0:07:00 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason: "2017-06-27 05:24:49.314441 mon.b mon.0 172.21.15.71:6789/0 6242 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log
fail | 1330010 | 2017-06-27 04:01:48 | 2017-06-27 05:02:06 | 2017-06-27 05:34:06 | 0:32:00 | 0:28:08 | 0:03:52 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:10:30.316399 mon.b mon.0 172.21.15.110:6789/0 808 : cluster [ERR] HEALTH_ERR OSD_OUT_OF_ORDER_FULL: full ratio(s) out of order" in cluster log
fail | 1330011 | 2017-06-27 04:01:48 | 2017-06-27 05:02:08 | 2017-06-27 05:38:09 | 0:36:01 | 0:32:19 | 0:03:42 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:06:41.690123 mon.a mon.0 172.21.15.112:6789/0 25 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1330012 | 2017-06-27 04:01:49 | 2017-06-27 05:02:11 | 2017-06-27 05:44:11 | 0:42:00 | 0:28:28 | 0:13:32 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml supported/ubuntu_14.04.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: "2017-06-27 05:15:55.508209 mon.b mon.0 172.21.15.49:6789/0 851 : cluster [ERR] HEALTH_ERR OSD_OUT_OF_ORDER_FULL: full ratio(s) out of order" in cluster log
fail | 1330013 | 2017-06-27 04:01:50 | 2017-06-27 05:02:13 | 2017-06-27 05:30:12 | 0:27:59 | 0:24:39 | 0:03:20 | smithi | master | rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 | |||
Failure Reason: "2017-06-27 05:14:45.047390 mon.a mon.0 172.21.15.151:6789/0 512 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 12 pgs stuck unclean" in cluster log
dead | 1330014 | 2017-06-27 04:01:50 | 2017-06-27 05:02:15 | 2017-06-27 17:09:10 | 12:06:55 | smithi | master | rados/monthrash/{ceph.yaml clusters/9-mons.yaml d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/sync.yaml workloads/rados_5925.yaml} | 2 | |||||
fail | 1330015 | 2017-06-27 04:01:51 | 2017-06-27 05:02:22 | 2017-06-27 05:20:21 | 0:17:59 | 0:13:37 | 0:04:22 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:08:02.848654 mon.b mon.0 172.21.15.13:6789/0 19 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1330016 | 2017-06-27 04:01:51 | 2017-06-27 05:02:24 | 2017-06-27 05:24:25 | 0:22:01 | 0:05:59 | 0:16:02 | smithi | master | ubuntu | 14.04 | rados/multimon/{clusters/3.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/mon_clock_no_skews.yaml} | 2 | |
Failure Reason: 'timechecks'
fail | 1330017 | 2017-06-27 04:01:52 | 2017-06-27 05:02:33 | 2017-06-27 05:26:33 | 0:24:00 | 0:10:40 | 0:13:20 | smithi | master | ubuntu | 14.04 | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 | |
Failure Reason: "2017-06-27 05:16:11.830470 mon.a mon.0 172.21.15.53:6789/0 18 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1330018 | 2017-06-27 04:01:53 | 2017-06-27 05:02:35 | 2017-06-27 05:44:34 | 0:41:59 | 0:26:26 | 0:15:33 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason: "2017-06-27 05:17:44.893701 mon.b mon.0 172.21.15.163:6789/0 23 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1330019 | 2017-06-27 04:01:53 | 2017-06-27 05:03:24 | 2017-06-27 05:37:23 | 0:33:59 | 0:29:44 | 0:04:15 | smithi | master | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:10:46.445025 mon.b mon.0 172.21.15.106:6789/0 553 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 6 pgs incomplete" in cluster log
fail | 1330020 | 2017-06-27 04:01:54 | 2017-06-27 05:03:36 | 2017-06-27 05:19:37 | 0:16:01 | 0:11:49 | 0:04:12 | smithi | master | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |||
Failure Reason: "2017-06-27 05:10:18.140866 mon.b mon.0 172.21.15.1:6789/0 188 : cluster [WRN] HEALTH_WARN TOO_FEW_PGS: too few PGs per OSD (1 < min 2)" in cluster log
fail | 1330021 | 2017-06-27 04:01:55 | 2017-06-27 05:03:59 | 2017-06-27 05:42:00 | 0:38:01 | 0:31:51 | 0:06:10 | smithi | master | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/few.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:13:26.398211 mon.b mon.0 172.21.15.35:6789/0 568 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 2 pgs incomplete" in cluster log
fail | 1330022 | 2017-06-27 04:01:55 | 2017-06-27 05:04:07 | 2017-06-27 05:20:06 | 0:15:59 | 0:11:06 | 0:04:53 | smithi | master | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
Failure Reason: "2017-06-27 05:10:58.102975 mon.a mon.0 172.21.15.12:6789/0 23 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1330023 | 2017-06-27 04:01:56 | 2017-06-27 05:04:07 | 2017-06-27 05:26:06 | 0:21:59 | 0:18:01 | 0:03:58 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:10:11.123661 mon.b mon.0 172.21.15.96:6789/0 617 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 2 pgs incomplete" in cluster log
fail | 1330024 | 2017-06-27 04:01:57 | 2017-06-27 05:04:07 | 2017-06-27 05:16:06 | 0:11:59 | 0:07:18 | 0:04:41 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_striper.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:09:18.728302 mon.b mon.0 172.21.15.88:6789/0 19 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1330025 | 2017-06-27 04:01:57 | 2017-06-27 05:04:12 | 2017-06-27 05:32:12 | 0:28:00 | 0:24:57 | 0:03:03 | smithi | master | rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
Failure Reason: "2017-06-27 05:16:43.028494 mon.a mon.0 172.21.15.20:6789/0 336 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 12 pgs stuck unclean" in cluster log
fail | 1330026 | 2017-06-27 04:01:58 | 2017-06-27 05:04:38 | 2017-06-27 06:20:39 | 1:16:01 | 1:11:50 | 0:04:11 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason: reached maximum tries (105) after waiting for 630 seconds
fail | 1330027 | 2017-06-27 04:01:59 | 2017-06-27 05:04:39 | 2017-06-27 05:28:38 | 0:23:59 | 0:20:18 | 0:03:41 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |||
Failure Reason: "2017-06-27 05:18:21.626677 mon.a mon.0 172.21.15.9:6789/0 2079 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log
pass | 1330028 | 2017-06-27 04:01:59 | 2017-06-27 05:04:38 | 2017-06-27 05:10:37 | 0:05:59 | 0:04:59 | 0:01:00 | smithi | master | rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml} | 1 | |||
fail | 1330029 | 2017-06-27 04:02:00 | 2017-06-27 05:04:38 | 2017-06-27 05:48:38 | 0:44:00 | 0:29:00 | 0:15:00 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:26:45.427435 mon.b mon.0 172.21.15.65:6789/0 3271 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 1 pgs stuck unclean" in cluster log |
||||||||||||||
fail | 1330030 | 2017-06-27 04:02:00 | 2017-06-27 05:04:47 | 2017-06-27 05:20:50 | 0:16:03 | 0:10:59 | 0:05:04 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:09:45.687432 mon.a mon.0 172.21.15.89:6789/0 22 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330031 | 2017-06-27 04:02:01 | 2017-06-27 05:05:52 | 2017-06-27 05:23:52 | 0:18:00 | 0:11:07 | 0:06:53 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/mon-config-keys.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
Failure Reason:
"2017-06-27 05:12:49.560567 mon.a mon.0 172.21.15.160:6789/0 46 : cluster [WRN] HEALTH_WARN PG_PEERING: 4 pgs peering" in cluster log |
||||||||||||||
pass | 1330032 | 2017-06-27 04:02:02 | 2017-06-27 05:05:52 | 2017-06-27 05:13:52 | 0:08:00 | 0:06:29 | 0:01:31 | smithi | master | rados/objectstore/filejournal.yaml | 1 | |||
fail | 1330033 | 2017-06-27 04:02:02 | 2017-06-27 05:06:02 | 2017-06-27 05:34:01 | 0:27:59 | 0:23:09 | 0:04:50 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:20:55.588704 mon.b mon.0 172.21.15.25:6789/0 1513 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 24 pgs stuck unclean" in cluster log |
||||||||||||||
fail | 1330034 | 2017-06-27 04:02:03 | 2017-06-27 05:06:06 | 2017-06-27 05:20:06 | 0:14:00 | 0:10:36 | 0:03:24 | smithi | master | rados/thrash-luminous/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml workloads/redirect.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:10:47.693480 mon.b mon.0 172.21.15.94:6789/0 21 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330035 | 2017-06-27 04:02:04 | 2017-06-27 05:06:07 | 2017-06-27 05:42:08 | 0:36:01 | 0:30:13 | 0:05:48 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:17:30.501642 mon.b mon.0 172.21.15.46:6789/0 1319 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 6 pgs stuck unclean" in cluster log |
||||||||||||||
dead | 1330036 | 2017-06-27 04:02:04 | 2017-06-27 05:06:09 | 2017-06-27 17:12:04 | 12:05:55 | smithi | master | rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml} | 2 | |||||
fail | 1330037 | 2017-06-27 04:02:05 | 2017-06-27 05:08:40 | 2017-06-27 05:24:40 | 0:16:00 | 0:09:53 | 0:06:07 | smithi | master | rados/basic-luminous/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} objectstore/bluestore.yaml rados.yaml scrub_test.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:14:47.531366 mon.b mon.0 172.21.15.14:6789/0 125 : cluster [ERR] HEALTH_ERR OSD_SCRUB_ERRORS: 2 scrub errors" in cluster log |
||||||||||||||
fail | 1330038 | 2017-06-27 04:02:05 | 2017-06-27 05:08:40 | 2017-06-27 05:22:41 | 0:14:01 | 0:09:38 | 0:04:23 | smithi | master | rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/bluestore.yaml tasks/failover.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:14:09.852584 mon.b mon.0 172.21.15.33:6789/0 27 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330039 | 2017-06-27 04:02:06 | 2017-06-27 05:08:42 | 2017-06-27 05:24:41 | 0:15:59 | 0:12:14 | 0:03:45 | smithi | master | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi154 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-health TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
||||||||||||||
fail | 1330040 | 2017-06-27 04:02:07 | 2017-06-27 05:09:27 | 2017-06-27 05:31:26 | 0:21:59 | 0:13:13 | 0:08:46 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:17:55.722077 mon.b mon.0 172.21.15.69:6789/0 23 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330041 | 2017-06-27 04:02:08 | 2017-06-27 05:10:03 | 2017-06-27 05:42:04 | 0:32:01 | 0:28:55 | 0:03:06 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_workunit_loadgen_big.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:16:26.078819 mon.a mon.0 172.21.15.132:6789/0 110 : cluster [WRN] HEALTH_WARN TOO_FEW_PGS: too few PGs per OSD (1 < min 2)" in cluster log |
||||||||||||||
pass | 1330042 | 2017-06-27 04:02:08 | 2017-06-27 05:10:03 | 2017-06-27 05:18:02 | 0:07:59 | 0:07:08 | 0:00:51 | smithi | master | rados/singleton/{all/mon-seesaw.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | |||
fail | 1330043 | 2017-06-27 04:02:09 | 2017-06-27 05:10:23 | 2017-06-27 05:50:22 | 0:39:59 | 0:35:17 | 0:04:42 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:22:48.125428 mon.b mon.0 172.21.15.97:6789/0 1897 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 2 pgs stuck unclean" in cluster log |
||||||||||||||
fail | 1330044 | 2017-06-27 04:02:09 | 2017-06-27 05:10:39 | 2017-06-27 05:46:39 | 0:36:00 | 0:32:16 | 0:03:44 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:16:43.778809 mon.a mon.0 172.21.15.29:6789/0 21 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330045 | 2017-06-27 04:02:10 | 2017-06-27 05:12:19 | 2017-06-27 05:52:19 | 0:40:00 | 0:34:47 | 0:05:13 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:36:20.700930 mon.a mon.0 172.21.15.4:6789/0 3360 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log |
||||||||||||||
fail | 1330046 | 2017-06-27 04:02:11 | 2017-06-27 05:12:19 | 2017-06-27 05:22:18 | 0:09:59 | 0:06:17 | 0:03:42 | smithi | master | rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-06-27 05:17:38.848116 mon.a mon.0 172.21.15.131:6789/0 74 : cluster [WRN] HEALTH_WARN OSD_CACHE_NO_HIT_SET: 1 cache pools are missing hit_sets" in cluster log |
||||||||||||||
fail | 1330047 | 2017-06-27 04:02:11 | 2017-06-27 05:12:19 | 2017-06-27 05:34:18 | 0:21:59 | 0:17:28 | 0:04:31 | smithi | master | rados/singleton/{all/mon-thrasher.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed on smithi186 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph quorum_status' |
||||||||||||||
fail | 1330048 | 2017-06-27 04:02:12 | 2017-06-27 05:12:19 | 2017-06-27 05:30:18 | 0:17:59 | 0:12:50 | 0:05:09 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:21:39.979568 mon.a mon.0 172.21.15.70:6789/0 1235 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 8 pgs stuck inactive" in cluster log |
||||||||||||||
fail | 1330049 | 2017-06-27 04:02:13 | 2017-06-27 05:12:20 | 2017-06-27 05:42:20 | 0:30:00 | 0:26:33 | 0:03:27 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml supported/ubuntu_latest.yaml thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:16:47.371267 mon.b mon.0 172.21.15.81:6789/0 21 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330050 | 2017-06-27 04:02:13 | 2017-06-27 05:12:22 | 2017-06-27 05:46:22 | 0:34:00 | 0:27:48 | 0:06:12 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:34:07.818569 mon.b mon.0 172.21.15.8:6789/0 3982 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 3 pgs stuck unclean" in cluster log |
||||||||||||||
fail | 1330051 | 2017-06-27 04:02:14 | 2017-06-27 05:12:39 | 2017-06-27 05:36:38 | 0:23:59 | 0:21:09 | 0:02:50 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:28:41.403164 mon.b mon.0 172.21.15.5:6789/0 4492 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log |
||||||||||||||
fail | 1330052 | 2017-06-27 04:02:15 | 2017-06-27 05:14:01 | 2017-06-27 05:40:01 | 0:26:00 | 0:20:27 | 0:05:33 | smithi | master | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:21:28.885185 mon.a mon.0 172.21.15.31:6789/0 655 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 7 pgs incomplete" in cluster log |
||||||||||||||
fail | 1330053 | 2017-06-27 04:02:15 | 2017-06-27 05:14:02 | 2017-06-27 05:30:01 | 0:15:59 | 0:12:01 | 0:03:58 | smithi | master | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |||
Failure Reason:
"2017-06-27 05:21:11.083284 mon.b mon.0 172.21.15.138:6789/0 114 : cluster [WRN] HEALTH_WARN PG_PEERING: 1 pgs peering" in cluster log |
||||||||||||||
fail | 1330054 | 2017-06-27 04:02:16 | 2017-06-27 05:14:01 | 2017-06-27 06:08:02 | 0:54:01 | 0:45:57 | 0:08:04 | smithi | master | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/osd-delay.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:28:07.187166 mon.a mon.0 172.21.15.2:6789/0 1810 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log |
||||||||||||||
fail | 1330055 | 2017-06-27 04:02:17 | 2017-06-27 05:14:02 | 2017-06-27 05:38:01 | 0:23:59 | 0:13:20 | 0:10:39 | smithi | master | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
Failure Reason:
"2017-06-27 05:26:21.616743 mon.a mon.0 172.21.15.12:6789/0 22 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330056 | 2017-06-27 04:02:17 | 2017-06-27 05:14:06 | 2017-06-27 09:30:11 | 4:16:05 | 4:11:07 | 0:04:58 | smithi | master | rados/objectstore/filestore-idempotent-aio-journal.yaml | 1 | |||
Failure Reason:
"2017-06-27 05:18:12.670592 mon.a mon.0 172.21.15.88:6789/0 47 : cluster [WRN] HEALTH_WARN PG_PEERING: 8 pgs peering" in cluster log |
||||||||||||||
fail | 1330057 | 2017-06-27 04:02:18 | 2017-06-27 05:14:11 | 2017-06-27 05:56:10 | 0:41:59 | 0:36:57 | 0:05:02 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:19:15.916318 mon.b mon.0 172.21.15.188:6789/0 22 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330058 | 2017-06-27 04:02:19 | 2017-06-27 05:14:11 | 2017-06-27 05:38:11 | 0:24:00 | 0:18:48 | 0:05:12 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_workunit_loadgen_mix.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:21:21.987335 mon.a mon.0 172.21.15.141:6789/0 24 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330059 | 2017-06-27 04:02:19 | 2017-06-27 05:14:15 | 2017-06-27 05:50:15 | 0:36:00 | 0:31:44 | 0:04:16 | smithi | master | rados/singleton/{all/osd-backfill.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-06-27 05:27:41.994282 mon.a mon.0 172.21.15.93:6789/0 394 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 5 pgs stuck unclean" in cluster log |
||||||||||||||
fail | 1330060 | 2017-06-27 04:02:20 | 2017-06-27 05:14:38 | 2017-06-27 05:44:38 | 0:30:00 | 0:26:31 | 0:03:29 | smithi | master | rados/multimon/{clusters/6.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_clock_with_skews.yaml} | 2 | |||
Failure Reason:
'timechecks' |
||||||||||||||
fail | 1330061 | 2017-06-27 04:02:21 | 2017-06-27 05:15:39 | 2017-06-27 05:53:39 | 0:38:00 | 0:34:22 | 0:03:38 | smithi | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | ||
Failure Reason:
"2017-06-27 05:26:51.899377 mon.a mon.0 172.21.15.19:6789/0 17 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330062 | 2017-06-27 04:02:21 | 2017-06-27 05:16:07 | 2017-06-27 05:44:07 | 0:28:00 | 0:19:35 | 0:08:25 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:24:37.319189 mon.a mon.0 172.21.15.162:6789/0 21 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
dead | 1330063 | 2017-06-27 04:02:22 | 2017-06-27 05:16:11 | 2017-06-27 17:22:47 | 12:06:36 | smithi | master | rados/monthrash/{ceph.yaml clusters/9-mons.yaml d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/many.yaml workloads/rados_mon_workunits.yaml} | 2 | |||||
fail | 1330064 | 2017-06-27 04:02:23 | 2017-06-27 05:16:13 | 2017-06-27 05:54:13 | 0:38:00 | 0:32:28 | 0:05:32 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:35:04.630957 mon.a mon.0 172.21.15.36:6789/0 3676 : cluster [ERR] HEALTH_ERR OSD_OUT_OF_ORDER_FULL: full ratio(s) out of order" in cluster log |
||||||||||||||
fail | 1330065 | 2017-06-27 04:02:23 | 2017-06-27 05:16:20 | 2017-06-27 05:36:19 | 0:19:59 | 0:15:11 | 0:04:48 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:28:43.039520 mon.b mon.0 172.21.15.3:6789/0 1576 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 1 pgs stuck unclean" in cluster log |
||||||||||||||
fail | 1330066 | 2017-06-27 04:02:24 | 2017-06-27 05:16:22 | 2017-06-27 05:40:21 | 0:23:59 | 0:17:37 | 0:06:22 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
Failure Reason:
"2017-06-27 05:23:20.195427 mon.a mon.0 172.21.15.58:6789/0 78 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 8 pgs stuck inactive" in cluster log |
||||||||||||||
fail | 1330067 | 2017-06-27 04:02:25 | 2017-06-27 05:17:37 | 2017-06-27 05:37:36 | 0:19:59 | 0:15:40 | 0:04:19 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:28:19.764149 mon.b mon.0 172.21.15.1:6789/0 851 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 7 pgs stuck inactive" in cluster log |
||||||||||||||
fail | 1330068 | 2017-06-27 04:02:25 | 2017-06-27 05:18:03 | 2017-06-27 05:36:03 | 0:18:00 | 0:13:16 | 0:04:44 | smithi | master | rados/thrash-luminous/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml workloads/redirect_set_object.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:23:57.047840 mon.a mon.0 172.21.15.92:6789/0 19 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330069 | 2017-06-27 04:02:26 | 2017-06-27 05:18:12 | 2017-06-27 05:40:12 | 0:22:00 | 0:13:59 | 0:08:01 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:24:56.222890 mon.a mon.0 172.21.15.62:6789/0 30 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330070 | 2017-06-27 04:02:27 | 2017-06-27 05:19:50 | 2017-06-27 05:41:51 | 0:22:01 | 0:18:52 | 0:03:09 | smithi | master | rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed on smithi007 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph osd pool set-quota ec-ca max_bytes 0'" |
||||||||||||||
fail | 1330071 | 2017-06-27 04:02:27 | 2017-06-27 05:20:06 | 2017-06-27 05:36:05 | 0:15:59 | 0:11:47 | 0:04:12 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:24:21.152549 mon.a mon.0 172.21.15.100:6789/0 19 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330072 | 2017-06-27 04:02:28 | 2017-06-27 05:20:07 | 2017-06-27 05:50:07 | 0:30:00 | 0:23:35 | 0:06:25 | smithi | master | ubuntu | 14.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:25:11.816623 mon.b mon.1 172.21.15.175:6789/0 2 : cluster [WRN] message from mon.0 was stamped 0.508636s in the future, clocks not synchronized" in cluster log |
||||||||||||||
fail | 1330073 | 2017-06-27 04:02:29 | 2017-06-27 05:20:07 | 2017-06-27 05:32:07 | 0:12:00 | 0:08:38 | 0:03:22 | smithi | master | rados/singleton/{all/osd-recovery.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-06-27 05:25:24.697352 mon.a mon.0 172.21.15.109:6789/0 104 : cluster [WRN] HEALTH_WARN DEGRADED_OBJECTS: 19444/77984 objects degraded (24.933%)" in cluster log |
||||||||||||||
fail | 1330074 | 2017-06-27 04:02:29 | 2017-06-27 05:20:22 | 2017-06-27 06:00:22 | 0:40:00 | 0:35:49 | 0:04:11 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:39:45.003245 mon.a mon.0 172.21.15.55:6789/0 3538 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 2 pgs stuck inactive" in cluster log |
||||||||||||||
fail | 1330075 | 2017-06-27 04:02:30 | 2017-06-27 05:20:47 | 2017-06-27 05:58:47 | 0:38:00 | 0:33:42 | 0:04:18 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:32:40.594650 mon.b mon.0 172.21.15.131:6789/0 1060 : cluster [ERR] HEALTH_ERR OSD_OUT_OF_ORDER_FULL: full ratio(s) out of order" in cluster log |
||||||||||||||
pass | 1330076 | 2017-06-27 04:02:30 | 2017-06-27 05:20:47 | 2017-06-27 09:54:53 | 4:34:06 | 4:31:53 | 0:02:13 | smithi | master | rados/objectstore/filestore-idempotent.yaml | 1 | |||
fail | 1330077 | 2017-06-27 04:02:31 | 2017-06-27 05:21:02 | 2017-06-27 05:57:02 | 0:36:00 | 0:30:17 | 0:05:43 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:36:34.394660 mon.a mon.0 172.21.15.68:6789/0 3308 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 23 pgs stuck inactive" in cluster log |
||||||||||||||
dead | 1330078 | 2017-06-27 04:02:32 | 2017-06-27 05:21:15 | 2017-06-27 17:28:09 | 12:06:54 | smithi | master | ubuntu | 14.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/one.yaml workloads/snaps-few-objects.yaml} | 2 | |||
pass | 1330079 | 2017-06-27 04:02:33 | 2017-06-27 05:22:21 | 2017-06-27 05:30:21 | 0:08:00 | 0:06:19 | 0:01:41 | smithi | master | rados/singleton/{all/peer.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 | |||
fail | 1330080 | 2017-06-27 04:02:33 | 2017-06-27 05:22:21 | 2017-06-27 05:38:21 | 0:16:00 | 0:10:51 | 0:05:09 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:27:46.136396 mon.b mon.0 172.21.15.33:6789/0 21 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
||||||||||||||
fail | 1330081 | 2017-06-27 04:02:34 | 2017-06-27 05:22:21 | 2017-06-27 05:54:21 | 0:32:00 | 0:28:48 | 0:03:12 | smithi | master | centos | 7.3 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore.yaml rados.yaml supported/centos_latest.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:37:48.658638 mon.b mon.0 172.21.15.13:6789/0 3117 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 3 pgs stuck unclean" in cluster log |
||||||||||||||
fail | 1330082 | 2017-06-27 04:02:35 | 2017-06-27 05:22:42 | 2017-06-27 06:06:42 | 0:44:00 | 0:39:34 | 0:04:26 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:36:12.804499 mon.b mon.0 172.21.15.94:6789/0 3979 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 1 pgs stuck unclean" in cluster log |
||||||||||||||
fail | 1330083 | 2017-06-27 04:02:35 | 2017-06-27 05:23:36 | 2017-06-27 05:49:35 | 0:25:59 | 0:20:54 | 0:05:05 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:37:21.489746 mon.a mon.0 172.21.15.26:6789/0 3389 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 1 pgs stuck unclean" in cluster log |
||||||||||||||
fail | 1330084 | 2017-06-27 04:02:36 | 2017-06-27 05:23:53 | 2017-06-27 05:59:54 | 0:36:01 | 0:24:32 | 0:11:29 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:34:05.590963 mon.a mon.0 172.21.15.77:6789/0 71 : cluster [WRN] HEALTH_WARN TOO_FEW_PGS: too few PGs per OSD (1 < min 2)" in cluster log |
||||||||||||||
fail | 1330085 | 2017-06-27 04:02:37 | 2017-06-27 05:24:26 | 2017-06-27 05:40:26 | 0:16:00 | 0:10:18 | 0:05:42 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/readwrite.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:29:53.688754 mon.a mon.0 172.21.15.174:6789/0 25 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
pass | 1330086 | 2017-06-27 04:02:37 | 2017-06-27 05:24:50 | 2017-06-27 05:32:50 | 0:08:00 | 0:06:57 | 0:01:03 | smithi | master | rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
fail | 1330087 | 2017-06-27 04:02:38 | 2017-06-27 05:24:50 | 2017-06-27 05:50:50 | 0:26:00 | 0:19:43 | 0:06:17 | smithi | master | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:31:07.664116 mon.a mon.0 172.21.15.53:6789/0 226 : cluster [ERR] HEALTH_ERR OSD_OUT_OF_ORDER_FULL: full ratio(s) out of order" in cluster log |
fail | 1330088 | 2017-06-27 04:02:39 | 2017-06-27 05:25:25 | 2017-06-27 05:57:25 | 0:32:00 | 0:26:32 | 0:05:28 | smithi | master | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |||
Failure Reason:
"2017-06-27 05:33:17.808888 mon.c mon.0 172.21.15.57:6789/0 181 : cluster [WRN] HEALTH_WARN PG_PEERING: 2 pgs peering" in cluster log |
fail | 1330089 | 2017-06-27 04:02:39 | 2017-06-27 05:26:16 | 2017-06-27 05:52:16 | 0:26:00 | 0:20:53 | 0:05:07 | smithi | master | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:34:13.113112 mon.b mon.0 172.21.15.66:6789/0 450 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 1 pgs incomplete" in cluster log |
fail | 1330090 | 2017-06-27 04:02:40 | 2017-06-27 05:26:16 | 2017-06-27 05:46:16 | 0:20:00 | 0:10:59 | 0:09:01 | smithi | master | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
Failure Reason:
"2017-06-27 05:36:45.802381 mon.b mon.0 172.21.15.56:6789/0 18 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330091 | 2017-06-27 04:02:41 | 2017-06-27 05:26:34 | 2017-06-27 05:40:33 | 0:13:59 | 0:09:11 | 0:04:48 | smithi | master | ubuntu | 14.04 | rados/basic-luminous/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} objectstore/filestore-btrfs.yaml rados.yaml scrub_test.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:32:13.892065 mon.b mon.0 172.21.15.54:6789/0 119 : cluster [ERR] HEALTH_ERR OSD_SCRUB_ERRORS: 2 scrub errors" in cluster log |
fail | 1330092 | 2017-06-27 04:02:42 | 2017-06-27 05:28:12 | 2017-06-27 05:42:12 | 0:14:00 | 0:08:48 | 0:05:12 | smithi | master | ubuntu | 14.04 | rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/filestore-btrfs.yaml tasks/failover.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:33:28.719075 mon.b mon.0 172.21.15.169:6789/0 21 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330093 | 2017-06-27 04:02:42 | 2017-06-27 05:28:13 | 2017-06-27 05:44:12 | 0:15:59 | 0:12:40 | 0:03:19 | smithi | master | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi135 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-health TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
fail | 1330094 | 2017-06-27 04:02:43 | 2017-06-27 05:28:39 | 2017-06-27 05:38:38 | 0:09:59 | 0:06:49 | 0:03:10 | smithi | master | rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-06-27 05:34:16.166505 mon.a mon.0 172.21.15.9:6789/0 194 : cluster [WRN] HEALTH_WARN TOO_FEW_PGS: too few PGs per OSD (1 < min 2)" in cluster log |
fail | 1330095 | 2017-06-27 04:02:44 | 2017-06-27 05:30:14 | 2017-06-27 05:54:13 | 0:23:59 | 0:18:46 | 0:05:13 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:41:25.954589 mon.b mon.0 172.21.15.51:6789/0 1909 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 7 pgs stuck inactive" in cluster log |
fail | 1330096 | 2017-06-27 04:02:44 | 2017-06-27 05:30:14 | 2017-06-27 06:02:14 | 0:32:00 | 0:28:25 | 0:03:35 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:36:06.581567 mon.b mon.0 172.21.15.101:6789/0 24 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330097 | 2017-06-27 04:02:45 | 2017-06-27 05:30:14 | 2017-06-27 05:50:13 | 0:19:59 | 0:15:10 | 0:04:49 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:42:33.929911 mon.b mon.0 172.21.15.133:6789/0 1287 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 2 pgs stuck inactive" in cluster log |
fail | 1330098 | 2017-06-27 04:02:46 | 2017-06-27 05:30:19 | 2017-06-27 05:52:19 | 0:22:00 | 0:17:50 | 0:04:10 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/radostool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
Failure Reason:
"2017-06-27 05:36:42.118896 mon.a mon.0 172.21.15.69:6789/0 117 : cluster [WRN] HEALTH_WARN PG_PEERING: 8 pgs peering" in cluster log |
fail | 1330099 | 2017-06-27 04:02:47 | 2017-06-27 05:30:22 | 2017-06-27 05:48:21 | 0:17:59 | 0:11:48 | 0:06:11 | smithi | master | rados/multimon/{clusters/6.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/mon_recovery.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:40:15.521596 mon.a mon.0 172.21.15.18:6789/0 35 : cluster [WRN] HEALTH_WARN MON_DOWN: 1/6 mons down, quorum a,b,c,d,f" in cluster log |
fail | 1330100 | 2017-06-27 04:02:47 | 2017-06-27 05:30:36 | 2017-06-27 05:46:37 | 0:16:01 | 0:09:49 | 0:06:12 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:36:42.851942 mon.b mon.0 172.21.15.59:6789/0 20 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330101 | 2017-06-27 04:02:48 | 2017-06-27 05:31:40 | 2017-06-27 05:47:38 | 0:15:58 | 0:10:22 | 0:05:36 | smithi | master | ubuntu | 14.04 | rados/thrash-luminous/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml workloads/redirect.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:37:03.520528 mon.a mon.0 172.21.15.45:6789/0 32 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330102 | 2017-06-27 04:02:49 | 2017-06-27 05:31:57 | 2017-06-27 05:45:57 | 0:14:00 | 0:10:17 | 0:03:43 | smithi | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:36:56.463216 mon.a mon.0 172.21.15.95:6789/0 24 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330103 | 2017-06-27 04:02:49 | 2017-06-27 05:32:08 | 2017-06-27 06:02:08 | 0:30:00 | 0:25:49 | 0:04:11 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:37:58.130372 mon.b mon.0 172.21.15.21:6789/0 21 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
pass | 1330104 | 2017-06-27 04:02:50 | 2017-06-27 05:32:15 | 2017-06-27 05:38:14 | 0:05:59 | 0:04:46 | 0:01:13 | smithi | master | rados/objectstore/fusestore.yaml | 1 | |||
dead | 1330105 | 2017-06-27 04:02:51 | 2017-06-27 05:32:33 | 2017-06-27 17:38:39 | 12:06:06 | smithi | master | rados/monthrash/{ceph.yaml clusters/9-mons.yaml d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/one.yaml workloads/pool-create-delete.yaml} | 2 | |||||
fail | 1330106 | 2017-06-27 04:02:52 | 2017-06-27 05:32:50 | 2017-06-27 05:50:49 | 0:17:59 | 0:13:40 | 0:04:19 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:37:24.474417 mon.a mon.0 172.21.15.108:6789/0 18 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330107 | 2017-06-27 04:02:52 | 2017-06-27 05:32:51 | 2017-06-27 05:52:50 | 0:19:59 | 0:16:45 | 0:03:14 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/repair_test.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:40:02.349593 mon.b mon.0 172.21.15.20:6789/0 195 : cluster [ERR] HEALTH_ERR OSD_SCRUB_ERRORS: 1 scrub errors" in cluster log |
pass | 1330108 | 2017-06-27 04:02:53 | 2017-06-27 05:33:45 | 2017-06-27 05:41:44 | 0:07:59 | 0:07:17 | 0:00:42 | smithi | master | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | |||
fail | 1330109 | 2017-06-27 04:02:54 | 2017-06-27 05:34:11 | 2017-06-27 06:06:10 | 0:31:59 | 0:27:53 | 0:04:06 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:40:01.763107 mon.b mon.0 172.21.15.63:6789/0 27 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330110 | 2017-06-27 04:02:55 | 2017-06-27 05:34:11 | 2017-06-27 06:22:11 | 0:48:00 | 0:35:42 | 0:12:18 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:52:50.262086 mon.a mon.0 172.21.15.46:6789/0 2213 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 2 pgs stuck unclean" in cluster log |
pass | 1330111 | 2017-06-27 04:02:55 | 2017-06-27 05:34:11 | 2017-06-27 05:58:11 | 0:24:00 | 0:22:28 | 0:01:32 | smithi | master | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml} | 1 | |||
fail | 1330112 | 2017-06-27 04:02:56 | 2017-06-27 05:34:12 | 2017-06-27 06:04:11 | 0:29:59 | 0:25:10 | 0:04:49 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:40:10.397947 mon.a mon.0 172.21.15.60:6789/0 269 : cluster [ERR] HEALTH_ERR OSD_OUT_OF_ORDER_FULL: full ratio(s) out of order" in cluster log |
fail | 1330113 | 2017-06-27 04:02:57 | 2017-06-27 05:34:19 | 2017-06-27 05:52:19 | 0:18:00 | 0:14:02 | 0:03:58 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:38:58.340070 mon.a mon.0 172.21.15.40:6789/0 36 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330114 | 2017-06-27 04:02:57 | 2017-06-27 05:34:24 | 2017-06-27 05:46:24 | 0:12:00 | 0:08:21 | 0:03:39 | smithi | master | rados/singleton/{all/reg11184.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-06-27 05:38:10.625606 mon.a mon.0 172.21.15.204:6789/0 69 : cluster [WRN] HEALTH_WARN PG_PEERING: 2 pgs peering" in cluster log |
fail | 1330115 | 2017-06-27 04:02:58 | 2017-06-27 05:34:32 | 2017-06-27 06:16:33 | 0:42:01 | 0:36:50 | 0:05:11 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:46:59.654623 mon.a mon.0 172.21.15.145:6789/0 2496 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 2 pgs stuck unclean" in cluster log |
fail | 1330116 | 2017-06-27 04:02:59 | 2017-06-27 05:36:13 | 2017-06-27 06:18:13 | 0:42:00 | 0:33:09 | 0:08:51 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-btrfs.yaml rados.yaml supported/ubuntu_14.04.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:49:08.548357 mon.b mon.0 172.21.15.54:6789/0 1917 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 3 pgs stuck inactive" in cluster log |
fail | 1330117 | 2017-06-27 04:02:59 | 2017-06-27 05:36:13 | 2017-06-27 06:00:12 | 0:23:59 | 0:16:31 | 0:07:28 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:44:06.855194 mon.b mon.0 172.21.15.58:6789/0 21 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330118 | 2017-06-27 04:03:00 | 2017-06-27 05:36:20 | 2017-06-27 06:02:20 | 0:26:00 | 0:22:30 | 0:03:30 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:45:08.188195 mon.a mon.0 172.21.15.130:6789/0 1055 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 3 pgs stuck inactive" in cluster log |
fail | 1330119 | 2017-06-27 04:03:01 | 2017-06-27 05:36:39 | 2017-06-27 06:02:39 | 0:26:00 | 0:21:05 | 0:04:55 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/rgw_snaps.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:42:27.794514 mon.b mon.0 172.21.15.38:6789/0 19 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330120 | 2017-06-27 04:03:02 | 2017-06-27 05:37:41 | 2017-06-27 05:49:40 | 0:11:59 | 0:07:06 | 0:04:53 | smithi | master | rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:42:27.009052 mon.a mon.0 172.21.15.200:6789/0 106 : cluster [WRN] HEALTH_WARN OSD_DOWN: 2 osds down" in cluster log |
fail | 1330121 | 2017-06-27 04:03:02 | 2017-06-27 05:37:41 | 2017-06-27 06:05:41 | 0:28:00 | 0:23:08 | 0:04:52 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:47:19.745329 mon.a mon.0 172.21.15.31:6789/0 905 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 2 pgs stuck inactive" in cluster log |
fail | 1330122 | 2017-06-27 04:03:03 | 2017-06-27 05:37:41 | 2017-06-27 05:51:40 | 0:13:59 | 0:10:20 | 0:03:39 | smithi | master | rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_5925.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:44:04.867177 mon.a mon.0 172.21.15.1:6789/0 114 : cluster [WRN] HEALTH_WARN PG_PEERING: 1 pgs peering" in cluster log |
fail | 1330123 | 2017-06-27 04:03:04 | 2017-06-27 05:38:02 | 2017-06-27 06:16:02 | 0:38:00 | 0:33:48 | 0:04:12 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:53:37.990505 mon.a mon.0 172.21.15.74:6789/0 2579 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log |
fail | 1330124 | 2017-06-27 04:03:05 | 2017-06-27 05:38:11 | 2017-06-27 06:00:10 | 0:21:59 | 0:10:45 | 0:11:14 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |
Failure Reason:
"2017-06-27 05:49:33.905590 mon.a mon.0 172.21.15.162:6789/0 225 : cluster [WRN] HEALTH_WARN PG_PEERING: 1 pgs peering" in cluster log |
fail | 1330125 | 2017-06-27 04:03:05 | 2017-06-27 05:38:12 | 2017-06-27 06:04:12 | 0:26:00 | 0:22:28 | 0:03:32 | smithi | master | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:45:42.100289 mon.a mon.0 172.21.15.9:6789/0 751 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 2 pgs incomplete" in cluster log |
fail | 1330126 | 2017-06-27 04:03:06 | 2017-06-27 05:38:15 | 2017-06-27 07:04:17 | 1:26:02 | 0:11:01 | 1:15:01 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason:
"2017-06-27 06:53:15.442469 mon.b mon.0 172.21.15.49:6789/0 20 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
pass | 1330127 | 2017-06-27 04:03:07 | 2017-06-27 05:38:22 | 2017-06-27 05:46:21 | 0:07:59 | 0:06:26 | 0:01:33 | smithi | master | rados/objectstore/keyvaluedb.yaml | 1 | |||
fail | 1330128 | 2017-06-27 04:03:07 | 2017-06-27 05:38:48 | 2017-06-27 06:10:48 | 0:32:00 | 0:27:34 | 0:04:26 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason:
"2017-06-27 06:01:09.901487 mon.b mon.0 172.21.15.35:6789/0 4078 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 3 pgs stuck inactive" in cluster log |
fail | 1330129 | 2017-06-27 04:03:08 | 2017-06-27 05:39:12 | 2017-06-27 05:57:11 | 0:17:59 | 0:13:47 | 0:04:12 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:43:21.840805 mon.b mon.0 172.21.15.76:6789/0 26 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330130 | 2017-06-27 04:03:09 | 2017-06-27 05:40:15 | 2017-06-27 05:52:14 | 0:11:59 | 0:08:05 | 0:03:54 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/rest-api.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
Failure Reason:
"2017-06-27 05:44:35.465695 mon.a mon.0 172.21.15.171:6789/0 59 : cluster [WRN] HEALTH_WARN PG_PEERING: 4 pgs peering" in cluster log |
fail | 1330131 | 2017-06-27 04:03:09 | 2017-06-27 05:40:15 | 2017-06-27 05:54:14 | 0:13:59 | 0:10:30 | 0:03:29 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:46:24.996416 mon.b mon.0 172.21.15.33:6789/0 19 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330132 | 2017-06-27 04:03:10 | 2017-06-27 05:40:22 | 2017-06-27 05:56:22 | 0:16:00 | 0:11:59 | 0:04:01 | smithi | master | rados/thrash-luminous/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml workloads/redirect_set_object.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:44:51.578816 mon.b mon.0 172.21.15.100:6789/0 27 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330133 | 2017-06-27 04:03:11 | 2017-06-27 05:40:28 | 2017-06-27 05:58:27 | 0:17:59 | 0:13:50 | 0:04:09 | smithi | master | rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:46:32.913653 mon.a mon.0 172.21.15.12:6789/0 92 : cluster [WRN] HEALTH_WARN TOO_FEW_PGS: too few PGs per OSD (1 < min 2)" in cluster log |
fail | 1330134 | 2017-06-27 04:03:11 | 2017-06-27 05:40:38 | 2017-06-27 06:16:38 | 0:36:00 | 0:31:15 | 0:04:45 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:52:31.317922 mon.a mon.0 172.21.15.3:6789/0 1814 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 4 pgs stuck inactive" in cluster log |
fail | 1330135 | 2017-06-27 04:03:12 | 2017-06-27 05:41:46 | 2017-06-27 06:01:49 | 0:20:03 | 0:15:33 | 0:04:30 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:47:51.858082 mon.b mon.0 172.21.15.7:6789/0 20 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330136 | 2017-06-27 04:03:13 | 2017-06-27 05:41:54 | 2017-06-27 06:19:54 | 0:38:00 | 0:32:26 | 0:05:34 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason:
"2017-06-27 06:00:15.474005 mon.b mon.0 172.21.15.49:6789/0 4674 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log |
fail | 1330137 | 2017-06-27 04:03:14 | 2017-06-27 05:42:01 | 2017-06-27 06:08:00 | 0:25:59 | 0:20:33 | 0:05:26 | smithi | master | ubuntu | 14.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:46:23.607712 mon.b mon.0 172.21.15.71:6789/0 25 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330138 | 2017-06-27 04:03:14 | 2017-06-27 05:42:05 | 2017-06-27 05:54:04 | 0:11:59 | 0:07:50 | 0:04:09 | smithi | master | rados/multimon/{clusters/9.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/mon_clock_no_skews.yaml} | 3 | |||
Failure Reason:
'timechecks' |
fail | 1330139 | 2017-06-27 04:03:15 | 2017-06-27 05:42:09 | 2017-06-27 05:56:08 | 0:13:59 | 0:09:26 | 0:04:33 | smithi | master | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-06-27 05:48:12.345456 mon.a mon.0 172.21.15.98:6789/0 95 : cluster [WRN] HEALTH_WARN PG_PEERING: 1 pgs peering" in cluster log |
fail | 1330140 | 2017-06-27 04:03:16 | 2017-06-27 05:42:13 | 2017-06-27 06:04:12 | 0:21:59 | 0:18:30 | 0:03:29 | smithi | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | ||
Failure Reason:
"2017-06-27 05:53:09.062572 mon.a mon.0 172.21.15.5:6789/0 21 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330141 | 2017-06-27 04:03:16 | 2017-06-27 05:42:21 | 2017-06-27 06:14:21 | 0:32:00 | 0:27:41 | 0:04:19 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
Failure Reason:
"2017-06-27 06:01:27.176904 mon.a mon.0 172.21.15.67:6789/0 4933 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 1 pgs stuck unclean" in cluster log |
fail | 1330142 | 2017-06-27 04:03:17 | 2017-06-27 05:44:30 | 2017-06-27 05:58:29 | 0:13:59 | 0:09:32 | 0:04:27 | smithi | master | rados/basic-luminous/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} objectstore/filestore-xfs.yaml rados.yaml scrub_test.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:51:14.817528 mon.a mon.0 172.21.15.37:6789/0 126 : cluster [ERR] HEALTH_ERR OSD_SCRUB_ERRORS: 2 scrub errors" in cluster log |
fail | 1330143 | 2017-06-27 04:03:18 | 2017-06-27 05:44:30 | 2017-06-27 05:58:29 | 0:13:59 | 0:08:23 | 0:05:36 | smithi | master | rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/filestore-xfs.yaml tasks/failover.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:48:42.394958 mon.a mon.0 172.21.15.56:6789/0 16 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330144 | 2017-06-27 04:03:19 | 2017-06-27 05:44:30 | 2017-06-27 05:58:29 | 0:13:59 | 0:11:08 | 0:02:51 | smithi | master | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi182 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-health TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
fail | 1330145 | 2017-06-27 04:03:20 | 2017-06-27 05:44:30 | 2017-06-27 06:18:30 | 0:34:00 | 0:30:28 | 0:03:32 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:51:25.969364 mon.a mon.0 172.21.15.135:6789/0 23 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330146 | 2017-06-27 04:03:20 | 2017-06-27 05:44:35 | 2017-06-27 06:02:35 | 0:18:00 | 0:12:54 | 0:05:06 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:49:19.137701 mon.a mon.0 172.21.15.112:6789/0 19 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
dead | 1330147 | 2017-06-27 04:03:21 | 2017-06-27 05:44:39 | 2017-06-27 17:51:42 | 12:07:03 | smithi | master | rados/monthrash/{ceph.yaml clusters/9-mons.yaml d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml thrashers/sync.yaml workloads/rados_api_tests.yaml} | 2 | |||||
fail | 1330148 | 2017-06-27 04:03:22 | 2017-06-27 05:46:07 | 2017-06-27 06:10:07 | 0:24:00 | 0:19:29 | 0:04:31 | smithi | master | rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:53:03.968453 mon.a mon.0 172.21.15.29:6789/0 118 : cluster [WRN] HEALTH_WARN OSD_DOWN: 1 osds down" in cluster log |
fail | 1330149 | 2017-06-27 04:03:23 | 2017-06-27 05:46:17 | 2017-06-27 06:06:16 | 0:19:59 | 0:15:55 | 0:04:04 | smithi | master | rados/objectstore/objectcacher-stress.yaml | 1 | |||
Failure Reason:
"2017-06-27 05:51:53.331790 mon.a mon.0 172.21.15.8:6789/0 43 : cluster [WRN] HEALTH_WARN PG_PEERING: 8 pgs peering" in cluster log |
fail | 1330150 | 2017-06-27 04:03:23 | 2017-06-27 05:46:22 | 2017-06-27 06:24:22 | 0:38:00 | 0:32:18 | 0:05:42 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason:
"2017-06-27 05:58:17.805241 mon.a mon.0 172.21.15.45:6789/0 2762 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 1 pgs stuck unclean" in cluster log |
fail | 1330151 | 2017-06-27 04:03:24 | 2017-06-27 05:46:23 | 2017-06-27 06:10:23 | 0:24:00 | 0:19:20 | 0:04:40 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:57:13.861491 mon.b mon.0 172.21.15.109:6789/0 1568 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 2 pgs incomplete" in cluster log |
fail | 1330152 | 2017-06-27 04:03:25 | 2017-06-27 05:46:25 | 2017-06-27 06:16:25 | 0:30:00 | 0:26:21 | 0:03:39 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml supported/ubuntu_latest.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2017-06-27 06:02:26.171287 mon.a mon.0 172.21.15.95:6789/0 3798 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 4 pgs stuck unclean" in cluster log |
pass | 1330153 | 2017-06-27 04:03:26 | 2017-06-27 05:46:38 | 2017-06-27 05:52:37 | 0:05:59 | 0:05:23 | 0:00:36 | smithi | master | rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml} | 1 | |||
fail | 1330154 | 2017-06-27 04:03:26 | 2017-06-27 05:46:40 | 2017-06-27 06:46:40 | 1:00:00 | 0:55:44 | 0:04:16 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason:
"2017-06-27 06:02:54.421847 mon.a mon.0 172.21.15.18:6789/0 3068 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log |
fail | 1330155 | 2017-06-27 04:03:27 | 2017-06-27 05:47:49 | 2017-06-27 06:01:48 | 0:13:59 | 0:08:00 | 0:05:59 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:52:31.983806 mon.b mon.0 172.21.15.65:6789/0 22 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330156 | 2017-06-27 04:03:28 | 2017-06-27 05:48:28 | 2017-06-27 06:16:28 | 0:28:00 | 0:22:55 | 0:05:05 | smithi | master | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:54:43.816171 mon.a mon.0 172.21.15.26:6789/0 101 : cluster [WRN] HEALTH_WARN TOO_FEW_PGS: too few PGs per OSD (1 < min 2)" in cluster log |
fail | 1330157 | 2017-06-27 04:03:29 | 2017-06-27 05:48:40 | 2017-06-27 06:10:39 | 0:21:59 | 0:17:54 | 0:04:05 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:54:25.148570 mon.b mon.0 172.21.15.24:6789/0 17 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330158 | 2017-06-27 04:03:30 | 2017-06-27 05:49:49 | 2017-06-27 06:25:49 | 0:36:00 | 0:31:05 | 0:04:55 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason:
"2017-06-27 06:00:24.196699 mon.a mon.0 172.21.15.53:6789/0 1991 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 4 pgs stuck inactive" in cluster log |
fail | 1330159 | 2017-06-27 04:03:31 | 2017-06-27 05:49:49 | 2017-06-27 06:29:49 | 0:40:00 | 0:36:28 | 0:03:32 | smithi | master | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:58:40.741595 mon.b mon.0 172.21.15.4:6789/0 659 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 7 pgs incomplete" in cluster log |
fail | 1330160 | 2017-06-27 04:03:31 | 2017-06-27 05:50:08 | 2017-06-27 06:20:08 | 0:30:00 | 0:25:44 | 0:04:16 | smithi | master | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |||
Failure Reason:
"2017-06-27 05:56:36.059581 mon.b mon.0 172.21.15.97:6789/0 147 : cluster [WRN] HEALTH_WARN PG_PEERING: 1 pgs peering" in cluster log |
fail | 1330161 | 2017-06-27 04:03:32 | 2017-06-27 05:50:14 | 2017-06-27 06:30:14 | 0:40:00 | 0:35:49 | 0:04:11 | smithi | master | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/osd-delay.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:55:41.607258 mon.b mon.0 172.21.15.93:6789/0 285 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 2 pgs stuck inactive" in cluster log |
fail | 1330162 | 2017-06-27 04:03:33 | 2017-06-27 05:50:16 | 2017-06-27 06:24:16 | 0:34:00 | 0:26:08 | 0:07:52 | smithi | master | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
Failure Reason:
"2017-06-27 06:00:45.176344 mon.b mon.0 172.21.15.90:6789/0 19 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330163 | 2017-06-27 04:03:33 | 2017-06-27 05:50:24 | 2017-06-27 06:06:23 | 0:15:59 | 0:11:57 | 0:04:02 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:59:54.656998 mon.a mon.0 172.21.15.91:6789/0 1801 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 1 pgs stuck unclean" in cluster log |
fail | 1330164 | 2017-06-27 04:03:34 | 2017-06-27 05:51:06 | 2017-06-27 06:21:05 | 0:29:59 | 0:24:00 | 0:05:59 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:56:18.076912 mon.a mon.0 172.21.15.40:6789/0 25 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330165 | 2017-06-27 04:03:35 | 2017-06-27 05:51:06 | 2017-06-27 06:09:05 | 0:17:59 | 0:13:56 | 0:04:03 | smithi | master | rados/thrash-luminous/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml workloads/redirect.yaml} | 2 | |||
Failure Reason:
"2017-06-27 06:00:31.197706 mon.a mon.0 172.21.15.1:6789/0 521 : cluster [ERR] HEALTH_ERR OSD_OUT_OF_ORDER_FULL: full ratio(s) out of order" in cluster log |
fail | 1330166 | 2017-06-27 04:03:36 | 2017-06-27 05:51:23 | 2017-06-27 06:03:22 | 0:11:59 | 0:05:23 | 0:06:36 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
Failure Reason:
"2017-06-27 05:55:45.281859 mon.a mon.0 172.21.15.171:6789/0 51 : cluster [WRN] HEALTH_WARN OSD_FLAGS: noout flag(s) set" in cluster log |
fail | 1330167 | 2017-06-27 04:03:36 | 2017-06-27 05:51:41 | 2017-06-27 06:15:41 | 0:24:00 | 0:18:52 | 0:05:08 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:56:45.211744 mon.a mon.0 172.21.15.66:6789/0 26 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330168 | 2017-06-27 04:03:37 | 2017-06-27 05:52:16 | 2017-06-27 06:12:16 | 0:20:00 | 0:10:51 | 0:09:09 | smithi | master | ubuntu | 14.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_mon_workunits.yaml} | 2 | |
Failure Reason:
"2017-06-27 06:01:30.176211 mon.b mon.0 172.21.15.68:6789/0 73 : cluster [WRN] HEALTH_WARN PG_PEERING: 2 pgs peering" in cluster log |
fail | 1330169 | 2017-06-27 04:03:38 | 2017-06-27 05:52:17 | 2017-06-27 06:14:16 | 0:21:59 | 0:11:20 | 0:10:39 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason:
"2017-06-27 06:02:57.897648 mon.a mon.0 172.21.15.162:6789/0 20 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330170 | 2017-06-27 04:03:39 | 2017-06-27 05:52:20 | 2017-06-27 06:24:20 | 0:32:00 | 0:28:18 | 0:03:42 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
Failure Reason:
"2017-06-27 06:10:30.293752 mon.a mon.0 172.21.15.19:6789/0 3291 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log |
pass | 1330171 | 2017-06-27 04:03:39 | 2017-06-27 05:52:20 | 2017-06-27 13:12:30 | 7:20:10 | 7:19:53 | 0:00:17 | smithi | master | rados/objectstore/objectstore.yaml | 1 | |||
fail | 1330172 | 2017-06-27 04:03:40 | 2017-06-27 05:52:20 | 2017-06-27 06:10:20 | 0:18:00 | 0:12:32 | 0:05:28 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_python.yaml} | 2 | |||
Failure Reason:
"2017-06-27 05:58:39.772462 mon.a mon.0 172.21.15.51:6789/0 28 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330173 | 2017-06-27 04:03:41 | 2017-06-27 05:52:39 | 2017-06-27 06:02:38 | 0:09:59 | 0:05:30 | 0:04:29 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
Failure Reason:
"2017-06-27 05:56:49.188447 mon.a mon.0 172.21.15.70:6789/0 47 : cluster [WRN] HEALTH_WARN PG_PEERING: 8 pgs peering" in cluster log |
fail | 1330174 | 2017-06-27 04:03:41 | 2017-06-27 05:52:51 | 2017-06-27 06:46:52 | 0:54:01 | 0:51:04 | 0:02:57 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
Failure Reason:
"2017-06-27 06:10:28.723067 mon.a mon.0 172.21.15.20:6789/0 3881 : cluster [ERR] HEALTH_ERR PG_STUCK_UNCLEAN: 1 pgs stuck unclean" in cluster log |
fail | 1330175 | 2017-06-27 04:03:42 | 2017-06-27 05:53:49 | 2017-06-27 06:07:49 | 0:14:00 | 0:10:20 | 0:03:40 | smithi | master | centos | rados/singleton-nomsgr/{all/valgrind-leaks.yaml rados.yaml} | 1 | ||
Failure Reason:
"2017-06-27 06:02:47.377453 mon.a mon.0 172.21.15.139:6789/0 57 : cluster [WRN] HEALTH_WARN PG_DEGRADED: 8 pgs degraded" in cluster log |
fail | 1330176 | 2017-06-27 04:03:43 | 2017-06-27 05:54:05 | 2017-06-27 06:32:05 | 0:38:00 | 0:34:26 | 0:03:34 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason:
"2017-06-27 06:06:48.248659 mon.b mon.0 172.21.15.33:6789/0 1374 : cluster [ERR] HEALTH_ERR OSD_OUT_OF_ORDER_FULL: full ratio(s) out of order" in cluster log |
fail | 1330177 | 2017-06-27 04:03:44 | 2017-06-27 05:54:14 | 2017-06-27 06:14:13 | 0:19:59 | 0:13:42 | 0:06:17 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason:
"2017-06-27 06:04:52.227224 mon.b mon.0 172.21.15.57:6789/0 1403 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 2 pgs stuck inactive" in cluster log |
fail | 1330178 | 2017-06-27 04:03:44 | 2017-06-27 05:54:14 | 2017-06-27 06:32:14 | 0:38:00 | 0:10:51 | 0:27:09 | smithi | master | ubuntu | 14.04 | rados/multimon/{clusters/21.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/mon_clock_with_skews.yaml} | 3 | |
Failure Reason:
'timechecks' |
fail | 1330179 | 2017-06-27 04:03:45 | 2017-06-27 05:54:15 | 2017-06-27 06:20:15 | 0:26:00 | 0:19:32 | 0:06:28 | smithi | master | ubuntu | 14.04 | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 | |
Failure Reason:
"2017-06-27 06:02:52.938701 mon.a mon.0 172.21.15.56:6789/0 865 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 2 pgs incomplete" in cluster log |
fail | 1330180 | 2017-06-27 04:03:46 | 2017-06-27 05:54:22 | 2017-06-27 06:06:22 | 0:12:00 | 0:09:11 | 0:02:49 | smithi | master | rados/singleton/{all/divergent_priors.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-06-27 06:01:09.642230 mon.a mon.0 172.21.15.185:6789/0 180 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 3 pgs stuck inactive" in cluster log |
fail | 1330181 | 2017-06-27 04:03:47 | 2017-06-27 05:56:19 | 2017-06-27 06:28:18 | 0:31:59 | 0:27:56 | 0:04:03 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason:
"2017-06-27 06:02:27.900851 mon.b mon.0 172.21.15.30:6789/0 21 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
fail | 1330182 | 2017-06-27 04:03:47 | 2017-06-27 05:56:19 | 2017-06-27 06:24:18 | 0:27:59 | 0:23:57 | 0:04:02 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
"2017-06-27 06:01:55.381087 mon.b mon.0 172.21.15.100:6789/0 407 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 3 pgs incomplete" in cluster log |
fail | 1330183 | 2017-06-27 04:03:48 | 2017-06-27 05:56:20 | 2017-06-27 06:42:20 | 0:46:00 | 0:41:57 | 0:04:03 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason:
"2017-06-27 06:08:13.767328 mon.b mon.0 172.21.15.22:6789/0 1260 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log |
fail | 1330184 | 2017-06-27 04:03:49 | 2017-06-27 05:56:24 | 2017-06-27 06:32:23 | 0:35:59 | 0:32:27 | 0:03:32 | smithi | master | centos | 7.3 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml supported/centos_latest.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2017-06-27 06:10:06.519807 mon.b mon.0 172.21.15.13:6789/0 2796 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log |
fail | 1330185 | 2017-06-27 04:03:49 | 2017-06-27 05:57:03 | 2017-06-27 06:23:03 | 0:26:00 | 0:17:36 | 0:08:24 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason:
"2017-06-27 06:11:49.724606 mon.a mon.0 172.21.15.158:6789/0 2685 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log |
fail | 1330186 | 2017-06-27 04:03:50 | 2017-06-27 05:57:13 | 2017-06-27 06:19:12 | 0:21:59 | 0:17:03 | 0:04:56 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_stress_watch.yaml} | 2 | |||
Failure Reason:
"2017-06-27 06:03:17.670696 mon.b mon.0 172.21.15.76:6789/0 22 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |