User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
sage | 2017-07-03 16:54:36 | 2017-07-03 16:55:28 | 2017-07-04 06:04:38 | 13:09:10 | rados | wip-health | smithi | 44f5f85 | 115 | 92 | 5 |
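For quick triage, the Pass/Fail/Dead counts in the summary row above can be turned into a pass rate. A minimal sketch (the counts are copied from the row above; the helper name is ours):

```python
# Tally from the run-summary row: 115 pass, 92 fail, 5 dead.
def pass_rate(passed: int, failed: int, dead: int) -> float:
    """Fraction of scheduled jobs that passed, counting dead jobs as runs."""
    total = passed + failed + dead
    return passed / total

rate = pass_rate(115, 92, 5)
print(f"{rate:.1%}")  # 115 of 212 jobs -> 54.2%
```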
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 1356764 | | 2017-07-03 16:54:55 | 2017-07-03 16:55:28 | 2017-07-03 17:35:28 | 0:40:00 | 0:37:11 | 0:02:49 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 |
fail | 1356765 | | 2017-07-03 16:54:56 | 2017-07-03 16:55:41 | 2017-07-03 17:27:41 | 0:32:00 | 0:27:08 | 0:04:52 | smithi | master | centos | 7.3 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-btrfs.yaml rados.yaml supported/centos_latest.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
Failure Reason: "2017-07-03 17:17:58.810215 mon.b mon.0 172.21.15.86:6789/0 5916 : cluster [WRN] HEALTH_WARN OBJECT_UNFOUND: 32/291 unfound (10.997%)" in cluster log
pass | 1356766 | | 2017-07-03 16:54:56 | 2017-07-03 16:56:03 | 2017-07-03 17:22:03 | 0:26:00 | 0:23:46 | 0:02:14 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 |
fail | 1356767 | | 2017-07-03 16:54:57 | 2017-07-03 16:56:31 | 2017-07-03 17:10:30 | 0:13:59 | 0:10:08 | 0:03:51 | smithi | master | | | rados/singleton/{all/divergent_priors.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 |
Failure Reason: "2017-07-03 17:04:38.633938 mon.a mon.0 172.21.15.5:6789/0 138 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
pass | 1356768 | | 2017-07-03 16:54:58 | 2017-07-03 16:56:38 | 2017-07-03 17:12:37 | 0:15:59 | 0:15:36 | 0:00:23 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 |
pass | 1356769 | | 2017-07-03 16:54:58 | 2017-07-03 16:56:55 | 2017-07-03 17:28:55 | 0:32:00 | 0:30:07 | 0:01:53 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 |
pass | 1356770 | | 2017-07-03 16:54:59 | 2017-07-03 16:57:28 | 2017-07-03 17:09:27 | 0:11:59 | 0:11:11 | 0:00:48 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 |
pass | 1356771 | | 2017-07-03 16:55:00 | 2017-07-03 16:57:43 | 2017-07-03 17:19:43 | 0:22:00 | 0:21:33 | 0:00:27 | smithi | master | | | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/fastclose.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 |
pass | 1356772 | | 2017-07-03 16:55:01 | 2017-07-03 16:57:43 | 2017-07-03 17:13:43 | 0:16:00 | 0:10:32 | 0:05:28 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 |
pass | 1356773 | | 2017-07-03 16:55:01 | 2017-07-03 16:57:51 | 2017-07-03 17:07:50 | 0:09:59 | 0:09:01 | 0:00:58 | smithi | master | | | rados/basic-luminous/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} objectstore/bluestore-comp.yaml rados.yaml scrub_test.yaml} | 2 |
fail | 1356774 | | 2017-07-03 16:55:02 | 2017-07-03 16:58:09 | 2017-07-03 17:14:09 | 0:16:00 | 0:10:32 | 0:05:28 | smithi | master | | | rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/bluestore-comp.yaml tasks/failover.yaml} | 2 |
Failure Reason: "2017-07-03 17:05:40.664825 mon.b mon.0 172.21.15.31:6789/0 169 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 14 pgs stuck inactive" in cluster log
pass | 1356775 | | 2017-07-03 16:55:02 | 2017-07-03 16:58:17 | 2017-07-03 17:06:16 | 0:07:59 | 0:06:28 | 0:01:31 | smithi | master | | | rados/objectstore/alloc-hint.yaml | 1 |
fail | 1356776 | | 2017-07-03 16:55:03 | 2017-07-03 16:59:01 | 2017-07-03 17:13:00 | 0:13:59 | 0:06:36 | 0:07:23 | smithi | master | | | rados/rest/mgr-restful.yaml | 1 |
Failure Reason: "2017-07-03 17:05:02.806442 mon.a mon.0 172.21.15.77:6789/0 122 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log
fail | 1356777 | | 2017-07-03 16:55:04 | 2017-07-03 16:59:18 | 2017-07-03 17:15:17 | 0:15:59 | 0:11:43 | 0:04:16 | smithi | master | | | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 |
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi201 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-health TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
fail | 1356778 | | 2017-07-03 16:55:04 | 2017-07-03 17:01:06 | 2017-07-03 17:15:05 | 0:13:59 | 0:09:32 | 0:04:27 | smithi | master | | | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml} | 1 |
Failure Reason: "2017-07-03 17:07:12.563511 mon.a mon.0 172.21.15.1:6789/0 149 : cluster [WRN] HEALTH_WARN OSDMAP_FLAGS: full flag(s) set" in cluster log
fail | 1356779 | | 2017-07-03 16:55:05 | 2017-07-03 17:01:35 | 2017-07-03 18:47:39 | 1:46:04 | 1:42:35 | 0:03:29 | smithi | master | | | rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-luminous.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} thrashosds-health.yaml} | 3 |
Failure Reason: Command failed on smithi196 with status 1: "SWIFT_TEST_CONFIG_FILE=/home/ubuntu/cephtest/archive/testswift.client.0.conf /home/ubuntu/cephtest/swift/virtualenv/bin/nosetests -w /home/ubuntu/cephtest/swift/test/functional -v -a '!fails_on_rgw'"
fail | 1356780 | | 2017-07-03 16:55:06 | 2017-07-03 17:02:23 | 2017-07-03 17:38:22 | 0:35:59 | 0:30:45 | 0:05:14 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 |
Failure Reason: "2017-07-03 17:09:39.224359 mon.a mon.0 172.21.15.98:6789/0 364 : cluster [ERR] overall HEALTH_ERR noscrub flag(s) set; 1 pgs stuck inactive; 1 pgs stuck unclean" in cluster log
fail | 1356781 | | 2017-07-03 16:55:07 | 2017-07-03 17:02:23 | 2017-07-03 17:26:22 | 0:23:59 | 0:18:34 | 0:05:25 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 |
Failure Reason: "2017-07-03 17:08:00.203976 mon.b mon.0 172.21.15.66:6789/0 380 : cluster [WRN] HEALTH_WARN POOL_FULL: 1 pool(s) full" in cluster log
fail | 1356782 | | 2017-07-03 16:55:07 | 2017-07-03 17:02:23 | 2017-07-03 17:22:22 | 0:19:59 | 0:11:49 | 0:08:10 | smithi | master | | | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_python.yaml} | 2 |
Failure Reason: "2017-07-03 17:13:36.327549 mon.a mon.0 172.21.15.12:6789/0 263 : cluster [WRN] HEALTH_WARN OBJECT_DEGRADED: 1/2 objects degraded (50.000%)" in cluster log
pass | 1356783 | | 2017-07-03 16:55:08 | 2017-07-03 17:03:02 | 2017-07-03 17:13:01 | 0:09:59 | 0:08:29 | 0:01:30 | smithi | master | | | rados/singleton/{all/divergent_priors2.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} | 1 |
fail | 1356784 | | 2017-07-03 16:55:09 | 2017-07-03 17:03:36 | 2017-07-03 17:33:36 | 0:30:00 | 0:24:49 | 0:05:11 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 |
Failure Reason: "2017-07-03 17:10:32.107380 mon.a mon.0 172.21.15.26:6789/0 292 : cluster [ERR] overall HEALTH_ERR nodeep-scrub flag(s) set; 1 pgs stuck inactive; 1 pgs stuck unclean; 1 pools have pg_num > pgp_num" in cluster log
pass | 1356785 | | 2017-07-03 16:55:09 | 2017-07-03 17:03:36 | 2017-07-03 17:21:36 | 0:18:00 | 0:15:49 | 0:02:11 | smithi | master | ubuntu | 14.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/sync-many.yaml workloads/pool-create-delete.yaml} | 2 |
pass | 1356786 | | 2017-07-03 16:55:10 | 2017-07-03 17:04:25 | 2017-07-03 17:24:25 | 0:20:00 | 0:18:00 | 0:02:00 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 |
fail | 1356787 | | 2017-07-03 16:55:10 | 2017-07-03 17:04:25 | 2017-07-03 17:38:25 | 0:34:00 | 0:28:07 | 0:05:53 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 |
Failure Reason: "2017-07-03 17:11:03.260878 mon.a mon.0 172.21.15.118:6789/0 324 : cluster [WRN] overall HEALTH_WARN noscrub flag(s) set" in cluster log
pass | 1356788 | | 2017-07-03 16:55:12 | 2017-07-03 17:05:13 | 2017-07-03 17:15:12 | 0:09:59 | 0:08:45 | 0:01:14 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/dump-stuck.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 |
pass | 1356789 | | 2017-07-03 16:55:13 | 2017-07-03 17:05:46 | 2017-07-03 17:21:46 | 0:16:00 | 0:12:45 | 0:03:15 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 |
fail | 1356790 | | 2017-07-03 16:55:13 | 2017-07-03 17:06:10 | 2017-07-03 17:22:10 | 0:16:00 | 0:10:37 | 0:05:23 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 |
Failure Reason: "2017-07-03 17:12:48.639609 mon.b mon.0 172.21.15.104:6789/0 238 : cluster [WRN] overall HEALTH_WARN noscrub flag(s) set" in cluster log
fail | 1356791 | | 2017-07-03 16:55:14 | 2017-07-03 17:06:10 | 2017-07-03 17:22:10 | 0:16:00 | 0:11:52 | 0:04:08 | smithi | master | | | rados/thrash-luminous/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 |
Failure Reason: "2017-07-03 17:13:53.254908 mon.b mon.0 172.21.15.34:6789/0 396 : cluster [WRN] overall HEALTH_WARN nodeep-scrub flag(s) set; 2 pools have pg_num > pgp_num" in cluster log
pass | 1356792 | | 2017-07-03 16:55:15 | 2017-07-03 17:06:33 | 2017-07-03 17:44:33 | 0:38:00 | 0:36:24 | 0:01:36 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 |
pass | 1356793 | | 2017-07-03 16:55:16 | 2017-07-03 17:06:33 | 2017-07-03 17:14:32 | 0:07:59 | 0:06:18 | 0:01:41 | smithi | master | | | rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml} | 1 |
pass | 1356794 | | 2017-07-03 16:55:16 | 2017-07-03 17:07:38 | 2017-07-03 17:25:38 | 0:18:00 | 0:12:05 | 0:05:55 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 |
pass | 1356795 | | 2017-07-03 16:55:17 | 2017-07-03 17:08:01 | 2017-07-03 17:26:01 | 0:18:00 | 0:13:49 | 0:04:11 | smithi | master | ubuntu | 14.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/rados_stress_watch.yaml} | 2 |
fail | 1356796 | | 2017-07-03 16:55:18 | 2017-07-03 17:08:17 | 2017-07-03 17:32:17 | 0:24:00 | 0:20:10 | 0:03:50 | smithi | master | | | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 |
Failure Reason: "2017-07-03 17:13:24.018759 mon.a mon.0 172.21.15.109:6789/0 128 : cluster [ERR] HEALTH_ERR PG_INCOMPLETE: 16 pgs incomplete" in cluster log
pass | 1356797 | | 2017-07-03 16:55:18 | 2017-07-03 17:08:17 | 2017-07-03 17:32:17 | 0:24:00 | 0:22:07 | 0:01:53 | smithi | master | | | rados/objectstore/ceph_objectstore_tool.yaml | 1 |
pass | 1356798 | | 2017-07-03 16:55:19 | 2017-07-03 17:08:41 | 2017-07-03 17:46:41 | 0:38:00 | 0:35:14 | 0:02:46 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 |
fail | 1356799 | | 2017-07-03 16:55:20 | 2017-07-03 17:08:51 | 2017-07-03 17:42:51 | 0:34:00 | 0:28:32 | 0:05:28 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 |
Failure Reason: "2017-07-03 17:15:24.385380 mon.a mon.0 172.21.15.4:6789/0 236 : cluster [WRN] overall HEALTH_WARN 225/1269 objects degraded (17.730%); 2 pgs degraded" in cluster log
pass | 1356800 | | 2017-07-03 16:55:20 | 2017-07-03 17:09:36 | 2017-07-03 17:47:37 | 0:38:01 | 0:36:47 | 0:01:14 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 |
fail | 1356801 | | 2017-07-03 16:55:21 | 2017-07-03 17:09:52 | 2017-07-03 17:41:52 | 0:32:00 | 0:26:55 | 0:05:05 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml supported/ubuntu_14.04.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
Failure Reason: "2017-07-03 17:23:09.803186 mon.a mon.0 172.21.15.54:6789/0 2022 : cluster [ERR] overall HEALTH_ERR noscrub flag(s) set; 3 pgs stuck inactive; 3 pgs stuck unclean; 1 pools have pg_num > pgp_num" in cluster log
pass | 1356802 | | 2017-07-03 16:55:22 | 2017-07-03 17:10:00 | 2017-07-03 17:38:00 | 0:28:00 | 0:26:08 | 0:01:52 | smithi | master | | | rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 |
pass | 1356803 | | 2017-07-03 16:55:22 | 2017-07-03 17:10:25 | 2017-07-03 17:22:24 | 0:11:59 | 0:09:40 | 0:02:19 | smithi | master | | | rados/monthrash/{ceph.yaml clusters/9-mons.yaml d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/sync.yaml workloads/rados_5925.yaml} | 2 |
fail | 1356804 | | 2017-07-03 16:55:23 | 2017-07-03 17:10:32 | 2017-07-03 17:28:31 | 0:17:59 | 0:12:49 | 0:05:10 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 |
Failure Reason: "2017-07-03 17:17:21.571314 mon.a mon.0 172.21.15.73:6789/0 177 : cluster [WRN] overall HEALTH_WARN noscrub flag(s) set" in cluster log
fail | 1356805 | | 2017-07-03 16:55:24 | 2017-07-03 17:11:07 | 2017-07-03 17:21:06 | 0:09:59 | 0:05:21 | 0:04:38 | smithi | master | ubuntu | 14.04 | rados/multimon/{clusters/3.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/mon_clock_no_skews.yaml} | 2 |
Failure Reason: global name 'self' is not defined
fail | 1356806 | | 2017-07-03 16:55:24 | 2017-07-03 17:11:08 | 2017-07-03 17:39:07 | 0:27:59 | 0:22:43 | 0:05:16 | smithi | master | ubuntu | 14.04 | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 |
Failure Reason: Command failed on smithi169 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph quorum_status'
pass | 1356807 | | 2017-07-03 16:55:25 | 2017-07-03 17:11:35 | 2017-07-03 17:39:37 | 0:28:02 | 0:25:42 | 0:02:20 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 |
fail | 1356808 | | 2017-07-03 16:55:26 | 2017-07-03 17:11:35 | 2017-07-03 17:39:37 | 0:28:02 | 0:23:43 | 0:04:19 | smithi | master | | | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 |
Failure Reason: "2017-07-03 17:19:22.967860 mon.a mon.0 172.21.15.31:6789/0 200 : cluster [ERR] overall HEALTH_ERR 1 osds down; 1 pgs degraded; 7 pgs incomplete; 1 pgs undersized" in cluster log
fail | 1356809 | | 2017-07-03 16:55:26 | 2017-07-03 17:12:46 | 2017-07-03 17:26:45 | 0:13:59 | 0:09:29 | 0:04:30 | smithi | master | | | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 |
Failure Reason: "2017-07-03 17:18:44.480175 mon.a mon.0 172.21.15.81:6789/0 746 : cluster [WRN] overall HEALTH_WARN noscrub flag(s) set; 1 pools have pg_num > pgp_num" in cluster log
fail | 1356810 | | 2017-07-03 16:55:27 | 2017-07-03 17:12:46 | 2017-07-03 17:46:46 | 0:34:00 | 0:29:01 | 0:04:59 | smithi | master | | | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/few.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 |
Failure Reason: "2017-07-03 17:19:17.575544 mon.a mon.0 172.21.15.204:6789/0 421 : cluster [ERR] overall HEALTH_ERR 1 osds down; 3 pgs degraded; 15 pgs incomplete; 1 pgs stuck unclean; 3 pgs undersized; 1 pools have pg_num > pgp_num" in cluster log
fail | 1356811 | | 2017-07-03 16:55:28 | 2017-07-03 17:12:46 | 2017-07-03 17:28:45 | 0:15:59 | 0:10:34 | 0:05:25 | smithi | master | | | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 |
Failure Reason: "2017-07-03 17:20:16.625444 mon.a mon.0 172.21.15.3:6789/0 211 : cluster [WRN] overall HEALTH_WARN noscrub flag(s) set" in cluster log
fail | 1356812 | | 2017-07-03 16:55:28 | 2017-07-03 17:13:01 | 2017-07-03 17:39:01 | 0:26:00 | 0:20:48 | 0:05:12 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 |
Failure Reason: "2017-07-03 17:19:28.815466 mon.b mon.0 172.21.15.44:6789/0 942 : cluster [ERR] overall HEALTH_ERR 1 cache pools are missing hit_sets; nodeep-scrub flag(s) set; 1 pgs stuck inactive; 1 pgs stuck unclean; 1 pools have pg_num > pgp_num" in cluster log
pass | 1356813 | | 2017-07-03 16:55:29 | 2017-07-03 17:13:02 | 2017-07-03 17:21:02 | 0:08:00 | 0:06:30 | 0:01:30 | smithi | master | | | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_striper.yaml} | 2 |
pass | 1356814 | | 2017-07-03 16:55:30 | 2017-07-03 17:13:28 | 2017-07-03 17:35:28 | 0:22:00 | 0:21:31 | 0:00:29 | smithi | master | | | rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} | 1 |
pass | 1356815 | | 2017-07-03 16:55:30 | 2017-07-03 17:13:53 | 2017-07-03 18:53:54 | 1:40:01 | 1:34:47 | 0:05:14 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 |
fail | 1356816 | | 2017-07-03 16:55:31 | 2017-07-03 17:13:53 | 2017-07-03 17:37:53 | 0:24:00 | 0:20:13 | 0:03:47 | smithi | master | | | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 |
Failure Reason: "2017-07-03 17:21:17.277095 mon.a mon.0 172.21.15.92:6789/0 829 : cluster [WRN] overall HEALTH_WARN nodeep-scrub flag(s) set" in cluster log
pass | 1356817 | 2017-07-03 16:55:32 | 2017-07-03 17:13:54 | 2017-07-03 17:19:53 | 0:05:59 | 0:04:31 | 0:01:28 | smithi | master | rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml} | 1 | |||
pass | 1356818 | 2017-07-03 16:55:32 | 2017-07-03 17:14:10 | 2017-07-03 17:48:09 | 0:33:59 | 0:31:45 | 0:02:14 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
pass | 1356819 | 2017-07-03 16:55:33 | 2017-07-03 17:14:33 | 2017-07-03 17:28:33 | 0:14:00 | 0:12:22 | 0:01:38 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
pass | 1356820 | 2017-07-03 16:55:34 | 2017-07-03 17:15:15 | 2017-07-03 17:27:14 | 0:11:59 | 0:10:40 | 0:01:19 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/mon-config-keys.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
pass | 1356821 | 2017-07-03 16:55:34 | 2017-07-03 17:15:15 | 2017-07-03 17:23:14 | 0:07:59 | 0:05:36 | 0:02:23 | smithi | master | rados/objectstore/filejournal.yaml | 1 | |||
pass | 1356822 | 2017-07-03 16:55:35 | 2017-07-03 17:15:15 | 2017-07-03 17:43:15 | 0:28:00 | 0:25:57 | 0:02:03 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
pass | 1356823 | 2017-07-03 16:55:36 | 2017-07-03 17:15:28 | 2017-07-03 17:27:27 | 0:11:59 | 0:10:19 | 0:01:40 | smithi | master | rados/thrash-luminous/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |||
fail | 1356824 | 2017-07-03 16:55:36 | 2017-07-03 17:15:52 | 2017-07-03 17:41:51 | 0:25:59 | 0:21:33 | 0:04:26 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:22:45.130485 mon.a mon.0 172.21.15.99:6789/0 224 : cluster [WRN] overall HEALTH_WARN noscrub flag(s) set; 37/2430 objects degraded (1.523%); 626/2430 objects misplaced (25.761%); 2 pgs backfilling; 1 pgs backfill_toofull" in cluster log |
||||||||||||||
fail | 1356825 | 2017-07-03 16:55:37 | 2017-07-03 17:15:52 | 2017-07-03 20:39:55 | 3:24:03 | 3:18:23 | 0:05:40 | smithi | master | rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi107 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-health TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 1356826 | 2017-07-03 16:55:38 | 2017-07-03 17:16:32 | 2017-07-03 17:28:31 | 0:11:59 | 0:09:36 | 0:02:23 | smithi | master | rados/basic-luminous/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} objectstore/bluestore.yaml rados.yaml scrub_test.yaml} | 2 | |||
fail | 1356827 | 2017-07-03 16:55:38 | 2017-07-03 17:17:00 | 2017-07-03 17:29:00 | 0:12:00 | 0:08:17 | 0:03:43 | smithi | master | rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/bluestore.yaml tasks/failover.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:22:43.106173 mon.b mon.0 172.21.15.94:6789/0 165 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 14 pgs stuck inactive" in cluster log |
fail | 1356828 | 2017-07-03 16:55:39 | 2017-07-03 17:17:42 | 2017-07-03 17:33:41 | 0:15:59 | 0:11:58 | 0:04:01 | smithi | master | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi144 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-health TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
pass | 1356829 | 2017-07-03 16:55:40 | 2017-07-03 17:18:51 | 2017-07-03 17:34:51 | 0:16:00 | 0:14:01 | 0:01:59 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
pass | 1356830 | 2017-07-03 16:55:40 | 2017-07-03 17:18:55 | 2017-07-03 17:48:55 | 0:30:00 | 0:28:25 | 0:01:35 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_workunit_loadgen_big.yaml} | 2 | |||
pass | 1356831 | 2017-07-03 16:55:41 | 2017-07-03 17:19:13 | 2017-07-03 17:27:12 | 0:07:59 | 0:06:04 | 0:01:55 | smithi | master | rados/singleton/{all/mon-seesaw.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | |||
fail | 1356832 | 2017-07-03 16:55:41 | 2017-07-03 17:19:15 | 2017-07-03 17:57:15 | 0:38:00 | 0:32:53 | 0:05:07 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:42:59.596542 mon.a mon.0 172.21.15.110:6789/0 6253 : cluster [ERR] overall HEALTH_ERR noscrub flag(s) set; 2 pgs stuck inactive; 2 pgs stuck unclean; 1 pools have pg_num > pgp_num" in cluster log |
pass | 1356833 | 2017-07-03 16:55:42 | 2017-07-03 17:19:44 | 2017-07-03 17:47:43 | 0:27:59 | 0:27:21 | 0:00:38 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
fail | 1356834 | 2017-07-03 16:55:43 | 2017-07-03 17:20:03 | 2017-07-03 18:02:03 | 0:42:00 | 0:36:54 | 0:05:06 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:28:09.932178 mon.a mon.0 172.21.15.45:6789/0 781 : cluster [ERR] overall HEALTH_ERR noscrub,nodeep-scrub flag(s) set; 1 osds down; 453/1248 objects degraded (36.298%); 7 pgs degraded; 1 pgs stuck inactive; 1 pgs stuck unclean; 7 pgs undersized; 1 pools have pg_num > pgp_num" in cluster log |
fail | 1356835 | 2017-07-03 16:55:43 | 2017-07-03 17:20:19 | 2017-07-03 17:30:19 | 0:10:00 | 0:05:56 | 0:04:04 | smithi | master | rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-07-03 17:24:17.946200 mon.a mon.0 172.21.15.95:6789/0 83 : cluster [WRN] HEALTH_WARN CACHE_POOL_NO_HIT_SET: 1 cache pools are missing hit_sets" in cluster log |
fail | 1356836 | 2017-07-03 16:55:44 | 2017-07-03 17:21:12 | 2017-07-03 17:43:12 | 0:22:00 | 0:16:47 | 0:05:13 | smithi | master | rados/singleton/{all/mon-thrasher.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 | |||
Failure Reason:
"2017-07-03 17:25:46.486873 mon.a mon.0 172.21.15.104:6789/0 140 : cluster [WRN] overall HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log |
pass | 1356837 | 2017-07-03 16:55:45 | 2017-07-03 17:21:12 | 2017-07-03 17:37:12 | 0:16:00 | 0:14:13 | 0:01:47 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
pass | 1356838 | 2017-07-03 16:55:45 | 2017-07-03 17:21:12 | 2017-07-03 17:49:12 | 0:28:00 | 0:25:57 | 0:02:03 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml supported/ubuntu_latest.yaml thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
fail | 1356839 | 2017-07-03 16:55:46 | 2017-07-03 17:21:37 | 2017-07-03 17:51:37 | 0:30:00 | 0:26:13 | 0:03:47 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:30:09.475749 mon.a mon.0 172.21.15.2:6789/0 1166 : cluster [ERR] overall HEALTH_ERR full ratio(s) out of order; 1 pools have pg_num > pgp_num" in cluster log |
fail | 1356840 | 2017-07-03 16:55:47 | 2017-07-03 17:21:50 | 2017-07-03 17:49:50 | 0:28:00 | 0:23:23 | 0:04:37 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:28:33.712461 mon.a mon.0 172.21.15.12:6789/0 207 : cluster [WRN] HEALTH_WARN POOL_FULL: 1 pool(s) full" in cluster log |
pass | 1356841 | 2017-07-03 16:55:48 | 2017-07-03 17:21:50 | 2017-07-03 17:45:50 | 0:24:00 | 0:23:09 | 0:00:51 | smithi | master | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |||
pass | 1356842 | 2017-07-03 16:55:49 | 2017-07-03 17:22:05 | 2017-07-03 17:34:05 | 0:12:00 | 0:10:43 | 0:01:17 | smithi | master | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |||
pass | 1356843 | 2017-07-03 16:55:49 | 2017-07-03 17:22:11 | 2017-07-03 18:14:11 | 0:52:00 | 0:50:18 | 0:01:42 | smithi | master | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/osd-delay.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |||
fail | 1356844 | 2017-07-03 16:55:50 | 2017-07-03 17:22:12 | 2017-07-03 17:42:11 | 0:19:59 | 0:12:34 | 0:07:25 | smithi | master | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
Failure Reason:
"2017-07-03 17:32:28.783616 mon.b mon.0 172.21.15.34:6789/0 29 : cluster [WRN] HEALTH_WARN MGR_DOWN: no active mgr" in cluster log |
pass | 1356845 | 2017-07-03 16:55:51 | 2017-07-03 17:22:31 | 2017-07-03 22:02:37 | 4:40:06 | 4:38:26 | 0:01:40 | smithi | master | rados/objectstore/filestore-idempotent-aio-journal.yaml | 1 | |||
pass | 1356846 | 2017-07-03 16:55:52 | 2017-07-03 17:22:32 | 2017-07-03 18:00:31 | 0:37:59 | 0:37:04 | 0:00:55 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
pass | 1356847 | 2017-07-03 16:55:53 | 2017-07-03 17:23:16 | 2017-07-03 17:43:15 | 0:19:59 | 0:18:09 | 0:01:50 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_workunit_loadgen_mix.yaml} | 2 | |||
pass | 1356848 | 2017-07-03 16:55:53 | 2017-07-03 17:24:34 | 2017-07-03 17:38:34 | 0:14:00 | 0:13:27 | 0:00:33 | smithi | master | rados/singleton/{all/osd-backfill.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
fail | 1356849 | 2017-07-03 16:55:54 | 2017-07-03 17:24:34 | 2017-07-03 17:40:34 | 0:16:00 | 0:07:26 | 0:08:34 | smithi | master | rados/multimon/{clusters/6.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/mon_clock_with_skews.yaml} | 2 | |||
Failure Reason:
global name 'self' is not defined |
dead | 1356850 | 2017-07-03 16:55:55 | 2017-07-03 17:25:46 | 2017-07-04 05:32:47 | 12:07:01 | smithi | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | |||||
pass | 1356851 | 2017-07-03 16:55:55 | 2017-07-03 17:25:47 | 2017-07-03 17:47:47 | 0:22:00 | 0:20:01 | 0:01:59 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
dead | 1356852 | 2017-07-03 16:55:56 | 2017-07-03 17:26:04 | 2017-07-04 05:31:29 | 12:05:25 | smithi | master | rados/monthrash/{ceph.yaml clusters/9-mons.yaml d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml thrashers/many.yaml workloads/rados_mon_workunits.yaml} | 2 | |||||
fail | 1356853 | 2017-07-03 16:55:57 | 2017-07-03 17:26:23 | 2017-07-03 17:58:23 | 0:32:00 | 0:25:26 | 0:06:34 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:36:43.788048 mon.a mon.0 172.21.15.3:6789/0 165 : cluster [WRN] overall HEALTH_WARN 7/106 objects degraded (6.604%); 1 pgs degraded" in cluster log |
pass | 1356854 | 2017-07-03 16:55:57 | 2017-07-03 17:26:24 | 2017-07-03 17:42:23 | 0:15:59 | 0:14:20 | 0:01:39 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
fail | 1356855 | 2017-07-03 16:55:58 | 2017-07-03 17:26:55 | 2017-07-03 17:48:54 | 0:21:59 | 0:17:39 | 0:04:20 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
Failure Reason:
"2017-07-03 17:31:30.463518 mon.a mon.0 172.21.15.49:6789/0 76 : cluster [WRN] HEALTH_WARN OBJECT_MISPLACED: 61268/75280 objects misplaced (81.387%)" in cluster log |
fail | 1356856 | 2017-07-03 16:55:59 | 2017-07-03 17:27:13 | 2017-07-03 17:47:13 | 0:20:00 | 0:15:23 | 0:04:37 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:32:41.819307 mon.b mon.0 172.21.15.85:6789/0 143 : cluster [WRN] overall HEALTH_WARN noscrub flag(s) set; 2 pools have pg_num > pgp_num" in cluster log |
fail | 1356857 | 2017-07-03 16:55:59 | 2017-07-03 17:27:16 | 2017-07-03 17:45:15 | 0:17:59 | 0:13:31 | 0:04:28 | smithi | master | rados/thrash-luminous/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:35:58.372015 mon.a mon.0 172.21.15.8:6789/0 984 : cluster [ERR] overall HEALTH_ERR 408/2940 objects degraded (13.878%); 57/2940 objects misplaced (1.939%); 2 pgs backfill_wait; 7 pgs degraded; 3 pgs recovery_wait; 1 pgs stuck inactive; 1 pgs stuck unclean; 1 pools have pg_num > pgp_num" in cluster log |
pass | 1356858 | 2017-07-03 16:56:00 | 2017-07-03 17:27:28 | 2017-07-03 17:43:28 | 0:16:00 | 0:14:01 | 0:01:59 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
fail | 1356859 | 2017-07-03 16:56:01 | 2017-07-03 17:27:42 | 2017-07-03 17:49:42 | 0:22:00 | 0:17:17 | 0:04:43 | smithi | master | rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed on smithi106 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph osd pool set-quota ec-ca max_bytes 0'" |
fail | 1356860 | 2017-07-03 16:56:01 | 2017-07-03 17:28:41 | 2017-07-03 17:48:40 | 0:19:59 | 0:14:35 | 0:05:24 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:35:26.393621 mon.a mon.0 172.21.15.55:6789/0 465 : cluster [WRN] overall HEALTH_WARN noscrub flag(s) set; 1 osds down; 588/1491 objects degraded (39.437%); 261/1491 objects misplaced (17.505%); 6 pgs degraded; 4 pgs undersized" in cluster log |
pass | 1356861 | 2017-07-03 16:56:02 | 2017-07-03 17:28:41 | 2017-07-03 17:52:40 | 0:23:59 | 0:23:23 | 0:00:36 | smithi | master | ubuntu | 14.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
pass | 1356862 | 2017-07-03 16:56:03 | 2017-07-03 17:28:41 | 2017-07-03 17:40:40 | 0:11:59 | 0:09:51 | 0:02:08 | smithi | master | rados/singleton/{all/osd-recovery.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | |||
pass | 1356863 | 2017-07-03 16:56:03 | 2017-07-03 17:28:47 | 2017-07-03 18:08:47 | 0:40:00 | 0:38:26 | 0:01:34 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
fail | 1356864 | 2017-07-03 16:56:04 | 2017-07-03 17:28:56 | 2017-07-03 18:08:56 | 0:40:00 | 0:36:06 | 0:03:54 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:42:26.422582 mon.b mon.0 172.21.15.83:6789/0 1726 : cluster [ERR] overall HEALTH_ERR nodeep-scrub flag(s) set; 1 osds down; 160/1284 objects degraded (12.461%); 16 pgs degraded; 10 pgs stuck inactive; 10 pgs stuck unclean; 16 pgs undersized" in cluster log |
pass | 1356865 | 2017-07-03 16:56:05 | 2017-07-03 17:29:00 | 2017-07-03 21:55:06 | 4:26:06 | 4:24:19 | 0:01:47 | smithi | master | rados/objectstore/filestore-idempotent.yaml | 1 | |||
pass | 1356866 | 2017-07-03 16:56:06 | 2017-07-03 17:30:13 | 2017-07-03 18:10:13 | 0:40:00 | 0:37:54 | 0:02:06 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
dead | 1356867 | 2017-07-03 16:56:06 | 2017-07-03 17:30:20 | 2017-07-04 05:37:10 | 12:06:50 | smithi | master | ubuntu | 14.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/one.yaml workloads/snaps-few-objects.yaml} | 2 | |||
pass | 1356868 | 2017-07-03 16:56:07 | 2017-07-03 17:32:27 | 2017-07-03 17:44:26 | 0:11:59 | 0:08:05 | 0:03:54 | smithi | master | rados/singleton/{all/peer.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 | |||
pass | 1356869 | 2017-07-03 16:56:07 | 2017-07-03 17:32:27 | 2017-07-03 17:46:27 | 0:14:00 | 0:11:53 | 0:02:07 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |||
fail | 1356870 | 2017-07-03 16:56:08 | 2017-07-03 17:33:46 | 2017-07-03 18:09:46 | 0:36:00 | 0:30:28 | 0:05:32 | smithi | master | centos | 7.3 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore.yaml rados.yaml supported/centos_latest.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2017-07-03 17:41:15.553167 mon.b mon.0 172.21.15.137:6789/0 408 : cluster [WRN] overall HEALTH_WARN 1 pools have pg_num > pgp_num" in cluster log |
pass | 1356871 | 2017-07-03 16:56:09 | 2017-07-03 17:33:48 | 2017-07-03 18:13:48 | 0:40:00 | 0:38:34 | 0:01:26 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
fail | 1356872 | 2017-07-03 16:56:10 | 2017-07-03 17:34:06 | 2017-07-03 18:00:05 | 0:25:59 | 0:21:46 | 0:04:13 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:51:37.024306 mon.a mon.0 172.21.15.144:6789/0 3724 : cluster [ERR] overall HEALTH_ERR noscrub flag(s) set; 4 pgs stuck inactive; 4 pgs stuck unclean; 1 pools have pg_num > pgp_num" in cluster log |
pass | 1356873 | 2017-07-03 16:56:10 | 2017-07-03 17:34:15 | 2017-07-03 17:58:15 | 0:24:00 | 0:23:25 | 0:00:35 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
pass | 1356874 | 2017-07-03 16:56:11 | 2017-07-03 17:35:04 | 2017-07-03 17:49:03 | 0:13:59 | 0:11:48 | 0:02:11 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/readwrite.yaml} | 2 | |||
pass | 1356875 | 2017-07-03 16:56:12 | 2017-07-03 17:35:32 | 2017-07-03 17:43:31 | 0:07:59 | 0:06:26 | 0:01:33 | smithi | master | rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
fail | 1356876 | 2017-07-03 16:56:12 | 2017-07-03 17:35:32 | 2017-07-03 18:01:32 | 0:26:00 | 0:21:22 | 0:04:38 | smithi | master | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:49:48.170404 mon.b mon.0 172.21.15.133:6789/0 1770 : cluster [ERR] overall HEALTH_ERR nodeep-scrub flag(s) set; 513/3756 objects degraded (13.658%); 13 pgs degraded; 9 pgs recovery_wait; 3 pgs stuck inactive; 3 pgs stuck unclean; 2 pools have pg_num > pgp_num" in cluster log |
fail | 1356877 | 2017-07-03 16:56:13 | 2017-07-03 17:37:43 | 2017-07-03 17:53:43 | 0:16:00 | 0:10:14 | 0:05:46 | smithi | master | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |||
Failure Reason:
"2017-07-03 17:44:55.126267 mon.c mon.0 172.21.15.59:6789/0 376 : cluster [WRN] overall HEALTH_WARN 1 pools have pg_num > pgp_num" in cluster log |
fail | 1356878 | 2017-07-03 16:56:14 | 2017-07-03 17:38:03 | 2017-07-03 18:04:03 | 0:26:00 | 0:21:21 | 0:04:39 | smithi | master | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:43:32.161421 mon.a mon.0 172.21.15.92:6789/0 165 : cluster [WRN] overall HEALTH_WARN 1 pools have pg_num > pgp_num" in cluster log |
fail | 1356879 | 2017-07-03 16:56:14 | 2017-07-03 17:38:03 | 2017-07-03 17:56:02 | 0:17:59 | 0:11:25 | 0:06:34 | smithi | master | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
Failure Reason:
"2017-07-03 17:46:26.233250 mon.a mon.0 172.21.15.78:6789/0 777 : cluster [WRN] overall HEALTH_WARN noscrub flag(s) set; 1 pools have pg_num > pgp_num" in cluster log |
pass | 1356880 | 2017-07-03 16:56:15 | 2017-07-03 17:38:32 | 2017-07-03 17:50:32 | 0:12:00 | 0:08:41 | 0:03:19 | smithi | master | ubuntu | 14.04 | rados/basic-luminous/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} objectstore/filestore-btrfs.yaml rados.yaml scrub_test.yaml} | 2 | |
fail | 1356881 | 2017-07-03 16:56:16 | 2017-07-03 17:38:32 | 2017-07-03 17:52:32 | 0:14:00 | 0:08:33 | 0:05:27 | smithi | master | ubuntu | 14.04 | rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/filestore-btrfs.yaml tasks/failover.yaml} | 2 | |
Failure Reason:
"2017-07-03 17:44:18.002906 mon.b mon.0 172.21.15.77:6789/0 151 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 14 pgs stuck inactive" in cluster log |
fail | 1356882 | 2017-07-03 16:56:16 | 2017-07-03 17:38:34 | 2017-07-03 17:54:34 | 0:16:00 | 0:11:15 | 0:04:45 | smithi | master | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-health TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
pass | 1356883 | 2017-07-03 16:56:17 | 2017-07-03 17:39:23 | 2017-07-03 17:49:23 | 0:10:00 | 0:08:10 | 0:01:50 | smithi | master | rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml} | 1 | |||
fail | 1356884 | 2017-07-03 16:56:18 | 2017-07-03 17:39:23 | 2017-07-03 18:05:23 | 0:26:00 | 0:19:59 | 0:06:01 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:56:13.825070 mon.a mon.0 172.21.15.114:6789/0 1861 : cluster [ERR] overall HEALTH_ERR 1 osds down; 516/2610 objects degraded (19.770%); 75 pgs degraded; 15 pgs stuck inactive; 16 pgs stuck unclean; 75 pgs undersized; 2 pools have pg_num > pgp_num" in cluster log |
pass | 1356885 | 2017-07-03 16:56:19 | 2017-07-03 17:39:57 | 2017-07-03 18:09:57 | 0:30:00 | 0:29:12 | 0:00:48 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |||
fail | 1356886 | 2017-07-03 16:56:19 | 2017-07-03 17:39:57 | 2017-07-03 17:59:56 | 0:19:59 | 0:15:05 | 0:04:54 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:52:08.435806 mon.b mon.0 172.21.15.118:6789/0 1817 : cluster [ERR] overall HEALTH_ERR noscrub,nodeep-scrub flag(s) set; 1 osds down; 87/912 objects degraded (9.539%); 28 pgs degraded; 2 pgs stuck inactive; 2 pgs stuck unclean; 28 pgs undersized; 2 pools have pg_num > pgp_num" in cluster log |
pass | 1356887 | 2017-07-03 16:56:20 | 2017-07-03 17:40:35 | 2017-07-03 17:58:34 | 0:17:59 | 0:17:20 | 0:00:39 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/radostool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
fail | 1356888 | 2017-07-03 16:56:22 | 2017-07-03 17:40:44 | 2017-07-03 17:56:43 | 0:15:59 | 0:11:12 | 0:04:47 | smithi | master | rados/multimon/{clusters/6.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/mon_recovery.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:49:23.782382 mon.b mon.0 172.21.15.4:6789/0 11 : cluster [WRN] overall HEALTH_WARN 2/6 mons down, quorum b,d,f,e" in cluster log |
pass | 1356889 | 2017-07-03 16:56:22 | 2017-07-03 17:41:37 | 2017-07-03 17:53:37 | 0:12:00 | 0:11:04 | 0:00:56 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
fail | 1356890 | 2017-07-03 16:56:23 | 2017-07-03 17:41:52 | 2017-07-03 17:57:52 | 0:16:00 | 0:10:29 | 0:05:31 | smithi | master | ubuntu | 14.04 | rados/thrash-luminous/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason:
"2017-07-03 17:49:10.204882 mon.b mon.0 172.21.15.51:6789/0 900 : cluster [ERR] overall HEALTH_ERR 1 osds down; 410/2742 objects degraded (14.953%); 12 pgs degraded; 1 pgs stuck inactive; 1 pgs stuck unclean; 12 pgs undersized; 1 pools have pg_num > pgp_num" in cluster log |
pass | 1356891 | 2017-07-03 16:56:24 | 2017-07-03 17:41:54 | 2017-07-03 17:51:53 | 0:09:59 | 0:09:30 | 0:00:29 | smithi | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 | |||
fail | 1356892 | 2017-07-03 16:56:24 | 2017-07-03 17:42:13 | 2017-07-03 18:10:12 | 0:27:59 | 0:23:22 | 0:04:37 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:47:21.315867 mon.a mon.0 172.21.15.105:6789/0 164 : cluster [WRN] overall HEALTH_WARN noscrub,nodeep-scrub flag(s) set" in cluster log |
pass | 1356893 | 2017-07-03 16:56:25 | 2017-07-03 17:42:24 | 2017-07-03 17:48:23 | 0:05:59 | 0:04:45 | 0:01:14 | smithi | master | rados/objectstore/fusestore.yaml | 1 | |||
dead | 1356894 | 2017-07-03 16:56:26 | 2017-07-03 17:43:03 | 2017-07-04 05:48:52 | 12:05:49 | smithi | master | rados/monthrash/{ceph.yaml clusters/9-mons.yaml d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/one.yaml workloads/pool-create-delete.yaml} | 2 | |||||
pass | 1356895 | 2017-07-03 16:56:27 | 2017-07-03 17:43:17 | 2017-07-03 17:59:17 | 0:16:00 | 0:14:37 | 0:01:23 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |||
pass | 1356896 | 2017-07-03 16:56:27 | 2017-07-03 17:43:17 | 2017-07-03 18:03:17 | 0:20:00 | 0:18:43 | 0:01:17 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/repair_test.yaml} | 2 | |||
pass | 1356897 | 2017-07-03 16:56:28 | 2017-07-03 17:43:17 | 2017-07-03 17:51:17 | 0:08:00 | 0:07:06 | 0:00:54 | smithi | master | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | |||
pass | 1356898 | 2017-07-03 16:56:29 | 2017-07-03 17:43:29 | 2017-07-03 18:11:29 | 0:28:00 | 0:26:54 | 0:01:06 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
pass | 1356899 | 2017-07-03 16:56:29 | 2017-07-03 17:43:33 | 2017-07-03 18:27:33 | 0:44:00 | 0:40:16 | 0:03:44 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
pass | 1356900 | 2017-07-03 16:56:30 | 2017-07-03 17:44:35 | 2017-07-03 18:12:35 | 0:28:00 | 0:25:53 | 0:02:07 | smithi | master | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml} | 1 | |||
fail | 1356901 | 2017-07-03 16:56:31 | 2017-07-03 17:44:35 | 2017-07-03 18:18:35 | 0:34:00 | 0:27:34 | 0:06:26 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:08:45.277436 mon.a mon.0 172.21.15.26:6789/0 5383 : cluster [ERR] overall HEALTH_ERR nodeep-scrub flag(s) set; 6 pgs stuck inactive; 6 pgs stuck unclean; 1 pools have pg_num > pgp_num" in cluster log |
pass | 1356902 | 2017-07-03 16:56:31 | 2017-07-03 17:45:38 | 2017-07-03 18:01:38 | 0:16:00 | 0:14:37 | 0:01:23 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |||
fail | 1356903 | 2017-07-03 16:56:32 | 2017-07-03 17:46:02 | 2017-07-03 18:02:01 | 0:15:59 | 0:11:46 | 0:04:13 | smithi | master | rados/singleton/{all/reg11184.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml} | 1 | |||
fail | 1356904 | 2017-07-03 16:56:33 | 2017-07-03 17:46:33 | 2017-07-03 18:24:33 | 0:38:00 | 0:33:14 | 0:04:46 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:06:12.075714 mon.b mon.0 172.21.15.7:6789/0 4891 : cluster [ERR] overall HEALTH_ERR noscrub flag(s) set; 1 pgs stuck inactive; 1 pgs stuck unclean; 1 pools have pg_num > pgp_num" in cluster log |
pass | 1356905 | 2017-07-03 16:56:33 | 2017-07-03 17:46:46 | 2017-07-03 18:20:46 | 0:34:00 | 0:32:05 | 0:01:55 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-btrfs.yaml rados.yaml supported/ubuntu_14.04.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
fail | 1356906 | 2017-07-03 16:56:35 | 2017-07-03 17:46:47 | 2017-07-03 18:08:46 | 0:21:59 | 0:15:52 | 0:06:07 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason:
"2017-07-03 17:51:49.792865 mon.b mon.0 172.21.15.60:6789/0 153 : cluster [WRN] HEALTH_WARN POOL_FULL: 1 pool(s) full" in cluster log |
fail | 1356907 | 2017-07-03 16:56:35 | 2017-07-03 17:47:22 | 2017-07-03 18:17:22 | 0:30:00 | 0:23:45 | 0:06:15 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:55:04.428068 mon.b mon.0 172.21.15.5:6789/0 253 : cluster [ERR] overall HEALTH_ERR noscrub flag(s) set; 144/2206 objects degraded (6.528%); 1 pgs degraded; 4 pgs stuck inactive; 4 pgs stuck unclean; 1 pools have pg_num > pgp_num" in cluster log |
pass | 1356908 | 2017-07-03 16:56:36 | 2017-07-03 17:47:38 | 2017-07-03 18:09:38 | 0:22:00 | 0:20:53 | 0:01:07 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/rgw_snaps.yaml} | 2 | |||
pass | 1356909 | 2017-07-03 16:56:37 | 2017-07-03 17:47:44 | 2017-07-03 17:57:44 | 0:10:00 | 0:08:26 | 0:01:34 | smithi | master | rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml} | 2 | |||
pass | 1356910 | 2017-07-03 16:56:38 | 2017-07-03 17:47:48 | 2017-07-03 18:15:48 | 0:28:00 | 0:23:31 | 0:04:29 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |||
pass | 1356911 | 2017-07-03 16:56:38 | 2017-07-03 17:48:10 | 2017-07-03 18:00:10 | 0:12:00 | 0:09:43 | 0:02:17 | smithi | master | rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml thrashers/sync-many.yaml workloads/rados_5925.yaml} | 2 | |||
pass | 1356912 | 2017-07-03 16:56:39 | 2017-07-03 17:48:33 | 2017-07-03 18:28:33 | 0:40:00 | 0:38:44 | 0:01:16 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |
pass | 1356913 | 2017-07-03 16:56:40 | 2017-07-03 17:48:41 | 2017-07-03 18:16:41 | 0:28:00 | 0:10:08 | 0:17:52 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |
pass | 1356914 | 2017-07-03 16:56:40 | 2017-07-03 17:48:55 | 2017-07-03 18:12:55 | 0:24:00 | 0:22:37 | 0:01:23 | smithi | master | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml fast/fast.yaml leveldb.yaml msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |||
pass | 1356915 | 2017-07-03 16:56:41 | 2017-07-03 17:48:56 | 2017-07-03 18:04:56 | 0:16:00 | 0:11:31 | 0:04:29 | smithi | master | ubuntu | 14.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/few.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 1356916 | 2017-07-03 16:56:42 | 2017-07-03 17:49:04 | 2017-07-03 17:57:04 | 0:08:00 | 0:06:16 | 0:01:44 | smithi | master | rados/objectstore/keyvaluedb.yaml | 1 | |||
fail | 1356917 | 2017-07-03 16:56:42 | 2017-07-03 17:49:13 | 2017-07-03 18:19:13 | 0:30:00 | 0:26:29 | 0:03:31 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:54:27.439742 mon.a mon.0 172.21.15.81:6789/0 465 : cluster [WRN] overall HEALTH_WARN noscrub,nodeep-scrub flag(s) set; 1 pools have pg_num > pgp_num" in cluster log |
pass | 1356918 | 2017-07-03 16:56:43 | 2017-07-03 17:49:24 | 2017-07-03 18:05:23 | 0:15:59 | 0:15:06 | 0:00:53 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
fail | 1356919 | 2017-07-03 16:56:44 | 2017-07-03 17:49:51 | 2017-07-03 21:01:55 | 3:12:04 | 3:06:23 | 0:05:41 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/rest-api.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
Failure Reason:
Command failed (workunit test rest/test.py) on smithi077 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-health TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rest/test.py' |
pass | 1356920 | 2017-07-03 16:56:44 | 2017-07-03 17:49:52 | 2017-07-03 17:59:51 | 0:09:59 | 0:08:25 | 0:01:34 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
fail | 1356921 | 2017-07-03 16:56:45 | 2017-07-03 17:50:33 | 2017-07-03 18:06:32 | 0:15:59 | 0:10:23 | 0:05:36 | smithi | master | rados/thrash-luminous/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |||
Failure Reason:
"2017-07-03 17:55:39.569198 mon.a mon.0 172.21.15.88:6789/0 424 : cluster [WRN] overall HEALTH_WARN nodeep-scrub flag(s) set; 678/4395 objects degraded (15.427%); 9 pgs degraded; 1 pools have pg_num > pgp_num" in cluster log |
fail | 1356922 | 2017-07-03 16:56:46 | 2017-07-03 17:51:25 | 2017-07-03 18:13:25 | 0:22:00 | 0:13:36 | 0:08:24 | smithi | master | rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:02:13.287366 mon.a mon.0 172.21.15.2:6789/0 163 : cluster [WRN] HEALTH_WARN OBJECT_DEGRADED: 82408/661524 objects degraded (12.457%)" in cluster log |
pass | 1356923 | 2017-07-03 16:56:46 | 2017-07-03 17:51:38 | 2017-07-03 18:17:38 | 0:26:00 | 0:23:54 | 0:02:06 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |||
fail | 1356924 | 2017-07-03 16:56:47 | 2017-07-03 17:51:54 | 2017-07-03 18:19:54 | 0:28:00 | 0:22:01 | 0:05:59 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:00:51.736890 mon.b mon.0 172.21.15.6:6789/0 942 : cluster [ERR] overall HEALTH_ERR noscrub,nodeep-scrub flag(s) set; 1 osds down; 400/1160 objects degraded (34.483%); 38/1160 objects misplaced (3.276%); 1 pgs backfill_toofull; 13 pgs degraded; 1 pgs stuck inactive; 2 pgs stuck unclean; 13 pgs undersized" in cluster log |
pass | 1356925 | 2017-07-03 16:56:48 | 2017-07-03 17:52:41 | 2017-07-03 18:28:41 | 0:36:00 | 0:34:08 | 0:01:52 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
pass | 1356926 | 2017-07-03 16:56:48 | 2017-07-03 17:52:41 | 2017-07-03 18:14:41 | 0:22:00 | 0:20:15 | 0:01:45 | smithi | master | ubuntu | 14.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/rados_api_tests.yaml} | 2 | |
fail | 1356927 | 2017-07-03 16:56:49 | 2017-07-03 17:53:38 | 2017-07-03 18:05:37 | 0:11:59 | 0:08:01 | 0:03:58 | smithi | master | rados/multimon/{clusters/9.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/mon_clock_no_skews.yaml} | 3 | |||
Failure Reason:
global name 'self' is not defined |
pass | 1356928 | 2017-07-03 16:56:50 | 2017-07-03 17:53:53 | 2017-07-03 18:05:53 | 0:12:00 | 0:10:10 | 0:01:50 | smithi | master | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | |||
fail | 1356929 | 2017-07-03 16:56:50 | 2017-07-03 17:54:35 | 2017-07-03 18:16:35 | 0:22:00 | 0:15:38 | 0:06:22 | smithi | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml d-thrash/none.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | |||
Failure Reason:
saw valgrind issues |
fail | 1356930 | 2017-07-03 16:56:51 | 2017-07-03 17:56:12 | 2017-07-03 18:32:12 | 0:36:00 | 0:30:04 | 0:05:56 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:09:13.503312 mon.b mon.0 172.21.15.51:6789/0 2777 : cluster [ERR] overall HEALTH_ERR noscrub,nodeep-scrub flag(s) set; 3 pgs stuck inactive; 4 pgs stuck unclean" in cluster log |
pass | 1356931 | 2017-07-03 16:56:52 | 2017-07-03 17:56:44 | 2017-07-03 18:08:44 | 0:12:00 | 0:10:17 | 0:01:43 | smithi | master | rados/basic-luminous/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} objectstore/filestore-xfs.yaml rados.yaml scrub_test.yaml} | 2 | |||
fail | 1356932 | 2017-07-03 16:56:52 | 2017-07-03 17:57:05 | 2017-07-03 18:11:04 | 0:13:59 | 0:09:50 | 0:04:09 | smithi | master | rados/mgr/{clusters/2-node-mgr.yaml debug/mgr.yaml objectstore/filestore-xfs.yaml tasks/failover.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:03:40.562561 mon.a mon.0 172.21.15.12:6789/0 158 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 14 pgs stuck inactive" in cluster log |
fail | 1356933 | 2017-07-03 16:56:53 | 2017-07-03 17:57:25 | 2017-07-03 18:13:25 | 0:16:00 | 0:12:26 | 0:03:34 | smithi | master | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed (workunit test cephtool/test.sh) on smithi028 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=wip-health TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh' |
pass | 1356934 | 2017-07-03 16:56:54 | 2017-07-03 17:57:45 | 2017-07-03 18:27:45 | 0:30:00 | 0:28:48 | 0:01:12 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
fail | 1356935 | 2017-07-03 16:56:54 | 2017-07-03 17:57:54 | 2017-07-03 18:17:53 | 0:19:59 | 0:16:13 | 0:03:46 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:07:15.284987 mon.a mon.0 172.21.15.11:6789/0 1038 : cluster [ERR] overall HEALTH_ERR noscrub,nodeep-scrub flag(s) set; 105/2652 objects misplaced (3.959%); 2 pgs backfilling; 1 pgs stuck inactive; 1 pgs stuck unclean; 1 pools have pg_num > pgp_num" in cluster log |
dead | 1356936 | 2017-07-03 16:56:55 | 2017-07-03 17:58:16 | 2017-07-04 06:04:38 | 12:06:22 | smithi | master | rados/monthrash/{ceph.yaml clusters/9-mons.yaml d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/mon-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml thrashers/sync.yaml workloads/rados_api_tests.yaml} | 2 | |||||
pass | 1356937 | 2017-07-03 16:56:56 | 2017-07-03 17:58:24 | 2017-07-03 18:16:23 | 0:17:59 | 0:17:21 | 0:00:38 | smithi | master | rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml} | 2 | |||
pass | 1356938 | 2017-07-03 16:56:56 | 2017-07-03 17:58:43 | 2017-07-03 18:20:42 | 0:21:59 | 0:21:39 | 0:00:20 | smithi | master | rados/objectstore/objectcacher-stress.yaml | 1 | |||
pass | 1356939 | 2017-07-03 16:56:57 | 2017-07-03 17:59:18 | 2017-07-03 18:31:18 | 0:32:00 | 0:31:25 | 0:00:35 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
fail | 1356940 | 2017-07-03 16:56:58 | 2017-07-03 18:00:01 | 2017-07-03 18:24:00 | 0:23:59 | 0:19:37 | 0:04:22 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:16:48.158558 mon.b mon.0 172.21.15.133:6789/0 2765 : cluster [ERR] overall HEALTH_ERR 2 pgs stuck inactive; 2 pgs stuck unclean" in cluster log |
fail | 1356941 | 2017-07-03 16:56:59 | 2017-07-03 18:00:01 | 2017-07-03 18:32:01 | 0:32:00 | 0:27:57 | 0:04:03 | smithi | master | ubuntu | 16.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml supported/ubuntu_latest.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
"2017-07-03 18:07:21.766653 mon.b mon.0 172.21.15.96:6789/0 681 : cluster [ERR] overall HEALTH_ERR noscrub,nodeep-scrub flag(s) set; 2 pgs stuck inactive; 2 pgs stuck unclean" in cluster log |
pass | 1356942 | 2017-07-03 16:56:59 | 2017-07-03 18:00:07 | 2017-07-03 18:08:06 | 0:07:59 | 0:06:17 | 0:01:42 | smithi | master | rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml} | 1 | |||
pass | 1356943 | 2017-07-03 16:57:00 | 2017-07-03 18:00:11 | 2017-07-03 19:00:11 | 1:00:00 | 0:58:02 | 0:01:58 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
pass | 1356944 | 2017-07-03 16:57:01 | 2017-07-03 18:00:32 | 2017-07-03 18:14:32 | 0:14:00 | 0:08:58 | 0:05:02 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml} | 2 | |||
pass | 1356945 | 2017-07-03 16:57:02 | 2017-07-03 18:01:40 | 2017-07-03 18:25:40 | 0:24:00 | 0:21:50 | 0:02:10 | smithi | master | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml} | 2 | |||
pass | 1356946 | 2017-07-03 16:57:02 | 2017-07-03 18:01:40 | 2017-07-03 18:21:40 | 0:20:00 | 0:18:51 | 0:01:09 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |||
pass | 1356947 | 2017-07-03 16:57:03 | 2017-07-03 18:02:02 | 2017-07-03 18:36:02 | 0:34:00 | 0:31:44 | 0:02:16 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
fail | 1356948 | 2017-07-03 16:57:04 | 2017-07-03 18:02:04 | 2017-07-03 18:40:04 | 0:38:00 | 0:33:22 | 0:04:38 | smithi | master | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:17:07.566759 mon.a mon.0 172.21.15.108:6789/0 3218 : cluster [ERR] overall HEALTH_ERR noscrub flag(s) set; 7 pgs stuck inactive; 7 pgs stuck unclean; 1 pools have pg_num > pgp_num" in cluster log |
fail | 1356949 | 2017-07-03 16:57:04 | 2017-07-03 18:03:17 | 2017-07-03 18:21:17 | 0:18:00 | 0:13:04 | 0:04:56 | smithi | master | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 4 | |||
Failure Reason:
"2017-07-03 18:12:18.602804 mon.a mon.0 172.21.15.92:6789/0 369 : cluster [WRN] overall HEALTH_WARN 1 pools have pg_num > pgp_num" in cluster log |
fail | 1356950 | 2017-07-03 16:57:05 | 2017-07-03 18:03:18 | 2017-07-03 18:47:18 | 0:44:00 | 0:36:45 | 0:07:15 | smithi | master | rados/thrash-erasure-code-overwrites/{bluestore.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml fast/normal.yaml leveldb.yaml msgr-failures/osd-delay.yaml rados.yaml thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:14:55.850445 mon.a mon.0 172.21.15.134:6789/0 737 : cluster [ERR] overall HEALTH_ERR noscrub,nodeep-scrub flag(s) set; 1 osds down; 2 pgs degraded; 10 pgs incomplete; 2 pgs undersized; 1 pools have pg_num > pgp_num" in cluster log |
fail | 1356951 | 2017-07-03 16:57:06 | 2017-07-03 18:04:08 | 2017-07-03 18:24:08 | 0:20:00 | 0:12:58 | 0:07:02 | smithi | master | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml leveldb.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
Failure Reason:
"2017-07-03 18:15:13.912290 mon.c mon.0 172.21.15.80:6789/0 285 : cluster [WRN] overall HEALTH_WARN 1 pools have pg_num > pgp_num" in cluster log |
||||||||||||||
fail | 1356952 | 2017-07-03 16:57:07 | 2017-07-03 18:05:18 | 2017-07-03 18:25:17 | 0:19:59 | 0:14:26 | 0:05:33 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:11:14.450788 mon.b mon.0 172.21.15.112:6789/0 213 : cluster [WRN] overall HEALTH_WARN 1 pools have pg_num > pgp_num" in cluster log |
||||||||||||||
pass | 1356953 | 2017-07-03 16:57:07 | 2017-07-03 18:05:24 | 2017-07-03 18:27:24 | 0:22:00 | 0:20:51 | 0:01:09 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |||
fail | 1356954 | 2017-07-03 16:57:08 | 2017-07-03 18:05:24 | 2017-07-03 18:25:24 | 0:20:00 | 0:15:51 | 0:04:09 | smithi | master | rados/thrash-luminous/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:17:11.500981 mon.b mon.0 172.21.15.143:6789/0 1558 : cluster [ERR] overall HEALTH_ERR 2 pgs stuck inactive; 2 pgs stuck unclean" in cluster log |
||||||||||||||
pass | 1356955 | 2017-07-03 16:57:09 | 2017-07-03 18:05:45 | 2017-07-03 18:13:44 | 0:07:59 | 0:06:12 | 0:01:47 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/many.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
fail | 1356956 | 2017-07-03 16:57:09 | 2017-07-03 18:05:53 | 2017-07-03 18:53:54 | 0:48:01 | 0:42:20 | 0:05:41 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:17:00.614508 mon.a mon.0 172.21.15.37:6789/0 1386 : cluster [ERR] overall HEALTH_ERR nodeep-scrub flag(s) set; 2 osds down; 4511/16689 objects degraded (27.030%); 34 pgs degraded; 11 pgs stuck degraded; 20 pgs stuck inactive; 21 pgs stuck unclean; 11 pgs stuck undersized; 34 pgs undersized; 1 pools have pg_num > pgp_num" in cluster log |
||||||||||||||
fail | 1356957 | 2017-07-03 16:57:10 | 2017-07-03 18:06:43 | 2017-07-03 18:20:43 | 0:14:00 | 0:09:17 | 0:04:43 | smithi | master | ubuntu | 14.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml thrashers/force-sync-many.yaml workloads/rados_mon_workunits.yaml} | 2 | |
Failure Reason:
"2017-07-03 18:12:56.681265 mon.b mon.1 172.21.15.163:6789/0 781 : cluster [ERR] HEALTH_ERR PG_STUCK_INACTIVE: 1 pgs stuck inactive" in cluster log |
||||||||||||||
pass | 1356958 | 2017-07-03 16:57:11 | 2017-07-03 18:08:15 | 2017-07-03 18:22:14 | 0:13:59 | 0:11:38 | 0:02:21 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
fail | 1356959 | 2017-07-03 16:57:11 | 2017-07-03 18:08:45 | 2017-07-03 18:40:50 | 0:32:05 | 0:28:43 | 0:03:22 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:18:06.602092 mon.a mon.0 172.21.15.150:6789/0 818 : cluster [ERR] overall HEALTH_ERR 1 pgs stuck inactive; 1 pgs stuck unclean" in cluster log |
||||||||||||||
pass | 1356960 | 2017-07-03 16:57:12 | 2017-07-03 18:08:47 | 2017-07-04 01:50:59 | 7:42:12 | 7:40:23 | 0:01:49 | smithi | master | rados/objectstore/objectstore.yaml | 1 | |||
fail | 1356961 | 2017-07-03 16:57:13 | 2017-07-03 18:08:48 | 2017-07-03 18:32:48 | 0:24:00 | 0:15:03 | 0:08:57 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_python.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:21:01.800744 mon.a mon.0 172.21.15.137:6789/0 251 : cluster [WRN] HEALTH_WARN OBJECT_DEGRADED: 1/2 objects degraded (50.000%)" in cluster log |
||||||||||||||
pass | 1356962 | 2017-07-03 16:57:13 | 2017-07-03 18:08:58 | 2017-07-03 18:14:57 | 0:05:59 | 0:05:16 | 0:00:43 | smithi | master | ubuntu | 14.04 | rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml} | 1 | |
pass | 1356963 | 2017-07-03 16:57:14 | 2017-07-03 18:09:50 | 2017-07-03 18:51:50 | 0:42:00 | 0:39:48 | 0:02:12 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |||
fail | 1356964 | 2017-07-03 16:57:15 | 2017-07-03 18:09:50 | 2017-07-03 18:25:49 | 0:15:59 | 0:11:03 | 0:04:56 | smithi | master | rados/singleton-nomsgr/{all/valgrind-leaks.yaml rados.yaml} | 1 | |||
Failure Reason:
Command failed on smithi004 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest' |
||||||||||||||
fail | 1356965 | 2017-07-03 16:57:15 | 2017-07-03 18:09:57 | 2017-07-03 18:45:57 | 0:36:00 | 0:31:07 | 0:04:53 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:17:27.139017 mon.b mon.0 172.21.15.22:6789/0 204 : cluster [WRN] overall HEALTH_WARN nodeep-scrub flag(s) set" in cluster log |
||||||||||||||
pass | 1356966 | 2017-07-03 16:57:16 | 2017-07-03 18:10:14 | 2017-07-03 18:32:13 | 0:21:59 | 0:14:19 | 0:07:40 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
fail | 1356967 | 2017-07-03 16:57:17 | 2017-07-03 18:10:14 | 2017-07-03 18:36:14 | 0:26:00 | 0:14:38 | 0:11:22 | smithi | master | ubuntu | 14.04 | rados/multimon/{clusters/21.yaml mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/mon_clock_with_skews.yaml} | 3 | |
Failure Reason:
global name 'self' is not defined |
||||||||||||||
fail | 1356968 | 2017-07-03 16:57:17 | 2017-07-03 18:11:13 | 2017-07-03 18:35:13 | 0:24:00 | 0:18:36 | 0:05:24 | smithi | master | ubuntu | 14.04 | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml d-thrash/default/{default.yaml thrashosds-health.yaml} mon_kv_backend/leveldb.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 | |
Failure Reason:
"2017-07-03 18:17:33.867263 mon.a mon.0 172.21.15.56:6789/0 327 : cluster [WRN] HEALTH_WARN POOL_FULL: 1 pool(s) full" in cluster log |
||||||||||||||
pass | 1356969 | 2017-07-03 16:57:18 | 2017-07-03 18:11:29 | 2017-07-03 18:27:29 | 0:16:00 | 0:13:57 | 0:02:03 | smithi | master | rados/singleton/{all/divergent_priors.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml} | 1 | |||
pass | 1356970 | 2017-07-03 16:57:19 | 2017-07-03 18:12:44 | 2017-07-03 18:42:44 | 0:30:00 | 0:27:32 | 0:02:28 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-xfs.yaml rados.yaml rocksdb.yaml thrashers/none.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |||
fail | 1356971 | 2017-07-03 16:57:19 | 2017-07-03 18:12:56 | 2017-07-03 18:42:56 | 0:30:00 | 0:26:25 | 0:03:35 | smithi | master | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/osd-delay.yaml msgr/simple.yaml objectstore/bluestore-comp.yaml rados.yaml rocksdb.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:20:42.194045 mon.a mon.0 172.21.15.18:6789/0 289 : cluster [WRN] HEALTH_WARN POOL_FULL: 1 pool(s) full" in cluster log |
||||||||||||||
fail | 1356972 | 2017-07-03 16:57:20 | 2017-07-03 18:13:26 | 2017-07-03 19:11:27 | 0:58:01 | 0:52:45 | 0:05:16 | smithi | master | rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |||
Failure Reason:
"2017-07-03 18:27:11.750216 mon.a mon.0 172.21.15.31:6789/0 297 : cluster [WRN] overall HEALTH_WARN nodeep-scrub flag(s) set; 415/12158 objects degraded (3.413%); 1 pgs degraded; 1 pools have pg_num > pgp_num" in cluster log |
||||||||||||||
pass | 1356973 | 2017-07-03 16:57:21 | 2017-07-03 18:13:27 | 2017-07-03 18:49:27 | 0:36:00 | 0:34:25 | 0:01:35 | smithi | master | centos | 7.3 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml leveldb.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml supported/centos_latest.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 1356974 | 2017-07-03 16:57:22 | 2017-07-03 18:13:53 | 2017-07-03 18:37:53 | 0:24:00 | 0:21:46 | 0:02:14 | smithi | master | ubuntu | 14.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-end.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/filestore-btrfs.yaml rados.yaml rocksdb.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
pass | 1356975 | 2017-07-03 16:57:22 | 2017-07-03 18:13:53 | 2017-07-03 18:31:53 | 0:18:00 | 0:16:42 | 0:01:18 | smithi | master | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-require-luminous/at-mkfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/many.yaml msgr/random.yaml objectstore/bluestore.yaml rados.yaml tasks/rados_stress_watch.yaml} | 2 |