User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
kchai | 2019-06-15 04:14:13 | 2019-06-15 09:22:18 | 2019-06-17 08:35:44 | 1 day, 23:13:26 | rados | wip-kefu-testing-2019-06-14-1737 | mira | 4e3c8e8 | 12 | 342 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 4037366 | 2019-06-15 04:14:26 | 2019-06-15 04:14:37 | 2019-06-15 10:54:42 | 6:40:05 | 2:23:49 | 4:16:16 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037367 | 2019-06-15 04:14:27 | 2019-06-15 05:02:01 | 2019-06-15 07:44:02 | 2:42:01 | 0:20:41 | 2:21:20 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037368 | 2019-06-15 04:14:28 | 2019-06-15 05:03:59 | 2019-06-15 06:23:59 | 1:20:00 | 0:23:24 | 0:56:36 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037369 | 2019-06-15 04:14:28 | 2019-06-15 05:44:30 | 2019-06-15 07:38:30 | 1:54:00 | 0:20:58 | 1:33:02 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/ssh_orchestrator.yaml} | 2 | |
Failure Reason: Command failed on mira061 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037370 | 2019-06-15 04:14:29 | 2019-06-15 06:24:02 | 2019-06-15 09:22:04 | 2:58:02 | 0:20:21 | 2:37:41 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037371 | 2019-06-15 04:14:30 | 2019-06-15 07:10:19 | 2019-06-15 08:12:19 | 1:02:00 | 0:20:53 | 0:41:07 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037372 | 2019-06-15 04:14:31 | 2019-06-15 07:38:45 | 2019-06-15 08:16:44 | 0:37:59 | 0:20:26 | 0:17:33 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037373 | 2019-06-15 04:14:32 | 2019-06-15 07:44:16 | 2019-06-15 08:14:15 | 0:29:59 | 0:19:33 | 0:10:26 | mira | master | ubuntu | 16.04 | rados/singleton/{all/mon-config-keys.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: "2019-06-15T07:58:55.054058+0000 mon.a (mon.0) 74 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive (PG_AVAILABILITY)" in cluster log
fail | 4037374 | 2019-06-15 04:14:33 | 2019-06-15 08:12:32 | 2019-06-15 08:48:32 | 0:36:00 | 0:20:01 | 0:15:59 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037375 | 2019-06-15 04:14:33 | 2019-06-15 08:14:17 | 2019-06-15 09:54:17 | 1:40:00 | 0:21:03 | 1:18:57 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037376 | 2019-06-15 04:14:34 | 2019-06-15 08:16:58 | 2019-06-15 08:48:57 | 0:31:59 | 0:19:03 | 0:12:56 | mira | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: "2019-06-15T08:30:39.246768+0000 mon.a (mon.0) 84 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log
fail | 4037377 | 2019-06-15 04:14:35 | 2019-06-15 08:48:59 | 2019-06-15 10:29:00 | 1:40:01 | 0:20:25 | 1:19:36 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037378 | 2019-06-15 04:14:36 | 2019-06-15 09:22:18 | 2019-06-15 12:24:21 | 3:02:03 | 0:40:05 | 2:21:58 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037379 | 2019-06-15 04:14:37 | 2019-06-15 09:25:06 | 2019-06-15 11:01:07 | 1:36:01 | 0:20:05 | 1:15:56 | mira | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/one.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
Failure Reason: "2019-06-15T10:43:49.018146+0000 mon.f (mon.0) 127 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log
fail | 4037380 | 2019-06-15 04:14:38 | 2019-06-15 09:54:19 | 2019-06-15 11:40:20 | 1:46:01 | 0:29:39 | 1:16:22 | mira | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037381 | 2019-06-15 04:14:38 | 2019-06-15 10:29:16 | 2019-06-15 16:39:22 | 6:10:06 | 0:23:32 | 5:46:34 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037382 | 2019-06-15 04:14:39 | 2019-06-15 10:54:56 | 2019-06-15 11:34:56 | 0:40:00 | 0:20:10 | 0:19:50 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037383 | 2019-06-15 04:14:40 | 2019-06-15 11:01:10 | 2019-06-15 11:57:10 | 0:56:00 | 0:06:18 | 0:49:42 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_fio.yaml} | 1 | |
Failure Reason: Command failed on mira061 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037384 | 2019-06-15 04:14:41 | 2019-06-15 11:34:58 | 2019-06-15 12:19:03 | 0:44:05 | 0:26:45 | 0:17:20 | mira | master | centos | 7.6 | rados/singleton/{all/mon-config.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: "2019-06-15T12:01:55.330474+0000 mon.a (mon.0) 56 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive (PG_AVAILABILITY)" in cluster log
fail | 4037385 | 2019-06-15 04:14:41 | 2019-06-15 11:40:22 | 2019-06-15 12:48:22 | 1:08:00 | 0:28:06 | 0:39:54 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/readwrite.yaml} | 2 | |
Failure Reason: "2019-06-15T12:29:47.569968+0000 mon.b (mon.0) 210 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log
pass | 4037386 | 2019-06-15 04:14:42 | 2019-06-15 11:57:11 | 2019-06-15 15:27:14 | 3:30:03 | 0:08:14 | 3:21:49 | mira | master | ubuntu | 18.04 | rados/multimon/{clusters/3.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_recovery.yaml} | 2 | |
fail | 4037387 | 2019-06-15 04:14:43 | 2019-06-15 12:19:20 | 2019-06-15 15:07:21 | 2:48:01 | 2:23:55 | 0:24:06 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037388 | 2019-06-15 04:14:44 | 2019-06-15 12:24:36 | 2019-06-15 13:50:36 | 1:26:00 | 0:40:12 | 0:45:48 | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 | |
Failure Reason: saw valgrind issues
fail | 4037389 | 2019-06-15 04:14:45 | 2019-06-15 12:48:37 | 2019-06-15 14:24:38 | 1:36:01 | 0:20:24 | 1:15:37 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037390 | 2019-06-15 04:14:45 | 2019-06-15 13:50:38 | 2019-06-15 14:56:39 | 1:06:01 | 0:20:22 | 0:45:39 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037391 | 2019-06-15 04:14:46 | 2019-06-15 14:24:55 | 2019-06-15 15:30:54 | 1:05:59 | 0:20:24 | 0:45:35 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037392 | 2019-06-15 04:14:47 | 2019-06-15 14:56:40 | 2019-06-15 17:36:42 | 2:40:02 | 2:22:37 | 0:17:25 | mira | master | rhel | 7.6 | rados/singleton/{all/osd-backfill.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037393 | 2019-06-15 04:14:48 | 2019-06-15 15:07:23 | 2019-06-15 15:59:23 | 0:52:00 | 0:20:36 | 0:31:24 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037394 | 2019-06-15 04:14:49 | 2019-06-15 15:27:29 | 2019-06-15 15:49:28 | 0:21:59 | 0:07:02 | 0:14:57 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/workunits.yaml} | 2 | |
Failure Reason: "2019-06-15T15:46:26.745844+0000 mon.b (mon.0) 68 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)" in cluster log
fail | 4037395 | 2019-06-15 04:14:49 | 2019-06-15 15:31:10 | 2019-06-15 16:31:10 | 1:00:00 | 0:33:32 | 0:26:28 | mira | master | rhel | 7.6 | rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: "2019-06-15T16:14:09.714879+0000 mon.a (mon.0) 63 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive (PG_AVAILABILITY)" in cluster log
fail | 4037396 | 2019-06-15 04:14:50 | 2019-06-15 15:49:43 | 2019-06-15 16:07:42 | 0:17:59 | 0:06:43 | 0:11:16 | mira | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/misc.yaml} | 1 | |
Failure Reason: Command failed (workunit test misc/ok-to-stop.sh) on mira069 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4e3c8e8c47461e1654ec14767afa2f9385ed9e32 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/ok-to-stop.sh'
fail | 4037397 | 2019-06-15 04:14:51 | 2019-06-15 15:59:25 | 2019-06-15 18:45:27 | 2:46:02 | 2:25:07 | 0:20:55 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037398 | 2019-06-15 04:14:52 | 2019-06-15 16:02:07 | 2019-06-15 16:50:07 | 0:48:00 | 0:20:42 | 0:27:18 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037399 | 2019-06-15 04:14:53 | 2019-06-15 16:02:15 | 2019-06-15 16:56:14 | 0:53:59 | 0:24:03 | 0:29:56 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037400 | 2019-06-15 04:14:53 | 2019-06-15 16:07:56 | 2019-06-15 16:49:56 | 0:42:00 | 0:20:33 | 0:21:27 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037401 | 2019-06-15 04:14:54 | 2019-06-15 16:16:38 | 2019-06-15 17:00:38 | 0:44:00 | 0:36:06 | 0:07:54 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037402 | 2019-06-15 04:14:55 | 2019-06-15 16:16:42 | 2019-06-15 16:34:41 | 0:17:59 | 0:06:02 | 0:11:57 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/sample_radosbench.yaml} | 1 | |
Failure Reason: Command failed on mira117 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037403 | 2019-06-15 04:14:56 | 2019-06-15 16:16:42 | 2019-06-15 16:48:42 | 0:32:00 | 0:19:46 | 0:12:14 | mira | master | ubuntu | 18.04 | rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037404 | 2019-06-15 04:14:57 | 2019-06-15 16:31:24 | 2019-06-15 17:15:24 | 0:44:00 | 0:28:32 | 0:15:28 | mira | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037405 | 2019-06-15 04:14:57 | 2019-06-15 16:34:43 | 2019-06-15 17:04:42 | 0:29:59 | 0:19:39 | 0:10:20 | mira | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: "2019-06-15T16:48:35.170003+0000 mon.a (mon.0) 87 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log
fail | 4037406 | 2019-06-15 04:14:58 | 2019-06-15 16:39:23 | 2019-06-15 17:13:23 | 0:34:00 | 0:20:20 | 0:13:40 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037407 | 2019-06-15 04:14:59 | 2019-06-15 16:48:43 | 2019-06-15 17:40:43 | 0:52:00 | 0:27:57 | 0:24:03 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037408 | 2019-06-15 04:15:00 | 2019-06-15 16:50:11 | 2019-06-15 17:42:10 | 0:51:59 | 0:28:16 | 0:23:43 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037409 | 2019-06-15 04:15:01 | 2019-06-15 16:50:11 | 2019-06-15 17:10:10 | 0:19:59 | 0:06:35 | 0:13:24 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/repair_test.yaml} | 2 | |
Failure Reason: Command failed on mira030 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037410 | 2019-06-15 04:15:01 | 2019-06-15 16:56:17 | 2019-06-15 17:48:21 | 0:52:04 | 0:28:06 | 0:23:58 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037411 | 2019-06-15 04:15:02 | 2019-06-15 17:00:44 | 2019-06-15 17:44:44 | 0:44:00 | 0:28:03 | 0:15:57 | mira | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037412 | 2019-06-15 04:15:03 | 2019-06-15 17:04:59 | 2019-06-15 20:00:59 | 2:56:00 | 2:35:08 | 0:20:52 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037413 | 2019-06-15 04:15:04 | 2019-06-15 17:10:20 | 2019-06-15 17:44:19 | 0:33:59 | 0:20:10 | 0:13:49 | mira | master | ubuntu | 16.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037414 | 2019-06-15 04:15:05 | 2019-06-15 17:13:25 | 2019-06-15 17:47:24 | 0:33:59 | 0:20:15 | 0:13:44 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037415 | 2019-06-15 04:15:06 | 2019-06-15 17:15:26 | 2019-06-15 17:47:25 | 0:31:59 | 0:20:05 | 0:11:54 | mira | master | ubuntu | 16.04 | rados/singleton/{all/osd-recovery.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037416 | 2019-06-15 04:15:07 | 2019-06-15 17:36:47 | 2019-06-15 18:40:47 | 1:04:00 | 0:32:50 | 0:31:10 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037417 | 2019-06-15 04:15:07 | 2019-06-15 17:40:45 | 2019-06-15 18:34:45 | 0:54:00 | 0:28:22 | 0:25:38 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037418 | 2019-06-15 04:15:08 | 2019-06-15 17:42:18 | 2019-06-15 18:16:17 | 0:33:59 | 0:20:11 | 0:13:48 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037419 | 2019-06-15 04:15:09 | 2019-06-15 17:44:35 | 2019-06-15 18:34:35 | 0:50:00 | 0:27:58 | 0:22:02 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037420 | 2019-06-15 04:15:10 | 2019-06-15 17:44:47 | 2019-06-15 18:22:45 | 0:37:58 | 0:20:33 | 0:17:25 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037421 | 2019-06-15 04:15:11 | 2019-06-15 17:47:26 | 2019-06-15 18:17:25 | 0:29:59 | 0:19:30 | 0:10:29 | mira | master | ubuntu | 16.04 | rados/singleton/{all/peer.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037422 | 2019-06-15 04:15:12 | 2019-06-15 17:47:27 | 2019-06-15 18:05:26 | 0:17:59 | 0:05:39 | 0:12:20 | mira | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/crash.yaml} | 2 | |
Failure Reason: Command failed on mira034 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037423 | 2019-06-15 04:15:12 | 2019-06-15 17:48:32 | 2019-06-15 18:22:31 | 0:33:59 | 0:05:03 | 0:28:56 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason: Command failed on mira034 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037424 | 2019-06-15 04:15:13 | 2019-06-15 18:05:27 | 2019-06-15 21:19:30 | 3:14:03 | 2:44:12 | 0:29:51 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037425 | 2019-06-15 04:15:14 | 2019-06-15 18:16:32 | 2019-06-15 19:00:31 | 0:43:59 | 0:28:07 | 0:15:52 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037426 | 2019-06-15 04:15:15 | 2019-06-15 18:17:26 | 2019-06-15 18:55:26 | 0:38:00 | 0:20:27 | 0:17:33 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037427 | 2019-06-15 04:15:16 | 2019-06-15 18:22:46 | 2019-06-15 18:58:46 | 0:36:00 | 0:19:54 | 0:16:06 | mira | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037428 | 2019-06-15 04:15:17 | 2019-06-15 18:22:46 | 2019-06-15 19:08:46 | 0:46:00 | 0:20:42 | 0:25:18 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037429 | 2019-06-15 04:15:17 | 2019-06-15 18:34:49 | 2019-06-15 19:20:49 | 0:46:00 | 0:35:46 | 0:10:14 | mira | master | rhel | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/sync-many.yaml workloads/rados_mon_workunits.yaml} | 2 | |
Failure Reason: "2019-06-15T19:01:57.473238+0000 mon.b (mon.0) 195 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log
fail | 4037430 | 2019-06-15 04:15:18 | 2019-06-15 18:34:49 | 2019-06-15 19:26:49 | 0:52:00 | 0:29:55 | 0:22:05 | mira | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037431 | 2019-06-15 04:15:19 | 2019-06-15 18:41:02 | 2019-06-15 20:05:07 | 1:24:05 | 0:23:18 | 1:00:47 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037432 | 2019-06-15 04:15:20 | 2019-06-15 18:45:42 | 2019-06-15 19:37:42 | 0:52:00 | 0:40:15 | 0:11:45 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037433 | 2019-06-15 04:15:21 | 2019-06-15 18:55:41 | 2019-06-15 22:09:43 | 3:14:02 | 2:52:28 | 0:21:34 | mira | master | rhel | 7.6 | rados/singleton/{all/pg-autoscaler.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
Failure Reason: Command failed (workunit test mon/pg_autoscaler.sh) on mira072 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4e3c8e8c47461e1654ec14767afa2f9385ed9e32 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/pg_autoscaler.sh'
fail | 4037434 | 2019-06-15 04:15:22 | 2019-06-15 18:58:48 | 2019-06-15 19:32:47 | 0:33:59 | 0:20:34 | 0:13:25 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037435 | 2019-06-15 04:15:23 | 2019-06-15 19:00:46 | 2019-06-15 19:54:46 | 0:54:00 | 0:43:26 | 0:10:34 | mira | master | rhel | 7.6 | rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: "2019-06-15T19:36:32.799913+0000 mon.a (mon.0) 63 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive (PG_AVAILABILITY)" in cluster log
fail | 4037436 | 2019-06-15 04:15:24 | 2019-06-15 19:08:48 | 2019-06-15 19:44:48 | 0:36:00 | 0:20:34 | 0:15:26 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037437 | 2019-06-15 04:15:25 | 2019-06-15 19:20:52 | 2019-06-15 19:54:51 | 0:33:59 | 0:20:19 | 0:13:40 | mira | master | ubuntu | 16.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rgw_snaps.yaml} | 2 | |
Failure Reason: "2019-06-15T19:37:03.665821+0000 mon.b (mon.0) 186 : cluster [WRN] Health check failed: 5 osds down (OSD_DOWN)" in cluster log
fail | 4037438 | 2019-06-15 04:15:26 | 2019-06-15 19:26:52 | 2019-06-15 20:06:51 | 0:39:59 | 0:20:48 | 0:19:11 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037439 | 2019-06-15 04:15:27 | 2019-06-15 19:32:50 | 2019-06-15 20:10:49 | 0:37:59 | 0:21:01 | 0:16:58 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
pass | 4037440 | 2019-06-15 04:15:27 | 2019-06-15 19:37:44 | 2019-06-15 20:01:44 | 0:24:00 | 0:05:59 | 0:18:01 | mira | master | ubuntu | 18.04 | rados/multimon/{clusters/6.yaml msgr-failures/few.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_no_skews.yaml} | 2 | |
fail | 4037441 | 2019-06-15 04:15:28 | 2019-06-15 19:44:50 | 2019-06-15 20:22:50 | 0:38:00 | 0:19:39 | 0:18:21 | mira | master | ubuntu | 16.04 | rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037442 | 2019-06-15 04:15:29 | 2019-06-15 19:54:49 | 2019-06-15 20:28:49 | 0:34:00 | 0:20:27 | 0:13:33 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037443 | 2019-06-15 04:15:30 | 2019-06-15 19:54:53 | 2019-06-15 20:34:52 | 0:39:59 | 0:20:32 | 0:19:27 | mira | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 | |||
Failure Reason: "2019-06-15T20:16:22.532487+0000 mon.a (mon.0) 254 : cluster [WRN] Health check failed: 6 osds down (OSD_DOWN)" in cluster log
fail | 4037444 | 2019-06-15 04:15:31 | 2019-06-15 20:01:13 | 2019-06-15 20:55:13 | 0:54:00 | 0:45:15 | 0:08:45 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
Failure Reason: Command failed on mira026 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037445 | 2019-06-15 04:15:32 | 2019-06-15 20:01:47 | 2019-06-15 20:35:46 | 0:33:59 | 0:20:15 | 0:13:44 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037446 | 2019-06-15 04:15:33 | 2019-06-15 20:05:24 | 2019-06-15 22:57:25 | 2:52:01 | 2:30:56 | 0:21:05 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037447 | 2019-06-15 04:15:34 | 2019-06-15 20:07:06 | 2019-06-15 21:17:07 | 1:10:01 | 0:56:18 | 0:13:43 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037449 | 2019-06-15 04:15:35 | 2019-06-15 20:10:51 | 2019-06-15 21:00:51 | 0:50:00 | 0:32:40 | 0:17:20 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037451 | 2019-06-15 04:15:36 | 2019-06-15 20:23:05 | 2019-06-15 20:57:05 | 0:34:00 | 0:20:08 | 0:13:52 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037453 | 2019-06-15 04:15:37 | 2019-06-15 20:28:58 | 2019-06-15 21:12:58 | 0:44:00 | 0:28:13 | 0:15:47 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037455 | 2019-06-15 04:15:37 | 2019-06-15 20:35:02 | 2019-06-15 20:55:01 | 0:19:59 | 0:06:59 | 0:13:00 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason: Command failed on mira117 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037457 | 2019-06-15 04:15:38 | 2019-06-15 20:35:48 | 2019-06-15 21:07:47 | 0:31:59 | 0:19:40 | 0:12:19 | mira | master | ubuntu | 18.04 | rados/singleton/{all/radostool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: "2019-06-15T20:50:32.025975+0000 mon.a (mon.0) 76 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 4037460 | 2019-06-15 04:15:39 | 2019-06-15 20:55:02 | 2019-06-15 21:37:02 | 0:42:00 | 0:28:19 | 0:13:41 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037462 | 2019-06-15 04:15:40 | 2019-06-15 20:55:14 | 2019-06-15 21:35:14 | 0:40:00 | 0:26:39 | 0:13:21 | mira | master | centos | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037464 | 2019-06-15 04:15:41 | 2019-06-15 20:57:06 | 2019-06-15 21:27:06 | 0:30:00 | 0:19:03 | 0:10:57 | mira | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: "2019-06-15T21:11:08.999949+0000 mon.a (mon.0) 61 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive (PG_AVAILABILITY)" in cluster log
fail | 4037466 | 2019-06-15 04:15:42 | 2019-06-15 21:01:05 | 2019-06-16 00:21:07 | 3:20:02 | 3:08:14 | 0:11:48 | mira | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/mon.yaml} | 1 | |
Failure Reason: Command failed (workunit test mon/mon-bind.sh) on mira038 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4e3c8e8c47461e1654ec14767afa2f9385ed9e32 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-bind.sh'
fail | 4037468 | 2019-06-15 04:15:43 | 2019-06-15 21:07:49 | 2019-06-15 22:17:49 | 1:10:00 | 0:54:45 | 0:15:15 | mira | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} | 4 | |
Failure Reason: failed to become clean before timeout expired
fail | 4037471 | 2019-06-15 04:15:44 | 2019-06-15 21:13:00 | 2019-06-15 23:59:01 | 2:46:01 | 2:25:38 | 0:20:23 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037473 | 2019-06-15 04:15:45 | 2019-06-15 21:17:08 | 2019-06-15 21:59:08 | 0:42:00 | 0:28:13 | 0:13:47 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037475 | 2019-06-15 04:15:45 | 2019-06-15 21:19:31 | 2019-06-15 21:53:31 | 0:34:00 | 0:20:35 | 0:13:25 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037477 | 2019-06-15 04:15:46 | 2019-06-15 21:27:20 | 2019-06-15 22:01:20 | 0:34:00 | 0:20:46 | 0:13:14 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037479 | 2019-06-15 04:15:47 | 2019-06-15 21:35:29 | 2019-06-15 22:09:28 | 0:33:59 | 0:20:22 | 0:13:37 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037482 | 2019-06-15 04:15:48 | 2019-06-15 21:37:12 | 2019-06-15 22:11:11 | 0:33:59 | 0:20:13 | 0:13:46 | mira | master | ubuntu | 18.04 | rados/singleton/{all/random-eio.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
Failure Reason: "2019-06-15T21:53:18.526145+0000 mon.a (mon.0) 176 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log
fail | 4037484 | 2019-06-15 04:15:49 | 2019-06-15 21:53:33 | 2019-06-15 22:13:32 | 0:19:59 | 0:07:21 | 0:12:38 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/scrub_test.yaml} | 2 | |
Failure Reason: Command failed on mira030 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037486 | 2019-06-15 04:15:50 | 2019-06-15 21:59:10 | 2019-06-15 22:23:09 | 0:23:59 | 0:11:47 | 0:12:12 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/fio_4K_rand_read.yaml} | 1 | |
Failure Reason: Command failed on mira034 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037488 | 2019-06-15 04:15:50 | 2019-06-15 22:01:21 | 2019-06-15 22:47:21 | 0:46:00 | 0:22:52 | 0:23:08 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037490 | 2019-06-15 04:15:51 | 2019-06-15 22:09:40 | 2019-06-15 23:01:40 | 0:52:00 | 0:42:17 | 0:09:43 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037492 | 2019-06-15 04:15:52 | 2019-06-15 22:09:45 | 2019-06-15 23:03:44 | 0:53:59 | 0:43:08 | 0:10:51 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037494 | 2019-06-15 04:15:53 | 2019-06-15 22:11:19 | 2019-06-15 23:07:19 | 0:56:00 | 0:43:47 | 0:12:13 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037496 | 2019-06-15 04:15:54 | 2019-06-15 22:13:33 | 2019-06-15 22:51:33 | 0:38:00 | 0:20:37 | 0:17:23 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037497 | 2019-06-15 04:15:55 | 2019-06-15 22:18:04 | 2019-06-15 23:18:04 | 1:00:00 | 0:43:34 | 0:16:26 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037498 | 2019-06-15 04:15:56 | 2019-06-15 22:23:24 | 2019-06-15 23:29:24 | 1:06:00 | 0:27:59 | 0:38:01 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037499 | 2019-06-15 04:15:56 | 2019-06-15 22:47:37 | 2019-06-15 23:27:36 | 0:39:59 | 0:26:48 | 0:13:11 | mira | master | centos | 7.6 | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037500 | 2019-06-15 04:15:57 | 2019-06-15 22:51:35 | 2019-06-15 23:27:34 | 0:35:59 | 0:21:13 | 0:14:46 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
pass | 4037501 | 2019-06-15 04:15:58 | 2019-06-15 22:57:27 | 2019-06-15 23:13:26 | 0:15:59 | 0:05:33 | 0:10:26 | mira | master | ubuntu | 16.04 | rados/objectstore/{backends/fusestore.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
fail | 4037502 | 2019-06-15 04:15:59 | 2019-06-15 23:01:41 | 2019-06-15 23:35:41 | 0:34:00 | 0:20:35 | 0:13:25 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037503 | 2019-06-15 04:16:00 | 2019-06-15 23:03:46 | 2019-06-15 23:37:45 | 0:33:59 | 0:20:03 | 0:13:56 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037504 | 2019-06-15 04:16:01 | 2019-06-15 23:07:32 | 2019-06-15 23:59:32 | 0:52:00 | 0:28:16 | 0:23:44 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037505 | 2019-06-15 04:16:01 | 2019-06-15 23:13:28 | 2019-06-16 01:53:34 | 2:40:06 | 2:22:51 | 0:17:15 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: "2019-06-16T01:37:50.792863+0000 mon.a (mon.0) 99 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 4037506 | 2019-06-15 04:16:02 | 2019-06-15 23:18:19 | 2019-06-15 23:38:18 | 0:19:59 | 0:06:18 | 0:13:41 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/failover.yaml} | 2 | |
Failure Reason: Command failed on mira034 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037507 | 2019-06-15 04:16:03 | 2019-06-15 23:27:36 | 2019-06-16 00:01:35 | 0:33:59 | 0:20:54 | 0:13:05 | mira | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason: "2019-06-15T23:44:41.875957+0000 mon.f (mon.0) 139 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log
fail | 4037508 | 2019-06-15 04:16:04 | 2019-06-15 23:27:37 | 2019-06-16 00:13:37 | 0:46:00 | 0:37:33 | 0:08:27 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037509 | 2019-06-15 04:16:05 | 2019-06-15 23:29:26 | 2019-06-16 00:35:25 | 1:05:59 | 0:33:16 | 0:32:43 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037510 | 2019-06-15 04:16:05 | 2019-06-15 23:35:56 | 2019-06-16 02:19:57 | 2:44:01 | 2:23:13 | 0:20:48 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037511 | 2019-06-15 04:16:06 | 2019-06-15 23:37:47 | 2019-06-16 00:09:47 | 0:32:00 | 0:19:54 | 0:12:06 | mira | master | ubuntu | 16.04 | rados/singleton/{all/recovery-preemption.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037512 | 2019-06-15 04:16:07 | 2019-06-15 23:38:19 | 2019-06-16 00:24:19 | 0:46:00 | 0:35:13 | 0:10:47 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037513 | 2019-06-15 04:16:08 | 2019-06-15 23:59:16 | 2019-06-16 00:15:15 | 0:15:59 | 0:05:12 | 0:10:47 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4K_rand_rw.yaml} | 1 | |
Failure Reason: Command failed on mira046 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037514 | 2019-06-15 04:16:09 | 2019-06-15 23:59:33 | 2019-06-16 00:33:32 | 0:33:59 | 0:20:24 | 0:13:35 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037515 | 2019-06-15 04:16:10 | 2019-06-16 00:01:53 | 2019-06-16 02:45:54 | 2:44:01 | 2:24:21 | 0:19:40 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037516 | 2019-06-15 04:16:10 | 2019-06-16 00:09:49 | 2019-06-16 00:57:49 | 0:48:00 | 0:36:46 | 0:11:14 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037517 | 2019-06-15 04:16:11 | 2019-06-16 00:13:46 | 2019-06-16 00:57:46 | 0:44:00 | 0:29:04 | 0:14:56 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_api_tests.yaml} | 2 | |
Failure Reason: "2019-06-16T00:38:38.269248+0000 mon.b (mon.0) 261 : cluster [WRN] Health check failed: 5 osds down (OSD_DOWN)" in cluster log
fail | 4037518 | 2019-06-15 04:16:12 | 2019-06-16 00:15:29 | 2019-06-16 00:59:29 | 0:44:00 | 0:28:07 | 0:15:53 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037519 | 2019-06-15 04:16:13 | 2019-06-16 00:21:09 | 2019-06-16 00:55:08 | 0:33:59 | 0:20:42 | 0:13:17 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037520 | 2019-06-15 04:16:14 | 2019-06-16 00:24:21 | 2019-06-16 01:10:20 | 0:45:59 | 0:35:46 | 0:10:13 | mira | master | rhel | 7.6 | rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037521 | 2019-06-15 04:16:15 | 2019-06-16 00:33:36 | 2019-06-16 01:15:35 | 0:41:59 | 0:28:19 | 0:13:40 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037522 | 2019-06-15 04:16:15 | 2019-06-16 00:35:40 | 2019-06-16 01:27:40 | 0:52:00 | 0:28:22 | 0:23:38 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037523 | 2019-06-15 04:16:16 | 2019-06-16 00:55:11 | 2019-06-16 01:35:11 | 0:40:00 | 0:23:08 | 0:16:52 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037524 | 2019-06-15 04:16:17 | 2019-06-16 00:57:59 | 2019-06-16 01:41:58 | 0:43:59 | 0:28:07 | 0:15:52 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
pass | 4037525 | 2019-06-15 04:16:18 | 2019-06-16 00:57:59 | 2019-06-16 01:25:58 | 0:27:59 | 0:13:22 | 0:14:37 | mira | master | centos | 7.6 | rados/multimon/{clusters/9.yaml msgr-failures/many.yaml msgr/async.yaml no_pools.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
fail | 4037526 | 2019-06-15 04:16:18 | 2019-06-16 00:59:43 | 2019-06-16 01:43:43 | 0:44:00 | 0:20:09 | 0:23:51 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037527 | 2019-06-15 04:16:19 | 2019-06-16 01:10:22 | 2019-06-16 02:20:22 | 1:10:00 | 0:44:36 | 0:25:24 | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | |
Failure Reason: saw valgrind issues
fail | 4037528 | 2019-06-15 04:16:20 | 2019-06-16 01:15:51 | 2019-06-16 03:45:58 | 2:30:07 | 2:13:58 | 0:16:09 | mira | master | centos | 7.6 | rados/singleton/{all/test-crash.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037529 | 2019-06-15 04:16:21 | 2019-06-16 01:26:01 | 2019-06-16 02:10:01 | 0:44:00 | 0:28:10 | 0:15:50 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037530 | 2019-06-15 04:16:22 | 2019-06-16 01:27:42 | 2019-06-16 01:53:41 | 0:25:59 | 0:12:25 | 0:13:34 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/fio_4M_rand_read.yaml} | 1 | |
Failure Reason: Command failed on mira018 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
pass | 4037531 | 2019-06-15 04:16:22 | 2019-06-16 01:35:26 | 2019-06-16 02:01:25 | 0:25:59 | 0:16:42 | 0:09:17 | mira | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
fail | 4037532 | 2019-06-15 04:16:23 | 2019-06-16 01:42:02 | 2019-06-16 02:18:01 | 0:35:59 | 0:20:22 | 0:15:37 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037533 | 2019-06-15 04:16:24 | 2019-06-16 01:43:57 | 2019-06-16 02:17:57 | 0:34:00 | 0:20:32 | 0:13:28 | mira | master | ubuntu | 16.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037534 | 2019-06-15 04:16:25 | 2019-06-16 01:53:54 | 2019-06-16 02:47:54 | 0:54:00 | 0:28:20 | 0:25:40 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037535 | 2019-06-15 04:16:26 | 2019-06-16 01:53:54 | 2019-06-16 02:13:59 | 0:20:05 | 0:05:41 | 0:14:24 | mira | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/insights.yaml} | 2 | |
Failure Reason: Command failed on mira026 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037536 | 2019-06-15 04:16:26 | 2019-06-16 02:01:42 | 2019-06-16 02:47:41 | 0:45:59 | 0:35:28 | 0:10:31 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037537 | 2019-06-15 04:16:27 | 2019-06-16 02:10:09 | 2019-06-16 02:46:08 | 0:35:59 | 0:20:26 | 0:15:33 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037538 | 2019-06-15 04:16:28 | 2019-06-16 02:14:12 | 2019-06-16 02:46:11 | 0:31:59 | 0:19:10 | 0:12:49 | mira | master | ubuntu | 18.04 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: "2019-06-16T02:29:13.409137+0000 mon.a (mon.0) 76 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive (PG_AVAILABILITY)" in cluster log
fail | 4037539 | 2019-06-15 04:16:29 | 2019-06-16 02:18:01 | 2019-06-16 03:09:59 | 0:51:58 | 0:32:50 | 0:19:08 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
pass | 4037540 | 2019-06-15 04:16:30 | 2019-06-16 02:18:03 | 2019-06-16 02:44:02 | 0:25:59 | 0:18:56 | 0:07:03 | mira | master | rhel | 7.6 | rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
dead | 4037541 | 2019-06-15 04:16:30 | 2019-06-16 02:20:05 | 2019-06-16 14:22:26 | 12:02:21 | | | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/osd.yaml} | 1 | |
fail | 4037542 | 2019-06-15 04:16:31 | 2019-06-16 02:20:24 | 2019-06-16 02:54:28 | 0:34:04 | 0:20:22 | 0:13:42 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037543 | 2019-06-15 04:16:32 | 2019-06-16 02:44:20 | 2019-06-16 03:32:20 | 0:48:00 | 0:36:55 | 0:11:05 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037544 | 2019-06-15 04:16:33 | 2019-06-16 02:46:10 | 2019-06-16 03:20:09 | 0:33:59 | 0:20:12 | 0:13:47 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_cls_all.yaml} | 2 | |
Failure Reason: "2019-06-16T03:01:35.143143+0000 mon.b (mon.0) 267 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log
fail | 4037545 | 2019-06-15 04:16:33 | 2019-06-16 02:46:10 | 2019-06-16 03:30:09 | 0:43:59 | 0:28:41 | 0:15:18 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037546 | 2019-06-15 04:16:34 | 2019-06-16 02:46:13 | 2019-06-16 03:22:12 | 0:35:59 | 0:21:03 | 0:14:56 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037547 | 2019-06-15 04:16:35 | 2019-06-16 02:47:57 | 2019-06-16 03:39:57 | 0:52:00 | 0:28:37 | 0:23:23 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037548 | 2019-06-15 04:16:36 | 2019-06-16 02:47:57 | 2019-06-16 03:21:56 | 0:33:59 | 0:19:59 | 0:14:00 | mira | master | ubuntu | 16.04 | rados/singleton/{all/thrash-backfill-full.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037549 | 2019-06-15 04:16:37 | 2019-06-16 02:54:30 | 2019-06-16 03:36:29 | 0:41:59 | 0:27:57 | 0:14:02 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037550 | 2019-06-15 04:16:38 | 2019-06-16 03:10:09 | 2019-06-16 03:26:08 | 0:15:59 | 0:05:14 | 0:10:45 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_rw.yaml} | 1 | |
Failure Reason: Command failed on mira034 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037551 | 2019-06-15 04:16:38 | 2019-06-16 03:20:12 | 2019-06-16 03:54:11 | 0:33:59 | 0:20:28 | 0:13:31 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037552 | 2019-06-15 04:16:39 | 2019-06-16 03:21:58 | 2019-06-16 03:57:58 | 0:36:00 | 0:21:22 | 0:14:38 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037553 | 2019-06-15 04:16:40 | 2019-06-16 03:22:14 | 2019-06-16 03:56:13 | 0:33:59 | 0:19:51 | 0:14:08 | mira | master | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/force-sync-many.yaml workloads/pool-create-delete.yaml} | 2 | |
Failure Reason:
"2019-06-16T03:38:06.083944+0000 mon.b (mon.0) 197 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log |
fail | 4037554 | 2019-06-15 04:16:41 | 2019-06-16 03:26:23 | 2019-06-16 04:00:22 | 0:33:59 | 0:20:43 | 0:13:16 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037555 | 2019-06-15 04:16:42 | 2019-06-16 03:30:25 | 2019-06-16 04:16:24 | 0:45:59 | 0:29:22 | 0:16:37 | mira | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037556 | 2019-06-15 04:16:42 | 2019-06-16 03:32:35 | 2019-06-16 04:20:35 | 0:48:00 | 0:37:10 | 0:10:50 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037557 | 2019-06-15 04:16:43 | 2019-06-16 03:36:31 | 2019-06-16 04:26:31 | 0:50:00 | 0:38:29 | 0:11:31 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037558 | 2019-06-15 04:16:44 | 2019-06-16 03:39:59 | 2019-06-16 04:53:59 | 1:14:00 | 0:23:03 | 0:50:57 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037559 | 2019-06-15 04:16:45 | 2019-06-16 03:46:14 | 2019-06-16 04:30:14 | 0:44:00 | 0:35:27 | 0:08:33 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037560 | 2019-06-15 04:16:46 | 2019-06-16 03:54:25 | 2019-06-16 04:50:25 | 0:56:00 | 0:28:22 | 0:27:38 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037561 | 2019-06-15 04:16:46 | 2019-06-16 03:56:23 | 2019-06-16 04:38:23 | 0:42:00 | 0:27:43 | 0:14:17 | mira | master | centos | 7.6 | rados/singleton/{all/thrash-eio.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037562 | 2019-06-15 04:16:47 | 2019-06-16 03:58:13 | 2019-06-16 04:34:13 | 0:36:00 | 0:20:23 | 0:15:37 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037563 | 2019-06-15 04:16:48 | 2019-06-16 04:00:24 | 2019-06-16 04:34:23 | 0:33:59 | 0:20:10 | 0:13:49 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037564 | 2019-06-15 04:16:49 | 2019-06-16 04:16:27 | 2019-06-16 04:36:26 | 0:19:59 | 0:05:45 | 0:14:14 | mira | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/module_selftest.yaml} | 2 | |
Failure Reason:
"2019-06-16T04:34:31.773662+0000 mon.a (mon.0) 68 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)" in cluster log |
fail | 4037565 | 2019-06-15 04:16:50 | 2019-06-16 04:20:37 | 2019-06-16 05:00:37 | 0:40:00 | 0:20:31 | 0:19:29 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037566 | 2019-06-15 04:16:50 | 2019-06-16 04:26:34 | 2019-06-16 05:02:33 | 0:35:59 | 0:21:56 | 0:14:03 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037567 | 2019-06-15 04:16:51 | 2019-06-16 04:30:15 | 2019-06-16 05:12:20 | 0:42:05 | 0:28:05 | 0:14:00 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037568 | 2019-06-15 04:16:52 | 2019-06-16 04:34:24 | 2019-06-16 05:24:24 | 0:50:00 | 0:27:35 | 0:22:25 | mira | master | centos | 7.6 | rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037569 | 2019-06-15 04:16:53 | 2019-06-16 04:34:25 | 2019-06-16 05:16:29 | 0:42:04 | 0:28:09 | 0:13:55 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_python.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037570 | 2019-06-15 04:16:54 | 2019-06-16 04:36:41 | 2019-06-16 05:18:41 | 0:42:00 | 0:33:16 | 0:08:44 | mira | master | rhel | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037571 | 2019-06-15 04:16:55 | 2019-06-16 04:38:25 | 2019-06-16 05:12:24 | 0:33:59 | 0:20:15 | 0:13:44 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037572 | 2019-06-15 04:16:55 | 2019-06-16 04:50:34 | 2019-06-16 05:06:33 | 0:15:59 | 0:05:27 | 0:10:32 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/fio_4M_rand_write.yaml} | 1 | |
Failure Reason:
Command failed on mira018 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1' |
fail | 4037573 | 2019-06-15 04:16:56 | 2019-06-16 04:54:14 | 2019-06-16 05:28:13 | 0:33:59 | 0:20:12 | 0:13:47 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037574 | 2019-06-15 04:16:57 | 2019-06-16 05:00:52 | 2019-06-16 05:40:52 | 0:40:00 | 0:23:24 | 0:16:36 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037575 | 2019-06-15 04:16:58 | 2019-06-16 05:02:35 | 2019-06-16 05:46:34 | 0:43:59 | 0:28:52 | 0:15:07 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037576 | 2019-06-15 04:16:59 | 2019-06-16 05:06:35 | 2019-06-16 07:50:37 | 2:44:02 | 2:23:56 | 0:20:06 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037577 | 2019-06-15 04:16:59 | 2019-06-16 05:12:22 | 2019-06-16 05:46:21 | 0:33:59 | 0:20:10 | 0:13:49 | mira | master | ubuntu | 16.04 | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 2 | |
Failure Reason:
"2019-06-16T05:28:12.513687+0000 mon.a (mon.0) 189 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log |
fail | 4037578 | 2019-06-15 04:17:00 | 2019-06-16 05:12:26 | 2019-06-16 05:58:25 | 0:45:59 | 0:36:02 | 0:09:57 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037579 | 2019-06-15 04:17:01 | 2019-06-16 05:16:31 | 2019-06-16 05:46:30 | 0:29:59 | 0:19:00 | 0:10:59 | mira | master | ubuntu | 16.04 | rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason:
"2019-06-16T05:31:53.052537+0000 mon.a (mon.0) 63 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive (PG_AVAILABILITY)" in cluster log |
fail | 4037580 | 2019-06-15 04:17:02 | 2019-06-16 05:18:43 | 2019-06-16 06:14:42 | 0:55:59 | 0:29:01 | 0:26:58 | mira | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037581 | 2019-06-15 04:17:02 | 2019-06-16 05:24:28 | 2019-06-16 05:58:28 | 0:34:00 | 0:20:38 | 0:13:22 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037582 | 2019-06-15 04:17:03 | 2019-06-16 05:28:21 | 2019-06-16 06:16:21 | 0:48:00 | 0:26:19 | 0:21:41 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason:
"2019-06-16T05:59:26.474497+0000 mon.a (mon.0) 63 : cluster [WRN] Health check failed: Reduced data availability: 8 pgs inactive (PG_AVAILABILITY)" in cluster log |
fail | 4037583 | 2019-06-15 04:17:04 | 2019-06-16 05:40:54 | 2019-06-16 06:14:54 | 0:34:00 | 0:20:20 | 0:13:40 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
pass | 4037584 | 2019-06-15 04:17:05 | 2019-06-16 05:46:36 | 2019-06-16 08:26:37 | 2:40:01 | 2:21:15 | 0:18:46 | mira | master | rhel | 7.6 | rados/multimon/{clusters/21.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_recovery.yaml} | 3 | |
fail | 4037585 | 2019-06-15 04:17:06 | 2019-06-16 05:46:36 | 2019-06-16 06:20:35 | 0:33:59 | 0:20:18 | 0:13:41 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037586 | 2019-06-15 04:17:06 | 2019-06-16 05:46:36 | 2019-06-16 06:20:35 | 0:33:59 | 0:20:31 | 0:13:28 | mira | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 | |||
Failure Reason:
"2019-06-16T06:03:05.653826+0000 mon.b (mon.0) 208 : cluster [WRN] Health check failed: 5 osds down (OSD_DOWN)" in cluster log |
fail | 4037587 | 2019-06-15 04:17:07 | 2019-06-16 05:58:27 | 2019-06-16 06:34:27 | 0:36:00 | 0:21:17 | 0:14:43 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037588 | 2019-06-15 04:17:08 | 2019-06-16 05:58:29 | 2019-06-16 06:28:28 | 0:29:59 | 0:19:21 | 0:10:38 | mira | master | ubuntu | 18.04 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037589 | 2019-06-15 04:17:09 | 2019-06-16 06:14:51 | 2019-06-16 06:54:51 | 0:40:00 | 0:23:52 | 0:16:08 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037590 | 2019-06-15 04:17:10 | 2019-06-16 06:14:55 | 2019-06-16 06:56:55 | 0:42:00 | 0:28:19 | 0:13:41 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037591 | 2019-06-15 04:17:10 | 2019-06-16 06:16:31 | 2019-06-16 09:02:33 | 2:46:02 | 2:24:57 | 0:21:05 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037592 | 2019-06-15 04:17:11 | 2019-06-16 06:20:37 | 2019-06-16 06:48:36 | 0:27:59 | 0:13:10 | 0:14:49 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} | 2 | |
Failure Reason:
Command failed on mira082 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1' |
fail | 4037593 | 2019-06-15 04:17:12 | 2019-06-16 06:20:37 | 2019-06-16 06:44:37 | 0:24:00 | 0:11:48 | 0:12:12 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4K_rand_read.yaml} | 1 | |
Failure Reason:
Command failed on mira111 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1' |
fail | 4037594 | 2019-06-15 04:17:13 | 2019-06-16 06:28:40 | 2019-06-16 07:04:40 | 0:36:00 | 0:21:26 | 0:14:34 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037595 | 2019-06-15 04:17:14 | 2019-06-16 06:34:30 | 2019-06-16 09:18:37 | 2:44:07 | 2:23:35 | 0:20:32 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037596 | 2019-06-15 04:17:14 | 2019-06-16 06:44:42 | 2019-06-16 07:22:41 | 0:37:59 | 0:21:59 | 0:16:00 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_stress_watch.yaml} | 2 | |
Failure Reason:
"2019-06-16T07:05:11.452256+0000 mon.b (mon.0) 200 : cluster [WRN] Health check failed: 6 osds down (OSD_DOWN)" in cluster log |
fail | 4037597 | 2019-06-15 04:17:15 | 2019-06-16 06:48:51 | 2019-06-16 07:28:51 | 0:40:00 | 0:20:39 | 0:19:21 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037598 | 2019-06-15 04:17:16 | 2019-06-16 06:54:53 | 2019-06-16 07:24:52 | 0:29:59 | 0:18:56 | 0:11:03 | mira | master | ubuntu | 16.04 | rados/singleton/{all/admin-socket.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason:
"2019-06-16T07:07:51.513943+0000 mon.a (mon.0) 66 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |
fail | 4037599 | 2019-06-15 04:17:17 | 2019-06-16 06:56:56 | 2019-06-16 09:44:58 | 2:48:02 | 2:25:50 | 0:22:12 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
dead | 4037600 | 2019-06-15 04:17:17 | 2019-06-16 07:04:42 | 2019-06-16 19:07:04 | 12:02:22 | mira | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/scrub.yaml} | 1 | |||
fail | 4037601 | 2019-06-15 04:17:18 | 2019-06-16 07:22:43 | 2019-06-16 07:56:43 | 0:34:00 | 0:20:43 | 0:13:17 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037602 | 2019-06-15 04:17:19 | 2019-06-16 07:25:08 | 2019-06-16 07:59:07 | 0:33:59 | 0:21:08 | 0:12:51 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037603 | 2019-06-15 04:17:20 | 2019-06-16 07:28:52 | 2019-06-16 08:20:52 | 0:52:00 | 0:28:07 | 0:23:53 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037604 | 2019-06-15 04:17:21 | 2019-06-16 07:50:51 | 2019-06-16 08:40:51 | 0:50:00 | 0:28:16 | 0:21:44 | mira | master | centos | 7.6 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/many.yaml workloads/rados_5925.yaml} | 2 | |
Failure Reason:
"2019-06-16T08:22:35.523203+0000 mon.f (mon.0) 126 : cluster [WRN] Health check failed: 5 osds down (OSD_DOWN)" in cluster log |
fail | 4037605 | 2019-06-15 04:17:21 | 2019-06-16 07:56:45 | 2019-06-16 08:42:44 | 0:45:59 | 0:36:11 | 0:09:48 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean. |
fail | 4037606 | 2019-06-15 04:17:22 | 2019-06-16 07:59:09 | 2019-06-16 09:01:09 | 1:02:00 | 0:23:32 | 0:38:28 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037607 | 2019-06-15 04:17:23 | 2019-06-16 08:21:00 | 2019-06-16 09:00:59 | 0:39:59 | 0:20:19 | 0:19:40 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037608 | 2019-06-15 04:17:24 | 2019-06-16 08:26:45 | 2019-06-16 09:10:45 | 0:44:00 | 0:33:58 | 0:10:02 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037609 | 2019-06-15 04:17:25 | 2019-06-16 08:41:01 | 2019-06-16 09:31:00 | 0:49:59 | 0:27:11 | 0:22:48 | mira | master | centos | 7.6 | rados/singleton/{all/deduptool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: "2019-06-16T09:11:49.384867+0000 mon.a (mon.0) 90 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log
fail | 4037610 | 2019-06-15 04:17:26 | 2019-06-16 08:43:01 | 2019-06-16 09:29:00 | 0:45:59 | 0:34:40 | 0:11:19 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037611 | 2019-06-15 04:17:26 | 2019-06-16 09:01:01 | 2019-06-16 09:35:01 | 0:34:00 | 0:20:52 | 0:13:08 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037612 | 2019-06-15 04:17:27 | 2019-06-16 09:01:10 | 2019-06-16 09:17:10 | 0:16:00 | 0:05:26 | 0:10:34 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/radosbench_4K_seq_read.yaml} | 1 | |
Failure Reason: Command failed on mira046 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037613 | 2019-06-15 04:17:28 | 2019-06-16 09:02:35 | 2019-06-16 09:38:34 | 0:35:59 | 0:21:02 | 0:14:57 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037614 | 2019-06-15 04:17:29 | 2019-06-16 09:10:46 | 2019-06-16 10:04:46 | 0:54:00 | 0:28:07 | 0:25:53 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037615 | 2019-06-15 04:17:30 | 2019-06-16 09:17:12 | 2019-06-16 12:01:13 | 2:44:01 | 2:23:54 | 0:20:07 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037616 | 2019-06-15 04:17:31 | 2019-06-16 09:18:51 | 2019-06-16 10:08:51 | 0:50:00 | 0:31:16 | 0:18:44 | mira | master | centos | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037617 | 2019-06-15 04:17:31 | 2019-06-16 09:29:03 | 2019-06-16 10:03:02 | 0:33:59 | 0:20:04 | 0:13:55 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037618 | 2019-06-15 04:17:32 | 2019-06-16 09:31:16 | 2019-06-16 10:01:15 | 0:29:59 | 0:19:14 | 0:10:45 | mira | master | ubuntu | 18.04 | rados/singleton/{all/divergent_priors.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
pass | 4037619 | 2019-06-15 04:17:33 | 2019-06-16 09:35:04 | 2019-06-16 12:17:05 | 2:42:01 | 2:24:45 | 0:17:16 | mira | master | rhel | 7.6 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4037620 | 2019-06-15 04:17:34 | 2019-06-16 09:38:51 | 2019-06-16 10:22:50 | 0:43:59 | 0:28:15 | 0:15:44 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037621 | 2019-06-15 04:17:35 | 2019-06-16 09:45:56 | 2019-06-16 10:15:55 | 0:29:59 | 0:20:05 | 0:09:54 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/progress.yaml} | 2 | |
Failure Reason: Command failed on mira117 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037622 | 2019-06-15 04:17:35 | 2019-06-16 10:01:19 | 2019-06-16 10:53:19 | 0:52:00 | 0:28:17 | 0:23:43 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037623 | 2019-06-15 04:17:36 | 2019-06-16 10:03:18 | 2019-06-16 10:21:17 | 0:17:59 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |||
Failure Reason: Could not reconnect to ubuntu@mira111.front.sepia.ceph.com
fail | 4037624 | 2019-06-15 04:17:37 | 2019-06-16 10:05:01 | 2019-06-16 10:23:00 | 0:17:59 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_striper.yaml} | 2 | |||
Failure Reason: Could not reconnect to ubuntu@mira072.front.sepia.ceph.com
fail | 4037625 | 2019-06-15 04:17:38 | 2019-06-16 10:09:05 | 2019-06-16 12:53:07 | 2:44:02 | 2:23:52 | 0:20:10 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037626 | 2019-06-15 04:17:39 | 2019-06-16 10:15:57 | 2019-06-16 12:59:59 | 2:44:02 | 2:23:44 | 0:20:18 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037627 | 2019-06-15 04:17:39 | 2019-06-16 10:21:31 | 2019-06-16 11:03:31 | 0:42:00 | 0:28:01 | 0:13:59 | mira | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037628 | 2019-06-15 04:17:40 | 2019-06-16 10:23:05 | 2019-06-16 10:57:04 | 0:33:59 | 0:20:37 | 0:13:22 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037629 | 2019-06-15 04:17:41 | 2019-06-16 10:23:05 | 2019-06-16 11:13:05 | 0:50:00 | 0:26:32 | 0:23:28 | mira | master | centos | 7.6 | rados/singleton/{all/divergent_priors2.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037630 | 2019-06-15 04:17:42 | 2019-06-16 10:53:21 | 2019-06-16 11:27:21 | 0:34:00 | 0:20:20 | 0:13:40 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037631 | 2019-06-15 04:17:43 | 2019-06-16 10:57:21 | 2019-06-16 11:23:20 | 0:25:59 | 0:12:02 | 0:13:57 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4M_rand_read.yaml} | 1 | |
Failure Reason: Command failed on mira071 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037632 | 2019-06-15 04:17:43 | 2019-06-16 11:03:33 | 2019-06-16 11:39:33 | 0:36:00 | 0:20:14 | 0:15:46 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037633 | 2019-06-15 04:17:44 | 2019-06-16 11:13:20 | 2019-06-16 11:43:20 | 0:30:00 | 0:19:33 | 0:10:27 | mira | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/version-number-sanity.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: "2019-06-16T11:27:19.394622+0000 mon.a (mon.0) 99 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 4037634 | 2019-06-15 04:17:45 | 2019-06-16 11:23:23 | 2019-06-16 12:09:22 | 0:45:59 | 0:35:19 | 0:10:40 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037635 | 2019-06-15 04:17:46 | 2019-06-16 11:27:24 | 2019-06-16 14:13:25 | 2:46:01 | 2:25:44 | 0:20:17 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037636 | 2019-06-15 04:17:46 | 2019-06-16 11:39:35 | 2019-06-16 12:11:35 | 0:32:00 | 0:20:46 | 0:11:14 | mira | master | ubuntu | 16.04 | rados/singleton/{all/dump-stuck.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037637 | 2019-06-15 04:17:47 | 2019-06-16 11:43:21 | 2019-06-16 12:23:21 | 0:40:00 | 0:22:44 | 0:17:16 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037638 | 2019-06-15 04:17:48 | 2019-06-16 12:01:23 | 2019-06-16 12:37:23 | 0:36:00 | 0:22:11 | 0:13:49 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037639 | 2019-06-15 04:17:49 | 2019-06-16 12:09:38 | 2019-06-16 13:07:38 | 0:58:00 | 0:33:19 | 0:24:41 | mira | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037640 | 2019-06-15 04:17:50 | 2019-06-16 12:11:51 | 2019-06-16 12:57:50 | 0:45:59 | 0:28:30 | 0:17:29 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
pass | 4037641 | 2019-06-15 04:17:50 | 2019-06-16 12:17:20 | 2019-06-16 12:35:19 | 0:17:59 | 0:06:13 | 0:11:46 | mira | master | ubuntu | 18.04 | rados/multimon/{clusters/3.yaml msgr-failures/many.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_no_skews.yaml} | 2 | |
fail | 4037642 | 2019-06-15 04:17:51 | 2019-06-16 12:23:34 | 2019-06-16 12:57:33 | 0:33:59 | 0:20:22 | 0:13:37 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037643 | 2019-06-15 04:17:52 | 2019-06-16 12:35:34 | 2019-06-16 13:33:34 | 0:58:00 | 0:45:17 | 0:12:43 | mira | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | ||
Failure Reason: saw valgrind issues
fail | 4037644 | 2019-06-15 04:17:53 | 2019-06-16 12:37:38 | 2019-06-16 13:27:37 | 0:49:59 | 0:35:27 | 0:14:32 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037645 | 2019-06-15 04:17:54 | 2019-06-16 12:53:21 | 2019-06-16 13:27:21 | 0:34:00 | 0:20:30 | 0:13:30 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037646 | 2019-06-15 04:17:54 | 2019-06-16 12:57:44 | 2019-06-16 13:41:43 | 0:43:59 | 0:33:59 | 0:10:00 | mira | master | rhel | 7.6 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037647 | 2019-06-15 04:17:55 | 2019-06-16 12:57:52 | 2019-06-16 13:17:51 | 0:19:59 | 0:06:34 | 0:13:25 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/prometheus.yaml} | 2 | |
Failure Reason: Command failed on mira072 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037648 | 2019-06-15 04:17:56 | 2019-06-16 13:00:01 | 2019-06-16 13:52:00 | 0:51:59 | 0:28:29 | 0:23:30 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_workunit_loadgen_big.yaml} | 2 | |
Failure Reason: "2019-06-16T13:32:53.992749+0000 mon.a (mon.0) 172 : cluster [WRN] Health check failed: 6 osds down (OSD_DOWN)" in cluster log
fail | 4037649 | 2019-06-15 04:17:57 | 2019-06-16 13:07:54 | 2019-06-16 13:41:53 | 0:33:59 | 0:20:39 | 0:13:20 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037650 | 2019-06-15 04:17:58 | 2019-06-16 13:17:53 | 2019-06-16 14:01:52 | 0:43:59 | 0:29:37 | 0:14:22 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037651 | 2019-06-15 04:17:59 | 2019-06-16 13:27:35 | 2019-06-16 13:45:35 | 0:18:00 | 0:05:39 | 0:12:21 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/radosbench_4M_seq_read.yaml} | 1 | |
Failure Reason: Command failed on mira034 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037652 | 2019-06-15 04:17:59 | 2019-06-16 13:27:39 | 2019-06-16 14:03:38 | 0:35:59 | 0:20:55 | 0:15:04 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037653 | 2019-06-15 04:18:00 | 2019-06-16 13:33:36 | 2019-06-16 14:23:35 | 0:49:59 | 0:27:50 | 0:22:09 | mira | master | centos | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/one.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason: "2019-06-16T14:05:18.298112+0000 mon.a (mon.0) 151 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)" in cluster log
fail | 4037654 | 2019-06-15 04:18:01 | 2019-06-16 13:41:58 | 2019-06-16 14:27:57 | 0:45:59 | 0:29:59 | 0:16:00 | mira | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037655 | 2019-06-15 04:18:02 | 2019-06-16 13:41:58 | 2019-06-16 14:33:57 | 0:51:59 | 0:32:44 | 0:19:15 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037656 | 2019-06-15 04:18:03 | 2019-06-16 13:45:49 | 2019-06-16 14:19:49 | 0:34:00 | 0:20:30 | 0:13:30 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
pass | 4037657 | 2019-06-15 04:18:03 | 2019-06-16 13:52:02 | 2019-06-16 14:16:01 | 0:23:59 | 0:13:31 | 0:10:28 | mira | master | centos | 7.6 | rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4037658 | 2019-06-15 04:18:04 | 2019-06-16 14:01:54 | 2019-06-16 14:35:53 | 0:33:59 | 0:19:47 | 0:14:12 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037659 | 2019-06-15 04:18:05 | 2019-06-16 14:03:53 | 2019-06-16 14:45:52 | 0:41:59 | 0:32:55 | 0:09:04 | mira | master | rhel | 7.6 | rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: "2019-06-16T14:28:19.634332+0000 mon.a (mon.0) 87 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log
fail | 4037660 | 2019-06-15 04:18:06 | 2019-06-16 14:13:40 | 2019-06-16 14:43:39 | 0:29:59 | 0:19:22 | 0:10:37 | mira | master | ubuntu | 18.04 | rados/rest/{mgr-restful.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: "2019-06-16T14:27:39.995802+0000 mon.a (mon.0) 108 : cluster [WRN] Health check failed: 1 MDSs report slow metadata IOs (MDS_SLOW_METADATA_IO)" in cluster log
fail | 4037661 | 2019-06-15 04:18:07 | 2019-06-16 14:16:03 | 2019-06-16 14:56:03 | 0:40:00 | 0:27:06 | 0:12:54 | mira | master | centos | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037662 | 2019-06-15 04:18:07 | 2019-06-16 14:20:03 | 2019-06-16 15:34:03 | 1:14:00 | 1:03:18 | 0:10:42 | mira | master | centos | rados/singleton-flat/valgrind-leaks.yaml | 1 | ||
Failure Reason: "2019-06-16T15:04:25.913157+0000 mon.a (mon.0) 228 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 4037663 | 2019-06-15 04:18:08 | 2019-06-16 14:22:40 | 2019-06-16 14:52:39 | 0:29:59 | 0:18:55 | 0:11:04 | mira | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: "2019-06-16T14:35:18.630130+0000 mon.a (mon.0) 64 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 4037664 | 2019-06-15 04:18:09 | 2019-06-16 14:23:37 | 2019-06-16 17:43:39 | 3:20:02 | 3:07:57 | 0:12:05 | mira | master | ubuntu | 16.04 | rados/standalone/{supported-random-distro$/{ubuntu_16.04.yaml} workloads/crush.yaml} | 1 | |
Failure Reason: Command failed (workunit test crush/crush-classes.sh) on mira030 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4e3c8e8c47461e1654ec14767afa2f9385ed9e32 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/crush/crush-classes.sh'
fail | 4037665 | 2019-06-15 04:18:10 | 2019-06-16 14:28:12 | 2019-06-16 15:56:12 | 1:28:00 | 1:02:36 | 0:25:24 | mira | master | centos | 7.6 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} thrashosds-health.yaml} | 4 | |
Failure Reason: failed to become clean before timeout expired
fail | 4037666 | 2019-06-15 04:18:11 | 2019-06-16 14:33:59 | 2019-06-16 15:07:59 | 0:34:00 | 0:20:20 | 0:13:40 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037667 | 2019-06-15 04:18:11 | 2019-06-16 14:36:00 | 2019-06-16 16:44:02 | 2:08:02 | 1:51:21 | 0:16:41 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason: {'mira082.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037668 | 2019-06-15 04:18:12 | 2019-06-16 14:43:54 | 2019-06-16 15:17:54 | 0:34:00 | 0:20:22 | 0:13:38 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037669 | 2019-06-15 04:18:13 | 2019-06-16 14:45:54 | 2019-06-16 15:31:54 | 0:46:00 | 0:29:17 | 0:16:43 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037670 | 2019-06-15 04:18:14 | 2019-06-16 14:52:41 | 2019-06-16 15:32:41 | 0:40:00 | 0:26:40 | 0:13:20 | mira | master | centos | 7.6 | rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037671 | 2019-06-15 04:18:15 | 2019-06-16 14:56:17 | 2019-06-16 15:40:17 | 0:44:00 | 0:34:45 | 0:09:15 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037672 | 2019-06-15 04:18:16 | 2019-06-16 15:08:15 | 2019-06-16 17:18:16 | 2:10:01 | 1:51:21 | 0:18:40 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason: {'mira111.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}, 'mira061.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037673 | 2019-06-15 04:18:16 | 2019-06-16 15:17:59 | 2019-06-16 15:57:58 | 0:39:59 | 0:22:58 | 0:17:01 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037674 | 2019-06-15 04:18:17 | 2019-06-16 15:32:10 | 2019-06-16 15:48:09 | 0:15:59 | 0:05:35 | 0:10:24 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/radosbench_4M_write.yaml} | 1 | |
Failure Reason: Command failed on mira075 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037675 | 2019-06-15 04:18:18 | 2019-06-16 15:32:43 | 2019-06-16 16:06:42 | 0:33:59 | 0:20:56 | 0:13:03 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037676 | 2019-06-15 04:18:19 | 2019-06-16 15:34:05 | 2019-06-16 16:08:04 | 0:33:59 | 0:20:08 | 0:13:51 | mira | master | ubuntu | 16.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037677 | 2019-06-15 04:18:20 | 2019-06-16 15:40:19 | 2019-06-16 15:50:18 | 0:09:59 | 0:02:49 | 0:07:10 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason: {'mira041.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}, 'mira117.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037678 | 2019-06-15 04:18:20 | 2019-06-16 15:48:24 | 2019-06-16 16:36:24 | 0:48:00 | 0:23:15 | 0:24:45 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037679 | 2019-06-15 04:18:21 | 2019-06-16 15:50:20 | 2019-06-16 16:24:19 | 0:33:59 | 0:20:49 | 0:13:10 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037680 | 2019-06-15 04:18:22 | 2019-06-16 15:56:14 | 2019-06-16 16:40:14 | 0:44:00 | 0:28:03 | 0:15:57 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_workunit_loadgen_mix.yaml} | 2 | |
Failure Reason: "2019-06-16T16:21:10.973602+0000 mon.a (mon.0) 222 : cluster [WRN] Health check failed: 5 osds down (OSD_DOWN)" in cluster log
fail | 4037681 | 2019-06-15 04:18:23 | 2019-06-16 15:58:16 | 2019-06-16 16:08:15 | 0:09:59 | 0:02:57 | 0:07:02 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_7.yaml} tasks/ssh_orchestrator.yaml} | 2 | |
Failure Reason: {'mira041.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}, 'mira101.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037682 | 2019-06-15 04:18:24 | 2019-06-16 16:06:44 | 2019-06-16 16:56:43 | 0:49:59 | 0:28:32 | 0:21:27 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037683 | 2019-06-15 04:18:24 | 2019-06-16 16:08:19 | 2019-06-16 16:40:19 | 0:32:00 | 0:19:38 | 0:12:22 | mira | master | ubuntu | 16.04 | rados/singleton/{all/lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037684 | 2019-06-15 04:18:25 | 2019-06-16 16:08:20 | 2019-06-16 16:54:19 | 0:45:59 | 0:28:16 | 0:17:43 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037685 | 2019-06-15 04:18:26 | 2019-06-16 16:24:34 | 2019-06-16 16:36:33 | 0:11:59 | 0:02:35 | 0:09:24 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason: {'mira117.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}, 'mira075.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037686 | 2019-06-15 04:18:27 | 2019-06-16 16:36:26 | 2019-06-16 17:10:25 | 0:33:59 | 0:20:30 | 0:13:29 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037687 | 2019-06-15 04:18:28 | 2019-06-16 16:36:34 | 2019-06-16 18:46:35 | 2:10:01 | 1:51:01 | 0:19:00 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: {'mira071.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037688 | 2019-06-15 04:18:28 | 2019-06-16 16:40:29 | 2019-06-16 17:14:28 | 0:33:59 | 0:20:55 | 0:13:04 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037689 | 2019-06-15 04:18:29 | 2019-06-16 16:40:29 | 2019-06-16 18:48:30 | 2:08:01 | 1:50:48 | 0:17:13 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason: {'mira117.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}, 'mira065.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037690 | 2019-06-15 04:18:30 | 2019-06-16 16:44:16 | 2019-06-16 17:24:16 | 0:40:00 | 0:22:59 | 0:17:01 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037691 | 2019-06-15 04:18:31 | 2019-06-16 16:54:36 | 2019-06-16 17:28:35 | 0:33:59 | 0:21:51 | 0:12:08 | mira | master | ubuntu | 18.04 | rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason: Command failed on mira041 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats'
fail | 4037692 | 2019-06-15 04:18:32 | 2019-06-16 16:56:45 | 2019-06-16 17:42:45 | 0:46:00 | 0:28:38 | 0:17:22 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037693 | 2019-06-15 04:18:33 | 2019-06-16 17:10:35 | 2019-06-16 17:44:34 | 0:33:59 | 0:20:33 | 0:13:26 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037694 | 2019-06-15 04:18:33 | 2019-06-16 17:14:43 | 2019-06-16 17:26:42 | 0:11:59 | 0:02:44 | 0:09:15 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason: {'mira034.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}, 'mira046.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037695 | 2019-06-15 04:18:34 | 2019-06-16 17:18:19 | 2019-06-16 17:34:18 | 0:15:59 | 0:05:14 | 0:10:45 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_fio.yaml} | 1 | |
Failure Reason: Command failed on mira111 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037696 | 2019-06-15 04:18:35 | 2019-06-16 17:24:30 | 2019-06-16 18:00:30 | 0:36:00 | 0:20:57 | 0:15:03 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037697 | 2019-06-15 04:18:36 | 2019-06-16 17:26:56 | 2019-06-16 17:38:55 | 0:11:59 | 0:02:29 | 0:09:30 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason: {'mira034.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}, 'mira046.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037698 | 2019-06-15 04:18:37 | 2019-06-16 17:28:37 | 2019-06-16 18:02:37 | 0:34:00 | 0:19:58 | 0:14:02 | mira | master | ubuntu | 16.04 | rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037699 | 2019-06-15 04:18:37 | 2019-06-16 17:34:33 | 2019-06-16 18:06:33 | 0:32:00 | 0:20:37 | 0:11:23 | mira | master | ubuntu | 16.04 | rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason: Command failed on mira111 with status 6: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats'
fail | 4037700 | 2019-06-15 04:18:38 | 2019-06-16 17:38:57 | 2019-06-16 18:12:57 | 0:34:00 | 0:20:20 | 0:13:40 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
pass | 4037701 | 2019-06-15 04:18:39 | 2019-06-16 17:42:47 | 2019-06-16 17:58:46 | 0:15:59 | 0:05:33 | 0:10:26 | mira | master | ubuntu | 16.04 | rados/multimon/{clusters/6.yaml msgr-failures/few.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/mon_clock_with_skews.yaml} | 2 | |
fail | 4037702 | 2019-06-15 04:18:40 | 2019-06-16 17:43:41 | 2019-06-16 18:19:40 | 0:35:59 | 0:21:04 | 0:14:55 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037703 | 2019-06-15 04:18:41 | 2019-06-16 17:44:50 | 2019-06-16 18:20:49 | 0:35:59 | 0:20:55 | 0:15:04 | mira | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 | |||
Failure Reason: "2019-06-16T18:01:49.876598+0000 mon.a (mon.0) 200 : cluster [WRN] Health check failed: 6 osds down (OSD_DOWN)" in cluster log
fail | 4037704 | 2019-06-15 04:18:41 | 2019-06-16 17:59:06 | 2019-06-16 18:33:04 | 0:33:58 | 0:20:39 | 0:13:19 | mira | master | ubuntu | 16.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
Failure Reason: "2019-06-16T18:15:30.842063+0000 mon.b (mon.0) 185 : cluster [WRN] Health check failed: 5 osds down (OSD_DOWN)" in cluster log
fail | 4037705 | 2019-06-15 04:18:42 | 2019-06-16 18:00:44 | 2019-06-16 18:52:44 | 0:52:00 | 0:28:20 | 0:23:40 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037706 | 2019-06-15 04:18:43 | 2019-06-16 18:02:47 | 2019-06-16 18:36:47 | 0:34:00 | 0:20:39 | 0:13:21 | mira | master | ubuntu | 18.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/sync-many.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
Failure Reason: "2019-06-16T18:20:05.400156+0000 mon.f (mon.0) 156 : cluster [WRN] Health check failed: 4 osds down (OSD_DOWN)" in cluster log
fail | 4037707 | 2019-06-15 04:18:44 | 2019-06-16 18:06:34 | 2019-06-16 18:54:34 | 0:48:00 | 0:30:28 | 0:17:32 | mira | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037708 | 2019-06-15 04:18:45 | 2019-06-16 18:13:04 | 2019-06-16 18:51:04 | 0:38:00 | 0:22:39 | 0:15:21 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037709 | 2019-06-15 04:18:46 | 2019-06-16 18:19:42 | 2019-06-16 18:37:41 | 0:17:59 | 0:06:02 | 0:11:57 | mira | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/workunits.yaml} | 2 | |
Failure Reason: "2019-06-16T18:35:02.607431+0000 mon.a (mon.0) 107 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)" in cluster log
fail | 4037710 | 2019-06-15 04:18:46 | 2019-06-16 18:20:56 | 2019-06-16 18:54:56 | 0:34:00 | 0:21:07 | 0:12:53 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason: Scrubbing terminated -- not all pgs were active and clean.
fail | 4037711 | 2019-06-15 04:18:47 | 2019-06-16 18:33:06 | 2019-06-16 19:07:05 | 0:33:59 | 0:20:29 | 0:13:30 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037712 | 2019-06-15 04:18:48 | 2019-06-16 18:37:02 | 2019-06-16 19:09:02 | 0:32:00 | 0:19:43 | 0:12:17 | mira | master | ubuntu | 16.04 | rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
Failure Reason:
Command failed on mira082 with status 6: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats'
fail | 4037713 | 2019-06-15 04:18:49 | 2019-06-16 18:37:43 | 2019-06-16 20:47:44 | 2:10:01 | 1:50:45 | 0:19:16 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
{'mira088.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037714 | 2019-06-15 04:18:50 | 2019-06-16 18:46:51 | 2019-06-16 19:20:51 | 0:34:00 | 0:20:27 | 0:13:33 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037715 | 2019-06-15 04:18:50 | 2019-06-16 18:48:47 | 2019-06-16 19:22:47 | 0:34:00 | 0:20:36 | 0:13:24 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037716 | 2019-06-15 04:18:51 | 2019-06-16 18:51:06 | 2019-06-16 19:01:04 | 0:09:58 | 0:02:44 | 0:07:14 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/sample_radosbench.yaml} | 1 | |
Failure Reason:
{'mira034.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037717 | 2019-06-15 04:18:52 | 2019-06-16 18:52:59 | 2019-06-16 19:04:58 | 0:11:59 | 0:02:29 | 0:09:30 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason:
{'mira046.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}, 'mira072.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037718 | 2019-06-15 04:18:53 | 2019-06-16 18:54:49 | 2019-06-16 19:38:48 | 0:43:59 | 0:28:40 | 0:15:19 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037719 | 2019-06-15 04:18:54 | 2019-06-16 18:54:57 | 2019-06-16 22:21:00 | 3:26:03 | 3:13:09 | 0:12:54 | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/erasure-code.yaml} | 1 | |
Failure Reason:
Command failed (workunit test erasure-code/test-erasure-code-plugins.sh) on mira061 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4e3c8e8c47461e1654ec14767afa2f9385ed9e32 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-code-plugins.sh'
fail | 4037720 | 2019-06-15 04:18:54 | 2019-06-16 19:01:10 | 2019-06-16 19:11:09 | 0:09:59 | 0:02:47 | 0:07:12 | mira | master | rhel | 7.6 | rados/singleton/{all/mon-auth-caps.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
{'mira034.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037721 | 2019-06-15 04:18:55 | 2019-06-16 19:05:00 | 2019-06-16 19:16:59 | 0:11:59 | 0:02:26 | 0:09:33 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason:
{'mira046.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}, 'mira072.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037722 | 2019-06-15 04:18:56 | 2019-06-16 19:07:10 | 2019-06-16 19:41:09 | 0:33:59 | 0:20:38 | 0:13:21 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037723 | 2019-06-15 04:18:57 | 2019-06-16 19:07:10 | 2019-06-16 19:51:10 | 0:44:00 | 0:28:36 | 0:15:24 | mira | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037724 | 2019-06-15 04:18:58 | 2019-06-16 19:09:18 | 2019-06-16 19:49:17 | 0:39:59 | 0:23:51 | 0:16:08 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037725 | 2019-06-15 04:18:59 | 2019-06-16 19:11:13 | 2019-06-16 19:57:14 | 0:46:01 | 0:28:44 | 0:17:17 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037726 | 2019-06-15 04:18:59 | 2019-06-16 19:17:13 | 2019-06-16 19:59:13 | 0:42:00 | 0:28:13 | 0:13:47 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037727 | 2019-06-15 04:19:00 | 2019-06-16 19:21:06 | 2019-06-16 20:13:05 | 0:51:59 | 0:28:12 | 0:23:47 | mira | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037728 | 2019-06-15 04:19:01 | 2019-06-16 19:22:57 | 2019-06-16 19:56:56 | 0:33:59 | 0:20:28 | 0:13:31 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037729 | 2019-06-15 04:19:02 | 2019-06-16 19:39:04 | 2019-06-16 21:49:05 | 2:10:01 | 1:51:20 | 0:18:41 | mira | master | rhel | 7.6 | rados/singleton/{all/mon-config-key-caps.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
{'mira018.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037730 | 2019-06-15 04:19:03 | 2019-06-16 19:41:14 | 2019-06-16 20:27:13 | 0:45:59 | 0:28:16 | 0:17:43 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037731 | 2019-06-15 04:19:03 | 2019-06-16 19:49:19 | 2019-06-16 20:33:19 | 0:44:00 | 0:28:11 | 0:15:49 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/readwrite.yaml} | 2 | |
Failure Reason:
"2019-06-16T20:14:16.747633+0000 mon.b (mon.0) 263 : cluster [WRN] Health check failed: 3 osds down (OSD_DOWN)" in cluster log
fail | 4037732 | 2019-06-15 04:19:04 | 2019-06-16 19:51:15 | 2019-06-16 20:35:14 | 0:43:59 | 0:28:10 | 0:15:49 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037733 | 2019-06-15 04:19:05 | 2019-06-16 19:56:58 | 2019-06-16 20:30:57 | 0:33:59 | 0:20:03 | 0:13:56 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037734 | 2019-06-15 04:19:06 | 2019-06-16 19:57:15 | 2019-06-16 20:17:14 | 0:19:59 | 0:06:02 | 0:13:57 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/crash.yaml} | 2 | |
Failure Reason:
Command failed on mira026 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037735 | 2019-06-15 04:19:07 | 2019-06-16 19:59:28 | 2019-06-16 20:15:27 | 0:15:59 | 0:05:11 | 0:10:48 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason:
Command failed on mira046 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1'
fail | 4037736 | 2019-06-15 04:19:08 | 2019-06-16 20:13:09 | 2019-06-16 20:47:08 | 0:33:59 | 0:20:18 | 0:13:41 | mira | master | ubuntu | 16.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037737 | 2019-06-15 04:19:08 | 2019-06-16 20:15:32 | 2019-06-16 21:03:31 | 0:47:59 | 0:30:34 | 0:17:25 | mira | master | centos | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
pass | 4037738 | 2019-06-15 04:19:09 | 2019-06-16 20:17:23 | 2019-06-16 20:33:22 | 0:15:59 | 0:06:11 | 0:09:48 | mira | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
fail | 4037739 | 2019-06-15 04:19:10 | 2019-06-16 20:27:27 | 2019-06-16 21:19:27 | 0:52:00 | 0:28:18 | 0:23:42 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037740 | 2019-06-15 04:19:11 | 2019-06-16 20:31:07 | 2019-06-16 21:31:07 | 1:00:00 | 0:32:41 | 0:27:19 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037741 | 2019-06-15 04:19:11 | 2019-06-16 20:33:20 | 2019-06-16 20:43:19 | 0:09:59 | 0:02:26 | 0:07:33 | mira | master | rhel | 7.6 | rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
{'mira065.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
dead | 4037742 | 2019-06-15 04:19:12 | 2019-06-16 20:33:23 | 2019-06-17 08:35:44 | 12:02:21 | | | mira | master | centos | 7.6 | rados/singleton/{all/mon-config-keys.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4037743 | 2019-06-15 04:19:13 | 2019-06-16 20:35:28 | 2019-06-16 22:45:29 | 2:10:01 | 1:51:18 | 0:18:43 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
Failure Reason:
{'mira069.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}, 'mira082.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}
fail | 4037744 | 2019-06-15 04:19:14 | 2019-06-16 20:43:21 | 2019-06-16 21:17:21 | 0:34:00 | 0:20:24 | 0:13:36 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037745 | 2019-06-15 04:19:15 | 2019-06-16 20:47:23 | 2019-06-16 21:31:22 | 0:43:59 | 0:28:00 | 0:15:59 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037746 | 2019-06-15 04:19:15 | 2019-06-16 20:47:45 | 2019-06-16 21:31:44 | 0:43:59 | 0:27:56 | 0:16:03 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037747 | 2019-06-15 04:19:16 | 2019-06-16 21:03:34 | 2019-06-16 22:01:34 | 0:58:00 | 0:29:27 | 0:28:33 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason:
Scrubbing terminated -- not all pgs were active and clean.
fail | 4037748 | 2019-06-15 04:19:17 | 2019-06-16 21:17:31 | 2019-06-16 23:25:32 | 2:08:01 | 1:51:20 | 0:16:41 | mira | master | rhel | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
{'mira065.front.sepia.ceph.com': {'attempts': 5, 'changed': True, 'censored': "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}}