User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
kchai | 2019-07-09 13:03:00 | 2019-07-09 13:03:24 | 2019-07-11 03:44:11 | 1 day, 14:40:47 | rados | wip-kefu-testing-2019-07-09-1756 | mira | 8297096 | 191 | 12 | 12 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 4105530 | 2019-07-09 13:03:17 | 2019-07-09 13:03:24 | 2019-07-09 13:55:23 | 0:51:59 | 0:36:25 | 0:15:34 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
dead | 4105531 | 2019-07-09 13:03:18 | 2019-07-09 13:03:25 | 2019-07-10 01:05:51 | 12:02:26 | | | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 |
pass | 4105532 | 2019-07-09 13:03:19 | 2019-07-09 13:03:24 | 2019-07-09 13:57:24 | 0:54:00 | 0:41:44 | 0:12:16 | mira | master | ubuntu | 18.04 | rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4105533 | 2019-07-09 13:03:19 | 2019-07-09 13:03:25 | 2019-07-09 13:37:24 | 0:33:59 | 0:23:25 | 0:10:34 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_mix.yaml} | 2 | |
pass | 4105534 | 2019-07-09 13:03:20 | 2019-07-09 13:03:25 | 2019-07-09 15:53:26 | 2:50:01 | 2:30:47 | 0:19:14 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |
pass | 4105535 | 2019-07-09 13:03:21 | 2019-07-09 13:03:24 | 2019-07-09 14:23:24 | 1:20:00 | 1:06:50 | 0:13:10 | mira | master | ubuntu | 18.04 | rados/singleton/{all/lost-unfound.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4105536 | 2019-07-09 13:03:22 | 2019-07-09 13:37:43 | 2019-07-09 13:55:42 | 0:17:59 | 0:08:16 | 0:09:43 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/sample_fio.yaml} | 1 | |
pass | 4105537 | 2019-07-09 13:03:23 | 2019-07-09 13:40:52 | 2019-07-09 14:26:51 | 0:45:59 | 0:23:12 | 0:22:47 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
pass | 4105538 | 2019-07-09 13:03:24 | 2019-07-09 13:55:39 | 2019-07-09 14:23:38 | 0:27:59 | 0:10:46 | 0:17:13 | mira | master | ubuntu | 16.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 4105539 | 2019-07-09 13:03:25 | 2019-07-09 13:55:43 | 2019-07-09 14:17:42 | 0:21:59 | 0:08:35 | 0:13:24 | mira | master | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/force-sync-many.yaml workloads/rados_5925.yaml} | 2 | |
pass | 4105540 | 2019-07-09 13:03:25 | 2019-07-09 13:57:40 | 2019-07-09 14:15:39 | 0:17:59 | 0:07:32 | 0:10:27 | mira | master | ubuntu | 16.04 | rados/singleton/{all/max-pg-per-osd.from-mon.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105541 | 2019-07-09 13:03:26 | 2019-07-09 13:58:33 | 2019-07-09 14:28:32 | 0:29:59 | 0:15:01 | 0:14:58 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/workunits.yaml} | 2 | |
pass | 4105542 | 2019-07-09 13:03:27 | 2019-07-09 14:05:31 | 2019-07-09 14:51:31 | 0:46:00 | 0:35:49 | 0:10:11 | mira | master | ubuntu | 18.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4105543 | 2019-07-09 13:03:28 | 2019-07-09 14:15:41 | 2019-07-09 14:33:41 | 0:18:00 | 0:07:40 | 0:10:20 | mira | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4105544 | 2019-07-09 13:03:29 | 2019-07-09 14:17:45 | 2019-07-09 15:31:45 | 1:14:00 | 1:00:56 | 0:13:04 | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/mon.yaml} | 1 | |
dead | 4105545 | 2019-07-09 13:03:29 | 2019-07-09 14:23:40 | 2019-07-10 02:26:02 | 12:02:22 | | | mira | master | ubuntu | 18.04 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} | 4 |
pass | 4105546 | 2019-07-09 13:03:30 | 2019-07-09 14:23:40 | 2019-07-09 15:05:39 | 0:41:59 | 0:35:58 | 0:06:01 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
pass | 4105547 | 2019-07-09 13:03:31 | 2019-07-09 14:27:05 | 2019-07-09 17:17:07 | 2:50:02 | 2:32:10 | 0:17:52 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
pass | 4105548 | 2019-07-09 13:03:32 | 2019-07-09 14:28:23 | 2019-07-09 15:06:23 | 0:38:00 | 0:25:45 | 0:12:15 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
pass | 4105549 | 2019-07-09 13:03:33 | 2019-07-09 14:28:34 | 2019-07-09 14:58:33 | 0:29:59 | 0:19:15 | 0:10:44 | mira | master | centos | 7.6 | rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105550 | 2019-07-09 13:03:33 | 2019-07-09 14:33:55 | 2019-07-09 14:53:54 | 0:19:59 | 0:09:44 | 0:10:15 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/sample_radosbench.yaml} | 1 | |
pass | 4105551 | 2019-07-09 13:03:34 | 2019-07-09 14:51:33 | 2019-07-09 15:29:33 | 0:38:00 | 0:26:14 | 0:11:46 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
dead | 4105552 | 2019-07-09 13:03:35 | 2019-07-09 14:53:56 | 2019-07-09 15:09:55 | 0:15:59 | | | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | — |
Failure Reason: reached maximum tries (60) after waiting for 900 seconds
pass | 4105553 | 2019-07-09 13:03:36 | 2019-07-09 14:58:35 | 2019-07-09 15:36:35 | 0:38:00 | 0:25:18 | 0:12:42 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4105554 | 2019-07-09 13:03:37 | 2019-07-09 15:05:42 | 2019-07-09 15:43:41 | 0:37:59 | 0:30:08 | 0:07:51 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
pass | 4105555 | 2019-07-09 13:03:38 | 2019-07-09 15:06:25 | 2019-07-09 15:26:24 | 0:19:59 | 0:06:33 | 0:13:26 | mira | master | ubuntu | 16.04 | rados/multimon/{clusters/21.yaml msgr-failures/many.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/mon_clock_no_skews.yaml} | 3 | |
pass | 4105556 | 2019-07-09 13:03:38 | 2019-07-09 15:09:58 | 2019-07-09 15:33:57 | 0:23:59 | 0:11:48 | 0:12:11 | mira | master | | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 |
pass | 4105557 | 2019-07-09 13:03:39 | 2019-07-09 15:26:26 | 2019-07-09 15:48:26 | 0:22:00 | 0:12:44 | 0:09:16 | mira | master | ubuntu | 16.04 | rados/singleton/{all/max-pg-per-osd.from-replica.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105558 | 2019-07-09 13:03:40 | 2019-07-09 15:29:48 | 2019-07-09 15:47:47 | 0:17:59 | 0:06:57 | 0:11:02 | mira | master | ubuntu | 18.04 | rados/objectstore/{backends/fusestore.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4105559 | 2019-07-09 13:03:41 | 2019-07-09 15:31:47 | 2019-07-09 16:13:47 | 0:42:00 | 0:33:48 | 0:08:12 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
pass | 4105560 | 2019-07-09 13:03:41 | 2019-07-09 15:33:59 | 2019-07-09 15:51:58 | 0:17:59 | 0:08:08 | 0:09:51 | mira | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/librados_hello_world.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105561 | 2019-07-09 13:03:42 | 2019-07-09 15:36:46 | 2019-07-09 18:08:47 | 2:32:01 | 2:14:29 | 0:17:32 | mira | master | rhel | 7.6 | rados/singleton/{all/mon-auth-caps.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4105562 | 2019-07-09 13:03:43 | 2019-07-09 15:43:44 | 2019-07-09 16:11:43 | 0:27:59 | 0:16:41 | 0:11:18 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/crash.yaml} | 2 | |
pass | 4105563 | 2019-07-09 13:03:44 | 2019-07-09 15:47:50 | 2019-07-09 16:13:49 | 0:25:59 | 0:16:35 | 0:09:24 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
pass | 4105564 | 2019-07-09 13:03:45 | 2019-07-09 15:48:43 | 2019-07-09 18:38:47 | 2:50:04 | 2:32:05 | 0:17:59 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |
pass | 4105565 | 2019-07-09 13:03:45 | 2019-07-09 15:52:00 | 2019-07-09 16:22:00 | 0:30:00 | 0:17:27 | 0:12:33 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
pass | 4105566 | 2019-07-09 13:03:46 | 2019-07-09 15:53:28 | 2019-07-09 16:29:27 | 0:35:59 | 0:27:24 | 0:08:35 | mira | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/readwrite.yaml} | 2 | |
pass | 4105567 | 2019-07-09 13:03:47 | 2019-07-09 16:11:45 | 2019-07-09 16:43:45 | 0:32:00 | 0:24:52 | 0:07:08 | mira | master | rhel | 7.6 | rados/singleton/{all/mon-config-key-caps.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4105568 | 2019-07-09 13:03:48 | 2019-07-09 16:13:48 | 2019-07-09 16:51:48 | 0:38:00 | 0:20:55 | 0:17:05 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
dead | 4105569 | 2019-07-09 13:03:48 | 2019-07-09 16:13:50 | 2019-07-09 16:29:49 | 0:15:59 | | | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | — |
Failure Reason: reached maximum tries (60) after waiting for 900 seconds
pass | 4105570 | 2019-07-09 13:03:49 | 2019-07-09 16:22:14 | 2019-07-09 17:02:13 | 0:39:59 | 0:28:31 | 0:11:28 | mira | master | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/many.yaml workloads/rados_api_tests.yaml} | 2 | |
pass | 4105571 | 2019-07-09 13:03:50 | 2019-07-09 16:29:29 | 2019-07-09 19:07:30 | 2:38:01 | 2:18:51 | 0:19:10 | mira | master | rhel | 7.6 | rados/singleton/{all/mon-config-keys.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4105572 | 2019-07-09 13:03:51 | 2019-07-09 16:29:50 | 2019-07-09 17:07:50 | 0:38:00 | 0:31:58 | 0:06:02 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
pass | 4105573 | 2019-07-09 13:03:52 | 2019-07-09 16:43:46 | 2019-07-09 17:09:46 | 0:26:00 | 0:17:08 | 0:08:52 | mira | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105574 | 2019-07-09 13:03:53 | 2019-07-09 16:51:50 | 2019-07-09 17:25:49 | 0:33:59 | 0:18:20 | 0:15:39 | mira | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
fail | 4105575 | 2019-07-09 13:03:53 | 2019-07-09 17:02:15 | 2019-07-09 17:56:15 | 0:54:00 | 0:42:01 | 0:11:59 | mira | master | ubuntu | 16.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason: SSH connection to mira052 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
dead | 4105576 | 2019-07-09 13:03:54 | 2019-07-09 17:07:54 | 2019-07-09 17:23:53 | 0:15:59 | | | mira | master | ubuntu | 16.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | — |
Failure Reason: reached maximum tries (60) after waiting for 900 seconds
pass | 4105577 | 2019-07-09 13:03:55 | 2019-07-09 17:10:00 | 2019-07-09 17:54:00 | 0:44:00 | 0:31:26 | 0:12:34 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
pass | 4105578 | 2019-07-09 13:03:56 | 2019-07-09 17:17:09 | 2019-07-09 18:13:09 | 0:56:00 | 0:43:56 | 0:12:04 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/dashboard.yaml} | 2 | |
pass | 4105579 | 2019-07-09 13:03:57 | 2019-07-09 17:24:08 | 2019-07-09 17:58:07 | 0:33:59 | 0:15:30 | 0:18:29 | mira | master | centos | 7.6 | rados/singleton/{all/mon-config.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4105580 | 2019-07-09 13:03:57 | 2019-07-09 17:25:50 | 2019-07-09 17:37:50 | 0:12:00 | | | mira | master | ubuntu | 18.04 | rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 |
Failure Reason: Command failed on mira115 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y install linux-image-generic'
pass | 4105581 | 2019-07-09 13:03:58 | 2019-07-09 17:37:51 | 2019-07-09 20:57:54 | 3:20:03 | 3:10:01 | 0:10:02 | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/osd.yaml} | 1 | |
dead | 4105582 | 2019-07-09 13:03:59 | 2019-07-09 17:54:14 | 2019-07-10 05:56:36 | 12:02:22 | | | mira | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 |
pass | 4105583 | 2019-07-09 13:04:00 | 2019-07-09 17:56:29 | 2019-07-09 18:44:29 | 0:48:00 | 0:26:55 | 0:21:05 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/repair_test.yaml} | 2 | |
pass | 4105584 | 2019-07-09 13:04:01 | 2019-07-09 17:58:22 | 2019-07-09 20:40:24 | 2:42:02 | 2:20:53 | 0:21:09 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
pass | 4105585 | 2019-07-09 13:04:02 | 2019-07-09 18:08:49 | 2019-07-09 21:20:51 | 3:12:02 | 2:53:55 | 0:18:07 | mira | master | rhel | 7.6 | rados/singleton/{all/osd-backfill.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4105586 | 2019-07-09 13:04:02 | 2019-07-09 18:13:23 | 2019-07-09 18:33:22 | 0:19:59 | 0:08:41 | 0:11:18 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/fio_4K_rand_read.yaml} | 1 | |
fail | 4105587 | 2019-07-09 13:04:03 | 2019-07-09 18:33:24 | 2019-07-09 19:33:24 | 1:00:00 | 0:45:26 | 0:14:34 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/few.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
Failure Reason: "2019-07-09T19:03:31.147929+0000 osd.8 (osd.8) 1 : cluster [ERR] 2.10 required past_interval bounds are empty [178,150) but past_intervals is not: ([25,149] all_participants=1,4,5,7,8 intervals=([25,46] acting 5,8),([50,58] acting 4,5),([137,145] acting 1,5),([146,149] acting 1,7))" in cluster log
pass | 4105588 | 2019-07-09 13:04:04 | 2019-07-09 18:38:49 | 2019-07-09 18:54:48 | 0:15:59 | 0:05:23 | 0:10:36 | mira | master | ubuntu | 16.04 | rados/multimon/{clusters/3.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/mon_clock_with_skews.yaml} | 2 | |
dead | 4105589 | 2019-07-09 13:04:05 | 2019-07-09 18:44:30 | 2019-07-10 06:46:52 | 12:02:22 | | | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 |
pass | 4105590 | 2019-07-09 13:04:06 | 2019-07-09 18:54:50 | 2019-07-09 22:32:52 | 3:38:02 | 3:19:18 | 0:18:44 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |
pass | 4105591 | 2019-07-09 13:04:06 | 2019-07-09 19:07:46 | 2019-07-09 19:45:46 | 0:38:00 | 0:24:48 | 0:13:12 | mira | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 2 | |
pass | 4105592 | 2019-07-09 13:04:07 | 2019-07-09 19:33:26 | 2019-07-09 22:29:28 | 2:56:02 | 2:39:13 | 0:16:49 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
pass | 4105593 | 2019-07-09 13:04:08 | 2019-07-09 19:45:48 | 2019-07-09 20:51:48 | 1:06:00 | 0:55:21 | 0:10:39 | mira | master | ubuntu | 16.04 | rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105594 | 2019-07-09 13:04:09 | 2019-07-09 20:40:26 | 2019-07-09 21:02:25 | 0:21:59 | 0:10:39 | 0:11:20 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/failover.yaml} | 2 | |
pass | 4105595 | 2019-07-09 13:04:10 | 2019-07-09 20:52:03 | 2019-07-09 21:22:03 | 0:30:00 | 0:18:12 | 0:11:48 | mira | master | centos | 7.6 | rados/singleton/{all/osd-recovery.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4105596 | 2019-07-09 13:04:10 | 2019-07-09 20:57:55 | 2019-07-09 21:47:55 | 0:50:00 | 0:32:51 | 0:17:09 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason: timed out waiting for admin_socket to appear after osd.1 restart
pass | 4105597 | 2019-07-09 13:04:11 | 2019-07-09 21:02:27 | 2019-07-09 21:20:26 | 0:17:59 | 0:08:53 | 0:09:06 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/fio_4K_rand_rw.yaml} | 1 | |
pass | 4105598 | 2019-07-09 13:04:12 | 2019-07-09 21:20:34 | 2019-07-09 22:06:34 | 0:46:00 | 0:36:48 | 0:09:12 | mira | master | ubuntu | 16.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105599 | 2019-07-09 13:04:13 | 2019-07-09 21:20:52 | 2019-07-09 21:48:51 | 0:27:59 | 0:15:36 | 0:12:23 | mira | master | ubuntu | 16.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rgw_snaps.yaml} | 2 | |
pass | 4105600 | 2019-07-09 13:04:14 | 2019-07-09 21:22:17 | 2019-07-09 22:30:17 | 1:08:00 | 0:56:37 | 0:11:23 | mira | master | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/one.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
pass | 4105601 | 2019-07-09 13:04:14 | 2019-07-09 21:47:58 | 2019-07-09 22:15:57 | 0:27:59 | 0:20:51 | 0:07:08 | mira | master | rhel | 7.6 | rados/singleton/{all/peer.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4105602 | 2019-07-09 13:04:15 | 2019-07-09 21:49:07 | 2019-07-09 22:33:07 | 0:44:00 | 0:31:01 | 0:12:59 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
pass | 4105603 | 2019-07-09 13:04:16 | 2019-07-09 22:06:37 | 2019-07-10 01:00:39 | 2:54:02 | 2:36:02 | 0:18:00 | mira | master | rhel | 7.6 | rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
dead | 4105604 | 2019-07-09 13:04:17 | 2019-07-09 22:15:58 | 2019-07-09 22:31:58 | 0:16:00 | | | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | — |
Failure Reason:
reached maximum tries (60) after waiting for 900 seconds |
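The jobs marked dead with the reason "reached maximum tries (60) after waiting for 900 seconds" were abandoned while teuthology polled for a condition (typically the test nodes becoming reachable) that never came true. The numbers are consistent with a fixed-interval polling loop of 60 attempts, 15 seconds apart; the sketch below illustrates that pattern only — `wait_until` and its defaults are hypothetical, not teuthology's actual API:

```python
import time

def wait_until(check, tries=60, delay=15):
    """Call check() up to `tries` times, sleeping `delay` seconds between
    attempts. Returns the attempt number on success; raises once the whole
    budget (tries * delay seconds) is exhausted."""
    for attempt in range(1, tries + 1):
        if check():
            return attempt
        time.sleep(delay)
    raise RuntimeError(
        "reached maximum tries (%d) after waiting for %d seconds"
        % (tries, tries * delay)
    )
```

With the defaults above, 60 tries at 15-second intervals gives exactly the 900-second budget reported in these failure reasons.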
pass | 4105605 | 2019-07-09 13:04:18 | 2019-07-09 22:29:30 | 2019-07-09 23:09:30 | 0:40:00 | 0:26:56 | 0:13:04 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
pass | 4105606 | 2019-07-09 13:04:18 | 2019-07-09 22:30:19 | 2019-07-10 01:24:21 | 2:54:02 | 2:34:36 | 0:19:26 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |
pass | 4105607 | 2019-07-09 13:04:19 | 2019-07-09 22:31:59 | 2019-07-09 22:49:58 | 0:17:59 | 0:07:46 | 0:10:13 | mira | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/pool-access.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105608 | 2019-07-09 13:04:20 | 2019-07-09 22:32:53 | 2019-07-09 23:10:53 | 0:38:00 | 0:25:56 | 0:12:04 | mira | master | ubuntu | 16.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4105609 | 2019-07-09 13:04:21 | 2019-07-09 22:33:21 | 2019-07-09 23:13:20 | 0:39:59 | 0:18:20 | 0:21:39 | mira | master | centos | 7.6 | rados/singleton/{all/pg-autoscaler.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
pass | 4105610 | 2019-07-09 13:04:22 | 2019-07-09 22:50:01 | 2019-07-10 01:22:02 | 2:32:01 | 2:14:00 | 0:18:01 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4M_rand_read.yaml} | 1 | |
pass | 4105611 | 2019-07-09 13:04:22 | 2019-07-09 23:09:37 | 2019-07-09 23:49:36 | 0:39:59 | 0:27:16 | 0:12:43 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
pass | 4105612 | 2019-07-09 13:04:23 | 2019-07-09 23:11:08 | 2019-07-10 01:47:09 | 2:36:01 | 2:17:39 | 0:18:22 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 4105613 | 2019-07-09 13:04:24 | 2019-07-09 23:13:35 | 2019-07-09 23:35:34 | 0:21:59 | 0:10:21 | 0:11:38 | mira | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/insights.yaml} | 2 | |
pass | 4105614 | 2019-07-09 13:04:25 | 2019-07-09 23:35:37 | 2019-07-09 23:57:36 | 0:21:59 | 0:10:11 | 0:11:48 | mira | master | ubuntu | 16.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/scrub_test.yaml} | 2 | |
pass | 4105615 | 2019-07-09 13:04:26 | 2019-07-09 23:49:51 | 2019-07-10 00:19:50 | 0:29:59 | 0:19:11 | 0:10:48 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
pass | 4105616 | 2019-07-09 13:04:26 | 2019-07-09 23:57:38 | 2019-07-10 00:13:37 | 0:15:59 | 0:06:27 | 0:09:32 | mira | master | ubuntu | 18.04 | rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 4105617 | 2019-07-09 13:04:27 | 2019-07-10 00:13:53 | 2019-07-10 03:33:55 | 3:20:02 | 3:00:50 | 0:19:12 | mira | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/scrub.yaml} | 1 | |
Failure Reason:
Command failed (workunit test scrub/osd-scrub-snaps.sh) on mira069 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=82970962e741a1d3f283196c6cc582868132c3fe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-snaps.sh' |
pass | 4105618 | 2019-07-09 13:04:28 | 2019-07-10 00:20:08 | 2019-07-10 00:56:08 | 0:36:00 | 0:24:08 | 0:11:52 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
pass | 4105619 | 2019-07-09 13:04:29 | 2019-07-10 00:56:22 | 2019-07-10 01:34:22 | 0:38:00 | 0:27:07 | 0:10:53 | mira | master | ubuntu | 16.04 | rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105620 | 2019-07-09 13:04:30 | 2019-07-10 01:00:54 | 2019-07-10 01:22:53 | 0:21:59 | 0:11:56 | 0:10:03 | mira | master | ubuntu | 18.04 | rados/singleton/{all/radostool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4105621 | 2019-07-09 13:04:30 | 2019-07-10 01:06:05 | 2019-07-10 01:48:05 | 0:42:00 | 0:28:46 | 0:13:14 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
pass | 4105622 | 2019-07-09 13:04:31 | 2019-07-10 01:22:06 | 2019-07-10 01:52:05 | 0:29:59 | 0:22:25 | 0:07:34 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4M_rand_rw.yaml} | 1 | |
pass | 4105623 | 2019-07-09 13:04:32 | 2019-07-10 01:22:55 | 2019-07-10 01:50:54 | 0:27:59 | 0:16:11 | 0:11:48 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
pass | 4105624 | 2019-07-09 13:04:33 | 2019-07-10 01:24:23 | 2019-07-10 01:48:22 | 0:23:59 | 0:11:08 | 0:12:51 | mira | master | ubuntu | 18.04 | rados/multimon/{clusters/6.yaml msgr-failures/many.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_recovery.yaml} | 2 | |
pass | 4105625 | 2019-07-09 13:04:34 | 2019-07-10 01:34:40 | 2019-07-10 01:58:39 | 0:23:59 | 0:12:17 | 0:11:42 | mira | master | | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_cls_all.yaml validater/lockdep.yaml} | 2 | |
pass | 4105626 | 2019-07-09 13:04:35 | 2019-07-10 01:47:12 | 2019-07-10 04:29:14 | 2:42:02 | 2:23:37 | 0:18:25 | mira | master | rhel | 7.6 | rados/singleton/{all/random-eio.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
pass | 4105627 | 2019-07-09 13:04:35 | 2019-07-10 01:48:19 | 2019-07-10 05:32:27 | 3:44:08 | 3:14:51 | 0:29:17 | mira | master | centos | 7.6 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
dead | 4105628 | 2019-07-09 13:04:36 | 2019-07-10 01:48:24 | 2019-07-10 13:50:47 | 12:02:23 | | | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
pass | 4105629 | 2019-07-09 13:04:37 | 2019-07-10 01:50:56 | 2019-07-10 02:44:55 | 0:53:59 | 0:32:55 | 0:21:04 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/module_selftest.yaml} | 2 | |
pass | 4105630 | 2019-07-09 13:04:38 | 2019-07-10 01:52:17 | 2019-07-10 02:26:17 | 0:34:00 | 0:22:43 | 0:11:17 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_api_tests.yaml} | 2 | |
pass | 4105631 | 2019-07-09 13:04:39 | 2019-07-10 01:58:42 | 2019-07-10 02:40:42 | 0:42:00 | 0:29:53 | 0:12:07 | mira | master | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/sync-many.yaml workloads/rados_mon_workunits.yaml} | 2 | |
pass | 4105632 | 2019-07-09 13:04:39 | 2019-07-10 02:26:11 | 2019-07-10 02:54:10 | 0:27:59 | 0:16:59 | 0:11:00 | mira | master | centos | 7.6 | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
dead | 4105633 | 2019-07-09 13:04:40 | 2019-07-10 02:26:18 | 2019-07-10 02:42:17 | 0:15:59 | | | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_write.yaml} | — | |
Failure Reason:
reached maximum tries (60) after waiting for 900 seconds |
fail | 4105634 | 2019-07-09 13:04:41 | 2019-07-10 02:40:44 | 2019-07-10 05:16:46 | 2:36:02 | 2:08:46 | 0:27:16 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
Failure Reason:
'strip_empty_ends': True, u'_raw_params': u'sgdisk --zap-all /dev/sdd || sgdisk --zap-all /dev/sdd', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'start': u'2019-07-10 05:14:20.878114'}, {'stderr_lines': [], u'changed': True, u'stdout': u'Creating new GPT entries.\nGPT data structures destroyed! You may now partition the disk using fdisk or\nother utilities.', u'delta': u'0:00:01.140779', 'stdout_lines': [u'Creating new GPT entries.', u'GPT data structures destroyed! You may now partition the disk using fdisk or', u'other utilities.'], '_ansible_item_label': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2001655500'], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPW9K0N210EV5E', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA722010CLA330', u'partitions': {}}, 'key': u'sde'}, 'ansible_loop_var': u'item', u'end': u'2019-07-10 05:14:23.289358', '_ansible_no_log': False, 'item': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2001655500'], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. 
ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPW9K0N210EV5E', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA722010CLA330', u'partitions': {}}, 'key': u'sde'}, u'cmd': u'sgdisk --zap-all /dev/sde || sgdisk --zap-all /dev/sde', 'failed': False, u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u'sgdisk --zap-all /dev/sde || sgdisk --zap-all /dev/sde', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'start': u'2019-07-10 05:14:22.148579'}, {'stderr_lines': [], u'changed': True, u'stdout': u'Creating new GPT entries.\nGPT data structures destroyed! You may now partition the disk using fdisk or\nother utilities.', u'delta': u'0:00:01.010899', 'stdout_lines': [u'Creating new GPT entries.', u'GPT data structures destroyed! You may now partition the disk using fdisk or', u'other utilities.'], '_ansible_item_label': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2012776300'], u'uuids': [u'8a504e08-548e-4538-a04b-c35b6f053df0']}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. 
ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPS930N121G73V', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HDS721010CLA330', u'partitions': {}}, 'key': u'sdf'}, 'ansible_loop_var': u'item', u'end': u'2019-07-10 05:14:24.567150', '_ansible_no_log': False, 'item': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2012776300'], u'uuids': [u'8a504e08-548e-4538-a04b-c35b6f053df0']}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPS930N121G73V', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HDS721010CLA330', u'partitions': {}}, 'key': u'sdf'}, u'cmd': u'sgdisk --zap-all /dev/sdf || sgdisk --zap-all /dev/sdf', 'failed': False, u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u'sgdisk --zap-all /dev/sdf || sgdisk --zap-all /dev/sdf', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'start': u'2019-07-10 05:14:23.556251'}, {'stderr_lines': [], u'changed': True, u'stdout': u'Creating new GPT entries.\nGPT data structures destroyed! You may now partition the disk using fdisk or\nother utilities.', u'delta': u'0:00:01.040837', 'stdout_lines': [u'Creating new GPT entries.', u'GPT data structures destroyed! 
You may now partition the disk using fdisk or', u'other utilities.'], '_ansible_item_label': {'value': {u'sectorsize': u'512', u'vendor': u'Seagate', u'links': {u'masters': [u'dm-0'], u'labels': [], u'ids': [], u'uuids': [u'7b7535bb-dbb2-432b-85fb-e20b4c93fff1']}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'5VP5A1FQ', u'holders': [u'mpatha'], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'ST31000528AS', u'partitions': {}}, 'key': u'sdg'}, 'ansible_loop_var': u'item', u'end': u'2019-07-10 05:14:25.879638', '_ansible_no_log': False, 'item': {'value': {u'sectorsize': u'512', u'vendor': u'Seagate', u'links': {u'masters': [u'dm-0'], u'labels': [], u'ids': [], u'uuids': [u'7b7535bb-dbb2-432b-85fb-e20b4c93fff1']}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. 
ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'5VP5A1FQ', u'holders': [u'mpatha'], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'ST31000528AS', u'partitions': {}}, 'key': u'sdg'}, u'cmd': u'sgdisk --zap-all /dev/sdg || sgdisk --zap-all /dev/sdg', 'failed': False, u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u'sgdisk --zap-all /dev/sdg || sgdisk --zap-all /dev/sdg', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'start': u'2019-07-10 05:14:24.838801'}, {'ansible_loop_var': u'item', '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', 'item': {'value': {u'sectorsize': u'512', u'vendor': u'Seagate', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2000000000'], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. 
ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'5VP52BEJ', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'ST31000528AS', u'partitions': {u'sda1': {u'start': u'2048', u'sectorsize': 512, u'uuid': u'8220ee6f-9407-43db-b7ff-d35b799efddd', u'sectors': u'1953522688', u'holders': [], u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2000000000-part1'], u'uuids': [u'8220ee6f-9407-43db-b7ff-d35b799efddd']}, u'size': u'931.51 GB'}}}, 'key': u'sda'}, 'skipped': True, 'changed': False, '_ansible_item_label': {'value': {u'sectorsize': u'512', u'vendor': u'Seagate', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2000000000'], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'5VP52BEJ', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'ST31000528AS', u'partitions': {u'sda1': {u'start': u'2048', u'sectorsize': 512, u'uuid': u'8220ee6f-9407-43db-b7ff-d35b799efddd', u'sectors': u'1953522688', u'holders': [], u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d2000000000-part1'], u'uuids': [u'8220ee6f-9407-43db-b7ff-d35b799efddd']}, u'size': u'931.51 GB'}}}, 'key': u'sda'}}, {'stderr_lines': [], u'changed': True, u'stdout': u'Creating new GPT entries.\nGPT data structures destroyed! You may now partition the disk using fdisk or\nother utilities.', u'delta': u'0:00:01.011338', 'stdout_lines': [u'Creating new GPT entries.', u'GPT data structures destroyed! 
You may now partition the disk using fdisk or', u'other utilities.'], '_ansible_item_label': {'value': {u'sectorsize': u'512', u'vendor': u'Seagate', u'links': {u'masters': [u'dm-0'], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'5VP53FPZ', u'holders': [u'mpatha'], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'ST31000528AS', u'partitions': {}}, 'key': u'sdb'}, 'ansible_loop_var': u'item', u'end': u'2019-07-10 05:14:27.146459', '_ansible_no_log': False, 'item': {'value': {u'sectorsize': u'512', u'vendor': u'Seagate', u'links': {u'masters': [u'dm-0'], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'5VP53FPZ', u'holders': [u'mpatha'], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'ST31000528AS', u'partitions': {}}, 'key': u'sdb'}, u'cmd': u'sgdisk --zap-all /dev/sdb || sgdisk --zap-all /dev/sdb', 'failed': False, u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u'sgdisk --zap-all /dev/sdb || sgdisk --zap-all /dev/sdb', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'start': u'2019-07-10 05:14:26.135121'}, {'stderr_lines': [], u'changed': True, u'stdout': u'Creating new GPT entries.\nGPT data structures destroyed! 
You may now partition the disk using fdisk or\nother utilities.', u'delta': u'0:00:01.011506', 'stdout_lines': [u'Creating new GPT entries.', u'GPT data structures destroyed! You may now partition the disk using fdisk or', u'other utilities.'], '_ansible_item_label': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d208263c000'], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPW9K0HD2H3VPL', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA722010CLA330', u'partitions': {}}, 'key': u'sdc'}, 'ansible_loop_var': u'item', u'end': u'2019-07-10 05:14:28.408798', '_ansible_no_log': False, 'item': {'value': {u'sectorsize': u'512', u'vendor': u'Hitachi', u'links': {u'masters': [], u'labels': [], u'ids': [u'scsi-2001b4d208263c000'], u'uuids': []}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. 
ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'JPW9K0HD2H3VPL', u'holders': [], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA722010CLA330', u'partitions': {}}, 'key': u'sdc'}, u'cmd': u'sgdisk --zap-all /dev/sdc || sgdisk --zap-all /dev/sdc', 'failed': False, u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u'sgdisk --zap-all /dev/sdc || sgdisk --zap-all /dev/sdc', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'start': u'2019-07-10 05:14:27.397292'}, {'stderr_lines': [], u'changed': True, u'stdout': u'Creating new GPT entries.\nGPT data structures destroyed! You may now partition the disk using fdisk or\nother utilities.', u'delta': u'0:00:01.010910', 'stdout_lines': [u'Creating new GPT entries.', u'GPT data structures destroyed! You may now partition the disk using fdisk or', u'other utilities.'], '_ansible_item_label': {'value': {u'sectorsize': u'512', u'vendor': u'NA', u'links': {u'masters': [u'dm-0'], u'labels': [], u'ids': [], u'uuids': [u'9d078471-69c8-4ace-900c-c52b0c56bac5']}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. 
ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'PAJ55T7E', u'holders': [u'mpatha'], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA721010KLA330', u'partitions': {}}, 'key': u'sdh'}, 'ansible_loop_var': u'item', u'end': u'2019-07-10 05:14:29.673932', '_ansible_no_log': False, 'item': {'value': {u'sectorsize': u'512', u'vendor': u'NA', u'links': {u'masters': [u'dm-0'], u'labels': [], u'ids': [], u'uuids': [u'9d078471-69c8-4ace-900c-c52b0c56bac5']}, u'sas_device_handle': None, u'host': u'RAID bus controller: Areca Technology Corp. ARC-1680 series PCIe to SAS/SATA 3Gb RAID Controller', u'support_discard': u'0', u'serial': u'PAJ55T7E', u'holders': [u'mpatha'], u'size': u'931.51 GB', u'scheduler_mode': u'deadline', u'rotational': u'1', u'sectors': u'1953525168', u'sas_address': None, u'virtual': 1, u'removable': u'0', u'model': u'HUA721010KLA330', u'partitions': {}}, 'key': u'sdh'}, u'cmd': u'sgdisk --zap-all /dev/sdh || sgdisk --zap-all /dev/sdh', 'failed': False, u'stderr': u'', u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u'sgdisk --zap-all /dev/sdh || sgdisk --zap-all /dev/sdh', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'start': u'2019-07-10 05:14:28.663022'}, {'stderr_lines': [u'Problem opening /dev/dm-0 for reading! Error is 2.', u'The specified file does not exist!', u"Problem opening '' for writing! Program will now terminate.", u'Warning! MBR not overwritten! Error is 2!', u'Problem opening /dev/dm-0 for reading! Error is 2.', u'The specified file does not exist!', u"Problem opening '' for writing! Program will now terminate.", u'Warning! MBR not overwritten! 
Error is 2!']
changed: True
failed: True
cmd: sgdisk --zap-all /dev/dm-0 || sgdisk --zap-all /dev/dm-0
rc: 2
msg: non-zero return code
start: 2019-07-10 05:14:29.919320
end: 2019-07-10 05:14:29.927772
delta: 0:00:00.008452
stdout: ''
stdout_lines: []
stderr: |
  Problem opening /dev/dm-0 for reading! Error is 2.
  The specified file does not exist!
  Problem opening '' for writing! Program will now terminate.
  Warning! MBR not overwritten! Error is 2!
  Problem opening /dev/dm-0 for reading! Error is 2.
  The specified file does not exist!
  Problem opening '' for writing! Program will now terminate.
  Warning! MBR not overwritten! Error is 2!
invocation:
  module_args:
    _raw_params: sgdisk --zap-all /dev/dm-0 || sgdisk --zap-all /dev/dm-0
    _uses_shell: True
    warn: True
    strip_empty_ends: True
    stdin_add_newline: True
    executable: None
    removes: None
    argv: None
    creates: None
    chdir: None
    stdin: None
ansible_loop_var: item
_ansible_no_log: False
item (identical to _ansible_item_label):
  key: dm-0
  value:
    links: {ids: [dm-name-mpatha, dm-uuid-mpath-2001b4d2000000000], masters: [], labels: [], uuids: []}
    serial: 5VP53FPZ
    size: 931.51 GB
    sectors: 1953525168
    sectorsize: 512
    scheduler_mode: deadline
    rotational: 1
    virtual: 1
    removable: 0
    support_discard: 0
    holders: []
    partitions: {}
    vendor: None
    model: None
    host: ''
    sas_address: None
    sas_device_handle: None

The failure_log callback plugin then crashed while dumping this result:

Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
    dumper.represent(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
    node = self.represent_data(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
    return self.represent_mapping(u'tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
    return self.represent_mapping(u'tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 219, in represent_list
    return self.represent_sequence(u'tag:yaml.org,2002:seq', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 102, in represent_sequence
    node_item = self.represent_data(item)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
    return self.represent_mapping(u'tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
    return self.represent_mapping(u'tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'sdd') |
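The `RepresenterError` above is a secondary failure: `yaml.safe_dump` looks up representers by exact type, so it refuses any object that is not a plain built-in. The `u'sdd'` value in the failure dict is most likely one of Ansible's `str` subclasses (e.g. `AnsibleUnsafeText`), which falls through to `represent_undefined`. A minimal sketch of the failure mode, assuming PyYAML; `UnsafeText` is a hypothetical stand-in for Ansible's wrapper type:

```python
import yaml


class UnsafeText(str):
    """Stand-in for Ansible's AnsibleUnsafeText: a plain str subclass."""


# SafeRepresenter matches types exactly, so a str *subclass* has no
# registered representer and safe_dump raises RepresenterError.
try:
    yaml.safe_dump({"device": UnsafeText("sdd")})
except yaml.representer.RepresenterError as exc:
    print("safe_dump refused:", exc)

# One common workaround in callback plugins: coerce to built-ins first.
print(yaml.safe_dump({"device": str(UnsafeText("sdd"))}))
```

Coercing values to plain built-ins before dumping (or registering a representer for the subclass) avoids masking the real task failure with a logging crash.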
pass | 4105635 | 2019-07-09 13:04:42 | 2019-07-10 02:42:19 | 2019-07-10 03:22:18 | 0:39:59 | 0:26:54 | 0:13:05 | mira | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
pass | 4105636 | 2019-07-09 13:04:43 | 2019-07-10 02:44:58 | 2019-07-10 03:18:57 | 0:33:59 | 0:22:15 | 0:11:44 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
pass | 4105637 | 2019-07-09 13:04:44 | 2019-07-10 02:54:13 | 2019-07-10 03:12:12 | 0:17:59 | 0:08:03 | 0:09:56 | mira | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/version-number-sanity.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4105638 | 2019-07-09 13:04:45 | 2019-07-10 03:12:14 | 2019-07-10 04:00:13 | 0:47:59 | 0:38:42 | 0:09:17 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4105639 | 2019-07-09 13:04:45 | 2019-07-10 03:19:06 | 2019-07-10 04:23:06 | 1:04:00 | 0:51:44 | 0:12:16 | mira | master | centos | 7.6 | rados/singleton/{all/recovery-preemption.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105640 | 2019-07-09 13:04:46 | 2019-07-10 03:22:36 | 2019-07-10 04:00:35 | 0:37:59 | 0:20:09 | 0:17:50 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
pass | 4105641 | 2019-07-09 13:04:47 | 2019-07-10 03:34:13 | 2019-07-10 04:06:12 | 0:31:59 | 0:20:20 | 0:11:39 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} | 2 | |
pass | 4105642 | 2019-07-09 13:04:48 | 2019-07-10 04:00:32 | 2019-07-10 04:48:31 | 0:47:59 | 0:35:42 | 0:12:17 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
pass | 4105643 | 2019-07-09 13:04:49 | 2019-07-10 04:00:37 | 2019-07-10 04:32:36 | 0:31:59 | 0:18:05 | 0:13:54 | mira | master | centos | 7.6 | rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
pass | 4105644 | 2019-07-09 13:04:50 | 2019-07-10 04:06:14 | 2019-07-10 04:28:13 | 0:21:59 | 0:11:15 | 0:10:44 | mira | master | ubuntu | 16.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/rados_cls_all.yaml} | 2 | |
pass | 4105645 | 2019-07-09 13:04:51 | 2019-07-10 04:23:08 | 2019-07-10 04:45:07 | 0:21:59 | 0:09:59 | 0:12:00 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/orchestrator_cli.yaml} | 2 | |
pass | 4105646 | 2019-07-09 13:04:51 | 2019-07-10 04:28:15 | 2019-07-10 04:48:15 | 0:20:00 | 0:10:22 | 0:09:38 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4K_rand_read.yaml} | 1 | |
pass | 4105647 | 2019-07-09 13:04:52 | 2019-07-10 04:29:15 | 2019-07-10 05:05:14 | 0:35:59 | 0:19:57 | 0:16:02 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 4105648 | 2019-07-09 13:04:53 | 2019-07-10 04:32:38 | 2019-07-10 05:12:43 | 0:40:05 | 0:30:36 | 0:09:29 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
pass | 4105649 | 2019-07-09 13:04:54 | 2019-07-10 04:45:10 | 2019-07-10 05:01:09 | 0:15:59 | 0:07:13 | 0:08:46 | mira | master | ubuntu | 16.04 | rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105650 | 2019-07-09 13:04:55 | 2019-07-10 04:48:17 | 2019-07-10 07:20:19 | 2:32:02 | 2:14:05 | 0:17:57 | mira | master | rhel | 7.6 | rados/rest/{mgr-restful.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4105651 | 2019-07-09 13:04:56 | 2019-07-10 04:48:46 | 2019-07-10 05:34:46 | 0:46:00 | 0:36:01 | 0:09:59 | mira | master | ubuntu | 18.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4105652 | 2019-07-09 13:04:56 | 2019-07-10 05:01:28 | 2019-07-10 05:45:28 | 0:44:00 | 0:23:11 | 0:20:49 | mira | master | centos | | rados/singleton-flat/valgrind-leaks.yaml | 1 | |
pass | 4105653 | 2019-07-09 13:04:57 | 2019-07-10 05:05:16 | 2019-07-10 07:39:18 | 2:34:02 | 2:15:36 | 0:18:26 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4105654 | 2019-07-09 13:04:58 | 2019-07-10 05:12:46 | 2019-07-10 05:42:46 | 0:30:00 | 0:17:06 | 0:12:54 | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/crush.yaml} | 1 | |
pass | 4105655 | 2019-07-09 13:04:59 | 2019-07-10 05:16:54 | 2019-07-10 08:10:55 | 2:54:01 | 2:32:57 | 0:21:04 | mira | master | ubuntu | 18.04 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashosds-health.yaml} | 4 | |
pass | 4105656 | 2019-07-09 13:04:59 | 2019-07-10 05:32:30 | 2019-07-10 05:52:29 | 0:19:59 | 0:09:06 | 0:10:53 | mira | master | ubuntu | 16.04 | rados/singleton/{all/test-crash.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105657 | 2019-07-09 13:05:00 | 2019-07-10 05:35:00 | 2019-07-10 08:17:02 | 2:42:02 | 2:23:56 | 0:18:06 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
pass | 4105658 | 2019-07-09 13:05:01 | 2019-07-10 05:42:47 | 2019-07-10 06:22:47 | 0:40:00 | 0:28:18 | 0:11:42 | mira | master | centos | 7.6 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105659 | 2019-07-09 13:05:02 | 2019-07-10 05:45:42 | 2019-07-10 06:47:42 | 1:02:00 | 0:37:25 | 0:24:35 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
pass | 4105660 | 2019-07-09 13:05:03 | 2019-07-10 05:52:31 | 2019-07-10 06:22:30 | 0:29:59 | 0:20:30 | 0:09:29 | mira | master | rhel | 7.6 | rados/multimon/{clusters/9.yaml msgr-failures/few.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_clock_no_skews.yaml} | 3 | |
fail | 4105661 | 2019-07-09 13:05:04 | 2019-07-10 05:56:38 | 2019-07-10 06:52:38 | 0:56:00 | 0:35:34 | 0:20:26 | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | |
Failure Reason:
Command failed on mira077 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest' |
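The teardown command above fails with status 1 whenever `/home/ubuntu/cephtest` still contains files: `rmdir` only removes empty directories, and with `a ; b` the overall exit status is that of `b`. A small sketch of the same semantics using Python's `os.rmdir` (paths here are illustrative, not the actual test directory):

```python
import os
import tempfile

# rmdir refuses non-empty directories, just like the shell utility.
d = tempfile.mkdtemp()
open(os.path.join(d, "leftover.log"), "w").close()
try:
    os.rmdir(d)  # raises OSError: directory not empty
except OSError as exc:
    print("rmdir failed:", exc)

os.remove(os.path.join(d, "leftover.log"))
os.rmdir(d)      # succeeds once the directory is empty
```

The preceding `find ... -ls` in the failed command exists to list whatever was left behind, which is why the log pairs the listing with the `rmdir`.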
pass | 4105662 | 2019-07-09 13:05:04 | 2019-07-10 06:22:38 | 2019-07-10 06:44:37 | 0:21:59 | 0:11:02 | 0:10:57 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4K_seq_read.yaml} | 1 | |
pass | 4105663 | 2019-07-09 13:05:05 | 2019-07-10 06:22:49 | 2019-07-10 07:26:49 | 1:04:00 | 0:29:12 | 0:34:48 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
pass | 4105664 | 2019-07-09 13:05:06 | 2019-07-10 06:45:20 | 2019-07-10 07:11:19 | 0:25:59 | 0:13:57 | 0:12:02 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_python.yaml} | 2 | |
pass | 4105665 | 2019-07-09 13:05:07 | 2019-07-10 06:47:09 | 2019-07-10 07:25:12 | 0:38:03 | 0:26:41 | 0:11:22 | mira | master | ubuntu | 16.04 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/sync.yaml workloads/snaps-few-objects.yaml} | 2 | |
fail | 4105666 | 2019-07-09 13:05:08 | 2019-07-10 06:47:44 | 2019-07-10 07:07:43 | 0:19:59 | 0:07:12 | 0:12:47 | mira | master | ubuntu | 16.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_16.04.yaml} tasks/progress.yaml} | 2 | |
Failure Reason:
Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress) |
pass | 4105667 | 2019-07-09 13:05:09 | 2019-07-10 06:52:53 | 2019-07-10 08:04:53 | 1:12:00 | 0:59:18 | 0:12:42 | mira | master | centos | 7.6 | rados/singleton/{all/thrash-backfill-full.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
fail | 4105668 | 2019-07-09 13:05:09 | 2019-07-10 07:07:45 | 2019-07-10 11:05:48 | 3:58:03 | 3:17:13 | 0:40:50 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
Failure Reason:
timeout expired in wait_for_all_osds_up |
pass | 4105669 | 2019-07-09 13:05:10 | 2019-07-10 07:11:22 | 2019-07-10 07:43:24 | 0:32:02 | 0:20:30 | 0:11:32 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
pass | 4105670 | 2019-07-09 13:05:11 | 2019-07-10 07:20:35 | 2019-07-10 08:18:34 | 0:57:59 | 0:42:54 | 0:15:05 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4105671 | 2019-07-09 13:05:12 | 2019-07-10 07:25:30 | 2019-07-10 10:15:33 | 2:50:03 | 2:13:16 | 0:36:47 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4105672 | 2019-07-09 13:05:12 | 2019-07-10 07:27:05 | 2019-07-10 08:51:05 | 1:24:00 | 0:32:57 | 0:51:03 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |
Failure Reason:
not clean before minsize thrashing starts |
pass | 4105673 | 2019-07-09 13:05:13 | 2019-07-10 07:39:26 | 2019-07-10 09:01:26 | 1:22:00 | 0:31:45 | 0:50:15 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
pass | 4105674 | 2019-07-09 13:05:14 | 2019-07-10 07:43:25 | 2019-07-10 10:53:27 | 3:10:02 | 2:30:07 | 0:39:55 | mira | master | rhel | 7.6 | rados/singleton/{all/thrash-eio.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
pass | 4105675 | 2019-07-09 13:05:15 | 2019-07-10 08:05:08 | 2019-07-10 08:29:07 | 0:23:59 | 0:13:09 | 0:10:50 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_rand_read.yaml} | 1 | |
pass | 4105676 | 2019-07-09 13:05:16 | 2019-07-10 08:10:57 | 2019-07-10 08:52:57 | 0:42:00 | 0:24:32 | 0:17:28 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
pass | 4105677 | 2019-07-09 13:05:16 | 2019-07-10 08:17:18 | 2019-07-10 09:03:18 | 0:46:00 | 0:25:45 | 0:20:15 | mira | master | centos | 7.6 | rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105678 | 2019-07-09 13:05:17 | 2019-07-10 08:18:51 | 2019-07-10 11:10:53 | 2:52:02 | 2:25:17 | 0:26:45 | mira | master | rhel | 7.6 | rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
pass | 4105679 | 2019-07-09 13:05:18 | 2019-07-10 08:29:22 | 2019-07-10 16:45:29 | 8:16:07 | 0:43:24 | 7:32:43 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
pass | 4105680 | 2019-07-09 13:05:19 | 2019-07-10 08:51:15 | 2019-07-10 11:33:16 | 2:42:01 | 2:23:44 | 0:18:17 | mira | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_stress_watch.yaml} | 2 | |
fail | 4105681 | 2019-07-09 13:05:20 | 2019-07-10 08:52:58 | 2019-07-10 09:20:58 | 0:28:00 | 0:15:49 | 0:12:11 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/prometheus.yaml} | 2 | |
Failure Reason:
Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus) |
pass | 4105682 | 2019-07-09 13:05:20 | 2019-07-10 09:01:28 | 2019-07-10 09:47:27 | 0:45:59 | 0:31:00 | 0:14:59 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
pass | 4105683 | 2019-07-09 13:05:21 | 2019-07-10 09:03:19 | 2019-07-10 09:55:19 | 0:52:00 | 0:15:06 | 0:36:54 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105684 | 2019-07-09 13:05:22 | 2019-07-10 09:21:14 | 2019-07-10 10:23:14 | 1:02:00 | 0:25:03 | 0:36:57 | mira | master | ubuntu | 18.04 | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
pass | 4105685 | 2019-07-09 13:05:23 | 2019-07-10 09:47:29 | 2019-07-10 11:31:30 | 1:44:01 | 0:12:34 | 1:31:27 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 4105686 | 2019-07-09 13:05:24 | 2019-07-10 09:55:34 | 2019-07-10 10:23:34 | 0:28:00 | 0:16:44 | 0:11:16 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4M_seq_read.yaml} | 1 | |
pass | 4105687 | 2019-07-09 13:05:24 | 2019-07-10 10:15:46 | 2019-07-10 10:59:45 | 0:43:59 | 0:34:00 | 0:09:59 | mira | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/erasure-code.yaml} | 1 | |
pass | 4105688 | 2019-07-09 13:05:25 | 2019-07-10 10:23:28 | 2019-07-10 10:51:27 | 0:27:59 | 0:15:48 | 0:12:11 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
pass | 4105689 | 2019-07-09 13:05:26 | 2019-07-10 10:23:35 | 2019-07-10 10:51:34 | 0:27:59 | 0:20:48 | 0:07:11 | mira | master | rhel | 7.6 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4105690 | 2019-07-09 13:05:27 | 2019-07-10 10:51:42 | 2019-07-10 11:21:41 | 0:29:59 | 0:15:14 | 0:14:45 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_striper.yaml} | 2 | |
pass | 4105691 | 2019-07-09 13:05:28 | 2019-07-10 10:51:42 | 2019-07-10 11:37:41 | 0:45:59 | 0:22:38 | 0:23:21 | mira | master | centos | 7.6 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/force-sync-many.yaml workloads/pool-create-delete.yaml} | 2 | |
pass | 4105692 | 2019-07-09 13:05:29 | 2019-07-10 10:53:42 | 2019-07-10 11:51:41 | 0:57:59 | 0:06:21 | 0:51:38 | mira | master | ubuntu | 18.04 | rados/multimon/{clusters/21.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
pass | 4105693 | 2019-07-09 13:05:30 | 2019-07-10 10:59:59 | 2019-07-10 11:17:58 | 0:17:59 | 0:06:18 | 0:11:41 | mira | master | ubuntu | 16.04 | rados/singleton/{all/admin-socket.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105694 | 2019-07-09 13:05:30 | 2019-07-10 11:06:02 | 2019-07-10 11:46:01 | 0:39:59 | 0:21:37 | 0:18:22 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
pass | 4105695 | 2019-07-09 13:05:31 | 2019-07-10 11:10:55 | 2019-07-10 12:26:55 | 1:16:00 | 0:34:58 | 0:41:02 | mira | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} | 2 | |
pass | 4105696 | 2019-07-09 13:05:32 | 2019-07-10 11:18:12 | 2019-07-10 13:56:14 | 2:38:02 | 2:16:38 | 0:21:24 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 4105697 | 2019-07-09 13:05:33 | 2019-07-10 11:21:55 | 2019-07-10 12:19:55 | 0:58:00 | 0:34:43 | 0:23:17 | mira | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4105698 | 2019-07-09 13:05:34 | 2019-07-10 11:31:43 | 2019-07-10 12:41:43 | 1:10:00 | 0:35:54 | 0:34:06 | mira | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
pass | 4105699 | 2019-07-09 13:05:34 | 2019-07-10 11:33:28 | 2019-07-10 14:35:30 | 3:02:02 | 0:21:41 | 2:40:21 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
pass | 4105700 | 2019-07-09 13:05:35 | 2019-07-10 11:37:43 | 2019-07-10 12:23:43 | 0:46:00 | 0:21:08 | 0:24:52 | mira | master | | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 | |
pass | 4105701 | 2019-07-09 13:05:36 | 2019-07-10 11:46:03 | 2019-07-10 12:04:02 | 0:17:59 | 0:08:12 | 0:09:47 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/radosbench_4M_write.yaml} | 1 | |
pass | 4105702 | 2019-07-09 13:05:37 | 2019-07-10 11:51:56 | 2019-07-10 12:19:55 | 0:27:59 | 0:19:44 | 0:08:15 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4105703 | 2019-07-09 13:05:38 | 2019-07-10 12:04:17 | 2019-07-10 14:50:19 | 2:46:02 | 2:12:23 | 0:33:39 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_7.yaml} tasks/ssh_orchestrator.yaml} | 2 | |
pass | 4105704 | 2019-07-09 13:05:39 | 2019-07-10 12:20:08 | 2019-07-10 14:50:10 | 2:30:02 | 2:11:36 | 0:18:26 | mira | master | rhel | 7.6 | rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4105705 | 2019-07-09 13:05:39 | 2019-07-10 12:20:08 | 2019-07-10 12:38:07 | 0:17:59 | 0:07:51 | 0:10:08 | mira | master | ubuntu | 18.04 | rados/singleton/{all/deduptool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4105706 | 2019-07-09 13:05:40 | 2019-07-10 12:23:56 | 2019-07-10 15:41:59 | 3:18:03 | 3:00:13 | 0:17:50 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
pass | 4105707 | 2019-07-09 13:05:41 | 2019-07-10 12:26:57 | 2019-07-10 13:28:57 | 1:02:00 | 0:52:19 | 0:09:41 | mira | master | rhel | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4105708 | 2019-07-09 13:05:42 | 2019-07-10 12:38:21 | 2019-07-10 13:04:20 | 0:25:59 | 0:14:30 | 0:11:29 | mira | master | centos | 7.6 | rados/singleton/{all/divergent_priors.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105709 | 2019-07-09 13:05:43 | 2019-07-10 12:41:57 | 2019-07-10 13:11:57 | 0:30:00 | 0:18:31 | 0:11:29 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
pass | 4105710 | 2019-07-09 13:05:44 | 2019-07-10 13:04:29 | 2019-07-10 13:22:28 | 0:17:59 | 0:09:01 | 0:08:58 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/sample_fio.yaml} | 1 | |
pass | 4105711 | 2019-07-09 13:05:44 | 2019-07-10 13:11:58 | 2019-07-10 16:06:00 | 2:54:02 | 2:36:00 | 0:18:02 | mira | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_workunit_loadgen_big.yaml} | 2 | |
pass | 4105712 | 2019-07-09 13:05:45 | 2019-07-10 13:22:30 | 2019-07-10 15:02:30 | 1:40:00 | 0:12:42 | 1:27:18 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
pass | 4105713 | 2019-07-09 13:05:46 | 2019-07-10 13:29:01 | 2019-07-10 13:44:59 | 0:15:58 | 0:07:15 | 0:08:43 | mira | master | ubuntu | 16.04 | rados/singleton/{all/divergent_priors2.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105714 | 2019-07-09 13:05:47 | 2019-07-10 13:45:14 | 2019-07-10 14:09:13 | 0:23:59 | 0:12:54 | 0:11:05 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105715 | 2019-07-09 13:05:48 | 2019-07-10 13:51:16 | 2019-07-10 14:35:15 | 0:43:59 | 0:28:38 | 0:15:21 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
pass | 4105716 | 2019-07-09 13:05:49 | 2019-07-10 13:56:16 | 2019-07-10 14:40:15 | 0:43:59 | 0:22:30 | 0:21:29 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/workunits.yaml} | 2 | |
pass | 4105717 | 2019-07-09 13:05:49 | 2019-07-10 14:09:31 | 2019-07-10 15:33:31 | 1:24:00 | 0:11:15 | 1:12:45 | mira | master | ubuntu | 16.04 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 4105718 | 2019-07-09 13:05:50 | 2019-07-10 14:35:28 | 2019-07-10 15:07:27 | 0:31:59 | 0:19:27 | 0:12:32 | mira | master | centos | 7.6 | rados/singleton/{all/dump-stuck.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105719 | 2019-07-09 13:05:51 | 2019-07-10 14:35:31 | 2019-07-10 15:07:31 | 0:32:00 | 0:21:23 | 0:10:37 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/sample_radosbench.yaml} | 1 | |
pass | 4105720 | 2019-07-09 13:05:52 | 2019-07-10 14:40:29 | 2019-07-10 15:12:28 | 0:31:59 | 0:19:10 | 0:12:49 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
pass | 4105721 | 2019-07-09 13:05:53 | 2019-07-10 14:50:17 | 2019-07-10 15:36:17 | 0:46:00 | 0:32:45 | 0:13:15 | mira | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=2-m=1.yaml} | 2 | |
pass | 4105722 | 2019-07-09 13:05:53 | 2019-07-10 14:50:20 | 2019-07-10 15:38:20 | 0:48:00 | 0:25:46 | 0:22:14 | mira | master | ubuntu | 16.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4105723 | 2019-07-09 13:05:54 | 2019-07-10 15:02:32 | 2019-07-10 15:42:31 | 0:39:59 | 0:25:37 | 0:14:22 | mira | master | centos | 7.6 | rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105724 | 2019-07-09 13:05:55 | 2019-07-10 15:07:42 | 2019-07-10 15:41:41 | 0:33:59 | 0:20:31 | 0:13:28 | mira | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/misc.yaml} | 1 | |
pass | 4105725 | 2019-07-09 13:05:56 | 2019-07-10 15:07:42 | 2019-07-10 16:31:42 | 1:24:00 | 0:36:26 | 0:47:34 | mira | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
fail | 4105726 | 2019-07-09 13:05:57 | 2019-07-10 15:12:30 | 2019-07-10 16:20:30 | 1:08:00 | 0:37:54 | 0:30:06 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |
Failure Reason:
SSH connection to mira052 was lost: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --pool-snaps --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op delete 50 --pool unique_pool_0'
pass | 4105727 | 2019-07-09 13:05:58 | 2019-07-10 15:33:33 | 2019-07-10 17:03:33 | 1:30:00 | 1:16:10 | 0:13:50 | mira | master | centos | 7.6 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105728 | 2019-07-09 13:05:59 | 2019-07-10 15:36:32 | 2019-07-10 18:08:33 | 2:32:01 | 2:13:13 | 0:18:48 | mira | master | rhel | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/many.yaml workloads/rados_5925.yaml} | 2 | |
pass | 4105729 | 2019-07-09 13:05:59 | 2019-07-10 15:38:35 | 2019-07-10 16:14:34 | 0:35:59 | 0:22:00 | 0:13:59 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_mix.yaml} | 2 | |
dead | 4105730 | 2019-07-09 13:06:00 | 2019-07-10 15:41:43 | 2019-07-11 03:44:11 | 12:02:28 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |||
pass | 4105731 | 2019-07-09 13:06:01 | 2019-07-10 15:42:11 | 2019-07-10 16:06:10 | 0:23:59 | 0:13:42 | 0:10:17 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105732 | 2019-07-09 13:06:02 | 2019-07-10 15:42:33 | 2019-07-10 16:34:32 | 0:51:59 | 0:15:54 | 0:36:05 | mira | master | centos | 7.6 | rados/multimon/{clusters/3.yaml msgr-failures/few.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/mon_recovery.yaml} | 2 | |
dead | 4105733 | 2019-07-09 13:06:03 | 2019-07-10 16:06:16 | 2019-07-10 16:28:15 | 0:21:59 | mira | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | — | ||||
Failure Reason:
reached maximum tries (100) after waiting for 600 seconds
pass | 4105734 | 2019-07-09 13:06:03 | 2019-07-10 16:06:16 | 2019-07-10 17:30:16 | 1:24:00 | 0:27:10 | 0:56:50 | mira | master | ubuntu | 16.04 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{ubuntu_16.04.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
pass | 4105735 | 2019-07-09 13:06:04 | 2019-07-10 16:14:49 | 2019-07-10 16:30:48 | 0:15:59 | 0:06:06 | 0:09:53 | mira | master | ubuntu | 16.04 | rados/singleton/{all/erasure-code-nonregression.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml}} | 1 | |
pass | 4105736 | 2019-07-09 13:06:05 | 2019-07-10 16:20:31 | 2019-07-10 16:52:31 | 0:32:00 | 0:17:02 | 0:14:58 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/crash.yaml} | 2 | |
pass | 4105737 | 2019-07-09 13:06:06 | 2019-07-10 16:28:33 | 2019-07-10 16:56:32 | 0:27:59 | 0:16:45 | 0:11:14 | mira | master | ubuntu | 16.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_16.04.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
pass | 4105738 | 2019-07-09 13:06:07 | 2019-07-10 16:31:04 | 2019-07-10 19:15:07 | 2:44:03 | 2:22:21 | 0:21:42 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
pass | 4105739 | 2019-07-09 13:06:08 | 2019-07-10 16:31:44 | 2019-07-10 17:45:44 | 1:14:00 | 1:02:29 | 0:11:31 | mira | master | ubuntu | 18.04 | rados/singleton/{all/lost-unfound-delete.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4105740 | 2019-07-09 13:06:08 | 2019-07-10 16:34:45 | 2019-07-10 17:16:44 | 0:41:59 | 0:28:15 | 0:13:44 | mira | master | ubuntu | 16.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_16.04.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
pass | 4105741 | 2019-07-09 13:06:09 | 2019-07-10 16:45:45 | 2019-07-10 17:39:45 | 0:54:00 | 0:36:43 | 0:17:17 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
pass | 4105742 | 2019-07-09 13:06:10 | 2019-07-10 16:52:33 | 2019-07-10 19:00:34 | 2:08:01 | 1:21:46 | 0:46:15 | mira | master | centos | 7.6 | rados/singleton/{all/lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105743 | 2019-07-09 13:06:11 | 2019-07-10 16:56:34 | 2019-07-10 17:42:33 | 0:45:59 | 0:17:04 | 0:28:55 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4105744 | 2019-07-09 13:06:12 | 2019-07-10 17:03:50 | 2019-07-10 17:51:49 | 0:47:59 | 0:25:50 | 0:22:09 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_write.yaml} | 1 |