User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
kchai | 2019-08-07 07:52:55 | 2019-08-07 07:53:11 | 2019-08-08 00:08:04 | 16:14:53 | rados | wip-kefu-testing-2019-08-06-1102 | mira | b796290 | 45 | 48 | 1 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 4193165 | 2019-08-07 07:53:07 | 2019-08-07 07:53:10 | 2019-08-07 08:23:09 | 0:29:59 | 0:14:40 | 0:15:19 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/crash.yaml} | 2 | |
Failure Reason: Test failure: setUpClass (tasks.mgr.test_crash.TestCrash)
fail | 4193166 | 2019-08-07 07:53:08 | 2019-08-07 07:53:10 | 2019-08-07 11:07:12 | 3:14:02 | 2:54:51 | 0:19:11 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason: Command failed on mira061 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4193167 | 2019-08-07 07:53:09 | 2019-08-07 07:53:10 | 2019-08-07 11:09:13 | 3:16:03 | 2:55:37 | 0:20:26 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/set-chunk-promote-flush.yaml} | 2 | |
fail | 4193168 | 2019-08-07 07:53:10 | 2019-08-07 07:53:11 | 2019-08-07 08:35:11 | 0:42:00 | 0:27:52 | 0:14:08 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
Failure Reason: "2019-08-07T08:30:17.640703+0000 mon.a (mon.0) 1653 : cluster [ERR] Health check failed: Module 'balancer' has failed: Remote method threw exception: Traceback (most recent call last):" in cluster log
pass | 4193169 | 2019-08-07 07:53:11 | 2019-08-07 07:53:13 | 2019-08-07 08:53:24 | 1:00:11 | 0:51:27 | 0:08:44 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
pass | 4193170 | 2019-08-07 07:53:12 | 2019-08-07 07:53:14 | 2019-08-07 11:15:29 | 3:22:15 | 3:12:38 | 0:09:37 | mira | master | rhel | 7.6 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} thrashosds-health.yaml} | 4 | |
fail | 4193171 | 2019-08-07 07:53:13 | 2019-08-07 07:53:14 | 2019-08-07 08:31:14 | 0:38:00 | 0:30:01 | 0:07:59 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
Failure Reason: Command failed on mira082 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
fail | 4193172 | 2019-08-07 07:53:14 | 2019-08-07 07:53:15 | 2019-08-07 11:05:17 | 3:12:02 | 2:52:59 | 0:19:03 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_7.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason: Test failure: setUpClass (tasks.mgr.test_dashboard.TestDashboard)
pass | 4193173 | 2019-08-07 07:53:15 | 2019-08-07 07:53:16 | 2019-08-07 08:43:21 | 0:50:05 | 0:33:45 | 0:16:20 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
fail | 4193174 | 2019-08-07 07:53:16 | 2019-08-07 07:53:17 | 2019-08-07 08:25:17 | 0:32:00 | 0:20:39 | 0:11:21 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
Failure Reason: "2019-08-07T08:17:27.031994+0000 mon.b (mon.0) 1027 : cluster [ERR] Health check failed: Module 'balancer' has failed: Remote method threw exception: Traceback (most recent call last):" in cluster log
fail | 4193175 | 2019-08-07 07:53:17 | 2019-08-07 08:23:11 | 2019-08-07 10:15:11 | 1:52:00 | 1:43:52 | 0:08:08 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |
Failure Reason: not all PGs are active or peered 15 seconds after marking out OSDs
pass | 4193176 | 2019-08-07 07:53:18 | 2019-08-07 08:25:18 | 2019-08-07 10:03:20 | 1:38:02 | 1:29:15 | 0:08:47 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4193177 | 2019-08-07 07:53:19 | 2019-08-07 08:31:29 | 2019-08-07 09:45:28 | 1:13:59 | 1:06:13 | 0:07:46 | mira | master | rhel | 7.6 | rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4193178 | 2019-08-07 07:53:20 | 2019-08-07 08:35:26 | 2019-08-07 10:05:27 | 1:30:01 | 1:21:00 | 0:09:01 | mira | master | rhel | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/many.yaml workloads/rados_mon_workunits.yaml} | 2 | |
fail | 4193179 | 2019-08-07 07:53:21 | 2019-08-07 08:43:24 | 2019-08-07 12:05:26 | 3:22:02 | 2:51:00 | 0:31:02 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/failover.yaml} | 2 | |
Failure Reason: Test failure: setUpClass (tasks.mgr.test_failover.TestFailover)
fail | 4193180 | 2019-08-07 07:53:22 | 2019-08-07 08:53:39 | 2019-08-07 09:53:39 | 1:00:00 | 0:53:17 | 0:06:43 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4K_rand_rw.yaml} | 1 | |
Failure Reason: Command failed on mira035 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4193181 | 2019-08-07 07:53:23 | 2019-08-07 09:45:43 | 2019-08-07 11:13:43 | 1:28:00 | 1:19:36 | 0:08:24 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
pass | 4193182 | 2019-08-07 07:53:24 | 2019-08-07 09:53:53 | 2019-08-07 11:05:53 | 1:12:00 | 1:03:24 | 0:08:36 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 4193183 | 2019-08-07 07:53:25 | 2019-08-07 10:03:29 | 2019-08-07 11:25:29 | 1:22:00 | 1:13:22 | 0:08:38 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
pass | 4193184 | 2019-08-07 07:53:26 | 2019-08-07 10:05:28 | 2019-08-07 10:43:28 | 0:38:00 | 0:20:48 | 0:17:12 | mira | master | centos | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 4193185 | 2019-08-07 07:53:27 | 2019-08-07 10:15:26 | 2019-08-07 12:13:27 | 1:58:01 | 1:52:05 | 0:05:56 | mira | master | rhel | 7.6 | rados/singleton/{all/lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4193186 | 2019-08-07 07:53:27 | 2019-08-07 10:44:02 | 2019-08-07 11:46:01 | 1:01:59 | 0:54:19 | 0:07:40 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
Failure Reason: "2019-08-07T11:34:28.618453+0000 mon.b (mon.0) 363 : cluster [ERR] Health check failed: Module 'balancer' has failed: Remote method threw exception: Traceback (most recent call last):" in cluster log
fail | 4193187 | 2019-08-07 07:53:28 | 2019-08-07 11:05:26 | 2019-08-07 12:35:27 | 1:30:01 | 1:00:06 | 0:29:55 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
Failure Reason: "2019-08-07T11:58:39.199558+0000 mon.a (mon.0) 892 : cluster [ERR] Health check failed: Module 'balancer' has failed: Remote method threw exception: Traceback (most recent call last):" in cluster log
fail | 4193188 | 2019-08-07 07:53:29 | 2019-08-07 11:05:55 | 2019-08-07 11:33:54 | 0:27:59 | 0:14:58 | 0:13:01 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/insights.yaml} | 2 | |
Failure Reason: Test failure: setUpClass (tasks.mgr.test_insights.TestInsights)
fail | 4193189 | 2019-08-07 07:53:30 | 2019-08-07 11:07:13 | 2019-08-07 12:31:14 | 1:24:01 | 1:15:42 | 0:08:19 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
Failure Reason: "2019-08-07T12:04:36.972802+0000 mgr.x (mgr.4105) 60 : cluster [ERR] Unhandled exception from module 'balancer' while running on mgr.x: invalid literal for int() with base 10: 'a'" in cluster log
pass | 4193190 | 2019-08-07 07:53:31 | 2019-08-07 11:09:14 | 2019-08-07 12:15:14 | 1:06:00 | 0:57:33 | 0:08:27 | mira | master | rhel | 7.6 | rados/singleton/{all/max-pg-per-osd.from-primary.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4193191 | 2019-08-07 07:53:32 | 2019-08-07 11:13:53 | 2019-08-07 12:25:53 | 1:12:00 | 1:05:04 | 0:06:56 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
pass | 4193192 | 2019-08-07 07:53:33 | 2019-08-07 11:15:44 | 2019-08-07 12:07:44 | 0:52:00 | 0:43:15 | 0:08:45 | mira | master | rhel | 7.6 | rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4193193 | 2019-08-07 07:53:34 | 2019-08-07 11:25:31 | 2019-08-07 12:33:30 | 1:07:59 | 1:00:21 | 0:07:38 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4193194 | 2019-08-07 07:53:35 | 2019-08-07 11:33:56 | 2019-08-07 12:19:55 | 0:45:59 | 0:35:23 | 0:10:36 | mira | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_python.yaml} | 2 | |
fail | 4193195 | 2019-08-07 07:53:36 | 2019-08-07 11:46:03 | 2019-08-07 12:20:02 | 0:33:59 | 0:23:17 | 0:10:42 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
Failure Reason: "2019-08-07T12:08:08.497282+0000 mon.b (mon.0) 1053 : cluster [ERR] Health check failed: Module 'balancer' has failed: Remote method threw exception: Traceback (most recent call last):" in cluster log
dead | 4193196 | 2019-08-07 07:53:37 | 2019-08-07 12:05:42 | 2019-08-08 00:08:04 | 12:02:22 | | | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
pass | 4193197 | 2019-08-07 07:53:38 | 2019-08-07 12:07:49 | 2019-08-07 12:59:48 | 0:51:59 | 0:44:58 | 0:07:01 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
pass | 4193198 | 2019-08-07 07:53:39 | 2019-08-07 12:13:42 | 2019-08-07 12:55:41 | 0:41:59 | 0:34:36 | 0:07:23 | mira | master | rhel | 7.6 | rados/singleton/{all/mon-auth-caps.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4193199 | 2019-08-07 07:53:39 | 2019-08-07 12:15:29 | 2019-08-07 13:17:29 | 1:02:00 | 0:52:19 | 0:09:41 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
fail | 4193200 | 2019-08-07 07:53:40 | 2019-08-07 12:19:57 | 2019-08-07 12:37:56 | 0:17:59 | 0:06:28 | 0:11:31 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 | |
Failure Reason: Test failure: setUpClass (tasks.mgr.test_module_selftest.TestModuleSelftest)
fail | 4193201 | 2019-08-07 07:53:41 | 2019-08-07 12:20:04 | 2019-08-07 13:00:03 | 0:39:59 | 0:31:51 | 0:08:08 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} | 2 | |
Failure Reason: "2019-08-07T12:53:23.046329+0000 mgr.x (mgr.4107) 49 : cluster [ERR] Unhandled exception from module 'balancer' while running on mgr.x: invalid literal for int() with base 10: 'c'" in cluster log
pass | 4193202 | 2019-08-07 07:53:42 | 2019-08-07 12:25:55 | 2019-08-07 15:05:56 | 2:40:01 | 2:21:06 | 0:18:55 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
fail | 4193203 | 2019-08-07 07:53:43 | 2019-08-07 12:31:29 | 2019-08-07 13:21:29 | 0:50:00 | 0:37:44 | 0:12:16 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
Failure Reason: "2019-08-07T12:57:26.307337+0000 mon.a (mon.0) 264 : cluster [ERR] Health check failed: Module 'balancer' has failed: Remote method threw exception: Traceback (most recent call last):" in cluster log
fail | 4193204 | 2019-08-07 07:53:44 | 2019-08-07 12:33:46 | 2019-08-07 13:37:46 | 1:04:00 | 0:39:17 | 0:24:43 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
Failure Reason: "2019-08-07T13:16:50.815853+0000 mon.c (mon.0) 501 : cluster [ERR] Health check failed: Module 'balancer' has failed: Remote method threw exception: Traceback (most recent call last):" in cluster log
pass | 4193205 | 2019-08-07 07:53:45 | 2019-08-07 12:35:28 | 2019-08-07 13:15:28 | 0:40:00 | 0:33:37 | 0:06:23 | mira | master | rhel | 7.6 | rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4193206 | 2019-08-07 07:53:45 | 2019-08-07 12:37:57 | 2019-08-07 13:19:57 | 0:42:00 | 0:34:26 | 0:07:34 | mira | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/misc.yaml} | 1 | |
fail | 4193207 | 2019-08-07 07:53:46 | 2019-08-07 12:56:20 | 2019-08-07 13:12:18 | 0:15:58 | 0:06:34 | 0:09:24 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/orchestrator_cli.yaml} | 2 | |
Failure Reason: Test failure: setUpClass (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)
fail | 4193208 | 2019-08-07 07:53:47 | 2019-08-07 12:59:50 | 2019-08-07 16:07:52 | 3:08:02 | 2:49:29 | 0:18:33 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason: "2019-08-07T15:39:07.490044+0000 mgr.x (mgr.4110) 287 : cluster [ERR] Unhandled exception from module 'balancer' while running on mgr.x: invalid literal for int() with base 10: 'f'" in cluster log
fail | 4193209 | 2019-08-07 07:53:48 | 2019-08-07 13:00:05 | 2019-08-07 13:26:04 | 0:25:59 | 0:19:47 | 0:06:12 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4K_seq_read.yaml} | 1 | |
Failure Reason: Command failed on mira066 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
fail | 4193210 | 2019-08-07 07:53:49 | 2019-08-07 13:12:20 | 2019-08-07 15:56:21 | 2:44:01 | 2:25:10 | 0:18:51 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
Failure Reason: "2019-08-07T15:48:07.325934+0000 mgr.x (mgr.4098) 187 : cluster [ERR] Unhandled exception from module 'balancer' while running on mgr.x: Remote method threw exception: Traceback (most recent call last):" in cluster log
fail | 4193211 | 2019-08-07 07:53:50 | 2019-08-07 13:15:44 | 2019-08-07 15:31:45 | 2:16:01 | 2:03:04 | 0:12:57 | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on mira061 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b796290a0c48a48174866475dbb076751167920b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 4193212 | 2019-08-07 07:53:51 | 2019-08-07 13:17:45 | 2019-08-07 13:45:44 | 0:27:59 | 0:21:15 | 0:06:44 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_7.yaml} tasks/progress.yaml} | 2 | |
Failure Reason: Test failure: setUpClass (tasks.mgr.test_progress.TestProgress)
fail | 4193213 | 2019-08-07 07:53:52 | 2019-08-07 13:19:58 | 2019-08-07 13:47:58 | 0:28:00 | 0:20:24 | 0:07:36 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4M_rand_read.yaml} | 1 | |
Failure Reason: Command failed on mira035 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
fail | 4193214 | 2019-08-07 07:53:52 | 2019-08-07 13:21:44 | 2019-08-07 14:03:43 | 0:41:59 | 0:21:24 | 0:20:35 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
Failure Reason: "2019-08-07T13:55:46.673867+0000 mgr.x (mgr.4104) 54 : cluster [ERR] Unhandled exception from module 'balancer' while running on mgr.x: invalid literal for int() with base 10: 'c'" in cluster log
fail | 4193215 | 2019-08-07 07:53:53 | 2019-08-07 13:26:06 | 2019-08-07 14:00:05 | 0:33:59 | 0:20:37 | 0:13:22 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/set-chunk-promote-flush.yaml} | 2 | |
Failure Reason: "2019-08-07T13:51:08.740832+0000 mon.a (mon.0) 265 : cluster [ERR] Health check failed: Module 'balancer' has failed: Remote method threw exception: Traceback (most recent call last):" in cluster log
pass | 4193216 | 2019-08-07 07:53:54 | 2019-08-07 13:38:02 | 2019-08-07 14:28:01 | 0:49:59 | 0:42:17 | 0:07:42 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
fail | 4193217 | 2019-08-07 07:53:55 | 2019-08-07 13:45:58 | 2019-08-07 14:35:58 | 0:50:00 | 0:34:06 | 0:15:54 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
Failure Reason: "2019-08-07T14:18:42.173174+0000 mon.a (mon.0) 477 : cluster [ERR] Health check failed: Module 'balancer' has failed: Remote method threw exception: Traceback (most recent call last):" in cluster log
pass | 4193218 | 2019-08-07 07:53:56 | 2019-08-07 13:47:59 | 2019-08-07 14:19:58 | 0:31:59 | 0:24:54 | 0:07:05 | mira | master | rhel | 7.6 | rados/singleton/{all/pg-autoscaler.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
fail | 4193219 | 2019-08-07 07:53:57 | 2019-08-07 14:00:20 | 2019-08-07 14:26:19 | 0:25:59 | 0:20:13 | 0:05:46 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4M_seq_read.yaml} | 1 | |
Failure Reason: Command failed on mira066 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4193220 | 2019-08-07 07:53:58 | 2019-08-07 14:03:58 | 2019-08-07 15:13:58 | 1:10:00 | 1:03:50 | 0:06:10 | mira | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/mon.yaml} | 1 | |
pass | 4193221 | 2019-08-07 07:53:59 | 2019-08-07 14:20:13 | 2019-08-07 17:06:14 | 2:46:01 | 2:27:27 | 0:18:34 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
pass | 4193222 | 2019-08-07 07:54:00 | 2019-08-07 14:26:21 | 2019-08-07 14:54:20 | 0:27:59 | 0:22:15 | 0:05:44 | mira | master | rhel | 7.6 | rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4193223 | 2019-08-07 07:54:01 | 2019-08-07 14:28:16 | 2019-08-07 14:56:15 | 0:27:59 | 0:13:43 | 0:14:16 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/prometheus.yaml} | 2 | |
Failure Reason:
Test failure: setUpClass (tasks.mgr.test_prometheus.TestPrometheus) |
||||||||||||||
pass | 4193224 | 2019-08-07 07:54:01 | 2019-08-07 14:36:09 | 2019-08-07 15:10:09 | 0:34:00 | 0:23:16 | 0:10:44 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
fail | 4193225 | 2019-08-07 07:54:02 | 2019-08-07 14:54:22 | 2019-08-07 15:42:22 | 0:48:00 | 0:41:35 | 0:06:25 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
Failure Reason:
not all PGs are active or peered 15 seconds after marking out OSDs |
||||||||||||||
pass | 4193226 | 2019-08-07 07:54:03 | 2019-08-07 14:56:17 | 2019-08-07 15:46:16 | 0:49:59 | 0:41:39 | 0:08:20 | mira | master | rhel | 7.6 | rados/singleton/{all/radostool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4193227 | 2019-08-07 07:54:04 | 2019-08-07 15:06:12 | 2019-08-07 15:32:11 | 0:25:59 | 0:14:54 | 0:11:05 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4M_write.yaml} | 1 | |
Failure Reason:
Command failed on mira109 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml' |
||||||||||||||
fail | 4193228 | 2019-08-07 07:54:05 | 2019-08-07 15:10:10 | 2019-08-07 15:54:10 | 0:44:00 | 0:32:03 | 0:11:57 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
Failure Reason:
"2019-08-07T15:28:10.963187+0000 mgr.y (mgr.4100) 51 : cluster [ERR] Unhandled exception from module 'balancer' while running on mgr.y: Remote method threw exception: Traceback (most recent call last):" in cluster log |
||||||||||||||
pass | 4193229 | 2019-08-07 07:54:06 | 2019-08-07 15:14:00 | 2019-08-07 15:53:59 | 0:39:59 | 0:31:23 | 0:08:36 | mira | master | rhel | 7.6 | rados/singleton/{all/random-eio.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
pass | 4193230 | 2019-08-07 07:54:07 | 2019-08-07 15:31:56 | 2019-08-07 18:15:57 | 2:44:01 | 2:26:40 | 0:17:21 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
pass | 4193231 | 2019-08-07 07:54:08 | 2019-08-07 15:32:13 | 2019-08-07 16:22:12 | 0:49:59 | 0:40:04 | 0:09:55 | mira | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
fail | 4193232 | 2019-08-07 07:54:08 | 2019-08-07 15:42:24 | 2019-08-07 16:00:23 | 0:17:59 | 0:07:06 | 0:10:53 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/ssh_orchestrator.yaml} | 2 | |
Failure Reason:
Test failure: setUpClass (tasks.mgr.test_ssh_orchestrator.TestOrchestratorCli) |
||||||||||||||
pass | 4193233 | 2019-08-07 07:54:09 | 2019-08-07 15:46:18 | 2019-08-07 18:32:20 | 2:46:02 | 2:27:13 | 0:18:49 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_omap_write.yaml} | 1 | |
fail | 4193234 | 2019-08-07 07:54:10 | 2019-08-07 15:54:01 | 2019-08-07 18:18:02 | 2:24:01 | 2:06:53 | 0:17:08 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
Failure Reason:
Ansible disk preparation on the test node partially failed: 'sgdisk --zap-all' succeeded on /dev/sdd, /dev/sde, /dev/sdf, /dev/sdb and /dev/sdc (/dev/sda was skipped), but returned rc 2 on the multipath device /dev/dm-0 ("Problem opening /dev/dm-0 for reading! Error is 2. The specified file does not exist!" ... "Warning! MBR not overwritten! Error is 2!"). Teuthology's callback_plugins/failure_log.py then crashed while dumping the failure with yaml.safe_dump: RepresenterError: ('cannot represent an object', u'sdd') |
||||||||||||||
pass | 4193235 | 2019-08-07 07:54:11 | 2019-08-07 15:54:11 | 2019-08-07 16:46:11 | 0:52:00 | 0:39:58 | 0:12:02 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
fail | 4193236 | 2019-08-07 07:54:12 | 2019-08-07 15:56:23 | 2019-08-07 16:14:22 | 0:17:59 | 0:07:40 | 0:10:19 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_fio.yaml} | 1 | |
Failure Reason:
Command failed on mira038 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml' |
||||||||||||||
pass | 4193237 | 2019-08-07 07:54:13 | 2019-08-07 16:00:25 | 2019-08-07 16:46:24 | 0:45:59 | 0:25:09 | 0:20:50 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
pass | 4193238 | 2019-08-07 07:54:14 | 2019-08-07 16:07:53 | 2019-08-07 16:37:53 | 0:30:00 | 0:24:17 | 0:05:43 | mira | master | rhel | 7.6 | rados/singleton/{all/test-crash.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4193239 | 2019-08-07 07:54:14 | 2019-08-07 16:14:24 | 2019-08-07 17:12:23 | 0:57:59 | 0:32:04 | 0:25:55 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
Failure Reason:
"2019-08-07T16:57:25.492437+0000 mon.a (mon.0) 886 : cluster [ERR] Health check failed: Module 'balancer' has failed: invalid literal for int() with base 10: 'b' (MGR_MODULE_ERROR)" in cluster log |
||||||||||||||
fail | 4193240 | 2019-08-07 07:54:15 | 2019-08-07 16:22:15 | 2019-08-07 16:48:14 | 0:25:59 | 0:13:47 | 0:12:12 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/sample_radosbench.yaml} | 1 | |
Failure Reason:
Command failed on mira100 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml' |
||||||||||||||
fail | 4193241 | 2019-08-07 07:54:16 | 2019-08-07 16:38:08 | 2019-08-07 17:08:08 | 0:30:00 | 0:23:39 | 0:06:21 | mira | master | rhel | 7.6 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on mira101 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b796290a0c48a48174866475dbb076751167920b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh' |
||||||||||||||
pass | 4193242 | 2019-08-07 07:54:17 | 2019-08-07 16:46:13 | 2019-08-07 19:34:14 | 2:48:01 | 2:29:37 | 0:18:24 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
pass | 4193243 | 2019-08-07 07:54:18 | 2019-08-07 16:46:25 | 2019-08-07 17:36:25 | 0:50:00 | 0:40:48 | 0:09:12 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/multi-backfill-reject.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
pass | 4193244 | 2019-08-07 07:54:19 | 2019-08-07 16:48:18 | 2019-08-07 19:42:25 | 2:54:07 | 2:36:20 | 0:17:47 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
fail | 4193245 | 2019-08-07 07:54:20 | 2019-08-07 17:06:30 | 2019-08-07 17:42:29 | 0:35:59 | 0:14:05 | 0:21:54 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_7.yaml} tasks/crash.yaml} | 2 | |
Failure Reason:
Test failure: setUpClass (tasks.mgr.test_crash.TestCrash) |
||||||||||||||
fail | 4193246 | 2019-08-07 07:54:21 | 2019-08-07 17:08:22 | 2019-08-07 17:24:21 | 0:15:59 | 0:06:05 | 0:09:54 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason:
Command failed on mira101 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml' |
||||||||||||||
fail | 4193247 | 2019-08-07 07:54:22 | 2019-08-07 17:12:35 | 2019-08-07 17:34:35 | 0:22:00 | 0:12:31 | 0:09:29 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} | 2 | |
Failure Reason:
"2019-08-07T17:28:33.377063+0000 mon.a (mon.0) 309 : cluster [ERR] Health check failed: Module 'balancer' has failed: Remote method threw exception: Traceback (most recent call last):" in cluster log |
||||||||||||||
fail | 4193248 | 2019-08-07 07:54:23 | 2019-08-07 17:24:29 | 2019-08-07 17:40:28 | 0:15:59 | 0:06:06 | 0:09:53 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
Failure Reason:
Command failed on mira101 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml' |
||||||||||||||
pass | 4193249 | 2019-08-07 07:54:24 | 2019-08-07 17:34:36 | 2019-08-07 18:14:36 | 0:40:00 | 0:28:32 | 0:11:28 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |
fail | 4193250 | 2019-08-07 07:54:25 | 2019-08-07 17:36:31 | 2019-08-07 18:06:30 | 0:29:59 | 0:23:05 | 0:06:54 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason: Test failure: setUpClass (tasks.mgr.test_dashboard.TestDashboard)
pass | 4193251 | 2019-08-07 07:54:26 | 2019-08-07 17:40:30 | 2019-08-07 18:20:29 | 0:39:59 | 0:27:38 | 0:12:21 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
fail | 4193252 | 2019-08-07 07:54:27 | 2019-08-07 17:42:31 | 2019-08-07 18:08:30 | 0:25:59 | 0:14:47 | 0:11:12 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/fio_4K_rand_read.yaml} | 1 | |
Failure Reason: Command failed on mira035 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
fail | 4193253 | 2019-08-07 07:54:27 | 2019-08-07 18:06:34 | 2019-08-07 19:20:34 | 1:14:00 | 1:06:16 | 0:07:44 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason: "2019-08-07T18:33:30.859002+0000 mgr.x (mgr.4105) 53 : cluster [ERR] Unhandled exception from module 'balancer' while running on mgr.x: Remote method threw exception: Traceback (most recent call last):" in cluster log
pass | 4193254 | 2019-08-07 07:54:28 | 2019-08-07 18:08:35 | 2019-08-07 20:52:37 | 2:44:02 | 2:25:07 | 0:18:55 | mira | master | rhel | 7.6 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4193255 | 2019-08-07 07:54:29 | 2019-08-07 18:14:37 | 2019-08-07 18:32:36 | 0:17:59 | 0:06:31 | 0:11:28 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/failover.yaml} | 2 | |
Failure Reason: Test failure: setUpClass (tasks.mgr.test_failover.TestFailover)
fail | 4193256 | 2019-08-07 07:54:30 | 2019-08-07 18:16:12 | 2019-08-07 18:34:11 | 0:17:59 | 0:07:48 | 0:10:11 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4K_rand_rw.yaml} | 1 | |
Failure Reason: Command failed on mira109 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4193257 | 2019-08-07 07:54:31 | 2019-08-07 18:18:18 | 2019-08-07 18:48:17 | 0:29:59 | 0:18:33 | 0:11:26 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
pass | 4193258 | 2019-08-07 07:54:32 | 2019-08-07 18:20:45 | 2019-08-07 18:36:44 | 0:15:59 | 0:05:33 | 0:10:26 | mira | master | ubuntu | 18.04 | rados/multimon/{clusters/6.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_with_skews.yaml} | 2 |