User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
kchai | 2019-08-12 14:45:48 | 2019-08-12 14:46:23 | 2019-08-13 08:01:38 | 17:15:15 | rados | wip-kefu-testing-2019-08-12-1306 | mira | afac599 | 53 | 25 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 4211282 | 2019-08-12 14:46:03 | 2019-08-12 14:46:21 | 2019-08-12 15:02:20 | 0:15:59 | 0:06:52 | 0:09:07 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_write.yaml} | 1 | |
Failure Reason: Command failed on mira018 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4211283 | 2019-08-12 14:46:03 | 2019-08-12 14:46:23 | 2019-08-12 15:26:23 | 0:40:00 | 0:30:49 | 0:09:11 | mira | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_stress_watch.yaml} | 2 | |
fail | 4211284 | 2019-08-12 14:46:04 | 2019-08-12 14:46:23 | 2019-08-12 15:04:22 | 0:17:59 | 0:07:42 | 0:10:17 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_fio.yaml} | 1 | |
Failure Reason: Command failed on mira105 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
fail | 4211285 | 2019-08-12 14:46:05 | 2019-08-12 14:46:22 | 2019-08-12 15:56:22 | 1:10:00 | 0:54:30 | 0:15:30 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
Failure Reason: "2019-08-12T15:33:58.613190+0000 osd.4 (osd.4) 1 : cluster [ERR] 5.e required past_interval bounds are empty [411,395) but past_intervals is not: ([300,394] all_participants=4,8,9 intervals=([300,313] acting 4,9),([314,394] acting 8,9))" in cluster log
fail | 4211286 | 2019-08-12 14:46:06 | 2019-08-12 14:46:24 | 2019-08-12 15:12:23 | 0:25:59 | 0:20:29 | 0:05:30 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/sample_radosbench.yaml} | 1 | |
Failure Reason: Command failed on mira109 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
fail | 4211287 | 2019-08-12 14:46:07 | 2019-08-12 14:46:24 | 2019-08-12 15:12:23 | 0:25:59 | 0:15:00 | 0:10:59 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
Failure Reason: Command failed on mira064 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
fail | 4211288 | 2019-08-12 14:46:08 | 2019-08-12 14:46:22 | 2019-08-12 15:44:22 | 0:58:00 | 0:44:17 | 0:13:43 | mira | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} | 2 | |
Failure Reason: Command failed on mira059 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph status --format=json'
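Exit status 124 in the job above is the signature of coreutils `timeout`, not of `ceph` itself: `timeout` returns 124 when it kills a wrapped command for exceeding its limit. A minimal illustration, with `sleep` standing in for the hung `ceph status` call:

```shell
# `timeout` exits with status 124 when the wrapped command is killed
# for running past its limit; `sleep 5` stands in for a hung command.
timeout 0.2 sleep 5
echo "exit status: $?"   # prints: exit status: 124
```

So a 124 here means the cluster never answered `ceph status` within the 120-second window, pointing at a stuck cluster rather than a failed command.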
fail | 4211289 | 2019-08-12 14:46:08 | 2019-08-12 14:46:24 | 2019-08-12 15:14:23 | 0:27:59 | 0:15:14 | 0:12:45 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
Failure Reason: Command failed on mira082 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
fail | 4211290 | 2019-08-12 14:46:09 | 2019-08-12 14:46:23 | 2019-08-12 15:24:23 | 0:38:00 | 0:29:25 | 0:08:35 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/dashboard.yaml} | 2 | |
Failure Reason: Test failure: test_full_health (tasks.mgr.dashboard.test_health.HealthTest)
fail | 4211291 | 2019-08-12 14:46:10 | 2019-08-12 14:46:23 | 2019-08-12 15:02:22 | 0:15:59 | 0:08:00 | 0:07:59 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4K_rand_read.yaml} | 1 | |
Failure Reason: Command failed on mira063 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
fail | 4211292 | 2019-08-12 14:46:11 | 2019-08-12 15:02:36 | 2019-08-12 17:36:37 | 2:34:01 | 2:16:00 | 0:18:01 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4K_rand_rw.yaml} | 1 | |
Failure Reason: Command failed on mira018 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
fail | 4211293 | 2019-08-12 14:46:12 | 2019-08-12 15:02:36 | 2019-08-12 15:26:35 | 0:23:59 | 0:14:39 | 0:09:20 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/fio_4M_rand_read.yaml} | 1 | |
Failure Reason: Command failed on mira063 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
fail | 4211294 | 2019-08-12 14:46:13 | 2019-08-12 15:04:37 | 2019-08-12 15:22:36 | 0:17:59 | 0:07:36 | 0:10:23 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_rw.yaml} | 1 | |
Failure Reason: Command failed on mira105 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
dead | 4211295 | 2019-08-12 14:46:14 | 2019-08-12 15:12:38 | 2019-08-13 03:15:04 | 12:02:26 | | | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | 2 |
pass | 4211296 | 2019-08-12 14:46:14 | 2019-08-12 15:12:38 | 2019-08-12 17:50:39 | 2:38:01 | 2:20:16 | 0:17:45 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
fail | 4211297 | 2019-08-12 14:46:15 | 2019-08-12 15:14:39 | 2019-08-12 15:32:38 | 0:17:59 | 0:07:42 | 0:10:17 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_write.yaml} | 1 | |
Failure Reason: Command failed on mira082 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4211298 | 2019-08-12 14:46:16 | 2019-08-12 15:22:50 | 2019-08-12 16:30:50 | 1:08:00 | 1:02:24 | 0:05:36 | mira | master | rhel | 7.6 | rados/standalone/{supported-random-distro$/{rhel_7.yaml} workloads/mon.yaml} | 1 | |
pass | 4211299 | 2019-08-12 14:46:17 | 2019-08-12 15:24:38 | 2019-08-12 20:40:43 | 5:16:05 | 4:57:42 | 0:18:23 | mira | master | rhel | 7.6 | rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/nautilus.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-octopus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} thrashosds-health.yaml} | 4 | |
fail | 4211300 | 2019-08-12 14:46:18 | 2019-08-12 15:26:37 | 2019-08-12 15:56:37 | 0:30:00 | 0:16:30 | 0:13:30 | mira | master | centos | 7.6 | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason:
+ sudo adjust-ulimits ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --journal-path /var/lib/ceph/osd/ceph-0/journal --no-mon-config --op update-mon-db --mon-store-path /home/ubuntu/cephtest/mon-store

*** stack smashing detected ***: ceph-objectstore-tool terminated

======= Backtrace: =========
/lib64/libc.so.6(__fortify_fail+0x37)[0x7f54c28f0b67]
/lib64/libc.so.6(+0x117b22)[0x7f54c28f0b22]
ceph-objectstore-tool(_Z13update_mon_dbR11ObjectStoreR13OSDSuperblockRKSsS4_+0x2c51)[0x559cc8c710f1]
ceph-objectstore-tool(main+0x557e)[0x559cc8bf57de]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f54c27fb495]
ceph-objectstore-tool(+0x3cc590)[0x559cc8c25590]

(raw process memory map elided)

*** Caught signal (Aborted) ** in thread 7f54ceb75980 thread_name:ceph-objectstor
ceph version 15.0.0-3732-gafac599 (afac599ea097f1ae38f9c109fe7370ea32573468) octopus (dev)
1: (()+0xf5d0) [0x7f54c3e395d0]
2: (gsignal()+0x37) [0x7f54c280f2c7]
3: (abort()+0x148) [0x7f54c28109b8]
4: (()+0x78e17) [0x7f54c2851e17]
5: (__fortify_fail()+0x37) [0x7f54c28f0b67]
6: (()+0x117b22) [0x7f54c28f0b22]
7: (update_mon_db(ObjectStore&, OSDSuperblock&, std::string const&, std::string const&)+0x2c51) [0x559cc8c710f1]
8: (main()+0x557e) [0x559cc8bf57de]
9: (__libc_start_main()+0xf5) [0x7f54c27fb495]
10: (()+0x3cc590) [0x559cc8c25590]
pass | 4211301 | 2019-08-12 14:46:18 | 2019-08-12 15:26:37 | 2019-08-12 15:54:37 | 0:28:00 | 0:21:01 | 0:06:59 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/lazy_omap_stats_output.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
fail | 4211302 | 2019-08-12 14:46:19 | 2019-08-12 15:32:40 | 2019-08-12 15:48:39 | 0:15:59 | 0:06:58 | 0:09:01 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4K_rand_read.yaml} | 1 | |
Failure Reason: Command failed on mira063 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4211303 | 2019-08-12 14:46:20 | 2019-08-12 15:44:36 | 2019-08-12 16:20:36 | 0:36:00 | 0:27:00 | 0:09:00 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
pass | 4211304 | 2019-08-12 14:46:21 | 2019-08-12 15:48:52 | 2019-08-12 16:54:52 | 1:06:00 | 0:58:35 | 0:07:25 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |
pass | 4211305 | 2019-08-12 14:46:22 | 2019-08-12 15:54:52 | 2019-08-12 16:24:51 | 0:29:59 | 0:21:11 | 0:08:48 | mira | master | rhel | 7.6 | rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
pass | 4211306 | 2019-08-12 14:46:23 | 2019-08-12 15:56:38 | 2019-08-12 16:28:37 | 0:31:59 | 0:25:02 | 0:06:57 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_7.yaml} tasks/orchestrator_cli.yaml} | 2 | |
fail | 4211307 | 2019-08-12 14:46:24 | 2019-08-12 15:56:38 | 2019-08-12 18:28:40 | 2:32:02 | 2:14:52 | 0:17:10 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/radosbench_4K_seq_read.yaml} | 1 | |
Failure Reason: Command failed on mira110 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4211308 | 2019-08-12 14:46:25 | 2019-08-12 16:20:52 | 2019-08-12 16:54:51 | 0:33:59 | 0:24:48 | 0:09:11 | mira | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/scrub_test.yaml} | 2 | |
pass | 4211309 | 2019-08-12 14:46:26 | 2019-08-12 16:25:06 | 2019-08-12 17:17:06 | 0:52:00 | 0:41:20 | 0:10:40 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
pass | 4211310 | 2019-08-12 14:46:26 | 2019-08-12 16:28:52 | 2019-08-12 17:20:52 | 0:52:00 | 0:38:56 | 0:13:04 | mira | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
dead | 4211311 | 2019-08-12 14:46:27 | 2019-08-12 16:30:52 | 2019-08-13 04:49:29 | 12:18:37 | | | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 |
fail | 4211312 | 2019-08-12 14:46:28 | 2019-08-12 16:54:53 | 2019-08-12 17:40:53 | 0:46:00 | 0:31:36 | 0:14:24 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
Failure Reason:
Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op snap_remove 50 --op snap_create 50 --op rollback 50 --op setattr 25 --op read 100 --op copy_from 50 --op write 50 --op write_excl 50 --op rmattr 25 --op delete 50 --pool unique_pool_0'
fail | 4211313 | 2019-08-12 14:46:29 | 2019-08-12 16:54:53 | 2019-08-12 17:10:52 | 0:15:59 | 0:07:42 | 0:08:17 | mira | master | ubuntu | 18.04 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on mira063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=afac599ea097f1ae38f9c109fe7370ea32573468 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
fail | 4211314 | 2019-08-12 14:46:30 | 2019-08-12 17:11:07 | 2019-08-12 17:39:06 | 0:27:59 | 0:15:05 | 0:12:54 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/progress.yaml} | 2 | |
Failure Reason:
Test failure: test_osd_came_back (tasks.mgr.test_progress.TestProgress)
fail | 4211315 | 2019-08-12 14:46:30 | 2019-08-12 17:17:20 | 2019-08-12 17:33:19 | 0:15:59 | 0:06:52 | 0:09:07 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_rand_read.yaml} | 1 | |
Failure Reason:
Command failed on mira041 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4211316 | 2019-08-12 14:46:31 | 2019-08-12 17:21:06 | 2019-08-12 18:39:06 | 1:18:00 | 0:57:30 | 0:20:30 | mira | master | centos | 7.6 | rados/singleton/{all/thrash-backfill-full.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
pass | 4211317 | 2019-08-12 14:46:32 | 2019-08-12 17:33:34 | 2019-08-12 17:55:33 | 0:21:59 | 0:10:53 | 0:11:06 | mira | master | centos | 7.6 | rados/objectstore/{backends/keyvaluedb.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4211318 | 2019-08-12 14:46:33 | 2019-08-12 17:36:52 | 2019-08-12 18:12:51 | 0:35:59 | 0:23:16 | 0:12:43 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/msgr.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4211319 | 2019-08-12 14:46:34 | 2019-08-12 17:39:08 | 2019-08-12 20:05:09 | 2:26:01 | 2:07:00 | 0:19:01 | mira | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_api_tests.yaml} | 2 | |
Failure Reason:
Ansible disk-zapping output (truncated at the start, beginning mid-way through the /dev/sdd task result): `sgdisk --zap-all` completed successfully (rc 0) on /dev/sde, /dev/sdf, /dev/sdb and /dev/sdc, was skipped for /dev/sda ("Conditional result was False"), and failed on /dev/dm-0 with rc 2: "Problem opening /dev/dm-0 for reading! Error is 2. The specified file does not exist! Problem opening '' for writing! Program will now terminate. Warning! MBR not overwritten! Error is 2!" While dumping this failure, the logging callback /home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py itself crashed in yaml.safe_dump with RepresenterError: ('cannot represent an object', u'sdd').
pass | 4211320 | 2019-08-12 14:46:35 | 2019-08-12 17:41:08 | 2019-08-12 18:19:07 | 0:37:59 | 0:25:32 | 0:12:27 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
pass | 4211321 | 2019-08-12 14:46:35 | 2019-08-12 17:50:54 | 2019-08-12 18:38:53 | 0:47:59 | 0:34:48 | 0:13:11 | mira | master | centos | 7.6 | rados/singleton/{all/thrash-eio.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
pass | 4211322 | 2019-08-12 14:46:36 | 2019-08-12 17:55:35 | 2019-08-12 20:43:36 | 2:48:01 | 2:28:45 | 0:19:16 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
pass | 4211323 | 2019-08-12 14:46:37 | 2019-08-12 18:13:05 | 2019-08-12 21:13:07 | 3:00:02 | 2:42:35 | 0:17:27 | mira | master | rhel | 7.6 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/many.yaml workloads/snaps-few-objects.yaml} | 2 | |
pass | 4211324 | 2019-08-12 14:46:38 | 2019-08-12 18:19:09 | 2019-08-12 19:05:09 | 0:46:00 | 0:33:21 | 0:12:39 | mira | master | centos | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4211325 | 2019-08-12 14:46:39 | 2019-08-12 18:28:55 | 2019-08-12 19:22:55 | 0:54:00 | 0:37:50 | 0:16:10 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/jewel-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
fail | 4211326 | 2019-08-12 14:46:40 | 2019-08-12 18:39:08 | 2019-08-12 19:05:07 | 0:25:59 | 0:13:47 | 0:12:12 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4M_seq_read.yaml} | 1 | |
Failure Reason:
Command failed on mira117 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4211327 | 2019-08-12 14:46:40 | 2019-08-12 18:39:08 | 2019-08-12 21:17:10 | 2:38:02 | 2:20:48 | 0:17:14 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
pass | 4211328 | 2019-08-12 14:46:41 | 2019-08-12 19:05:10 | 2019-08-12 19:47:10 | 0:42:00 | 0:33:25 | 0:08:35 | mira | master | rhel | 7.6 | rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
pass | 4211329 | 2019-08-12 14:46:42 | 2019-08-12 19:05:10 | 2019-08-12 19:37:10 | 0:32:00 | 0:23:29 | 0:08:31 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_7.yaml} tasks/prometheus.yaml} | 2 | |
pass | 4211330 | 2019-08-12 14:46:43 | 2019-08-12 19:23:09 | 2019-08-12 20:09:09 | 0:46:00 | 0:38:11 | 0:07:49 | mira | master | rhel | 7.6 | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
pass | 4211331 | 2019-08-12 14:46:44 | 2019-08-12 19:37:24 | 2019-08-12 20:13:24 | 0:36:00 | 0:19:33 | 0:16:27 | mira | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 4211332 | 2019-08-12 14:46:44 | 2019-08-12 19:47:22 | 2019-08-12 22:25:24 | 2:38:02 | 2:19:26 | 0:18:36 | mira | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/rados_cls_all.yaml} | 2 | |
pass | 4211333 | 2019-08-12 14:46:45 | 2019-08-12 20:05:24 | 2019-08-12 20:57:23 | 0:51:59 | 0:42:13 | 0:09:46 | mira | master | centos | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4211334 | 2019-08-12 14:46:46 | 2019-08-12 20:09:11 | 2019-08-12 20:25:10 | 0:15:59 | 0:06:41 | 0:09:18 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_write.yaml} | 1 | |
Failure Reason:
Command failed on mira110 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4211335 | 2019-08-12 14:46:47 | 2019-08-12 20:13:26 | 2019-08-12 21:03:25 | 0:49:59 | 0:24:10 | 0:25:49 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
pass | 4211336 | 2019-08-12 14:46:48 | 2019-08-12 20:25:12 | 2019-08-12 20:51:12 | 0:26:00 | 0:15:39 | 0:10:21 | mira | master | centos | 7.6 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4211337 | 2019-08-12 14:46:49 | 2019-08-12 20:40:59 | 2019-08-12 21:40:59 | 1:00:00 | 0:39:04 | 0:20:56 | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 | |
pass | 4211338 | 2019-08-12 14:46:49 | 2019-08-12 20:43:38 | 2019-08-12 21:19:38 | 0:36:00 | 0:20:15 | 0:15:45 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/jewel.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
pass | 4211339 | 2019-08-12 14:46:50 | 2019-08-12 20:51:24 | 2019-08-12 21:45:23 | 0:53:59 | 0:42:32 | 0:11:27 | mira | master | centos | 7.6 | rados/objectstore/{backends/objectcacher-stress.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4211340 | 2019-08-12 14:46:51 | 2019-08-12 20:57:40 | 2019-08-12 21:43:40 | 0:46:00 | 0:33:24 | 0:12:36 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
dead | 4211341 | 2019-08-12 14:46:52 | 2019-08-12 21:03:27 | 2019-08-13 08:01:38 | 10:58:11 | | | mira | master | centos | 7.6 | rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4211342 | 2019-08-12 14:46:53 | 2019-08-12 21:13:22 | 2019-08-12 22:01:22 | 0:48:00 | 0:26:52 | 0:21:08 | mira | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} | 2 | |
pass | 4211343 | 2019-08-12 14:46:53 | 2019-08-12 21:17:11 | 2019-08-13 00:05:13 | 2:48:02 | 2:30:23 | 0:17:39 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
pass | 4211344 | 2019-08-12 14:46:54 | 2019-08-12 21:19:39 | 2019-08-12 21:45:39 | 0:26:00 | 0:13:50 | 0:12:10 | mira | master | centos | 7.6 | rados/singleton/{all/deduptool.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4211345 | 2019-08-12 14:46:55 | 2019-08-12 21:41:15 | 2019-08-13 00:33:22 | 2:52:07 | 2:33:19 | 0:18:48 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4211346 | 2019-08-12 14:46:56 | 2019-08-12 21:43:41 | 2019-08-12 22:15:41 | 0:32:00 | 0:19:41 | 0:12:19 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_python.yaml} | 2 | |
pass | 4211347 | 2019-08-12 14:46:57 | 2019-08-12 21:45:38 | 2019-08-12 22:31:38 | 0:46:00 | 0:39:05 | 0:06:55 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
pass | 4211348 | 2019-08-12 14:46:58 | 2019-08-12 21:45:40 | 2019-08-12 22:13:39 | 0:27:59 | 0:14:47 | 0:13:12 | mira | master | centos | 7.6 | rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4211349 | 2019-08-12 14:46:59 | 2019-08-12 22:01:23 | 2019-08-12 22:31:23 | 0:30:00 | 0:18:26 | 0:11:34 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/sample_fio.yaml} | 1 | |
Failure Reason:
Command failed on mira018 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4211350 | 2019-08-12 14:46:59 | 2019-08-12 22:13:42 | 2019-08-12 23:01:41 | 0:47:59 | 0:31:11 | 0:16:48 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
pass | 4211351 | 2019-08-12 14:47:00 | 2019-08-12 22:15:56 | 2019-08-12 22:48:00 | 0:32:04 | 0:19:49 | 0:12:15 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/fastclose.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
pass | 4211352 | 2019-08-12 14:47:01 | 2019-08-12 22:25:38 | 2019-08-12 22:57:37 | 0:31:59 | 0:23:54 | 0:08:05 | mira | master | rhel | 7.6 | rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4211353 | 2019-08-12 14:47:02 | 2019-08-12 22:31:38 | 2019-08-13 01:23:40 | 2:52:02 | 2:34:53 | 0:17:09 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
pass | 4211354 | 2019-08-12 14:47:02 | 2019-08-12 22:31:39 | 2019-08-12 22:57:39 | 0:26:00 | 0:14:29 | 0:11:31 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/workunits.yaml} | 2 | |
dead | 4211355 | 2019-08-12 14:47:03 | 2019-08-12 22:48:03 | 2019-08-13 08:00:11 | 9:12:08 | | | mira | master | centos | 7.6 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_7.yaml}} | 1 | 
pass | 4211356 | 2019-08-12 14:47:04 | 2019-08-12 22:57:52 | 2019-08-12 23:39:52 | 0:42:00 | 0:20:10 | 0:21:50 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/dedup_tier.yaml} | 2 | |
pass | 4211357 | 2019-08-12 14:47:05 | 2019-08-12 22:57:52 | 2019-08-12 23:35:52 | 0:38:00 | 0:24:47 | 0:13:13 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_stress_watch.yaml} | 2 | |
fail | 4211358 | 2019-08-12 14:47:06 | 2019-08-12 23:01:56 | 2019-08-12 23:19:55 | 0:17:59 | 0:07:01 | 0:10:58 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_radosbench.yaml} | 1 | |
Failure Reason:
Command failed on mira072 with status 1: '/home/ubuntu/cephtest/cbt/cbt.py -a /home/ubuntu/cephtest/archive/cbt /home/ubuntu/cephtest/archive/cbt/cbt_config.yaml'
pass | 4211359 | 2019-08-12 14:47:07 | 2019-08-12 23:20:09 | 2019-08-13 00:22:09 | 1:02:00 | 0:49:01 | 0:12:59 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/osd-delay.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
pass | 4211360 | 2019-08-12 14:47:07 | 2019-08-12 23:36:05 | 2019-08-13 00:00:05 | 0:24:00 | 0:13:22 | 0:10:38 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/version-number-sanity.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4211361 | 2019-08-12 14:47:08 | 2019-08-12 23:40:06 | 2019-08-13 01:12:07 | 1:32:01 | 1:10:37 | 0:21:24 | mira | master | centos | 7.6 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4211362 | 2019-08-12 14:47:09 | 2019-08-13 00:00:06 | 2019-08-13 00:28:06 | 0:28:00 | 0:13:18 | 0:14:42 | mira | master | centos | 7.6 | rados/multimon/{clusters/21.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
pass | 4211363 | 2019-08-12 14:47:10 | 2019-08-13 00:05:27 | 2019-08-13 01:11:27 | 1:06:00 | 0:49:29 | 0:16:31 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 |