Teuthology test run results. All jobs below share the same Ceph Branch (wip-aclamk-memtracking-segfault-bluestore), Suite Branch (wip-aclamk-memtracking-segfault-bluestore), Teuthology Branch (py2), and Machine type (smithi); only the OS, Description, and Failure Reason vary per job.

OS: centos 8.1
Description: rados/singleton/{all/lost-unfound-delete msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}}
Failure Reason: failed to become clean before timeout expired

OS: rhel 8.1
Description: rados/singleton/{all/lost-unfound msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}}
Failure Reason: failed to become clean before timeout expired

OS: ubuntu 18.04
Description: rados/perf/{ceph objectstore/bluestore-low-osd-mem-target openstack settings/optimized ubuntu_latest workloads/radosbench_omap_write}
Failure Reason: Found coredumps on ubuntu@smithi006.front.sepia.ceph.com

OS: centos 8.1
Description: rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}

OS: rhel 8.1
Description: rados/singleton/{all/radostool msgr-failures/many msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}}

OS: ubuntu 18.04
Description: rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2}

OS: rhel 8.1
Description: rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites}

OS: centos 8.1
Description: rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1}
Failure Reason: "2020-06-30T13:37:12.486696+0000 mon.a (mon.0) 1018 : cluster [WRN] Health check failed: 1 daemons have recently crashed (RECENT_CRASH)" in cluster log

OS: rhel 8.1
Description: rados/singleton/{all/test-crash msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}}

OS: centos 8.1
Description: rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-small-objects-fast-read}
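Each Description above is a teuthology job name: a suite path followed by a brace-delimited list of the YAML facet fragments that were combined to build the job, where nested groups such as `clusters/{fixed-2 openstack}` keep their internal spaces. As a rough illustration of that structure (this is a standalone sketch for reading these strings, not teuthology's own scheduler code, and `parse_description`/`split_facets` are hypothetical helper names):

```python
def split_facets(s):
    """Split a facet string on spaces at brace depth 0, so nested
    groups like 'clusters/{fixed-2 openstack}' stay in one piece."""
    parts, buf, depth = [], [], 0
    for ch in s:
        if ch == '{':
            depth += 1
        elif ch == '}':
            depth -= 1
        if ch == ' ' and depth == 0:
            if buf:
                parts.append(''.join(buf))
                buf = []
        else:
            buf.append(ch)
    if buf:
        parts.append(''.join(buf))
    return parts

def parse_description(desc):
    """Split a job description like 'rados/singleton/{a b c}' into
    (suite_path, [facet, ...]); returns (desc, []) if it has no
    brace-delimited facet list."""
    head, _, rest = desc.partition('/{')
    if not rest.endswith('}'):
        return desc, []
    return head, split_facets(rest[:-1])
```

For example, the first failed job above parses into the suite path `rados/singleton` plus six facets (`all/lost-unfound-delete`, `msgr-failures/few`, `msgr/async`, `objectstore/bluestore-bitmap`, `rados`, `supported-random-distro$/{centos_8}`), which is why jobs from the same suite differ only in a few facet choices.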