Fields shared by every job in this run:

Ceph Branch: wip-yuri6-testing-2024-04-02-1310
Suite Branch: wip-yuri6-testing-2024-04-02-1310
Teuthology Branch: main
Machine: smithi

Job 1
OS: centos 9.stream
Description: rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub}
Failure Reason: Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi203 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=354447ca5357e926795d009be846687f04556beb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

Job 2
OS: ubuntu 22.04
Description: rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait}

Job 3
OS: centos 9.stream
Description: rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate}
Failure Reason: "2024-04-29T20:10:38.228053+0000 mon.smithi149 (mon.0) 814 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

Job 4
OS: centos 8.stream
Description: rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps}
Failure Reason: "2024-04-29T20:30:00.000095+0000 mon.a (mon.0) 1197 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log

Job 5
OS: ubuntu 22.04
Description: rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}}
Failure Reason: hit max job timeout

Job 6
OS: centos 9.stream
Description: rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}}
Failure Reason: "2024-04-29T20:23:46.087624+0000 mon.a (mon.0) 356 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

Job 7
OS: centos 9.stream
Description: rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start}

Job 8
OS: centos 9.stream
Description: rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all}

Job 9
OS: centos 8.stream
Description: rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench}
Failure Reason: "2024-04-29T20:30:00.000112+0000 mon.a (mon.0) 1104 : cluster [WRN] Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set" in cluster log

Job 10
OS: centos 9.stream
Description: rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind}
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

Job 11
OS: centos 9.stream
Description: rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start}
Failure Reason: "2024-04-29T20:13:56.095048+0000 mon.a (mon.0) 687 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

Job 12
OS: centos 9.stream
Description: rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag}

Job 13
OS: ubuntu 22.04
Description: rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}}
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi066 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=354447ca5357e926795d009be846687f04556beb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

Job 14
OS: centos 9.stream
Description: rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait}

Job 15
OS: ubuntu 22.04
Description: rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start}

Job 16
OS: ubuntu 22.04
Description: rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate}
Failure Reason: "2024-04-29T20:26:59.982933+0000 mon.smithi089 (mon.0) 804 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

Job 17
OS: centos 9.stream
Description: rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest}
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

Job 18
OS: centos 9.stream
Description: rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind}
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

Job 19
OS: centos 9.stream
Description: rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start}
Failure Reason: "2024-04-29T20:14:08.906445+0000 mon.a (mon.0) 665 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

Job 20
OS: ubuntu 22.04
Description: rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}}
Failure Reason: Command failed (workunit test cephtool/test.sh) on smithi122 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=354447ca5357e926795d009be846687f04556beb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

Job 21
OS: centos 9.stream
Description: rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}}
Failure Reason: hit max job timeout

Job 22
OS: centos 9.stream
Description: rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}}
Failure Reason: "2024-04-29T20:24:29.984506+0000 mon.a (mon.0) 358 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

Job 23
OS: centos 9.stream
Description: rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind}
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun