All jobs in this run share the following settings:

Ceph Branch:       wip-yuri10-testing-2023-07-21-0828-reef
Suite Branch:      wip-yuri10-testing-2023-07-21-0828-reef
Teuthology Branch: main
Machine:           smithi

Each entry below gives a job's OS, suite description, and failure reason; the failure reason is omitted where none was recorded.

OS:             centos 9.stream
Description:    rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind}
Failure Reason: "2023-07-22T16:25:23.537151+0000 mon.a (mon.0) 178 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

OS:             centos 9.stream
Description:    rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub}

OS:             rhel 8.6
Description:    rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}}

OS:             centos 8.stream
Description:    rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews}

OS:             ubuntu 20.04
Description:    rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start}

OS:             centos 9.stream
Description:    rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind}
Failure Reason: "2023-07-22T17:34:10.201991+0000 mon.a (mon.0) 784 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

OS:             centos 8.stream
Description:    rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e}
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi039 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1bf364b918a7ab4708130a64bf96639942959f6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

OS:             centos 8.stream
Description:    rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-radosbench}

OS:             centos 9.stream
Description:    rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest}
Failure Reason: saw valgrind issues

OS:             centos 9.stream
Description:    rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind}
Failure Reason: Command failed on smithi192 with status 32: 'sync && sudo umount -f /var/lib/ceph/osd/ceph-6'

OS:             rhel 8.6
Description:    rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2}

OS:             centos 9.stream
Description:    rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{centos_latest} tasks/progress}
Failure Reason: hit max job timeout

OS:             centos 9.stream
Description:    rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind}
Failure Reason: Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi002 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1bf364b918a7ab4708130a64bf96639942959f6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

OS:             rhel 8.6
Description:    rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites}

OS:             rhel 8.6
Description:    rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{rhel_8} tasks/workunits}

OS:             rhel 8.6
Description:    rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{rhel_8}}

OS:             centos 9.stream
Description:    rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep}

OS:             rhel 8.6
Description:    rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm}

OS:             ubuntu 22.04
Description:    rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{ubuntu_latest}}

OS:             centos 8.stream
Description:    rados/standalone/{supported-random-distro$/{centos_8} workloads/osd-backfill}

OS:             ubuntu 20.04
Description:    rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{ubuntu_20.04}}

OS:             centos 8.stream
Description:    rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e}
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1bf364b918a7ab4708130a64bf96639942959f6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

OS:             rhel 8.6
Description:    rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{rhel_8} tasks/crash}

OS:             centos 9.stream
Description:    rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind}
Failure Reason: Command failed (workunit test rados/test.sh) on smithi096 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1bf364b918a7ab4708130a64bf96639942959f6d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'