Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7261583 2023-05-03 19:14:13 2023-05-03 19:15:27 2023-05-03 20:04:25 0:48:58 0:38:30 0:10:28 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/rbd_cls} 3
pass 7261584 2023-05-03 19:14:14 2023-05-03 19:16:48 2023-05-03 19:51:46 0:34:58 0:23:24 0:11:34 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
fail 7261585 2023-05-03 19:14:15 2023-05-03 19:18:08 2023-05-03 19:56:48 0:38:40 0:26:18 0:12:22 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi072 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid da96e3dc-e9ea-11ed-9b01-001a4aab830c -e sha1=2cc327ab03de508c4ed32f598c61221f937ffba0 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
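
The failing assertion is the upgrade sequence's final version-convergence check. Stripped of the cephadm shell wrapper shown in the log (the image, fsid and sha1 above are specific to this run), a minimal sketch of the same check is:

    # Exits 0 only when `ceph versions` reports exactly one version under .overall,
    # i.e. every daemon has converged on the same Ceph release after the upgrade.
    ceph versions | jq -e '.overall | length == 1'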

fail 7261586 2023-05-03 19:14:16 2023-05-03 19:19:09 2023-05-03 19:48:24 0:29:15 0:22:26 0:06:49 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_cls_all} 2
Failure Reason:

"2023-05-03T19:44:30.342383+0000 mon.a (mon.0) 470 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7261587 2023-05-03 19:14:16 2023-05-03 19:19:09 2023-05-03 20:08:14 0:49:05 0:40:29 0:08:36 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
fail 7261588 2023-05-03 19:14:17 2023-05-03 19:20:00 2023-05-03 20:00:29 0:40:29 0:26:56 0:13:33 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi186 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2cc327ab03de508c4ed32f598c61221f937ffba0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7261589 2023-05-03 19:14:18 2023-05-03 19:21:00 2023-05-03 19:58:31 0:37:31 0:30:06 0:07:25 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7261590 2023-05-03 19:14:19 2023-05-03 19:21:21 2023-05-03 20:01:09 0:39:48 0:31:19 0:08:29 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
dead 7261591 2023-05-03 19:14:20 2023-05-03 19:21:51 2023-05-03 19:37:51 0:16:00 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 7261592 2023-05-03 19:14:21 2023-05-03 19:22:52 2023-05-03 20:02:46 0:39:54 0:27:00 0:12:54 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
pass 7261593 2023-05-03 19:14:21 2023-05-03 19:24:52 2023-05-03 20:07:02 0:42:10 0:33:50 0:08:20 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/small-objects-localized} 2
pass 7261594 2023-05-03 19:14:22 2023-05-03 19:25:53 2023-05-03 19:50:32 0:24:39 0:17:21 0:07:18 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7261595 2023-05-03 19:14:23 2023-05-03 19:26:23 2023-05-03 19:49:26 0:23:03 0:15:29 0:07:34 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7261596 2023-05-03 19:14:24 2023-05-03 19:26:24 2023-05-03 20:08:19 0:41:55 0:35:12 0:06:43 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7261597 2023-05-03 19:14:25 2023-05-03 19:27:14 2023-05-03 19:55:46 0:28:32 0:20:24 0:08:08 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7261598 2023-05-03 19:14:26 2023-05-03 19:28:05 2023-05-03 20:07:49 0:39:44 0:30:49 0:08:55 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
pass 7261599 2023-05-03 19:14:26 2023-05-03 19:28:25 2023-05-03 20:04:02 0:35:37 0:23:27 0:12:10 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
fail 7261600 2023-05-03 19:14:27 2023-05-03 19:30:26 2023-05-03 19:55:26 0:25:00 0:18:50 0:06:10 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-05-03T19:52:09.003788+0000 mon.a (mon.0) 476 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7261601 2023-05-03 19:14:28 2023-05-03 19:30:26 2023-05-03 20:05:10 0:34:44 0:21:11 0:13:33 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/write_fadvise_dontneed} 2
pass 7261602 2023-05-03 19:14:29 2023-05-03 19:31:47 2023-05-03 20:54:07 1:22:20 1:14:19 0:08:01 smithi main centos 8.stream rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 2
pass 7261603 2023-05-03 19:14:30 2023-05-03 19:32:57 2023-05-03 20:20:17 0:47:20 0:36:35 0:10:45 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
fail 7261604 2023-05-03 19:14:30 2023-05-03 19:36:58 2023-05-03 19:54:35 0:17:37 0:07:41 0:09:56 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=2cc327ab03de508c4ed32f598c61221f937ffba0
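
No ready build of this sha1 was available from Shaman for Ubuntu 22.04 x86_64, so the job failed before any OS was installed (hence the empty OS columns in this row). The lookup can be reproduced against the URL from the message; a sketch assuming only that the endpoint returns JSON, as the teuthology fetcher expects:

    # Ask Shaman which ready builds exist for this sha1/distro; an empty result
    # presumably corresponds to the fetch failure above.
    curl -s 'https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=2cc327ab03de508c4ed32f598c61221f937ffba0' | jq .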

pass 7261605 2023-05-03 19:14:31 2023-05-03 19:36:59 2023-05-03 20:02:07 0:25:08 0:17:56 0:07:12 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
pass 7261606 2023-05-03 19:14:32 2023-05-03 19:36:59 2023-05-03 20:03:05 0:26:06 0:18:17 0:07:49 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-agent-small} 2
dead 7261607 2023-05-03 19:14:33 2023-05-03 19:38:10 2023-05-03 19:53:09 0:14:59 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Error reimaging machines: reached maximum tries (60) after waiting for 900 seconds

pass 7261608 2023-05-03 19:14:34 2023-05-03 19:38:10 2023-05-03 20:19:20 0:41:10 0:30:15 0:10:55 smithi main centos 8.stream rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 2
fail 7261609 2023-05-03 19:14:34 2023-05-03 19:38:10 2023-05-03 20:06:57 0:28:47 0:18:07 0:10:40 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi082 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:2cc327ab03de508c4ed32f598c61221f937ffba0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 301752f0-e9ec-11ed-9b01-001a4aab830c -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
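
The failing command is the rm-zap-add test body passed to cephadm shell as a single escaped bash string. Unescaped for readability (reconstructed from the log line above, with comments added), the sequence it runs is:

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    # discover the device id, host and device path backing osd.1
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    # remove osd.1, zap its device, re-add it, and wait for it to come back up
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done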

pass 7261610 2023-05-03 19:14:35 2023-05-03 19:39:31 2023-05-03 20:05:46 0:26:15 0:17:26 0:08:49 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
pass 7261611 2023-05-03 19:14:36 2023-05-03 19:39:31 2023-05-03 20:20:30 0:40:59 0:32:24 0:08:35 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7261612 2023-05-03 19:14:37 2023-05-03 19:40:42 2023-05-03 20:30:51 0:50:09 0:38:00 0:12:09 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/cache-snaps} 3
pass 7261613 2023-05-03 19:14:38 2023-05-03 19:43:02 2023-05-03 20:17:54 0:34:52 0:21:28 0:13:24 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
pass 7261614 2023-05-03 19:14:38 2023-05-03 19:48:34 2023-05-03 20:23:37 0:35:03 0:23:24 0:11:39 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
pass 7261615 2023-05-03 19:14:39 2023-05-03 19:50:14 2023-05-03 20:50:50 1:00:36 0:52:41 0:07:55 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
pass 7261616 2023-05-03 19:14:40 2023-05-03 19:50:35 2023-05-03 20:32:26 0:41:51 0:35:04 0:06:47 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7261617 2023-05-03 19:14:41 2023-05-03 19:51:25 2023-05-03 20:39:00 0:47:35 0:38:32 0:09:03 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
fail 7261618 2023-05-03 19:14:41 2023-05-03 19:53:36 2023-05-03 20:35:50 0:42:14 0:33:37 0:08:37 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi135 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2cc327ab03de508c4ed32f598c61221f937ffba0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7261619 2023-05-03 19:14:42 2023-05-03 19:54:36 2023-05-03 20:33:37 0:39:01 0:31:55 0:07:06 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7261620 2023-05-03 19:14:43 2023-05-03 19:54:37 2023-05-03 20:36:29 0:41:52 0:34:28 0:07:24 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7261621 2023-05-03 19:14:44 2023-05-03 19:55:27 2023-05-03 20:22:27 0:27:00 0:19:21 0:07:39 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7261622 2023-05-03 19:14:45 2023-05-03 19:55:28 2023-05-03 20:37:06 0:41:38 0:34:46 0:06:52 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
pass 7261623 2023-05-03 19:14:46 2023-05-03 19:55:48 2023-05-03 20:23:40 0:27:52 0:18:20 0:09:32 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
pass 7261624 2023-05-03 19:14:46 2023-05-03 19:55:48 2023-05-03 20:29:23 0:33:35 0:23:20 0:10:15 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
pass 7261625 2023-05-03 19:14:47 2023-05-03 19:56:09 2023-05-03 20:26:27 0:30:18 0:22:28 0:07:50 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic objectstore/bluestore-comp-lz4 supported-random-distro$/{centos_8} tasks/workunits} 2
pass 7261626 2023-05-03 19:14:48 2023-05-03 19:56:49 2023-05-03 20:50:07 0:53:18 0:45:31 0:07:47 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
pass 7261627 2023-05-03 19:14:49 2023-05-03 19:56:50 2023-05-03 20:45:52 0:49:02 0:38:18 0:10:44 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
fail 7261628 2023-05-03 19:14:50 2023-05-03 20:00:30 2023-05-03 20:18:53 0:18:23 0:07:43 0:10:40 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=2cc327ab03de508c4ed32f598c61221f937ffba0

pass 7261629 2023-05-03 19:14:51 2023-05-03 20:01:11 2023-05-03 20:27:19 0:26:08 0:18:00 0:08:08 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
fail 7261630 2023-05-03 19:14:51 2023-05-03 20:02:11 2023-05-03 20:32:40 0:30:29 0:21:57 0:08:32 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

"2023-05-03T20:27:40.744843+0000 mon.a (mon.0) 515 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7261631 2023-05-03 19:14:52 2023-05-03 20:02:52 2023-05-03 20:36:12 0:33:20 0:26:27 0:06:53 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
pass 7261632 2023-05-03 19:14:53 2023-05-03 20:03:12 2023-05-03 20:45:46 0:42:34 0:35:04 0:07:30 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2