Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7614803 2024-03-21 11:21:31 2024-03-21 11:25:16 2024-03-21 11:42:52 0:17:36 0:07:22 0:10:14 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
fail 7614804 2024-03-21 11:21:32 2024-03-21 11:25:16 2024-03-21 11:54:01 0:28:45 0:18:40 0:10:05 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
Failure Reason:

"2024-03-21T11:48:48.912463+0000 mon.a (mon.0) 530 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7614805 2024-03-21 11:21:33 2024-03-21 11:25:17 2024-03-21 11:42:00 0:16:43 0:06:38 0:10:05 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7614806 2024-03-21 11:21:35 2024-03-21 11:25:17 2024-03-21 11:38:12 0:12:55 0:04:13 0:08:42 smithi main centos 9.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7614807 2024-03-21 11:21:36 2024-03-21 11:25:17 2024-03-21 12:10:26 0:45:09 0:35:28 0:09:41 smithi main centos 9.stream orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mon osd} overrides/ignorelist_health supported-random-distros$/{centos_latest} tasks/nfs} 1
Failure Reason:

"2024-03-21T11:45:36.400980+0000 mon.a (mon.0) 480 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7614808 2024-03-21 11:21:37 2024-03-21 11:25:18 2024-03-21 11:48:03 0:22:45 0:13:01 0:09:44 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} 1
Failure Reason:

"2024-03-21T11:45:06.032949+0000 mon.a (mon.0) 458 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7614809 2024-03-21 11:21:39 2024-03-21 11:25:18 2024-03-21 11:48:46 0:23:28 0:12:00 0:11:28 smithi main centos 9.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/off orchestrator_cli} 2
fail 7614810 2024-03-21 11:21:40 2024-03-21 11:25:18 2024-03-21 12:06:48 0:41:30 0:26:46 0:14:44 smithi main centos 9.stream orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
Failure Reason:

"2024-03-21T11:45:57.419048+0000 mon.a (mon.0) 188 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

fail 7614811 2024-03-21 11:21:42 2024-03-21 11:27:29 2024-03-21 11:52:42 0:25:13 0:12:47 0:12:26 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_basic} 2
Failure Reason:

"2024-03-21T11:49:26.102399+0000 mon.a (mon.0) 237 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7614812 2024-03-21 11:21:43 2024-03-21 11:27:29 2024-03-21 11:43:07 0:15:38 0:04:17 0:11:21 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
fail 7614813 2024-03-21 11:21:44 2024-03-21 11:27:50 2024-03-21 11:42:42 0:14:52 0:04:10 0:10:42 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} 1
pass 7614814 2024-03-21 11:21:46 2024-03-21 11:27:50 2024-03-21 11:50:57 0:23:07 0:11:01 0:12:06 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 3
fail 7614815 2024-03-21 11:21:47 2024-03-21 11:29:31 2024-03-21 12:27:31 0:58:00 0:48:00 0:10:00 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

"2024-03-21T11:58:36.023423+0000 mon.a (mon.0) 893 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7614816 2024-03-21 11:21:48 2024-03-21 11:29:31 2024-03-21 12:15:56 0:46:25 0:35:47 0:10:38 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

"2024-03-21T11:54:47.966111+0000 mon.a (mon.0) 844 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7614817 2024-03-21 11:21:50 2024-03-21 11:30:52 2024-03-21 12:04:42 0:33:50 0:22:59 0:10:51 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

"2024-03-21T12:00:00.000227+0000 mon.a (mon.0) 1223 : cluster [WRN] Health detail: HEALTH_WARN Reduced data availability: 7 pgs inactive; Degraded data redundancy: 214/597 objects degraded (35.846%), 42 pgs degraded; 2 pool(s) do not have an application enabled" in cluster log

fail 7614818 2024-03-21 11:21:51 2024-03-21 11:31:32 2024-03-21 11:56:35 0:25:03 0:14:10 0:10:53 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
Failure Reason:

"2024-03-21T11:51:10.158119+0000 mon.a (mon.0) 294 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7614819 2024-03-21 11:21:52 2024-03-21 11:31:33 2024-03-21 11:47:42 0:16:09 0:04:27 0:11:42 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
fail 7614820 2024-03-21 11:21:54 2024-03-21 11:32:33 2024-03-21 11:50:42 0:18:09 0:07:14 0:10:55 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 7614821 2024-03-21 11:21:55 2024-03-21 11:33:04 2024-03-21 11:50:52 0:17:48 0:07:13 0:10:35 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
fail 7614822 2024-03-21 11:21:56 2024-03-21 11:33:04 2024-03-21 12:09:15 0:36:11 0:25:14 0:10:57 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-03-21T11:55:00.499152+0000 mon.a (mon.0) 435 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7614823 2024-03-21 11:21:58 2024-03-21 11:33:35 2024-03-21 12:10:28 0:36:53 0:25:39 0:11:14 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} 3
Failure Reason:

Command failed on smithi094 with status 5: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ba760091cd7bd2b0d23f4825ac856ba66450e988 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 93e5de92-e779-11ee-95cd-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch apply node-exporter\nceph orch apply grafana\nceph orch apply alertmanager\nceph orch apply prometheus\nsleep 240\nceph orch ls\nceph orch ps\nceph orch host ls\nMON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r \'"\'"\'last | .daemon_name\'"\'"\')\nGRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nPROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nGRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" \'"\'"\'.[] | select(.hostname==$GRAFANA_HOST) | .addr\'"\'"\')\nPROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" \'"\'"\'.[] | select(.hostname==$PROM_HOST) | .addr\'"\'"\')\nALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" \'"\'"\'.[] | select(.hostname==$ALERTM_HOST) | .addr\'"\'"\')\n# check each host node-exporter metrics endpoint is responsive\nALL_HOST_IPS=$(ceph orch host ls -f json | jq -r \'"\'"\'.[] | .addr\'"\'"\')\nfor ip in $ALL_HOST_IPS; do\n curl -s http://${ip}:9100/metric\ndone\n# check grafana endpoints are responsive and database health is okay\ncurl -k -s https://${GRAFANA_IP}:3000/api/health\ncurl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e \'"\'"\'.database == "ok"\'"\'"\'\n# stop mon daemon in order to trigger an alert\nceph orch daemon stop $MON_DAEMON\nsleep 120\n# check prometheus endpoints are responsive and mon down alert is firing\ncurl -s http://${PROM_IP}:9095/api/v1/status/config\ncurl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e \'"\'"\'.status == "success"\'"\'"\'\ncurl -s http://${PROM_IP}:9095/api/v1/alerts\ncurl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e \'"\'"\'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"\'"\'"\'\n# check alertmanager endpoints are responsive and mon down alert is active\ncurl -s http://${ALERTM_IP}:9093/api/v1/status\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e \'"\'"\'.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"\'"\'"\'\n\''

fail 7614824 2024-03-21 11:21:59 2024-03-21 11:34:25 2024-03-21 11:51:28 0:17:03 0:06:36 0:10:27 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7614825 2024-03-21 11:22:00 2024-03-21 11:34:26 2024-03-21 11:49:38 0:15:12 0:04:28 0:10:44 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} 2
fail 7614826 2024-03-21 11:22:02 2024-03-21 11:34:36 2024-03-21 12:12:15 0:37:39 0:24:11 0:13:28 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"2024-03-21T12:06:35.081736+0000 mon.a (mon.0) 1146 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 7614827 2024-03-21 11:22:03 2024-03-21 11:36:17 2024-03-21 12:00:00 0:23:43 0:11:25 0:12:18 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_domain} 2
Failure Reason:

"2024-03-21T11:55:48.599019+0000 mon.a (mon.0) 236 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7614828 2024-03-21 11:22:04 2024-03-21 11:36:57 2024-03-21 11:51:59 0:15:02 0:04:26 0:10:36 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
fail 7614829 2024-03-21 11:22:06 2024-03-21 11:36:58 2024-03-21 12:54:41 1:17:43 1:04:38 0:13:05 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

"2024-03-21T12:44:21.333379+0000 mon.a (mon.0) 4857 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 7614830 2024-03-21 11:22:07 2024-03-21 11:38:48 2024-03-21 12:02:34 0:23:46 0:10:19 0:13:27 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} 2
Failure Reason:

Config file not found: "/home/teuthworker/src/git.ceph.com_ceph-c_ba760091cd7bd2b0d23f4825ac856ba66450e988/qa/tasks/cephadm.conf".

pass 7614831 2024-03-21 11:22:09 2024-03-21 11:39:39 2024-03-21 12:02:51 0:23:12 0:13:15 0:09:57 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} 3
fail 7614832 2024-03-21 11:22:10 2024-03-21 11:39:49 2024-03-21 11:55:35 0:15:46 0:04:23 0:11:23 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
fail 7614833 2024-03-21 11:22:11 2024-03-21 11:40:10 2024-03-21 11:59:04 0:18:54 0:07:22 0:11:32 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
fail 7614834 2024-03-21 11:22:13 2024-03-21 11:40:30 2024-03-21 12:01:23 0:20:53 0:06:42 0:14:11 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7614835 2024-03-21 11:22:14 2024-03-21 11:42:21 2024-03-21 11:59:15 0:16:54 0:04:18 0:12:36 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
fail 7614836 2024-03-21 11:22:15 2024-03-21 11:43:22 2024-03-21 12:05:20 0:21:58 0:12:07 0:09:51 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 3
Failure Reason:

"2024-03-21T12:03:49.813450+0000 mon.a (mon.0) 797 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7614837 2024-03-21 11:22:17 2024-03-21 11:44:12 2024-03-21 12:40:12 0:56:00 0:41:46 0:14:14 smithi main ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

"2024-03-21T12:07:19.760821+0000 mon.a (mon.0) 327 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7614838 2024-03-21 11:22:18 2024-03-21 11:47:43 2024-03-21 12:18:31 0:30:48 0:16:08 0:14:40 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
Failure Reason:

"2024-03-21T12:12:54.420263+0000 mon.a (mon.0) 479 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7614839 2024-03-21 11:22:19 2024-03-21 11:51:04 2024-03-21 12:06:45 0:15:41 0:04:36 0:11:05 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} 2
fail 7614840 2024-03-21 11:22:21 2024-03-21 11:51:04 2024-03-21 12:14:24 0:23:20 0:09:00 0:14:20 smithi main ubuntu 22.04 orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_adoption} 1
Failure Reason:

No module named 'tasks'

fail 7614841 2024-03-21 11:22:22 2024-03-21 11:55:25 2024-03-21 12:10:06 0:14:41 0:04:37 0:10:04 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} 2
fail 7614842 2024-03-21 11:22:24 2024-03-21 11:55:25 2024-03-21 12:26:04 0:30:39 0:19:15 0:11:24 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_basic} 2
Failure Reason:

"2024-03-21T12:21:29.425209+0000 mon.a (mon.0) 283 : cluster [WRN] Health check failed: Degraded data redundancy: 15/45 objects degraded (33.333%), 7 pgs degraded (PG_DEGRADED)" in cluster log

fail 7614843 2024-03-21 11:22:25 2024-03-21 11:55:26 2024-03-21 12:22:16 0:26:50 0:15:21 0:11:29 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-03-21T12:15:16.436868+0000 mon.a (mon.0) 585 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7614844 2024-03-21 11:22:26 2024-03-21 11:55:26 2024-03-21 12:14:28 0:19:02 0:07:17 0:11:45 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
fail 7614845 2024-03-21 11:22:28 2024-03-21 11:55:26 2024-03-21 12:19:13 0:23:47 0:13:55 0:09:52 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

No module named 'tasks'

fail 7614846 2024-03-21 11:22:29 2024-03-21 11:55:27 2024-03-21 12:15:06 0:19:39 0:09:19 0:10:20 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

No module named 'tasks'

fail 7614847 2024-03-21 11:22:30 2024-03-21 11:55:27 2024-03-21 12:12:44 0:17:17 0:07:31 0:09:46 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} 2
Failure Reason:

No module named 'tasks'

fail 7614848 2024-03-21 11:22:32 2024-03-21 11:55:28 2024-03-21 12:13:59 0:18:31 0:06:52 0:11:39 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7614849 2024-03-21 11:22:33 2024-03-21 11:55:28 2024-03-21 12:11:16 0:15:48 0:04:15 0:11:33 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

No module named 'tasks'

fail 7614850 2024-03-21 11:22:34 2024-03-21 11:55:28 2024-03-21 12:23:54 0:28:26 0:17:25 0:11:01 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi176 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ba760091cd7bd2b0d23f4825ac856ba66450e988 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7614851 2024-03-21 11:22:36 2024-03-21 11:55:29 2024-03-21 12:11:10 0:15:41 0:04:32 0:11:09 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

No module named 'tasks'

fail 7614852 2024-03-21 11:22:37 2024-03-21 11:55:29 2024-03-21 12:13:45 0:18:16 0:07:21 0:10:55 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} 2
fail 7614853 2024-03-21 11:22:38 2024-03-21 11:55:29 2024-03-21 12:12:52 0:17:23 0:07:11 0:10:12 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

No module named 'tasks'

fail 7614854 2024-03-21 11:22:40 2024-03-21 11:55:30 2024-03-21 12:15:19 0:19:49 0:07:03 0:12:46 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

No module named 'tasks'

pass 7614855 2024-03-21 11:22:41 2024-03-21 11:55:30 2024-03-21 12:30:04 0:34:34 0:22:48 0:11:46 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
pass 7614856 2024-03-21 11:22:43 2024-03-21 11:55:40 2024-03-21 12:12:01 0:16:21 0:06:31 0:09:50 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} 1
fail 7614857 2024-03-21 11:22:44 2024-03-21 11:55:41 2024-03-21 12:16:27 0:20:46 0:06:45 0:14:01 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7614858 2024-03-21 11:22:46 2024-03-21 11:55:41 2024-03-21 12:19:44 0:24:03 0:14:09 0:09:54 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} 1
Failure Reason:

"2024-03-21T12:18:01.877814+0000 mon.a (mon.0) 202 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

pass 7614859 2024-03-21 11:22:47 2024-03-21 11:55:41 2024-03-21 12:19:39 0:23:58 0:14:26 0:09:32 smithi main ubuntu 22.04 orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/on orchestrator_cli} 2
fail 7614860 2024-03-21 11:22:48 2024-03-21 11:55:42 2024-03-21 12:25:46 0:30:04 0:11:29 0:18:35 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_domain} 2
Failure Reason:

"2024-03-21T12:21:48.411371+0000 mon.a (mon.0) 269 : cluster [WRN] Health check failed: Degraded data redundancy: 14/42 objects degraded (33.333%), 7 pgs degraded (PG_DEGRADED)" in cluster log

fail 7614861 2024-03-21 11:22:50 2024-03-21 11:55:42 2024-03-21 12:19:00 0:23:18 0:04:19 0:18:59 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

No module named 'tasks'

fail 7614862 2024-03-21 11:22:51 2024-03-21 11:55:43 2024-03-21 12:13:55 0:18:12 0:04:18 0:13:54 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/rgw 3-final} 1
fail 7614863 2024-03-21 11:22:52 2024-03-21 11:55:43 2024-03-21 12:26:27 0:30:44 0:12:23 0:18:21 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} 3
Failure Reason:

"2024-03-21T12:21:14.027525+0000 mon.a (mon.0) 426 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7614864 2024-03-21 11:22:54 2024-03-21 12:02:54 2024-03-21 12:56:05 0:53:11 0:42:27 0:10:44 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

Command failed on smithi057 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e6560820-e77c-11ee-95cd-87774f69a715 -e sha1=ba760091cd7bd2b0d23f4825ac856ba66450e988 -- bash -c \'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e \'"\'"\'.up_to_date | length == 7\'"\'"\'\''

fail 7614865 2024-03-21 11:22:55 2024-03-21 12:02:55 2024-03-21 12:18:28 0:15:33 0:04:28 0:11:05 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} 2
fail 7614866 2024-03-21 11:22:57 2024-03-21 12:03:45 2024-03-21 12:36:01 0:32:16 0:14:38 0:17:38 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-03-21T12:31:00.131882+0000 mon.a (mon.0) 643 : cluster [WRN] Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7614867 2024-03-21 11:22:58 2024-03-21 13:13:49 2024-03-21 13:39:05 0:25:16 0:15:51 0:09:25 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} 2
fail 7614868 2024-03-21 11:22:59 2024-03-21 13:13:49 2024-03-21 13:32:15 0:18:26 0:07:20 0:11:06 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
fail 7614869 2024-03-21 11:23:01 2024-03-21 13:13:50 2024-03-21 13:29:47 0:15:57 0:04:23 0:11:34 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7614870 2024-03-21 11:23:02 2024-03-21 13:13:50 2024-03-21 13:33:09 0:19:19 0:06:45 0:12:34 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7614871 2024-03-21 11:23:04 2024-03-21 13:13:50 2024-03-21 13:30:05 0:16:15 0:04:22 0:11:53 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
fail 7614872 2024-03-21 11:23:05 2024-03-21 13:13:51 2024-03-21 14:11:56 0:58:05 0:47:51 0:10:14 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

"2024-03-21T13:50:00.000123+0000 mon.a (mon.0) 1143 : cluster [WRN] Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set; 1 osds down; Degraded data redundancy: 122/912 objects degraded (13.377%), 24 pgs degraded" in cluster log

pass 7614873 2024-03-21 11:23:06 2024-03-21 13:13:51 2024-03-21 13:47:50 0:33:59 0:23:22 0:10:37 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rotate-keys} 2
fail 7614874 2024-03-21 11:23:08 2024-03-21 13:13:52 2024-03-21 13:48:08 0:34:16 0:23:07 0:11:09 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} 3
Failure Reason:

"2024-03-21T13:42:03.324053+0000 mon.a (mon.0) 510 : cluster [WRN] Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 7614875 2024-03-21 11:23:09 2024-03-21 13:13:52 2024-03-21 13:37:08 0:23:16 0:13:50 0:09:26 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} 1
Failure Reason:

"2024-03-21T13:35:18.977081+0000 mon.a (mon.0) 452 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7614876 2024-03-21 11:23:10 2024-03-21 13:13:52 2024-03-21 13:38:20 0:24:28 0:13:01 0:11:27 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} 2
Failure Reason:

"2024-03-21T13:35:14.774573+0000 mon.a (mon.0) 237 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7614877 2024-03-21 11:23:12 2024-03-21 13:13:53 2024-03-21 13:44:26 0:30:33 0:04:36 0:25:57 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
dead 7614878 2024-03-21 11:23:13 2024-03-21 13:28:55 2024-03-22 01:39:14 12:10:19 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
Failure Reason:

hit max job timeout

fail 7614879 2024-03-21 11:23:14 2024-03-21 13:28:55 2024-03-21 13:47:31 0:18:36 0:07:12 0:11:24 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
fail 7614880 2024-03-21 11:23:16 2024-03-21 13:29:06 2024-03-21 13:44:15 0:15:09 0:04:35 0:10:34 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 7614881 2024-03-21 11:23:18 2024-03-21 13:29:06 2024-03-21 13:47:18 0:18:12 0:06:44 0:11:28 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7614882 2024-03-21 11:23:19 2024-03-21 13:29:07 2024-03-21 13:46:58 0:17:51 0:04:35 0:13:16 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
pass 7614883 2024-03-21 11:23:20 2024-03-21 13:29:07 2024-03-21 13:52:08 0:23:01 0:12:01 0:11:00 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
fail 7614884 2024-03-21 11:23:22 2024-03-21 13:29:07 2024-03-21 14:17:46 0:48:39 0:37:38 0:11:01 smithi main ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
Failure Reason:

"2024-03-21T14:02:02.128111+0000 mon.a (mon.0) 1073 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7614885 2024-03-21 11:23:23 2024-03-21 13:29:08 2024-03-21 14:06:47 0:37:39 0:26:00 0:11:39 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-03-21T13:53:36.434819+0000 mon.a (mon.0) 416 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7614886 2024-03-21 11:23:24 2024-03-21 13:29:08 2024-03-21 14:46:14 1:17:06 1:05:46 0:11:20 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

"2024-03-21T14:06:24.233955+0000 mon.a (mon.0) 1105 : cluster [WRN] Health check failed: 1 pool(s) full (POOL_FULL)" in cluster log

fail 7614887 2024-03-21 11:23:26 2024-03-21 13:29:08 2024-03-21 14:39:41 1:10:33 1:00:37 0:09:56 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"2024-03-21T14:10:00.000133+0000 mon.a (mon.0) 2614 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled" in cluster log

fail 7614888 2024-03-21 11:23:27 2024-03-21 13:29:09 2024-03-21 13:59:34 0:30:25 0:19:07 0:11:18 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} 3
Failure Reason:

Command failed on smithi077 with status 5: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ba760091cd7bd2b0d23f4825ac856ba66450e988 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 70cfd1b4-e789-11ee-95cd-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch apply node-exporter\nceph orch apply grafana\nceph orch apply alertmanager\nceph orch apply prometheus\nsleep 240\nceph orch ls\nceph orch ps\nceph orch host ls\nMON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r \'"\'"\'last | .daemon_name\'"\'"\')\nGRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nPROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nGRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" \'"\'"\'.[] | select(.hostname==$GRAFANA_HOST) | .addr\'"\'"\')\nPROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" \'"\'"\'.[] | select(.hostname==$PROM_HOST) | .addr\'"\'"\')\nALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" \'"\'"\'.[] | select(.hostname==$ALERTM_HOST) | .addr\'"\'"\')\n# check each host node-exporter metrics endpoint is responsive\nALL_HOST_IPS=$(ceph orch host ls -f json | jq -r \'"\'"\'.[] | .addr\'"\'"\')\nfor ip in $ALL_HOST_IPS; do\n curl -s http://${ip}:9100/metric\ndone\n# check grafana endpoints are responsive and database health is okay\ncurl -k -s https://${GRAFANA_IP}:3000/api/health\ncurl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e \'"\'"\'.database == "ok"\'"\'"\'\n# stop mon daemon in order to trigger an alert\nceph orch daemon stop $MON_DAEMON\nsleep 120\n# check prometheus endpoints are responsive and mon down alert is firing\ncurl -s http://${PROM_IP}:9095/api/v1/status/config\ncurl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e \'"\'"\'.status == "success"\'"\'"\'\ncurl -s http://${PROM_IP}:9095/api/v1/alerts\ncurl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e \'"\'"\'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"\'"\'"\'\n# check alertmanager endpoints are responsive and mon down alert is active\ncurl -s http://${ALERTM_IP}:9093/api/v1/status\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e \'"\'"\'.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"\'"\'"\'\n\''

fail 7614889 2024-03-21 11:23:28 2024-03-21 13:29:09 2024-03-21 13:45:30 0:16:21 0:04:26 0:11:55 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} 2
fail 7614890 2024-03-21 11:23:30 2024-03-21 13:29:10 2024-03-21 14:17:52 0:48:42 0:38:19 0:10:23 smithi main ubuntu 22.04 orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"2024-03-21T14:12:16.953567+0000 mon.a (mon.0) 1205 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 7614891 2024-03-21 11:23:31 2024-03-21 13:29:10 2024-03-21 13:57:23 0:28:13 0:18:17 0:09:56 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} 2
Failure Reason:

"2024-03-21T13:53:10.443056+0000 mon.a (mon.0) 244 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7614892 2024-03-21 11:23:32 2024-03-21 13:29:10 2024-03-21 13:48:17 0:19:07 0:07:17 0:11:50 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
fail 7614893 2024-03-21 11:23:34 2024-03-21 13:29:11 2024-03-21 14:00:18 0:31:07 0:20:16 0:10:51 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

"2024-03-21T13:54:09.903123+0000 mon.a (mon.0) 415 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7614894 2024-03-21 11:23:35 2024-03-21 13:29:11 2024-03-21 13:51:48 0:22:37 0:07:19 0:15:18 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
fail 7614895 2024-03-21 11:23:37 2024-03-21 13:34:12 2024-03-21 13:55:46 0:21:34 0:06:43 0:14:51 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7614896 2024-03-21 11:23:38 2024-03-21 13:38:53 2024-03-21 13:54:10 0:15:17 0:04:17 0:11:00 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
fail 7614897 2024-03-21 11:23:39 2024-03-21 13:38:54 2024-03-21 13:52:16 0:13:22 0:04:19 0:09:03 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
fail 7614898 2024-03-21 11:23:41 2024-03-21 13:39:14 2024-03-21 14:39:26 1:00:12 0:48:21 0:11:51 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

"2024-03-21T14:07:27.429357+0000 mon.a (mon.0) 957 : cluster [WRN] Health check failed: Low space hindering backfill (add storage if this doesn't resolve itself): 4 pgs backfill_toofull (PG_BACKFILL_FULL)" in cluster log

fail 7614899 2024-03-21 11:23:42 2024-03-21 13:39:45 2024-03-21 14:12:29 0:32:44 0:22:40 0:10:04 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

"2024-03-21T14:10:00.000187+0000 mon.a (mon.0) 1363 : cluster [WRN] Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled" in cluster log

pass 7614900 2024-03-21 11:23:43 2024-03-21 13:41:25 2024-03-21 14:15:33 0:34:08 0:22:22 0:11:46 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_set_mon_crush_locations} 3