Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7394268 2023-09-12 04:25:38 2023-09-12 10:52:18 2023-09-12 11:17:33 0:25:15 0:19:22 0:05:53 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

"/var/log/ceph/83d8a038-515c-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi082.log:2023-09-12T11:13:47.376+0000 7f13d411d700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7394269 2023-09-12 04:25:39 2023-09-12 10:52:18 2023-09-12 11:17:53 0:25:35 0:15:00 0:10:35 smithi main centos 8.stream orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7394270 2023-09-12 04:25:40 2023-09-12 10:52:39 2023-09-12 11:24:15 0:31:36 0:19:45 0:11:51 smithi main rhel 8.6 orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"/var/log/ceph/3ac356c6-515d-11ee-9ab7-7b867c8bd7da/ceph-mon.c.log:2023-09-12T11:15:15.351+0000 7ff4d7848700 7 mon.c@2(synchronizing).log v63 update_from_paxos applying incremental log 62 2023-09-12T11:15:13.358294+0000 mon.a (mon.0) 212 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

pass 7394271 2023-09-12 04:25:41 2023-09-12 10:56:40 2023-09-12 11:22:19 0:25:39 0:19:38 0:06:01 smithi main rhel 8.6 orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_extra_daemon_features} 2
fail 7394272 2023-09-12 04:25:41 2023-09-12 10:57:10 2023-09-12 11:38:37 0:41:27 0:32:08 0:09:19 smithi main centos 8.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_8.stream_container_tools 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi050 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 82fe20a6-515d-11ee-9ab7-7b867c8bd7da -e sha1=1842449fc100440b6d2e1a58d51722b1be8353c4 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
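
Unwrapping the nested shell quoting, the failed check reduces to roughly the following, run inside the cephadm shell; it asserts that every daemon reports the same Ceph version, i.e. that the upgrade converged:

    # 'ceph versions' prints a JSON summary; '.overall' maps each version
    # string to a daemon count. 'jq -e' exits non-zero unless the filter
    # yields true, so exactly one version must remain.
    ceph versions | jq -e '.overall | length == 1'

The reported exit status 1 therefore means more than one version was still present when the check ran.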

fail 7394273 2023-09-12 04:25:42 2023-09-12 10:57:30 2023-09-12 11:23:37 0:26:07 0:16:12 0:09:55 smithi main ubuntu 20.04 orch:cephadm/nfs/{begin/{0-install 1-ceph 2-logrotate} cluster/{1-node} objectstore/bluestore-bitmap overrides/ignorelist_health supported-random-distros$/{ubuntu_20.04} tasks/nfs} 1
Failure Reason:

Test failure: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)

pass 7394274 2023-09-12 04:25:43 2023-09-12 10:57:51 2023-09-12 11:25:16 0:27:25 0:21:39 0:05:46 smithi main rhel 8.6 orch:cephadm/orchestrator_cli/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 2-node-mgr agent/off orchestrator_cli} 2
fail 7394275 2023-09-12 04:25:43 2023-09-12 10:58:01 2023-09-12 11:43:23 0:45:22 0:34:49 0:10:33 smithi main centos 8.stream orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} workloads/cephadm_iscsi} 3
Failure Reason:

"/var/log/ceph/93dd67c4-515d-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:19:17.978+0000 7f43f26ea700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7394276 2023-09-12 04:25:44 2023-09-12 10:58:02 2023-09-12 11:21:41 0:23:39 0:16:22 0:07:17 smithi main rhel 8.6 orch:cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/basic 3-final} 1
fail 7394277 2023-09-12 04:25:45 2023-09-12 10:59:12 2023-09-12 11:24:27 0:25:15 0:12:52 0:12:23 smithi main centos 8.stream orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} 3
Failure Reason:

"/var/log/ceph/008092fc-515e-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:21:24.311+0000 7f57c608e700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394278 2023-09-12 04:25:46 2023-09-12 11:01:03 2023-09-12 11:29:19 0:28:16 0:18:45 0:09:31 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

"/var/log/ceph/3d0b6256-515e-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi089.log:2023-09-12T11:25:22.175+0000 7f81c44e8700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394279 2023-09-12 04:25:46 2023-09-12 11:04:13 2023-09-12 11:30:21 0:26:08 0:19:06 0:07:02 smithi main rhel 8.6 orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi159 with status 127: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:1842449fc100440b6d2e1a58d51722b1be8353c4 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a400bf10-515e-11ee-9ab7-7b867c8bd7da -- ceph orch apply prometheus '1;smithi159=a'"

fail 7394280 2023-09-12 04:25:47 2023-09-12 11:05:24 2023-09-12 11:44:34 0:39:10 0:29:30 0:09:40 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"/var/log/ceph/ec9e1bdc-515e-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi070.log:2023-09-12T11:29:45.576+0000 7f11059b9700 0 log_channel(cluster) log [WRN] : Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

fail 7394281 2023-09-12 04:25:48 2023-09-12 11:05:24 2023-09-12 11:40:14 0:34:50 0:22:58 0:11:52 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

"/var/log/ceph/91808a1e-515e-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi144.log:2023-09-12T11:32:05.741+0000 7ff8909d9700 10 mon.smithi144@0(leader).log v345 logging 2023-09-12T11:32:05.020580+0000 mgr.smithi144.tgzbpw (mgr.14195) 261 : cephadm [ERR] Failed while placing nfs.foo.0.0.smithi144.jimybq on smithi144: grace tool failed: rados_pool_create: -1" in cluster log

fail 7394282 2023-09-12 04:25:48 2023-09-12 11:05:55 2023-09-12 12:24:54 1:18:59 1:04:26 0:14:33 smithi main ubuntu 20.04 orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

"/var/log/ceph/2dbaef32-515f-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:35:18.896+0000 7f2c09aee700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7394283 2023-09-12 04:25:49 2023-09-12 11:08:36 2023-09-12 11:34:21 0:25:45 0:16:22 0:09:23 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} 1
Failure Reason:

"/var/log/ceph/771bcde0-515f-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:30:31.659+0000 7f43e37a5700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7394284 2023-09-12 04:25:50 2023-09-12 11:09:16 2023-09-12 11:58:48 0:49:32 0:37:48 0:11:44 smithi main ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rotate-keys} 2
Failure Reason:

"/var/log/ceph/5fcedad8-515f-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:39:17.449+0000 7f015ec07700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394285 2023-09-12 04:25:51 2023-09-12 11:10:46 2023-09-12 11:42:16 0:31:30 0:15:59 0:15:31 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

"/var/log/ceph/24fe2232-5160-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi072.log:2023-09-12T11:37:53.555+0000 7f8c1b5f0700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394286 2023-09-12 04:25:51 2023-09-12 11:16:38 2023-09-12 12:07:31 0:50:53 0:39:24 0:11:29 smithi main ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

"/var/log/ceph/03835b0e-5160-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:39:07.355+0000 7f5252aec700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 7394287 2023-09-12 04:25:52 2023-09-12 11:17:38 2023-09-12 11:42:25 0:24:47 0:17:51 0:06:56 smithi main rhel 8.6 orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 7394288 2023-09-12 04:25:53 2023-09-12 11:17:48 2023-09-12 11:41:07 0:23:19 0:17:25 0:05:54 smithi main rhel 8.6 orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

"/var/log/ceph/055d3e54-5160-11ee-9ab7-7b867c8bd7da/ceph-mon.c.log:2023-09-12T11:36:53.386+0000 7f4f9cd80700 7 mon.c@2(peon).log v161 update_from_paxos applying incremental log 161 2023-09-12T11:36:52.373122+0000 mon.a (mon.0) 526 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7394289 2023-09-12 04:25:53 2023-09-12 11:17:59 2023-09-12 11:45:19 0:27:20 0:17:39 0:09:41 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli} 1
Failure Reason:

"/var/log/ceph/07352e66-5161-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:43:43.561+0000 7f549499e700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7394290 2023-09-12 04:25:54 2023-09-12 11:17:59 2023-09-12 11:45:57 0:27:58 0:14:49 0:13:09 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

"/var/log/ceph/bda19c3a-5160-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi174.log:2023-09-12T11:42:15.571+0000 7f3aeb37e700 10 mon.smithi174@0(leader).log v189 logging 2023-09-12T11:42:14.648121+0000 mgr.smithi174.wreiin (mgr.14217) 151 : cephadm [ERR] Failed while placing nfs.foo.0.0.smithi174.vduwqf on smithi174: grace tool failed: rados_pool_create: -1" in cluster log

fail 7394291 2023-09-12 04:25:55 2023-09-12 11:20:30 2023-09-12 12:02:04 0:41:34 0:30:50 0:10:44 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

"/var/log/ceph/374f4fe6-5161-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:47:24.657+0000 7f4b8cbb4700 -1 log_channel(cluster) log [ERR] : Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 7394292 2023-09-12 04:25:56 2023-09-12 11:20:30 2023-09-12 12:06:59 0:46:29 0:36:29 0:10:00 smithi main centos 8.stream orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"/var/log/ceph/44e47528-5161-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:50:00.248+0000 7fc2f5298700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log

fail 7394293 2023-09-12 04:25:56 2023-09-12 11:21:51 2023-09-12 11:47:26 0:25:35 0:19:21 0:06:14 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

"/var/log/ceph/b961f110-5160-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi124.log:2023-09-12T11:43:49.650+0000 7f8f59f72700 10 mon.smithi124@0(leader).log v242 logging 2023-09-12T11:43:49.197810+0000 mgr.smithi124.adffop (mgr.14219) 172 : cephadm [ERR] Failed while placing nfs.foo.0.0.smithi124.kyryah on smithi124: grace tool failed: rados_pool_create: -1" in cluster log

fail 7394294 2023-09-12 04:25:57 2023-09-12 11:22:21 2023-09-12 12:00:03 0:37:42 0:25:17 0:12:25 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"/var/log/ceph/eeb48ebc-5161-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:48:52.754+0000 7f921a9b9700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/4 mons down, quorum a,e,c" in cluster log

fail 7394295 2023-09-12 04:25:58 2023-09-12 11:24:22 2023-09-12 12:05:50 0:41:28 0:30:10 0:11:18 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"/var/log/ceph/bb623a78-5161-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi173.log:2023-09-12T11:49:54.427+0000 7ff70d732700 0 log_channel(cluster) log [WRN] : Replacing daemon mds.cephfs.smithi173.ajcemi as rank 0 with standby daemon mds.cephfs.smithi204.wzgsms" in cluster log

fail 7394296 2023-09-12 04:25:58 2023-09-12 11:24:32 2023-09-12 11:48:37 0:24:05 0:16:53 0:07:12 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

"/var/log/ceph/1b1fb72a-5161-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi022.log:2023-09-12T11:45:36.264+0000 7f20ab43b700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394297 2023-09-12 04:25:59 2023-09-12 11:25:22 2023-09-12 11:59:54 0:34:32 0:22:58 0:11:34 smithi main centos 8.stream orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

"/var/log/ceph/e106a9e4-5161-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:49:59.999+0000 7f6c2a724700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled" in cluster log

fail 7394298 2023-09-12 04:26:00 2023-09-12 11:25:33 2023-09-12 11:50:22 0:24:49 0:13:00 0:11:49 smithi main centos 8.stream orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 3
Failure Reason:

"/var/log/ceph/910551ca-5161-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:45:36.008+0000 7f007b287700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log

fail 7394299 2023-09-12 04:26:01 2023-09-12 11:26:54 2023-09-12 11:52:30 0:25:36 0:17:10 0:08:26 smithi main rhel 8.6 orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"/var/log/ceph/9f8d634a-5161-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi089.log:2023-09-12T11:49:47.669+0000 7f85ea328700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7394300 2023-09-12 04:26:01 2023-09-12 11:29:24 2023-09-12 12:06:37 0:37:13 0:25:10 0:12:03 smithi main ubuntu 20.04 orch:cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"/var/log/ceph/f61a15dc-5161-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T11:58:50.004+0000 7fa1d2104700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7394301 2023-09-12 04:26:02 2023-09-12 11:30:25 2023-09-12 12:01:39 0:31:14 0:20:09 0:11:05 smithi main rhel 8.6 orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_rgw_multisite} 3
fail 7394302 2023-09-12 04:26:03 2023-09-12 11:34:26 2023-09-12 12:08:49 0:34:23 0:18:54 0:15:29 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
Failure Reason:

"/var/log/ceph/11eafc08-5163-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi050.log:2023-09-12T12:00:33.326+0000 7f49c044c700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394303 2023-09-12 04:26:04 2023-09-12 11:38:47 2023-09-12 12:01:32 0:22:45 0:11:03 0:11:42 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

No module named 'tasks.cephadm'

fail 7394304 2023-09-12 04:26:04 2023-09-12 11:40:17 2023-09-12 12:08:30 0:28:13 0:17:18 0:10:55 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

"/var/log/ceph/7ebb7074-5163-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi084.log:2023-09-12T12:01:17.431+0000 7f6565339700 0 log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 7394305 2023-09-12 04:26:05 2023-09-12 11:41:18 2023-09-12 12:31:50 0:50:32 0:39:02 0:11:30 smithi main centos 8.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

Command failed on smithi083 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c98c776a-5163-11ee-9ab7-7b867c8bd7da -e sha1=1842449fc100440b6d2e1a58d51722b1be8353c4 -- bash -c \'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e \'"\'"\'.up_to_date | length == 7\'"\'"\'\''
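
As with job 7394272 above, the nested quoting unwraps to roughly:

    # 'ceph orch upgrade check <image>' reports upgrade readiness as JSON;
    # '.up_to_date' should be the list of daemons already on the target
    # image. The staggered plan expects exactly 7 at this point.
    ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 7'

(The '.up_to_date' field name is taken from the command line above; exit status 1 means the count differed from the expected 7.)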

fail 7394306 2023-09-12 04:26:06 2023-09-12 11:42:18 2023-09-12 12:21:56 0:39:38 0:31:49 0:07:49 smithi main rhel 8.6 orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} 2
Failure Reason:

"/var/log/ceph/6a155f9e-5164-11ee-9ab7-7b867c8bd7da/ceph-mon.c.log:2023-09-12T12:07:50.097+0000 7fe3f0e50700 7 mon.c@2(peon).log v128 update_from_paxos applying incremental log 128 2023-09-12T12:07:49.090323+0000 mon.a (mon.0) 381 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394307 2023-09-12 04:26:06 2023-09-12 11:42:18 2023-09-12 12:12:35 0:30:17 0:23:13 0:07:04 smithi main rhel 8.6 orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
Failure Reason:

"/var/log/ceph/74282ef8-5164-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T12:07:51.937+0000 7f9fbddc3700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7394308 2023-09-12 04:26:07 2023-09-12 11:43:29 2023-09-12 12:02:44 0:19:15 0:08:12 0:11:03 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

"grep: /var/log/ceph/ea61ca94-5163-11ee-9ab7-7b867c8bd7da/: No such file or directory" in cluster log

fail 7394309 2023-09-12 04:26:08 2023-09-12 11:43:29 2023-09-12 12:03:36 0:20:07 0:09:56 0:10:11 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi070 with status 1: 'sudo yum -y install cephfs-java'

fail 7394310 2023-09-12 04:26:09 2023-09-12 11:44:40 2023-09-12 12:03:52 0:19:12 0:09:42 0:09:30 smithi main rhel 8.6 orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

No module named 'tasks.nvme_loop'

fail 7394311 2023-09-12 04:26:09 2023-09-12 11:46:00 2023-09-12 12:04:22 0:18:22 0:07:29 0:10:53 smithi main centos 8.stream orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

No module named 'tasks.nvme_loop'
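
The three "No module named" failures (7394303, 7394310, and this job) abort before any cluster work starts: teuthology imports each entry under tasks: in the job YAML from the qa/tasks package of the branch under test, so the error suggests the module was absent from (or not yet synced into) that checkout. A hypothetical spot check against the suite checkout (path assumed, not taken from the run):

    # Hypothetical: verify the task modules exist in the ceph checkout
    # the scheduler used; their absence reproduces "No module named".
    ls qa/tasks/nvme_loop.py qa/tasks/cephadm.py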

pass 7394312 2023-09-12 04:26:10 2023-09-12 11:47:31 2023-09-12 12:13:47 0:26:16 0:18:51 0:07:25 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/basic 3-final} 2
fail 7394313 2023-09-12 04:26:11 2023-09-12 11:48:41 2023-09-12 12:41:40 0:52:59 0:44:20 0:08:39 smithi main rhel 8.6 orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

"/var/log/ceph/9c2dbcc8-5165-11ee-9ab7-7b867c8bd7da/ceph-mon.c.log:2023-09-12T12:27:29.343+0000 7fb458786700 7 mon.c@2(peon).log v691 update_from_paxos applying incremental log 691 2023-09-12T12:27:28.350301+0000 mon.a (mon.0) 2807 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 7394314 2023-09-12 04:26:12 2023-09-12 11:50:32 2023-09-12 12:39:45 0:49:13 0:43:01 0:06:12 smithi main rhel 8.6 orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"/var/log/ceph/9011edb0-5165-11ee-9ab7-7b867c8bd7da/ceph-mon.c.log:2023-09-12T12:20:03.076+0000 7f023f8e3700 7 mon.c@2(peon).log v305 update_from_paxos applying incremental log 305 2023-09-12T12:20:01.738779+0000 mon.a (mon.0) 913 : cluster [WRN] Health detail: HEALTH_WARN 10 pool(s) do not have an application enabled" in cluster log

pass 7394315 2023-09-12 04:26:12 2023-09-12 11:50:32 2023-09-12 12:15:28 0:24:56 0:17:35 0:07:21 smithi main rhel 8.6 orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_adoption} 1
pass 7394316 2023-09-12 04:26:13 2023-09-12 12:03:26 2023-09-12 12:26:10 0:22:44 0:16:17 0:06:27 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
fail 7394317 2023-09-12 04:26:14 2023-09-12 12:03:26 2023-09-12 12:29:44 0:26:18 0:16:21 0:09:57 smithi main ubuntu 20.04 orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_ca_signed_key} 2
Failure Reason:

"/var/log/ceph/cb3d5c52-5166-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T12:25:38.178+0000 7f9a0f48f700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7394318 2023-09-12 04:26:14 2023-09-12 12:03:47 2023-09-12 12:46:16 0:42:29 0:31:09 0:11:20 smithi main ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

"/var/log/ceph/f348e5c2-5166-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T12:39:59.994+0000 7f5b8bde9700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled" in cluster log

fail 7394319 2023-09-12 04:26:15 2023-09-12 12:03:57 2023-09-12 12:34:25 0:30:28 0:19:11 0:11:17 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

"/var/log/ceph/9b09900a-5166-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi124.log:2023-09-12T12:29:08.131+0000 7f450294c700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7394320 2023-09-12 04:26:16 2023-09-12 12:04:27 2023-09-12 12:35:15 0:30:48 0:18:26 0:12:22 smithi main centos 8.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_8.stream_container_tools_crun} 2-node-mgr agent/on orchestrator_cli} 2
fail 7394321 2023-09-12 04:26:16 2023-09-12 12:05:58 2023-09-12 12:29:11 0:23:13 0:14:53 0:08:20 smithi main centos 8.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/rgw 3-final} 1
Failure Reason:

"/var/log/ceph/06655bc2-5167-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi033.log:2023-09-12T12:27:07.652+0000 7fcf6fd19700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394322 2023-09-12 04:26:17 2023-09-12 12:06:38 2023-09-12 12:30:20 0:23:42 0:13:00 0:10:42 smithi main centos 8.stream orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/classic start} 3
Failure Reason:

"/var/log/ceph/3996a7f8-5167-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T12:27:20.497+0000 7f6a52d2f700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394323 2023-09-12 04:26:18 2023-09-12 12:07:09 2023-09-12 13:04:39 0:57:30 0:51:32 0:05:58 smithi main rhel 8.6 orch:cephadm/thrash/{0-distro/rhel_8.6_container_tools_rhel8 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

"/var/log/ceph/886cf8aa-5167-11ee-9ab7-7b867c8bd7da/ceph-mon.c.log:2023-09-12T12:33:06.434+0000 7f177e8a8700 7 mon.c@2(peon).log v283 update_from_paxos applying incremental log 283 2023-09-12T12:33:05.415339+0000 mon.a (mon.0) 830 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7394324 2023-09-12 04:26:19 2023-09-12 12:07:39 2023-09-12 12:47:38 0:39:59 0:29:27 0:10:32 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/jaeger 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7394325 2023-09-12 04:26:19 2023-09-12 12:08:40 2023-09-12 12:57:04 0:48:24 0:37:33 0:10:51 smithi main ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
Failure Reason:

"/var/log/ceph/44dfa862-5167-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T12:31:12.926+0000 7f888af54700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7394326 2023-09-12 04:26:20 2023-09-12 12:08:50 2023-09-12 12:41:03 0:32:13 0:17:53 0:14:20 smithi main ubuntu 20.04 orch:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"/var/log/ceph/d93a7046-5167-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi110.log:2023-09-12T12:37:47.531+0000 7fa895508700 10 mon.smithi110@0(leader).log v322 logging 2023-09-12T12:37:46.755083+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running" in cluster log

fail 7394327 2023-09-12 04:26:21 2023-09-12 12:12:41 2023-09-12 12:39:05 0:26:24 0:15:34 0:10:50 smithi main centos 8.stream orch:cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

"/var/log/ceph/40dc75be-5168-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T12:35:52.207+0000 7f92a82e2700 0 log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7394328 2023-09-12 04:26:22 2023-09-12 12:13:51 2023-09-12 12:40:44 0:26:53 0:17:30 0:09:23 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_cephadm} 1
fail 7394329 2023-09-12 04:26:22 2023-09-12 12:13:52 2023-09-12 12:57:36 0:43:44 0:25:54 0:17:50 smithi main centos 8.stream orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} 2
Failure Reason:

"/var/log/ceph/c591eb44-5169-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T12:55:01.288+0000 7ffa36050700 10 mon.a@0(leader).log v539 logging 2023-09-12T12:55:00.290856+0000 mgr.x (mgr.24472) 28 : cephadm [ERR] Non-zero return from ['ceph', '-k', '/var/lib/ceph/mgr/ceph-x/keyring', '-n', 'mgr.x', 'tell', 'mgr.x', 'rotate-key', '-i', '-']: 2023-09-12T12:55:00.284+0000 7f0205ffb700 1 -- 172.21.15.89:0/1974329807 <== mon.1 v2:172.21.15.89:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (secure 0 0 0) 0x7f01f0003770 con 0x7f020810ca80" in cluster log

fail 7394330 2023-09-12 04:26:23 2023-09-12 12:22:03 2023-09-12 13:06:16 0:44:13 0:30:29 0:13:44 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"/var/log/ceph/32c6d864-516a-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi134.log:2023-09-12T12:51:04.702+0000 7f584477f700 0 log_channel(cluster) log [WRN] : Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

pass 7394331 2023-09-12 04:26:24 2023-09-12 12:47:44 2023-09-12 13:18:01 0:30:17 0:15:35 0:14:42 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/mirror 3-final} 2
pass 7394332 2023-09-12 04:26:24 2023-09-12 12:52:55 2023-09-12 13:16:07 0:23:12 0:09:22 0:13:50 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_cephadm_repos} 1
pass 7394333 2023-09-12 04:26:25 2023-09-12 12:57:06 2023-09-12 13:24:03 0:26:57 0:20:19 0:06:38 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} 2
fail 7394334 2023-09-12 04:26:26 2023-09-12 12:57:06 2023-09-12 13:52:20 0:55:14 0:42:50 0:12:24 smithi main ubuntu 20.04 orch:cephadm/thrash/{0-distro/ubuntu_20.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

"/var/log/ceph/72b3bb64-516e-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T13:19:15.251+0000 7f60fc8aa700 0 log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

fail 7394335 2023-09-12 04:26:27 2023-09-12 12:57:37 2023-09-12 13:45:06 0:47:29 0:37:40 0:09:49 smithi main centos 8.stream orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools_crun fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"/var/log/ceph/d871d5e4-516e-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T13:29:59.998+0000 7f19e1006700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 cache pools at or near target size; 1 pool(s) do not have an application enabled" in cluster log

fail 7394336 2023-09-12 04:26:27 2023-09-12 12:57:37 2023-09-12 13:22:32 0:24:55 0:18:10 0:06:45 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

"/var/log/ceph/ecaf1c0c-516d-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi089.log:2023-09-12T13:17:47.988+0000 7f4bf08ec700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7394337 2023-09-12 04:26:28 2023-09-12 12:57:47 2023-09-12 13:26:02 0:28:15 0:14:17 0:13:58 smithi main centos 8.stream orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7394338 2023-09-12 04:26:29 2023-09-12 13:00:48 2023-09-12 13:26:34 0:25:46 0:19:20 0:06:26 smithi main rhel 8.6 orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

"/var/log/ceph/8f671eea-516e-11ee-9ab7-7b867c8bd7da/ceph-mon.c.log:2023-09-12T13:19:41.109+0000 7fc7bda60700 7 mon.c@2(peon).log v103 update_from_paxos applying incremental log 103 2023-09-12T13:19:40.104857+0000 mon.a (mon.0) 375 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7394339 2023-09-12 04:26:30 2023-09-12 13:01:28 2023-09-12 13:29:34 0:28:06 0:20:01 0:08:05 smithi main rhel 8.6 orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_extra_daemon_features} 2
fail 7394340 2023-09-12 04:26:30 2023-09-12 13:02:19 2023-09-12 13:34:25 0:32:06 0:21:02 0:11:04 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

"/var/log/ceph/b086790e-516e-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi138.log:2023-09-12T13:28:20.712+0000 7fa6f65e8700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394341 2023-09-12 04:26:31 2023-09-12 13:02:19 2023-09-12 13:37:43 0:35:24 0:27:00 0:08:24 smithi main rhel 8.6 orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

"/var/log/ceph/44d35a14-516f-11ee-9ab7-7b867c8bd7da/ceph-mon.c.log:2023-09-12T13:30:00.145+0000 7fc8cda5e700 7 mon.c@2(peon).log v330 update_from_paxos applying incremental log 330 2023-09-12T13:30:00.000100+0000 mon.a (mon.0) 864 : cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log

fail 7394342 2023-09-12 04:26:32 2023-09-12 13:58:15 2023-09-12 14:41:30 0:43:15 0:32:00 0:11:15 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"/var/log/ceph/8c91e0f2-5177-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi022.log:2023-09-12T14:26:24.815+0000 7fb68ad23700 0 log_channel(cluster) log [WRN] : Replacing daemon mds.cephfs.smithi022.vtbsdv as rank 0 with standby daemon mds.cephfs.smithi155.jaqian" in cluster log

fail 7394343 2023-09-12 04:26:32 2023-09-12 14:00:05 2023-09-12 14:25:21 0:25:16 0:16:31 0:08:45 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

"/var/log/ceph/e00a360e-5176-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi124.log:2023-09-12T14:21:13.526+0000 7f36b4cbd700 10 mon.smithi124@0(leader).log v203 logging 2023-09-12T14:21:12.701233+0000 mgr.smithi124.dijuld (mgr.14219) 152 : cephadm [ERR] Failed while placing nfs.foo.0.0.smithi124.ugeace on smithi124: grace tool failed: rados_pool_create: -1" in cluster log

fail 7394344 2023-09-12 04:26:33 2023-09-12 14:00:16 2023-09-12 14:26:55 0:26:39 0:12:35 0:14:04 smithi main centos 8.stream orch:cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
Failure Reason:

"/var/log/ceph/7bd68d80-5177-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T14:22:25.971+0000 7f5a33f2b700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log

fail 7394345 2023-09-12 04:26:34 2023-09-12 14:03:16 2023-09-12 14:50:34 0:47:18 0:37:15 0:10:03 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

"/var/log/ceph/d8d31256-5177-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T14:24:47.944+0000 7f5d4dab2700 0 log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

fail 7394346 2023-09-12 04:26:35 2023-09-12 14:03:27 2023-09-12 14:44:04 0:40:37 0:27:35 0:13:02 smithi main centos 8.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

"/var/log/ceph/e2355048-5177-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T14:24:49.617+0000 7f1a2fe2f700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log

fail 7394347 2023-09-12 04:26:35 2023-09-12 14:06:57 2023-09-12 14:35:03 0:28:06 0:16:18 0:11:48 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} 1
Failure Reason:

"/var/log/ceph/bcc0943e-5178-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-12T14:32:14.543+0000 7f06b93e2700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394348 2023-09-12 04:26:36 2023-09-13 03:07:06 2023-09-13 03:45:55 0:38:49 0:30:01 0:08:48 smithi main rhel 8.6 orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} 2
Failure Reason:

"/var/log/ceph/87e5183a-51e5-11ee-9ab7-7b867c8bd7da/ceph-mon.c.log:2023-09-13T03:34:01.659+0000 7fd7fbc10700 7 mon.c@2(peon).log v209 update_from_paxos applying incremental log 209 2023-09-13T03:34:00.674941+0000 mon.a (mon.0) 637 : cluster [WRN] Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394349 2023-09-12 04:26:37 2023-09-13 03:08:56 2023-09-13 03:36:20 0:27:24 0:17:25 0:09:59 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

"/var/log/ceph/512082da-51e5-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi052.log:2023-09-13T03:30:28.667+0000 7fc6355ad700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7394350 2023-09-12 04:26:37 2023-09-13 03:08:57 2023-09-13 03:33:52 0:24:55 0:13:44 0:11:11 smithi main centos 8.stream orch:cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 7394351 2023-09-12 04:26:38 2023-09-13 03:10:47 2023-09-13 03:36:12 0:25:25 0:17:39 0:07:46 smithi main rhel 8.6 orch:cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

"/var/log/ceph/2ec195b2-51e5-11ee-9ab7-7b867c8bd7da/ceph-mon.c.log:2023-09-13T03:31:00.240+0000 7fe94984a700 7 mon.c@2(peon).log v207 update_from_paxos applying incremental log 207 2023-09-13T03:30:59.736536+0000 mon.a (mon.0) 649 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394352 2023-09-12 04:26:39 2023-09-13 03:10:58 2023-09-13 03:41:19 0:30:21 0:18:26 0:11:55 smithi main ubuntu 20.04 orch:cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_orch_cli} 1
Failure Reason:

"/var/log/ceph/97efa06a-51e5-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-13T03:34:16.319+0000 7f2739e74700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7394353 2023-09-12 04:26:40 2023-09-13 03:11:38 2023-09-13 03:39:14 0:27:36 0:19:39 0:07:57 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

"/var/log/ceph/705ec85a-51e5-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi029.log:2023-09-13T03:34:25.237+0000 7f3345258700 10 mon.smithi029@0(leader).log v255 logging 2023-09-13T03:34:24.769508+0000 mgr.smithi029.rggysm (mgr.14223) 184 : cephadm [ERR] Failed while placing nfs.foo.0.0.smithi029.xbwish on smithi029: grace tool failed: rados_pool_create: -1" in cluster log

fail 7394354 2023-09-12 04:26:40 2023-09-13 03:11:38 2023-09-13 04:00:33 0:48:55 0:37:01 0:11:54 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools_crun 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

"/var/log/ceph/61fbd414-51e6-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-13T03:38:10.555+0000 7fdcc7373700 0 log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log

fail 7394355 2023-09-12 04:26:41 2023-09-13 03:12:59 2023-09-13 04:11:18 0:58:19 0:43:17 0:15:02 smithi main ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"/var/log/ceph/62adb698-51e6-11ee-9ab7-7b867c8bd7da/ceph-mon.a.log:2023-09-13T03:37:42.751+0000 7fb80c795700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7394356 2023-09-12 04:26:42 2023-09-13 03:17:20 2023-09-13 03:45:40 0:28:20 0:18:09 0:10:11 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

"/var/log/ceph/8580f644-51e6-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi122.log:2023-09-13T03:40:53.272+0000 7f6ae52af700 10 mon.smithi122@0(leader).log v209 logging 2023-09-13T03:40:52.441837+0000 mgr.smithi122.gelshg (mgr.14207) 160 : cephadm [ERR] Failed while placing nfs.foo.0.0.smithi122.ofqams on smithi122: grace tool failed: rados_pool_create: -1" in cluster log

fail 7394357 2023-09-12 04:26:42 2023-09-13 03:20:30 2023-09-13 03:56:38 0:36:08 0:25:16 0:10:52 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"/var/log/ceph/991e30bc-51e7-11ee-9ab8-7b867c8bd7da/ceph-mon.a.log:2023-09-13T03:45:27.809+0000 7fc21a1a9700 0 log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum a,e" in cluster log

fail 7394358 2023-09-12 04:26:43 2023-09-13 03:21:11 2023-09-13 04:03:22 0:42:11 0:31:42 0:10:29 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"/var/log/ceph/94ea30ea-51e7-11ee-9ab8-7b867c8bd7da/ceph-mon.smithi053.log:2023-09-13T03:48:36.463+0000 7fc33b8ab700 0 log_channel(cluster) log [WRN] : Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

fail 7394359 2023-09-12 04:26:44 2023-09-13 03:22:11 2023-09-13 03:53:12 0:31:01 0:18:52 0:12:09 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

"/var/log/ceph/11060632-51e7-11ee-9ab7-7b867c8bd7da/ceph-mon.smithi092.log:2023-09-13T03:45:00.711+0000 7f8527dd5700 0 log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7394360 2023-09-12 04:26:45 2023-09-13 03:23:02 2023-09-13 04:01:19 0:38:17 0:23:15 0:15:02 smithi main centos 8.stream orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
fail 7394361 2023-09-12 04:26:45 2023-09-13 03:28:13 2023-09-13 03:55:23 0:27:10 0:18:45 0:08:25 smithi main rhel 8.6 orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"/var/log/ceph/ebe1be9a-51e7-11ee-9ab8-7b867c8bd7da/ceph-mon.smithi130.log:2023-09-13T03:51:59.401+0000 7f6f48150700 0 log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log