Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7759388 2024-06-17 11:21:30 2024-06-17 11:27:16 2024-06-17 11:46:03 0:18:47 0:09:27 0:09:20 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi042 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9f334e96-2c9e-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi042:vg_nvme/lv_4'

fail 7759389 2024-06-17 11:21:31 2024-06-17 11:27:16 2024-06-17 11:52:29 0:25:13 0:10:26 0:14:47 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
Failure Reason:

Command failed on smithi027 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 610619fe-2c9f-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi027:vg_nvme/lv_4'

fail 7759390 2024-06-17 11:21:33 2024-06-17 11:30:49 2024-06-17 12:01:48 0:30:59 0:21:06 0:09:53 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"1718624959.9781818 mgr.smithi154.hsyxhd (mgr.14234) 1 : cluster [ERR] Failed to load ceph-mgr modules: k8sevents" in cluster log

pass 7759391 2024-06-17 11:21:34 2024-06-17 11:30:49 2024-06-17 12:24:03 0:53:14 0:43:31 0:09:43 smithi main centos 9.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7759392 2024-06-17 11:21:35 2024-06-17 11:31:00 2024-06-17 11:50:06 0:19:06 0:09:54 0:09:12 smithi main centos 9.stream orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignore_mgr_down ignorelist_health pg_health} supported-random-distros$/{centos_latest} tasks/nfs} 1
Failure Reason:

Command failed on smithi122 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 337829b4-2c9f-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi122:vg_nvme/lv_4'

fail 7759393 2024-06-17 11:21:36 2024-06-17 11:31:00 2024-06-17 11:49:55 0:18:55 0:09:10 0:09:45 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} 1
Failure Reason:

Command failed on smithi150 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 287c4b8a-2c9f-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi150:vg_nvme/lv_4'

pass 7759394 2024-06-17 11:21:38 2024-06-17 11:31:00 2024-06-17 11:57:46 0:26:46 0:14:55 0:11:51 smithi main ubuntu 22.04 orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/off orchestrator_cli} 2
fail 7759395 2024-06-17 11:21:39 2024-06-17 11:32:01 2024-06-17 12:17:59 0:45:58 0:31:34 0:14:24 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-06-17T11:57:29.051618+0000 mon.smithi003 (mon.0) 357 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759396 2024-06-17 11:21:40 2024-06-17 11:36:32 2024-06-17 11:59:50 0:23:18 0:08:09 0:15:09 smithi main centos 9.stream orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
Failure Reason:

Command failed on smithi119 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7ca5ab9c-2ca0-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi119:vg_nvme/lv_4'

fail 7759397 2024-06-17 11:21:42 2024-06-17 11:42:53 2024-06-17 12:05:50 0:22:57 0:09:40 0:13:17 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_domain} 2
Failure Reason:

Config file not found: "/home/teuthworker/src/git.ceph.com_ceph-c_ad8002d393bbf6cd8063453a846731ff25274473/qa/tasks/cephadm.conf".

fail 7759398 2024-06-17 11:21:43 2024-06-17 11:46:14 2024-06-17 12:13:03 0:26:49 0:16:31 0:10:18 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

"2024-06-17T12:03:40.146853+0000 mon.smithi007 (mon.0) 373 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759399 2024-06-17 11:21:44 2024-06-17 11:46:34 2024-06-17 12:03:27 0:16:53 0:07:10 0:09:43 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} 1
Failure Reason:

Command failed on smithi181 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 05cc84d6-2ca1-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi181:vg_nvme/lv_4'

fail 7759400 2024-06-17 11:21:46 2024-06-17 11:46:34 2024-06-17 12:06:01 0:19:27 0:07:59 0:11:28 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 3
Failure Reason:

Command failed on smithi077 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 77ac649a-2ca1-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi077:/dev/nvme4n1'

fail 7759401 2024-06-17 11:21:47 2024-06-17 11:48:55 2024-06-17 12:12:02 0:23:07 0:12:33 0:10:34 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi002 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1180b22e-2ca2-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi002:vg_nvme/lv_4'

fail 7759402 2024-06-17 11:21:48 2024-06-17 11:49:16 2024-06-17 12:47:04 0:57:48 0:46:57 0:10:51 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

"2024-06-17T12:26:37.155484+0000 mon.a (mon.0) 1001 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7759403 2024-06-17 11:21:50 2024-06-17 11:49:56 2024-06-17 12:13:45 0:23:49 0:10:11 0:13:38 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} 2
Failure Reason:

Command failed on smithi120 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5c6aa97a-2ca2-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi120:vg_nvme/lv_4'

fail 7759404 2024-06-17 11:21:51 2024-06-17 11:52:37 2024-06-17 12:12:15 0:19:38 0:09:41 0:09:57 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
Failure Reason:

Command failed on smithi027 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 73841d62-2ca2-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi027:vg_nvme/lv_4'

fail 7759405 2024-06-17 11:21:52 2024-06-17 11:52:37 2024-06-17 12:23:49 0:31:12 0:15:55 0:15:17 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

"2024-06-17T12:14:26.711152+0000 mon.smithi122 (mon.0) 369 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759406 2024-06-17 11:21:54 2024-06-17 11:56:48 2024-06-17 12:38:44 0:41:56 0:31:12 0:10:44 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

"2024-06-17T12:18:32.752604+0000 mon.smithi018 (mon.0) 360 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759407 2024-06-17 11:21:55 2024-06-17 11:57:08 2024-06-17 12:27:04 0:29:56 0:16:57 0:12:59 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-06-17T12:17:47.469386+0000 mon.smithi001 (mon.0) 374 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759408 2024-06-17 11:21:56 2024-06-17 11:57:49 2024-06-17 12:14:09 0:16:20 0:04:18 0:12:02 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_basic} 2
Failure Reason:

No module named 'tasks'

fail 7759409 2024-06-17 11:21:58 2024-06-17 11:58:59 2024-06-17 12:20:56 0:21:57 0:12:09 0:09:48 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi073 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 37a6b150-2ca3-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi073:/dev/nvme4n1'

fail 7759410 2024-06-17 11:21:59 2024-06-17 11:59:30 2024-06-17 12:16:50 0:17:20 0:07:05 0:10:15 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

No module named 'tasks'

fail 7759411 2024-06-17 11:22:00 2024-06-17 12:00:00 2024-06-17 12:23:25 0:23:25 0:13:40 0:09:45 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi084 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8deaba70-2ca3-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi084:vg_nvme/lv_4'

fail 7759412 2024-06-17 11:22:01 2024-06-17 12:00:00 2024-06-17 12:28:22 0:28:22 0:14:47 0:13:35 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} 3
Failure Reason:

Command failed on smithi039 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2142592c-2ca4-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi039:vg_nvme/lv_4'

fail 7759413 2024-06-17 11:22:03 2024-06-17 12:01:51 2024-06-17 13:06:22 1:04:31 0:53:40 0:10:51 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

dead 7759414 2024-06-17 11:22:04 2024-06-17 12:03:32 2024-06-17 12:06:56 0:03:24 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi042

dead 7759415 2024-06-17 11:22:05 2024-06-17 12:05:52 2024-06-17 12:13:07 0:07:15 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

Error reimaging machines: Failed to power on smithi002

fail 7759416 2024-06-17 11:22:07 2024-06-17 12:12:04 2024-06-17 12:40:40 0:28:36 0:16:57 0:11:39 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

"2024-06-17T12:31:52.856356+0000 mon.smithi007 (mon.0) 371 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759417 2024-06-17 11:22:08 2024-06-17 12:13:04 2024-06-17 12:33:05 0:20:01 0:09:00 0:11:01 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi120 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 264bde9c-2ca5-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi120:vg_nvme/lv_4'

fail 7759418 2024-06-17 11:22:09 2024-06-17 12:13:55 2024-06-17 12:35:46 0:21:51 0:10:19 0:11:32 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi053 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6083b01c-2ca5-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi053:vg_nvme/lv_4'

fail 7759419 2024-06-17 11:22:12 2024-06-17 12:14:15 2024-06-17 12:43:48 0:29:33 0:16:51 0:12:42 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-06-17T12:34:15.040479+0000 mon.smithi135 (mon.0) 372 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759420 2024-06-17 11:22:13 2024-06-17 12:16:56 2024-06-17 12:34:11 0:17:15 0:07:35 0:09:40 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_res_dom} 2
Failure Reason:

Command failed on smithi077 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7d1d9d78-2ca5-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi077:vg_nvme/lv_4'

fail 7759421 2024-06-17 11:22:15 2024-06-17 12:17:16 2024-06-17 12:40:44 0:23:28 0:14:27 0:09:01 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi003 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 03d58b50-2ca6-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi003:vg_nvme/lv_4'

fail 7759422 2024-06-17 11:22:16 2024-06-17 12:17:17 2024-06-17 12:47:56 0:30:39 0:21:14 0:09:25 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"1718627755.4273503 mgr.smithi045.awenxu (mgr.14234) 1 : cluster [ERR] Failed to load ceph-mgr modules: k8sevents" in cluster log

fail 7759423 2024-06-17 11:22:17 2024-06-17 12:17:17 2024-06-17 12:44:11 0:26:54 0:16:35 0:10:19 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

"2024-06-17T12:35:01.780462+0000 mon.smithi042 (mon.0) 371 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759424 2024-06-17 11:22:18 2024-06-17 12:17:17 2024-06-17 12:37:32 0:20:15 0:07:54 0:12:21 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 3
Failure Reason:

Command failed on smithi046 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid df4b0f6c-2ca5-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi046:/dev/nvme4n1'

fail 7759425 2024-06-17 11:22:20 2024-06-17 12:43:57 2024-06-17 13:37:54 0:53:57 0:44:47 0:09:10 smithi main ubuntu 22.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

"2024-06-17T13:24:20.876571+0000 mon.a (mon.0) 813 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

dead 7759427 2024-06-17 11:22:21 2024-06-17 12:44:17 2024-06-17 12:46:31 0:02:14 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi105

dead 7759429 2024-06-17 11:22:22 2024-06-17 12:45:28 2024-06-17 12:49:02 0:03:34 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi002

fail 7759431 2024-06-17 11:22:24 2024-06-17 12:47:58 2024-06-17 13:13:21 0:25:23 0:16:23 0:09:00 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

"2024-06-17T13:05:38.599139+0000 mon.smithi105 (mon.0) 373 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

pass 7759432 2024-06-17 11:22:25 2024-06-17 12:48:19 2024-06-17 13:09:38 0:21:19 0:12:13 0:09:06 smithi main ubuntu 22.04 orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_adoption} 1
fail 7759433 2024-06-17 11:22:26 2024-06-17 12:48:19 2024-06-17 13:15:26 0:27:07 0:17:31 0:09:36 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} 2
Failure Reason:

"2024-06-17T13:07:44.967102+0000 mon.smithi012 (mon.0) 532 : cluster [WRN] Health check failed: 7 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7759434 2024-06-17 11:22:28 2024-06-17 12:48:20 2024-06-17 13:10:39 0:22:19 0:09:58 0:12:21 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} 2
Failure Reason:

Command failed on smithi089 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 85252c02-2caa-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi089:vg_nvme/lv_4'

dead 7759435 2024-06-17 11:22:29 2024-06-17 12:51:40 2024-06-17 12:57:05 0:05:25 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi151

fail 7759436 2024-06-17 11:22:30 2024-06-17 12:56:01 2024-06-17 13:38:55 0:42:54 0:32:20 0:10:34 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

"2024-06-17T13:18:44.812847+0000 mon.smithi077 (mon.0) 361 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759437 2024-06-17 11:22:31 2024-06-17 12:57:22 2024-06-17 13:21:43 0:24:21 0:14:15 0:10:06 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi001 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bc3400d2-2cab-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi001:vg_nvme/lv_4'

fail 7759438 2024-06-17 11:22:33 2024-06-17 12:58:02 2024-06-17 13:17:18 0:19:16 0:10:03 0:09:13 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi192 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 755f984c-2cab-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi192:vg_nvme/lv_4'

fail 7759439 2024-06-17 11:22:34 2024-06-17 12:58:23 2024-06-17 13:22:03 0:23:40 0:10:04 0:13:36 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} 2
Failure Reason:

Command failed on smithi039 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e7e7ea04-2cab-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi039:vg_nvme/lv_4'

fail 7759440 2024-06-17 11:22:35 2024-06-17 13:00:23 2024-06-17 14:03:54 1:03:31 0:52:51 0:10:40 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7759441 2024-06-17 11:22:37 2024-06-17 13:00:54 2024-06-17 13:26:30 0:25:36 0:16:19 0:09:17 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

"2024-06-17T13:18:52.026617+0000 mon.smithi003 (mon.0) 374 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

dead 7759442 2024-06-17 11:22:38 2024-06-17 13:01:44 2024-06-17 13:03:48 0:02:04 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi135

pass 7759443 2024-06-17 11:22:39 2024-06-17 13:02:45 2024-06-17 13:32:32 0:29:47 0:19:59 0:09:48 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} 1
fail 7759444 2024-06-17 11:22:40 2024-06-17 13:02:55 2024-06-17 13:30:57 0:28:02 0:16:36 0:11:26 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

"2024-06-17T13:21:54.600939+0000 mon.smithi002 (mon.0) 377 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759445 2024-06-17 11:22:42 2024-06-17 13:03:45 2024-06-17 13:45:09 0:41:24 0:32:32 0:08:52 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

"2024-06-17T13:24:43.271844+0000 mon.smithi045 (mon.0) 356 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759446 2024-06-17 11:22:43 2024-06-17 13:03:46 2024-06-17 13:25:27 0:21:41 0:12:18 0:09:23 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} 2
Failure Reason:

Command failed on smithi150 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 61f7b9d2-2cac-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi150:vg_nvme/lv_4'

fail 7759447 2024-06-17 11:22:44 2024-06-17 13:03:46 2024-06-17 13:28:10 0:24:24 0:10:17 0:14:07 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi007 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d61f1ea4-2cac-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi007:vg_nvme/lv_4'

fail 7759448 2024-06-17 11:22:46 2024-06-17 13:06:27 2024-06-17 13:47:58 0:41:31 0:31:44 0:09:47 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} 2
Failure Reason:

"2024-06-17T13:27:48.539308+0000 mon.smithi005 (mon.0) 359 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759449 2024-06-17 11:22:47 2024-06-17 13:06:27 2024-06-17 13:27:46 0:21:19 0:10:11 0:11:08 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rotate-keys} 2
Failure Reason:

Command failed on smithi053 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b23cbbd6-2cac-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi053:vg_nvme/lv_4'

pass 7759450 2024-06-17 11:22:48 2024-06-17 13:06:48 2024-06-17 13:21:37 0:14:49 0:06:01 0:08:48 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} 1
fail 7759451 2024-06-17 11:22:50 2024-06-17 13:06:48 2024-06-17 13:38:47 0:31:59 0:20:50 0:11:09 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"1718630703.2212942 mon.smithi154 (mon.0) 116 : cluster [WRN] Health check failed: 2 mgr modules have recently crashed (RECENT_MGR_MODULE_CRASH)" in cluster log

fail 7759452 2024-06-17 11:22:51 2024-06-17 13:07:48 2024-06-17 13:27:36 0:19:48 0:09:30 0:10:18 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} 1
Failure Reason:

Command failed on smithi046 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d641745e-2cac-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi046:vg_nvme/lv_4'

pass 7759453 2024-06-17 11:22:52 2024-06-17 13:08:39 2024-06-17 13:33:15 0:24:36 0:14:12 0:10:24 smithi main ubuntu 22.04 orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/on orchestrator_cli} 2
fail 7759454 2024-06-17 11:22:53 2024-06-17 13:09:40 2024-06-17 13:34:45 0:25:05 0:16:31 0:08:34 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

"2024-06-17T13:26:56.543429+0000 mon.smithi120 (mon.0) 372 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759455 2024-06-17 11:22:55 2024-06-17 13:09:50 2024-06-17 13:27:39 0:17:49 0:07:14 0:10:35 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/rgw 3-final} 1
Failure Reason:

Command failed on smithi089 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid be9056e0-2cac-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi089:vg_nvme/lv_4'

fail 7759456 2024-06-17 11:22:56 2024-06-17 13:10:40 2024-06-17 13:30:37 0:19:57 0:08:02 0:11:55 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} 3
Failure Reason:

Command failed on smithi105 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 33cda282-2cad-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi105:/dev/nvme4n1'

fail 7759457 2024-06-17 11:22:58 2024-06-17 13:13:31 2024-06-17 14:20:46 1:07:15 0:55:00 0:12:15 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

"2024-06-17T13:36:36.991087+0000 mon.a (mon.0) 831 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7759458 2024-06-17 11:22:59 2024-06-17 13:15:32 2024-06-17 13:43:06 0:27:34 0:16:34 0:11:00 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} 2
Failure Reason:

"2024-06-17T13:33:31.868413+0000 mon.smithi059 (mon.0) 371 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759459 2024-06-17 11:23:00 2024-06-17 13:16:02 2024-06-17 13:44:17 0:28:15 0:16:47 0:11:28 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"2024-06-17T13:35:00.001336+0000 mon.smithi029 (mon.0) 371 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759460 2024-06-17 11:23:01 2024-06-17 13:17:03 2024-06-17 13:34:19 0:17:16 0:06:53 0:10:23 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_basic} 2
Failure Reason:

Command failed on smithi192 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b1b20bac-2cad-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi192:vg_nvme/lv_4'

fail 7759461 2024-06-17 11:23:03 2024-06-17 13:17:23 2024-06-17 13:34:38 0:17:15 0:07:46 0:09:29 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi018 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d5acfc74-2cad-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi018:/dev/nvme4n1'

fail 7759462 2024-06-17 11:23:04 2024-06-17 13:17:43 2024-06-17 13:38:07 0:20:24 0:10:02 0:10:22 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi073 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5396d5f6-2cae-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi073:vg_nvme/lv_4'

fail 7759463 2024-06-17 11:23:05 2024-06-17 13:19:14 2024-06-17 13:39:24 0:20:10 0:09:41 0:10:29 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi113 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 79307808-2cae-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi113:vg_nvme/lv_4'

dead 7759464 2024-06-17 11:23:07 2024-06-17 13:19:54 2024-06-17 13:22:08 0:02:14 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi115

dead 7759465 2024-06-17 11:23:08 2024-06-17 13:21:05 2024-06-17 13:22:49 0:01:44 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi001

fail 7759466 2024-06-17 11:23:09 2024-06-17 13:21:45 2024-06-17 14:10:45 0:49:00 0:39:04 0:09:56 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

dead 7759467 2024-06-17 11:23:11 2024-06-17 13:21:46 2024-06-17 13:23:10 0:01:24 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi165

fail 7759468 2024-06-17 11:23:12 2024-06-17 13:22:06 2024-06-17 13:49:04 0:26:58 0:13:52 0:13:06 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} 2
Failure Reason:

Command failed on smithi119 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 882499d8-2caf-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi119:vg_nvme/lv_4'

fail 7759469 2024-06-17 11:23:13 2024-06-17 13:25:37 2024-06-17 13:50:14 0:24:37 0:14:55 0:09:42 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} 3
Failure Reason:

Command failed on smithi003 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bb8b7d6e-2caf-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi003:vg_nvme/lv_4'

dead 7759470 2024-06-17 11:23:15 2024-06-17 13:26:38 2024-06-17 13:28:41 0:02:03 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} 1
Failure Reason:

Error reimaging machines: Failed to power on smithi046

fail 7759471 2024-06-17 11:23:16 2024-06-17 13:27:38 2024-06-17 13:54:39 0:27:01 0:16:29 0:10:32 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-06-17T13:45:05.595727+0000 mon.smithi089 (mon.0) 371 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759472 2024-06-17 11:23:17 2024-06-17 13:27:48 2024-06-17 13:45:48 0:18:00 0:07:44 0:10:16 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_domain} 2
Failure Reason:

Command failed on smithi027 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7847e718-2caf-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi027:vg_nvme/lv_4'

fail 7759473 2024-06-17 11:23:18 2024-06-17 13:28:19 2024-06-17 13:55:41 0:27:22 0:16:24 0:10:58 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

"2024-06-17T13:47:33.763649+0000 mon.smithi105 (mon.0) 373 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759474 2024-06-17 11:23:20 2024-06-17 13:30:40 2024-06-17 13:56:28 0:25:48 0:15:00 0:10:48 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi007 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 51705cbe-2cb0-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi007:vg_nvme/lv_4'

fail 7759475 2024-06-17 11:23:21 2024-06-17 13:30:40 2024-06-17 13:50:19 0:19:39 0:10:02 0:09:37 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} 2
Failure Reason:

Command failed on smithi002 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 001c1358-2cb0-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi002:vg_nvme/lv_4'

fail 7759476 2024-06-17 11:23:23 2024-06-17 13:31:00 2024-06-17 13:51:31 0:20:31 0:09:05 0:11:26 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
Failure Reason:

Command failed on smithi138 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1ced2314-2cb0-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi138:vg_nvme/lv_4'

fail 7759477 2024-06-17 11:23:24 2024-06-17 13:32:41 2024-06-17 14:14:53 0:42:12 0:32:50 0:09:22 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

"2024-06-17T13:55:00.418739+0000 mon.smithi145 (mon.0) 358 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759478 2024-06-17 11:23:25 2024-06-17 13:33:21 2024-06-17 14:05:18 0:31:57 0:21:24 0:10:33 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"1718632382.3569965 mgr.smithi194.ufvonb (mgr.14234) 1 : cluster [ERR] Failed to load ceph-mgr modules: k8sevents" in cluster log

fail 7759479 2024-06-17 11:23:27 2024-06-17 13:34:22 2024-06-17 14:01:41 0:27:19 0:16:06 0:11:13 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

"2024-06-17T13:52:15.146135+0000 mon.smithi001 (mon.0) 370 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759480 2024-06-17 11:23:28 2024-06-17 13:34:32 2024-06-17 13:54:07 0:19:35 0:08:43 0:10:52 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
Failure Reason:

Command failed on smithi018 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 60b1ce2e-2cb0-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi018:/dev/nvme4n1'

pass 7759481 2024-06-17 11:23:29 2024-06-17 13:34:33 2024-06-17 14:24:02 0:49:29 0:39:27 0:10:02 smithi main ubuntu 22.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
fail 7759482 2024-06-17 11:23:31 2024-06-17 13:34:33 2024-06-17 14:17:24 0:42:51 0:32:29 0:10:22 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-06-17T13:57:59.058713+0000 mon.smithi049 (mon.0) 360 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759483 2024-06-17 11:23:32 2024-06-17 13:34:33 2024-06-17 13:56:30 0:21:57 0:11:48 0:10:09 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_res_basic} 2
Failure Reason:

Command failed on smithi120 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9b1ee11e-2cb0-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi120:vg_nvme/lv_4'

fail 7759484 2024-06-17 11:23:33 2024-06-17 13:34:54 2024-06-17 13:59:09 0:24:15 0:12:06 0:12:09 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi033 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 078d4688-2cb1-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi033:/dev/nvme4n1'

fail 7759485 2024-06-17 11:23:35 2024-06-17 13:37:14 2024-06-17 13:56:50 0:19:36 0:10:05 0:09:31 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

Command failed on smithi042 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f38b1be2-2cb0-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi042:vg_nvme/lv_4'

fail 7759486 2024-06-17 11:23:36 2024-06-17 13:37:55 2024-06-17 13:57:06 0:19:11 0:10:22 0:08:49 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi073 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid eb8fa2e6-2cb0-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi073:vg_nvme/lv_4'

fail 7759487 2024-06-17 11:23:37 2024-06-17 13:38:15 2024-06-17 13:57:59 0:19:44 0:10:23 0:09:21 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} 3
Failure Reason:

Command failed on smithi077 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 11688460-2cb1-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi077:vg_nvme/lv_4'

fail 7759488 2024-06-17 11:23:39 2024-06-17 13:39:06 2024-06-17 14:07:05 0:27:59 0:16:47 0:11:12 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

"2024-06-17T13:58:26.605861+0000 mon.smithi113 (mon.0) 374 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759489 2024-06-17 11:23:40 2024-06-17 13:39:26 2024-06-17 14:13:02 0:33:36 0:17:59 0:15:37 smithi main ubuntu 22.04 orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi029 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7eaee112-2cb2-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi029:vg_nvme/lv_4'

fail 7759490 2024-06-17 11:23:41 2024-06-17 13:44:27 2024-06-17 14:26:42 0:42:15 0:32:46 0:09:29 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

"2024-06-17T14:06:30.811919+0000 mon.smithi045 (mon.0) 357 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759491 2024-06-17 11:23:42 2024-06-17 13:45:18 2024-06-17 14:12:26 0:27:08 0:15:17 0:11:51 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

Command failed on smithi027 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d9303d84-2cb2-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi027:vg_nvme/lv_4'

fail 7759492 2024-06-17 11:23:44 2024-06-17 13:45:58 2024-06-17 14:14:41 0:28:43 0:15:47 0:12:56 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi119 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 06da6480-2cb3-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi119:vg_nvme/lv_4'

fail 7759493 2024-06-17 11:23:45 2024-06-17 13:49:09 2024-06-17 14:26:58 0:37:49 0:27:12 0:10:37 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi138 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.1 && git clone https://git.ceph.com/ceph.git /home/ubuntu/cephtest/clone.client.1 && cd /home/ubuntu/cephtest/clone.client.1 && git checkout ad8002d393bbf6cd8063453a846731ff25274473'

fail 7759494 2024-06-17 11:23:46 2024-06-17 13:49:59 2024-06-17 14:17:35 0:27:36 0:16:34 0:11:02 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
Failure Reason:

"2024-06-17T14:08:25.077784+0000 mon.smithi003 (mon.0) 372 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759495 2024-06-17 11:23:47 2024-06-17 13:50:20 2024-06-17 14:15:22 0:25:02 0:16:20 0:08:42 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-06-17T14:07:45.053896+0000 mon.smithi130 (mon.0) 375 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)" in cluster log

fail 7759496 2024-06-17 11:23:49 2024-06-17 13:50:20 2024-06-17 14:14:03 0:23:43 0:07:48 0:15:55 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_dom} 2
Failure Reason:

Command failed on smithi039 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 4055caf6-2cb3-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi039:vg_nvme/lv_4'

fail 7759497 2024-06-17 11:23:50 2024-06-17 13:54:11 2024-06-17 14:14:48 0:20:37 0:10:20 0:10:17 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
Failure Reason:

Command failed on smithi005 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 53e6c9b2-2cb3-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi005:vg_nvme/lv_4'

dead 7759498 2024-06-17 11:23:51 2024-06-17 13:54:11 2024-06-17 13:55:45 0:01:34 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi144

fail 7759499 2024-06-17 11:23:53 2024-06-17 13:54:42 2024-06-17 14:14:41 0:19:59 0:09:41 0:10:18 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} 2
Failure Reason:

Command failed on smithi105 with status 22: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:ad8002d393bbf6cd8063453a846731ff25274473 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 73db8e42-2cb3-11ef-bc9d-c7b262605968 -- ceph orch daemon add osd smithi105:vg_nvme/lv_4'

dead 7759500 2024-06-17 11:23:54 2024-06-17 13:55:42 2024-06-17 13:57:37 0:01:55 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_set_mon_crush_locations} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi156