Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7671765 2024-04-24 15:49:23 2024-04-24 15:50:37 2024-04-24 16:28:35 0:37:58 0:27:55 0:10:03 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

"2024-04-24T16:10:02.813975+0000 mon.smithi178 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671766 2024-04-24 15:49:25 2024-04-24 15:50:37 2024-04-24 16:16:24 0:25:47 0:15:06 0:10:41 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
Failure Reason:

"2024-04-24T16:11:26.316938+0000 mon.a (mon.0) 619 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.4 on smithi106 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7671767 2024-04-24 15:49:26 2024-04-24 15:51:48 2024-04-24 16:31:47 0:39:59 0:32:07 0:07:52 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
pass 7671768 2024-04-24 15:49:27 2024-04-24 15:52:48 2024-04-24 16:35:10 0:42:22 0:34:11 0:08:11 smithi main centos 9.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 7671769 2024-04-24 15:49:28 2024-04-24 15:54:09 2024-04-24 16:35:25 0:41:16 0:34:10 0:07:06 smithi main centos 9.stream orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignorelist_health pg_health} supported-random-distros$/{centos_latest} tasks/nfs} 1
Failure Reason:

"2024-04-24T16:19:50.527654+0000 mds.nfs-cephfs.smithi162.ebhdjn (mds.0) 1 : cluster [WRN] client.15229 isn't responding to mclientcaps(revoke), ino 0x1 pending pAsLsXs issued pAsLsXsFs, sent 62.930783 seconds ago" in cluster log

pass 7671770 2024-04-24 15:49:29 2024-04-24 15:54:09 2024-04-24 16:13:33 0:19:24 0:12:19 0:07:05 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} 1
pass 7671771 2024-04-24 15:49:30 2024-04-24 15:54:40 2024-04-24 16:15:03 0:20:23 0:11:14 0:09:09 smithi main centos 9.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/off orchestrator_cli} 2
fail 7671772 2024-04-24 15:49:31 2024-04-24 15:56:10 2024-04-24 16:35:50 0:39:40 0:28:05 0:11:35 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-04-24T16:17:12.177138+0000 mon.smithi082 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671773 2024-04-24 15:49:32 2024-04-24 15:58:01 2024-04-24 16:32:04 0:34:03 0:25:07 0:08:56 smithi main centos 9.stream orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
Failure Reason:

"2024-04-24T16:11:08.510569+0000 mon.a (mon.0) 202 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 7671774 2024-04-24 15:49:33 2024-04-24 15:58:51 2024-04-24 16:20:10 0:21:19 0:11:59 0:09:20 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_basic} 2
fail 7671775 2024-04-24 15:49:34 2024-04-24 16:01:02 2024-04-24 16:15:37 0:14:35 0:06:33 0:08:02 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi122 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5660c306-0255-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi032:172.21.15.32=smithi032;smithi122:172.21.15.122=smithi122'"
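
Note the shape of these status-125 failures: exit code 125 is the container runtime failing to start the `cephadm shell` container, so the wrapped `ceph orch apply mon` never runs. The inline placement string encodes '<count>;<host>:<addr>=<name>;...'. For reference, a hedged sketch of the same placement expressed as a declarative service spec (hosts copied from the job above; the spec file is mounted into the shell container explicitly):

  cat > mon-spec.yaml <<'EOF'
  service_type: mon
  placement:
    count: 2
    hosts:
    - smithi032:172.21.15.32=smithi032
    - smithi122:172.21.15.122=smithi122
  EOF
  sudo cephadm shell -m $PWD/mon-spec.yaml:/mnt/mon-spec.yaml -- \
    ceph orch apply -i /mnt/mon-spec.yaml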

pass 7671776 2024-04-24 15:49:35 2024-04-24 16:02:33 2024-04-24 16:19:31 0:16:58 0:09:40 0:07:18 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} 1
pass 7671777 2024-04-24 15:49:36 2024-04-24 16:02:33 2024-04-24 16:20:07 0:17:34 0:09:52 0:07:42 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 3
fail 7671778 2024-04-24 15:49:37 2024-04-24 16:02:43 2024-04-24 16:58:32 0:55:49 0:47:09 0:08:40 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

"2024-04-24T16:42:01.554619+0000 mon.a (mon.0) 1078 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon iscsi.foo.smithi070.gcoqkr on smithi070 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7671779 2024-04-24 15:49:38 2024-04-24 16:05:26 2024-04-24 16:49:16 0:43:50 0:32:39 0:11:11 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
pass 7671780 2024-04-24 15:49:39 2024-04-24 16:09:27 2024-04-24 16:40:34 0:31:07 0:20:49 0:10:18 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
pass 7671781 2024-04-24 15:49:40 2024-04-24 16:14:01 2024-04-24 16:33:00 0:18:59 0:12:48 0:06:11 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
fail 7671782 2024-04-24 15:49:41 2024-04-24 16:14:02 2024-04-24 16:28:06 0:14:04 0:06:37 0:07:27 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Command failed on smithi192 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1ac66d3a-0257-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi040:172.21.15.40=smithi040;smithi192:172.21.15.192=smithi192'"

fail 7671783 2024-04-24 15:49:42 2024-04-24 16:15:12 2024-04-24 16:55:01 0:39:49 0:28:32 0:11:17 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

"2024-04-24T16:36:36.147144+0000 mon.smithi038 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671784 2024-04-24 15:49:43 2024-04-24 16:32:35 0:06:09 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi086 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c04749be-0257-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi060:172.21.15.60=smithi060;smithi086:172.21.15.86=smithi086'"

fail 7671785 2024-04-24 15:49:44 2024-04-24 16:19:33 2024-04-24 16:41:59 0:22:26 0:11:24 0:11:02 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi063 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c6c4b050-0258-11ef-bc93-c7b262605968 -- lvm zap /dev/nvme4n1'
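
The `lvm zap` failures exit 1 inside ceph-volume itself rather than in the shell wrapper; they can usually be reproduced on the node directly once nothing holds the device open. A minimal sketch (device path copied from the job above):

  # Re-run the zap by hand; --destroy also tears down the
  # VG/LV on the device so a retry starts from a clean state.
  sudo cephadm ceph-volume -- lvm zap --destroy /dev/nvme4n1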

fail 7671786 2024-04-24 15:49:45 2024-04-24 16:20:14 2024-04-24 16:44:12 0:23:58 0:14:09 0:09:49 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} 3
Failure Reason:

Command failed on smithi150 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 29716b62-0259-11ef-bc93-c7b262605968 -- lvm zap /dev/vg_nvme/lv_4'

fail 7671787 2024-04-24 15:49:46 2024-04-24 16:20:14 2024-04-24 17:35:43 1:15:29 1:08:04 0:07:25 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7671788 2024-04-24 15:49:47 2024-04-24 16:20:35 2024-04-24 16:35:04 0:14:29 0:06:46 0:07:43 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi032 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 16779bea-0258-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi016:172.21.15.16=smithi016;smithi032:172.21.15.32=smithi032'"

pass 7671789 2024-04-24 15:49:48 2024-04-24 16:21:05 2024-04-24 16:50:56 0:29:51 0:21:49 0:08:02 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} 5
pass 7671790 2024-04-24 15:49:49 2024-04-24 16:21:16 2024-04-24 16:39:51 0:18:35 0:09:46 0:08:49 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_domain} 2
fail 7671791 2024-04-24 15:49:50 2024-04-24 16:21:16 2024-04-24 16:36:45 0:15:29 0:06:49 0:08:40 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Command failed on smithi175 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 56145004-0258-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi112:172.21.15.112=smithi112;smithi175:172.21.15.175=smithi175'"

fail 7671792 2024-04-24 15:49:51 2024-04-24 16:23:47 2024-04-24 16:49:06 0:25:19 0:13:38 0:11:41 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

"2024-04-24T16:46:11.986597+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671793 2024-04-24 15:49:52 2024-04-24 16:25:17 2024-04-24 16:49:02 0:23:45 0:13:38 0:10:07 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} 2
Failure Reason:

"2024-04-24T16:46:44.930275+0000 mon.a (mon.0) 104 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

pass 7671794 2024-04-24 15:49:53 2024-04-24 16:25:18 2024-04-24 16:51:00 0:25:42 0:12:57 0:12:45 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} 3
fail 7671795 2024-04-24 15:49:54 2024-04-24 16:31:39 2024-04-24 17:09:39 0:38:00 0:27:30 0:10:30 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
Failure Reason:

"2024-04-24T16:50:51.993516+0000 mon.smithi052 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671796 2024-04-24 15:49:55 2024-04-24 16:31:49 2024-04-24 16:46:24 0:14:35 0:06:15 0:08:20 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Command failed on smithi148 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ac034c12-0259-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi130:172.21.15.130=smithi130;smithi148:172.21.15.148=smithi148'"

pass 7671797 2024-04-24 15:49:56 2024-04-24 16:33:10 2024-04-24 17:14:44 0:41:34 0:31:47 0:09:47 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7671798 2024-04-24 15:49:57 2024-04-24 16:33:20 2024-04-24 16:48:02 0:14:42 0:06:29 0:08:13 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi097 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid de41e18e-0259-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi026:172.21.15.26=smithi026;smithi097:172.21.15.97=smithi097'"

pass 7671799 2024-04-24 15:49:58 2024-04-24 16:35:11 2024-04-24 16:53:58 0:18:47 0:11:02 0:07:45 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 3
fail 7671800 2024-04-24 15:49:59 2024-04-24 16:36:42 2024-04-24 17:26:19 0:49:37 0:40:14 0:09:23 smithi main ubuntu 22.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

"2024-04-24T16:57:01.151547+0000 mon.a (mon.0) 341 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7671801 2024-04-24 15:50:00 2024-04-24 16:36:42 2024-04-24 16:57:34 0:20:52 0:14:55 0:05:57 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
Failure Reason:

"2024-04-24T16:52:20.955723+0000 mon.a (mon.0) 432 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.1 on smithi082 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7671802 2024-04-24 15:50:01 2024-04-24 16:36:42 2024-04-24 16:49:49 0:13:07 0:06:52 0:06:15 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} 2
Failure Reason:

Command failed on smithi151 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2454d12c-025a-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi060:172.21.15.60=smithi060;smithi151:172.21.15.151=smithi151'"

pass 7671803 2024-04-24 15:50:02 2024-04-24 16:36:43 2024-04-24 16:58:15 0:21:32 0:12:45 0:08:47 smithi main ubuntu 22.04 orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_adoption} 1
fail 7671804 2024-04-24 15:50:03 2024-04-24 16:36:43 2024-04-24 16:51:49 0:15:06 0:06:52 0:08:14 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} 2
Failure Reason:

Command failed on smithi162 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2fcf42ee-025a-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi112:172.21.15.112=smithi112;smithi162:172.21.15.162=smithi162'"

fail 7671805 2024-04-24 15:50:04 2024-04-24 16:36:43 2024-04-24 17:00:05 0:23:22 0:12:38 0:10:44 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_basic} 2
Failure Reason:

"2024-04-24T16:57:08.557089+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671806 2024-04-24 15:50:05 2024-04-24 16:36:44 2024-04-24 16:50:00 0:13:16 0:06:18 0:06:58 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi165 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a3c7838-025a-11ef-bc93-c7b262605968 -- ceph orch apply mon '3;smithi114:172.21.15.114=a;smithi114:[v2:172.21.15.114:3301,v1:172.21.15.114:6790]=c;smithi165:172.21.15.165=b'"

fail 7671807 2024-04-24 15:50:06 2024-04-24 16:36:54 2024-04-24 17:16:10 0:39:16 0:27:29 0:11:47 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
Failure Reason:

"2024-04-24T16:57:46.761543+0000 mon.smithi027 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

pass 7671808 2024-04-24 15:50:08 2024-04-24 16:37:15 2024-04-24 17:40:09 1:02:54 0:56:48 0:06:06 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
fail 7671809 2024-04-24 15:50:09 2024-04-24 16:37:15 2024-04-24 17:07:23 0:30:08 0:13:58 0:16:10 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

"2024-04-24T17:03:40.495342+0000 mon.a (mon.0) 104 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

pass 7671810 2024-04-24 15:50:10 2024-04-24 16:39:56 2024-04-24 16:57:26 0:17:30 0:10:32 0:06:58 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} 2
fail 7671811 2024-04-24 15:50:11 2024-04-24 16:40:36 2024-04-24 17:44:04 1:03:28 0:52:39 0:10:49 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7671812 2024-04-24 15:50:12 2024-04-24 16:45:07 2024-04-24 20:16:09 3:31:02 0:24:48 3:06:14 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
Failure Reason:

Command failed on smithi008 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 88659ff6-0274-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi204:172.21.15.204=smithi204;smithi008:172.21.15.8=smithi008'"

fail 7671813 2024-04-24 15:50:13 2024-04-24 16:46:18 2024-04-24 17:11:57 0:25:39 0:15:24 0:10:15 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi179 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5904d29475f5be602879d9fb26280e89b808d5cc TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
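
test_cephadm.sh is a self-contained workunit, so this one can be chased outside teuthology. A rough sketch, assuming a disposable VM and a ceph.git checkout at the SHA under test:

  # Run the cephadm smoke workunit by hand at the CI SHA.
  git clone https://github.com/ceph/ceph.git && cd ceph
  git checkout 5904d29475f5be602879d9fb26280e89b808d5cc
  sudo bash qa/workunits/cephadm/test_cephadm.sh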

fail 7671814 2024-04-24 15:50:14 2024-04-24 16:46:18 2024-04-24 17:00:24 0:14:06 0:06:28 0:07:38 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
Failure Reason:

Command failed on smithi139 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 97666382-025b-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi113:172.21.15.113=smithi113;smithi139:172.21.15.139=smithi139'"

fail 7671815 2024-04-24 15:50:15 2024-04-24 16:46:59 2024-04-24 17:26:35 0:39:36 0:29:03 0:10:33 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

"2024-04-24T17:06:36.630931+0000 mon.smithi078 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671816 2024-04-24 15:50:16 2024-04-24 16:46:59 2024-04-24 17:25:46 0:38:47 0:28:02 0:10:45 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} 2
Failure Reason:

"2024-04-24T17:07:10.016584+0000 mon.smithi064 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

pass 7671817 2024-04-24 15:50:17 2024-04-24 16:48:00 2024-04-24 17:26:58 0:38:58 0:29:51 0:09:07 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
pass 7671818 2024-04-24 15:50:18 2024-04-24 16:49:20 2024-04-24 17:17:07 0:27:47 0:21:22 0:06:25 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
pass 7671819 2024-04-24 15:50:19 2024-04-24 16:50:01 2024-04-24 17:03:37 0:13:36 0:05:49 0:07:47 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} 1
fail 7671820 2024-04-24 15:50:20 2024-04-24 16:50:01 2024-04-24 17:30:27 0:40:26 0:32:56 0:07:30 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"1713978600.000146 mon.smithi076 (mon.0) 461 : cluster [WRN] application not enabled on pool '.mgr'" in cluster log

fail 7671821 2024-04-24 15:50:21 2024-04-24 16:51:02 2024-04-24 17:12:01 0:20:59 0:14:07 0:06:52 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} 1
Failure Reason:

"2024-04-24T17:09:48.261783+0000 mon.a (mon.0) 209 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

pass 7671822 2024-04-24 15:50:22 2024-04-24 16:51:02 2024-04-24 17:10:07 0:19:05 0:11:58 0:07:07 smithi main centos 9.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/on orchestrator_cli} 2
pass 7671823 2024-04-24 15:50:23 2024-04-24 16:51:02 2024-04-24 17:07:53 0:16:51 0:10:11 0:06:40 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_domain} 2
fail 7671824 2024-04-24 15:50:24 2024-04-24 16:51:03 2024-04-24 17:04:23 0:13:20 0:06:48 0:06:32 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} 2
Failure Reason:

Command failed on smithi107 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a4d0912-025c-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi029:172.21.15.29=smithi029;smithi107:172.21.15.107=smithi107'"

pass 7671825 2024-04-24 15:50:25 2024-04-24 16:51:03 2024-04-24 17:11:19 0:20:16 0:11:41 0:08:35 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/rgw 3-final} 1
fail 7671826 2024-04-24 15:50:26 2024-04-24 16:52:33 2024-04-24 17:11:40 0:19:07 0:12:09 0:06:58 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} 3
Failure Reason:

"2024-04-24T17:07:45.782213+0000 mon.a (mon.0) 468 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.1 on smithi114 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7671827 2024-04-24 15:50:27 2024-04-24 16:52:34 2024-04-24 17:43:31 0:50:57 0:43:07 0:07:50 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
Failure Reason:

Command failed on smithi130 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5447eae8-025c-11ef-bc93-c7b262605968 -e sha1=5904d29475f5be602879d9fb26280e89b808d5cc -- bash -c \'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e \'"\'"\'.up_to_date | length == 7\'"\'"\'\''
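
This staggered-upgrade job fails its verification step, not the upgrade itself: `ceph orch upgrade check` prints JSON describing which daemons already run the target image, and `jq -e '.up_to_date | length == 7'` turns "exactly 7 daemons up to date" into the shell exit code. A sketch of inspecting the same output by hand (image taken from the command above):

  # Summarize how many daemons the orchestrator considers
  # up to date versus still needing the new image.
  ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc \
    | jq '{up_to_date: (.up_to_date | length), needs_update: (.needs_update | length)}'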

fail 7671828 2024-04-24 15:50:28 2024-04-24 16:52:34 2024-04-24 17:05:14 0:12:40 0:06:50 0:05:50 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} 2
Failure Reason:

Command failed on smithi097 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 49fbe40e-025c-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi060:172.21.15.60=smithi060;smithi097:172.21.15.97=smithi097'"

fail 7671829 2024-04-24 15:50:29 2024-04-24 16:52:35 2024-04-24 17:07:37 0:15:02 0:07:50 0:07:12 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi193 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5b8fa660-025c-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi151:172.21.15.151=smithi151;smithi193:172.21.15.193=smithi193'"

pass 7671830 2024-04-24 15:50:30 2024-04-24 16:52:35 2024-04-24 17:13:22 0:20:47 0:13:18 0:07:29 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
fail 7671831 2024-04-24 15:50:31 2024-04-24 16:52:35 2024-04-24 17:09:27 0:16:52 0:09:21 0:07:31 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} 2
Failure Reason:

Command failed on smithi133 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d18a4230-025c-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi046:172.21.15.46=a;smithi133:172.21.15.133=b'"

fail 7671832 2024-04-24 15:50:32 2024-04-24 16:52:36 2024-04-24 17:30:07 0:37:31 0:27:53 0:09:38 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

"2024-04-24T17:11:21.044784+0000 mon.smithi028 (mon.0) 124 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671833 2024-04-24 15:50:33 2024-04-24 16:52:36 2024-04-24 17:37:45 0:45:09 0:38:53 0:06:16 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7671834 2024-04-24 15:50:34 2024-04-24 16:52:37 2024-04-24 17:08:04 0:15:27 0:07:12 0:08:15 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on smithi161 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6d4b8ff4-025c-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi150:172.21.15.150=smithi150;smithi161:172.21.15.161=smithi161'"

fail 7671835 2024-04-24 15:50:35 2024-04-24 16:52:37 2024-04-24 17:17:41 0:25:04 0:13:05 0:11:59 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

"2024-04-24T17:14:32.046552+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

pass 7671836 2024-04-24 15:50:36 2024-04-24 16:54:08 2024-04-24 17:23:35 0:29:27 0:22:56 0:06:31 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rotate-keys} 2
fail 7671837 2024-04-24 15:50:37 2024-04-24 16:54:08 2024-04-24 17:21:02 0:26:54 0:14:46 0:12:08 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} 3
Failure Reason:

"2024-04-24T17:17:10.288797+0000 mon.a (mon.0) 102 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

pass 7671838 2024-04-24 15:50:38 2024-04-24 16:54:59 2024-04-24 17:14:00 0:19:01 0:11:34 0:07:27 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} 1
fail 7671839 2024-04-24 15:50:39 2024-04-24 16:54:59 2024-04-24 17:12:31 0:17:32 0:06:20 0:11:12 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi173 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2d8b638e-025d-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi102:172.21.15.102=smithi102;smithi173:172.21.15.173=smithi173'"

pass 7671840 2024-04-24 15:50:40 2024-04-24 16:57:30 2024-04-24 17:18:16 0:20:46 0:12:08 0:08:38 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} 2
fail 7671841 2024-04-24 15:50:41 2024-04-24 16:59:30 2024-04-24 17:12:34 0:13:04 0:06:19 0:06:45 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Command failed on smithi191 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 542c52e6-025d-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi043:172.21.15.43=smithi043;smithi191:172.21.15.191=smithi191'"

dead 7671842 2024-04-24 15:50:42 2024-04-24 16:59:31 2024-04-25 05:07:46 12:08:15 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
Failure Reason:

hit max job timeout

fail 7671843 2024-04-24 15:50:43 2024-04-24 16:59:31 2024-04-24 17:37:33 0:38:02 0:27:17 0:10:45 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

"2024-04-24T17:19:06.917254+0000 mon.smithi057 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671844 2024-04-24 15:50:44 2024-04-24 16:59:31 2024-04-24 17:45:57 0:46:26 0:32:59 0:13:27 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"1713979344.3882833 mon.smithi062 (mon.0) 137 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

fail 7671845 2024-04-24 15:50:45 2024-04-24 17:06:23 2024-04-24 17:19:37 0:13:14 0:06:21 0:06:53 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
Failure Reason:

Command failed on smithi121 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 491e9142-025e-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi117:172.21.15.117=smithi117;smithi121:172.21.15.121=smithi121'"

pass 7671846 2024-04-24 15:50:46 2024-04-24 17:06:23 2024-04-24 17:25:03 0:18:40 0:09:30 0:09:10 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
fail 7671847 2024-04-24 15:50:47 2024-04-24 17:08:04 2024-04-24 17:55:39 0:47:35 0:37:11 0:10:24 smithi main ubuntu 22.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
Failure Reason:

"2024-04-24T17:38:50.032070+0000 mon.a (mon.0) 874 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671848 2024-04-24 15:50:48 2024-04-24 17:08:04 2024-04-24 17:46:40 0:38:36 0:27:57 0:10:39 smithi main ubuntu 22.04 orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-04-24T17:28:24.010817+0000 mon.smithi113 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671849 2024-04-24 15:50:49 2024-04-24 17:08:04 2024-04-24 17:29:42 0:21:38 0:11:26 0:10:12 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

Command failed on smithi038 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6b06a1fe-025f-11ef-bc93-c7b262605968 -- lvm zap /dev/nvme4n1'

fail 7671850 2024-04-24 15:50:50 2024-04-24 17:08:05 2024-04-24 17:31:29 0:23:24 0:13:46 0:09:38 smithi main ubuntu 22.04 orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

"2024-04-24T17:28:34.629151+0000 mon.a (mon.0) 104 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671851 2024-04-24 15:50:51 2024-04-24 17:08:05 2024-04-24 17:31:45 0:23:40 0:13:51 0:09:49 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"2024-04-24T17:29:18.136228+0000 mon.a (mon.0) 103 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

pass 7671852 2024-04-24 15:50:52 2024-04-24 17:08:05 2024-04-24 17:34:21 0:26:16 0:18:30 0:07:46 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} 3
fail 7671853 2024-04-24 15:50:53 2024-04-24 17:08:06 2024-04-24 17:20:43 0:12:37 0:06:22 0:06:15 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi060 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 73c86b70-025e-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi053:172.21.15.53=smithi053;smithi060:172.21.15.60=smithi060'"

fail 7671854 2024-04-24 15:50:54 2024-04-24 17:08:06 2024-04-24 17:36:52 0:28:46 0:16:51 0:11:55 smithi main ubuntu 22.04 orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"2024-04-24T17:32:27.505046+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671855 2024-04-24 15:50:55 2024-04-24 17:08:07 2024-04-24 17:32:21 0:24:14 0:12:39 0:11:35 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} 2
Failure Reason:

"2024-04-24T17:29:36.697004+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671856 2024-04-24 15:50:56 2024-04-24 17:08:07 2024-04-24 17:46:30 0:38:23 0:27:52 0:10:31 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

"2024-04-24T17:28:18.805431+0000 mon.smithi107 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7671857 2024-04-24 15:50:57 2024-04-24 17:08:07 2024-04-24 17:37:38 0:29:31 0:13:56 0:15:35 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi033 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 63712292-0260-11ef-bc93-c7b262605968 -- lvm zap /dev/vg_nvme/lv_4'

fail 7671858 2024-04-24 15:50:58 2024-04-24 17:10:08 2024-04-24 17:56:09 0:46:01 0:38:43 0:07:18 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7671859 2024-04-24 15:50:59 2024-04-24 17:10:58 2024-04-24 17:24:30 0:13:32 0:06:11 0:07:21 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
Failure Reason:

Command failed on smithi106 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f82caa2a-025e-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi069:172.21.15.69=smithi069;smithi106:172.21.15.106=smithi106'"

fail 7671860 2024-04-24 15:51:00 2024-04-24 17:11:29 2024-04-24 17:25:16 0:13:47 0:06:57 0:06:50 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Command failed on smithi203 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 11aa0f10-025f-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi019:172.21.15.19=smithi019;smithi203:172.21.15.203=smithi203'"

fail 7671861 2024-04-24 15:51:01 2024-04-24 17:11:59 2024-04-24 17:26:27 0:14:28 0:06:26 0:08:02 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi176 with status 125: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:5904d29475f5be602879d9fb26280e89b808d5cc shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 41fb6e20-025f-11ef-bc93-c7b262605968 -- ceph orch apply mon '2;smithi026:172.21.15.26=smithi026;smithi176:172.21.15.176=smithi176'"

pass 7671862 2024-04-24 15:51:02 2024-04-24 17:13:30 2024-04-24 18:17:16 1:03:46 0:56:47 0:06:59 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
pass 7671863 2024-04-24 15:51:03 2024-04-24 17:14:10 2024-04-24 17:44:20 0:30:10 0:21:34 0:08:36 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2