Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7571367 2024-02-22 21:41:44 2024-02-22 21:55:13 2024-02-23 00:02:30 2:07:17 1:56:18 0:10:59 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} 3
dead 7571368 2024-02-22 21:41:44 2024-02-22 21:56:33 2024-02-23 10:12:10 12:15:37 smithi main ubuntu 20.04 rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

fail 7571369 2024-02-22 21:41:46 2024-02-22 21:59:44 2024-02-22 22:35:32 0:35:48 0:25:45 0:10:03 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi179 with status 5: 'sudo systemctl stop ceph-45bed426-d1d0-11ee-95c0-87774f69a715@mon.smithi179'

pass 7571370 2024-02-22 21:41:47 2024-02-22 22:00:05 2024-02-22 22:35:22 0:35:17 0:26:20 0:08:57 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
pass 7571371 2024-02-22 21:41:47 2024-02-22 22:00:05 2024-02-22 22:51:39 0:51:34 0:40:34 0:11:00 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
pass 7571372 2024-02-22 21:41:48 2024-02-22 22:00:15 2024-02-22 22:28:38 0:28:23 0:17:27 0:10:56 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
fail 7571373 2024-02-22 21:41:49 2024-02-22 22:01:26 2024-02-22 22:59:23 0:57:57 0:43:49 0:14:08 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

pass 7571374 2024-02-22 21:41:50 2024-02-22 22:01:46 2024-02-22 22:36:28 0:34:42 0:23:19 0:11:23 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
fail 7571375 2024-02-22 21:41:51 2024-02-22 22:03:07 2024-02-22 22:37:46 0:34:39 0:22:51 0:11:48 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi181 with status 5: 'sudo systemctl stop ceph-5d4832b8-d1d0-11ee-95c0-87774f69a715@mon.smithi181'

dead 7571376 2024-02-22 21:41:52 2024-02-22 22:04:37 2024-02-22 22:08:12 0:03:35 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi171

fail 7571377 2024-02-22 21:41:53 2024-02-22 22:07:08 2024-02-22 22:53:38 0:46:30 0:36:03 0:10:27 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

Command failed on smithi129 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1d301c3a-d1d1-11ee-95c0-87774f69a715 -e sha1=eb66ed921e744301e6863be6353618d63967ea59 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
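(For readability: after undoing the nested shell quoting above, the check that failed is the one-liner below, which asserts that every daemon in the cluster reports the same version after the upgrade. This is a decoding of the escaped command as logged, not an addition to it.)

    ceph versions | jq -e '.overall | length == 1'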

fail 7571378 2024-02-22 21:41:54 2024-02-22 22:08:19 2024-02-22 22:44:09 0:35:50 0:26:16 0:09:34 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi175 with status 5: 'sudo systemctl stop ceph-7f500696-d1d1-11ee-95c0-87774f69a715@mon.smithi175'

pass 7571379 2024-02-22 21:41:55 2024-02-22 22:08:59 2024-02-22 23:06:19 0:57:20 0:48:22 0:08:58 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
pass 7571380 2024-02-22 21:41:56 2024-02-22 22:09:00 2024-02-22 22:38:32 0:29:32 0:16:48 0:12:44 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
fail 7571381 2024-02-22 21:41:56 2024-02-22 22:14:51 2024-02-22 22:46:13 0:31:22 0:20:05 0:11:17 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

"2024-02-22T22:38:43.775373+0000 mon.a (mon.0) 569 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7571382 2024-02-22 21:41:57 2024-02-22 22:15:31 2024-02-22 22:50:44 0:35:13 0:26:38 0:08:35 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
fail 7571383 2024-02-22 21:41:58 2024-02-22 22:15:32 2024-02-22 23:08:43 0:53:11 0:22:58 0:30:13 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi162 with status 5: 'sudo systemctl stop ceph-c27ab2e2-d1d4-11ee-95c0-87774f69a715@mon.smithi162'

pass 7571384 2024-02-22 21:41:59 2024-02-22 23:19:31 2024-02-23 00:23:33 1:04:02 0:34:48 0:29:14 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
pass 7571385 2024-02-22 21:42:00 2024-02-22 23:22:22 2024-02-22 23:53:41 0:31:19 0:14:54 0:16:25 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
fail 7571386 2024-02-22 21:42:01 2024-02-22 23:22:23 2024-02-23 00:14:43 0:52:20 0:26:10 0:26:10 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi097 with status 5: 'sudo systemctl stop ceph-f45a1948-d1dd-11ee-95c0-87774f69a715@mon.smithi097'

pass 7571387 2024-02-22 21:42:02 2024-02-22 23:23:33 2024-02-23 00:10:43 0:47:10 0:23:20 0:23:50 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
fail 7571388 2024-02-22 21:42:03 2024-02-22 23:23:43 2024-02-23 00:08:09 0:44:26 0:27:46 0:16:40 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

timeout expired in wait_until_healthy

pass 7571389 2024-02-22 21:42:04 2024-02-22 23:24:34 2024-02-23 00:19:27 0:54:53 0:39:28 0:15:25 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
fail 7571390 2024-02-22 21:42:05 2024-02-22 23:24:34 2024-02-22 23:53:41 0:29:07 0:15:41 0:13:26 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi079 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:eb66ed921e744301e6863be6353618d63967ea59 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f242c454-d1db-11ee-95c0-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''
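(For readability: the escaped inline script in the failure above decodes to roughly the following shell sequence, reconstructed from the quoting in the logged command. It removes osd.1, waits for the removal to finish, zaps its device, re-adds the OSD on the same host/device, and waits for it to come back up.)

    set -e
    set -x
    ceph orch ps
    ceph orch device ls
    DEVID=$(ceph device ls | grep osd.1 | awk '{print $1}')
    HOST=$(ceph orch device ls | grep $DEVID | awk '{print $1}')
    DEV=$(ceph orch device ls | grep $DEVID | awk '{print $2}')
    echo "host $HOST, dev $DEV, devid $DEVID"
    ceph orch osd rm 1
    while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done
    ceph orch device zap $HOST $DEV --force
    ceph orch daemon add osd $HOST:$DEV
    while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done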

fail 7571391 2024-02-22 21:42:06 2024-02-22 23:28:15 2024-02-23 00:06:41 0:38:26 0:17:58 0:20:28 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi187 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4f758cd8e8854d1f126da86e254076852bf059a9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7571392 2024-02-22 21:42:07 2024-02-22 23:39:57 2024-02-23 00:15:31 0:35:34 0:25:51 0:09:43 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi157 with status 5: 'sudo systemctl stop ceph-2f6a8e46-d1de-11ee-95c0-87774f69a715@mon.smithi157'

fail 7571393 2024-02-22 21:42:08 2024-02-22 23:39:57 2024-02-23 02:49:16 3:09:19 3:00:56 0:08:23 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-23T00:10:00.000199+0000 mon.smithi115 (mon.0) 526 : cluster [ERR] Health detail: HEALTH_ERR 2 failed cephadm daemon(s); 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds" in cluster log

fail 7571394 2024-02-22 21:42:08 2024-02-22 23:39:58 2024-02-23 00:49:16 1:09:18 0:45:54 0:23:24 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

"2024-02-23T00:35:32.304538+0000 mon.a (mon.0) 2466 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL)" in cluster log

fail 7571395 2024-02-22 21:42:09 2024-02-22 23:53:50 2024-02-23 00:40:07 0:46:17 0:34:50 0:11:27 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi118 with status 5: 'sudo systemctl stop ceph-898d23fe-d1e1-11ee-95c0-87774f69a715@mon.smithi118'

pass 7571396 2024-02-22 21:42:10 2024-02-22 23:54:10 2024-02-23 00:40:12 0:46:02 0:35:29 0:10:33 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
pass 7571397 2024-02-22 21:42:11 2024-02-22 23:55:11 2024-02-23 01:04:36 1:09:25 1:00:45 0:08:40 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
pass 7571398 2024-02-22 21:42:12 2024-02-22 23:55:21 2024-02-23 00:29:54 0:34:33 0:24:25 0:10:08 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
pass 7571399 2024-02-22 21:42:13 2024-02-22 23:55:21 2024-02-23 00:36:48 0:41:27 0:32:08 0:09:19 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
fail 7571400 2024-02-22 21:42:14 2024-02-22 23:55:22 2024-02-23 00:32:21 0:36:59 0:27:11 0:09:48 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi133 with status 5: 'sudo systemctl stop ceph-66e5983c-d1e0-11ee-95c0-87774f69a715@mon.smithi133'

pass 7571401 2024-02-22 21:42:15 2024-02-22 23:55:22 2024-02-23 00:49:42 0:54:20 0:42:47 0:11:33 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
fail 7571402 2024-02-22 21:42:16 2024-02-22 23:55:22 2024-02-23 00:25:33 0:30:11 0:19:55 0:10:16 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-23T00:17:36.281002+0000 mon.a (mon.0) 572 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7571403 2024-02-22 21:42:17 2024-02-22 23:55:33 2024-02-23 00:45:01 0:49:28 0:40:36 0:08:52 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
pass 7571404 2024-02-22 21:42:17 2024-02-22 23:55:33 2024-02-23 00:19:42 0:24:09 0:17:44 0:06:25 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
fail 7571405 2024-02-22 21:42:18 2024-02-22 23:56:34 2024-02-23 00:49:22 0:52:48 0:41:56 0:10:52 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7571406 2024-02-22 21:42:19 2024-02-23 00:00:05 2024-02-23 00:47:01 0:46:56 0:35:02 0:11:54 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi012 with status 5: 'sudo systemctl stop ceph-7e9d9ae0-d1e2-11ee-95c0-87774f69a715@mon.smithi012'

pass 7571407 2024-02-22 21:42:20 2024-02-23 00:03:07 2024-02-23 01:00:28 0:57:21 0:48:09 0:09:12 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
pass 7571408 2024-02-22 21:42:21 2024-02-23 00:03:07 2024-02-23 00:51:46 0:48:39 0:36:08 0:12:31 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
fail 7571409 2024-02-22 21:42:22 2024-02-23 00:03:38 2024-02-23 00:41:24 0:37:46 0:27:16 0:10:30 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi194 with status 5: 'sudo systemctl stop ceph-aafe8438-d1e1-11ee-95c0-87774f69a715@mon.smithi194'

fail 7571410 2024-02-22 21:42:23 2024-02-23 00:04:18 2024-02-23 01:02:48 0:58:30 0:46:55 0:11:35 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

"2024-02-23T00:47:40.180059+0000 mon.a (mon.0) 2450 : cluster [WRN] Health check failed: 1 cache pools at or near target size (CACHE_POOL_NEAR_FULL)" in cluster log

pass 7571411 2024-02-22 21:42:24 2024-02-23 00:05:39 2024-02-23 00:38:55 0:33:16 0:24:16 0:09:00 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
fail 7571412 2024-02-22 21:42:25 2024-02-23 00:05:39 2024-02-23 00:50:02 0:44:23 0:33:47 0:10:36 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi154 with status 5: 'sudo systemctl stop ceph-ec7eeef6-d1e2-11ee-95c0-87774f69a715@mon.smithi154'

pass 7571413 2024-02-22 21:42:25 2024-02-23 00:06:40 2024-02-23 00:50:06 0:43:26 0:30:16 0:13:10 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
fail 7571414 2024-02-22 21:42:26 2024-02-23 00:49:54 1821 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi190 with status 5: 'sudo systemctl stop ceph-134b5146-d1e3-11ee-95c0-87774f69a715@mon.smithi190'

dead 7571415 2024-02-22 21:42:27 2024-02-23 00:12:16 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi067

fail 7571416 2024-02-22 21:42:28 2024-02-23 00:11:12 2024-02-23 01:02:56 0:51:44 0:40:57 0:10:47 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7571417 2024-02-22 21:42:29 2024-02-23 00:11:13 2024-02-23 00:44:03 0:32:50 0:22:57 0:09:53 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi179 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4f758cd8e8854d1f126da86e254076852bf059a9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7571418 2024-02-22 21:42:30 2024-02-23 00:11:13 2024-02-23 01:05:39 0:54:26 0:43:13 0:11:13 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7571419 2024-02-22 21:42:31 2024-02-23 00:11:14 2024-02-23 00:51:20 0:40:06 0:30:34 0:09:32 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi082 with status 5: 'sudo systemctl stop ceph-4a388110-d1e3-11ee-95c0-87774f69a715@mon.smithi082'

fail 7571420 2024-02-22 21:42:32 2024-02-23 00:12:25 2024-02-23 03:28:26 3:16:01 3:02:21 0:13:40 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-23T00:50:00.000128+0000 mon.smithi029 (mon.0) 530 : cluster [ERR] Health detail: HEALTH_ERR 2 failed cephadm daemon(s); 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds" in cluster log

fail 7571421 2024-02-22 21:42:33 2024-02-23 00:14:45 2024-02-23 01:11:10 0:56:25 0:43:12 0:13:13 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

pass 7571422 2024-02-22 21:42:34 2024-02-23 00:16:06 2024-02-23 01:01:53 0:45:47 0:34:58 0:10:49 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2