Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7556602 2024-02-12 15:28:47 2024-02-12 15:28:48 2024-02-12 16:16:26 0:47:38 0:37:21 0:10:17 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-12T16:00:00.000266+0000 mon.a (mon.0) 1804 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'" in cluster log

fail 7556603 2024-02-12 15:28:47 2024-02-12 15:28:48 2024-02-12 15:48:45 0:19:57 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} 3
Failure Reason:

Failed to reconnect to smithi125

dead 7556604 2024-02-12 15:28:48 2024-02-13 03:41:11 smithi main ubuntu 20.04 rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

fail 7556605 2024-02-12 15:28:49 2024-02-12 15:28:49 2024-02-12 16:07:46 0:38:57 0:27:08 0:11:49 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi165 with status 5: 'sudo systemctl stop ceph-3e171e28-c9be-11ee-95b9-87774f69a715@mon.smithi165'

fail 7556606 2024-02-12 15:28:50 2024-02-12 15:28:50 2024-02-12 16:08:46 0:39:56 0:23:17 0:16:39 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

Command failed on smithi017 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 6e13f51a-c9be-11ee-95b9-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 7556607 2024-02-12 15:28:51 2024-02-12 15:28:51 2024-02-12 15:58:31 0:29:40 0:18:32 0:11:08 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"2024-02-12T15:55:27.029754+0000 mon.smithi083 (mon.0) 616 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556608 2024-02-12 15:28:51 2024-02-12 15:28:51 2024-02-12 16:09:00 0:40:09 0:27:24 0:12:45 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

"2024-02-12T15:55:18.049583+0000 mon.a (mon.0) 500 : cluster [WRN] Replacing daemon mds.a.smithi140.cqbwdz as rank 0 with standby daemon mds.user_test_fs.smithi140.jbvnis" in cluster log

fail 7556609 2024-02-12 15:28:52 2024-02-12 15:28:52 2024-02-12 16:06:28 0:37:36 0:21:57 0:15:39 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-02-12T15:52:23.461297+0000 mon.a (mon.0) 162 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7556610 2024-02-12 15:28:53 2024-02-12 15:28:53 2024-02-12 16:22:56 0:54:03 0:41:37 0:12:26 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

"2024-02-12T16:02:17.309148+0000 mon.a (mon.0) 1013 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 7556611 2024-02-12 15:28:54 2024-02-12 15:28:54 2024-02-12 16:15:58 0:47:04 0:34:54 0:12:10 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

"2024-02-12T16:10:00.205019+0000 mon.a (mon.0) 933 : cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log

fail 7556612 2024-02-12 15:28:54 2024-02-12 15:28:54 2024-02-12 16:02:15 0:33:21 0:16:48 0:16:33 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
Failure Reason:

"2024-02-12T15:59:37.622036+0000 mon.a (mon.0) 465 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7556613 2024-02-12 15:28:55 2024-02-12 15:28:55 2024-02-12 16:09:36 0:40:41 0:27:07 0:13:34 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi136 with status 5: 'sudo systemctl stop ceph-91062ae8-c9be-11ee-95b9-87774f69a715@mon.smithi136'

fail 7556614 2024-02-12 15:28:56 2024-02-12 15:28:56 2024-02-12 15:48:23 0:19:27 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Failed to reconnect to smithi028

fail 7556615 2024-02-12 15:28:57 2024-02-12 15:28:57 2024-02-12 16:00:10 0:31:13 0:17:04 0:14:09 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-02-12T15:57:00.726343+0000 mon.smithi062 (mon.0) 635 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556616 2024-02-12 15:28:57 2024-02-12 15:28:57 2024-02-12 16:16:31 0:47:34 0:31:30 0:16:04 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-12T16:01:13.142918+0000 mon.a (mon.0) 213 : cluster [WRN] Health detail: HEALTH_WARN 1/4 mons down, quorum a,e,c" in cluster log

fail 7556617 2024-02-12 15:28:58 2024-02-12 15:28:58 2024-02-12 16:23:20 0:54:22 0:44:20 0:10:02 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7556618 2024-02-12 15:28:59 2024-02-12 15:28:59 2024-02-12 16:12:17 0:43:18 0:27:01 0:16:17 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

"2024-02-12T16:09:33.153772+0000 mon.a (mon.0) 490 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556619 2024-02-12 15:29:00 2024-02-12 15:29:00 2024-02-12 16:10:50 0:41:50 0:27:54 0:13:56 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi192 with status 5: 'sudo systemctl stop ceph-d2725466-c9be-11ee-95b9-87774f69a715@mon.smithi192'

fail 7556620 2024-02-12 15:29:00 2024-02-12 15:29:01 2024-02-12 16:15:35 0:46:34 0:34:47 0:11:47 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-12T15:52:46.792077+0000 mon.smithi019 (mon.0) 501 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7556621 2024-02-12 15:29:01 2024-02-12 15:29:01 2024-02-12 15:59:13 0:30:12 0:14:35 0:15:37 smithi main centos 8.stream rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
Failure Reason:

'package_manager_version'

pass 7556622 2024-02-12 15:29:02 2024-02-12 15:29:02 2024-02-12 16:03:26 0:34:24 0:18:01 0:16:23 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
fail 7556623 2024-02-12 15:29:03 2024-02-12 15:29:03 2024-02-12 16:24:09 0:55:06 0:41:31 0:13:35 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

"2024-02-12T16:05:01.816202+0000 mon.a (mon.0) 868 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

fail 7556624 2024-02-12 15:29:04 2024-02-12 15:29:04 2024-02-12 16:33:17 1:04:13 0:49:04 0:15:09 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-12T15:54:20.444402+0000 mon.a (mon.0) 153 : cluster [WRN] overall HEALTH_WARN 1/3 mons down, quorum a,c; Reduced data availability: 1 pg inactive" in cluster log

fail 7556625 2024-02-12 15:29:04 2024-02-12 15:29:04 2024-02-12 16:22:13 0:53:09 0:42:52 0:10:17 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-12T16:10:00.000215+0000 mon.a (mon.0) 2277 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'" in cluster log

fail 7556626 2024-02-12 15:29:05 2024-02-12 15:29:05 2024-02-12 16:06:13 0:37:08 0:21:04 0:16:04 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi149 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=57bd6abdec7bb457ae7999d9c96682e9ac678e27 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7556627 2024-02-12 15:29:06 2024-02-12 15:29:06 2024-02-12 15:56:52 0:27:46 0:17:53 0:09:53 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-02-12T15:53:46.861184+0000 mon.smithi104 (mon.0) 615 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556628 2024-02-12 15:29:07 2024-02-12 15:29:07 2024-02-12 16:10:13 0:41:06 0:31:37 0:09:29 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-12T15:54:23.688020+0000 mon.smithi018 (mon.0) 498 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

fail 7556629 2024-02-12 15:29:07 2024-02-12 15:29:07 2024-02-12 16:36:51 1:07:44 0:53:47 0:13:57 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

"2024-02-12T16:09:06.310670+0000 mon.a (mon.0) 1002 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

pass 7556630 2024-02-12 15:29:08 2024-02-12 15:29:08 2024-02-12 16:13:36 0:44:28 0:31:02 0:13:26 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
pass 7556631 2024-02-12 15:29:09 2024-02-12 15:29:09 2024-02-12 16:00:43 0:31:34 0:19:47 0:11:47 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start} 2
pass 7556632 2024-02-12 15:29:10 2024-02-12 15:29:10 2024-02-12 16:05:38 0:36:28 0:16:34 0:19:54 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
fail 7556633 2024-02-12 15:29:11 2024-02-12 15:29:11 2024-02-12 17:07:01 1:37:50 1:27:06 0:10:44 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} 3
Failure Reason:

"2024-02-12T16:10:00.000101+0000 mon.a (mon.0) 1168 : cluster [WRN] Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set" in cluster log

fail 7556634 2024-02-12 15:29:11 2024-02-12 15:29:11 2024-02-12 15:58:47 0:29:36 0:18:09 0:11:27 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-02-12T15:55:16.270707+0000 mon.smithi038 (mon.0) 561 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556635 2024-02-12 15:29:12 2024-02-12 15:29:12 2024-02-12 16:05:35 0:36:23 0:26:37 0:09:46 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-12T16:00:23.176158+0000 mon.smithi174 (mon.0) 465 : cluster [WRN] Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7556636 2024-02-12 15:29:13 2024-02-12 15:29:13 2024-02-12 16:06:12 0:36:59 0:20:36 0:16:23 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

"2024-02-12T15:57:13.735315+0000 mon.a (mon.0) 573 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7556637 2024-02-12 15:29:14 2024-02-12 15:29:14 2024-02-12 16:12:47 0:43:33 0:28:19 0:15:14 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

"2024-02-12T16:00:34.170289+0000 mon.a (mon.0) 502 : cluster [WRN] Replacing daemon mds.a.smithi090.sokhhe as rank 0 with standby daemon mds.user_test_fs.smithi090.mjbayh" in cluster log

fail 7556638 2024-02-12 15:29:15 2024-02-12 15:29:15 2024-02-12 15:50:51 0:21:36 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Failed to reconnect to smithi006

fail 7556639 2024-02-12 15:29:15 2024-02-12 15:29:15 2024-02-12 16:12:46 0:43:31 0:32:24 0:11:07 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-12T15:51:43.123455+0000 mon.smithi067 (mon.0) 350 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7556640 2024-02-12 15:29:16 2024-02-12 15:29:16 2024-02-12 16:15:42 0:46:26 0:35:49 0:10:37 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

"2024-02-12T15:55:21.598164+0000 mon.a (mon.0) 535 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7556641 2024-02-12 15:29:17 2024-02-12 15:29:17 2024-02-12 16:26:23 0:57:06 0:41:44 0:15:22 smithi main ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-12T15:55:30.114434+0000 mon.a (mon.0) 162 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7556642 2024-02-12 15:29:18 2024-02-12 15:29:18 2024-02-12 15:54:51 0:25:33 0:15:24 0:10:09 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

"2024-02-12T15:52:28.306392+0000 mon.a (mon.0) 465 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7556643 2024-02-12 15:29:18 2024-02-12 15:29:18 2024-02-12 16:15:40 0:46:22 0:35:07 0:11:15 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-12T16:00:27.752263+0000 mon.smithi097 (mon.0) 370 : cluster [WRN] Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7556644 2024-02-12 15:29:19 2024-02-12 15:29:19 2024-02-12 16:05:42 0:36:23 0:23:16 0:13:07 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
fail 7556645 2024-02-12 15:29:20 2024-02-12 15:29:20 2024-02-12 16:14:01 0:44:41 0:32:11 0:12:30 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-12T16:00:26.274053+0000 mon.a (mon.0) 194 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,e" in cluster log

fail 7556646 2024-02-12 15:29:21 2024-02-12 15:29:21 2024-02-12 16:36:29 1:07:08 0:52:36 0:14:32 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

"2024-02-12T16:20:00.000159+0000 mon.a (mon.0) 1944 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log

fail 7556647 2024-02-12 15:29:22 2024-02-12 15:29:22 2024-02-12 16:09:53 0:40:31 0:26:30 0:14:01 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

"2024-02-12T16:06:28.568927+0000 mon.a (mon.0) 505 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556648 2024-02-12 15:29:22 2024-02-12 15:29:22 2024-02-12 16:12:16 0:42:54 0:28:28 0:14:26 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi052 with status 5: 'sudo systemctl stop ceph-e0556cf8-c9be-11ee-95b9-87774f69a715@mon.smithi052'

pass 7556649 2024-02-12 15:29:23 2024-02-12 15:29:23 2024-02-12 16:02:01 0:32:38 0:18:29 0:14:09 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
fail 7556650 2024-02-12 15:29:24 2024-02-12 15:29:24 2024-02-12 16:27:57 0:58:33 0:42:57 0:15:36 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

"2024-02-12T16:06:21.724585+0000 mon.a (mon.0) 892 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556651 2024-02-12 15:29:25 2024-02-12 15:29:25 2024-02-12 16:17:06 0:47:41 0:31:50 0:15:51 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-12T15:52:40.987357+0000 mon.a (mon.0) 135 : cluster [WRN] overall HEALTH_WARN Reduced data availability: 1 pg inactive" in cluster log

fail 7556652 2024-02-12 15:29:26 2024-02-12 15:29:26 2024-02-12 15:58:04 0:28:38 0:16:40 0:11:58 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi047 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0e714d9a4bd2a821113e6318adb87bd06cf81ec1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e5efa256-c9bd-11ee-95b9-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

fail 7556653 2024-02-12 15:29:26 2024-02-12 15:29:26 2024-02-12 16:02:54 0:33:28 0:21:39 0:11:49 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi133 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=57bd6abdec7bb457ae7999d9c96682e9ac678e27 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7556654 2024-02-12 15:29:27 2024-02-12 15:29:27 2024-02-12 15:59:52 0:30:25 0:20:06 0:10:19 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/classic start} 2
fail 7556655 2024-02-12 15:29:28 2024-02-12 15:29:28 2024-02-12 16:12:20 0:42:52 0:28:07 0:14:45 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi181 with status 5: 'sudo systemctl stop ceph-07c0a8ac-c9bf-11ee-95b9-87774f69a715@mon.smithi181'

fail 7556656 2024-02-12 15:29:29 2024-02-12 15:29:29 2024-02-12 16:16:02 0:46:33 0:34:03 0:12:30 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-12T15:51:21.266920+0000 mon.smithi099 (mon.0) 253 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)" in cluster log

fail 7556657 2024-02-12 15:29:29 2024-02-12 15:29:29 2024-02-12 16:18:43 0:49:14 0:38:37 0:10:37 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

"2024-02-12T16:00:00.000090+0000 mon.a (mon.0) 774 : cluster [WRN] Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log

fail 7556658 2024-02-12 15:29:30 2024-02-12 15:29:30 2024-02-12 16:27:50 0:58:20 0:46:01 0:12:19 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-12T16:10:00.000183+0000 mon.a (mon.0) 1453 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'" in cluster log

fail 7556659 2024-02-12 15:29:31 2024-02-12 15:29:31 2024-02-12 16:02:58 0:33:27 0:19:11 0:14:16 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-02-12T15:58:59.691149+0000 mon.smithi088 (mon.0) 632 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556660 2024-02-12 15:29:32 2024-02-12 15:29:32 2024-02-12 16:10:46 0:41:14 0:27:45 0:13:29 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi187 with status 5: 'sudo systemctl stop ceph-dc5e37f6-c9be-11ee-95b9-87774f69a715@mon.smithi187'

fail 7556661 2024-02-12 15:29:33 2024-02-12 15:29:33 2024-02-12 16:36:57 1:07:24 0:55:27 0:11:57 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

"2024-02-12T15:50:19.804154+0000 mon.a (mon.0) 135 : cluster [WRN] overall HEALTH_WARN Reduced data availability: 1 pg inactive" in cluster log

fail 7556662 2024-02-12 15:29:33 2024-02-12 15:29:33 2024-02-12 15:53:53 0:24:20 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

Failed to reconnect to smithi157

fail 7556663 2024-02-12 15:29:34 2024-02-12 15:29:34 2024-02-12 16:08:55 0:39:21 0:27:13 0:12:08 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

"2024-02-12T15:55:10.791725+0000 mon.a (mon.0) 510 : cluster [WRN] Replacing daemon mds.a.smithi107.karzwc as rank 0 with standby daemon mds.user_test_fs.smithi107.nunjlh" in cluster log

fail 7556664 2024-02-12 15:29:35 2024-02-12 15:29:35 2024-02-12 16:05:00 0:35:25 0:22:08 0:13:17 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-02-12T15:50:45.961288+0000 mon.a (mon.0) 162 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7556665 2024-02-12 15:29:36 2024-02-12 15:29:36 2024-02-12 16:33:32 1:03:56 0:52:04 0:11:52 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
Failure Reason:

"2024-02-12T15:57:23.172190+0000 mon.a (mon.0) 163 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

pass 7556666 2024-02-12 15:29:36 2024-02-12 15:29:37 2024-02-12 16:14:50 0:45:13 0:34:36 0:10:37 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
fail 7556667 2024-02-12 15:29:37 2024-02-12 15:29:37 2024-02-12 16:01:05 0:31:28 0:17:48 0:13:40 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
Failure Reason:

"2024-02-12T15:59:11.592088+0000 mon.a (mon.0) 465 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7556668 2024-02-12 15:29:38 2024-02-12 15:29:38 2024-02-12 15:50:21 0:20:43 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Failed to reconnect to smithi129

fail 7556669 2024-02-12 15:29:39 2024-02-12 15:29:39 2024-02-12 16:30:55 1:01:16 0:26:36 0:34:40 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi037 with status 5: 'sudo systemctl stop ceph-a22008fa-c9c1-11ee-95b9-87774f69a715@mon.smithi037'

fail 7556670 2024-02-12 15:29:40 2024-02-12 15:54:23 2024-02-12 16:19:23 0:25:00 0:14:50 0:10:10 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-02-12T16:15:34.971342+0000 mon.smithi042 (mon.0) 622 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556671 2024-02-12 15:29:40 2024-02-12 15:54:23 2024-02-12 16:33:57 0:39:34 0:28:14 0:11:20 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-12T16:18:39.452915+0000 mon.a (mon.0) 192 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,e" in cluster log

fail 7556672 2024-02-12 15:29:41 2024-02-12 15:54:23 2024-02-12 17:07:03 1:12:40 0:51:01 0:21:39 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

"2024-02-12T16:24:44.095436+0000 mon.a (mon.0) 173 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.148:3300/0,v1:172.21.15.148:6789/0] is down (out of quorum)" in cluster log

fail 7556673 2024-02-12 15:29:42 2024-02-12 15:54:24 2024-02-12 16:27:15 0:32:51 0:23:29 0:09:22 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

"2024-02-12T16:24:16.702427+0000 mon.a (mon.0) 496 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556674 2024-02-12 15:29:43 2024-02-12 15:54:24 2024-02-12 16:35:04 0:40:40 0:25:59 0:14:41 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi142 with status 5: 'sudo systemctl stop ceph-2c4ec2b4-c9c2-11ee-95b9-87774f69a715@mon.smithi142'

fail 7556675 2024-02-12 15:29:43 2024-02-12 15:59:55 2024-02-12 16:33:50 0:33:55 0:22:55 0:11:00 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi183 with status 5: 'sudo systemctl stop ceph-ee1b93fa-c9c1-11ee-95b9-87774f69a715@mon.smithi183'

fail 7556676 2024-02-12 15:29:44 2024-02-12 16:00:46 2024-02-12 16:24:50 0:24:04 0:11:37 0:12:27 smithi main centos 8.stream rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
Failure Reason:

'package_manager_version'

pass 7556677 2024-02-12 15:29:45 2024-02-12 16:03:37 2024-02-12 16:32:56 0:29:19 0:16:48 0:12:31 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
fail 7556678 2024-02-12 15:29:46 2024-02-12 16:05:47 2024-02-12 16:50:57 0:45:10 0:35:49 0:09:21 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

"2024-02-12T16:30:41.483807+0000 mon.a (mon.0) 537 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7556679 2024-02-12 15:29:46 2024-02-12 16:05:48 2024-02-12 16:46:49 0:41:01 0:25:10 0:15:51 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

Command failed on smithi063 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5a5ecd92-c9c3-11ee-95b9-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 7556680 2024-02-12 15:29:47 2024-02-12 16:09:38 2024-02-12 16:59:53 0:50:15 0:40:23 0:09:52 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-12T16:29:27.322634+0000 mon.a (mon.0) 165 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7556681 2024-02-12 15:29:48 2024-02-12 16:09:39 2024-02-12 16:34:51 0:25:12 0:18:01 0:07:11 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-02-12T16:30:26.879524+0000 mon.smithi083 (mon.0) 615 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

dead 7556682 2024-02-12 15:29:48 2024-02-12 16:09:39 2024-02-12 16:31:00 0:21:21 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

fail 7556683 2024-02-12 15:29:49 2024-02-12 16:09:40 2024-02-12 16:36:40 0:27:00 0:17:55 0:09:05 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi136 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=57bd6abdec7bb457ae7999d9c96682e9ac678e27 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7556684 2024-02-12 15:29:50 2024-02-12 16:09:40 2024-02-12 16:48:09 0:38:29 0:27:02 0:11:27 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi071 with status 5: 'sudo systemctl stop ceph-dda33ac6-c9c3-11ee-95b9-87774f69a715@mon.smithi071'

fail 7556685 2024-02-12 15:29:51 2024-02-12 16:09:40 2024-02-12 17:02:07 0:52:27 0:40:44 0:11:43 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

"2024-02-12T16:40:00.000179+0000 mon.a (mon.0) 947 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log

pass 7556686 2024-02-12 15:29:51 2024-02-12 16:09:41 2024-02-12 16:33:47 0:24:06 0:16:20 0:07:46 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start} 2
fail 7556687 2024-02-12 15:29:52 2024-02-12 16:09:41 2024-02-12 16:29:36 0:19:55 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Failed to reconnect to smithi155

fail 7556688 2024-02-12 15:29:53 2024-02-12 16:09:41 2024-02-12 16:45:40 0:35:59 0:26:30 0:09:29 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi149 with status 5: 'sudo systemctl stop ceph-aa0fd796-c9c3-11ee-95b9-87774f69a715@mon.smithi149'

fail 7556689 2024-02-12 15:29:54 2024-02-12 16:09:42 2024-02-12 17:10:50 1:01:08 0:50:09 0:10:59 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

"2024-02-12T16:28:21.791194+0000 mon.a (mon.0) 136 : cluster [WRN] overall HEALTH_WARN Reduced data availability: 1 pg inactive" in cluster log

fail 7556690 2024-02-12 15:29:54 2024-02-12 16:09:42 2024-02-12 16:46:18 0:36:36 0:26:59 0:09:37 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

"2024-02-12T16:32:35.216201+0000 mon.a (mon.0) 504 : cluster [WRN] Replacing daemon mds.a.smithi038.jslhgw as rank 0 with standby daemon mds.user_test_fs.smithi038.npzaff" in cluster log

fail 7556691 2024-02-12 15:29:55 2024-02-12 16:09:42 2024-02-12 16:29:34 0:19:52 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Failed to reconnect to smithi177

fail 7556692 2024-02-12 15:29:56 2024-02-12 16:09:43 2024-02-12 16:42:45 0:33:02 0:22:51 0:10:11 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi184 with status 5: 'sudo systemctl stop ceph-18920fc8-c9c3-11ee-95b9-87774f69a715@mon.smithi184'

fail 7556693 2024-02-12 15:29:57 2024-02-12 16:09:43 2024-02-12 16:58:10 0:48:27 0:37:32 0:10:55 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

"2024-02-12T16:45:13.065709+0000 mon.a (mon.0) 2904 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 7556694 2024-02-12 15:29:57 2024-02-12 16:09:43 2024-02-12 16:57:47 0:48:04 0:36:58 0:11:06 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-12T16:38:50.527412+0000 mon.a (mon.0) 1115 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7556695 2024-02-12 15:29:58 2024-02-12 16:09:44 2024-02-12 16:34:20 0:24:36 0:16:01 0:08:35 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

"2024-02-12T16:33:09.488210+0000 mon.a (mon.0) 461 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7556696 2024-02-12 15:29:59 2024-02-12 16:09:44 2024-02-12 16:31:54 0:22:10 0:14:36 0:07:34 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"2024-02-12T16:29:17.446487+0000 mon.smithi133 (mon.0) 566 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556697 2024-02-12 15:29:59 2024-02-12 16:09:44 2024-02-12 16:46:01 0:36:17 0:25:55 0:10:22 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi175 with status 5: 'sudo systemctl stop ceph-c5321fe8-c9c3-11ee-95b9-87774f69a715@mon.smithi175'

fail 7556698 2024-02-12 15:30:00 2024-02-12 16:09:45 2024-02-12 16:42:47 0:33:02 0:22:22 0:10:40 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-02-12T16:28:54.836433+0000 mon.a (mon.0) 162 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7556699 2024-02-12 15:30:01 2024-02-12 16:09:45 2024-02-12 16:54:16 0:44:31 0:28:04 0:16:27 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-12T16:39:00.970909+0000 mon.a (mon.0) 192 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,e" in cluster log

pass 7556700 2024-02-12 15:30:02 2024-02-12 16:14:56 2024-02-12 17:18:55 1:03:59 0:35:12 0:28:47 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls} 3
fail 7556701 2024-02-12 15:30:02 2024-02-12 16:25:08 2024-02-12 17:06:20 0:41:12 0:33:24 0:07:48 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7556702 2024-02-12 15:30:03 2024-02-12 16:25:08 2024-02-12 17:00:33 0:35:25 0:23:28 0:11:57 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

"2024-02-12T16:57:58.931133+0000 mon.a (mon.0) 487 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556703 2024-02-12 15:30:04 2024-02-12 16:25:09 2024-02-12 17:02:56 0:37:47 0:26:38 0:11:09 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-e489be12-c9c5-11ee-95b9-87774f69a715@mon.smithi188'

pass 7556704 2024-02-12 15:30:05 2024-02-12 16:25:09 2024-02-12 16:52:21 0:27:12 0:17:17 0:09:55 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
fail 7556705 2024-02-12 15:30:05 2024-02-12 16:25:10 2024-02-12 17:28:05 1:02:55 0:53:14 0:09:41 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

"2024-02-12T16:46:05.953472+0000 mon.a (mon.0) 162 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7556706 2024-02-12 15:30:06 2024-02-12 16:25:10 2024-02-12 17:25:25 1:00:15 0:48:41 0:11:34 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-12T16:44:22.990973+0000 mon.a (mon.0) 152 : cluster [WRN] overall HEALTH_WARN Reduced data availability: 1 pg inactive" in cluster log

fail 7556707 2024-02-12 15:30:07 2024-02-12 16:25:10 2024-02-12 16:53:09 0:27:59 0:16:53 0:11:06 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-02-12T16:48:46.202241+0000 mon.smithi050 (mon.0) 611 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7556708 2024-02-12 15:30:08 2024-02-12 16:25:11 2024-02-12 17:16:24 0:51:13 0:41:15 0:09:58 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7556709 2024-02-12 15:30:08 2024-02-12 16:25:11 2024-02-12 17:24:31 0:59:20 0:49:31 0:09:49 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

"2024-02-12T17:00:00.000146+0000 mon.a (mon.0) 1073 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log

fail 7556710 2024-02-12 15:30:09 2024-02-12 16:25:11 2024-02-12 16:56:16 0:31:05 0:19:46 0:11:19 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi146 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=57bd6abdec7bb457ae7999d9c96682e9ac678e27 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7556711 2024-02-12 15:30:10 2024-02-12 16:25:12 2024-02-12 16:51:24 0:26:12 0:18:47 0:07:25 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/classic start} 2
fail 7556712 2024-02-12 15:30:11 2024-02-12 16:25:12 2024-02-12 17:22:57 0:57:45 0:44:49 0:12:56 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7556713 2024-02-12 15:30:11 2024-02-12 16:25:12 2024-02-12 17:03:43 0:38:31 0:26:20 0:12:11 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi134 with status 5: 'sudo systemctl stop ceph-040c1488-c9c6-11ee-95b9-87774f69a715@mon.smithi134'

pass 7556714 2024-02-12 15:30:12 2024-02-12 16:25:13 2024-02-12 17:09:06 0:43:53 0:34:09 0:09:44 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api} 3
fail 7556715 2024-02-12 15:30:13 2024-02-12 16:25:13 2024-02-12 18:08:11 1:42:58 1:32:13 0:10:45 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi097 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 92a04f1c-c9c5-11ee-95b9-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 7556716 2024-02-12 15:30:14 2024-02-12 16:25:14 2024-02-12 17:14:19 0:49:05 0:40:13 0:08:52 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7556717 2024-02-12 15:30:14 2024-02-12 16:25:14 2024-02-12 17:11:50 0:46:36 0:34:28 0:12:08 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

"2024-02-12T16:58:12.304620+0000 mon.a (mon.0) 1134 : cluster [WRN] Health check failed: Degraded data redundancy: 401/4158 objects degraded (9.644%), 5 pgs degraded (PG_DEGRADED)" in cluster log

fail 7556718 2024-02-12 15:30:15 2024-02-12 16:25:14 2024-02-12 17:14:03 0:48:49 0:41:13 0:07:36 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-12T16:50:31.103006+0000 mon.a (mon.0) 159 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log