Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7555253 2024-02-09 21:36:50 2024-02-10 07:24:56 2024-02-10 08:40:43 1:15:47 0:37:05 0:38:42 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-10T08:30:00.000268+0000 mon.a (mon.0) 2326 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'" in cluster log

fail 7555254 2024-02-09 21:36:51 2024-02-10 07:24:57 2024-02-10 07:45:30 0:20:33 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} 3
Failure Reason:

Failed to reconnect to smithi133

pass 7555255 2024-02-09 21:36:52 2024-02-10 07:24:57 2024-02-10 07:48:43 0:23:46 0:15:56 0:07:50 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
dead 7555256 2024-02-09 21:36:53 2024-02-10 07:24:58 2024-02-10 19:38:15 12:13:17 smithi main ubuntu 20.04 rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

fail 7555257 2024-02-09 21:36:53 2024-02-10 07:24:58 2024-02-10 08:37:35 1:12:37 0:31:58 0:40:39 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-10T08:21:32.912995+0000 mon.smithi049 (mon.0) 500 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

fail 7555258 2024-02-09 21:36:54 2024-02-10 07:24:58 2024-02-10 08:07:51 0:42:53 0:31:44 0:11:09 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

"2024-02-10T07:43:13.002584+0000 mon.a (mon.0) 134 : cluster [WRN] overall HEALTH_WARN Reduced data availability: 1 pg inactive" in cluster log

fail 7555259 2024-02-09 21:36:55 2024-02-10 07:24:59 2024-02-10 07:57:23 0:32:24 0:20:04 0:12:20 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"2024-02-10T07:54:05.876201+0000 mon.smithi090 (mon.0) 654 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7555260 2024-02-09 21:36:56 2024-02-10 07:26:19 2024-02-10 08:35:10 1:08:51 0:31:05 0:37:46 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

"2024-02-10T08:22:02.912955+0000 mon.a (mon.0) 507 : cluster [WRN] Replacing daemon mds.a.smithi106.ajsrlw as rank 0 with standby daemon mds.user_test_fs.smithi106.qltqyw" in cluster log

fail 7555261 2024-02-09 21:36:57 2024-02-10 07:26:19 2024-02-10 08:05:37 0:39:18 0:21:34 0:17:44 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-02-10T07:51:32.592823+0000 mon.a (mon.0) 162 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7555262 2024-02-09 21:36:57 2024-02-10 07:33:01 2024-02-10 08:54:53 1:21:52 0:40:31 0:41:21 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

"2024-02-10T08:48:32.443451+0000 mon.a (mon.0) 2279 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 7555263 2024-02-09 21:36:58 2024-02-10 07:34:51 2024-02-10 08:13:11 0:38:20 0:31:00 0:07:20 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

"2024-02-10T07:59:31.317053+0000 mon.a (mon.0) 159 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7555264 2024-02-09 21:36:59 2024-02-10 07:36:02 2024-02-10 08:31:13 0:55:11 0:16:09 0:39:02 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
Failure Reason:

"2024-02-10T08:28:50.065000+0000 mon.a (mon.0) 470 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

pass 7555265 2024-02-09 21:37:00 2024-02-10 07:36:02 2024-02-10 08:07:08 0:31:06 0:19:02 0:12:04 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
fail 7555266 2024-02-09 21:37:01 2024-02-10 07:36:53 2024-02-10 08:47:58 1:11:05 0:31:14 0:39:51 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-10T08:30:57.459449+0000 mon.smithi086 (mon.0) 322 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7555267 2024-02-09 21:37:02 2024-02-10 07:38:23 2024-02-10 08:00:18 0:21:55 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Failed to reconnect to smithi111

pass 7555268 2024-02-09 21:37:02 2024-02-10 07:39:24 2024-02-10 08:13:18 0:33:54 0:22:56 0:10:58 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
fail 7555269 2024-02-09 21:37:03 2024-02-10 07:39:24 2024-02-10 08:35:16 0:55:52 0:15:49 0:40:03 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-02-10T08:31:12.801955+0000 mon.smithi063 (mon.0) 618 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7555270 2024-02-09 21:37:04 2024-02-10 07:39:25 2024-02-10 08:06:11 0:26:46 0:19:47 0:06:59 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/rados_cls_all} 2
fail 7555271 2024-02-09 21:37:05 2024-02-10 07:39:25 2024-02-10 08:49:38 1:10:13 0:29:10 0:41:03 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-10T08:43:33.235243+0000 mon.a (mon.0) 977 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 7555272 2024-02-09 21:37:06 2024-02-10 08:09:14 1185 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on smithi007 with status 32: "sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'mount -t nfs 10.0.31.7:/foouser /mnt/foo'"

fail 7555273 2024-02-09 21:37:07 2024-02-10 07:39:26 2024-02-10 08:42:30 1:03:04 0:25:14 0:37:50 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

"2024-02-10T08:40:40.621791+0000 mon.a (mon.0) 506 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7555274 2024-02-09 21:37:08 2024-02-10 07:39:26 2024-02-10 08:51:27 1:12:01 0:31:41 0:40:20 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-10T08:40:00.000108+0000 mon.smithi052 (mon.0) 106 : cluster [WRN] Health detail: HEALTH_WARN 1 osds down; Degraded data redundancy: 41/213 objects degraded (19.249%), 17 pgs degraded" in cluster log

fail 7555275 2024-02-09 21:37:09 2024-02-10 07:39:27 2024-02-10 08:49:18 1:09:51 0:31:32 0:38:19 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-10T08:28:49.622251+0000 mon.smithi138 (mon.0) 325 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7555276 2024-02-09 21:37:09 2024-02-10 07:39:27 2024-02-10 08:31:30 0:52:03 0:12:28 0:39:35 smithi main centos 8.stream rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
Failure Reason:

'package_manager_version'

pass 7555277 2024-02-09 21:37:10 2024-02-10 07:39:27 2024-02-10 08:35:10 0:55:43 0:17:18 0:38:25 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
fail 7555278 2024-02-09 21:37:11 2024-02-10 07:39:28 2024-02-10 08:54:42 1:15:14 0:37:35 0:37:39 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

"2024-02-10T08:29:43.313177+0000 mon.a (mon.0) 164 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7555279 2024-02-09 21:37:12 2024-02-10 07:39:28 2024-02-10 08:43:06 1:03:38 0:22:23 0:41:15 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

Command failed on smithi099 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0db3f732-c7ee-11ee-95b8-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 7555280 2024-02-09 21:37:13 2024-02-10 07:41:59 2024-02-10 08:29:19 0:47:20 0:40:55 0:06:25 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-10T08:10:56.686459+0000 mon.a (mon.0) 816 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

pass 7555281 2024-02-09 21:37:14 2024-02-10 07:41:59 2024-02-10 08:41:51 0:59:52 0:18:16 0:41:36 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
fail 7555282 2024-02-09 21:37:14 2024-02-10 07:44:50 2024-02-10 08:10:23 0:25:33 0:17:55 0:07:38 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-02-10T08:07:02.945261+0000 mon.smithi076 (mon.0) 613 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7555283 2024-02-09 21:37:15 2024-02-10 07:44:50 2024-02-10 08:12:00 0:27:10 0:21:24 0:05:46 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
fail 7555284 2024-02-09 21:37:16 2024-02-10 07:44:51 2024-02-10 09:00:21 1:15:30 0:35:42 0:39:48 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-10T08:45:19.470561+0000 mon.smithi146 (mon.0) 503 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

fail 7555285 2024-02-09 21:37:17 2024-02-10 07:44:51 2024-02-10 08:51:22 1:06:31 0:49:12 0:17:19 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

"2024-02-10T08:19:30.793075+0000 mon.a (mon.0) 751 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7555286 2024-02-09 21:37:18 2024-02-10 07:48:02 2024-02-10 08:06:49 0:18:47 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
Failure Reason:

Failed to reconnect to smithi181

fail 7555287 2024-02-09 21:37:19 2024-02-10 07:48:52 2024-02-10 08:23:40 0:34:48 0:16:51 0:17:57 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-02-10T08:16:05.997530+0000 mon.a (mon.0) 159 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7555288 2024-02-09 21:37:20 2024-02-10 07:54:43 2024-02-10 08:15:45 0:21:02 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
Failure Reason:

Failed to reconnect to smithi081

fail 7555289 2024-02-09 21:37:20 2024-02-10 07:54:44 2024-02-10 08:16:41 0:21:57 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} 3
Failure Reason:

Failed to reconnect to smithi140

fail 7555290 2024-02-09 21:37:21 2024-02-10 07:54:44 2024-02-10 08:15:41 0:20:57 0:15:22 0:05:35 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-02-10T08:13:06.831266+0000 mon.smithi062 (mon.0) 562 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7555291 2024-02-09 21:37:22 2024-02-10 07:54:44 2024-02-10 08:45:15 0:50:31 0:32:19 0:18:12 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-10T08:34:48.508517+0000 mon.smithi064 (mon.0) 99 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7555292 2024-02-09 21:37:23 2024-02-10 07:54:45 2024-02-10 08:42:42 0:47:57 0:36:54 0:11:03 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

Command failed on smithi022 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a9bbcb08-c7eb-11ee-95b8-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.x | awk \'"\'"\'{print $2}\'"\'"\')\''

fail 7555293 2024-02-09 21:37:24 2024-02-10 07:54:45 2024-02-10 08:32:56 0:38:11 0:27:05 0:11:06 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

"2024-02-10T08:19:24.148709+0000 mon.a (mon.0) 500 : cluster [WRN] Replacing daemon mds.a.smithi167.zfeadd as rank 0 with standby daemon mds.user_test_fs.smithi167.rlukwt" in cluster log

fail 7555294 2024-02-09 21:37:24 2024-02-10 07:54:46 2024-02-10 08:16:09 0:21:23 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

Failed to reconnect to smithi172

fail 7555295 2024-02-09 21:37:25 2024-02-10 07:54:46 2024-02-10 08:36:27 0:41:41 0:30:58 0:10:43 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-10T08:14:54.597292+0000 mon.smithi121 (mon.0) 350 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7555296 2024-02-09 21:37:26 2024-02-10 07:54:46 2024-02-10 08:48:49 0:54:03 0:38:43 0:15:20 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

"2024-02-10T08:25:05.097386+0000 mon.a (mon.0) 161 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

pass 7555297 2024-02-09 21:37:27 2024-02-10 07:54:47 2024-02-10 08:33:04 0:38:17 0:22:51 0:15:26 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
fail 7555298 2024-02-09 21:37:28 2024-02-10 07:54:47 2024-02-10 08:47:45 0:52:58 0:41:28 0:11:30 smithi main ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-10T08:28:40.551937+0000 mon.a (mon.0) 852 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7555299 2024-02-09 21:37:29 2024-02-10 07:54:47 2024-02-10 08:19:09 0:24:22 0:16:28 0:07:54 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

"2024-02-10T08:17:40.536611+0000 mon.a (mon.0) 459 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7555300 2024-02-09 21:37:29 2024-02-10 07:54:48 2024-02-10 08:43:51 0:49:03 0:40:03 0:09:00 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-10T08:25:59.938083+0000 mon.smithi097 (mon.0) 321 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7555301 2024-02-09 21:37:30 2024-02-10 07:54:48 2024-02-10 08:30:16 0:35:28 0:22:52 0:12:36 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-02-10T08:14:41.031311+0000 mon.a (mon.0) 163 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7555302 2024-02-09 21:37:31 2024-02-10 07:54:48 2024-02-10 08:38:41 0:43:53 0:28:45 0:15:08 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-10T08:24:03.066171+0000 mon.a (mon.0) 213 : cluster [WRN] Health detail: HEALTH_WARN 1/4 mons down, quorum a,e,c" in cluster log

fail 7555303 2024-02-09 21:37:32 2024-02-10 07:54:49 2024-02-10 08:15:41 0:20:52 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Failed to reconnect to smithi162

fail 7555304 2024-02-09 21:37:33 2024-02-10 07:54:49 2024-02-10 08:29:37 0:34:48 0:23:39 0:11:09 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

"2024-02-10T08:27:03.070381+0000 mon.a (mon.0) 499 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7555305 2024-02-09 21:37:34 2024-02-10 07:54:50 2024-02-10 08:47:04 0:52:14 0:31:39 0:20:35 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-10T08:32:51.033849+0000 mon.smithi028 (mon.0) 504 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

pass 7555306 2024-02-09 21:37:34 2024-02-10 08:06:11 2024-02-10 08:34:47 0:28:36 0:17:42 0:10:54 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
fail 7555307 2024-02-09 21:37:35 2024-02-10 08:07:12 2024-02-10 09:09:59 1:02:47 0:44:29 0:18:18 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

"2024-02-10T08:56:37.118972+0000 mon.a (mon.0) 1690 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 7555308 2024-02-09 21:37:36 2024-02-10 08:10:28 2024-02-10 08:52:21 0:41:53 0:31:59 0:09:54 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-10T08:27:22.866515+0000 mon.a (mon.0) 135 : cluster [WRN] overall HEALTH_WARN Reduced data availability: 1 pg inactive" in cluster log

fail 7555309 2024-02-09 21:37:37 2024-02-10 08:10:28 2024-02-10 08:36:01 0:25:33 0:15:44 0:09:49 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi017 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0e714d9a4bd2a821113e6318adb87bd06cf81ec1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cc3639dc-c7ed-11ee-95b8-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

pass 7555310 2024-02-09 21:37:38 2024-02-10 08:10:28 2024-02-10 08:37:38 0:27:10 0:18:17 0:08:53 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
pass 7555311 2024-02-09 21:37:39 2024-02-10 08:10:29 2024-02-10 08:38:40 0:28:11 0:20:56 0:07:15 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/classic start} 2
fail 7555312 2024-02-09 21:37:40 2024-02-10 08:10:29 2024-02-10 08:55:41 0:45:12 0:32:24 0:12:48 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-10T08:44:16.678115+0000 mon.smithi098 (mon.0) 7 : cluster [WRN] Health detail: HEALTH_WARN 1 filesystem with deprecated feature inline_data" in cluster log

fail 7555313 2024-02-09 21:37:40 2024-02-10 08:10:29 2024-02-10 08:40:35 0:30:06 0:19:23 0:10:43 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi076 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2a5ff606-c7ee-11ee-95b8-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 7555314 2024-02-09 21:37:41 2024-02-10 08:10:30 2024-02-10 08:49:17 0:38:47 0:17:45 0:21:02 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
fail 7555315 2024-02-09 21:37:42 2024-02-10 08:10:30 2024-02-10 08:59:25 0:48:55 0:39:05 0:09:50 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

"2024-02-10T08:40:00.000107+0000 mon.a (mon.0) 730 : cluster [WRN] Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set" in cluster log

fail 7555316 2024-02-09 21:37:43 2024-02-10 08:10:30 2024-02-10 09:00:45 0:50:15 0:42:38 0:07:37 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-10T08:35:08.867569+0000 mon.a (mon.0) 159 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

pass 7555317 2024-02-09 21:37:44 2024-02-10 08:10:31 2024-02-10 08:33:28 0:22:57 0:16:26 0:06:31 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
fail 7555318 2024-02-09 21:37:44 2024-02-10 08:10:32 2024-02-10 08:41:31 0:30:59 0:18:49 0:12:10 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-02-10T08:37:10.566373+0000 mon.smithi005 (mon.0) 651 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7555319 2024-02-09 21:37:45 2024-02-10 08:10:32 2024-02-10 08:38:37 0:28:05 0:20:48 0:07:17 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
fail 7555320 2024-02-09 21:37:46 2024-02-10 08:10:32 2024-02-10 08:55:13 0:44:41 0:33:22 0:11:19 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-10T08:38:36.154004+0000 mon.smithi003 (mon.0) 506 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

fail 7555321 2024-02-09 21:37:47 2024-02-10 08:12:03 2024-02-10 09:13:00 1:00:57 0:48:59 0:11:58 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

"2024-02-10T08:32:45.832632+0000 mon.a (mon.0) 146 : cluster [WRN] overall HEALTH_WARN Reduced data availability: 1 pg inactive" in cluster log

fail 7555322 2024-02-09 21:37:48 2024-02-10 08:13:24 2024-02-10 08:46:46 0:33:22 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

Failed to reconnect to smithi153

fail 7555323 2024-02-09 21:37:48 2024-02-10 08:25:46 2024-02-10 09:02:14 0:36:28 0:27:24 0:09:04 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

"2024-02-10T08:48:56.401145+0000 mon.a (mon.0) 502 : cluster [WRN] Replacing daemon mds.a.smithi037.xynkje as rank 0 with standby daemon mds.user_test_fs.smithi037.qyvxph" in cluster log

fail 7555324 2024-02-09 21:37:49 2024-02-10 08:25:46 2024-02-10 08:45:38 0:19:52 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

Failed to reconnect to smithi193

pass 7555325 2024-02-09 21:37:50 2024-02-10 08:25:46 2024-02-10 08:49:50 0:24:04 0:16:56 0:07:08 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
fail 7555326 2024-02-09 21:37:51 2024-02-10 08:25:47 2024-02-10 09:28:00 1:02:13 0:50:24 0:11:49 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
Failure Reason:

"2024-02-10T08:55:08.373866+0000 mon.a (mon.0) 735 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 7555327 2024-02-09 21:37:52 2024-02-10 08:25:47 2024-02-10 08:58:44 0:32:57 0:27:19 0:05:38 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

"2024-02-10T08:47:01.030452+0000 mon.a (mon.0) 160 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7555328 2024-02-09 21:37:53 2024-02-10 08:25:47 2024-02-10 08:50:35 0:24:48 0:15:54 0:08:54 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
Failure Reason:

"2024-02-10T08:49:01.194929+0000 mon.a (mon.0) 458 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7555329 2024-02-09 21:37:53 2024-02-10 08:25:48 2024-02-10 08:45:23 0:19:35 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Failed to reconnect to smithi192

fail 7555330 2024-02-09 21:37:54 2024-02-10 08:25:48 2024-02-10 09:15:42 0:49:54 0:34:34 0:15:20 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi155 with status 5: 'sudo systemctl stop ceph-6081b54a-c7f2-11ee-95b8-87774f69a715@mon.smithi155'

fail 7555331 2024-02-09 21:37:55 2024-02-10 08:25:48 2024-02-10 08:51:24 0:25:36 0:15:08 0:10:28 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-02-10T08:48:16.311850+0000 mon.smithi089 (mon.0) 626 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7555332 2024-02-09 21:37:56 2024-02-10 08:25:49 2024-02-10 08:57:35 0:31:46 0:22:50 0:08:56 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
fail 7555333 2024-02-09 21:37:57 2024-02-10 08:25:49 2024-02-10 09:17:26 0:51:37 0:36:13 0:15:24 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-10T09:11:01.737705+0000 mon.a (mon.0) 986 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 7555334 2024-02-09 21:37:57 2024-02-10 08:25:50 2024-02-10 09:42:48 1:16:58 0:55:26 0:21:32 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

"2024-02-10T08:56:07.636663+0000 mon.a (mon.0) 174 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.80:3300/0,v1:172.21.15.80:6789/0] is down (out of quorum)" in cluster log

fail 7555335 2024-02-09 21:37:58 2024-02-10 08:25:50 2024-02-10 09:08:22 0:42:32 0:24:13 0:18:19 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

"2024-02-10T09:04:55.350387+0000 mon.a (mon.0) 492 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7555336 2024-02-09 21:37:59 2024-02-10 08:33:11 2024-02-10 09:14:41 0:41:30 0:31:10 0:10:20 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-10T09:04:42.190380+0000 mon.smithi084 (mon.0) 272 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7555337 2024-02-09 21:38:00 2024-02-10 08:33:32 2024-02-10 09:18:32 0:45:00 0:31:59 0:13:01 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-10T08:55:33.749709+0000 mon.smithi072 (mon.0) 326 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7555338 2024-02-09 21:38:01 2024-02-10 08:34:52 2024-02-10 09:06:54 0:32:02 0:20:09 0:11:53 smithi main centos 8.stream rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
Failure Reason:

'package_manager_version'

fail 7555339 2024-02-09 21:38:01 2024-02-10 08:37:43 2024-02-10 09:06:53 0:29:10 0:17:29 0:11:41 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-02-10T08:58:06.310624+0000 mon.a (mon.0) 161 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7555340 2024-02-09 21:38:02 2024-02-10 08:38:44 2024-02-10 09:25:59 0:47:15 0:37:12 0:10:03 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

"2024-02-10T09:17:22.730142+0000 mon.a (mon.0) 1472 : cluster [ERR] Health check failed: full ratio(s) out of order (OSD_OUT_OF_ORDER_FULL)" in cluster log

fail 7555341 2024-02-09 21:38:03 2024-02-10 08:38:44 2024-02-10 09:12:02 0:33:18 0:19:43 0:13:35 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-10T08:59:29.461758+0000 mon.a (mon.0) 180 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log

fail 7555342 2024-02-09 21:38:04 2024-02-10 08:41:15 2024-02-10 09:01:01 0:19:46 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

Failed to reconnect to smithi017

fail 7555343 2024-02-09 21:38:05 2024-02-10 08:41:15 2024-02-10 09:08:18 0:27:03 0:19:39 0:07:24 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-02-10T09:03:33.202023+0000 mon.smithi053 (mon.0) 628 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7555344 2024-02-09 21:38:06 2024-02-10 08:41:15 2024-02-10 09:32:25 0:51:10 0:43:52 0:07:18 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7555345 2024-02-09 21:38:07 2024-02-10 08:41:16 2024-02-10 09:17:22 0:36:06 0:22:16 0:13:50 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=57bd6abdec7bb457ae7999d9c96682e9ac678e27 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7555346 2024-02-09 21:38:08 2024-02-10 08:41:16 2024-02-10 09:08:11 0:26:55 0:20:18 0:06:37 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
fail 7555347 2024-02-09 21:38:08 2024-02-10 08:41:16 2024-02-10 09:31:50 0:50:34 0:30:26 0:20:08 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi143 with status 5: 'sudo systemctl stop ceph-9fa8617c-c7f4-11ee-95b8-87774f69a715@mon.smithi143'

fail 7555348 2024-02-09 21:38:09 2024-02-10 08:41:17 2024-02-10 09:41:14 0:59:57 0:44:36 0:15:21 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

"2024-02-10T09:18:07.138772+0000 mon.a (mon.0) 742 : cluster [WRN] Health check failed: Reduced data availability: 10 pgs inactive, 16 pgs peering (PG_AVAILABILITY)" in cluster log

pass 7555349 2024-02-09 21:38:10 2024-02-10 08:41:17 2024-02-10 09:05:50 0:24:33 0:17:13 0:07:20 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start} 2
fail 7555350 2024-02-09 21:38:11 2024-02-10 08:41:17 2024-02-10 09:37:32 0:56:15 0:45:27 0:10:48 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} 3
Failure Reason:

"2024-02-10T09:20:00.000182+0000 mon.a (mon.0) 1445 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log

fail 7555351 2024-02-09 21:38:12 2024-02-10 08:41:18 2024-02-10 09:26:21 0:45:03 0:34:38 0:10:25 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi100 with status 5: 'sudo systemctl stop ceph-d19ef75a-c7f3-11ee-95b8-87774f69a715@mon.smithi100'

fail 7555352 2024-02-09 21:38:13 2024-02-10 08:41:18 2024-02-10 09:45:26 1:04:08 0:51:00 0:13:08 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

"2024-02-10T09:00:36.375518+0000 mon.a (mon.0) 135 : cluster [WRN] overall HEALTH_WARN Reduced data availability: 1 pg inactive" in cluster log

fail 7555353 2024-02-09 21:38:13 2024-02-10 08:41:19 2024-02-10 09:26:18 0:44:59 0:34:56 0:10:03 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

"2024-02-10T09:12:17.481849+0000 mon.a (mon.0) 501 : cluster [WRN] Replacing daemon mds.a.smithi191.ceylum as rank 0 with standby daemon mds.user_test_fs.smithi191.pvaune" in cluster log

fail 7555354 2024-02-09 21:38:14 2024-02-10 08:41:19 2024-02-10 09:15:33 0:34:14 0:23:00 0:11:14 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-02-10T08:59:24.902907+0000 mon.a (mon.0) 163 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7555355 2024-02-09 21:38:15 2024-02-10 08:41:19 2024-02-10 09:21:22 0:40:03 0:25:31 0:14:32 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi110 with status 5: 'sudo systemctl stop ceph-365142d0-c7f3-11ee-95b8-87774f69a715@mon.smithi110'

fail 7555356 2024-02-09 21:38:16 2024-02-10 08:41:20 2024-02-10 09:35:13 0:53:53 0:42:11 0:11:42 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

"2024-02-10T09:15:36.268084+0000 mon.a (mon.0) 942 : cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)" in cluster log

fail 7555357 2024-02-09 21:38:17 2024-02-10 08:41:20 2024-02-10 09:37:35 0:56:15 0:42:09 0:14:06 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-10T09:20:00.000349+0000 mon.a (mon.0) 1087 : cluster [WRN] application not enabled on pool 'ceph_test_rados_api_asio'" in cluster log

fail 7555358 2024-02-09 21:38:17 2024-02-10 08:41:20 2024-02-10 09:10:14 0:28:54 0:19:38 0:09:16 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

"2024-02-10T09:08:14.202182+0000 mon.a (mon.0) 467 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7555359 2024-02-09 21:38:18 2024-02-10 08:41:21 2024-02-10 09:03:24 0:22:03 0:15:06 0:06:57 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"2024-02-10T09:01:13.920727+0000 mon.smithi150 (mon.0) 577 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7555360 2024-02-09 21:38:19 2024-02-10 08:41:21 2024-02-10 09:31:50 0:50:29 0:30:40 0:19:49 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi157 with status 5: 'sudo systemctl stop ceph-9fd1b5f4-c7f4-11ee-95b8-87774f69a715@mon.smithi157'

fail 7555361 2024-02-09 21:38:20 2024-02-10 08:41:21 2024-02-10 09:17:00 0:35:39 0:22:54 0:12:45 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-02-10T09:02:13.437231+0000 mon.a (mon.0) 162 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7555362 2024-02-09 21:38:21 2024-02-10 08:41:22 2024-02-10 09:36:26 0:55:04 0:33:01 0:22:03 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-10T09:30:37.855200+0000 mon.a (mon.0) 984 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 7555363 2024-02-09 21:38:21 2024-02-10 08:49:23 2024-02-10 09:10:28 0:21:05 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Failed to reconnect to smithi027

fail 7555364 2024-02-09 21:38:22 2024-02-10 08:49:54 2024-02-10 09:39:42 0:49:48 0:36:38 0:13:10 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7555365 2024-02-09 21:38:23 2024-02-10 08:56:45 2024-02-10 09:33:49 0:37:04 0:28:13 0:08:51 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

"2024-02-10T09:31:32.995393+0000 mon.a (mon.0) 512 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7555366 2024-02-09 21:38:24 2024-02-10 08:56:45 2024-02-10 09:39:05 0:42:20 0:30:59 0:11:21 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi158 with status 5: 'sudo systemctl stop ceph-a847c16e-c7f5-11ee-95b8-87774f69a715@mon.smithi158'

fail 7555367 2024-02-09 21:38:25 2024-02-10 08:56:46 2024-02-10 09:25:58 0:29:12 0:19:29 0:09:43 smithi main centos 8.stream rados/cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-02-10T09:20:00.325463+0000 mon.a (mon.0) 541 : cluster [WRN] Health detail: HEALTH_WARN Reduced data availability: 1 pg inactive, 1 pg peering" in cluster log

fail 7555368 2024-02-09 21:38:26 2024-02-10 08:56:46 2024-02-10 09:57:50 1:01:04 0:52:16 0:08:48 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

"2024-02-10T09:25:19.627471+0000 mon.a (mon.0) 543 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7555369 2024-02-09 21:38:27 2024-02-10 08:56:46 2024-02-10 09:58:00 1:01:14 0:49:52 0:11:22 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-10T09:18:27.806179+0000 mon.a (mon.0) 153 : cluster [WRN] overall HEALTH_WARN Reduced data availability: 1 pg inactive" in cluster log

fail 7555370 2024-02-09 21:38:27 2024-02-10 08:56:47 2024-02-10 09:16:23 0:19:36 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Failed to reconnect to smithi192

fail 7555371 2024-02-09 21:38:28 2024-02-10 08:56:47 2024-02-10 09:16:35 0:19:48 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Failed to reconnect to smithi081

fail 7555372 2024-02-09 21:38:29 2024-02-10 08:56:48 2024-02-10 09:59:04 1:02:16 0:51:46 0:10:30 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

"2024-02-10T09:17:24.434168+0000 mon.a (mon.0) 184 : cluster [WRN] mon.b (rank 2) addr v1:172.21.15.3:6789/0 is down (out of quorum)" in cluster log

fail 7555373 2024-02-09 21:38:30 2024-02-10 08:56:48 2024-02-10 09:30:55 0:34:07 0:22:50 0:11:17 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi203 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=57bd6abdec7bb457ae7999d9c96682e9ac678e27 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7555374 2024-02-09 21:38:31 2024-02-10 09:23:44 1129 smithi main rhel 8.6 rados/cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-02-10T09:15:43.417690+0000 mon.a (mon.0) 159 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7555375 2024-02-09 21:38:32 2024-02-10 08:56:49 2024-02-10 09:54:01 0:57:12 0:44:18 0:12:54 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7555376 2024-02-09 21:38:33 2024-02-10 08:56:49 2024-02-10 09:39:46 0:42:57 0:30:51 0:12:06 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi173 with status 5: 'sudo systemctl stop ceph-c194278e-c7f5-11ee-95b8-87774f69a715@mon.smithi173'

fail 7555377 2024-02-09 21:38:33 2024-02-10 08:56:49 2024-02-10 09:16:32 0:19:43 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Failed to reconnect to smithi089

fail 7555378 2024-02-09 21:38:34 2024-02-10 08:56:50 2024-02-10 10:41:37 1:44:47 1:33:19 0:11:28 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi134 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e11eeb62-c7f4-11ee-95b8-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 7555379 2024-02-09 21:38:35 2024-02-10 08:56:50 2024-02-10 09:53:45 0:56:55 0:45:37 0:11:18 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7555380 2024-02-09 21:38:36 2024-02-10 08:56:50 2024-02-10 09:47:08 0:50:18 0:39:40 0:10:38 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

"2024-02-10T09:28:07.958502+0000 mon.a (mon.0) 538 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7555381 2024-02-09 21:38:37 2024-02-10 08:56:51 2024-02-10 09:42:47 0:45:56 0:38:46 0:07:10 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-10T09:24:10.604386+0000 mon.a (mon.0) 779 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log