Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 7568064 2024-02-19 22:55:01 2024-02-19 22:56:05 2024-02-19 23:23:06 0:27:01 0:17:46 0:09:15 smithi main centos 8.stream orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7568065 2024-02-19 22:55:03 2024-02-19 22:56:06 2024-02-19 23:46:20 0:50:14 0:39:30 0:10:44 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/rbd_cls} 3
fail 7568066 2024-02-19 22:55:04 2024-02-19 22:56:46 2024-02-19 23:27:12 0:30:26 0:19:31 0:10:55 smithi main ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-19T23:19:20.662211+0000 mon.a (mon.0) 567 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7568067 2024-02-19 22:55:05 2024-02-19 22:57:17 2024-02-19 23:28:50 0:31:33 0:19:09 0:12:24 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/mirror 3-final} 2
fail 7568068 2024-02-19 22:55:07 2024-02-19 22:58:48 2024-02-19 23:54:10 0:55:22 0:43:20 0:12:02 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

expected string or bytes-like object

fail 7568069 2024-02-19 22:55:08 2024-02-19 23:00:59 2024-02-19 23:52:44 0:51:45 0:45:27 0:06:18 smithi main rhel 8.6 orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-19T23:50:00.000250+0000 mon.a (mon.0) 3189 : cluster [WRN] [WRN] POOL_FULL: 1 pool(s) full" in cluster log

fail 7568070 2024-02-19 22:55:09 2024-02-19 23:01:39 2024-02-19 23:39:49 0:38:10 0:27:58 0:10:12 smithi main centos 8.stream orch:cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

"2024-02-19T23:38:43.883876+0000 mon.a (mon.0) 558 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7568071 2024-02-19 22:55:10 2024-02-19 23:02:30 2024-02-19 23:38:41 0:36:11 0:25:47 0:10:24 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi153 with status 5: 'sudo systemctl stop ceph-9aa3f282-cf7d-11ee-95bb-87774f69a715@mon.smithi153'

fail 7568072 2024-02-19 22:55:12 2024-02-19 23:03:20 2024-02-19 23:37:18 0:33:58 0:22:56 0:11:02 smithi main centos 8.stream orch:cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi167 with status 5: 'sudo systemctl stop ceph-46763f26-cf7d-11ee-95bb-87774f69a715@mon.smithi167'

pass 7568073 2024-02-19 22:55:13 2024-02-19 23:03:41 2024-02-19 23:26:48 0:23:07 0:11:52 0:11:15 smithi main ubuntu 20.04 orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_20.04} 2-node-mgr orchestrator_cli} 2
pass 7568074 2024-02-19 22:55:14 2024-02-19 23:03:41 2024-02-19 23:46:42 0:43:01 0:32:59 0:10:02 smithi main centos 8.stream orch:cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
pass 7568075 2024-02-19 22:55:16 2024-02-19 23:03:42 2024-02-19 23:32:41 0:28:59 0:19:51 0:09:08 smithi main centos 8.stream orch:cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/classic start} 2
pass 7568076 2024-02-19 22:55:17 2024-02-19 23:03:42 2024-02-19 23:29:09 0:25:27 0:16:19 0:09:08 smithi main ubuntu 18.04 orch:cephadm/smoke-singlehost/{0-distro$/{ubuntu_18.04} 1-start 2-services/basic 3-final} 1
pass 7568077 2024-02-19 22:55:18 2024-02-19 23:03:43 2024-02-19 23:24:44 0:21:01 0:11:41 0:09:20 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_adoption} 1
pass 7568078 2024-02-19 22:55:20 2024-02-19 23:03:43 2024-02-20 00:07:05 1:03:22 0:54:25 0:08:57 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
fail 7568079 2024-02-19 22:55:21 2024-02-19 23:03:44 2024-02-20 00:00:56 0:57:12 0:47:39 0:09:33 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7568080 2024-02-19 22:55:22 2024-02-19 23:05:44 2024-02-19 23:34:34 0:28:50 0:19:00 0:09:50 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=19854089e18d4f65dda2b6cd74e737367c2514bd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7568081 2024-02-19 22:55:24 2024-02-19 23:05:45 2024-02-19 23:53:18 0:47:33 0:37:39 0:09:54 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/default thrashosds-health workloads/test_rbd_api} 3
pass 7568082 2024-02-19 22:55:25 2024-02-19 23:05:45 2024-02-19 23:35:29 0:29:44 0:23:35 0:06:09 smithi main rhel 8.6 orch:cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/connectivity start} 2
pass 7568083 2024-02-19 22:55:26 2024-02-19 23:05:46 2024-02-19 23:33:16 0:27:30 0:21:14 0:06:16 smithi main rhel 8.6 orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7568084 2024-02-19 22:55:28 2024-02-19 23:06:06 2024-02-19 23:45:21 0:39:15 0:25:41 0:13:34 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi155 with status 5: 'sudo systemctl stop ceph-61789520-cf7e-11ee-95bb-87774f69a715@mon.smithi155'

fail 7568085 2024-02-19 22:55:29 2024-02-19 23:08:07 2024-02-19 23:53:42 0:45:35 0:38:48 0:06:47 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

pass 7568086 2024-02-19 22:55:30 2024-02-19 23:08:08 2024-02-19 23:25:17 0:17:09 0:08:02 0:09:07 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm_repos} 1
pass 7568087 2024-02-19 22:55:32 2024-02-19 23:08:08 2024-02-20 00:08:25 1:00:17 0:49:14 0:11:03 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/mapgap thrashosds-health workloads/cache-snaps} 3
fail 7568088 2024-02-19 22:55:33 2024-02-19 23:08:39 2024-02-20 00:00:20 0:51:41 0:41:44 0:09:57 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

expected string or bytes-like object

pass 7568089 2024-02-19 22:55:34 2024-02-19 23:09:09 2024-02-19 23:51:48 0:42:39 0:34:59 0:07:40 smithi main rhel 8.6 orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
fail 7568090 2024-02-19 22:55:36 2024-02-19 23:10:10 2024-02-20 00:10:07 0:59:57 0:44:33 0:15:24 smithi main ubuntu 18.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

pass 7568091 2024-02-19 22:55:37 2024-02-19 23:13:21 2024-02-19 23:44:00 0:30:39 0:22:02 0:08:37 smithi main rhel 8.6 orch:cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/classic start} 2
pass 7568092 2024-02-19 22:55:38 2024-02-19 23:13:21 2024-02-20 00:56:31 1:43:10 1:33:07 0:10:03 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/morepggrow thrashosds-health workloads/radosbench} 3
pass 7568093 2024-02-19 22:55:40 2024-02-19 23:15:02 2024-02-19 23:47:21 0:32:19 0:22:18 0:10:01 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} 1
fail 7568094 2024-02-19 22:55:41 2024-02-19 23:16:12 2024-02-20 00:19:02 1:02:50 0:50:17 0:12:33 smithi main ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

"2024-02-19T23:50:43.305947+0000 mon.a (mon.0) 599 : cluster [WRN] Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7568095 2024-02-19 22:55:42 2024-02-19 23:19:03 2024-02-19 23:54:18 0:35:15 0:26:12 0:09:03 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi189 with status 5: 'sudo systemctl stop ceph-b7a52278-cf7f-11ee-95bb-87774f69a715@mon.smithi189'

dead 7568096 2024-02-19 22:55:43 2024-02-19 23:19:04 2024-02-19 23:20:08 0:01:04 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi174

fail 7568097 2024-02-19 22:55:45 2024-02-19 23:19:04 2024-02-19 23:39:54 0:20:50 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/none thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Failed to reconnect to smithi077

pass 7568098 2024-02-19 22:55:46 2024-02-19 23:20:15 2024-02-19 23:50:08 0:29:53 0:19:58 0:09:55 smithi main rhel 8.6 orch:cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 7568099 2024-02-19 22:55:47 2024-02-19 23:23:15 2024-02-20 00:02:16 0:39:01 0:28:54 0:10:07 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

"2024-02-19T23:52:48.917184+0000 mon.a (mon.0) 1207 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

pass 7568100 2024-02-19 22:55:49 2024-02-19 23:23:16 2024-02-19 23:59:06 0:35:50 0:24:17 0:11:33 smithi main ubuntu 18.04 orch:cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
fail 7568101 2024-02-19 22:55:50 2024-02-19 23:25:26 2024-02-19 23:48:45 0:23:19 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Failed to reconnect to smithi044

pass 7568102 2024-02-19 22:55:51 2024-02-19 23:28:57 2024-02-19 23:56:17 0:27:20 0:17:43 0:09:37 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs 3-final} 2
fail 7568103 2024-02-19 22:55:52 2024-02-19 23:29:18 2024-02-20 00:20:28 0:51:10 0:42:18 0:08:52 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

expected string or bytes-like object

fail 7568104 2024-02-19 22:55:54 2024-02-19 23:29:18 2024-02-19 23:52:48 0:23:30 smithi main ubuntu 18.04 orch:cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Failed to reconnect to smithi086

fail 7568105 2024-02-19 22:55:55 2024-02-19 23:32:49 2024-02-20 00:06:32 0:33:43 0:22:53 0:10:50 smithi main centos 8.stream orch:cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi121 with status 5: 'sudo systemctl stop ceph-539a0364-cf81-11ee-95bb-87774f69a715@mon.smithi121'

fail 7568106 2024-02-19 22:55:56 2024-02-19 23:33:19 2024-02-19 23:58:27 0:25:08 0:16:40 0:08:28 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

"2024-02-19T23:57:16.619375+0000 mon.a (mon.0) 468 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7568107 2024-02-19 22:55:58 2024-02-19 23:33:19 2024-02-20 00:11:22 0:38:03 0:26:42 0:11:21 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi122 with status 5: 'sudo systemctl stop ceph-e75455a0-cf81-11ee-95bb-87774f69a715@mon.smithi122'

pass 7568108 2024-02-19 22:55:59 2024-02-19 23:34:30 2024-02-20 00:21:44 0:47:14 0:37:22 0:09:52 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} 3
pass 7568109 2024-02-19 22:56:00 2024-02-19 23:34:30 2024-02-20 00:03:27 0:28:57 0:22:10 0:06:47 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
pass 7568110 2024-02-19 22:56:01 2024-02-19 23:34:31 2024-02-20 00:12:09 0:37:38 0:26:16 0:11:22 smithi main ubuntu 20.04 orch:cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
pass 7568111 2024-02-19 22:56:03 2024-02-19 23:34:31 2024-02-20 00:25:36 0:51:05 0:40:28 0:10:37 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
fail 7568112 2024-02-19 22:56:04 2024-02-19 23:34:41 2024-02-20 00:05:56 0:31:15 0:22:02 0:09:13 smithi main ubuntu 18.04 orch:cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-02-19T23:59:46.867424+0000 mon.smithi112 (mon.0) 631 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7568113 2024-02-19 22:56:05 2024-02-19 23:34:42 2024-02-19 23:56:00 0:21:18 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Failed to reconnect to smithi099

fail 7568114 2024-02-19 22:56:07 2024-02-19 23:35:32 2024-02-20 00:30:59 0:55:27 0:40:28 0:14:59 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7568115 2024-02-19 22:56:08 2024-02-19 23:44:04 2024-02-20 00:43:33 0:59:29 0:48:31 0:10:58 smithi main centos 8.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-20T00:06:20.322763+0000 mon.a (mon.0) 431 : cluster [WRN] Health check failed: 1 stray daemons(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7568116 2024-02-19 22:56:09 2024-02-19 23:46:25 2024-02-20 00:44:00 0:57:35 0:47:10 0:10:25 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

expected string or bytes-like object

pass 7568117 2024-02-19 22:56:10 2024-02-19 23:46:45 2024-02-20 00:30:50 0:44:05 0:33:06 0:10:59 smithi main ubuntu 20.04 orch:cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
fail 7568118 2024-02-19 22:56:12 2024-02-19 23:46:46 2024-02-20 00:25:13 0:38:27 0:26:45 0:11:42 smithi main centos 8.stream orch:cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

"2024-02-20T00:24:15.791174+0000 mon.a (mon.0) 567 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7568119 2024-02-19 22:56:13 2024-02-19 23:50:06 2024-02-20 00:25:28 0:35:22 0:26:26 0:08:56 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi153 with status 5: 'sudo systemctl stop ceph-1b4beba0-cf84-11ee-95bb-87774f69a715@mon.smithi153'

pass 7568120 2024-02-19 22:56:14 2024-02-19 23:50:07 2024-02-20 00:19:24 0:29:17 0:19:40 0:09:37 smithi main centos 8.stream orch:cephadm/smoke/{0-nvme-loop distro/centos_8.stream_container_tools fixed-2 mon_election/connectivity start} 2
pass 7568121 2024-02-19 22:56:15 2024-02-19 23:50:07 2024-02-20 00:18:57 0:28:50 0:18:13 0:10:37 smithi main ubuntu 20.04 orch:cephadm/smoke-singlehost/{0-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} 1
pass 7568122 2024-02-19 22:56:17 2024-02-19 23:50:07 2024-02-20 00:11:12 0:21:05 0:11:56 0:09:09 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_adoption} 1
fail 7568123 2024-02-19 22:56:18 2024-02-19 23:50:08 2024-02-20 00:09:59 0:19:51 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

Failed to reconnect to smithi104

pass 7568124 2024-02-19 22:56:19 2024-02-19 23:50:08 2024-02-20 00:19:49 0:29:41 0:20:31 0:09:10 smithi main ubuntu 18.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/rgw 3-final} 2
fail 7568125 2024-02-19 22:56:21 2024-02-19 23:50:09 2024-02-20 00:09:33 0:19:24 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Failed to reconnect to smithi052

fail 7568126 2024-02-19 22:56:22 2024-02-19 23:50:09 2024-02-20 00:19:19 0:29:10 0:18:48 0:10:22 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=19854089e18d4f65dda2b6cd74e737367c2514bd TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7568127 2024-02-19 22:56:23 2024-02-19 23:50:09 2024-02-20 00:18:41 0:28:32 0:18:13 0:10:19 smithi main ubuntu 18.04 orch:cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 7568128 2024-02-19 22:56:24 2024-02-19 23:50:20 2024-02-20 00:21:42 0:31:22 0:20:10 0:11:12 smithi main ubuntu 20.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
pass 7568129 2024-02-19 22:56:26 2024-02-19 23:50:20 2024-02-20 00:20:36 0:30:16 0:22:40 0:07:36 smithi main rhel 8.6 orch:cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_3.0 fixed-2 mon_election/classic start} 2
pass 7568130 2024-02-19 22:56:27 2024-02-19 23:50:20 2024-02-20 00:54:44 1:04:24 0:54:29 0:09:55 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
fail 7568131 2024-02-19 22:56:29 2024-02-19 23:50:31 2024-02-20 00:26:18 0:35:47 0:25:50 0:09:57 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi132 with status 5: 'sudo systemctl stop ceph-2a2e6c42-cf84-11ee-95bb-87774f69a715@mon.smithi132'

pass 7568132 2024-02-19 22:56:30 2024-02-19 23:50:31 2024-02-20 00:07:49 0:17:18 0:08:05 0:09:13 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm_repos} 1
pass 7568133 2024-02-19 22:56:31 2024-02-19 23:50:31 2024-02-20 00:18:26 0:27:55 0:17:20 0:10:35 smithi main centos 8.stream orch:cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
fail 7568134 2024-02-19 22:56:32 2024-02-19 23:50:32 2024-02-20 00:43:28 0:52:56 0:41:23 0:11:33 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

expected string or bytes-like object

fail 7568135 2024-02-19 22:56:34 2024-02-19 23:51:53 2024-02-20 00:42:36 0:50:43 0:39:05 0:11:38 smithi main centos 8.stream orch:cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-20T00:23:01.584444+0000 mon.a (mon.0) 906 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7568136 2024-02-19 22:56:35 2024-02-19 23:53:23 2024-02-20 01:10:37 1:17:14 1:07:21 0:09:53 smithi main centos 8.stream orch:cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi110 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24abccd8-cf84-11ee-95bb-87774f69a715 -e sha1=19854089e18d4f65dda2b6cd74e737367c2514bd -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 7568137 2024-02-19 22:56:36 2024-02-19 23:53:24 2024-02-20 00:33:37 0:40:13 0:29:00 0:11:13 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
pass 7568138 2024-02-19 22:56:38 2024-02-19 23:56:04 2024-02-20 00:21:30 0:25:26 0:18:45 0:06:41 smithi main rhel 8.6 orch:cephadm/smoke/{0-nvme-loop distro/rhel_8.6_container_tools_rhel8 fixed-2 mon_election/connectivity start} 2
pass 7568139 2024-02-19 22:56:39 2024-02-19 23:56:05 2024-02-20 00:21:14 0:25:09 0:16:51 0:08:18 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} 1
pass 7568140 2024-02-19 22:56:40 2024-02-19 23:56:25 2024-02-20 00:36:34 0:40:09 0:20:34 0:19:35 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
fail 7568141 2024-02-19 22:56:41 2024-02-19 23:59:16 2024-02-20 00:23:39 0:24:23 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Failed to reconnect to smithi107

fail 7568142 2024-02-19 22:56:43 2024-02-20 00:05:55 2024-02-20 00:36:58 0:31:03 0:19:45 0:11:18 smithi main ubuntu 20.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

"2024-02-20T00:29:07.354815+0000 mon.a (mon.0) 567 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7568143 2024-02-19 22:56:44 2024-02-20 00:05:56 2024-02-20 00:43:07 0:37:11 0:27:58 0:09:13 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi189 with status 5: 'sudo systemctl stop ceph-911501da-cf86-11ee-95bb-87774f69a715@mon.smithi189'

pass 7568144 2024-02-19 22:56:45 2024-02-20 00:05:56 2024-02-20 00:37:57 0:32:01 0:20:59 0:11:02 smithi main ubuntu 20.04 orch:cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
fail 7568145 2024-02-19 22:56:46 2024-02-20 00:05:56 2024-02-20 00:44:45 0:38:49 0:28:54 0:09:55 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

"2024-02-20T00:35:37.343550+0000 mon.a (mon.0) 1198 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7568146 2024-02-19 22:56:48 2024-02-20 00:05:57 2024-02-20 02:08:03 2:02:06 1:42:53 0:19:13 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} 3
Failure Reason:

"2024-02-20T01:20:00.000439+0000 mon.a (mon.0) 2030 : cluster [WRN] pg 7.7 is active+recovering+undersized+degraded+remapped, acting [6,3]" in cluster log

pass 7568147 2024-02-19 22:56:49 2024-02-20 00:06:07 2024-02-20 00:31:08 0:25:01 0:17:49 0:07:12 smithi main rhel 8.6 orch:cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/mirror 3-final} 2
pass 7568148 2024-02-19 22:56:50 2024-02-20 00:06:08 2024-02-20 00:39:26 0:33:18 0:24:05 0:09:13 smithi main ubuntu 18.04 orch:cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
fail 7568149 2024-02-19 22:56:52 2024-02-20 00:06:08 2024-02-20 00:57:17 0:51:09 0:41:52 0:09:17 smithi main centos 8.stream orch:cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

expected string or bytes-like object

pass 7568150 2024-02-19 22:56:53 2024-02-20 00:06:08 2024-02-20 00:48:05 0:41:57 0:34:43 0:07:14 smithi main rhel 8.6 orch:cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
pass 7568151 2024-02-19 22:56:55 2024-02-20 00:06:09 2024-02-20 01:06:02 0:59:53 0:39:19 0:20:34 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/rbd_cls} 3
fail 7568152 2024-02-19 22:56:56 2024-02-20 00:06:09 2024-02-20 00:31:49 0:25:40 0:16:21 0:09:19 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
Failure Reason:

"2024-02-20T00:30:19.622786+0000 mon.a (mon.0) 459 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7568153 2024-02-19 22:56:57 2024-02-20 00:06:09 2024-02-20 01:11:06 1:04:57 0:44:40 0:20:17 smithi main ubuntu 18.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7568154 2024-02-19 22:56:58 2024-02-20 00:06:10 2024-02-20 00:42:20 0:36:10 0:25:49 0:10:21 smithi main centos 8.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi195 with status 5: 'sudo systemctl stop ceph-6575da04-cf86-11ee-95bb-87774f69a715@mon.smithi195'

pass 7568155 2024-02-19 22:57:00 2024-02-20 00:07:10 2024-02-20 00:44:01 0:36:51 0:26:05 0:10:46 smithi main ubuntu 20.04 orch:cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/connectivity start} 2
pass 7568156 2024-02-19 22:57:01 2024-02-20 00:07:51 2024-02-20 01:14:20 1:06:29 0:56:12 0:10:17 smithi main ubuntu 18.04 orch:cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
pass 7568157 2024-02-19 22:57:02 2024-02-20 00:08:31 2024-02-20 00:39:19 0:30:48 0:17:32 0:13:16 smithi main centos 8.stream orch:cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7568158 2024-02-19 22:57:04 2024-02-20 00:12:12 2024-02-20 01:14:23 1:02:11 0:35:01 0:27:10 smithi main centos 8.stream orch:cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5