Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7557099 2024-02-12 21:58:29 2024-02-12 23:46:39 2024-02-13 00:35:54 0:49:15 0:39:53 0:09:22 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-13T00:10:05.151550+0000 mon.a (mon.0) 162 : cluster [WRN] mon.c (rank 2) addr [v2:172.21.15.119:3301/0,v1:172.21.15.119:6790/0] is down (out of quorum)" in cluster log

fail 7557100 2024-02-12 21:58:29 2024-02-12 23:46:39 2024-02-13 01:22:23 1:35:44 1:25:42 0:10:02 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/mimic backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/radosbench} 3
Failure Reason:

"2024-02-13T00:50:00.000185+0000 mon.a (mon.0) 2299 : cluster [WRN] [WRN] PG_DEGRADED: Degraded data redundancy: 6203/94215 objects degraded (6.584%), 6 pgs degraded, 1 pg undersized" in cluster log

dead 7557101 2024-02-12 21:58:30 2024-02-12 23:46:40 2024-02-13 11:59:19 12:12:39 smithi main ubuntu 20.04 rados/objectstore/{backends/objectstore supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

fail 7557102 2024-02-12 21:58:31 2024-02-12 23:46:40 2024-02-13 00:32:05 0:45:25 0:34:53 0:10:32 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-13T00:15:53.524447+0000 mon.smithi125 (mon.0) 504 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

fail 7557103 2024-02-12 21:58:32 2024-02-12 23:46:50 2024-02-13 00:28:47 0:41:57 0:30:56 0:11:01 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

"2024-02-13T00:17:02.644069+0000 mon.a (mon.0) 54 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.104:3300/0,v1:172.21.15.104:6789/0] is down (out of quorum)" in cluster log

fail 7557104 2024-02-12 21:58:33 2024-02-12 23:46:51 2024-02-13 00:19:46 0:32:55 0:21:17 0:11:38 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"2024-02-13T00:15:44.668607+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running" in cluster log

fail 7557105 2024-02-12 21:58:33 2024-02-12 23:47:31 2024-02-13 00:30:37 0:43:06 0:33:56 0:09:10 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

"2024-02-13T00:17:21.586402+0000 mon.a (mon.0) 502 : cluster [WRN] Replacing daemon mds.a.smithi017.fvzfva as rank 0 with standby daemon mds.user_test_fs.smithi017.hoijxq" in cluster log

fail 7557106 2024-02-12 21:58:34 2024-02-12 23:47:52 2024-02-13 00:22:21 0:34:29 0:24:05 0:10:24 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-02-13T00:08:10.908995+0000 mon.a (mon.0) 161 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7557107 2024-02-12 21:58:35 2024-02-12 23:48:52 2024-02-13 00:46:26 0:57:34 0:47:40 0:09:54 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} 2
Failure Reason:

expected string or bytes-like object
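
Note: this reason recurs across the thrash jobs below. It is the message of a Python TypeError raised when a regex call receives None instead of a string, e.g. when a prior parse step produced nothing. A minimal reproduction, assuming nothing about where in the task it occurs:

    import re

    # Any re function given None instead of str/bytes raises this TypeError.
    # (On Python >= 3.11 the message gains a ", got 'NoneType'" suffix.)
    try:
        re.search(r"\d+", None)
    except TypeError as exc:
        print(exc)  # expected string or bytes-like object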

fail 7557108 2024-02-12 21:58:36 2024-02-12 23:49:23 2024-02-13 00:38:29 0:49:06 0:41:45 0:07:21 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2
Failure Reason:

"2024-02-13T00:21:05.828678+0000 mon.a (mon.0) 161 : cluster [WRN] mon.c (rank 2) addr [v2:172.21.15.121:3301/0,v1:172.21.15.121:6790/0] is down (out of quorum)" in cluster log

fail 7557109 2024-02-12 21:58:37 2024-02-12 23:49:23 2024-02-13 00:18:54 0:29:31 0:20:18 0:09:13 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
Failure Reason:

"2024-02-13T00:17:40.991218+0000 mon.a (mon.0) 465 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7557110 2024-02-12 21:58:38 2024-02-12 23:49:23 2024-02-13 00:37:25 0:48:02 0:36:34 0:11:28 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-13T00:20:00.000114+0000 mon.smithi123 (mon.0) 478 : cluster [ERR] Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds" in cluster log

pass 7557111 2024-02-12 21:58:38 2024-02-12 23:50:14 2024-02-13 00:55:45 1:05:31 0:53:57 0:11:34 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
pass 7557112 2024-02-12 21:58:39 2024-02-12 23:52:05 2024-02-13 00:19:24 0:27:19 0:18:10 0:09:09 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7557113 2024-02-12 21:58:40 2024-02-12 23:52:15 2024-02-13 00:38:19 0:46:04 0:36:36 0:09:28 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-13T00:33:57.895983+0000 mon.a (mon.0) 975 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 7557114 2024-02-12 21:58:41 2024-02-12 23:53:06 2024-02-13 00:25:44 0:32:38 0:22:00 0:10:38 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

Command failed on smithi028 with status 32: "sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'mount -t nfs 10.0.31.28:/foouser /mnt/foo'"

pass 7557115 2024-02-12 21:58:42 2024-02-12 23:53:46 2024-02-13 00:32:46 0:39:00 0:28:21 0:10:39 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
fail 7557116 2024-02-12 21:58:43 2024-02-12 23:53:46 2024-02-13 00:42:57 0:49:11 0:40:30 0:08:41 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-13T00:30:00.000206+0000 mon.smithi026 (mon.0) 304 : cluster [WRN] Health detail: HEALTH_WARN Degraded data redundancy: 1146/210 objects degraded (545.714%), 10 pgs degraded, 4 pgs undersized" in cluster log

fail 7557117 2024-02-12 21:58:43 2024-02-12 23:53:47 2024-02-13 00:37:17 0:43:30 0:32:42 0:10:48 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-13T00:14:20.713070+0000 mon.smithi077 (mon.0) 330 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7557118 2024-02-12 21:58:44 2024-02-12 23:54:07 2024-02-13 00:18:16 0:24:09 0:13:27 0:10:42 smithi main centos 8.stream rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
Failure Reason:

'package_manager_version'
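
Note: a bare quoted name like this is how Python renders an uncaught KeyError — str(exc) is just the repr of the missing key. A tiny illustration with a hypothetical dict:

    job_info = {"os_type": "centos"}  # hypothetical task metadata

    try:
        version = job_info["package_manager_version"]
    except KeyError as exc:
        print(exc)  # 'package_manager_version'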

fail 7557119 2024-02-12 21:58:45 2024-02-12 23:54:58 2024-02-13 00:47:32 0:52:34 0:42:19 0:10:15 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} 2
Failure Reason:

expected string or bytes-like object

fail 7557120 2024-02-12 21:58:46 2024-02-12 23:55:38 2024-02-13 00:28:51 0:33:13 0:21:59 0:11:14 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b2445a2c-ca04-11ee-95b9-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 7557121 2024-02-12 21:58:47 2024-02-12 23:57:09 2024-02-13 00:48:20 0:51:11 0:44:18 0:06:53 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-13T00:30:00.000149+0000 mon.a (mon.0) 1053 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications." in cluster log

pass 7557122 2024-02-12 21:58:47 2024-02-12 23:57:09 2024-02-13 00:26:27 0:29:18 0:18:34 0:10:44 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
pass 7557123 2024-02-12 21:58:48 2024-02-12 23:57:10 2024-02-13 00:29:17 0:32:07 0:22:25 0:09:42 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 7557124 2024-02-12 21:58:49 2024-02-12 23:57:30 2024-02-13 00:41:14 0:43:44 0:32:17 0:11:27 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-13T00:24:20.345790+0000 mon.smithi047 (mon.0) 502 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

fail 7557125 2024-02-12 21:58:50 2024-02-12 23:58:00 2024-02-13 00:55:27 0:57:27 0:46:27 0:11:00 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
Failure Reason:

expected string or bytes-like object

fail 7557126 2024-02-12 21:58:51 2024-02-12 23:58:21 2024-02-13 00:19:49 0:21:28 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/luminous-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/radosbench} 3
Failure Reason:

Failed to reconnect to smithi175

fail 7557127 2024-02-12 21:58:52 2024-02-12 23:59:21 2024-02-13 00:27:39 0:28:18 0:19:21 0:08:57 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-02-13T00:24:27.899189+0000 mon.smithi103 (mon.0) 599 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7557128 2024-02-12 21:58:52 2024-02-13 00:00:02 2024-02-13 00:44:33 0:44:31 0:33:15 0:11:16 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-13T00:31:04.674945+0000 mon.smithi049 (mon.0) 88 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 7 pgs peering (PG_AVAILABILITY)" in cluster log

fail 7557129 2024-02-12 21:58:53 2024-02-13 00:00:12 2024-02-13 00:50:21 0:50:09 0:36:55 0:13:14 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

Command failed on smithi023 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.9 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 58c70084-ca05-11ee-95b9-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.x | awk \'"\'"\'{print $2}\'"\'"\')\''

fail 7557130 2024-02-12 21:58:54 2024-02-13 00:02:08 2024-02-13 00:41:18 0:39:10 0:28:07 0:11:03 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

"2024-02-13T00:27:09.457373+0000 mon.a (mon.0) 508 : cluster [WRN] Replacing daemon mds.a.smithi073.xryipo as rank 0 with standby daemon mds.user_test_fs.smithi073.jrpgpk" in cluster log

fail 7557131 2024-02-12 21:58:55 2024-02-13 00:02:09 2024-02-13 00:37:22 0:35:13 0:24:05 0:11:08 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-02-13T00:23:16.063900+0000 mon.a (mon.0) 163 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7557132 2024-02-12 21:58:56 2024-02-13 00:03:19 2024-02-13 00:44:34 0:41:15 0:30:49 0:10:26 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-13T00:30:09.964725+0000 mds.foofs.smithi003.hpgbbg (mds.0) 2 : cluster [WRN] client session with duplicated session uuid 'ganesha-nfs.foo.0-0001' denied (client.14712 172.21.15.3:0/1402965498)" in cluster log

fail 7557133 2024-02-12 21:58:56 2024-02-13 00:03:20 2024-02-13 00:51:11 0:47:51 0:36:51 0:11:00 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
Failure Reason:

expected string or bytes-like object

fail 7557134 2024-02-12 21:58:57 2024-02-13 00:03:20 2024-02-13 00:57:33 0:54:13 0:43:26 0:10:47 smithi main ubuntu 20.04 rados/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-13T00:37:54.257551+0000 mon.a (mon.0) 861 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7557135 2024-02-12 21:58:58 2024-02-13 00:03:30 2024-02-13 00:30:45 0:27:15 0:17:09 0:10:06 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

"2024-02-13T00:28:36.438158+0000 mon.a (mon.0) 461 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7557136 2024-02-12 21:58:59 2024-02-13 00:04:01 2024-02-13 00:47:06 0:43:05 0:33:05 0:10:00 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-13T00:31:10.738949+0000 mon.smithi022 (mon.0) 508 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

pass 7557137 2024-02-12 21:59:00 2024-02-13 00:04:11 2024-02-13 00:40:26 0:36:15 0:25:03 0:11:12 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
fail 7557138 2024-02-12 21:59:01 2024-02-13 00:04:21 2024-02-13 00:49:52 0:45:31 0:34:25 0:11:06 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-13T00:44:04.818035+0000 mon.a (mon.0) 986 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 7557139 2024-02-12 21:59:02 2024-02-13 00:04:42 2024-02-13 00:25:22 0:20:40 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/mimic-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Failed to reconnect to smithi118

fail 7557140 2024-02-12 21:59:02 2024-02-13 00:04:52 2024-02-13 00:23:42 0:18:50 0:07:34 0:11:16 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1
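
Note: the two dashboard jobs listing no OS died before provisioning — no ready build could be fetched from the shaman search API for this distro/sha1 combination. A hedged sketch of such a lookup using requests; the empty-result handling is illustrative, not teuthology's code:

    import requests

    URL = ("https://shaman.ceph.com/api/search/?status=ready&project=ceph"
           "&flavor=default&distros=ubuntu%2F22.04%2Fx86_64"
           "&sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1")

    resp = requests.get(URL, timeout=30)
    resp.raise_for_status()
    if not resp.json():  # empty result set: no build for this distro/sha1
        raise RuntimeError("Failed to fetch package version from " + URL)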

fail 7557141 2024-02-12 21:59:03 2024-02-13 00:05:03 2024-02-13 00:47:59 0:42:56 0:33:08 0:09:48 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-13T00:31:00.076579+0000 mon.smithi055 (mon.0) 503 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

fail 7557142 2024-02-12 21:59:04 2024-02-13 00:05:03 2024-02-13 00:57:38 0:52:35 0:42:06 0:10:29 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

expected string or bytes-like object

fail 7557143 2024-02-12 21:59:05 2024-02-13 00:06:14 2024-02-13 00:49:05 0:42:51 0:31:59 0:10:52 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-13T00:38:29.156711+0000 mon.a (mon.0) 224 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7557144 2024-02-12 21:59:05 2024-02-13 00:06:54 2024-02-13 00:34:51 0:27:57 0:18:50 0:09:07 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Command failed on smithi032 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:0e714d9a4bd2a821113e6318adb87bd06cf81ec1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d15b559a-ca05-11ee-95b9-87774f69a715 -- bash -c \'set -e\nset -x\nceph orch ps\nceph orch device ls\nDEVID=$(ceph device ls | grep osd.1 | awk \'"\'"\'{print $1}\'"\'"\')\nHOST=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $1}\'"\'"\')\nDEV=$(ceph orch device ls | grep $DEVID | awk \'"\'"\'{print $2}\'"\'"\')\necho "host $HOST, dev $DEV, devid $DEVID"\nceph orch osd rm 1\nwhile ceph orch osd rm status | grep ^1 ; do sleep 5 ; done\nceph orch device zap $HOST $DEV --force\nceph orch daemon add osd $HOST:$DEV\nwhile ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done\n\''

pass 7557145 2024-02-12 21:59:06 2024-02-13 00:07:45 2024-02-13 00:36:59 0:29:14 0:20:43 0:08:31 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
fail 7557146 2024-02-12 21:59:07 2024-02-13 00:07:55 2024-02-13 00:51:01 0:43:06 0:33:01 0:10:05 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-13T00:35:55.059168+0000 mon.smithi007 (mon.0) 7 : cluster [WRN] Health detail: HEALTH_WARN 1 filesystem with deprecated feature inline_data" in cluster log

fail 7557147 2024-02-12 21:59:08 2024-02-13 00:08:06 2024-02-13 00:50:30 0:42:24 0:32:50 0:09:34 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-13T00:24:49.537298+0000 mon.smithi012 (mon.0) 67 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7557148 2024-02-12 21:59:09 2024-02-13 00:09:16 2024-02-13 01:00:56 0:51:40 0:40:22 0:11:18 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
Failure Reason:

expected string or bytes-like object

fail 7557149 2024-02-12 21:59:09 2024-02-13 00:09:37 2024-02-13 01:02:52 0:53:15 0:46:31 0:06:44 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-13T00:50:00.000137+0000 mon.a (mon.0) 2284 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications." in cluster log

fail 7557150 2024-02-12 21:59:10 2024-02-13 00:09:47 2024-02-13 00:43:32 0:33:45 0:20:54 0:12:51 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-02-13T00:38:50.577704+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running" in cluster log

fail 7557151 2024-02-12 21:59:11 2024-02-13 00:11:18 2024-02-13 00:41:11 0:29:53 0:20:00 0:09:53 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi005 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/ljflores/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 84e714279aa684636ac645daf0a1a85c2094cfb3'

fail 7557152 2024-02-12 21:59:12 2024-02-13 00:12:08 2024-02-13 01:13:30 1:01:22 0:48:46 0:12:36 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

"2024-02-13T00:34:27.011500+0000 mon.a (mon.0) 433 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7557153 2024-02-12 21:59:13 2024-02-13 00:14:29 2024-02-13 02:23:39 2:09:10 1:43:50 0:25:20 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/radosbench} 3
Failure Reason:

"2024-02-13T01:30:00.000184+0000 mon.a (mon.0) 2172 : cluster [WRN] [WRN] PG_DEGRADED: Degraded data redundancy: 36484/263163 objects degraded (13.864%), 8 pgs degraded, 12 pgs undersized" in cluster log

fail 7557154 2024-02-12 21:59:14 2024-02-13 00:19:31 2024-02-13 00:59:56 0:40:25 0:29:16 0:11:09 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_nfs} 1
Failure Reason:

"2024-02-13T00:46:22.646289+0000 mon.a (mon.0) 507 : cluster [WRN] Replacing daemon mds.a.smithi178.qwhnhf as rank 0 with standby daemon mds.user_test_fs.smithi178.tpckey" in cluster log

fail 7557155 2024-02-12 21:59:14 2024-02-13 00:20:21 2024-02-13 01:04:31 0:44:10 0:24:15 0:19:55 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-02-13T00:50:17.818712+0000 mon.a (mon.0) 162 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7557156 2024-02-12 21:59:15 2024-02-13 00:20:32 2024-02-13 01:17:35 0:57:03 0:47:55 0:09:08 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
Failure Reason:

expected string or bytes-like object

fail 7557157 2024-02-12 21:59:16 2024-02-13 00:20:32 2024-02-13 00:49:05 0:28:33 0:19:02 0:09:31 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli} 1
Failure Reason:

"2024-02-13T00:46:40.115864+0000 mon.a (mon.0) 461 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

fail 7557158 2024-02-12 21:59:17 2024-02-13 00:20:33 2024-02-13 00:38:13 0:17:40 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

Failed to reconnect to smithi175

fail 7557159 2024-02-12 21:59:17 2024-02-13 00:20:33 2024-02-13 00:55:34 0:35:01 0:25:47 0:09:14 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-13T00:44:03.433012+0000 mon.smithi112 (mon.0) 289 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7557160 2024-02-12 21:59:18 2024-02-13 00:20:33 2024-02-13 00:47:58 0:27:25 0:17:35 0:09:50 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 7557161 2024-02-12 21:59:19 2024-02-13 00:20:34 2024-02-13 01:07:38 0:47:04 0:33:21 0:13:43 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-13T01:03:03.246123+0000 mon.a (mon.0) 978 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

pass 7557162 2024-02-12 21:59:20 2024-02-13 00:24:05 2024-02-13 01:31:12 1:07:07 0:55:40 0:11:27 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
fail 7557163 2024-02-12 21:59:20 2024-02-13 00:25:45 2024-02-13 01:02:03 0:36:18 0:25:54 0:10:24 smithi main centos 8.stream rados/cephadm/dashboard/{0-distro/centos_8.stream_container_tools task/test_e2e} 2
Failure Reason:

"2024-02-13T01:00:56.116580+0000 mon.a (mon.0) 544 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7557164 2024-02-12 21:59:21 2024-02-13 00:26:36 2024-02-13 01:07:11 0:40:35 0:26:08 0:14:27 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi090 with status 5: 'sudo systemctl stop ceph-9c578e78-ca09-11ee-95b9-87774f69a715@mon.smithi090'

fail 7557165 2024-02-12 21:59:22 2024-02-13 00:29:27 2024-02-13 01:05:48 0:36:21 0:23:15 0:13:06 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi062 with status 5: 'sudo systemctl stop ceph-73f8706e-ca09-11ee-95b9-87774f69a715@mon.smithi062'

fail 7557166 2024-02-12 21:59:23 2024-02-13 00:32:47 2024-02-13 00:58:24 0:25:37 0:11:57 0:13:40 smithi main centos 8.stream rados/cephadm/rbd_iscsi/{base/install cluster/{fixed-3 openstack} pool/datapool supported-random-distro$/{centos_8} workloads/ceph_iscsi} 3
Failure Reason:

'package_manager_version'

fail 7557167 2024-02-12 21:59:24 2024-02-13 02:56:02 2024-02-13 03:43:14 0:47:12 0:38:09 0:09:03 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
Failure Reason:

expected string or bytes-like object

fail 7557168 2024-02-12 21:59:24 2024-02-13 02:56:02 2024-02-13 03:38:47 0:42:45 0:32:09 0:10:36 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04-15.2.9 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-13T03:14:04.640528+0000 mon.a (mon.0) 181 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.83:3300/0,v1:172.21.15.83:6789/0] is down (out of quorum)" in cluster log

fail 7557169 2024-02-12 21:59:25 2024-02-13 02:56:43 2024-02-13 03:46:19 0:49:36 0:40:15 0:09:21 smithi main ubuntu 18.04 rados/cephadm/with-work/{0-distro/ubuntu_18.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-13T03:27:06.994492+0000 mon.a (mon.0) 808 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7557170 2024-02-12 21:59:26 2024-02-13 02:56:43 2024-02-13 03:26:19 0:29:36 0:23:14 0:06:22 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-02-13T03:23:13.670435+0000 mon.smithi047 (mon.0) 643 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

pass 7557171 2024-02-12 21:59:27 2024-02-13 02:57:13 2024-02-13 03:28:49 0:31:36 0:24:55 0:06:41 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 7557172 2024-02-12 21:59:28 2024-02-13 02:57:34 2024-02-13 03:24:26 0:26:52 0:17:37 0:09:15 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_cephadm} 1
fail 7557173 2024-02-12 21:59:28 2024-02-13 02:57:34 2024-02-13 03:43:14 0:45:40 0:32:18 0:13:22 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

"2024-02-13T03:26:35.329370+0000 mon.smithi003 (mon.0) 505 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)" in cluster log

fail 7557174 2024-02-12 21:59:29 2024-02-13 03:00:15 2024-02-13 03:51:25 0:51:10 0:42:16 0:08:54 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

expected string or bytes-like object

pass 7557175 2024-02-12 21:59:30 2024-02-13 03:00:15 2024-02-13 04:01:26 1:01:11 0:48:38 0:12:33 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size 1-install/luminous backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{ubuntu_18.04} mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/cache-snaps} 3
fail 7557176 2024-02-12 21:59:31 2024-02-13 03:03:16 2024-02-13 03:39:09 0:35:53 0:26:26 0:09:27 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi195 with status 5: 'sudo systemctl stop ceph-0c2b5b52-ca1f-11ee-95b9-87774f69a715@mon.smithi195'

fail 7557177 2024-02-12 21:59:32 2024-02-13 03:04:07 2024-02-13 03:53:13 0:49:06 0:36:42 0:12:24 smithi main ubuntu 20.04 rados/cephadm/upgrade/{1-start-distro/1-start-ubuntu_20.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/connectivity} 2
Failure Reason:

Command failed on smithi032 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c2adf8f4-ca1e-11ee-95b9-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.x | awk \'"\'"\'{print $2}\'"\'"\')\''

fail 7557178 2024-02-12 21:59:33 2024-02-13 03:04:17 2024-02-13 03:41:27 0:37:10 0:27:52 0:09:18 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_nfs} 1
Failure Reason:

"2024-02-13T03:28:12.780374+0000 mon.a (mon.0) 508 : cluster [WRN] Replacing daemon mds.a.smithi086.rfgrfe as rank 0 with standby daemon mds.user_test_fs.smithi086.mvzsxq" in cluster log

fail 7557179 2024-02-12 21:59:33 2024-02-13 03:04:17 2024-02-13 03:39:11 0:34:54 0:24:17 0:10:37 smithi main ubuntu 18.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_18.04 fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-02-13T03:25:38.889917+0000 mon.a (mon.0) 162 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b" in cluster log

fail 7557180 2024-02-12 21:59:34 2024-02-13 03:05:29 2024-02-13 03:41:16 0:35:47 0:25:08 0:10:39 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

"2024-02-13T03:30:00.000200+0000 mon.smithi050 (mon.0) 396 : cluster [WRN] overall HEALTH_WARN 1 failed cephadm daemon(s)" in cluster log

fail 7557181 2024-02-12 21:59:35 2024-02-13 03:06:09 2024-02-13 03:55:32 0:49:23 0:38:58 0:10:25 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} 2
Failure Reason:

expected string or bytes-like object

fail 7557182 2024-02-12 21:59:36 2024-02-13 03:06:10 2024-02-13 03:52:56 0:46:46 0:36:34 0:10:12 smithi main centos 8.stream rados/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-13T03:40:00.000341+0000 mon.a (mon.0) 2253 : cluster [WRN] use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications." in cluster log

fail 7557183 2024-02-12 21:59:37 2024-02-13 03:07:30 2024-02-13 03:34:44 0:27:14 0:17:00 0:10:14 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

"2024-02-13T03:31:54.575795+0000 mon.a (mon.0) 467 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

pass 7557184 2024-02-12 21:59:37 2024-02-13 03:07:31 2024-02-13 03:32:35 0:25:04 0:18:11 0:06:53 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
fail 7557185 2024-02-12 21:59:38 2024-02-13 03:07:31 2024-02-13 03:42:48 0:35:17 0:26:02 0:09:15 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi112 with status 5: 'sudo systemctl stop ceph-84391b2a-ca1f-11ee-95b9-87774f69a715@mon.smithi112'

pass 7557186 2024-02-12 21:59:39 2024-02-13 03:07:41 2024-02-13 03:44:53 0:37:12 0:25:02 0:12:10 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-nvme-loop distro/ubuntu_20.04 fixed-2 mon_election/classic start} 2
fail 7557187 2024-02-12 21:59:40 2024-02-13 03:08:22 2024-02-13 03:53:50 0:45:28 0:34:19 0:11:09 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-13T03:40:12.562589+0000 mon.a (mon.0) 237 : cluster [WRN] mon.b (rank 4) addr [v2:172.21.15.140:3300/0,v1:172.21.15.140:6789/0] is down (out of quorum)" in cluster log

fail 7557188 2024-02-12 21:59:41 2024-02-13 03:10:33 2024-02-13 04:00:19 0:49:46 0:40:00 0:09:46 smithi main rhel 8.6 rados/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds
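
Note: this message is the shape of a bounded polling loop giving up — a condition is rechecked once per interval and the task raises after the cap. A generic sketch of the pattern; names and arithmetic are illustrative:

    import time

    def wait_for(condition, tries=301, interval=1.0):
        """Poll `condition` up to `tries` times, `interval` seconds apart."""
        for attempt in range(tries):
            if condition():
                return
            if attempt + 1 < tries:
                time.sleep(interval)
        waited = int((tries - 1) * interval)
        raise RuntimeError(
            f"reached maximum tries ({tries}) after waiting for {waited} seconds")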

fail 7557189 2024-02-12 21:59:41 2024-02-13 03:13:13 2024-02-13 03:30:54 0:17:41 0:07:11 0:10:30 smithi main rados/cephadm/dashboard/{0-distro/ignorelist_health task/test_e2e} 2
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1

fail 7557190 2024-02-12 21:59:42 2024-02-13 03:13:14 2024-02-13 03:50:21 0:37:07 0:26:19 0:10:48 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi188 with status 5: 'sudo systemctl stop ceph-93a1e532-ca20-11ee-95b9-87774f69a715@mon.smithi188'

fail 7557191 2024-02-12 21:59:43 2024-02-13 03:15:14 2024-02-13 04:13:12 0:57:58 0:47:30 0:10:28 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

expected string or bytes-like object

fail 7557192 2024-02-12 21:59:44 2024-02-13 03:15:55 2024-02-13 04:16:44 1:00:49 0:48:22 0:12:27 smithi main centos 8.stream rados/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls mon_election/classic} 2
Failure Reason:

"2024-02-13T03:34:20.723509+0000 mon.a (mon.0) 68 : cluster [WRN] Health check failed: 2 stray daemons(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7557193 2024-02-12 21:59:45 2024-02-13 03:17:36 2024-02-13 03:48:51 0:31:15 0:19:05 0:12:10 smithi main ubuntu 18.04 rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-02-13T03:44:40.810888+0000 osd.1 (osd.1) 3 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running" in cluster log

fail 7557194 2024-02-12 21:59:45 2024-02-13 03:19:06 2024-02-13 04:13:46 0:54:40 0:44:07 0:10:33 smithi main ubuntu 18.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7557195 2024-02-12 21:59:46 2024-02-13 03:20:27 2024-02-13 04:26:26 1:05:59 0:54:33 0:11:26 smithi main ubuntu 18.04 rados/cephadm/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat distro$/{ubuntu_18.04} mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

"2024-02-13T03:50:00.000135+0000 mon.a (mon.0) 477 : cluster [WRN] Health detail: HEALTH_WARN Reduced data availability: 1 pg inactive, 1 pg peering" in cluster log

fail 7557196 2024-02-12 21:59:47 2024-02-13 03:41:21 2024-02-13 04:07:57 0:26:36 0:18:15 0:08:21 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=84e714279aa684636ac645daf0a1a85c2094cfb3 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7557197 2024-02-12 21:59:48 2024-02-13 03:41:21 2024-02-13 04:37:20 0:55:59 0:46:14 0:09:45 smithi main ubuntu 20.04 rados/cephadm/smoke-roleless/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7557198 2024-02-12 21:59:49 2024-02-13 03:41:21 2024-02-13 04:22:46 0:41:25 0:26:48 0:14:37 smithi main centos 8.stream rados/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
Failure Reason:

Command failed on smithi097 with status 5: 'sudo systemctl stop ceph-dc726c7e-ca24-11ee-95b9-87774f69a715@mon.smithi097'

fail 7557199 2024-02-12 21:59:49 2024-02-13 03:45:02 2024-02-13 05:47:46 2:02:44 1:49:35 0:13:09 smithi main centos 8.stream rados/cephadm/mgr-nfs-upgrade/{0-distro/centos_8.stream_container_tools 1-bootstrap/octopus 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi023 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v15 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d4e2d6ba-ca24-11ee-95b9-87774f69a715 -e sha1=0e714d9a4bd2a821113e6318adb87bd06cf81ec1 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

fail 7557200 2024-02-12 21:59:50 2024-02-13 03:47:23 2024-02-13 04:42:25 0:55:02 0:45:24 0:09:38 smithi main centos 8.stream rados/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
Failure Reason:

reached maximum tries (301) after waiting for 300 seconds

fail 7557201 2024-02-12 21:59:51 2024-02-13 03:47:23 2024-02-13 04:40:09 0:52:46 0:39:26 0:13:20 smithi main centos 8.stream rados/cephadm/thrash/{0-distro/centos_8.stream_container_tools 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
Failure Reason:

expected string or bytes-like object

fail 7557202 2024-02-12 21:59:52 2024-02-13 03:50:04 2024-02-13 04:37:31 0:47:27 0:41:18 0:06:09 smithi main rhel 8.6 rados/cephadm/with-work/{0-distro/rhel_8.6_container_tools_rhel8 fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

"2024-02-13T04:19:30.417300+0000 mon.a (mon.0) 871 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log