User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
lflores | 2024-04-01 18:07:25 | 2024-04-01 18:17:00 | 2024-04-02 07:04:21 | 12:47:21 | rados | wip-yuri8-testing-2024-03-25-1419 | smithi | e142085 | 6 | 31 | 1 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7634074 | 2024-04-01 18:08:47 | 2024-04-01 18:17:00 | 2024-04-01 18:44:22 | 0:27:22 | 0:14:29 | 0:12:53 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
pass | 7634075 | 2024-04-01 18:08:49 | 2024-04-01 18:19:20 | 2024-04-01 18:56:22 | 0:37:02 | 0:23:54 | 0:13:08 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/test_rbd_api} | 3 | |
fail | 7634076 | 2024-04-01 18:08:50 | 2024-04-01 18:23:21 | 2024-04-01 18:46:31 | 0:23:10 | 0:13:27 | 0:09:43 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: "2024-04-01T18:43:47.238175+0000 mon.smithi146 (mon.0) 781 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7634077 | 2024-04-01 18:08:51 | 2024-04-01 18:23:22 | 2024-04-01 20:15:12 | 1:51:50 | 1:40:47 | 0:11:03 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7634078 | 2024-04-01 18:08:52 | 2024-04-01 18:23:52 | 2024-04-01 20:25:18 | 2:01:26 | 1:48:29 | 0:12:57 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b64378df0aa36ce626cf358e9d2b6f4658480c2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'
fail | 7634079 | 2024-04-01 18:08:53 | 2024-04-01 18:23:53 | 2024-04-01 18:53:49 | 0:29:56 | 0:11:47 | 0:18:09 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_ca_signed_key} | 2 | |
Failure Reason: Command failed on smithi072 with status 5: 'sudo systemctl stop ceph-aaf7e1b0-f058-11ee-b647-cb9ed24678a4@mon.a'
fail | 7634080 | 2024-04-01 18:08:55 | 2024-04-01 18:31:04 | 2024-04-01 18:51:37 | 0:20:33 | 0:08:52 | 0:11:41 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm} | 1 | |
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b64378df0aa36ce626cf358e9d2b6f4658480c2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
fail | 7634081 | 2024-04-01 18:08:56 | 2024-04-01 18:32:54 | 2024-04-01 19:21:41 | 0:48:47 | 0:34:53 | 0:13:54 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: "2024-04-01T19:10:00.000109+0000 mon.a (mon.0) 1492 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log
fail | 7634082 | 2024-04-01 18:08:57 | 2024-04-01 18:52:35 | 2024-04-01 19:16:48 | 0:24:13 | 0:13:51 | 0:10:22 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: "2024-04-01T19:14:05.583789+0000 mon.smithi050 (mon.0) 777 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
dead | 7634083 | 2024-04-01 18:08:58 | 2024-04-01 18:53:55 | 2024-04-02 07:04:21 | 12:10:26 | | | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: hit max job timeout
fail | 7634084 | 2024-04-01 18:09:00 | 2024-04-01 18:53:56 | 2024-04-01 19:19:11 | 0:25:15 | 0:14:56 | 0:10:19 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-04-01T19:13:37.586041+0000 mon.a (mon.0) 664 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7634085 | 2024-04-01 18:09:01 | 2024-04-01 18:53:56 | 2024-04-01 20:21:40 | 1:27:44 | 1:15:28 | 0:12:16 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: "2024-04-01T19:30:00.000125+0000 mon.a (mon.0) 1202 : cluster [WRN] Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log
fail | 7634086 | 2024-04-01 18:09:02 | 2024-04-01 18:56:27 | 2024-04-02 01:41:21 | 6:44:54 | 6:33:32 | 0:11:22 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi028 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b64378df0aa36ce626cf358e9d2b6f4658480c2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 7634087 | 2024-04-01 18:09:03 | 2024-04-01 18:56:57 | 2024-04-01 19:31:56 | 0:34:59 | 0:19:52 | 0:15:07 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: "2024-04-01T19:24:48.900807+0000 mon.smithi078 (mon.0) 559 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7634088 | 2024-04-01 18:09:05 | 2024-04-01 19:01:28 | 2024-04-01 19:27:36 | 0:26:08 | 0:14:36 | 0:11:32 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: "2024-04-01T19:23:16.396301+0000 mon.smithi038 (mon.0) 785 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7634089 | 2024-04-01 18:09:06 | 2024-04-01 19:02:39 | 2024-04-01 19:42:31 | 0:39:52 | 0:25:52 | 0:14:00 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} | 3 | |
fail | 7634090 | 2024-04-01 18:09:07 | 2024-04-01 19:05:30 | 2024-04-01 19:36:20 | 0:30:50 | 0:18:15 | 0:12:35 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} | 3 | |
Failure Reason: "2024-04-01T19:30:52.580883+0000 mon.a (mon.0) 383 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
fail | 7634091 | 2024-04-01 18:09:08 | 2024-04-01 19:09:11 | 2024-04-01 19:34:12 | 0:25:01 | 0:15:44 | 0:09:17 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-04-01T19:27:33.885365+0000 mon.a (mon.0) 672 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
fail | 7634092 | 2024-04-01 18:09:09 | 2024-04-01 19:09:21 | 2024-04-01 19:36:27 | 0:27:06 | 0:18:09 | 0:08:57 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} | 1 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7634093 | 2024-04-01 18:09:11 | 2024-04-01 19:09:21 | 2024-04-01 20:11:14 | 1:01:53 | 0:43:19 | 0:18:34 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7634094 | 2024-04-01 18:09:12 | 2024-04-01 19:14:02 | 2024-04-01 19:41:23 | 0:27:21 | 0:13:52 | 0:13:29 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
Failure Reason: "2024-04-01T19:38:39.740081+0000 mon.a (mon.0) 299 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
pass | 7634095 | 2024-04-01 18:09:13 | 2024-04-01 19:14:03 | 2024-04-01 19:59:46 | 0:45:43 | 0:36:34 | 0:09:09 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} | 3 | |
fail | 7634096 | 2024-04-01 18:09:14 | 2024-04-01 19:14:23 | 2024-04-01 19:55:16 | 0:40:53 | 0:25:17 | 0:15:36 | smithi | main | ubuntu | 22.04 | rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
Failure Reason: "2024-04-01T19:43:37.245073+0000 mon.a (mon.0) 441 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
fail | 7634097 | 2024-04-01 18:09:16 | 2024-04-01 19:19:24 | 2024-04-01 19:50:47 | 0:31:23 | 0:18:59 | 0:12:24 | smithi | main | ubuntu | 22.04 | rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
Failure Reason: "2024-04-01T19:46:22.137940+0000 mon.smithi113 (mon.0) 748 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7634098 | 2024-04-01 18:09:17 | 2024-04-01 19:20:25 | 2024-04-01 19:57:38 | 0:37:13 | 0:24:09 | 0:13:04 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3 | |
fail | 7634099 | 2024-04-01 18:09:18 | 2024-04-01 19:24:45 | 2024-04-01 19:48:07 | 0:23:22 | 0:13:27 | 0:09:55 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason: "2024-04-01T19:43:49.342497+0000 mon.a (mon.0) 207 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,c" in cluster log
fail | 7634100 | 2024-04-01 18:09:19 | 2024-04-01 19:24:46 | 2024-04-01 21:22:06 | 1:57:20 | 1:46:44 | 0:10:36 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7634101 | 2024-04-01 18:09:21 | 2024-04-01 19:25:06 | 2024-04-01 19:58:51 | 0:33:45 | 0:24:31 | 0:09:14 | smithi | main | ubuntu | 22.04 | rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
Failure Reason: "2024-04-01T19:52:18.981532+0000 mon.a (mon.0) 683 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7634102 | 2024-04-01 18:09:22 | 2024-04-01 19:25:17 | 2024-04-01 19:46:47 | 0:21:30 | 0:11:26 | 0:10:04 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mon} | 1 | |
Failure Reason: Command failed (workunit test mon/health-mute.sh) on smithi173 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b64378df0aa36ce626cf358e9d2b6f4658480c2f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/health-mute.sh'
fail | 7634103 | 2024-04-01 18:09:23 | 2024-04-01 19:25:17 | 2024-04-01 20:10:44 | 0:45:27 | 0:35:47 | 0:09:40 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 | |
Failure Reason: "2024-04-01T20:00:00.000135+0000 mon.a (mon.0) 1568 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log
pass | 7634104 | 2024-04-01 18:09:24 | 2024-04-01 19:25:28 | 2024-04-01 20:07:28 | 0:42:00 | 0:16:27 | 0:25:33 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
fail | 7634105 | 2024-04-01 18:09:26 | 2024-04-01 19:40:31 | 2024-04-01 20:17:23 | 0:36:52 | 0:26:23 | 0:10:29 | smithi | main | centos | 9.stream | rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} | 1 | |
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
fail | 7634106 | 2024-04-01 18:09:27 | 2024-04-01 19:40:32 | 2024-04-01 20:03:23 | 0:22:51 | 0:13:16 | 0:09:35 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
Failure Reason: "2024-04-01T19:59:45.258882+0000 mon.smithi112 (mon.0) 791 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7634107 | 2024-04-01 18:09:28 | 2024-04-01 19:40:32 | 2024-04-01 21:17:41 | 1:37:09 | 1:27:31 | 0:09:38 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
Failure Reason: "2024-04-01T20:10:00.000102+0000 mon.a (mon.0) 1087 : cluster [WRN] Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log
fail | 7634108 | 2024-04-01 18:09:29 | 2024-04-01 19:40:32 | 2024-04-01 20:04:16 | 0:23:44 | 0:13:46 | 0:09:58 | smithi | main | centos | 9.stream | rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
Failure Reason: "2024-04-01T20:01:32.313371+0000 mon.smithi148 (mon.0) 781 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
fail | 7634109 | 2024-04-01 18:09:30 | 2024-04-01 19:40:33 | 2024-04-01 20:51:44 | 1:11:11 | 1:02:08 | 0:09:03 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} | 2 | |
Failure Reason: valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
fail | 7634110 | 2024-04-01 18:09:32 | 2024-04-01 19:40:43 | 2024-04-01 19:57:25 | 0:16:42 | 0:07:26 | 0:09:16 | smithi | main | centos | 9.stream | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} | 1 | |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
fail | 7634111 | 2024-04-01 18:09:33 | 2024-04-01 19:40:44 | 2024-04-01 20:05:36 | 0:24:52 | 0:15:37 | 0:09:15 | smithi | main | centos | 9.stream | rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-04-01T19:57:47.279510+0000 mon.a (mon.0) 445 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log