Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7679675 2024-04-29 21:43:08 2024-04-29 21:46:41 2024-04-29 22:05:26 0:18:45 0:12:50 0:05:55 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7679676 2024-04-29 21:43:10 2024-04-29 21:46:41 2024-04-29 23:25:41 1:39:00 1:32:02 0:06:58 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi040 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=354447ca5357e926795d009be846687f04556beb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

pass 7679677 2024-04-29 21:43:11 2024-04-29 21:46:42 2024-04-29 22:16:32 0:29:50 0:19:53 0:09:57 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} 1
pass 7679678 2024-04-29 21:43:12 2024-04-29 21:46:42 2024-04-29 22:16:15 0:29:33 0:20:23 0:09:10 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
dead 7679679 2024-04-29 21:43:14 2024-04-29 21:46:42 2024-04-29 22:05:21 0:18:39 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

fail 7679680 2024-04-29 21:43:15 2024-04-29 21:46:43 2024-04-29 22:59:57 1:13:14 0:56:08 0:17:06 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} 3
Failure Reason:

"2024-04-29T22:50:00.000163+0000 mon.a (mon.0) 1747 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set" in cluster log

dead 7679681 2024-04-29 21:43:16 2024-04-29 21:46:43 2024-04-30 10:04:08 12:17:25 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

fail 7679682 2024-04-29 21:43:18 2024-04-29 21:53:34 2024-04-29 23:47:08 1:53:34 1:46:35 0:06:59 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-04-29T22:18:52.753166+0000 mon.a (mon.0) 316 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7679683 2024-04-29 21:43:19 2024-04-29 21:53:35 2024-04-29 22:22:56 0:29:21 0:13:42 0:15:39 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7679684 2024-04-29 21:43:20 2024-04-29 22:02:06 2024-04-29 22:21:06 0:19:00 0:12:30 0:06:30 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} 2
fail 7679685 2024-04-29 21:43:22 2024-04-29 22:02:06 2024-04-30 00:01:15 1:59:09 1:40:29 0:18:40 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
Failure Reason:

"2024-04-29T23:00:00.000262+0000 mon.a (mon.0) 1040 : cluster [WRN] Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set" in cluster log

fail 7679686 2024-04-29 21:43:23 2024-04-29 22:02:07 2024-04-30 04:27:44 6:25:37 6:18:50 0:06:47 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi152 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=354447ca5357e926795d009be846687f04556beb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 7679687 2024-04-29 21:43:24 2024-04-29 22:02:07 2024-04-29 22:31:47 0:29:40 0:19:48 0:09:52 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
dead 7679688 2024-04-29 21:43:26 2024-04-29 22:02:07 2024-04-30 10:10:28 12:08:21 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

hit max job timeout

pass 7679689 2024-04-29 21:43:27 2024-04-29 22:02:08 2024-04-29 22:24:21 0:22:13 0:12:33 0:09:40 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
fail 7679690 2024-04-29 21:43:28 2024-04-29 22:05:29 2024-04-29 22:43:19 0:37:50 0:28:01 0:09:49 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi171 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=354447ca5357e926795d009be846687f04556beb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 7679691 2024-04-29 21:43:30 2024-04-29 22:05:29 2024-04-29 22:25:03 0:19:34 0:12:32 0:07:02 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
pass 7679692 2024-04-29 21:43:31 2024-04-29 22:05:39 2024-04-29 22:52:00 0:46:21 0:25:07 0:21:14 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
fail 7679693 2024-04-29 21:43:32 2024-04-29 22:16:21 2024-04-29 22:49:34 0:33:13 0:20:49 0:12:24 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-04-29T22:44:25.862446+0000 mon.smithi103 (mon.0) 804 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7679694 2024-04-29 21:43:34 2024-04-29 22:17:31 2024-04-29 22:40:25 0:22:54 0:17:07 0:05:47 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7679695 2024-04-29 21:43:35 2024-04-29 22:17:32 2024-04-29 22:54:51 0:37:19 0:30:16 0:07:03 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7679696 2024-04-29 21:43:36 2024-04-29 22:17:32 2024-04-29 23:40:11 1:22:39 1:04:49 0:17:50 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} 3
fail 7679697 2024-04-29 21:43:38 2024-04-29 22:23:03 2024-04-29 22:45:22 0:22:19 0:14:36 0:07:43 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-04-29T22:37:53.343369+0000 mon.a (mon.0) 512 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7679698 2024-04-29 21:43:39 2024-04-29 22:24:24 2024-04-29 23:01:54 0:37:30 0:28:42 0:08:48 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi082 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=354447ca5357e926795d009be846687f04556beb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

dead 7679699 2024-04-29 21:43:40 2024-04-29 22:24:24 2024-04-30 10:35:38 12:11:14 smithi main centos 9.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

hit max job timeout

fail 7679700 2024-04-29 21:43:42 2024-04-29 22:25:05 2024-04-30 00:21:40 1:56:35 1:47:27 0:09:08 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-04-29T22:53:50.681667+0000 mon.a (mon.0) 358 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7679701 2024-04-29 21:43:43 2024-04-29 22:28:15 2024-04-30 00:20:11 1:51:56 1:43:55 0:08:01 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

pass 7679702 2024-04-29 21:43:44 2024-04-29 22:28:46 2024-04-29 23:49:22 1:20:36 0:59:41 0:20:55 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} 3