Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7722897 2024-05-23 09:51:34 2024-05-23 13:36:52 2024-05-23 14:01:48 0:24:56 0:14:40 0:10:16 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
fail 7722898 2024-05-23 09:51:35 2024-05-23 13:36:52 2024-05-23 17:07:33 3:30:41 3:20:36 0:10:05 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi088 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=34a0e0b0f768a2d68770203a7573913d71c36ac1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
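Note on the status: 124 is not an error code from test_cls_2pc_queue.sh itself; it is the exit status GNU coreutils `timeout` reports when it has to kill a command that outlives its limit (the `timeout 3h` wrapper in the command line above), i.e. the workunit ran past its 3-hour cap. A minimal illustration, using a short sleep in place of the workunit:

```shell
# Illustrative only (not part of the run): cap a 5-second sleep at
# 1 second; `timeout` kills it and exits with status 124.
timeout 1 sleep 5
echo "exit status: $?"   # prints: exit status: 124
```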

fail 7722899 2024-05-23 09:51:36 2024-05-23 13:37:53 2024-05-23 13:59:42 0:21:49 0:11:23 0:10:26 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_ca_signed_key} 2
Failure Reason:

Command failed on smithi134 with status 5: 'sudo systemctl stop ceph-2a5116e4-190c-11ef-bc9a-c7b262605968@mon.a'

fail 7722900 2024-05-23 09:51:37 2024-05-23 13:38:03 2024-05-23 14:01:16 0:23:13 0:13:08 0:10:05 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi148 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=34a0e0b0f768a2d68770203a7573913d71c36ac1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7722901 2024-05-23 09:51:38 2024-05-23 13:38:04 2024-05-23 14:21:43 0:43:39 0:34:44 0:08:55 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 7722902 2024-05-23 09:51:39 2024-05-23 13:38:04 2024-05-23 14:26:45 0:48:41 0:35:14 0:13:27 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} 3
pass 7722903 2024-05-23 09:51:40 2024-05-23 13:41:15 2024-05-23 14:05:14 0:23:59 0:13:44 0:10:15 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7722904 2024-05-23 09:51:41 2024-05-23 13:41:55 2024-05-23 14:06:52 0:24:57 0:14:55 0:10:02 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7722905 2024-05-23 09:51:42 2024-05-23 13:41:56 2024-05-23 15:09:59 1:28:03 1:17:00 0:11:03 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
dead 7722906 2024-05-23 09:51:43 2024-05-23 13:42:46 2024-05-23 13:44:10 0:01:24 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi132

pass 7722907 2024-05-23 09:51:44 2024-05-23 13:43:07 2024-05-23 14:11:03 0:27:56 0:19:14 0:08:42 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 7722908 2024-05-23 09:51:45 2024-05-23 13:43:37 2024-05-23 15:28:45 1:45:08 1:34:30 0:10:38 smithi main ubuntu 22.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} 1
Failure Reason:

Command failed (workunit test erasure-code/test-erasure-eio.sh) on smithi029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=34a0e0b0f768a2d68770203a7573913d71c36ac1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/erasure-code/test-erasure-eio.sh'

pass 7722909 2024-05-23 09:51:46 2024-05-23 13:43:57 2024-05-23 14:27:29 0:43:32 0:34:55 0:08:37 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
pass 7722910 2024-05-23 09:51:47 2024-05-23 13:43:58 2024-05-23 14:09:09 0:25:11 0:16:07 0:09:04 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
pass 7722911 2024-05-23 09:51:48 2024-05-23 13:44:18 2024-05-23 14:09:37 0:25:19 0:15:22 0:09:57 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
fail 7722912 2024-05-23 09:51:49 2024-05-23 13:44:39 2024-05-23 14:12:49 0:28:10 0:17:50 0:10:20 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7722913 2024-05-23 09:51:50 2024-05-23 13:45:49 2024-05-23 14:24:00 0:38:11 0:27:43 0:10:28 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

pass 7722914 2024-05-23 09:51:51 2024-05-23 13:47:00 2024-05-23 14:09:40 0:22:40 0:14:25 0:08:15 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
pass 7722915 2024-05-23 09:51:52 2024-05-23 13:47:00 2024-05-23 14:23:33 0:36:33 0:25:45 0:10:48 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} 3
pass 7722916 2024-05-23 09:51:53 2024-05-23 13:48:01 2024-05-23 14:29:12 0:41:11 0:32:19 0:08:52 smithi main centos 9.stream rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} 1
pass 7722917 2024-05-23 09:51:54 2024-05-23 13:48:01 2024-05-23 14:17:46 0:29:45 0:19:39 0:10:06 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
dead 7722918 2024-05-23 09:51:55 2024-05-23 13:48:12 2024-05-24 01:57:55 12:09:43 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

fail 7722919 2024-05-23 09:51:56 2024-05-23 13:48:32 2024-05-23 15:57:48 2:09:16 1:58:47 0:10:29 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

pass 7722920 2024-05-23 09:51:57 2024-05-23 13:48:43 2024-05-23 14:26:21 0:37:38 0:24:42 0:12:56 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7722921 2024-05-23 09:51:58 2024-05-23 13:49:33 2024-05-23 14:36:49 0:47:16 0:37:40 0:09:36 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
pass 7722922 2024-05-23 09:51:59 2024-05-23 13:49:53 2024-05-23 14:13:48 0:23:55 0:14:11 0:09:44 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7722923 2024-05-23 09:52:00 2024-05-23 13:50:54 2024-05-23 14:34:58 0:44:04 0:32:54 0:11:10 smithi main centos 9.stream rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} 1
pass 7722924 2024-05-23 09:52:01 2024-05-23 13:52:04 2024-05-23 15:26:03 1:33:59 1:21:09 0:12:50 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} 3
pass 7722925 2024-05-23 09:52:02 2024-05-23 13:54:55 2024-05-23 14:18:06 0:23:11 0:13:58 0:09:13 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
fail 7722926 2024-05-23 09:52:04 2024-05-23 13:55:16 2024-05-23 20:29:17 6:34:01 6:24:40 0:09:21 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi079 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=34a0e0b0f768a2d68770203a7573913d71c36ac1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7722927 2024-05-23 09:52:05 2024-05-23 13:55:26 2024-05-23 14:13:56 0:18:30 0:07:46 0:10:44 smithi main centos 9.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

pass 7722928 2024-05-23 09:52:05 2024-05-23 13:56:16 2024-05-23 14:21:21 0:25:05 0:15:26 0:09:39 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2