Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7740338 2024-06-03 22:15:31 2024-06-03 22:18:26 2024-06-03 22:57:17 0:38:51 0:22:04 0:16:47 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-06-03T22:46:36.162629+0000 mon.smithi063 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7740339 2024-06-03 22:15:32 2024-06-03 22:18:26 2024-06-04 01:14:58 2:56:32 2:40:06 0:16:26 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7740340 2024-06-03 22:15:33 2024-06-03 22:19:56 2024-06-03 22:41:46 0:21:50 0:11:19 0:10:31 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-06-03T22:38:23.446166+0000 mon.a (mon.0) 102 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7740341 2024-06-03 22:15:33 2024-06-03 22:20:17 2024-06-04 00:52:59 2:32:42 2:17:33 0:15:09 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5c821233d097eb6eb4287bbd1d0b6d01638e5f90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

fail 7740342 2024-06-03 22:15:34 2024-06-03 22:20:17 2024-06-03 23:00:11 0:39:54 0:30:24 0:09:30 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-06-03T22:38:56.169152+0000 mon.smithi038 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7740343 2024-06-03 22:15:35 2024-06-03 22:20:37 2024-06-03 23:01:46 0:41:09 0:29:06 0:12:03 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi161 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5c821233d097eb6eb4287bbd1d0b6d01638e5f90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7740344 2024-06-03 22:15:36 2024-06-03 22:21:58 2024-06-03 22:29:20 0:07:22 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi005 with status 1: 'sudo yum install -y kernel'

dead 7740345 2024-06-03 22:15:37 2024-06-03 22:22:08 2024-06-04 10:32:51 12:10:43 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

hit max job timeout

dead 7740346 2024-06-03 22:15:38 2024-06-03 22:23:29 2024-06-03 22:27:03 0:03:34 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi177

fail 7740347 2024-06-03 22:15:39 2024-06-03 22:26:00 2024-06-03 22:33:44 0:07:44 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi078 with status 1: 'sudo yum install -y kernel'

fail 7740348 2024-06-03 22:15:40 2024-06-03 22:26:20 2024-06-04 00:09:24 1:43:04 1:31:05 0:11:59 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi069 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5c821233d097eb6eb4287bbd1d0b6d01638e5f90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

dead 7740349 2024-06-03 22:15:41 2024-06-03 22:28:21 2024-06-03 22:29:54 0:01:33 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi018

dead 7740350 2024-06-03 22:15:42 2024-06-03 22:28:51 2024-06-03 22:30:15 0:01:24 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi194

dead 7740351 2024-06-03 22:15:43 2024-06-03 22:29:11 2024-06-04 10:38:58 12:09:47 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} 4
Failure Reason:

hit max job timeout

dead 7740352 2024-06-03 22:15:44 2024-06-03 22:29:22 2024-06-03 22:30:26 0:01:04 smithi main ubuntu 22.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} 1
Failure Reason:

Error reimaging machines: Failed to power on smithi005

fail 7740353 2024-06-03 22:15:45 2024-06-03 22:29:22 2024-06-03 23:15:05 0:45:43 0:18:50 0:26:53 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-06-03T23:04:14.234373+0000 mon.smithi037 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7740354 2024-06-03 22:15:46 2024-06-03 22:32:23 2024-06-03 23:09:54 0:37:31 0:28:33 0:08:58 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi145 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5c821233d097eb6eb4287bbd1d0b6d01638e5f90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7740355 2024-06-03 22:15:47 2024-06-03 22:32:23 2024-06-03 23:26:01 0:53:38 0:29:24 0:24:14 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-06-03T23:15:57.728326+0000 mon.smithi043 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7740356 2024-06-03 22:15:48 2024-06-03 22:33:14 2024-06-03 22:56:48 0:23:34 0:11:41 0:11:53 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-06-03T22:54:35.982718+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7740357 2024-06-03 22:15:49 2024-06-03 22:33:24 2024-06-04 00:04:32 1:31:08 1:08:09 0:22:59 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

dead 7740358 2024-06-03 22:15:50 2024-06-03 22:33:24 2024-06-03 22:34:28 0:01:04 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi079

fail 7740359 2024-06-03 22:15:51 2024-06-03 22:33:25 2024-06-03 23:21:02 0:47:37 0:18:20 0:29:17 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi002 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:7d208bbce2efc11fb0dcecbb271cb2051d1daa58 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 47727c94-21ff-11ef-bc9b-c7b262605968 -- ceph-volume lvm zap /dev/nvme4n1'

fail 7740360 2024-06-03 22:15:52 2024-06-03 22:33:45 2024-06-03 22:59:51 0:26:06 0:14:50 0:11:16 smithi main ubuntu 22.04 rados/thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} 4
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'

fail 7740361 2024-06-03 22:15:53 2024-06-03 22:34:06 2024-06-04 01:35:17 3:01:11 2:32:25 0:28:46 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7740362 2024-06-03 22:15:53 2024-06-03 22:35:26 2024-06-03 22:43:49 0:08:23 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi142 with status 1: 'sudo yum install -y kernel'

fail 7740363 2024-06-03 22:15:54 2024-06-03 22:36:27 2024-06-04 00:07:26 1:30:59 0:58:07 0:32:52 smithi main centos 9.stream rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi133 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5c821233d097eb6eb4287bbd1d0b6d01638e5f90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7740364 2024-06-03 22:15:55 2024-06-03 22:38:28 2024-06-03 22:46:19 0:07:51 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi122 with status 1: 'sudo yum install -y kernel'

fail 7740365 2024-06-03 22:15:56 2024-06-03 22:38:48 2024-06-04 00:33:12 1:54:24 1:21:17 0:33:07 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi173 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5c821233d097eb6eb4287bbd1d0b6d01638e5f90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7740366 2024-06-03 22:15:57 2024-06-03 22:39:48 2024-06-03 23:36:52 0:57:04 0:25:54 0:31:10 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"2024-06-03T23:26:48.643244+0000 mon.smithi070 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log