Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7739676 2024-06-03 19:45:09 2024-06-03 19:47:28 2024-06-03 20:45:58 0:58:30 0:23:54 0:34:36 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-06-03T20:34:54.678080+0000 mon.smithi045 (mon.0) 118 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7739677 2024-06-03 19:45:10 2024-06-03 19:51:19 2024-06-04 00:48:24 4:57:05 4:21:26 0:35:39 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi136 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7d208bbce2efc11fb0dcecbb271cb2051d1daa58 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
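
Note: exit status 124 is what GNU coreutils timeout returns when the wrapped command exceeds its limit, so this workunit ran past the 3-hour cap visible in the command line above rather than failing an assertion. A quick demonstration of the convention:

    timeout 1s sleep 2; echo $?   # prints 124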

fail 7739678 2024-06-03 19:45:11 2024-06-03 19:54:40 2024-06-03 20:17:10 0:22:30 0:11:44 0:10:46 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-06-03T20:14:24.902037+0000 mon.a (mon.0) 102 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7739679 2024-06-03 19:45:12 2024-06-03 19:55:40 2024-06-03 22:48:59 2:53:19 2:23:33 0:29:46 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi151 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7d208bbce2efc11fb0dcecbb271cb2051d1daa58 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

fail 7739680 2024-06-03 19:45:13 2024-06-03 19:55:50 2024-06-03 20:41:18 0:45:28 0:32:40 0:12:48 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-06-03T20:20:33.935519+0000 mon.smithi082 (mon.0) 116 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7739681 2024-06-03 19:45:14 2024-06-03 19:57:01 2024-06-03 20:34:36 0:37:35 0:28:48 0:08:47 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi079 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7d208bbce2efc11fb0dcecbb271cb2051d1daa58 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7739682 2024-06-03 19:45:15 2024-06-03 19:57:01 2024-06-03 20:04:15 0:07:14 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi032 with status 1: 'sudo yum install -y kernel'
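
Note: the same 'yum install -y kernel' failure hits every centos 8.stream thrash-old-clients job in this run (also 7739685, 7739700, 7739702). CentOS Stream 8 went end-of-life on 2024-05-31, days before this run, and its packages moved off the default mirrors, so this is very likely an environment problem rather than a Ceph regression. An illustrative workaround (untested here) is to repoint the repos at the vault archive:

    sudo sed -i -e 's|^mirrorlist=|#mirrorlist=|' \
                -e 's|^#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|' \
        /etc/yum.repos.d/CentOS-Stream-*.repo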

fail 7739683 2024-06-03 19:45:16 2024-06-03 19:57:12 2024-06-03 21:14:21 1:17:09 0:48:23 0:28:46 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed on smithi097 with status 1: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --max-attr-len 20000 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'

fail 7739684 2024-06-03 19:45:17 2024-06-03 19:57:32 2024-06-03 20:46:51 0:49:19 0:15:28 0:33:51 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-06-03T20:44:15.973980+0000 mon.a (mon.0) 113 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7739685 2024-06-03 19:45:18 2024-06-03 19:58:02 2024-06-03 20:05:26 0:07:24 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi044 with status 1: 'sudo yum install -y kernel'

fail 7739686 2024-06-03 19:45:19 2024-06-03 19:58:23 2024-06-03 21:54:42 1:56:19 1:28:17 0:28:02 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

"2024-06-03T21:41:19.606509+0000 mon.a (mon.0) 301 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7739687 2024-06-03 19:45:20 2024-06-03 19:59:23 2024-06-03 20:43:12 0:43:49 0:32:54 0:10:55 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"2024-06-03T20:20:40.007994+0000 mon.smithi064 (mon.0) 110 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

dead 7739688 2024-06-03 19:45:21 2024-06-03 19:59:44 2024-06-03 20:02:08 0:02:24 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi160
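
Note: the three dead jobs in this run (7739688, 7739693, 7739698) all died the same way, before any Ceph code ran: teuthology could not power the test node back on while reimaging. These are infrastructure failures, normally just rescheduled. A hypothetical manual check against the node's BMC (hostname, user, and password are placeholders):

    ipmitool -H smithi160-bmc.example -U USER -P PASS chassis power status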

dead 7739689 2024-06-03 19:45:22 2024-06-03 20:01:04 2024-06-04 08:12:12 12:11:08 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} 4
Failure Reason:

hit max job timeout

pass 7739690 2024-06-03 19:45:23 2024-06-03 20:02:35 2024-06-03 21:46:36 1:44:01 1:33:43 0:10:18 smithi main ubuntu 22.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/erasure-code} 1
fail 7739691 2024-06-03 19:45:24 2024-06-03 20:02:35 2024-06-03 20:58:49 0:56:14 0:22:26 0:33:48 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-06-03T20:47:26.914286+0000 mon.smithi059 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7739692 2024-06-03 19:45:25 2024-06-03 20:04:06 2024-06-03 20:44:00 0:39:54 0:28:27 0:11:27 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi177 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7d208bbce2efc11fb0dcecbb271cb2051d1daa58 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

dead 7739693 2024-06-03 19:45:26 2024-06-03 20:04:16 2024-06-03 20:05:20 0:01:04 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi032

fail 7739694 2024-06-03 19:45:27 2024-06-03 20:04:16 2024-06-03 20:26:27 0:22:11 0:12:51 0:09:20 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-06-03T20:24:32.426514+0000 mon.a (mon.0) 101 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

fail 7739695 2024-06-03 19:45:28 2024-06-03 20:05:07 2024-06-03 21:40:28 1:35:21 1:09:51 0:25:30 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7739696 2024-06-03 19:45:29 2024-06-03 20:05:27 2024-06-03 22:00:37 1:55:10 1:30:48 0:24:22 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun
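
Note: "still reachable" is the mildest valgrind leak kind — memory never freed but still pointed to at exit — yet the valgrind validater in these jobs treats it as an error (also job 7739699 below). A sketch for reproducing the report locally, where <binary> is a placeholder for the daemon or test under scrutiny:

    valgrind --leak-check=full --show-leak-kinds=all <binary>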

fail 7739697 2024-06-03 19:45:30 2024-06-03 20:05:28 2024-06-03 20:57:59 0:52:31 0:15:47 0:36:44 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi117 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:7d208bbce2efc11fb0dcecbb271cb2051d1daa58 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 851fe72a-21eb-11ef-bc9b-c7b262605968 -- ceph-volume lvm zap /dev/nvme4n1'
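
Note: here the cephadm shell wrapper ran 'ceph-volume lvm zap' directly against /dev/nvme4n1 and it returned nonzero; the actual cause needs this job's ceph-volume log. For comparison, the orchestrator-level equivalent (host and device taken from this job; the command form is standard cephadm CLI) would be:

    ceph orch device zap smithi117 /dev/nvme4n1 --force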

dead 7739698 2024-06-03 19:45:31 2024-06-03 20:06:48 2024-06-03 20:07:52 0:01:04 smithi main ubuntu 22.04 rados/thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} 4
Failure Reason:

Error reimaging machines: Failed to power on smithi002

fail 7739699 2024-06-03 19:45:32 2024-06-03 20:06:49 2024-06-03 23:17:50 3:11:01 2:38:45 0:32:16 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7739700 2024-06-03 19:45:33 2024-06-03 20:08:39 2024-06-03 20:16:27 0:07:48 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi105 with status 1: 'sudo yum install -y kernel'

fail 7739701 2024-06-03 19:45:34 2024-06-03 20:09:20 2024-06-03 21:48:32 1:39:12 1:18:46 0:20:26 smithi main centos 9.stream rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7d208bbce2efc11fb0dcecbb271cb2051d1daa58 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7739702 2024-06-03 19:45:35 2024-06-03 20:09:20 2024-06-03 20:19:30 0:10:10 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi016 with status 1: 'sudo yum install -y kernel'

fail 7739703 2024-06-03 19:45:36 2024-06-03 20:12:31 2024-06-03 22:20:11 2:07:40 1:40:08 0:27:32 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi070 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7d208bbce2efc11fb0dcecbb271cb2051d1daa58 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7739704 2024-06-03 19:45:37 2024-06-03 20:13:11 2024-06-03 21:04:33 0:51:22 0:22:28 0:28:54 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"2024-06-03T20:53:27.973761+0000 mon.smithi115 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log