Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7755060 2024-06-14 07:28:48 2024-06-14 07:36:55 2024-06-14 07:47:30 0:10:35 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi043 with status 1: 'sudo yum install -y kernel'

fail 7755061 2024-06-14 07:28:49 2024-06-14 07:40:06 2024-06-14 09:44:54 2:04:48 1:52:25 0:12:23 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

pass 7755062 2024-06-14 07:28:50 2024-06-14 07:41:46 2024-06-14 08:05:38 0:23:52 0:13:22 0:10:30 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
fail 7755063 2024-06-14 07:28:51 2024-06-14 07:41:47 2024-06-14 09:42:48 2:01:01 1:50:03 0:10:58 smithi main ubuntu 22.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi138 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

dead 7755064 2024-06-14 07:28:52 2024-06-14 07:42:57 2024-06-14 19:57:49 12:14:52 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 4
Failure Reason:

hit max job timeout

fail 7755065 2024-06-14 07:28:52 2024-06-14 07:47:38 2024-06-14 11:15:45 3:28:07 3:16:34 0:11:33 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/rados_api_tests} 4
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi112 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

dead 7755066 2024-06-14 07:28:53 2024-06-14 07:49:49 2024-06-14 07:52:23 0:02:34 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi162

fail 7755067 2024-06-14 07:28:54 2024-06-14 07:51:19 2024-06-14 08:34:29 0:43:10 0:25:58 0:17:12 smithi main ubuntu 22.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi062 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7755068 2024-06-14 07:28:55 2024-06-15 12:54:32 2024-06-15 13:29:43 0:35:11 0:25:39 0:09:32 smithi main centos 9.stream rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi153 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7755069 2024-06-14 07:28:56 2024-06-15 12:54:36 2024-06-15 13:02:09 0:07:33 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi003 with status 1: 'sudo yum install -y kernel'

fail 7755070 2024-06-14 07:28:57 2024-06-15 12:54:39 2024-06-15 13:17:45 0:23:06 0:13:42 0:09:24 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} 2
Failure Reason:

"2024-06-15T13:13:03.107715+0000 mon.a (mon.0) 293 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7755071 2024-06-14 07:28:58 2024-06-15 12:54:39 2024-06-15 13:38:18 0:43:39 0:31:29 0:12:10 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/dashboard} 2
Failure Reason:

Test failure: test_list_enabled_module (tasks.mgr.dashboard.test_mgr_module.MgrModuleTest)

fail 7755072 2024-06-14 07:28:59 2024-06-15 12:57:14 2024-06-15 13:22:59 0:25:45 0:14:06 0:11:39 smithi main centos 9.stream rados/encoder/{0-start 1-tasks supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test dencoder/test-dencoder.sh) on smithi059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/dencoder/test-dencoder.sh'

dead 7755073 2024-06-14 07:29:00 2024-06-15 12:58:15 2024-06-16 01:08:44 12:10:29 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

dead 7755074 2024-06-14 07:29:01 2024-06-15 12:58:15 2024-06-15 12:59:59 0:01:44 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi007

pass 7755075 2024-06-14 07:29:02 2024-06-15 12:58:56 2024-06-15 13:25:58 0:27:02 0:18:27 0:08:35 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
pass 7755076 2024-06-14 07:29:03 2024-06-15 12:58:56 2024-06-15 13:38:07 0:39:11 0:27:05 0:12:06 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 4
fail 7755077 2024-06-14 07:29:04 2024-06-15 13:00:52 2024-06-15 13:24:19 0:23:27 0:13:27 0:10:00 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
Failure Reason:

"2024-06-15T13:19:43.406334+0000 mon.a (mon.0) 232 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

dead 7755078 2024-06-14 07:29:05 2024-06-15 13:01:32 2024-06-15 13:03:16 0:01:44 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi003

fail 7755079 2024-06-14 07:29:06 2024-06-15 13:02:13 2024-06-15 14:03:34 1:01:21 0:51:23 0:09:58 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi042 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7755080 2024-06-14 07:29:07 2024-06-15 13:02:43 2024-06-15 13:27:54 0:25:11 0:15:12 0:09:59 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-06-15T13:20:37.146225+0000 mon.a (mon.0) 550 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

dead 7755081 2024-06-14 07:29:08 2024-06-15 13:02:43 2024-06-16 01:13:00 12:10:17 smithi main centos 9.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 3
Failure Reason:

hit max job timeout

dead 7755082 2024-06-14 07:29:09 2024-06-15 13:04:14 2024-06-16 01:18:49 12:14:35 smithi main ubuntu 22.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} 4
Failure Reason:

hit max job timeout

fail 7755083 2024-06-14 07:29:10 2024-06-15 13:08:28 2024-06-15 13:45:50 0:37:22 0:28:25 0:08:57 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

dead 7755084 2024-06-14 07:29:11 2024-06-15 13:08:28 2024-06-15 13:12:23 0:03:55 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi173

dead 7755085 2024-06-14 07:29:12 2024-06-15 13:11:19 2024-06-15 13:13:13 0:01:54 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi151

pass 7755086 2024-06-14 07:29:13 2024-06-15 13:12:10 2024-06-15 13:34:31 0:22:21 0:12:48 0:09:33 smithi main ubuntu 22.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_5925} 2
pass 7755087 2024-06-14 07:29:14 2024-06-15 13:13:00 2024-06-15 13:45:53 0:32:53 0:21:39 0:11:14 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-pool-snaps-readproxy} 4
pass 7755088 2024-06-14 07:29:15 2024-06-15 13:14:01 2024-06-15 13:47:07 0:33:06 0:23:17 0:09:49 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 4
pass 7755089 2024-06-14 07:29:16 2024-06-15 13:14:01 2024-06-15 13:51:08 0:37:07 0:25:40 0:11:27 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} 4
dead 7755090 2024-06-14 07:29:17 2024-06-15 13:17:47 2024-06-15 13:18:51 0:01:04 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

Error reimaging machines: Failed to power on smithi195

fail 7755091 2024-06-14 07:29:18 2024-06-15 13:17:47 2024-06-15 14:01:07 0:43:20 0:27:56 0:15:24 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

"2024-06-15T13:48:22.334105+0000 mon.a (mon.0) 173 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7755092 2024-06-14 07:29:19 2024-06-15 13:24:01 2024-06-15 13:53:37 0:29:36 0:19:40 0:09:56 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/deploy-raw} 2
fail 7755093 2024-06-14 07:29:20 2024-06-15 13:24:01 2024-06-15 13:31:20 0:07:19 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi129 with status 1: 'sudo yum install -y kernel'

pass 7755094 2024-06-14 07:29:21 2024-06-15 13:24:22 2024-06-15 13:56:48 0:32:26 0:20:00 0:12:26 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 2
fail 7755095 2024-06-14 07:29:22 2024-06-15 13:26:22 2024-06-15 14:01:21 0:34:59 0:26:40 0:08:19 smithi main centos 9.stream rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/force-sync-many workloads/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi150 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7755096 2024-06-14 07:29:23 2024-06-15 13:26:22 2024-06-15 13:47:01 0:20:39 0:08:53 0:11:46 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm_repos} 1
Failure Reason:

Command failed (workunit test cephadm/test_repos.sh) on smithi039 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'

fail 7755097 2024-06-14 07:29:23 2024-06-15 13:26:23 2024-06-15 13:52:24 0:26:01 0:14:51 0:11:10 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-06-15T13:48:37.966115+0000 mon.a (mon.0) 1024 : cluster [WRN] Health check failed: 2 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log

fail 7755098 2024-06-14 07:29:24 2024-06-15 13:26:53 2024-06-15 13:59:50 0:32:57 0:24:18 0:08:39 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 7755099 2024-06-14 07:29:25 2024-06-15 13:26:54 2024-06-15 13:54:45 0:27:51 0:15:28 0:12:23 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_host_drain} 3
fail 7755100 2024-06-14 07:29:26 2024-06-15 13:29:34 2024-06-15 16:59:27 3:29:53 3:16:45 0:13:08 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} 4
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi153 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7755101 2024-06-14 07:29:27 2024-06-15 13:31:25 2024-06-15 14:08:01 0:36:36 0:25:25 0:11:11 smithi main centos 9.stream rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi156 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 7755102 2024-06-14 07:29:28 2024-06-15 13:33:06 2024-06-15 14:38:20 1:05:14 0:55:28 0:09:46 smithi main centos 9.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/many workloads/rados_mon_osdmap_prune} 2
fail 7755103 2024-06-14 07:29:29 2024-06-15 13:33:06 2024-06-15 13:40:23 0:07:17 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi181 with status 1: 'sudo yum install -y kernel'

pass 7755104 2024-06-14 07:29:30 2024-06-15 13:33:26 2024-06-15 13:56:08 0:22:42 0:11:10 0:11:32 smithi main ubuntu 22.04 rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} 2
dead 7755105 2024-06-14 07:29:31 2024-06-15 13:34:37 2024-06-16 01:46:42 12:12:05 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 3
Failure Reason:

hit max job timeout

fail 7755106 2024-06-14 07:29:32 2024-06-15 13:36:38 2024-06-15 14:19:16 0:42:38 0:32:09 0:10:29 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/dashboard} 2
Failure Reason:

Test failure: test_list_enabled_module (tasks.mgr.dashboard.test_mgr_module.MgrModuleTest)

dead 7755107 2024-06-14 07:29:33 2024-06-15 13:38:08 2024-06-16 01:47:56 12:09:48 smithi main centos 9.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

hit max job timeout

dead 7755108 2024-06-14 07:29:34 2024-06-15 13:38:08 2024-06-15 13:39:12 0:01:04 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi073

fail 7755109 2024-06-14 07:29:35 2024-06-15 13:38:09 2024-06-15 13:55:21 0:17:12 0:07:38 0:09:34 smithi main centos 9.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr distro/{centos_latest} mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} tasks/{1-install 2-ceph 3-mgrmodules 4-units/progress}} 2
Failure Reason:

Test failure: test_default_progress_test (tasks.mgr.test_progress.TestProgress)

dead 7755110 2024-06-14 07:29:36 2024-06-15 13:38:19 2024-06-15 13:41:03 0:02:44 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi063

fail 7755111 2024-06-14 07:29:37 2024-06-15 13:40:00 2024-06-15 14:09:22 0:29:22 0:18:47 0:10:35 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_monitoring_stack_basic} 3
Failure Reason:

"2024-06-15T13:59:58.207488+0000 mon.a (mon.0) 465 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7755112 2024-06-14 07:29:38 2024-06-15 13:40:30 2024-06-15 14:01:23 0:20:53 0:12:09 0:08:44 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/mon} 1
Failure Reason:

Command failed (workunit test mon/mkfs.sh) on smithi078 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mkfs.sh'

fail 7755113 2024-06-14 07:29:39 2024-06-15 13:40:31 2024-06-15 13:48:07 0:07:36 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi097 with status 1: 'sudo yum install -y kernel'

fail 7755114 2024-06-14 07:29:40 2024-06-15 13:41:11 2024-06-15 13:52:54 0:11:43 smithi main ubuntu 22.04 rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} 8
Failure Reason:

too many values to unpack (expected 1)

pass 7755115 2024-06-14 07:29:41 2024-06-15 13:44:52 2024-06-15 14:06:29 0:21:37 0:10:41 0:10:56 smithi main ubuntu 22.04 rados/singleton/{all/watch-notify-same-primary mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
fail 7755116 2024-06-14 07:29:42 2024-06-15 13:44:52 2024-06-15 14:22:39 0:37:47 0:28:38 0:09:09 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi152 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7755117 2024-06-14 07:29:43 2024-06-15 13:44:53 2024-06-15 13:52:36 0:07:43 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi113 with status 1: 'sudo yum install -y kernel'

fail 7755118 2024-06-14 07:29:44 2024-06-15 13:45:13 2024-06-15 14:18:59 0:33:46 0:23:40 0:10:06 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

"2024-06-15T14:16:50.471992+0000 mon.a (mon.0) 810 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

dead 7755119 2024-06-14 07:29:45 2024-06-15 13:45:54 2024-06-16 01:58:26 12:12:32 smithi main ubuntu 22.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-snaps-few-objects-overwrites} 4
Failure Reason:

hit max job timeout

pass 7755120 2024-06-14 07:29:46 2024-06-15 13:47:14 2024-06-15 14:15:10 0:27:56 0:18:35 0:09:21 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/admin_socket_objecter_requests} 4
fail 7755121 2024-06-14 07:29:47 2024-06-15 13:48:15 2024-06-15 14:16:33 0:28:18 0:15:08 0:13:10 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-06-15T14:10:08.080703+0000 mon.a (mon.0) 699 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7755122 2024-06-14 07:29:48 2024-06-15 13:50:46 2024-06-15 14:46:10 0:55:24 0:46:21 0:09:03 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=eb07a70efc10a8d5c05dfe7c9475918bc1aa778c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

dead 7755123 2024-06-14 07:29:49 2024-06-15 13:51:16 2024-06-16 02:00:38 12:09:22 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 3
Failure Reason:

hit max job timeout