Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7761746 2024-06-18 07:58:40 2024-06-18 09:40:07 2024-06-18 09:49:25 0:09:18 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi087 with status 1: 'sudo yum install -y kernel'

fail 7761747 2024-06-18 07:58:41 2024-06-18 09:42:07 2024-06-18 11:49:51 2:07:44 1:57:50 0:09:54 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

pass 7761748 2024-06-18 07:58:42 2024-06-18 09:42:38 2024-06-18 10:06:09 0:23:31 0:13:23 0:10:08 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
fail 7761749 2024-06-18 07:58:43 2024-06-18 09:42:38 2024-06-18 11:39:07 1:56:29 1:46:59 0:09:30 smithi main ubuntu 22.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi188 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

pass 7761750 2024-06-18 07:58:44 2024-06-18 09:43:29 2024-06-18 10:34:39 0:51:10 0:41:43 0:09:27 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 4
fail 7761751 2024-06-18 07:58:45 2024-06-18 09:43:39 2024-06-18 10:42:16 0:58:37 0:45:02 0:13:35 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/rados_api_tests} 4
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi194 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 7761752 2024-06-18 07:58:46 2024-06-18 09:46:30 2024-06-18 10:32:17 0:45:47 0:33:15 0:12:32 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 7761753 2024-06-18 07:58:47 2024-06-18 09:47:20 2024-06-18 10:23:25 0:36:05 0:26:04 0:10:01 smithi main ubuntu 22.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi129 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7761754 2024-06-18 07:58:48 2024-06-18 09:47:31 2024-06-18 10:23:01 0:35:30 0:26:23 0:09:07 smithi main centos 9.stream rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi187 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7761755 2024-06-18 07:58:49 2024-06-18 09:48:01 2024-06-18 09:55:31 0:07:30 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi107 with status 1: 'sudo yum install -y kernel'

fail 7761756 2024-06-18 07:58:50 2024-06-18 09:48:22 2024-06-18 10:13:58 0:25:36 0:14:41 0:10:55 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} 2
Failure Reason:

"2024-06-18T10:12:11.740826+0000 mon.a (mon.0) 459 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7761757 2024-06-18 07:58:51 2024-06-18 09:51:02 2024-06-18 10:34:28 0:43:26 0:32:06 0:11:20 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/dashboard} 2
Failure Reason:

Test failure: test_list_enabled_module (tasks.mgr.dashboard.test_mgr_module.MgrModuleTest)

fail 7761758 2024-06-18 07:58:52 2024-06-18 09:51:23 2024-06-18 10:16:26 0:25:03 0:15:13 0:09:50 smithi main centos 9.stream rados/encoder/{0-start 1-tasks supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test dencoder/test-dencoder.sh) on smithi057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/dencoder/test-dencoder.sh'

dead 7761759 2024-06-18 07:58:54 2024-06-18 09:51:23 2024-06-18 22:01:37 12:10:14 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

fail 7761760 2024-06-18 07:58:55 2024-06-18 09:51:54 2024-06-18 11:50:37 1:58:43 1:46:33 0:12:10 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-06-18T10:14:02.317014+0000 mon.a (mon.0) 519 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 7761761 2024-06-18 07:58:56 2024-06-18 09:53:34 2024-06-18 10:22:21 0:28:47 0:20:18 0:08:29 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
pass 7761762 2024-06-18 07:58:57 2024-06-18 09:53:34 2024-06-18 10:32:32 0:38:58 0:26:56 0:12:02 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 4
fail 7761763 2024-06-18 07:58:58 2024-06-18 09:55:35 2024-06-18 10:19:03 0:23:28 0:13:05 0:10:23 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
Failure Reason:

"2024-06-18T10:13:50.888864+0000 mon.a (mon.0) 234 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7761764 2024-06-18 07:58:59 2024-06-18 09:55:35 2024-06-18 10:03:55 0:08:20 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi005 with status 1: 'sudo yum install -y kernel'

fail 7761765 2024-06-18 07:59:00 2024-06-18 09:56:46 2024-06-18 10:56:00 0:59:14 0:46:43 0:12:31 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7761766 2024-06-18 07:59:01 2024-06-18 09:58:47 2024-06-18 10:25:50 0:27:03 0:15:12 0:11:51 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-06-18T10:22:35.310680+0000 mon.a (mon.0) 989 : cluster [WRN] Health check failed: 2 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log

pass 7761767 2024-06-18 07:59:02 2024-06-18 10:00:17 2024-06-18 10:36:34 0:36:17 0:24:26 0:11:51 smithi main centos 9.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 3
pass 7761768 2024-06-18 07:59:03 2024-06-18 10:03:38 2024-06-18 10:39:11 0:35:33 0:23:59 0:11:34 smithi main ubuntu 22.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} 4
fail 7761769 2024-06-18 07:59:04 2024-06-18 10:05:39 2024-06-18 10:43:20 0:37:41 0:27:50 0:09:51 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi107 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7761770 2024-06-18 07:59:05 2024-06-18 10:05:39 2024-06-18 10:13:08 0:07:29 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi155 with status 1: 'sudo yum install -y kernel'

fail 7761771 2024-06-18 07:59:06 2024-06-18 10:06:00 2024-06-18 10:41:56 0:35:56 0:23:26 0:12:30 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

"2024-06-18T10:39:53.275443+0000 mon.a (mon.0) 647 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7761772 2024-06-18 07:59:07 2024-06-18 10:08:40 2024-06-18 10:35:15 0:26:35 0:12:32 0:14:03 smithi main ubuntu 22.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_5925} 2
pass 7761773 2024-06-18 07:59:08 2024-06-18 10:13:11 2024-06-18 11:15:37 1:02:26 0:51:17 0:11:09 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/cache-pool-snaps-readproxy} 4
pass 7761774 2024-06-18 07:59:09 2024-06-18 10:14:02 2024-06-18 10:50:15 0:36:13 0:22:53 0:13:20 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 4
pass 7761775 2024-06-18 07:59:10 2024-06-18 10:17:13 2024-06-18 10:56:17 0:39:04 0:24:06 0:14:58 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} 4
fail 7761776 2024-06-18 07:59:11 2024-06-18 10:21:44 2024-06-18 10:49:23 0:27:39 0:18:09 0:09:30 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7761777 2024-06-18 07:59:12 2024-06-18 10:22:24 2024-06-18 11:02:46 0:40:22 0:29:42 0:10:40 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

pass 7761778 2024-06-18 07:59:13 2024-06-18 10:23:35 2024-06-18 10:56:00 0:32:25 0:19:43 0:12:42 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/deploy-raw} 2
fail 7761779 2024-06-18 07:59:14 2024-06-18 10:25:55 2024-06-18 10:39:34 0:13:39 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

Command failed on smithi165 with status 1: 'sudo yum install -y kernel'

pass 7761780 2024-06-18 07:59:16 2024-06-18 10:32:16 2024-06-18 11:04:51 0:32:35 0:21:48 0:10:47 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 2
fail 7761781 2024-06-18 07:59:17 2024-06-18 10:32:27 2024-06-18 11:09:24 0:36:57 0:26:36 0:10:21 smithi main centos 9.stream rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/force-sync-many workloads/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi116 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7761782 2024-06-18 07:59:18 2024-06-18 10:32:37 2024-06-18 10:50:08 0:17:31 0:08:36 0:08:55 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm_repos} 1
Failure Reason:

Command failed (workunit test cephadm/test_repos.sh) on smithi078 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'

fail 7761783 2024-06-18 07:59:19 2024-06-18 10:32:37 2024-06-18 10:58:45 0:26:08 0:15:10 0:10:58 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-06-18T10:56:07.087617+0000 mon.a (mon.0) 1026 : cluster [WRN] Health check failed: 2 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log

fail 7761784 2024-06-18 07:59:20 2024-06-18 10:32:38 2024-06-18 11:09:41 0:37:03 0:24:53 0:12:10 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi089 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7761785 2024-06-18 07:59:21 2024-06-18 10:34:28 2024-06-18 10:59:39 0:25:11 0:15:40 0:09:31 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_host_drain} 3
Failure Reason:

"2024-06-18T10:55:20.218546+0000 mon.a (mon.0) 641 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7761786 2024-06-18 07:59:22 2024-06-18 10:34:29 2024-06-18 14:04:36 3:30:07 3:17:25 0:12:42 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} 4
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi155 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7761787 2024-06-18 07:59:23 2024-06-18 10:36:40 2024-06-18 11:11:37 0:34:57 0:25:48 0:09:09 smithi main centos 9.stream rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 7761788 2024-06-18 07:59:24 2024-06-18 10:36:40 2024-06-18 11:42:15 1:05:35 0:52:53 0:12:42 smithi main centos 9.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/many workloads/rados_mon_osdmap_prune} 2
fail 7761789 2024-06-18 07:59:25 2024-06-18 10:39:21 2024-06-18 10:48:11 0:08:50 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi181 with status 1: 'sudo yum install -y kernel'

pass 7761790 2024-06-18 07:59:26 2024-06-18 10:39:21 2024-06-18 11:01:36 0:22:15 0:10:49 0:11:26 smithi main ubuntu 22.04 rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} 2
pass 7761791 2024-06-18 07:59:27 2024-06-18 10:39:41 2024-06-18 11:19:50 0:40:09 0:27:38 0:12:31 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 3
fail 7761792 2024-06-18 07:59:28 2024-06-18 10:42:02 2024-06-18 11:23:21 0:41:19 0:31:34 0:09:45 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/dashboard} 2
Failure Reason:

Test failure: test_list_enabled_module (tasks.mgr.dashboard.test_mgr_module.MgrModuleTest)

dead 7761793 2024-06-18 07:59:29 2024-06-18 10:42:22 2024-06-18 22:52:35 12:10:13 smithi main centos 9.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

hit max job timeout

fail 7761794 2024-06-18 07:59:30 2024-06-18 10:42:23 2024-06-18 12:40:41 1:58:18 1:48:03 0:10:15 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-06-18T11:04:28.220751+0000 mon.a (mon.0) 666 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7761795 2024-06-18 07:59:31 2024-06-18 10:43:23 2024-06-18 11:05:14 0:21:51 0:07:47 0:14:04 smithi main centos 9.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr distro/{centos_latest} mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} tasks/{1-install 2-ceph 3-mgrmodules 4-units/progress}} 2
Failure Reason:

Test failure: test_default_progress_test (tasks.mgr.test_progress.TestProgress)

fail 7761796 2024-06-18 07:59:33 2024-06-18 10:48:14 2024-06-18 12:48:19 2:00:05 1:48:14 0:11:51 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7761797 2024-06-18 07:59:34 2024-06-18 10:49:25 2024-06-18 11:20:57 0:31:32 0:21:07 0:10:25 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_monitoring_stack_basic} 3
Failure Reason:

"2024-06-18T11:13:59.241843+0000 mon.a (mon.0) 758 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7761798 2024-06-18 07:59:35 2024-06-18 10:49:55 2024-06-18 11:14:36 0:24:41 0:15:07 0:09:34 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/mon} 1
Failure Reason:

Command failed (workunit test mon/mkfs.sh) on smithi043 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mkfs.sh'

fail 7761799 2024-06-18 07:59:36 2024-06-18 10:49:56 2024-06-18 11:03:35 0:13:39 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

Command failed on smithi175 with status 1: 'sudo yum install -y kernel'

fail 7761800 2024-06-18 07:59:37 2024-06-18 10:56:07 2024-06-18 11:07:59 0:11:52 smithi main ubuntu 22.04 rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} 8
Failure Reason:

too many values to unpack (expected 1)
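
The message above is a stock Python ValueError from single-target iterable unpacking, raised inside the test harness rather than by Ceph itself. A minimal sketch reproducing it in isolation (illustrative only; the variable names are hypothetical, not teuthology's code):

```python
# Single-target unpacking demands exactly one element; two or more
# raise "ValueError: too many values to unpack (expected 1)".
entries = ["osd.0", "osd.1"]   # hypothetical two-element role list

try:
    (only,) = entries          # expects exactly one item
except ValueError as exc:
    print(exc)                 # -> too many values to unpack (expected 1)
```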

pass 7761801 2024-06-18 07:59:38 2024-06-18 10:58:48 2024-06-18 11:19:12 0:20:24 0:10:30 0:09:54 smithi main ubuntu 22.04 rados/singleton/{all/watch-notify-same-primary mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
fail 7761802 2024-06-18 07:59:39 2024-06-18 10:59:48 2024-06-18 11:37:27 0:37:39 0:28:10 0:09:29 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi143 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7761803 2024-06-18 07:59:40 2024-06-18 10:59:48 2024-06-18 11:09:01 0:09:13 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi165 with status 1: 'sudo yum install -y kernel'

fail 7761804 2024-06-18 07:59:41 2024-06-18 11:01:39 2024-06-18 11:36:21 0:34:42 0:23:45 0:10:57 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

"2024-06-18T11:34:30.096251+0000 mon.a (mon.0) 807 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7761805 2024-06-18 07:59:42 2024-06-18 11:02:50 2024-06-18 11:59:48 0:56:58 0:44:07 0:12:51 smithi main ubuntu 22.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/osd-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-snaps-few-objects-overwrites} 4
pass 7761806 2024-06-18 07:59:43 2024-06-18 11:05:00 2024-06-18 11:41:28 0:36:28 0:22:34 0:13:54 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/admin_socket_objecter_requests} 4
fail 7761807 2024-06-18 07:59:44 2024-06-18 11:08:01 2024-06-18 11:33:16 0:25:15 0:14:58 0:10:17 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-06-18T11:29:05.820724+0000 mon.a (mon.0) 1042 : cluster [WRN] Health check failed: 2 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log

fail 7761808 2024-06-18 07:59:45 2024-06-18 11:08:01 2024-06-18 12:10:02 1:02:01 0:50:21 0:11:40 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi131 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acf6609c9112bdd701428440affbf916eb043fff TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 7761809 2024-06-18 07:59:46 2024-06-18 11:08:02 2024-06-18 11:49:20 0:41:18 0:28:27 0:12:51 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 3