Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7783726 2024-07-02 11:50:02 2024-07-03 00:42:01 2024-07-03 01:11:12 0:29:11 0:16:19 0:12:52 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
Failure Reason:

"2024-07-03T01:04:10.652467+0000 mon.a (mon.0) 457 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7783727 2024-07-02 11:50:03 2024-07-03 00:44:02 2024-07-03 00:58:51 0:14:49 0:04:37 0:10:12 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Command failed on smithi089 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'
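The same status-234 nvme-loop setup failure recurs in jobs 7783729, 7783735, 7783754, and 7783770 below. For readability, here is the failing command unpacked into an annotated sketch: the commands are copied verbatim from the failure message above, the comments are editorial interpretation rather than teuthology output, and set -e stands in for the original && chaining. Which individual step failed cannot be determined from the exit status alone.

    set -e  # stop at the first failing step, as the original && chain does
    # Define an NVMe-over-Fabrics loop target backed by /dev/vg_nvme/lv_1 via the nvmet configfs interface, then connect to it locally.
    sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1                                                   # create the nvmet subsystem
    echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host                           # allow any host NQN to connect
    sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1                                      # add namespace 1 to the subsystem
    echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path   # back the namespace with the LV
    echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable                           # enable the namespace
    sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1     # expose the subsystem on port 1
    sudo nvme connect -t loop -n lv_1 -q hostnqn                                                             # connect to it over the loop transport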

pass 7783728 2024-07-02 11:50:04 2024-07-03 00:44:02 2024-07-03 01:07:20 0:23:18 0:12:06 0:11:12 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} 2
fail 7783729 2024-07-02 11:50:05 2024-07-03 00:44:03 2024-07-03 00:59:46 0:15:43 0:04:35 0:11:08 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi047 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'

pass 7783730 2024-07-02 11:50:06 2024-07-03 00:44:33 2024-07-03 03:15:48 2:31:15 2:22:11 0:09:04 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} 1
fail 7783731 2024-07-02 11:50:07 2024-07-03 00:44:33 2024-07-03 01:22:36 0:38:03 0:28:20 0:09:43 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi145 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4a497e5b6950ee5504a79f996e1a818f067bd2c5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 7783732 2024-07-02 11:50:07 2024-07-03 00:45:14 2024-07-03 01:15:34 0:30:20 0:20:18 0:10:02 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
dead 7783733 2024-07-02 11:50:08 2024-07-03 00:45:44 2024-07-03 12:55:00 12:09:16 smithi main centos 9.stream rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/squid backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/rbd_cls} 3
Failure Reason:

hit max job timeout

fail 7783734 2024-07-02 11:50:09 2024-07-03 00:45:55 2024-07-03 01:46:58 1:01:03 0:51:52 0:09:11 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

"2024-07-03T01:34:24.391163+0000 mon.a (mon.0) 3157 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7783735 2024-07-02 11:50:10 2024-07-03 00:46:05 2024-07-03 01:00:54 0:14:49 0:04:38 0:10:11 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Command failed on smithi032 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'

fail 7783736 2024-07-02 11:50:11 2024-07-03 00:46:05 2024-07-03 01:12:19 0:26:14 0:13:41 0:12:33 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} 2
Failure Reason:

"2024-07-03T01:08:50.179325+0000 mon.a (mon.0) 366 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7783737 2024-07-02 11:50:12 2024-07-03 00:49:06 2024-07-03 01:14:19 0:25:13 0:12:59 0:12:14 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/read mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/redirect_promote_tests} 4
fail 7783738 2024-07-02 11:50:13 2024-07-03 00:50:47 2024-07-03 01:30:10 0:39:23 0:25:59 0:13:24 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/dashboard} 2
Failure Reason:

Test failure: test_full_health (tasks.mgr.dashboard.test_health.HealthTest)

fail 7783739 2024-07-02 11:50:14 2024-07-03 00:52:07 2024-07-03 01:19:33 0:27:26 0:18:33 0:08:53 smithi main ubuntu 22.04 rados/encoder/{0-start 1-tasks supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test dencoder/test-dencoder.sh) on smithi195 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4a497e5b6950ee5504a79f996e1a818f067bd2c5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/dencoder/test-dencoder.sh'

dead 7783740 2024-07-02 11:50:15 2024-07-03 00:52:08 2024-07-03 13:05:38 12:13:30 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

pass 7783741 2024-07-02 11:50:16 2024-07-03 00:53:48 2024-07-03 01:19:19 0:25:31 0:15:27 0:10:04 smithi main ubuntu 22.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/c2c} 1
fail 7783742 2024-07-02 11:50:17 2024-07-03 00:53:49 2024-07-03 03:12:06 2:18:17 2:07:13 0:11:04 smithi main ubuntu 22.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-07-03T01:24:03.801892+0000 mon.a (mon.0) 683 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7783743 2024-07-02 11:50:18 2024-07-03 00:53:49 2024-07-03 01:20:40 0:26:51 0:18:14 0:08:37 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
dead 7783744 2024-07-02 11:50:19 2024-07-03 00:53:49 2024-07-03 01:11:01 0:17:12 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/upmap-read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/redirect_set_object} 4
Failure Reason:

Error reimaging machines: This operation would block forever Hub: <Hub '' at 0x7f78ee6b58f0 epoll default pending=0 ref=0 fileno=4 thread_ident=0x7f78f0e4f740> Handles: []

pass 7783745 2024-07-02 11:50:20 2024-07-03 00:55:20 2024-07-03 01:31:25 0:36:05 0:26:07 0:09:58 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
pass 7783746 2024-07-02 11:50:21 2024-07-03 00:55:30 2024-07-03 01:31:14 0:35:44 0:25:07 0:10:37 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 4
pass 7783747 2024-07-02 11:50:22 2024-07-03 00:56:21 2024-07-03 01:34:25 0:38:04 0:25:16 0:12:48 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/fast mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-snaps-few-objects-overwrites} 4
pass 7783748 2024-07-02 11:50:23 2024-07-03 00:58:52 2024-07-03 01:20:35 0:21:43 0:11:56 0:09:47 smithi main ubuntu 22.04 rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 1
pass 7783749 2024-07-02 11:50:24 2024-07-03 00:58:52 2024-07-03 01:27:44 0:28:52 0:17:42 0:11:10 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/set-chunks-read} 4
pass 7783750 2024-07-02 11:50:25 2024-07-03 00:59:53 2024-07-03 01:25:02 0:25:09 0:13:56 0:11:13 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} tasks/rados_python} 2
pass 7783751 2024-07-02 11:50:25 2024-07-03 00:59:53 2024-07-03 01:36:38 0:36:45 0:25:53 0:10:52 smithi main ubuntu 22.04 rados/thrash-erasure-code-crush-4-nodes/{arch/x86_64 ceph mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=8-m=6-crush} 4
pass 7783752 2024-07-02 11:50:26 2024-07-03 01:01:04 2024-07-03 01:34:37 0:33:33 0:19:41 0:13:52 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/none thrashosds-health workloads/small-objects-balanced} 4
pass 7783753 2024-07-02 11:50:27 2024-07-03 01:05:15 2024-07-03 01:28:45 0:23:30 0:14:35 0:08:55 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} 1
fail 7783754 2024-07-02 11:50:28 2024-07-03 01:05:15 2024-07-03 01:18:07 0:12:52 0:04:28 0:08:24 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/deploy-raw} 2
Failure Reason:

Command failed on smithi187 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'

pass 7783755 2024-07-02 11:50:29 2024-07-03 01:05:15 2024-07-03 01:42:26 0:37:11 0:21:17 0:15:54 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/read mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/small-objects-localized} 4
pass 7783756 2024-07-02 11:50:30 2024-07-03 01:06:46 2024-07-03 01:29:41 0:22:55 0:10:02 0:12:53 smithi main centos 9.stream rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
pass 7783757 2024-07-02 11:50:31 2024-07-03 01:09:07 2024-07-03 01:42:32 0:33:25 0:21:27 0:11:58 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/upmap-read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/small-objects} 4
fail 7783758 2024-07-02 11:50:32 2024-07-03 01:11:17 2024-07-03 01:34:46 0:23:29 0:11:19 0:12:10 smithi main centos 9.stream rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} 1
Failure Reason:

"2024-07-03T01:30:10.658016+0000 mon.smithi094 (mon.0) 280 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7783759 2024-07-02 11:50:33 2024-07-03 01:11:38 2024-07-03 01:55:50 0:44:12 0:30:45 0:13:27 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-localized} 4
pass 7783760 2024-07-02 11:50:34 2024-07-03 01:13:59 2024-07-03 01:36:03 0:22:04 0:11:34 0:10:30 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7783761 2024-07-02 11:50:35 2024-07-03 01:14:29 2024-07-03 01:53:06 0:38:37 0:26:32 0:12:05 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/read mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 4
pass 7783762 2024-07-02 11:50:36 2024-07-03 01:15:40 2024-07-03 01:55:27 0:39:47 0:27:37 0:12:10 smithi main ubuntu 22.04 rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/normal mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 4
pass 7783763 2024-07-02 11:50:37 2024-07-03 01:17:20 2024-07-03 01:44:26 0:27:06 0:14:14 0:12:52 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/upmap-read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest} thrashers/none thrashosds-health workloads/write_fadvise_dontneed} 4
pass 7783764 2024-07-02 11:50:38 2024-07-03 01:19:21 2024-07-03 02:08:18 0:48:57 0:39:06 0:09:51 smithi main centos 9.stream rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/none msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest}} 1
pass 7783765 2024-07-02 11:50:39 2024-07-03 01:19:32 2024-07-03 01:53:12 0:33:40 0:23:14 0:10:26 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} 3
pass 7783766 2024-07-02 11:50:40 2024-07-03 01:19:42 2024-07-03 01:52:06 0:32:24 0:16:06 0:16:18 smithi main centos 9.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7783767 2024-07-02 11:50:41 2024-07-03 01:25:03 2024-07-03 01:53:58 0:28:55 0:17:16 0:11:39 smithi main ubuntu 22.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/force-sync-many workloads/pool-create-delete} 2
pass 7783768 2024-07-02 11:50:42 2024-07-03 01:26:24 2024-07-03 01:53:24 0:27:00 0:17:32 0:09:28 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
pass 7783769 2024-07-02 11:50:42 2024-07-03 01:26:24 2024-07-03 02:01:26 0:35:02 0:24:02 0:11:00 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/cache-agent-big} 4
fail 7783770 2024-07-02 11:50:43 2024-07-03 01:27:45 2024-07-03 01:47:53 0:20:08 0:04:30 0:15:38 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

Command failed on smithi106 with status 234: 'sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1 && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/attr_allow_any_host && sudo mkdir -p /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1 && echo -n /dev/vg_nvme/lv_1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/device_path && echo 1 | sudo tee /sys/kernel/config/nvmet/subsystems/lv_1/namespaces/1/enable && sudo ln -s /sys/kernel/config/nvmet/subsystems/lv_1 /sys/kernel/config/nvmet/ports/1/subsystems/lv_1 && sudo nvme connect -t loop -n lv_1 -q hostnqn'

dead 7783771 2024-07-02 11:50:44 2024-07-03 01:28:55 2024-07-03 13:38:32 12:09:37 smithi main centos 9.stream rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/2-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

hit max job timeout

fail 7783772 2024-07-02 11:50:45 2024-07-03 01:29:46 2024-07-03 01:53:20 0:23:34 0:13:13 0:10:21 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
Failure Reason:

"2024-07-03T01:48:31.266067+0000 mon.a (mon.0) 296 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log