Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7570367 2024-02-22 01:14:25 2024-02-22 01:15:21 2024-02-22 01:40:38 0:25:17 0:14:16 0:11:01 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-02-22T01:33:09.610766+0000 mon.smithi033 (mon.0) 262 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

fail 7570368 2024-02-22 01:14:26 2024-02-22 01:15:31 2024-02-22 08:40:13 7:24:42 7:14:58 0:09:44 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 3
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'

fail 7570369 2024-02-22 01:14:27 2024-02-22 01:15:52 2024-02-22 02:04:36 0:48:44 0:38:14 0:10:30 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

"2024-02-22T01:41:39.628083+0000 mon.a (mon.0) 442 : cluster 3 [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm" in cluster log

pass 7570370 2024-02-22 01:14:28 2024-02-22 01:16:53 2024-02-22 01:47:56 0:31:03 0:20:18 0:10:45 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects} 2
pass 7570371 2024-02-22 01:14:29 2024-02-22 01:16:53 2024-02-22 01:48:21 0:31:28 0:16:46 0:14:42 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} 3
pass 7570372 2024-02-22 01:14:29 2024-02-22 01:18:14 2024-02-22 01:52:33 0:34:19 0:23:32 0:10:47 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/test_rbd_api} 3
fail 7570373 2024-02-22 01:14:30 2024-02-22 01:19:14 2024-02-22 08:35:13 7:15:59 7:06:16 0:09:43 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'

fail 7570374 2024-02-22 01:14:31 2024-02-22 01:19:35 2024-02-22 01:45:14 0:25:39 0:13:52 0:11:47 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-02-22T01:38:48.794853+0000 mon.smithi012 (mon.0) 261 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

fail 7570375 2024-02-22 01:14:32 2024-02-22 01:23:49 2024-02-22 03:29:27 2:05:38 1:54:37 0:11:01 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

pass 7570376 2024-02-22 01:14:33 2024-02-22 01:24:00 2024-02-22 01:52:12 0:28:12 0:17:05 0:11:07 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
pass 7570377 2024-02-22 01:14:34 2024-02-22 01:24:20 2024-02-22 01:48:42 0:24:22 0:12:58 0:11:24 smithi main centos 9.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7570378 2024-02-22 01:14:34 2024-02-22 01:25:11 2024-02-22 03:08:01 1:42:50 1:34:28 0:08:22 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

dead 7570379 2024-02-22 01:14:35 2024-02-22 01:25:11 2024-02-22 01:45:32 0:20:21 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

pass 7570380 2024-02-22 01:14:36 2024-02-22 01:28:21 2024-02-22 02:02:42 0:34:21 0:24:32 0:09:49 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
fail 7570381 2024-02-22 01:14:37 2024-02-22 01:29:11 2024-02-22 02:16:10 0:46:59 0:34:21 0:12:38 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

Command failed on smithi019 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd pool rm unique_pool_5 unique_pool_5 --yes-i-really-really-mean-it'

fail 7570382 2024-02-22 01:14:38 2024-02-22 01:30:32 2024-02-22 02:02:38 0:32:06 0:21:12 0:10:54 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

"2024-02-22T01:51:38.468026+0000 mon.smithi003 (mon.0) 253 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

fail 7570383 2024-02-22 01:14:39 2024-02-22 01:30:52 2024-02-22 03:23:04 1:52:12 1:40:32 0:11:40 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/radosbench} 2
Failure Reason:

reached maximum tries (501) after waiting for 3000 seconds

pass 7570384 2024-02-22 01:14:39 2024-02-22 01:32:13 2024-02-22 01:55:32 0:23:19 0:12:23 0:10:56 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_ca_signed_key} 2
pass 7570385 2024-02-22 01:14:40 2024-02-22 01:32:13 2024-02-22 02:05:10 0:32:57 0:24:10 0:08:47 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
fail 7570386 2024-02-22 01:14:41 2024-02-22 01:32:13 2024-02-22 04:54:19 3:22:06 3:11:38 0:10:28 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi057 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 7570387 2024-02-22 01:14:42 2024-02-22 01:32:14 2024-02-22 01:59:53 0:27:39 0:14:48 0:12:51 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/redirect} 2
pass 7570388 2024-02-22 01:14:43 2024-02-22 01:34:44 2024-02-22 02:12:14 0:37:30 0:27:39 0:09:51 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7570389 2024-02-22 01:14:43 2024-02-22 01:35:05 2024-02-22 02:21:34 0:46:29 0:36:22 0:10:07 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/cache-snaps} 3
fail 7570390 2024-02-22 01:14:44 2024-02-22 01:36:05 2024-02-22 08:36:30 7:00:25 6:51:24 0:09:01 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

SSH connection to smithi152 was lost: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

fail 7570391 2024-02-22 01:14:45 2024-02-22 01:36:06 2024-02-22 08:38:05 7:01:59 6:51:08 0:10:51 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

SSH connection to smithi044 was lost: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid efa62f68-d124-11ee-95bd-87774f69a715 -e sha1=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 -- bash -c \'while ceph orch upgrade status | jq \'"\'"\'.in_progress\'"\'"\' | grep true ; do ceph orch ps ; ceph versions ; sleep 30 ; done\''

fail 7570392 2024-02-22 01:14:46 2024-02-22 01:36:26 2024-02-22 02:10:05 0:33:39 0:20:38 0:13:01 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/redirect_promote_tests} 2
Failure Reason:

Command failed on smithi164 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2'

fail 7570393 2024-02-22 01:14:47 2024-02-22 01:38:47 2024-02-22 02:04:05 0:25:18 0:12:59 0:12:19 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

"2024-02-22T01:59:01.728525+0000 mon.smithi018 (mon.0) 258 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

pass 7570394 2024-02-22 01:14:48 2024-02-22 01:40:47 2024-02-22 02:11:00 0:30:13 0:18:11 0:12:02 smithi main centos 9.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7570395 2024-02-22 01:14:48 2024-02-22 01:41:08 2024-02-22 02:06:17 0:25:09 0:14:56 0:10:13 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7570396 2024-02-22 01:14:49 2024-02-22 01:41:08 2024-02-22 02:03:42 0:22:34 0:11:48 0:10:46 smithi main centos 9.stream rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} 1
fail 7570397 2024-02-22 01:14:50 2024-02-22 01:42:59 2024-02-22 08:36:01 6:53:02 6:41:40 0:11:22 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 50 --op write_excl 50 --op append_excl 50 --pool unique_pool_0'

fail 7570398 2024-02-22 01:14:51 2024-02-22 01:44:09 2024-02-22 02:04:46 0:20:37 0:08:49 0:11:48 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm_repos} 1
Failure Reason:

Command failed (workunit test cephadm/test_repos.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'

pass 7570399 2024-02-22 01:14:52 2024-02-22 01:44:10 2024-02-22 02:07:48 0:23:38 0:14:12 0:09:26 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_timeout} 1
fail 7570400 2024-02-22 01:14:53 2024-02-22 01:44:40 2024-02-22 08:37:05 6:52:25 6:42:36 0:09:49 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'

fail 7570401 2024-02-22 01:14:53 2024-02-22 01:45:31 2024-02-22 08:35:09 6:49:38 6:39:33 0:10:05 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/small-objects-localized} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --localize-reads --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op write_excl 50 --pool unique_pool_0'

fail 7570402 2024-02-22 01:14:54 2024-02-22 01:45:51 2024-02-22 08:42:59 6:57:08 6:39:14 0:17:54 smithi main ubuntu 22.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'

fail 7570403 2024-02-22 01:14:55 2024-02-22 01:45:51 2024-02-22 02:11:20 0:25:29 0:14:11 0:11:18 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"2024-02-22T02:03:50.623030+0000 mon.smithi022 (mon.0) 261 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

fail 7570404 2024-02-22 01:14:56 2024-02-22 01:46:02 2024-02-22 05:05:34 3:19:32 3:08:50 0:10:42 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/crush} 1
Failure Reason:

Command failed (workunit test crush/crush-choose-args.sh) on smithi150 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/crush/crush-choose-args.sh'

pass 7570405 2024-02-22 01:14:57 2024-02-22 01:46:02 2024-02-22 03:14:39 1:28:37 1:17:24 0:11:13 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
fail 7570406 2024-02-22 01:14:57 2024-02-22 01:47:33 2024-02-22 03:04:29 1:16:56 1:08:13 0:08:43 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

pass 7570407 2024-02-22 01:14:58 2024-02-22 01:47:33 2024-02-22 02:13:48 0:26:15 0:15:00 0:11:15 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} 2
pass 7570408 2024-02-22 01:14:59 2024-02-22 01:47:33 2024-02-22 02:11:25 0:23:52 0:14:06 0:09:46 smithi main ubuntu 22.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7570409 2024-02-22 01:15:00 2024-02-22 01:47:44 2024-02-22 08:37:54 6:50:10 6:39:54 0:10:16 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/snaps-few-objects-balanced} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --balance-reads --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op copy_from 50 --op write_excl 50 --pool unique_pool_0'

pass 7570410 2024-02-22 01:15:01 2024-02-22 01:47:44 2024-02-22 02:21:24 0:33:40 0:24:08 0:09:32 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} 3
pass 7570411 2024-02-22 01:15:02 2024-02-22 01:48:05 2024-02-22 02:13:30 0:25:25 0:15:39 0:09:46 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
fail 7570412 2024-02-22 01:15:03 2024-02-22 01:48:05 2024-02-22 08:36:45 6:48:40 6:38:30 0:10:10 smithi main centos 9.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 3
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'

fail 7570413 2024-02-22 01:15:03 2024-02-22 01:48:05 2024-02-22 02:18:17 0:30:12 0:20:02 0:10:10 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-02-22T02:09:45.716328+0000 mon.smithi103 (mon.0) 252 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

pass 7570414 2024-02-22 01:15:04 2024-02-22 01:48:06 2024-02-22 02:27:28 0:39:22 0:29:12 0:10:10 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 2
pass 7570415 2024-02-22 01:15:05 2024-02-22 01:48:06 2024-02-22 02:15:07 0:27:01 0:16:54 0:10:07 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
fail 7570416 2024-02-22 01:15:06 2024-02-22 01:48:06 2024-02-22 08:36:31 6:48:25 6:38:30 0:09:55 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 50 --op write_excl 50 --op append_excl 50 --pool unique_pool_0'

pass 7570417 2024-02-22 01:15:07 2024-02-22 01:48:07 2024-02-22 02:14:51 0:26:44 0:16:21 0:10:23 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/write_fadvise_dontneed} 2
pass 7570418 2024-02-22 01:15:08 2024-02-22 01:48:47 2024-02-22 02:24:47 0:36:00 0:26:26 0:09:34 smithi main ubuntu 22.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7570419 2024-02-22 01:15:08 2024-02-22 01:48:48 2024-02-22 02:22:22 0:33:34 0:21:28 0:12:06 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} 3
fail 7570420 2024-02-22 01:15:09 2024-02-22 01:48:48 2024-02-22 05:11:10 3:22:22 3:11:31 0:10:51 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi191 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7570421 2024-02-22 01:15:10 2024-02-22 01:48:49 2024-02-22 05:22:12 3:33:23 3:21:16 0:12:07 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
Failure Reason:

Command failed (workunit test cls/test_cls_hello.sh) on smithi102 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_hello.sh'

fail 7570422 2024-02-22 01:15:11 2024-02-22 01:50:59 2024-02-22 02:18:02 0:27:03 0:13:51 0:13:12 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-02-22T02:11:08.866390+0000 mon.smithi027 (mon.0) 264 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

pass 7570423 2024-02-22 01:15:12 2024-02-22 01:52:20 2024-02-22 02:31:30 0:39:10 0:24:54 0:14:16 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v2only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} 3
pass 7570424 2024-02-22 01:15:13 2024-02-22 01:52:20 2024-02-22 02:24:19 0:31:59 0:20:52 0:11:07 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_orch_cli} 1
fail 7570425 2024-02-22 01:15:13 2024-02-22 01:52:21 2024-02-22 02:19:59 0:27:38 0:16:56 0:10:42 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi069 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7570426 2024-02-22 01:15:14 2024-02-22 01:52:41 2024-02-22 02:30:16 0:37:35 0:24:57 0:12:38 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
dead 7570427 2024-02-22 01:15:15 2024-02-22 01:54:32 2024-02-22 02:00:57 0:06:25 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/cache-agent-small} 2
Failure Reason:

Error reimaging machines: Expected smithi045's OS to be centos 9 but found ubuntu 22.04

dead 7570428 2024-02-22 01:15:16 2024-02-22 01:55:32 2024-02-22 02:01:25 0:05:53 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

Error reimaging machines: list index out of range

pass 7570429 2024-02-22 01:15:17 2024-02-22 01:55:32 2024-02-22 02:18:35 0:23:03 0:13:18 0:09:45 smithi main centos 9.stream rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/rgw 3-final} 1
fail 7570430 2024-02-22 01:15:18 2024-02-22 01:55:33 2024-02-22 02:22:42 0:27:09 0:13:35 0:13:34 smithi main centos 9.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{centos_latest} tasks/module_selftest} 2
Failure Reason:

Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)

fail 7570431 2024-02-22 01:15:19 2024-02-22 01:59:34 2024-02-22 02:19:13 0:19:39 0:09:56 0:09:43 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

fail 7570432 2024-02-22 01:15:19 2024-02-22 01:59:44 2024-02-22 02:08:32 0:08:48 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi045 with status 1: 'sudo yum install -y kernel'

fail 7570433 2024-02-22 01:15:20 2024-02-22 02:01:15 2024-02-22 04:58:15 2:57:00 2:45:04 0:11:56 smithi main ubuntu 22.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-radosbench} 2
Failure Reason:

reached maximum tries (801) after waiting for 4800 seconds

fail 7570434 2024-02-22 01:15:21 2024-02-22 02:01:25 2024-02-22 02:14:54 0:13:29 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

Failed to reconnect to smithi045

fail 7570435 2024-02-22 01:15:22 2024-02-22 02:01:46 2024-02-22 02:53:36 0:51:50 0:40:35 0:11:15 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/cache-snaps-balanced} 2
Failure Reason:

Command failed on smithi055 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

pass 7570436 2024-02-22 01:15:23 2024-02-22 02:02:46 2024-02-22 02:28:08 0:25:22 0:14:32 0:10:50 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_rgw_multisite} 3
fail 7570437 2024-02-22 01:15:24 2024-02-22 02:03:27 2024-02-22 02:31:10 0:27:43 0:17:46 0:09:57 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7570438 2024-02-22 01:15:24 2024-02-22 02:03:27 2024-02-22 02:46:37 0:43:10 0:33:14 0:09:56 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7570439 2024-02-22 01:15:25 2024-02-22 02:03:47 2024-02-22 08:37:50 6:34:03 6:23:53 0:10:10 smithi main ubuntu 22.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --pool-snaps --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op copy_from 50 --op write_excl 50 --pool unique_pool_0'

pass 7570440 2024-02-22 01:15:26 2024-02-22 02:03:48 2024-02-22 02:27:23 0:23:35 0:14:10 0:09:25 smithi main ubuntu 22.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7570441 2024-02-22 01:15:27 2024-02-22 02:03:48 2024-02-22 02:40:28 0:36:40 0:25:58 0:10:42 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/cache-snaps} 2
pass 7570442 2024-02-22 01:15:28 2024-02-22 02:05:19 2024-02-22 02:38:37 0:33:18 0:21:42 0:11:36 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_set_mon_crush_locations} 3
pass 7570443 2024-02-22 01:15:28 2024-02-22 02:06:09 2024-02-22 03:54:09 1:48:00 1:36:38 0:11:22 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} 3
fail 7570444 2024-02-22 01:15:29 2024-02-22 02:06:20 2024-02-22 02:38:44 0:32:24 0:18:36 0:13:48 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

"2024-02-22T02:31:15.298785+0000 mon.smithi037 (mon.0) 253 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

pass 7570445 2024-02-22 01:15:30 2024-02-22 02:11:11 2024-02-22 02:39:02 0:27:51 0:17:16 0:10:35 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/cache} 2
pass 7570446 2024-02-22 01:15:31 2024-02-22 02:11:11 2024-02-22 02:35:20 0:24:09 0:13:20 0:10:49 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 7570447 2024-02-22 01:15:32 2024-02-22 02:11:32 2024-02-22 05:51:47 3:40:15 3:30:14 0:10:01 smithi main ubuntu 22.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/misc} 1
Failure Reason:

Command failed (workunit test misc/test-ceph-helpers.sh) on smithi046 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/test-ceph-helpers.sh'

pass 7570448 2024-02-22 01:15:33 2024-02-22 02:11:32 2024-02-22 02:37:32 0:26:00 0:15:26 0:10:34 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
pass 7570449 2024-02-22 01:15:34 2024-02-22 02:12:22 2024-02-22 02:58:56 0:46:34 0:34:06 0:12:28 smithi main centos 9.stream rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest}} 2
fail 7570450 2024-02-22 01:15:34 2024-02-22 02:13:33 2024-02-22 08:37:42 6:24:09 6:12:31 0:11:38 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-small-objects-balanced} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 50 --op write_excl 0 --op append_excl 50 --pool unique_pool_0'

fail 7570451 2024-02-22 01:15:35 2024-02-22 02:13:54 2024-02-22 02:54:09 0:40:15 0:29:18 0:10:57 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
Failure Reason:

Command failed on smithi042 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.7'

fail 7570452 2024-02-22 01:15:36 2024-02-22 02:14:55 2024-02-22 02:41:29 0:26:34 0:13:51 0:12:43 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

"2024-02-22T02:33:53.471033+0000 mon.smithi049 (mon.0) 258 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

fail 7570453 2024-02-22 01:15:37 2024-02-22 02:15:15 2024-02-22 08:38:27 6:23:12 6:10:36 0:12:36 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/pool-snaps-few-objects} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --pool-snaps --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op copy_from 50 --op write_excl 50 --pool unique_pool_0'

pass 7570454 2024-02-22 01:15:38 2024-02-22 02:19:06 2024-02-22 02:42:07 0:23:01 0:14:01 0:09:00 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} 2
fail 7570455 2024-02-22 01:15:38 2024-02-22 02:19:06 2024-02-22 05:44:31 3:25:25 3:15:43 0:09:42 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi019 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

fail 7570456 2024-02-22 01:15:39 2024-02-22 02:19:07 2024-02-22 05:40:45 3:21:38 3:11:22 0:10:16 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi040 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7570457 2024-02-22 01:15:40 2024-02-22 02:19:07 2024-02-22 02:45:21 0:26:14 0:14:16 0:11:58 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

"2024-02-22T02:38:23.965432+0000 mon.smithi027 (mon.0) 262 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

pass 7570458 2024-02-22 01:15:41 2024-02-22 02:19:07 2024-02-22 03:32:09 1:13:02 1:00:39 0:12:23 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7570459 2024-02-22 01:15:42 2024-02-22 02:19:08 2024-02-22 03:43:02 1:23:54 0:42:53 0:41:01 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} 3
fail 7570460 2024-02-22 01:15:43 2024-02-22 02:19:08 2024-02-22 08:37:35 6:18:27 6:06:27 0:12:00 smithi main centos 9.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 3
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 100 --pool unique_pool_0'

fail 7570461 2024-02-22 01:15:43 2024-02-22 02:19:08 2024-02-22 08:36:31 6:17:23 6:08:58 0:08:25 smithi main centos 9.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

SSH connection to smithi184 was lost: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin --log-to-stderr --format json -n client.0 --cluster ceph gc process --include-all'

fail 7570462 2024-02-22 01:15:44 2024-02-22 02:19:09 2024-02-22 08:38:51 6:19:42 6:08:57 0:10:45 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

SSH connection to smithi114 was lost: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid d861f3e0-d12a-11ee-95bd-87774f69a715 -e sha1=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 -- bash -c \'while ceph orch upgrade status | jq \'"\'"\'.in_progress\'"\'"\' | grep true ; do ceph orch ps ; ceph versions ; sleep 30 ; done\''

pass 7570463 2024-02-22 01:15:45 2024-02-22 02:19:09 2024-02-22 03:52:58 1:33:49 1:23:13 0:10:36 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/radosbench} 2
pass 7570464 2024-02-22 01:15:46 2024-02-22 02:19:10 2024-02-22 02:44:11 0:25:01 0:15:10 0:09:51 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
fail 7570465 2024-02-22 01:15:47 2024-02-22 02:19:10 2024-02-22 08:36:59 6:17:49 6:07:38 0:10:11 smithi main ubuntu 22.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --ec-pool --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 0 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 50 --op write_excl 0 --op append_excl 50 --pool unique_pool_0'

pass 7570466 2024-02-22 01:15:48 2024-02-22 02:19:10 2024-02-22 02:56:50 0:37:40 0:26:22 0:11:18 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7570467 2024-02-22 01:15:48 2024-02-22 02:19:11 2024-02-22 02:47:29 0:28:18 0:18:40 0:09:38 smithi main ubuntu 22.04 rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_22.04} 1-start 2-services/basic 3-final} 1
pass 7570468 2024-02-22 01:15:49 2024-02-22 02:19:11 2024-02-22 02:44:30 0:25:19 0:11:43 0:13:36 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} thrashers/none thrashosds-health workloads/redirect_promote_tests} 2
fail 7570469 2024-02-22 01:15:50 2024-02-22 02:21:32 2024-02-22 02:46:17 0:24:45 0:15:37 0:09:08 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_cephadm_timeout} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm_timeout.py) on smithi136 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dbe7fcec16eddee6239e4dd68c4d0203c7df6461 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm_timeout.py'

fail 7570470 2024-02-22 01:15:51 2024-02-22 02:21:32 2024-02-22 04:15:17 1:53:45 1:43:30 0:10:15 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7570471 2024-02-22 01:15:52 2024-02-22 02:21:42 2024-02-22 08:37:10 6:15:28 6:05:19 0:10:09 smithi main centos 9.stream rados/singleton/{all/thrash-backfill-full mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest}} 2
Failure Reason:

SSH connection to smithi154 was lost: "/bin/sh -c 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rados --no-log-to-stderr --name client.0 -p unique_pool_0 bench 1800 rand'"

fail 7570472 2024-02-22 01:15:52 2024-02-22 02:21:43 2024-02-22 02:52:18 0:30:35 0:19:32 0:11:03 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

"2024-02-22T02:43:15.587007+0000 mon.smithi077 (mon.0) 254 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

pass 7570473 2024-02-22 01:15:53 2024-02-22 02:22:23 2024-02-22 02:48:06 0:25:43 0:11:58 0:13:45 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/redirect_set_object} 2
pass 7570474 2024-02-22 01:15:54 2024-02-22 02:24:24 2024-02-22 02:54:34 0:30:10 0:19:28 0:10:42 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_extra_daemon_features} 2
fail 7570475 2024-02-22 01:15:55 2024-02-22 02:24:54 2024-02-22 08:37:28 6:12:34 6:01:44 0:10:50 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/set-chunks-read} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --set_chunk --low_tier_pool low_tier --max-ops 4000 --objects 300 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 10 --op tier_promote 10 --op write_excl 50 --pool unique_pool_0'

pass 7570476 2024-02-22 01:15:56 2024-02-22 02:25:25 2024-02-22 03:43:41 1:18:16 1:08:31 0:09:45 smithi main ubuntu 22.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/mon} 1
pass 7570477 2024-02-22 01:15:57 2024-02-22 02:25:25 2024-02-22 02:54:34 0:29:09 0:16:16 0:12:53 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
unknown 7570479 2024-02-22 01:15:58 2024-02-22 01:15:58 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps}