Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7213009 2023-03-18 00:58:16 2023-03-20 14:20:31 2023-03-20 15:28:32 1:08:01 0:56:56 0:11:05 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/cache-pool-snaps-readproxy} 2
fail 7213010 2023-03-18 00:58:16 2023-03-20 14:22:49 2023-03-20 15:55:18 1:32:29 1:19:23 0:13:06 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_nfs} 1
Failure Reason:

Test failure: test_create_and_delete_cluster (tasks.cephfs.test_nfs.TestNFS)

pass 7213011 2023-03-18 00:58:17 2023-03-20 14:22:56 2023-03-20 15:24:45 1:01:49 0:46:36 0:15:13 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 7213012 2023-03-18 00:58:18 2023-03-20 14:24:18 2023-03-20 15:00:43 0:36:25 0:16:38 0:19:47 smithi main centos 8.stream rados/singleton/{all/deduptool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test rados/test_dedup_tool.sh) on smithi146 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_dedup_tool.sh'

pass 7213013 2023-03-18 00:58:18 2023-03-20 14:33:11 2023-03-20 15:36:11 1:03:00 0:47:46 0:15:14 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
pass 7213014 2023-03-18 00:58:19 2023-03-20 14:36:23 2023-03-20 15:03:11 0:26:48 0:20:13 0:06:35 smithi main rhel 8.6 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7213015 2023-03-18 00:58:20 2023-03-20 14:36:29 2023-03-20 18:46:30 4:10:01 3:58:48 0:11:13 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/osd} 1
Failure Reason:

Command failed (workunit test osd/repro_long_log.sh) on smithi110 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/repro_long_log.sh'

pass 7213016 2023-03-18 00:58:21 2023-03-20 14:37:00 2023-03-20 16:15:09 1:38:09 1:24:29 0:13:40 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/lockdep} 2
pass 7213017 2023-03-18 00:58:21 2023-03-20 14:38:22 2023-03-21 02:12:03 11:33:41 11:17:26 0:16:15 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{ubuntu_latest} tasks/prometheus} 2
pass 7213018 2023-03-18 00:58:22 2023-03-20 14:40:47 2023-03-20 15:15:17 0:34:30 0:21:22 0:13:08 smithi main ubuntu 20.04 rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/rgw 3-final} 1
pass 7213019 2023-03-18 00:58:23 2023-03-20 14:40:51 2023-03-20 15:48:59 1:08:08 0:48:29 0:19:39 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7213020 2023-03-18 00:58:24 2023-03-20 14:46:50 2023-03-20 16:02:44 1:15:54 1:02:39 0:13:15 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-pool-snaps} 2
pass 7213021 2023-03-18 00:58:24 2023-03-20 14:47:02 2023-03-20 15:19:17 0:32:15 0:18:42 0:13:33 smithi main ubuntu 20.04 rados/singleton/{all/divergent_priors mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213022 2023-03-18 00:58:25 2023-03-20 14:49:00 2023-03-20 15:18:59 0:29:59 0:22:15 0:07:44 smithi main rhel 8.6 rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 7213023 2023-03-18 00:58:26 2023-03-20 14:49:13 2023-03-20 15:32:43 0:43:30 0:32:34 0:10:56 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
pass 7213024 2023-03-18 00:58:26 2023-03-20 14:49:51 2023-03-20 15:31:40 0:41:49 0:29:23 0:12:26 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/scrub_test} 2
fail 7213025 2023-03-18 00:58:27 2023-03-20 14:50:28 2023-03-20 19:56:18 5:05:50 4:49:09 0:16:41 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/force-sync-many workloads/rados_mon_osdmap_prune} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi130 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

pass 7213026 2023-03-18 00:58:28 2023-03-20 14:54:26 2023-03-20 16:03:12 1:08:46 0:55:21 0:13:25 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-snaps-balanced} 2
pass 7213027 2023-03-18 00:58:29 2023-03-20 14:54:33 2023-03-20 15:27:32 0:32:59 0:19:55 0:13:04 smithi main ubuntu 20.04 rados/singleton/{all/divergent_priors2 mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213028 2023-03-18 00:58:29 2023-03-20 14:54:39 2023-03-20 15:21:15 0:26:36 0:15:09 0:11:27 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_read} 1
fail 7213029 2023-03-18 00:58:30 2023-03-20 14:54:46 2023-03-21 03:17:52 12:23:06 12:08:20 0:14:46 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v2only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

hit max job timeout

pass 7213030 2023-03-18 00:58:31 2023-03-20 14:56:40 2023-03-20 15:31:27 0:34:47 0:21:27 0:13:20 smithi main ubuntu 20.04 rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 2
pass 7213031 2023-03-18 00:58:32 2023-03-20 14:56:57 2023-03-20 15:29:41 0:32:44 0:22:02 0:10:42 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 7213032 2023-03-18 00:58:32 2023-03-20 14:58:19 2023-03-20 16:00:16 1:01:57 0:48:49 0:13:08 smithi main centos 8.stream rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7213033 2023-03-18 00:58:33 2023-03-20 15:00:49 2023-03-20 15:32:09 0:31:20 0:19:39 0:11:41 smithi main centos 8.stream rados/singleton/{all/dump-stuck mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
pass 7213034 2023-03-18 00:58:34 2023-03-20 15:00:58 2023-03-20 16:09:05 1:08:07 0:52:13 0:15:54 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-snaps} 2
pass 7213035 2023-03-18 00:58:35 2023-03-20 15:03:27 2023-03-20 16:06:00 1:02:33 0:48:08 0:14:25 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_orch_cli} 1
pass 7213036 2023-03-18 00:58:35 2023-03-20 15:06:38 2023-03-20 15:38:54 0:32:16 0:15:36 0:16:40 smithi main rhel 8.6 rados/objectstore/{backends/objectstore-memstore supported-random-distro$/{rhel_8}} 1
pass 7213037 2023-03-18 00:58:36 2023-03-20 15:15:30 2023-03-20 15:59:35 0:44:05 0:32:38 0:11:27 smithi main rhel 8.6 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{rhel_8}} 2
pass 7213038 2023-03-18 00:58:37 2023-03-20 15:17:21 2023-03-20 16:35:36 1:18:15 1:05:21 0:12:54 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects} 2
pass 7213039 2023-03-18 00:58:37 2023-03-20 15:19:22 2023-03-20 16:39:15 1:19:53 1:07:36 0:12:17 smithi main centos 8.stream rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
pass 7213040 2023-03-18 00:58:38 2023-03-20 15:19:32 2023-03-20 16:30:51 1:11:19 0:52:59 0:18:20 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7213041 2023-03-18 00:58:39 2023-03-20 15:25:04 2023-03-20 17:17:11 1:52:07 1:43:15 0:08:52 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
pass 7213042 2023-03-18 00:58:40 2023-03-20 15:25:16 2023-03-20 16:34:44 1:09:28 0:53:17 0:16:11 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/cache} 2
pass 7213043 2023-03-18 00:58:40 2023-03-20 15:28:48 2023-03-20 16:26:01 0:57:13 0:44:58 0:12:15 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7213044 2023-03-18 00:58:41 2023-03-20 15:30:01 2023-03-20 16:19:45 0:49:44 0:34:21 0:15:23 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} tasks/libcephsqlite} 2
pass 7213045 2023-03-18 00:58:42 2023-03-20 15:31:13 2023-03-20 16:11:44 0:40:31 0:27:18 0:13:13 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 7213046 2023-03-18 00:58:42 2023-03-20 15:31:44 2023-03-20 15:58:49 0:27:05 0:17:01 0:10:04 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-mapper.sh) on smithi026 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-mapper.sh'

pass 7213047 2023-03-18 00:58:43 2023-03-20 15:31:49 2023-03-20 18:05:58 2:34:09 2:22:02 0:12:07 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
pass 7213048 2023-03-18 00:58:44 2023-03-20 15:31:55 2023-03-20 16:17:03 0:45:08 0:27:53 0:17:15 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{ubuntu_latest} tasks/workunits} 2
pass 7213049 2023-03-18 00:58:45 2023-03-20 15:32:59 2023-03-20 15:59:24 0:26:25 0:15:11 0:11:14 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
dead 7213050 2023-03-18 00:58:45 2023-03-20 15:33:11 2023-03-21 06:41:59 15:08:48 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

hit max job timeout

pass 7213051 2023-03-18 00:58:46 2023-03-20 15:36:42 2023-03-20 19:36:27 3:59:45 3:49:33 0:10:12 smithi main rhel 8.6 rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 1
pass 7213052 2023-03-18 00:58:47 2023-03-20 15:39:02 2023-03-20 16:53:52 1:14:50 0:59:23 0:15:27 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7213053 2023-03-18 00:58:48 2023-03-20 15:39:31 2023-03-20 16:37:14 0:57:43 0:49:11 0:08:32 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/dedup-io-mixed} 2
pass 7213054 2023-03-18 00:58:48 2023-03-20 15:39:34 2023-03-20 17:07:26 1:27:52 1:09:41 0:18:11 smithi main rhel 8.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 7213055 2023-03-18 00:58:49 2023-03-20 15:48:00 2023-03-20 16:14:23 0:26:23 0:13:54 0:12:29 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213056 2023-03-18 00:58:50 2023-03-20 15:48:08 2023-03-20 16:27:00 0:38:52 0:25:03 0:13:49 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7213057 2023-03-18 00:58:51 2023-03-20 15:48:15 2023-03-20 16:14:53 0:26:38 0:13:10 0:13:28 smithi main centos 8.stream rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
fail 7213058 2023-03-18 00:58:51 2023-03-20 15:49:22 2023-03-20 21:56:01 6:06:39 5:51:42 0:14:57 smithi main centos 8.stream rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_workunits} 2
Failure Reason:

Command failed (workunit test mon/osd.sh) on smithi177 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/osd.sh'

pass 7213059 2023-03-18 00:58:52 2023-03-20 15:49:35 2023-03-20 16:58:45 1:09:10 0:55:38 0:13:32 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/dedup-io-snaps} 2
pass 7213060 2023-03-18 00:58:53 2023-03-20 15:52:54 2023-03-20 16:19:12 0:26:18 0:17:21 0:08:57 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_adoption} 1
fail 7213061 2023-03-18 00:58:54 2023-03-20 15:53:02 2023-03-20 16:31:26 0:38:24 0:22:52 0:15:32 smithi main centos 8.stream rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 3
Failure Reason:

"2023-03-20T16:23:54.516871+0000 mon.a (mon.0) 11 : cluster [WRN] Health check failed: 1/9 mons down, quorum a,b,c,d,e,f,g,h (MON_DOWN)" in cluster log

pass 7213062 2023-03-18 00:58:54 2023-03-20 15:53:10 2023-03-20 16:45:13 0:52:03 0:38:56 0:13:07 smithi main centos 8.stream rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7213063 2023-03-18 00:58:55 2023-03-20 15:55:44 2023-03-20 17:04:16 1:08:32 0:56:02 0:12:30 smithi main rhel 8.6 rados/singleton/{all/lost-unfound-delete mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
pass 7213064 2023-03-18 00:58:56 2023-03-20 15:59:03 2023-03-20 18:33:52 2:34:49 2:20:26 0:14:23 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
pass 7213065 2023-03-18 00:58:57 2023-03-20 15:59:13 2023-03-20 16:27:38 0:28:25 0:16:18 0:12:07 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_write} 1
pass 7213066 2023-03-18 00:58:57 2023-03-20 15:59:33 2023-03-20 17:00:07 1:00:34 0:50:18 0:10:16 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/rados_api_tests} 2
pass 7213067 2023-03-18 00:58:58 2023-03-20 16:00:04 2023-03-20 17:23:58 1:23:54 1:14:20 0:09:34 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/pool-snaps-few-objects} 2
pass 7213068 2023-03-18 00:58:59 2023-03-20 16:00:42 2023-03-21 01:21:40 9:20:58 9:04:14 0:16:44 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} 3
pass 7213069 2023-03-18 00:59:00 2023-03-20 16:05:14 2023-03-20 17:10:39 1:05:25 0:51:38 0:13:47 smithi main ubuntu 20.04 rados/singleton/{all/lost-unfound mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213070 2023-03-18 00:59:00 2023-03-20 16:06:11 2023-03-20 16:42:08 0:35:57 0:20:38 0:15:19 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213071 2023-03-18 00:59:01 2023-03-20 16:08:10 2023-03-20 17:37:22 1:29:12 1:14:19 0:14:53 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
fail 7213072 2023-03-18 00:59:02 2023-03-20 16:09:19 2023-03-20 16:42:56 0:33:37 0:18:57 0:14:40 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi090 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 7213073 2023-03-18 00:59:02 2023-03-20 16:09:30 2023-03-20 17:35:59 1:26:29 1:13:58 0:12:31 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7213074 2023-03-18 00:59:03 2023-03-20 16:10:45 2023-03-20 16:50:08 0:39:23 0:25:51 0:13:32 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7213075 2023-03-18 00:59:04 2023-03-20 16:10:56 2023-03-20 17:47:16 1:36:20 1:15:06 0:21:14 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7213076 2023-03-18 00:59:05 2023-03-20 16:19:52 2023-03-20 18:08:28 1:48:36 1:36:19 0:12:17 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/rados_api_tests} 2
pass 7213077 2023-03-18 00:59:05 2023-03-20 16:20:09 2023-03-20 16:48:23 0:28:14 0:15:43 0:12:31 smithi main ubuntu 20.04 rados/singleton/{all/max-pg-per-osd.from-mon mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
dead 7213078 2023-03-18 00:59:06 2023-03-20 16:20:18 2023-03-21 07:58:50 15:38:32 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/dashboard} 2
Failure Reason:

hit max job timeout

dead 7213079 2023-03-18 00:59:07 2023-03-20 16:26:23 2023-03-21 05:04:55 12:38:32 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{rhel_8} tasks/crash} 2
Failure Reason:

hit max job timeout

fail 7213080 2023-03-18 00:59:08 2023-03-20 16:27:28 2023-03-20 20:26:19 3:58:51 3:43:18 0:15:33 smithi main ubuntu 20.04 rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

"2023-03-20T18:29:25.055561+0000 osd.2 (osd.2) 1 : cluster [WRN] 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'rbd' : 1 ])" in cluster log

pass 7213081 2023-03-18 00:59:08 2023-03-20 16:27:36 2023-03-20 17:18:29 0:50:53 0:35:13 0:15:40 smithi main ubuntu 20.04 rados/rest/{mgr-restful supported-random-distro$/{ubuntu_latest}} 1
fail 7213082 2023-03-18 00:59:09 2023-03-20 16:28:00 2023-03-20 16:51:30 0:23:30 0:06:57 0:16:33 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi142 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7213083 2023-03-18 00:59:10 2023-03-20 16:31:03 2023-03-20 17:17:13 0:46:10 0:33:15 0:12:55 smithi main centos 8.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7213084 2023-03-18 00:59:10 2023-03-20 16:31:18 2023-03-20 16:58:54 0:27:36 0:18:22 0:09:14 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/c2c} 1
dead 7213085 2023-03-18 00:59:11 2023-03-20 16:31:35 2023-03-21 05:53:12 13:21:37 smithi main rhel 8.6 rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

hit max job timeout

pass 7213086 2023-03-18 00:59:12 2023-03-20 16:32:04 2023-03-20 17:17:50 0:45:46 0:33:09 0:12:37 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
pass 7213087 2023-03-18 00:59:13 2023-03-20 16:32:28 2023-03-20 18:08:17 1:35:49 1:19:49 0:16:00 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
fail 7213088 2023-03-18 00:59:13 2023-03-20 16:34:21 2023-03-20 17:07:48 0:33:27 0:23:00 0:10:27 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi121.front.sepia.ceph.com: ['type=AVC msg=audit(1679331723.805:19882): avc: denied { ioctl } for pid=109677 comm="iptables" path="/var/lib/containers/storage/overlay/3ea251cca9c6c4cf9ecbad596d7a4d1d1681c9fde93bdff9c6e714c7f9b049b6/merged" dev="overlay" ino=3278939 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

pass 7213089 2023-03-18 00:59:14 2023-03-20 16:34:32 2023-03-20 18:02:09 1:27:37 1:13:28 0:14:09 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7213090 2023-03-18 00:59:15 2023-03-20 16:35:08 2023-03-20 17:21:17 0:46:09 0:33:44 0:12:25 smithi main centos 8.stream rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
pass 7213091 2023-03-18 00:59:16 2023-03-20 16:35:26 2023-03-20 18:57:56 2:22:30 2:10:44 0:11:46 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7213092 2023-03-18 00:59:16 2023-03-20 16:36:05 2023-03-20 20:36:06 4:00:01 3:44:43 0:15:18 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7213093 2023-03-18 00:59:17 2023-03-20 16:37:40 2023-03-20 17:15:19 0:37:39 0:22:36 0:15:03 smithi main centos 8.stream rados/singleton-nomsgr/{all/balancer mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7213094 2023-03-18 00:59:18 2023-03-20 16:39:39 2023-03-20 17:13:06 0:33:27 0:16:54 0:16:33 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4K_rand_read} 1
pass 7213095 2023-03-18 00:59:19 2023-03-20 16:42:39 2023-03-20 18:34:04 1:51:25 1:34:30 0:16:55 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/snaps-few-objects} 2
pass 7213096 2023-03-18 00:59:19 2023-03-20 16:59:23 2023-03-20 18:09:04 1:09:41 0:52:27 0:17:14 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} tasks/rados_cls_all} 2
pass 7213097 2023-03-18 00:59:20 2023-03-20 17:00:27 2023-03-20 17:50:06 0:49:39 0:37:06 0:12:33 smithi main centos 8.stream rados/singleton/{all/max-pg-per-osd.from-replica mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
pass 7213098 2023-03-18 00:59:21 2023-03-20 17:00:43 2023-03-20 17:40:27 0:39:44 0:23:37 0:16:07 smithi main centos 8.stream rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools_crun} 1-start 2-services/basic 3-final} 1
pass 7213099 2023-03-18 00:59:21 2023-03-20 17:04:49 2023-03-20 17:45:35 0:40:46 0:20:40 0:20:06 smithi main ubuntu 20.04 rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 3
pass 7213100 2023-03-18 00:59:22 2023-03-20 17:08:19 2023-03-20 18:09:18 1:00:59 0:49:21 0:11:38 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/radosbench} 2
pass 7213101 2023-03-18 00:59:23 2023-03-20 17:09:00 2023-03-20 19:07:37 1:58:37 1:39:06 0:19:31 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
pass 7213102 2023-03-18 00:59:24 2023-03-20 17:13:38 2023-03-20 18:11:25 0:57:47 0:42:48 0:14:59 smithi main centos 8.stream rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7213103 2023-03-18 00:59:24 2023-03-20 17:15:46 2023-03-20 18:01:19 0:45:33 0:28:23 0:17:10 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
pass 7213104 2023-03-18 00:59:25 2023-03-20 17:17:28 2023-03-20 18:01:30 0:44:02 0:29:36 0:14:26 smithi main centos 8.stream rados/singleton/{all/mon-auth-caps mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
pass 7213105 2023-03-18 00:59:26 2023-03-20 17:17:42 2023-03-20 19:09:50 1:52:08 1:36:38 0:15:30 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/redirect} 2
pass 7213106 2023-03-18 00:59:27 2023-03-20 17:18:43 2023-03-20 17:56:11 0:37:28 0:24:08 0:13:20 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7213107 2023-03-18 00:59:27 2023-03-20 17:18:58 2023-03-20 18:46:32 1:27:34 1:14:52 0:12:42 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
pass 7213108 2023-03-18 00:59:28 2023-03-20 17:19:09 2023-03-20 17:43:35 0:24:26 0:08:57 0:15:29 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm_repos} 1
pass 7213109 2023-03-18 00:59:29 2023-03-20 17:19:24 2023-03-20 18:46:46 1:27:22 1:15:16 0:12:06 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7213110 2023-03-18 00:59:30 2023-03-20 17:19:33 2023-03-20 18:04:48 0:45:15 0:29:09 0:16:06 smithi main ubuntu 20.04 rados/singleton/{all/mon-config-key-caps mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213111 2023-03-18 00:59:30 2023-03-20 17:21:34 2023-03-20 18:38:18 1:16:44 0:45:56 0:30:48 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7213112 2023-03-18 00:59:31 2023-03-20 17:35:09 2023-03-20 18:06:51 0:31:42 0:16:54 0:14:48 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} 1
dead 7213113 2023-03-18 00:59:32 2023-03-20 17:35:28 2023-03-21 10:21:27 16:45:59 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{rhel_8} tasks/failover} 2
Failure Reason:

hit max job timeout

pass 7213114 2023-03-18 00:59:33 2023-03-20 17:35:50 2023-03-20 18:07:18 0:31:28 0:15:03 0:16:25 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7213115 2023-03-18 00:59:33 2023-03-20 17:36:20 2023-03-20 18:15:17 0:38:57 0:30:05 0:08:52 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/crush} 1
pass 7213116 2023-03-18 00:59:34 2023-03-20 17:36:34 2023-03-20 19:23:16 1:46:42 1:32:53 0:13:49 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/redirect_promote_tests} 2
dead 7213117 2023-03-18 00:59:35 2023-03-20 17:37:35 2023-03-21 06:27:27 12:49:52 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

hit max job timeout

pass 7213118 2023-03-18 00:59:35 2023-03-20 17:41:01 2023-03-20 19:27:12 1:46:11 1:32:01 0:14:10 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
pass 7213119 2023-03-18 00:59:36 2023-03-20 17:41:25 2023-03-20 19:49:56 2:08:31 1:48:47 0:19:44 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
pass 7213120 2023-03-18 00:59:37 2023-03-20 17:45:55 2023-03-20 18:28:29 0:42:34 0:30:01 0:12:33 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/rados_python} 2
pass 7213121 2023-03-18 00:59:38 2023-03-20 17:46:04 2023-03-20 19:00:36 1:14:32 0:59:08 0:15:24 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7213122 2023-03-18 00:59:38 2023-03-20 17:47:37 2023-03-20 18:22:25 0:34:48 0:21:23 0:13:25 smithi main ubuntu 20.04 rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
fail 7213123 2023-03-18 00:59:39 2023-03-20 17:47:58 2023-03-20 18:27:54 0:39:56 0:24:34 0:15:22 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason:

Command failed (workunit test cephadm/test_iscsi_pids_limit.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_iscsi_pids_limit.sh'

pass 7213124 2023-03-18 00:59:40 2023-03-20 17:50:38 2023-03-20 18:27:19 0:36:41 0:17:28 0:19:13 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213125 2023-03-18 00:59:41 2023-03-20 17:56:41 2023-03-20 19:44:29 1:47:48 1:31:54 0:15:54 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/redirect_set_object} 2
pass 7213126 2023-03-18 00:59:41 2023-03-20 17:58:58 2023-03-20 18:21:34 0:22:36 0:09:45 0:12:51 smithi main ubuntu 20.04 rados/objectstore/{backends/fusestore supported-random-distro$/{ubuntu_latest}} 1
pass 7213127 2023-03-18 00:59:42 2023-03-20 17:59:18 2023-03-20 18:32:53 0:33:35 0:22:15 0:11:20 smithi main centos 8.stream rados/singleton/{all/mon-config mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8}} 1
pass 7213128 2023-03-18 00:59:43 2023-03-20 17:59:40 2023-03-20 22:10:38 4:10:58 3:56:49 0:14:09 smithi main centos 8.stream rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/pool-create-delete} 2
pass 7213129 2023-03-18 00:59:43 2023-03-20 18:00:23 2023-03-20 18:39:11 0:38:48 0:27:32 0:11:16 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
pass 7213130 2023-03-18 00:59:44 2023-03-20 18:01:42 2023-03-20 19:27:20 1:25:38 1:08:58 0:16:40 smithi main centos 8.stream rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/mon_recovery} 3
pass 7213131 2023-03-18 00:59:45 2023-03-20 18:02:39 2023-03-20 18:34:35 0:31:56 0:19:34 0:12:22 smithi main centos 8.stream rados/singleton-nomsgr/{all/export-after-evict mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7213132 2023-03-18 00:59:46 2023-03-20 18:02:55 2023-03-20 20:19:04 2:16:09 2:01:03 0:15:06 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/set-chunks-read} 2
pass 7213133 2023-03-18 00:59:46 2023-03-20 18:04:56 2023-03-20 18:33:18 0:28:22 0:15:35 0:12:47 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} 1
pass 7213134 2023-03-18 00:59:47 2023-03-20 18:05:11 2023-03-20 19:22:04 1:16:53 1:03:27 0:13:26 smithi main ubuntu 20.04 rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213135 2023-03-18 00:59:48 2023-03-20 18:05:22 2023-03-20 19:56:39 1:51:17 1:40:30 0:10:47 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7213136 2023-03-18 00:59:49 2023-03-20 18:06:31 2023-03-20 19:20:11 1:13:40 0:55:05 0:18:35 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} tasks/rados_stress_watch} 2
pass 7213137 2023-03-18 00:59:49 2023-03-20 18:07:47 2023-03-20 18:37:03 0:29:16 0:16:09 0:13:07 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/full-tiering mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213138 2023-03-18 00:59:50 2023-03-20 18:08:33 2023-03-20 19:35:07 1:26:34 1:13:19 0:13:15 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
pass 7213139 2023-03-18 00:59:51 2023-03-20 18:08:42 2023-03-20 19:48:50 1:40:08 1:23:24 0:16:44 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 7213140 2023-03-18 00:59:52 2023-03-20 18:09:18 2023-03-20 19:21:47 1:12:29 0:59:07 0:13:22 smithi main ubuntu 20.04 rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213141 2023-03-18 00:59:52 2023-03-20 18:09:30 2023-03-20 19:35:52 1:26:22 1:15:12 0:11:10 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/small-objects-balanced} 2
fail 7213142 2023-03-18 00:59:53 2023-03-20 18:09:39 2023-03-20 20:27:58 2:18:19 2:05:37 0:12:42 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_create_and_delete_cluster (tasks.cephfs.test_nfs.TestNFS)

pass 7213143 2023-03-18 00:59:54 2023-03-20 18:09:49 2023-03-20 19:28:47 1:18:58 1:01:51 0:17:07 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7213144 2023-03-18 00:59:54 2023-03-20 18:11:36 2023-03-20 19:50:16 1:38:40 1:17:15 0:21:25 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
dead 7213145 2023-03-18 00:59:55 2023-03-20 18:22:38 2023-03-21 07:49:57 13:27:19 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{centos_8} tasks/insights} 2
Failure Reason:

hit max job timeout

fail 7213146 2023-03-18 00:59:56 2023-03-20 18:28:39 2023-03-20 18:59:29 0:30:50 0:17:45 0:13:05 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi046 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

dead 7213147 2023-03-18 00:59:57 2023-03-20 18:28:55 2023-03-21 06:39:55 12:11:00 smithi main rhel 8.6 rados/singleton/{all/osd-recovery mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

hit max job timeout

dead 7213148 2023-03-18 00:59:57 2023-03-20 18:29:07 2023-03-21 06:38:05 12:08:58 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

hit max job timeout

pass 7213149 2023-03-18 00:59:58 2023-03-20 18:29:18 2023-03-20 19:07:31 0:38:13 0:24:32 0:13:41 smithi main centos 8.stream rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7213150 2023-03-18 00:59:59 2023-03-20 18:29:30 2023-03-20 20:33:25 2:03:55 1:47:13 0:16:42 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/erasure-code} 1
pass 7213151 2023-03-18 01:00:00 2023-03-20 18:33:21 2023-03-20 20:04:01 1:30:40 1:16:54 0:13:46 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
pass 7213152 2023-03-18 01:00:00 2023-03-20 18:34:09 2023-03-20 19:57:13 1:23:04 1:07:58 0:15:06 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/small-objects-localized} 2
pass 7213153 2023-03-18 01:00:01 2023-03-20 18:34:22 2023-03-20 20:07:19 1:32:57 1:12:00 0:20:57 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7213154 2023-03-18 01:00:02 2023-03-20 18:38:43 2023-03-20 19:09:40 0:30:57 0:15:50 0:15:07 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-comp openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_seq_read} 1
pass 7213155 2023-03-18 01:00:03 2023-03-20 18:39:03 2023-03-20 20:53:22 2:14:19 2:03:11 0:11:08 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
pass 7213156 2023-03-18 01:00:03 2023-03-20 18:39:32 2023-03-20 19:17:10 0:37:38 0:26:14 0:11:24 smithi main centos 8.stream rados/singleton/{all/peer mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
fail 7213157 2023-03-18 01:00:04 2023-03-20 18:39:44 2023-03-20 19:44:47 1:05:03 0:43:03 0:22:00 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi169 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7213158 2023-03-18 01:00:05 2023-03-20 18:46:53 2023-03-20 19:12:06 0:25:13 0:08:35 0:16:38 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi005 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7213159 2023-03-18 01:00:05 2023-03-20 18:47:06 2023-03-20 19:20:57 0:33:51 0:19:06 0:14:45 smithi main centos 8.stream rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
dead 7213160 2023-03-18 01:00:06 2023-03-20 18:49:30 2023-03-21 08:36:27 13:46:57 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/octopus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

hit max job timeout

pass 7213161 2023-03-18 01:00:07 2023-03-20 18:49:41 2023-03-20 20:26:31 1:36:50 1:18:04 0:18:46 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/small-objects} 2
pass 7213162 2023-03-18 01:00:08 2023-03-20 18:58:20 2023-03-20 20:08:03 1:09:43 0:52:06 0:17:37 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_5925} 2
pass 7213163 2023-03-18 01:00:08 2023-03-20 19:01:06 2023-03-20 20:13:37 1:12:31 1:03:48 0:08:43 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_orch_cli} 1
pass 7213164 2023-03-18 01:00:09 2023-03-20 19:01:15 2023-03-20 19:35:37 0:34:22 0:24:02 0:10:20 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/rados_striper} 2
pass 7213165 2023-03-18 01:00:10 2023-03-20 19:01:28 2023-03-20 19:49:17 0:47:49 0:26:44 0:21:05 smithi main ubuntu 20.04 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 2
pass 7213166 2023-03-18 01:00:11 2023-03-20 19:07:57 2023-03-20 19:47:24 0:39:27 0:20:43 0:18:44 smithi main centos 8.stream rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 2
pass 7213167 2023-03-18 01:00:11 2023-03-20 19:10:03 2023-03-20 19:45:26 0:35:23 0:27:20 0:08:03 smithi main rhel 8.6 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 7213168 2023-03-18 01:00:12 2023-03-20 19:10:20 2023-03-20 20:47:13 1:36:53 1:20:19 0:16:34 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 7213169 2023-03-18 01:00:13 2023-03-20 19:12:42 2023-03-20 19:43:54 0:31:12 0:22:23 0:08:49 smithi main rhel 8.6 rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/rgw 3-final} 1
pass 7213170 2023-03-18 01:00:13 2023-03-20 19:12:50 2023-03-20 20:49:02 1:36:12 1:24:15 0:11:57 smithi main centos 8.stream rados/objectstore/{backends/keyvaluedb supported-random-distro$/{centos_8}} 1
pass 7213171 2023-03-18 01:00:14 2023-03-20 19:13:05 2023-03-20 19:52:40 0:39:35 0:27:13 0:12:22 smithi main rhel 8.6 rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 1
pass 7213172 2023-03-18 01:00:15 2023-03-20 19:17:30 2023-03-20 19:57:42 0:40:12 0:26:42 0:13:30 smithi main rhel 8.6 rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 7213173 2023-03-18 01:00:16 2023-03-20 19:20:35 2023-03-20 22:24:13 3:03:38 2:51:29 0:12:09 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-radosbench} 2
pass 7213174 2023-03-18 01:00:16 2023-03-20 19:21:18 2023-03-20 20:00:46 0:39:28 0:26:11 0:13:17 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7213175 2023-03-18 01:00:17 2023-03-20 19:22:28 2023-03-20 19:54:07 0:31:39 0:15:41 0:15:58 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} 1
pass 7213176 2023-03-18 01:00:18 2023-03-20 19:23:47 2023-03-20 20:37:35 1:13:48 0:56:25 0:17:23 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7213177 2023-03-18 01:00:19 2023-03-20 19:27:28 2023-03-20 20:44:47 1:17:19 1:03:03 0:14:16 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/snaps-few-objects-localized} 2
pass 7213178 2023-03-18 01:00:19 2023-03-20 19:27:44 2023-03-20 19:57:34 0:29:50 0:21:27 0:08:23 smithi main rhel 8.6 rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 1
dead 7213179 2023-03-18 01:00:20 2023-03-20 19:28:01 2023-03-21 08:32:34 13:04:33 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/module_selftest} 2
Failure Reason:

hit max job timeout

pass 7213180 2023-03-18 01:00:21 2023-03-20 19:29:01 2023-03-20 20:38:28 1:09:27 0:51:08 0:18:19 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
dead 7213181 2023-03-18 01:00:21 2023-03-20 19:35:30 2023-03-21 07:54:50 12:19:20 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

hit max job timeout

pass 7213182 2023-03-18 01:00:22 2023-03-20 19:36:44 2023-03-20 21:01:50 1:25:06 1:04:34 0:20:32 smithi main centos 8.stream rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7213183 2023-03-18 01:00:23 2023-03-20 19:44:19 2023-03-20 20:16:13 0:31:54 0:21:35 0:10:19 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/mgr} 1
pass 7213184 2023-03-18 01:00:24 2023-03-20 19:45:02 2023-03-20 20:28:55 0:43:53 0:31:05 0:12:48 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
pass 7213185 2023-03-18 01:00:24 2023-03-20 19:45:20 2023-03-20 20:51:39 1:06:19 0:52:19 0:14:00 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
pass 7213186 2023-03-18 01:00:25 2023-03-20 19:45:33 2023-03-20 21:11:46 1:26:13 1:12:57 0:13:16 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} 2
pass 7213187 2023-03-18 01:00:26 2023-03-20 19:45:43 2023-03-20 21:10:25 1:24:42 1:03:41 0:21:01 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7213188 2023-03-18 01:00:27 2023-03-20 19:47:47 2023-03-20 21:20:22 1:32:35 1:16:39 0:15:56 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/snaps-few-objects} 2
pass 7213189 2023-03-18 01:00:27 2023-03-20 19:49:11 2023-03-20 20:36:37 0:47:26 0:38:34 0:08:52 smithi main rhel 8.6 rados/singleton/{all/radostool mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
pass 7213190 2023-03-18 01:00:28 2023-03-20 19:49:24 2023-03-20 21:10:14 1:20:50 1:05:36 0:15:14 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_big} 2
pass 7213191 2023-03-18 01:00:29 2023-03-20 19:49:40 2023-03-20 22:11:03 2:21:23 2:06:08 0:15:15 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7213192 2023-03-18 01:00:30 2023-03-20 19:50:16 2023-03-20 20:41:13 0:50:57 0:36:06 0:14:51 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 2
dead 7213193 2023-03-18 01:00:30 2023-03-20 19:50:37 2023-03-21 09:49:21 13:58:44 smithi main rhel 8.6 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_api_tests} 2
Failure Reason:

hit max job timeout

fail 7213194 2023-03-18 01:00:31 2023-03-20 19:53:10 2023-03-20 21:08:29 1:15:19 0:58:24 0:16:55 smithi main centos 8.stream rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8}} 2
Failure Reason:

"2023-03-20T20:38:02.747647+0000 mon.a (mon.0) 229 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

pass 7213195 2023-03-18 01:00:32 2023-03-20 20:53:56 2023-03-20 23:03:02 2:09:06 1:40:46 0:28:20 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/write_fadvise_dontneed} 2
pass 7213196 2023-03-18 01:00:32 2023-03-20 21:08:53 2023-03-20 21:35:12 0:26:19 0:11:48 0:14:31 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/classic task/test_adoption} 1
pass 7213197 2023-03-18 01:00:33 2023-03-20 21:09:04 2023-03-20 21:52:39 0:43:35 0:29:38 0:13:57 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_omap_write} 1
pass 7213198 2023-03-18 01:00:34 2023-03-20 21:10:33 2023-03-20 21:42:49 0:32:16 0:15:53 0:16:23 smithi main ubuntu 20.04 rados/multimon/{clusters/6 mon_election/classic msgr-failures/few msgr/async-v1only no_pools objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 2
pass 7213199 2023-03-18 01:00:35 2023-03-20 21:10:44 2023-03-20 21:50:35 0:39:51 0:31:19 0:08:32 smithi main rhel 8.6 rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 7213200 2023-03-18 01:00:35 2023-03-20 21:10:58 2023-03-20 21:54:38 0:43:40 0:27:42 0:15:58 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7213201 2023-03-18 01:00:36 2023-03-20 21:11:17 2023-03-20 21:53:27 0:42:10 0:29:13 0:12:57 smithi main centos 8.stream rados/singleton/{all/rebuild-mondb mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
pass 7213202 2023-03-18 01:00:37 2023-03-20 21:12:09 2023-03-21 09:28:16 12:16:07 11:51:06 0:25:01 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
pass 7213203 2023-03-18 01:00:38 2023-03-20 21:20:46 2023-03-20 22:18:11 0:57:25 0:27:58 0:29:27 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7213204 2023-03-18 01:00:38 2023-03-20 21:35:44 2023-03-21 02:45:10 5:09:26 4:44:37 0:24:49 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
pass 7213205 2023-03-18 01:00:39 2023-03-20 21:43:18 2023-03-20 22:22:34 0:39:16 0:21:43 0:17:33 smithi main rhel 8.6 rados/singleton-nomsgr/{all/pool-access mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 7213206 2023-03-18 01:00:40 2023-03-20 21:50:56 2023-03-20 23:43:44 1:52:48 1:38:49 0:13:59 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-small-objects-balanced} 2
pass 7213207 2023-03-18 01:00:41 2023-03-20 21:53:46 2023-03-20 23:29:10 1:35:24 1:22:38 0:12:46 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7213208 2023-03-18 01:00:42 2023-03-20 21:55:04 2023-03-20 22:40:10 0:45:06 0:35:31 0:09:35 smithi main rhel 8.6 rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
pass 7213209 2023-03-18 01:00:42 2023-03-20 21:56:28 2023-03-20 23:32:34 1:36:06 1:16:03 0:20:03 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} tasks/rados_workunit_loadgen_mix} 2
pass 7213210 2023-03-18 01:00:43 2023-03-20 22:00:18 2023-03-21 06:53:02 8:52:44 8:33:56 0:18:48 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/progress} 2
pass 7213211 2023-03-18 01:00:44 2023-03-20 22:04:17 2023-03-20 22:55:00 0:50:43 0:36:38 0:14:05 smithi main ubuntu 20.04 rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{ubuntu_latest}} 1
pass 7213212 2023-03-18 01:00:45 2023-03-20 22:04:33 2023-03-21 01:56:53 3:52:20 3:34:40 0:17:40 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/cache-agent-big} 2
pass 7213213 2023-03-18 01:00:45 2023-03-20 22:10:57 2023-03-21 00:03:04 1:52:07 1:37:06 0:15:01 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7213214 2023-03-18 01:00:46 2023-03-20 22:15:08 2023-03-20 22:48:51 0:33:43 0:22:06 0:11:37 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} 1
pass 7213215 2023-03-18 01:00:47 2023-03-20 22:15:15 2023-03-20 23:12:17 0:57:02 0:41:38 0:15:24 smithi main centos 8.stream rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{centos_8}} 1
fail 7213216 2023-03-18 01:00:48 2023-03-20 22:18:50 2023-03-20 23:52:39 1:33:49 1:23:17 0:10:32 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/misc} 1
Failure Reason:

Command failed (workunit test misc/test-ceph-helpers.sh) on smithi005 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/test-ceph-helpers.sh'

pass 7213217 2023-03-18 01:00:48 2023-03-20 22:19:09 2023-03-21 00:58:49 2:39:40 2:20:58 0:18:42 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} 2
pass 7213218 2023-03-18 01:00:49 2023-03-20 22:24:42 2023-03-20 23:07:03 0:42:21 0:26:26 0:15:55 smithi main ubuntu 20.04 rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} 2
pass 7213219 2023-03-18 01:00:50 2023-03-20 22:25:07 2023-03-21 00:09:50 1:44:43 1:07:01 0:37:42 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7213220 2023-03-18 01:00:51 2023-03-20 22:44:30 2023-03-20 23:19:48 0:35:18 0:17:28 0:17:50 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/sample_fio} 1
pass 7213221 2023-03-18 01:00:51 2023-03-20 22:49:13 2023-03-20 23:33:41 0:44:28 0:27:39 0:16:49 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
pass 7213222 2023-03-18 01:00:52 2023-03-20 22:51:35 2023-03-21 01:04:52 2:13:17 1:59:48 0:13:29 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/cache-agent-small} 2
fail 7213223 2023-03-18 01:00:53 2023-03-20 22:54:39 2023-03-20 23:27:47 0:33:08 0:18:25 0:14:43 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi110 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 7213224 2023-03-18 01:00:54 2023-03-20 22:55:19 2023-03-20 23:35:44 0:40:25 0:19:58 0:20:27 smithi main ubuntu 20.04 rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213225 2023-03-18 01:00:54 2023-03-20 23:03:25 2023-03-20 23:37:48 0:34:23 0:21:15 0:13:08 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/version-number-sanity mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213226 2023-03-18 01:00:55 2023-03-20 23:03:42 2023-03-21 01:18:34 2:14:52 1:54:55 0:19:57 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/fastclose rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
pass 7213227 2023-03-18 01:00:56 2023-03-20 23:07:26 2023-03-21 03:00:19 3:52:53 3:26:13 0:26:40 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
fail 7213228 2023-03-18 01:00:57 2023-03-20 23:20:02 2023-03-21 05:19:30 5:59:28 5:39:17 0:20:11 smithi main rhel 8.6 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_osdmap_prune} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi202 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

pass 7213229 2023-03-18 01:00:57 2023-03-21 00:10:45 2023-03-21 02:15:11 2:04:26 1:39:48 0:24:38 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/cache-pool-snaps-readproxy} 2
pass 7213230 2023-03-18 01:00:58 2023-03-21 00:25:15 2023-03-21 00:59:57 0:34:42 0:22:49 0:11:53 smithi main rhel 8.6 rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 1
pass 7213231 2023-03-18 01:00:59 2023-03-21 00:28:46 2023-03-21 01:50:28 1:21:42 0:47:48 0:33:54 smithi main rhel 8.6 rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 3
pass 7213232 2023-03-18 01:00:59 2023-03-21 13:32:13 2023-03-21 14:09:33 0:37:20 0:28:29 0:08:51 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} tasks/rados_workunit_loadgen_mostlyread} 2
fail 7213233 2023-03-18 01:01:00 2023-03-21 13:32:13 2023-03-21 14:09:42 0:37:29 0:27:32 0:09:57 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/dashboard} 2
Failure Reason:

Test failure: test_full_health (tasks.mgr.dashboard.test_health.HealthTest)

fail 7213234 2023-03-18 01:01:01 2023-03-21 13:32:14 2023-03-21 13:51:32 0:19:18 0:06:24 0:12:54 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi090 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7213235 2023-03-18 01:01:02 2023-03-21 13:32:24 2023-03-21 14:03:35 0:31:11 0:21:35 0:09:36 smithi main centos 8.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7213236 2023-03-18 01:01:02 2023-03-21 13:32:45 2023-03-21 15:06:57 1:34:12 1:26:00 0:08:12 smithi main rhel 8.6 rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
pass 7213237 2023-03-18 01:01:03 2023-03-21 13:32:55 2023-03-21 13:52:03 0:19:08 0:10:00 0:09:08 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_cephadm_repos} 1
pass 7213238 2023-03-18 01:01:04 2023-03-21 13:32:55 2023-03-21 14:09:28 0:36:33 0:25:17 0:11:16 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps} 2
pass 7213239 2023-03-18 01:01:05 2023-03-21 13:33:06 2023-03-21 13:53:13 0:20:07 0:09:10 0:10:57 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} 1
fail 7213240 2023-03-18 01:01:05 2023-03-21 13:33:06 2023-03-21 13:53:28 0:20:22 0:13:18 0:07:04 smithi main rhel 8.6 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 7213241 2023-03-18 01:01:06 2023-03-21 13:34:27 2023-03-21 13:56:32 0:22:05 0:14:35 0:07:30 smithi main rhel 8.6 rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.6_container_tools_3.0} 1-start 2-services/basic 3-final} 1
pass 7213242 2023-03-18 01:01:07 2023-03-21 13:35:27 2023-03-21 13:57:58 0:22:31 0:12:56 0:09:35 smithi main centos 8.stream rados/singleton-nomsgr/{all/balancer mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7213243 2023-03-18 01:01:08 2023-03-21 13:36:58 2023-03-21 14:11:39 0:34:41 0:21:12 0:13:29 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read} 2
pass 7213244 2023-03-18 01:01:08 2023-03-21 13:39:29 2023-03-21 14:23:26 0:43:57 0:32:54 0:11:03 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7213245 2023-03-18 01:01:09 2023-03-21 13:40:19 2023-03-21 14:04:55 0:24:36 0:14:39 0:09:57 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{centos_8} tasks/prometheus} 2
pass 7213246 2023-03-18 01:01:10 2023-03-21 13:41:20 2023-03-21 14:32:40 0:51:20 0:37:58 0:13:22 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
pass 7213247 2023-03-18 01:01:11 2023-03-21 13:45:21 2023-03-21 14:08:20 0:22:59 0:11:44 0:11:15 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7213248 2023-03-18 01:01:11 2023-03-21 13:47:13 2023-03-21 14:51:30 1:04:17 0:57:03 0:07:14 smithi main rhel 8.6 rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 2
pass 7213249 2023-03-18 01:01:12 2023-03-21 13:47:34 2023-03-21 14:14:40 0:27:06 0:16:02 0:11:04 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
pass 7213250 2023-03-18 01:01:13 2023-03-21 13:49:44 2023-03-21 14:20:51 0:31:07 0:24:30 0:06:37 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/cache-snaps-balanced} 2
pass 7213251 2023-03-18 01:01:14 2023-03-21 13:49:45 2023-03-21 14:16:03 0:26:18 0:16:02 0:10:16 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7213252 2023-03-18 01:01:14 2023-03-21 14:04:40 2023-03-21 14:26:04 0:21:24 0:15:14 0:06:10 smithi main rhel 8.6 rados/singleton-nomsgr/{all/cache-fs-trunc mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
fail 7213253 2023-03-18 01:01:15 2023-03-21 14:05:00 2023-03-21 14:25:58 0:20:58 0:13:51 0:07:07 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/mon} 1
Failure Reason:

Command failed (workunit test mon/health-mute.sh) on smithi139 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/health-mute.sh'

pass 7213254 2023-03-18 01:01:16 2023-03-21 14:05:00 2023-03-21 16:15:49 2:10:49 2:00:36 0:10:13 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
pass 7213255 2023-03-18 01:01:17 2023-03-21 14:08:28 2023-03-21 14:33:28 0:25:00 0:18:13 0:06:47 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
dead 7213256 2023-03-18 01:01:17 2023-03-21 14:08:29 2023-03-22 02:22:31 12:14:02 smithi main ubuntu 20.04 rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 2
Failure Reason:

hit max job timeout

pass 7213257 2023-03-18 01:01:18 2023-03-21 14:09:29 2023-03-21 14:50:46 0:41:17 0:35:27 0:05:50 smithi main rhel 8.6 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{rhel_8}} 1
pass 7213258 2023-03-18 01:01:19 2023-03-21 14:09:40 2023-03-21 14:39:09 0:29:29 0:22:42 0:06:47 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/cache-snaps} 2
pass 7213259 2023-03-18 01:01:20 2023-03-21 14:09:50 2023-03-21 14:36:21 0:26:31 0:14:59 0:11:32 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/readwrite} 2
pass 7213260 2023-03-18 01:01:20 2023-03-21 14:11:21 2023-03-21 14:32:46 0:21:25 0:12:19 0:09:06 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-kvstore-tool mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7213261 2023-03-18 01:01:21 2023-03-21 14:11:41 2023-03-21 14:33:55 0:22:14 0:11:01 0:11:13 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-comp openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_read} 1
fail 7213262 2023-03-18 01:01:22 2023-03-21 14:11:41 2023-03-21 14:30:20 0:18:39 0:07:14 0:11:25 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason:

Command failed on smithi146 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 7213263 2023-03-18 01:01:23 2023-03-21 14:12:52 2023-03-21 14:55:34 0:42:42 0:33:59 0:08:43 smithi main rhel 8.6 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/one workloads/rados_mon_workunits} 2
pass 7213264 2023-03-18 01:01:23 2023-03-21 14:14:43 2023-03-21 14:49:27 0:34:44 0:22:56 0:11:48 smithi main centos 8.stream rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 2
pass 7213265 2023-03-18 01:01:24 2023-03-21 14:16:13 2023-03-21 14:39:01 0:22:48 0:12:08 0:10:40 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/cache} 2
pass 7213266 2023-03-18 01:01:25 2023-03-21 14:16:14 2023-03-21 14:40:51 0:24:37 0:08:54 0:15:43 smithi main ubuntu 20.04 rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_no_skews} 3
fail 7213267 2023-03-18 01:01:25 2023-03-21 14:34:40 2023-03-21 15:03:21 0:28:41 0:21:05 0:07:36 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} 1
Failure Reason:

Test failure: test_create_and_delete_cluster (tasks.cephfs.test_nfs.TestNFS)

pass 7213268 2023-03-18 01:01:26 2023-03-21 14:36:31 2023-03-21 14:57:30 0:20:59 0:10:56 0:10:03 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7213269 2023-03-18 01:01:27 2023-03-21 14:36:31 2023-03-21 15:19:09 0:42:38 0:30:42 0:11:56 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 7213270 2023-03-18 01:01:28 2023-03-21 14:38:12 2023-03-21 15:13:39 0:35:27 0:22:47 0:12:40 smithi main ubuntu 20.04 rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 2
pass 7213271 2023-03-18 01:01:28 2023-03-21 14:39:06 2023-03-21 15:04:39 0:25:33 0:14:48 0:10:45 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/repave-all} 2
pass 7213272 2023-03-18 01:01:29 2023-03-21 14:39:16 2023-03-21 15:04:01 0:24:45 0:16:17 0:08:28 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/dedup-io-mixed} 2
pass 7213273 2023-03-18 01:01:30 2023-03-21 14:40:47 2023-03-21 15:16:58 0:36:11 0:26:02 0:10:09 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7213274 2023-03-18 01:01:31 2023-03-21 14:40:57 2023-03-21 15:20:13 0:39:16 0:24:01 0:15:15 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-many-deletes} 2
pass 7213275 2023-03-18 01:01:31 2023-03-21 14:46:18 2023-03-21 15:06:11 0:19:53 0:09:39 0:10:14 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/crushdiff mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213276 2023-03-18 01:01:32 2023-03-21 14:46:19 2023-03-21 15:07:43 0:21:24 0:09:47 0:11:37 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{ubuntu_latest} tasks/workunits} 2
pass 7213277 2023-03-18 01:01:33 2023-03-21 14:47:59 2023-03-21 15:09:48 0:21:49 0:11:56 0:09:53 smithi main centos 8.stream rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
pass 7213278 2023-03-18 01:01:34 2023-03-21 14:48:50 2023-03-21 15:17:23 0:28:33 0:16:58 0:11:35 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/repair_test} 2
pass 7213279 2023-03-18 01:01:34 2023-03-21 14:49:30 2023-03-21 15:14:05 0:24:35 0:16:35 0:08:00 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7213280 2023-03-18 01:01:35 2023-03-21 14:50:51 2023-03-21 15:34:31 0:43:40 0:28:01 0:15:39 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7213281 2023-03-18 01:01:36 2023-03-21 14:57:38 2023-03-21 15:21:48 0:24:10 0:10:50 0:13:20 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-low-osd-mem-target openstack scheduler/wpq_default_shards settings/optimized ubuntu_latest workloads/fio_4K_rand_rw} 1
pass 7213282 2023-03-18 01:01:36 2023-03-21 14:59:38 2023-03-21 15:30:37 0:30:59 0:20:36 0:10:23 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/dedup-io-snaps} 2
pass 7213283 2023-03-18 01:01:37 2023-03-21 15:03:29 2023-03-21 15:30:17 0:26:48 0:14:50 0:11:58 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7213284 2023-03-18 01:01:38 2023-03-21 15:04:50 2023-03-21 15:24:30 0:19:40 0:12:42 0:06:58 smithi main rhel 8.6 rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 7213285 2023-03-18 01:01:39 2023-03-21 15:05:40 2023-03-21 18:43:46 3:38:06 3:27:38 0:10:28 smithi main ubuntu 20.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd-backfill} 1
pass 7213286 2023-03-18 01:01:39 2023-03-21 15:05:41 2023-03-21 15:36:10 0:30:29 0:22:29 0:08:00 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
pass 7213287 2023-03-18 01:01:40 2023-03-21 15:05:41 2023-03-21 15:31:17 0:25:36 0:15:32 0:10:04 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} 2
fail 7213288 2023-03-18 01:01:41 2023-03-21 15:06:12 2023-03-21 15:26:06 0:19:54 0:13:00 0:06:54 smithi main rhel 8.6 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi073 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 7213289 2023-03-18 01:01:42 2023-03-21 15:07:02 2023-03-21 15:37:28 0:30:26 0:19:10 0:11:16 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} 1
pass 7213290 2023-03-18 01:01:42 2023-03-21 15:07:02 2023-03-21 15:40:47 0:33:45 0:20:08 0:13:37 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
pass 7213291 2023-03-18 01:01:43 2023-03-21 15:07:43 2023-03-21 15:55:35 0:47:52 0:37:59 0:09:53 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/pool-snaps-few-objects} 2
pass 7213292 2023-03-18 01:01:44 2023-03-21 15:07:43 2023-03-21 15:27:57 0:20:14 0:08:51 0:11:23 smithi main ubuntu 20.04 rados/singleton/{all/admin-socket mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 1
pass 7213293 2023-03-18 01:01:45 2023-03-21 15:07:44 2023-03-21 15:29:12 0:21:28 0:12:38 0:08:50 smithi main centos 8.stream rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7213294 2023-03-18 01:01:45 2023-03-21 15:07:54 2023-03-21 15:43:46 0:35:52 0:24:35 0:11:17 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
fail 7213295 2023-03-18 01:01:46 2023-03-21 15:09:55 2023-03-21 15:32:39 0:22:44 0:12:34 0:10:10 smithi main centos 8.stream rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi089 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 7213296 2023-03-18 01:01:47 2023-03-21 15:11:25 2023-03-21 15:38:50 0:27:25 0:18:02 0:09:23 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/sync-many workloads/pool-create-delete} 2
pass 7213297 2023-03-18 01:01:47 2023-03-21 15:11:36 2023-03-21 18:01:55 2:50:19 2:17:50 0:32:29 smithi main centos 8.stream rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{centos_8}} 1
pass 7213298 2023-03-18 01:01:48 2023-03-21 15:30:45 2023-03-21 15:54:25 0:23:40 0:16:02 0:07:38 smithi main rhel 8.6 rados/cephadm/osds/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7213299 2023-03-18 01:01:49 2023-03-21 15:31:26 2023-03-21 15:49:29 0:18:03 0:08:21 0:09:42 smithi main ubuntu 20.04 rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async-v1only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 2
pass 7213300 2023-03-18 01:01:50 2023-03-21 15:31:36 2023-03-21 16:08:43 0:37:07 0:31:02 0:06:05 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} 2
pass 7213301 2023-03-18 01:01:50 2023-03-21 15:31:37 2023-03-21 16:09:39 0:38:02 0:26:47 0:11:15 smithi main centos 8.stream rados/singleton/{all/backfill-toofull mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
pass 7213302 2023-03-18 01:01:51 2023-03-21 15:32:47 2023-03-21 15:57:23 0:24:36 0:13:12 0:11:24 smithi main centos 8.stream rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7213303 2023-03-18 01:01:52 2023-03-21 15:34:38 2023-03-21 15:57:31 0:22:53 0:10:32 0:12:21 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/fio_4M_rand_read} 1
pass 7213304 2023-03-18 01:01:53 2023-03-21 15:34:38 2023-03-21 15:58:33 0:23:55 0:11:42 0:12:13 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} tasks/scrub_test} 2
pass 7213305 2023-03-18 01:01:53 2023-03-21 15:36:19 2023-03-21 16:14:58 0:38:39 0:25:54 0:12:45 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} 5
fail 7213306 2023-03-18 01:01:54 2023-03-21 15:39:19 2023-03-21 16:03:49 0:24:30 0:14:05 0:10:25 smithi main rhel 8.6 rados/singleton/{all/deduptool mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test rados/test_dedup_tool.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_dedup_tool.sh'

pass 7213307 2023-03-18 01:01:55 2023-03-21 15:40:50 2023-03-21 16:16:05 0:35:15 0:28:16 0:06:59 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7213308 2023-03-18 01:01:56 2023-03-21 15:40:50 2023-03-21 16:18:57 0:38:07 0:29:11 0:08:56 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7213309 2023-03-18 01:01:56 2023-03-21 15:43:51 2023-03-21 16:22:38 0:38:47 0:24:43 0:14:04 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-small-objects} 2
fail 7213310 2023-03-18 01:01:57 2023-03-21 15:49:32 2023-03-21 16:29:04 0:39:32 0:27:30 0:12:02 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi090 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6dd66830c12498d02f3894e352dd8b49a7bab4c TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7213311 2023-03-18 01:01:58 2023-03-21 15:51:13 2023-03-21 16:17:57 0:26:44 0:14:25 0:12:19 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} supported-random-distro$/{centos_8} tasks/crash} 2
fail 7213312 2023-03-18 01:01:59 2023-03-21 15:54:34 2023-03-21 16:13:14 0:18:40 0:06:26 0:12:14 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi035 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7213313 2023-03-18 01:01:59 2023-03-21 15:57:25 2023-03-21 16:19:22 0:21:57 0:12:19 0:09:38 smithi main centos 8.stream rados/singleton-nomsgr/{all/large-omap-object-warnings mon_election/classic rados supported-random-distro$/{centos_8}} 1