Status  Job ID   Posted               Started              Updated              Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6965703 2022-08-10 13:39:12 2022-08-10 13:44:35 2022-08-10 14:06:11 0:21:36 0:11:46 0:09:50 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 6965704 2022-08-10 13:39:13 2022-08-10 13:44:36 2022-08-10 14:15:29 0:30:53 0:22:40 0:08:13 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi115 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a8bacfcc7768309a2ff4fbbdb2946831d82b6d1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6965705 2022-08-10 13:39:14 2022-08-10 13:45:16 2022-08-10 14:16:49 0:31:33 0:21:29 0:10:04 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/master} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

pass 6965706 2022-08-10 13:39:16 2022-08-10 13:45:17 2022-08-10 14:04:03 0:18:46 0:12:33 0:06:13 smithi main centos 8.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 6965707 2022-08-10 13:39:17 2022-08-10 13:45:17 2022-08-10 17:26:24 3:41:07 3:31:46 0:09:21 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/osd} 1
pass 6965708 2022-08-10 13:39:18 2022-08-10 13:48:38 2022-08-10 14:14:01 0:25:23 0:16:45 0:08:38 smithi main rhel 8.6 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{rhel_8} tasks/prometheus} 2
pass 6965709 2022-08-10 13:39:19 2022-08-10 13:50:28 2022-08-10 14:17:23 0:26:55 0:19:51 0:07:04 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/dedup-io-mixed} 2
fail 6965710 2022-08-10 13:39:20 2022-08-10 13:50:29 2022-08-10 14:12:09 0:21:40 0:11:29 0:10:11 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test rados/test_librados_build.sh) on smithi165 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a8bacfcc7768309a2ff4fbbdb2946831d82b6d1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_librados_build.sh'

pass 6965711 2022-08-10 13:39:21 2022-08-10 13:50:39 2022-08-10 14:29:57 0:39:18 0:32:10 0:07:08 smithi main rhel 8.6 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/rados_api_tests} 2
pass 6965712 2022-08-10 13:39:23 2022-08-10 13:50:59 2022-08-10 14:15:50 0:24:51 0:19:23 0:05:28 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/lockdep} 2
pass 6965713 2022-08-10 13:39:24 2022-08-10 13:51:00 2022-08-10 14:09:54 0:18:54 0:13:37 0:05:17 smithi main rhel 8.6 rados/singleton/{all/mon-config mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
pass 6965714 2022-08-10 13:39:25 2022-08-10 13:51:00 2022-08-10 16:50:09 2:59:09 2:21:49 0:37:20 smithi main rhel 8.6 rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{rhel_8}} 1
pass 6965715 2022-08-10 13:39:26 2022-08-10 14:00:52 2022-08-10 14:26:27 0:25:35 0:16:47 0:08:48 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/dedup-io-snaps} 2
pass 6965716 2022-08-10 13:39:28 2022-08-10 14:03:24 2022-08-10 14:27:43 0:24:19 0:10:58 0:13:21 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 6965717 2022-08-10 13:39:29 2022-08-10 14:06:14 2022-08-10 14:32:49 0:26:35 0:19:54 0:06:41 smithi main rhel 8.6 rados/singleton-nomsgr/{all/msgr mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
pass 6965718 2022-08-10 13:39:30 2022-08-10 14:06:15 2022-08-10 14:33:39 0:27:24 0:17:13 0:10:11 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/libcephsqlite} 2
pass 6965719 2022-08-10 13:39:31 2022-08-10 14:06:15 2022-08-10 14:40:02 0:33:47 0:22:06 0:11:41 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects} 2
pass 6965720 2022-08-10 13:39:32 2022-08-10 14:08:36 2022-08-10 14:48:08 0:39:32 0:26:54 0:12:38 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/pool-snaps-few-objects} 2
pass 6965721 2022-08-10 13:39:33 2022-08-10 14:09:56 2022-08-10 14:30:50 0:20:54 0:15:24 0:05:30 smithi main centos 8.stream rados/cephadm/workunits/{agent/off mon_election/classic task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
pass 6965722 2022-08-10 13:39:35 2022-08-10 14:09:57 2022-08-10 14:55:07 0:45:10 0:38:13 0:06:57 smithi main rhel 8.6 rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
pass 6965723 2022-08-10 13:39:36 2022-08-10 14:09:57 2022-08-10 14:51:56 0:41:59 0:31:26 0:10:33 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 6965724 2022-08-10 13:39:37 2022-08-10 14:14:08 2022-08-10 14:51:24 0:37:16 0:28:41 0:08:35 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} 2
pass 6965725 2022-08-10 13:39:38 2022-08-10 14:15:38 2022-08-10 14:42:43 0:27:05 0:19:43 0:07:22 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
pass 6965726 2022-08-10 13:39:39 2022-08-10 14:15:59 2022-08-10 14:43:05 0:27:06 0:21:05 0:06:01 smithi main centos 8.stream rados/singleton-nomsgr/{all/osd_stale_reads mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 6965727 2022-08-10 13:39:40 2022-08-10 14:15:59 2022-08-10 17:09:46 2:53:47 2:47:31 0:06:16 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/scrub} 1
pass 6965728 2022-08-10 13:39:42 2022-08-10 14:16:30 2022-08-10 14:40:20 0:23:50 0:14:13 0:09:37 smithi main ubuntu 20.04 rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 3
pass 6965729 2022-08-10 13:39:43 2022-08-10 14:16:50 2022-08-10 15:04:47 0:47:57 0:38:25 0:09:32 smithi main rados/cephadm/workunits/{agent/on mon_election/connectivity task/test_nfs} 1
pass 6965730 2022-08-10 13:39:44 2022-08-10 14:17:20 2022-08-10 14:50:55 0:33:35 0:23:07 0:10:28 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/radosbench-high-concurrency} 2
pass 6965731 2022-08-10 13:39:45 2022-08-10 14:17:31 2022-08-10 14:49:04 0:31:33 0:18:43 0:12:50 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 6965732 2022-08-10 13:39:46 2022-08-10 14:19:31 2022-08-10 15:10:35 0:51:04 0:34:07 0:16:57 smithi main ubuntu 20.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6965733 2022-08-10 13:39:47 2022-08-10 14:26:33 2022-08-10 15:01:18 0:34:45 0:24:50 0:09:55 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
pass 6965734 2022-08-10 13:39:49 2022-08-10 14:27:53 2022-08-10 15:01:33 0:33:40 0:21:44 0:11:56 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 6965735 2022-08-10 13:39:50 2022-08-10 14:30:04 2022-08-10 14:54:13 0:24:09 0:13:00 0:11:09 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/redirect_promote_tests} 2
fail 6965736 2022-08-10 13:39:51 2022-08-10 14:30:54 2022-08-10 15:04:00 0:33:06 0:21:27 0:11:39 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6965737 2022-08-10 13:39:52 2022-08-10 14:32:55 2022-08-10 14:56:45 0:23:50 0:14:27 0:09:23 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 6965738 2022-08-10 13:39:53 2022-08-10 14:33:35 2022-08-10 15:20:47 0:47:12 0:40:14 0:06:58 smithi main rhel 8.6 rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_3.0} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi093 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6965739 2022-08-10 13:39:54 2022-08-10 14:33:36 2022-08-10 15:02:42 0:29:06 0:22:22 0:06:44 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
Failure Reason:

expected valgrind issues and found none

pass 6965740 2022-08-10 13:39:55 2022-08-10 14:33:46 2022-08-10 15:01:03 0:27:17 0:17:26 0:09:51 smithi main ubuntu 20.04 rados/cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_20.04} 1-start 2-services/basic 3-final} 1
fail 6965741 2022-08-10 13:39:57 2022-08-10 14:33:46 2022-08-10 15:11:08 0:37:22 0:24:29 0:12:53 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi070 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a8bacfcc7768309a2ff4fbbdb2946831d82b6d1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6965742 2022-08-10 13:39:58 2022-08-10 14:40:08 2022-08-10 15:18:03 0:37:55 0:26:41 0:11:14 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

fail 6965743 2022-08-10 13:39:59 2022-08-10 14:40:28 2022-08-10 15:03:48 0:23:20 0:14:25 0:08:55 smithi main centos 8.stream rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test rados/test_librados_build.sh) on smithi098 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=6a8bacfcc7768309a2ff4fbbdb2946831d82b6d1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_librados_build.sh'

fail 6965744 2022-08-10 13:40:00 2022-08-10 14:42:49 2022-08-10 15:14:10 0:31:21 0:21:42 0:09:39 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6965745 2022-08-10 13:40:01 2022-08-10 14:42:49 2022-08-10 15:06:29 0:23:40 0:14:40 0:09:00 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 6965746 2022-08-10 13:40:02 2022-08-10 14:42:59 2022-08-10 15:34:30 0:51:31 0:42:20 0:09:11 smithi main ubuntu 20.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

pass 6965747 2022-08-10 13:40:04 2022-08-10 14:43:00 2022-08-10 17:16:11 2:33:11 2:21:17 0:11:54 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
pass 6965748 2022-08-10 13:40:05 2022-08-10 14:48:11 2022-08-10 15:12:20 0:24:09 0:16:23 0:07:46 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/redirect_set_object} 2
pass 6965749 2022-08-10 13:40:06 2022-08-10 14:49:11 2022-08-10 15:10:29 0:21:18 0:14:39 0:06:39 smithi main centos 8.stream rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{centos_8}} 1
fail 6965750 2022-08-10 13:40:07 2022-08-10 14:49:12 2022-08-10 15:24:02 0:34:50 0:25:59 0:08:51 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
Failure Reason:

expected valgrind issues and found none

pass 6965751 2022-08-10 13:40:08 2022-08-10 14:51:02 2022-08-10 15:25:00 0:33:58 0:27:24 0:06:34 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 6965752 2022-08-10 13:40:10 2022-08-10 14:51:33 2022-08-10 15:13:24 0:21:51 0:14:31 0:07:20 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} tasks/scrub_test} 2