User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2023-04-06 15:41:41 | 2023-04-06 17:31:56 | 2023-04-07 06:23:10 | 12:51:14 | rados | wip-yuri4-testing-2023-03-31-1237 | smithi | 27e6cf7 | 11 | 20 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7234348 | 2023-04-06 15:42:09 | 2023-04-06 17:31:56 | 2023-04-06 17:59:41 | 0:27:45 | 0:11:12 | 0:16:33 | smithi | main | ubuntu | 22.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{ubuntu_latest} tasks/workunits} | 2 | |
Failure Reason: "1680803833.1007533 mon.a (mon.0) 97 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
pass | 7234349 | 2023-04-06 15:42:10 | 2023-04-06 17:37:47 | 2023-04-06 19:32:57 | 1:55:10 | 1:24:39 | 0:30:31 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} | 2 | |
pass | 7234350 | 2023-04-06 15:42:11 | 2023-04-06 17:39:48 | 2023-04-06 18:23:43 | 0:43:55 | 0:33:45 | 0:10:10 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7234351 | 2023-04-06 15:42:11 | 2023-04-06 17:40:29 | 2023-04-06 18:26:41 | 0:46:12 | 0:30:48 | 0:15:24 | smithi | main | ubuntu | 22.04 | rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/rados_mon_workunits} | 2 | |
Failure Reason: "1680804225.4300835 mon.a (mon.0) 93 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7234352 | 2023-04-06 15:42:12 | 2023-04-06 17:44:39 | 2023-04-06 18:12:06 | 0:27:27 | 0:12:59 | 0:14:28 | smithi | main | ubuntu | 22.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{ubuntu_latest} tasks/crash} | 2 | |
Failure Reason: "1680804430.719237 mon.a (mon.0) 106 : cluster [WRN] Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
fail | 7234353 | 2023-04-06 15:42:13 | 2023-04-06 17:47:20 | 2023-04-06 18:03:19 | 0:15:59 | 0:06:08 | 0:09:51 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} | 1 | |
Failure Reason: Command failed on smithi130 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
dead | 7234354 | 2023-04-06 15:42:14 | 2023-04-06 17:47:20 | 2023-04-07 05:59:10 | 12:11:50 | | | smithi | main | ubuntu | 20.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_20.04}} | 1 |
Failure Reason: hit max job timeout
fail | 7234355 | 2023-04-06 15:42:15 | 2023-04-06 17:47:31 | 2023-04-06 18:15:10 | 0:27:39 | | | smithi | main | centos | 8.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 |
Failure Reason: Command failed on smithi066 with status 1: 'sudo yum install -y kernel'
pass | 7234356 | 2023-04-06 15:42:15 | 2023-04-06 17:48:31 | 2023-04-06 18:45:11 | 0:56:40 | 0:26:26 | 0:30:14 | smithi | main | centos | 8.stream | rados/cephadm/smoke-singlehost/{0-random-distro$/{centos_8.stream_container_tools} 1-start 2-services/basic 3-final} | 1 | |
fail | 7234357 | 2023-04-06 15:42:16 | 2023-04-06 17:49:52 | 2023-04-06 18:16:54 | 0:27:02 | | | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3 |
Failure Reason: Command failed on smithi057 with status 1: 'sudo yum install -y kernel'
fail | 7234358 | 2023-04-06 15:42:17 | 2023-04-06 17:51:33 | 2023-04-06 20:05:19 | 2:13:46 | 1:40:47 | 0:32:59 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi142 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=27e6cf719ed2d85a1ebd352632a8b6b0b84dbfa5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7234359 | 2023-04-06 15:42:18 | 2023-04-06 17:53:53 | 2023-04-06 18:16:31 | 0:22:38 | 0:06:27 | 0:16:11 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} | 3 | |
Failure Reason: Command failed on smithi119 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
fail | 7234360 | 2023-04-06 15:42:19 | 2023-04-06 17:58:14 | 2023-04-06 19:59:37 | 2:01:23 | 1:32:08 | 0:29:15 | smithi | main | centos | 8.stream | rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi159 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=27e6cf719ed2d85a1ebd352632a8b6b0b84dbfa5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
fail | 7234361 | 2023-04-06 15:42:19 | 2023-04-06 17:58:15 | 2023-04-06 18:32:46 | 0:34:31 | | | smithi | main | centos | 8.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 |
Failure Reason: Command failed on smithi088 with status 1: 'sudo yum install -y kernel'
fail | 7234362 | 2023-04-06 15:42:20 | 2023-04-06 18:07:16 | 2023-04-06 18:32:46 | 0:25:30 | | | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/cache-snaps-balanced} | 2 |
Failure Reason: Command failed on smithi023 with status 1: 'sudo yum install -y kernel'
fail | 7234363 | 2023-04-06 15:42:21 | 2023-04-06 18:08:27 | 2023-04-06 20:09:28 | 2:01:01 | 1:32:03 | 0:28:58 | smithi | main | centos | 8.stream | rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi110 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=27e6cf719ed2d85a1ebd352632a8b6b0b84dbfa5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
pass | 7234364 | 2023-04-06 15:42:22 | 2023-04-06 18:08:27 | 2023-04-06 20:07:58 | 1:59:31 | 1:31:04 | 0:28:27 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/pool-snaps-few-objects} | 2 | |
pass | 7234365 | 2023-04-06 15:42:22 | 2023-04-06 18:08:28 | 2023-04-06 19:54:50 | 1:46:22 | 1:15:58 | 0:30:24 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} | 1 | |
pass | 7234366 | 2023-04-06 15:42:23 | 2023-04-06 18:09:08 | 2023-04-06 18:44:44 | 0:35:36 | 0:22:45 | 0:12:51 | smithi | main | ubuntu | 22.04 | rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} | 2 | |
fail | 7234367 | 2023-04-06 15:42:24 | 2023-04-06 18:12:09 | 2023-04-06 18:28:04 | 0:15:55 | 0:06:09 | 0:09:46 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} | 1 | |
Failure Reason: Command failed on smithi099 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
dead | 7234368 | 2023-04-06 15:42:25 | 2023-04-06 18:12:09 | 2023-04-07 06:23:10 | 12:11:01 | | | smithi | main | rhel | 8.6 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{rhel_8}} | 1 |
Failure Reason: hit max job timeout
fail | 7234369 | 2023-04-06 15:42:26 | 2023-04-06 18:13:39 | 2023-04-06 18:38:19 | 0:24:40 | 0:13:32 | 0:11:08 | smithi | main | ubuntu | 22.04 | rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-snappy} supported-random-distro$/{ubuntu_latest} tasks/prometheus} | 2 | |
Failure Reason: "1680806020.6358383 mon.a (mon.0) 83 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
pass | 7234370 | 2023-04-06 15:42:27 | 2023-04-06 18:13:40 | 2023-04-06 19:00:25 | 0:46:45 | 0:35:57 | 0:10:48 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/morepggrow thrashosds-health workloads/ec-pool-snaps-few-objects-overwrites} | 2 | |
fail | 7234371 | 2023-04-06 15:42:27 | 2023-04-06 18:14:30 | 2023-04-06 19:22:16 | 1:07:46 | 0:55:20 | 0:12:26 | smithi | main | ubuntu | 20.04 | rados/standalone/{supported-random-distro$/{ubuntu_20.04} workloads/mon} | 1 | |
Failure Reason: Command failed (workunit test mon/mon-stretched-cluster.sh) on smithi044 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=27e6cf719ed2d85a1ebd352632a8b6b0b84dbfa5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon/mon-stretched-cluster.sh'
fail | 7234372 | 2023-04-06 15:42:28 | 2023-04-06 18:14:41 | 2023-04-06 19:32:39 | 1:17:58 | 1:07:34 | 0:10:24 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/ec-lost-unfound mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: "1680806011.0121825 mon.a (mon.0) 110 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
pass | 7234373 | 2023-04-06 15:42:29 | 2023-04-06 18:14:41 | 2023-04-06 20:01:53 | 1:47:12 | 1:14:22 | 0:32:50 | smithi | main | centos | 8.stream | rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/test_rbd_api} | 3 | |
fail | 7234374 | 2023-04-06 15:42:30 | 2023-04-06 18:15:11 | 2023-04-06 20:27:37 | 2:12:26 | 1:40:36 | 0:31:50 | smithi | main | centos | 8.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi061 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=27e6cf719ed2d85a1ebd352632a8b6b0b84dbfa5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 7234375 | 2023-04-06 15:42:30 | 2023-04-06 18:16:02 | 2023-04-06 20:12:20 | 1:56:18 | 1:26:03 | 0:30:15 | smithi | main | centos | 8.stream | rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} | 5 | |
fail | 7234376 | 2023-04-06 15:42:31 | 2023-04-06 18:17:03 | 2023-04-06 20:33:26 | 2:16:23 | 1:45:10 | 0:31:13 | smithi | main | centos | 8.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=27e6cf719ed2d85a1ebd352632a8b6b0b84dbfa5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
fail | 7234377 | 2023-04-06 15:42:32 | 2023-04-06 18:18:03 | 2023-04-06 18:36:31 | 0:18:28 | 0:06:18 | 0:12:10 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} | 3 | |
Failure Reason: Command failed on smithi006 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
pass | 7234378 | 2023-04-06 15:42:33 | 2023-04-06 18:20:24 | 2023-04-06 19:20:27 | 1:00:03 | 0:29:16 | 0:30:47 | smithi | main | centos | 8.stream | rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
fail | 7234379 | 2023-04-06 15:42:34 | 2023-04-06 18:20:24 | 2023-04-06 18:41:05 | 0:20:41 | 0:10:26 | 0:10:15 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'
pass | 7234380 | 2023-04-06 15:42:34 | 2023-04-06 18:21:34 | 2023-04-06 20:11:34 | 1:50:00 | 1:20:26 | 0:29:34 | smithi | main | centos | 8.stream | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-small-objects-overwrites} | 2 |