Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass | 7750031 | 2024-06-10 23:03:50 | 2024-06-10 23:11:56 | 2024-06-10 23:43:28 | 0:31:32 | 0:21:11 | 0:10:21 | smithi | main | centos | 9.stream | rados:thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} | 3
fail | 7750032 | 2024-06-10 23:03:51 | 2024-06-10 23:11:56 | 2024-06-10 23:39:22 | 0:27:26 | 0:17:34 | 0:09:52 | smithi | main | centos | 9.stream | rados:thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/2-size-2-min-size 1-install/squid backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/snaps-few-objects} | 3
Failure Reason: Command failed on smithi123 with status 1: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --max-attr-len 20000 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op copy_from 50 --op write_excl 50 --pool unique_pool_0'

pass | 7750033 | 2024-06-10 23:03:52 | 2024-06-10 23:12:16 | 2024-06-10 23:43:06 | 0:30:50 | 0:19:46 | 0:11:04 | smithi | main | centos | 9.stream | rados:thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} | 3
dead | 7750034 | 2024-06-10 23:03:53 | 2024-06-10 23:12:17 | 2024-06-10 23:13:41 | 0:01:24 | - | - | smithi | main | centos | 9.stream | rados:thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/2-size-2-min-size 1-install/reef backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} | 3
Failure Reason: Error reimaging machines: Failed to power on smithi096

fail | 7750035 | 2024-06-10 23:03:54 | 2024-06-10 23:12:37 | 2024-06-11 00:52:41 | 1:40:04 | 1:27:54 | 0:12:10 | smithi | main | centos | 9.stream | rados:thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/squid backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} | 3
Failure Reason: "2024-06-10T23:50:00.001447+0000 mon.a (mon.0) 1668 : cluster [WRN] Health detail: HEALTH_WARN Degraded data redundancy: 30/112143 objects degraded (0.027%), 1 pg degraded" in cluster log

pass | 7750036 | 2024-06-10 23:03:55 | 2024-06-10 23:15:28 | 2024-06-10 23:54:31 | 0:39:03 | 0:20:32 | 0:18:31 | smithi | main | centos | 9.stream | rados:thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/2-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/mapgap thrashosds-health workloads/rbd_cls} | 3
fail | 7750037 | 2024-06-10 23:03:57 | 2024-06-10 23:23:39 | 2024-06-10 23:55:35 | 0:31:56 | 0:20:39 | 0:11:17 | smithi | main | centos | 9.stream | rados:thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} | 3
Failure Reason: Command failed on smithi156 with status 1: 'CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --max-attr-len 20000 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op copy_from 50 --op write_excl 50 --pool unique_pool_0'
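
For the two ceph_test_rados failures above (7750032 and 7750037), the failure message preserves the exact failing invocation, so it can in principle be rerun by hand on the remote for triage while the cluster is still up. A sketch, with every flag copied verbatim from the log; adjust-ulimits, ceph-coverage, and the archive path are teuthology's wrappers and assume a live cephtest environment on the node:

    # Rerun the rados stress workload from the failure message (job 7750032).
    # CEPH_CLIENT_ID selects the client whose keyring/config the test uses;
    # the --op weights reproduce the same mix of reads, writes, snaps, and rollbacks.
    CEPH_CLIENT_ID=2 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage \
        ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 \
        --size 4000000 --min-stride-size 400000 --max-stride-size 800000 \
        --max-seconds 0 --max-attr-len 20000 \
        --op read 100 --op write 50 --op delete 50 \
        --op snap_create 50 --op snap_remove 50 --op rollback 50 \
        --op copy_from 50 --op write_excl 50 \
        --pool unique_pool_0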