Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
(Runtime = Updated - Started; In Waiting = Runtime - Duration, e.g. job 6860463: 0:31:48 - 0:22:37 = 0:09:11.)
pass | 6860463 | 2022-06-02 15:08:20 | 2022-06-02 15:12:00 | 2022-06-02 15:43:48 | 0:31:48 | 0:22:37 | 0:09:11 | smithi | main | ubuntu | 20.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} | 2
pass | 6860464 | 2022-06-02 15:08:22 | 2022-06-02 15:12:30 | 2022-06-02 15:37:57 | 0:25:27 | 0:19:02 | 0:06:25 | smithi | main | rhel | 8.4 | rados/cephadm/smoke/{0-distro/rhel_8.4_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2
fail | 6860465 | 2022-06-02 15:08:23 | 2022-06-02 15:13:11 | 2022-06-02 15:44:52 | 0:31:41 | 0:22:16 | 0:09:25 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.2} | 1
    Failure Reason: 'wait for operator' reached maximum tries (90) after waiting for 900 seconds (retry pattern sketched after the table)
fail | 6860466 | 2022-06-02 15:08:25 | 2022-06-02 15:13:31 | 2022-06-02 15:51:07 | 0:37:36 | 0:28:38 | 0:08:58 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/flannel rook/master} | 3
    Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
fail | 6860467 | 2022-06-02 15:08:27 | 2022-06-02 15:13:41 | 2022-06-02 15:45:14 | 0:31:33 | 0:22:09 | 0:09:24 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/1.7.2} | 1
    Failure Reason: 'wait for operator' reached maximum tries (90) after waiting for 900 seconds
fail | 6860468 | 2022-06-02 15:08:29 | 2022-06-02 15:13:42 | 2022-06-02 15:53:09 | 0:39:27 | 0:29:51 | 0:09:36 | smithi | main | ubuntu | 20.04 | rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} | 3
    Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds
pass | 6860469 | 2022-06-02 15:08:31 | 2022-06-02 15:13:52 | 2022-06-02 15:53:12 | 0:39:20 | 0:30:17 | 0:09:03 | smithi | main | ubuntu | 20.04 | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{ubuntu_latest}} | 1
pass | 6860470 | 2022-06-02 15:08:33 | 2022-06-02 15:13:53 | 2022-06-02 15:34:51 | 0:20:58 | 0:13:57 | 0:07:01 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} | 2
pass | 6860471 | 2022-06-02 15:08:35 | 2022-06-02 15:13:53 | 2022-06-02 15:41:19 | 0:27:26 | 0:20:33 | 0:06:53 | smithi | main | centos | 8.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} | 2
pass | 6860472 | 2022-06-02 15:08:37 | 2022-06-02 15:14:04 | 2022-06-02 15:38:22 | 0:24:18 | 0:15:17 | 0:09:01 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2
pass | 6860473 | 2022-06-02 15:08:39 | 2022-06-02 15:15:14 | 2022-06-02 15:36:26 | 0:21:12 | 0:13:58 | 0:07:14 | smithi | main | centos | 8.stream | rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/repave-all} | 2
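
All four rados/rook/smoke failures are the same polling timeout: a check is retried up to 90 times before the job gives up, and 900 seconds of total waiting corresponds to roughly a 10-second pause between tries (90 x 10 s = 900 s). Below is a minimal sketch of that retry pattern, assuming the 10-second interval; wait_for, operator_ready, and osd_count are illustrative names, not the test suite's actual API.

    import time

    class MaxTriesReached(Exception):
        """Raised when a polled check never succeeds."""

    def wait_for(check, action, tries=90, sleep=10):
        # Poll check() up to `tries` times, pausing `sleep` seconds
        # between attempts; 90 tries * 10 s accounts for the
        # "900 seconds" in the failure messages above (assumed interval).
        for _ in range(tries):
            if check():
                return
            time.sleep(sleep)
        raise MaxTriesReached(
            "%r reached maximum tries (%d) after waiting for %d seconds"
            % (action, tries, tries * sleep)
        )

    # Hypothetical checks mirroring the two failing waits:
    # wait_for(operator_ready, 'wait for operator')                  # jobs 6860465, 6860467
    # wait_for(lambda: osd_count() >= expected, 'check osd count')   # jobs 6860466, 6860468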