Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 6926760 2022-07-12 13:38:12 2022-07-12 13:45:15 2022-07-12 14:24:11 0:38:56 0:32:09 0:06:47 smithi main rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 6926761 2022-07-12 13:38:14 2022-07-12 13:45:15 2022-07-12 14:10:49 0:25:34 0:18:00 0:07:34 smithi main rhel 8.4 rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 2
pass 6926762 2022-07-12 13:38:15 2022-07-12 13:45:26 2022-07-12 14:11:13 0:25:47 0:17:35 0:08:12 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/redirect} 2
pass 6926763 2022-07-12 13:38:16 2022-07-14 08:23:28 2022-07-14 09:06:48 0:43:20 0:36:37 0:06:43 smithi main rhel 8.4 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 6926764 2022-07-12 13:38:18 2022-07-14 08:23:29 2022-07-14 09:16:29 0:53:00 0:45:49 0:07:11 smithi main rhel 8.4 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8}} 1
pass 6926765 2022-07-12 13:38:19 2022-07-14 08:23:29 2022-07-14 08:52:25 0:28:56 0:21:53 0:07:03 smithi main rhel 8.4 rados/singleton/{all/pg-removal-interruption mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
pass 6926766 2022-07-12 13:38:21 2022-07-14 08:23:30 2022-07-14 08:52:43 0:29:13 0:22:00 0:07:13 smithi main rhel 8.4 rados/objectstore/{backends/alloc-hint supported-random-distro$/{rhel_8}} 1
fail 6926767 2022-07-12 13:38:22 2022-07-14 08:23:31 2022-07-14 08:56:49 0:33:18 0:22:51 0:10:27 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

pass 6926768 2022-07-12 13:38:24 2022-07-14 08:23:31 2022-07-14 10:04:28 1:40:57 1:34:48 0:06:09 smithi main rhel 8.4 rados/upgrade/parallel/{0-random-distro$/{rhel_8.4_container_tools_rhel8} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
pass 6926769 2022-07-12 13:38:25 2022-07-14 08:23:32 2022-07-14 08:56:24 0:32:52 0:26:33 0:06:19 smithi main rhel 8.4 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/filestore-xfs rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6926770 2022-07-12 13:38:27 2022-07-14 08:23:33 2022-07-14 08:57:29 0:33:56 0:25:49 0:08:07 smithi main rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 6926771 2022-07-12 13:38:28 2022-07-14 08:23:43 2022-07-14 08:48:12 0:24:29 0:18:13 0:06:16 smithi main rhel 8.4 rados/cephadm/smoke-singlehost/{0-random-distro$/{rhel_8.4_container_tools_rhel8} 1-start 2-services/basic 3-final} 1
pass 6926772 2022-07-12 13:38:29 2022-07-14 08:23:44 2022-07-14 09:06:09 0:42:25 0:33:42 0:08:43 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/small-objects-balanced} 2
pass 6926774 2022-07-12 13:38:31 2022-07-14 08:25:07 2022-07-14 08:52:23 0:27:16 0:20:36 0:06:40 smithi main rhel 8.4 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} tasks/rados_python} 2
pass 6926776 2022-07-12 13:38:32 2022-07-14 08:25:08 2022-07-14 09:14:52 0:49:44 0:43:33 0:06:11 smithi main rhel 8.4 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_workunits} 2
pass 6926778 2022-07-12 13:38:34 2022-07-14 08:26:00 2022-07-14 09:31:20 1:05:20 0:59:07 0:06:13 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
pass 6926780 2022-07-12 13:38:35 2022-07-14 08:26:21 2022-07-14 09:13:43 0:47:22 0:40:07 0:07:15 smithi main rhel 8.4 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/snaps-few-objects-localized} 2
pass 6926782 2022-07-12 13:38:36 2022-07-14 08:26:52 2022-07-14 08:51:51 0:24:59 0:19:24 0:05:35 smithi main rhel 8.4 rados/cephadm/osds/{0-distro/rhel_8.4_container_tools_3.0 0-nvme-loop 1-start 2-ops/repave-all} 2
fail 6926784 2022-07-12 13:38:38 2022-07-14 08:27:14 2022-07-14 08:56:35 0:29:21 0:16:20 0:13:01 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi044 with status 22: 'kubectl -n rook-ceph exec rook-ceph-tools-6db9f859bb-x2m6j -- ceph orch apply osd --all-available-devices'

fail 6926786 2022-07-12 13:38:39 2022-07-14 08:29:45 2022-07-14 09:21:45 0:52:00 0:41:23 0:10:37 smithi main rados/cephadm/workunits/{agent/on mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_export_create_with_non_existing_fsname (tasks.cephfs.test_nfs.TestNFS)

fail 6926788 2022-07-12 13:38:40 2022-07-14 08:29:57 2022-07-14 09:03:25 0:33:28 0:23:00 0:10:28 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6926790 2022-07-12 13:38:42 2022-07-14 08:29:58 2022-07-14 08:58:17 0:28:19 0:16:54 0:11:25 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi040 with status 22: 'kubectl -n rook-ceph exec rook-ceph-tools-6db9f859bb-48kfx -- ceph orch apply osd --all-available-devices'