Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6932147 2022-07-15 14:46:53 2022-07-15 15:29:31 2022-07-15 16:00:40 0:31:09 0:21:27 0:09:42 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/master} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds
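The arithmetic behind the message: 90 tries at a 10-second polling interval accounts for the 900 seconds. The same failure recurs on jobs 6932153 and 6932166 below. A minimal shell sketch of the check, assuming Rook's default rook-ceph namespace and app=rook-ceph-operator pod label (the actual probe in the teuthology rook task may differ):

    # Poll for a running Rook operator pod: 90 tries x 10 s = 900 s.
    tries=90
    until kubectl -n rook-ceph get pods -l app=rook-ceph-operator \
            2>/dev/null | grep -q Running; do
        tries=$((tries - 1))
        [ "$tries" -le 0 ] && { echo "wait for operator timed out" >&2; exit 1; }
        sleep 10
    done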

pass 6932149 2022-07-15 14:46:54 2022-07-15 15:29:31 2022-07-15 15:44:20 0:14:49 0:09:21 0:05:28 smithi main rhel 8.6 rados/singleton-nomsgr/{all/pool-access mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 6932151 2022-07-15 14:46:56 2022-07-15 15:29:32 2022-07-15 16:11:41 0:42:09 0:32:40 0:09:29 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 6932153 2022-07-15 14:46:57 2022-07-15 15:31:12 2022-07-15 16:03:22 0:32:10 0:21:26 0:10:44 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6932155 2022-07-15 14:46:59 2022-07-15 15:31:13 2022-07-15 16:14:44 0:43:31 0:34:36 0:08:55 smithi main rhel 8.6 rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi086 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'
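The failing step is teuthology's standard workunit wrapper around qa/workunits/cls/test_cls_rgw.sh, checked out from the pacific branch; job 6932168 below fails the same way. Stripped of the coverage and ulimit plumbing, a rough manual reproduction could look like this (the checkout path is hypothetical, and it assumes a reachable test cluster with a client.0 keyring):

    # Re-run the failing workunit by hand, mirroring the env from the log.
    cd /path/to/ceph-pacific              # hypothetical local checkout
    CEPH_ARGS="--cluster ceph" CEPH_ID="0" CEPH_CLI_TEST_DUP_COMMAND=1 \
        timeout 3h ./qa/workunits/cls/test_cls_rgw.sh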

pass 6932157 2022-07-15 14:47:00 2022-07-15 15:33:34 2022-07-15 16:12:43 0:39:09 0:26:46 0:12:23 smithi main rhel 8.6 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_workunits} 2
pass 6932159 2022-07-15 14:47:02 2022-07-15 15:39:25 2022-07-15 16:09:44 0:30:19 0:19:54 0:10:25 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/small-objects-balanced} 2
pass 6932161 2022-07-15 14:47:03 2022-07-15 15:42:26 2022-07-15 16:12:35 0:30:09 0:22:27 0:07:42 smithi main rhel 8.6 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
fail 6932162 2022-07-15 14:47:05 2022-07-15 15:43:26 2022-07-15 16:21:05 0:37:39 0:28:28 0:09:11 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds
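'check osd count' is the analogous polling loop for OSD pods; on this 3-node smoke cluster it waited 90 tries (900 seconds) for the expected number of OSDs to come up. One way to inspect the same state by hand, assuming Rook's default app=rook-ceph-osd label:

    # Count running OSD pods; the smoke test expects one per configured OSD.
    kubectl -n rook-ceph get pods -l app=rook-ceph-osd \
        --field-selector=status.phase=Running --no-headers | wc -l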

pass 6932164 2022-07-15 14:47:06 2022-07-15 15:43:36 2022-07-15 16:01:32 0:17:56 0:10:18 0:07:38 smithi main rhel 8.6 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{rhel_8}} 1
fail 6932166 2022-07-15 14:47:08 2022-07-15 15:44:27 2022-07-15 16:16:31 0:32:04 0:21:26 0:10:38 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6932168 2022-07-15 14:47:10 2022-07-15 15:45:17 2022-07-15 16:32:27 0:47:10 0:39:09 0:08:01 smithi main centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi029 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

pass 6932170 2022-07-15 14:47:11 2022-07-15 15:47:08 2022-07-15 16:21:13 0:34:05 0:26:44 0:07:21 smithi main rhel 8.6 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_api_tests} 2
pass 6932171 2022-07-15 14:47:13 2022-07-15 15:48:18 2022-07-15 16:09:10 0:20:52 0:12:33 0:08:19 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4