Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7067876 2022-10-14 15:14:18 2022-10-14 17:05:58 2022-10-14 17:21:14 0:15:16 0:04:25 0:10:51 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi196 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'
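
Note: this is the manifest-apply step of the Rook bootstrap. kubectl create exits 1 both when the apiserver is unreachable and when a manifest is rejected, so a reasonable first triage (a sketch under assumptions, not taken from the job log) is to confirm the kubeadm cluster is up and then re-apply the files one at a time to isolate the failing manifest:

    kubectl cluster-info          # is the apiserver reachable at all?
    kubectl get nodes -o wide     # did the single smithi node register?
    # re-apply one file at a time to see which manifest is rejected
    kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml
    kubectl create -f rook/cluster/examples/kubernetes/ceph/common.yaml
    kubectl create -f operator.yaml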

pass 7067877 2022-10-14 15:14:20 2022-10-14 17:05:59 2022-10-14 18:48:19 1:42:20 1:30:40 0:11:40 smithi main ubuntu 20.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
pass 7067878 2022-10-14 15:14:21 2022-10-14 17:05:59 2022-10-14 17:42:23 0:36:24 0:26:00 0:10:24 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7067879 2022-10-14 15:14:22 2022-10-14 17:05:59 2022-10-14 17:43:03 0:37:04 0:24:52 0:12:12 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds
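
Note: the numbers in the message line up with a poll of 90 tries at 10-second intervals (90 x 10 s = 900 s). A minimal sketch of that kind of check, assuming Rook's default rook-ceph namespace and app=rook-ceph-osd pod label (not the actual teuthology code; EXPECTED_OSDS is a hypothetical variable):

    for try in $(seq 1 90); do
        # count OSD pods that have actually reached Running
        n=$(kubectl -n rook-ceph get pods -l app=rook-ceph-osd \
              --field-selector=status.phase=Running --no-headers | wc -l)
        [ "$n" -ge "$EXPECTED_OSDS" ] && break
        sleep 10
    done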

fail 7067880 2022-10-14 15:14:23 2022-10-14 17:07:20 2022-10-14 17:19:45 0:12:25 0:07:24 0:05:01 smithi main rhel 8.6 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi047 with status 1: 'sudo yum -y install ceph-radosgw'

fail 7067881 2022-10-14 15:14:24 2022-10-14 17:07:20 2022-10-14 17:21:01 0:13:41 0:04:30 0:09:11 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi040 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

pass 7067882 2022-10-14 15:14:25 2022-10-14 17:07:20 2022-10-14 18:35:37 1:28:17 1:20:20 0:07:57 smithi main centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
fail 7067883 2022-10-14 15:14:26 2022-10-14 17:07:31 2022-10-14 20:44:02 3:36:31 3:23:43 0:12:48 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi099 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7dadb1ba04609ced6448c42c653c8a73a665d0c0 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'
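
Note: exit status 124 is how GNU timeout reports that the wrapped command hit its deadline (here, the timeout 3h around the workunit), i.e. the test ran for the full three hours under valgrind rather than failing outright. A quick demonstration:

    timeout 1 sleep 5; echo $?    # prints 124: the command was killed at the deadline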

fail 7067884 2022-10-14 15:14:27 2022-10-14 17:12:42 2022-10-14 17:49:18 0:36:36 0:25:17 0:11:19 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

pass 7067885 2022-10-14 15:14:28 2022-10-14 17:13:12 2022-10-14 17:30:48 0:17:36 0:07:25 0:10:11 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1