Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7003409 2022-09-01 00:22:55 2022-09-01 00:23:56 2022-09-01 00:39:25 0:15:29 0:06:21 0:09:08 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason: Command failed on smithi187 with status 1: 'kubectl apply -f https://docs.projectcalico.org/manifests/tigera-operator.yaml'

fail 7003410 2022-09-01 00:22:56 2022-09-01 00:24:06 2022-09-01 00:45:09 0:21:03 0:14:49 0:06:14 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} 1
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi114 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ab2d4e1d3292e01a3c9dec0d93cbe9c8ccc1e54 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7003411 2022-09-01 00:22:58 2022-09-01 00:24:07 2022-09-01 00:53:45 0:29:38 0:23:01 0:06:37 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} tasks/e2e} 2
fail 7003412 2022-09-01 00:22:59 2022-09-01 00:24:27 2022-09-01 01:02:03 0:37:36 0:27:03 0:10:33 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds

fail 7003413 2022-09-01 00:23:00 2022-09-01 00:24:47 2022-09-01 00:42:08 0:17:21 0:11:22 0:05:59 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason: Command failed (workunit test rados/test_librados_build.sh) on smithi073 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ab2d4e1d3292e01a3c9dec0d93cbe9c8ccc1e54 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_librados_build.sh'

fail 7003414 2022-09-01 00:23:01 2022-09-01 00:24:48 2022-09-01 00:45:39 0:20:51 0:11:45 0:09:06 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi163 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ab2d4e1d3292e01a3c9dec0d93cbe9c8ccc1e54 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7003415 2022-09-01 00:23:02 2022-09-01 00:25:48 2022-09-01 00:41:13 0:15:25 0:06:18 0:09:07 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason: Command failed on smithi136 with status 1: 'kubectl apply -f https://docs.projectcalico.org/manifests/tigera-operator.yaml'

pass 7003416 2022-09-01 00:23:04 2022-09-01 00:25:49 2022-09-01 02:50:38 2:24:49 2:16:32 0:08:17 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
pass 7003417 2022-09-01 00:23:05 2022-09-01 00:27:19 2022-09-01 05:12:20 4:45:01 4:38:19 0:06:42 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/osd-backfill} 1
pass 7003418 2022-09-01 00:23:06 2022-09-01 00:27:19 2022-09-01 03:06:24 2:39:05 2:18:53 0:20:12 smithi main rhel 8.6 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{rhel_8}} 1
fail 7003419 2022-09-01 00:23:07 2022-09-01 00:27:20 2022-09-01 00:45:35 0:18:15 0:06:57 0:11:18 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason: Command failed on smithi052 with status 1: 'kubectl apply -f https://docs.projectcalico.org/manifests/tigera-operator.yaml'

pass 7003420 2022-09-01 00:23:08 2022-09-01 00:27:50 2022-09-01 00:53:35 0:25:45 0:16:25 0:09:20 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
pass 7003421 2022-09-01 00:23:10 2022-09-01 00:30:41 2022-09-01 04:06:53 3:36:12 3:27:09 0:09:03 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/osd} 1
fail 7003422 2022-09-01 00:23:11 2022-09-01 00:33:42 2022-09-01 00:51:08 0:17:26 0:11:11 0:06:15 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason: Command failed (workunit test rados/test_librados_build.sh) on smithi070 with status 134: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ab2d4e1d3292e01a3c9dec0d93cbe9c8ccc1e54 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_librados_build.sh'

pass 7003423 2022-09-01 00:23:12 2022-09-01 00:33:42 2022-09-01 03:16:49 2:43:07 2:14:58 0:28:09 smithi main rhel 8.6 rados/objectstore/{backends/objectstore-bluestore-b supported-random-distro$/{rhel_8}} 1
fail 7003424 2022-09-01 00:23:13 2022-09-01 00:33:53 2022-09-01 00:54:56 0:21:03 0:14:58 0:06:05 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_cephadm} 1
Failure Reason: Command failed (workunit test cephadm/test_cephadm.sh) on smithi093 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4ab2d4e1d3292e01a3c9dec0d93cbe9c8ccc1e54 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'