Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 6995291 2022-08-26 16:55:44 2022-08-26 16:58:45 2022-08-26 17:14:09 0:15:24 0:06:15 0:09:09 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/master} 1
Failure Reason:

Command failed on smithi120 with status 1: 'kubectl apply -f https://docs.projectcalico.org/manifests/tigera-operator.yaml'

fail 6995292 2022-08-26 16:55:46 2022-08-26 16:58:45 2022-08-26 17:13:52 0:15:07 0:05:45 0:09:22 smithi main rados/cephadm/workunits/{agent/on mon_election/connectivity task/test_cephadm_repos} 1
Failure Reason:

Command failed (workunit test cephadm/test_repos.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=925d6d50c6abf38f110c774968b0ed462c9e5c17 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'

fail 6995293 2022-08-26 16:55:47 2022-08-26 16:58:46 2022-08-26 17:11:42 0:12:56 0:07:10 0:05:46 smithi main centos 8.stream rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi186 with status 1: 'sudo yum -y install ceph-test'

pass 6995294 2022-08-26 16:55:48 2022-08-26 16:58:46 2022-08-26 17:43:41 0:44:55 0:36:22 0:08:33 smithi main rados/cephadm/workunits/{agent/on mon_election/connectivity task/test_nfs} 1
pass 6995295 2022-08-26 16:55:49 2022-08-26 16:58:46 2022-08-26 17:26:49 0:28:03 0:18:03 0:10:00 smithi main rados/cephadm/workunits/{agent/off mon_election/classic task/test_orch_cli} 1
fail 6995296 2022-08-26 16:55:51 2022-08-26 16:58:47 2022-08-26 17:14:36 0:15:49 0:06:16 0:09:33 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi150 with status 1: 'kubectl apply -f https://docs.projectcalico.org/manifests/tigera-operator.yaml'

fail 6995297 2022-08-26 16:55:52 2022-08-26 16:58:47 2022-08-26 17:13:38 0:14:51 0:07:29 0:07:22 smithi main centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools_crun} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed on smithi073 with status 1: 'sudo yum -y install ceph-test'

fail 6995298 2022-08-26 16:55:53 2022-08-26 16:58:47 2022-08-26 17:14:36 0:15:49 0:07:17 0:08:32 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
Failure Reason:

Command failed on smithi071 with status 1: 'sudo yum -y install ceph-test'

fail 6995299 2022-08-26 16:55:54 2022-08-26 16:59:28 2022-08-26 17:15:17 0:15:49 0:05:41 0:10:08 smithi main rados/cephadm/workunits/{agent/off mon_election/classic task/test_cephadm_repos} 1
Failure Reason:

Command failed (workunit test cephadm/test_repos.sh) on smithi201 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=925d6d50c6abf38f110c774968b0ed462c9e5c17 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'

fail 6995300 2022-08-26 16:55:55 2022-08-26 16:59:48 2022-08-26 17:18:54 0:19:06 0:07:21 0:11:45 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi160 with status 1: 'kubectl create -f rook/deploy/examples/crds.yaml -f rook/deploy/examples/common.yaml -f operator.yaml'

fail 6995301 2022-08-26 16:55:57 2022-08-26 17:00:19 2022-08-26 17:15:23 0:15:04 0:08:57 0:06:07 smithi main rhel 8.6 rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi167 with status 1: 'sudo yum -y install ceph-test'

fail 6995302 2022-08-26 16:55:58 2022-08-26 17:00:19 2022-08-26 17:18:16 0:17:57 0:06:19 0:11:38 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi153 with status 1: 'kubectl apply -f https://docs.projectcalico.org/manifests/tigera-operator.yaml'

fail 6995303 2022-08-26 16:55:59 2022-08-26 17:02:30 2022-08-26 17:20:09 0:17:39 0:09:46 0:07:53 smithi main rhel 8.6 rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_3.0} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed on smithi187 with status 1: 'sudo yum -y install ceph-test'

fail 6995304 2022-08-26 16:56:01 2022-08-26 17:02:50 2022-08-26 17:17:56 0:15:06 0:08:25 0:06:41 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} tasks/readwrite} 2
Failure Reason:

Command failed on smithi134 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'

fail 6995305 2022-08-26 16:56:02 2022-08-26 17:02:51 2022-08-26 17:18:42 0:15:51 0:05:43 0:10:08 smithi main rados/cephadm/workunits/{agent/off mon_election/connectivity task/test_cephadm_repos} 1
Failure Reason:

Command failed (workunit test cephadm/test_repos.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=925d6d50c6abf38f110c774968b0ed462c9e5c17 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh'

fail 6995306 2022-08-26 16:56:03 2022-08-26 17:02:51 2022-08-26 17:20:23 0:17:32 0:06:44 0:10:48 smithi main rados/cephadm/workunits/{agent/off mon_election/connectivity task/test_nfs} 1
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=925d6d50c6abf38f110c774968b0ed462c9e5c17

fail 6995307 2022-08-26 16:56:04 2022-08-26 17:02:51 2022-08-26 17:19:09 0:16:18 0:07:18 0:09:00 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
Failure Reason:

Command failed on smithi103 with status 1: 'sudo yum -y install ceph-test'