Status   Job ID   Links   Posted   Started   Updated   Runtime   Duration   In Waiting   Machine   Teuthology Branch   OS Type   OS Version   Description   Nodes
fail 7045337 2022-09-27 14:09:08 2022-09-27 14:09:09 2022-09-27 14:24:26 0:15:17 0:04:42 0:10:35 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi196 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'
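Note: this is the stock Rook install sequence; `kubectl create` exits 1 if any of the listed resources already exist, so a retry of this step fails even when an earlier attempt half-succeeded. `operator.yaml` is taken from the working directory (presumably a copy customized by the test) rather than the Rook checkout. A minimal sketch of the same step run by hand, assuming a Rook 1.7.2 checkout at ./rook, a configured kubectl, and the stock operator.yaml; `kubectl apply` is used because it is idempotent:

    # Sketch only: re-run the failing install step idempotently.
    cd rook/cluster/examples/kubernetes/ceph
    kubectl apply -f crds.yaml -f common.yaml -f operator.yaml
    kubectl -n rook-ceph get pods        # the operator pod should reach Running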

pass 7045338 2022-09-27 14:09:09 2022-09-27 14:09:10 2022-09-27 14:40:36 0:31:26 0:25:15 0:06:11 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
fail 7045339 2022-09-27 14:09:10 2022-09-27 14:09:10 2022-09-27 14:30:46 0:21:36 0:10:30 0:11:06 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi045 with status 22: 'kubectl -n rook-ceph exec rook-ceph-tools-7564bb9799-xnxjf -- ceph orch apply osd --all-available-devices'
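Note: exit status 22 is EINVAL from the ceph CLI, which in this context usually means the orchestrator backend was not configured or not ready when the command ran. A hedged sketch of the same step run manually, assuming the standard rook-ceph-tools toolbox deployment and its app=rook-ceph-tools label:

    # Sketch only: look up the toolbox pod instead of hard-coding its hashed name.
    TOOLS=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o name | head -n 1)
    kubectl -n rook-ceph exec "$TOOLS" -- ceph orch status       # backend should report "rook"
    kubectl -n rook-ceph exec "$TOOLS" -- ceph orch device ls    # devices visible to the orchestrator
    kubectl -n rook-ceph exec "$TOOLS" -- ceph orch apply osd --all-available-devices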

fail 7045340 2022-09-27 14:09:12 2022-09-27 14:09:12 2022-09-27 14:24:49 0:15:37 0:04:45 0:10:52 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi078 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'
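Note: same install step and same failure mode as job 7045337 above (this run differs only in using net/host instead of net/calico); the kubectl apply sketch under that job applies here unchanged.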

fail 7045341 2022-09-27 14:09:13 2022-09-27 14:09:13 2022-09-27 14:36:29 0:27:16 0:19:38 0:07:38 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi102 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=e180dd701b19dfa839037d075a3eb0c71f1c6a95 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
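Note: the long command line is teuthology's workunit harness: it clones the ceph qa tree at CEPH_REF, wraps the script in adjust-ulimits and ceph-coverage, and enforces a 3h timeout; CEPH_CLI_TEST_DUP_COMMAND=1 appears to put the ceph CLI into a test mode that issues each command twice to check idempotency. The failing invocation, condensed (paths and SHA copied from the log; the harness wrappers are omitted here):

    # Sketch only: the workunit call without the coverage/ulimit wrappers.
    export TESTDIR=/home/ubuntu/cephtest
    export CEPH_REF=e180dd701b19dfa839037d075a3eb0c71f1c6a95     # build under test
    cd "$TESTDIR/mnt.0/client.0/tmp"
    timeout 3h "$TESTDIR/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh"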

dead 7045342 2022-09-27 14:09:14 2022-09-27 14:09:14 2022-09-27 14:34:46 0:25:32 0:15:22 0:10:10 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

{'smithi074.front.sepia.ceph.com': {'_ansible_no_log': False, 'attempts': 24, 'changed': False, 'invocation': {'module_args': {'allow_unauthenticated': False, 'autoclean': False, 'autoremove': False, 'cache_valid_time': 0, 'deb': None, 'default_release': None, 'dpkg_options': 'force-confdef,force-confold', 'force': False, 'force_apt_get': False, 'install_recommends': None, 'only_upgrade': False, 'package': None, 'policy_rc_d': None, 'purge': False, 'state': 'present', 'update_cache': True, 'update_cache_retries': 5, 'update_cache_retry_max_delay': 12, 'upgrade': None}}, 'msg': 'Failed to update apt cache: unknown reason'}}
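Note: this job is marked dead rather than fail because the node likely never finished provisioning: the Ansible apt task on smithi074 gave up after 24 attempts to refresh the package cache ("Failed to update apt cache: unknown reason"), which typically points to a transient mirror or network problem rather than a Ceph/Rook bug. On the node, the failing step amounts to:

    # Sketch only: what the Ansible apt module's update_cache does on the host.
    sudo apt-get update        # retried 24 times with backoff before the job was declared dead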