Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7186332 2023-02-24 16:09:48 2023-02-24 16:11:33 2023-02-24 16:28:14 0:16:41 0:06:03 0:10:38 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/master} 1
Failure Reason:

Command failed on smithi101 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7186333 2023-02-24 16:09:49 2023-02-24 16:11:33 2023-02-24 16:27:09 0:15:36 0:06:02 0:09:34 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi035 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7186334 2023-02-24 16:09:50 2023-02-24 16:11:34 2023-02-24 16:30:29 0:18:55 0:13:14 0:05:41 smithi main rhel 8.4 rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_3.0 agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi102 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cbccb547f47ec697c2e2ecf23392cc636ea19450 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

pass 7186335 2023-02-24 16:09:51 2023-02-24 16:11:34 2023-02-24 16:38:13 0:26:39 0:13:06 0:13:33 smithi main ubuntu 20.04 rados/objectstore/{backends/ceph_objectstore_tool supported-random-distro$/{ubuntu_latest}} 1
pass 7186336 2023-02-24 16:09:52 2023-02-24 16:11:34 2023-02-24 17:36:44 1:25:10 1:11:24 0:13:46 smithi main ubuntu 20.04 rados/singleton/{all/thrash-backfill-full mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest}} 2
fail 7186337 2023-02-24 16:09:53 2023-02-24 16:11:34 2023-02-24 16:31:17 0:19:43 0:06:19 0:13:24 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi097 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7186338 2023-02-24 16:09:54 2023-02-24 16:11:35 2023-02-24 18:51:34 2:39:59 2:31:14 0:08:45 smithi main rhel 8.4 rados/objectstore/{backends/filestore-idempotent supported-random-distro$/{rhel_8}} 1
pass 7186339 2023-02-24 16:09:55 2023-02-24 16:11:35 2023-02-24 16:48:27 0:36:52 0:29:40 0:07:12 smithi main rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{rhel_8} tasks/progress} 2
fail 7186340 2023-02-24 16:09:56 2023-02-24 16:11:35 2023-02-24 16:32:23 0:20:48 0:12:52 0:07:56 smithi main rhel 8.4 rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi099 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=cbccb547f47ec697c2e2ecf23392cc636ea19450 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 7186341 2023-02-24 16:09:57 2023-02-24 16:11:36 2023-02-24 16:27:20 0:15:44 0:05:59 0:09:45 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi064 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7186342 2023-02-24 16:09:59 2023-02-24 16:11:36 2023-02-24 17:57:31 1:45:55 1:33:10 0:12:45 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/radosbench} 2