Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 6413610 2021-09-29 14:51:46 2021-09-29 15:01:19 2021-09-29 15:16:35 0:15:16 0:06:50 0:08:26 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

Command failed on smithi063 with status 1: 'sudo kubeadm init --node-name smithi063 --token abcdef.4za98j5w2qhinx9s --pod-network-cidr 10.249.240.0/21'

fail 6413611 2021-09-29 14:51:47 2021-09-29 15:01:19 2021-09-29 15:32:17 0:30:58 0:21:01 0:09:57 smithi master centos 8.2 rados/cephadm/dashboard/{0-distro/centos_8.2_container_tools_3.0 task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi122 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7736cf93813c51819d021c72610c287f0a7891d1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6413612 2021-09-29 14:51:48 2021-09-29 15:01:19 2021-09-29 15:39:11 0:37:52 0:28:17 0:09:35 smithi master centos 8.3 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6413613 2021-09-29 14:51:49 2021-09-29 15:02:30 2021-09-29 15:27:36 0:25:06 0:14:54 0:10:12 smithi master ubuntu 20.04 rados/singleton/{all/rebuild-mondb mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
pass 6413614 2021-09-29 14:51:50 2021-09-29 15:02:50 2021-09-29 15:57:35 0:54:45 0:45:58 0:08:47 smithi master rhel 8.4 rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8}} 2
fail 6413615 2021-09-29 14:51:51 2021-09-29 15:04:40 2021-09-29 15:19:59 0:15:19 0:06:25 0:08:54 smithi master ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

Command failed on smithi183 with status 1: 'sudo kubeadm init --node-name smithi183 --token abcdef.y0dk0mfx2fzk47j7 --pod-network-cidr 10.253.176.0/21'

pass 6413616 2021-09-29 14:51:52 2021-09-29 15:04:51 2021-09-29 15:56:30 0:51:39 0:34:33 0:17:06 smithi master centos 8.2 rados/cephadm/thrash/{0-distro/centos_8.2_container_tools_3.0 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} 2
fail 6413617 2021-09-29 14:51:53 2021-09-29 15:11:22 2021-09-29 15:41:16 0:29:54 0:19:07 0:10:47 smithi master centos 8.2 rados/cephadm/dashboard/{0-distro/centos_8.2_container_tools_3.0 task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7736cf93813c51819d021c72610c287f0a7891d1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 6413618 2021-09-29 14:51:54 2021-09-29 15:11:23 2021-09-29 15:49:04 0:37:41 0:27:29 0:10:12 smithi master centos 8.3 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.3_container_tools_3.0 1-bootstrap/16.2.4 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
dead 6413619 2021-09-29 14:51:55 2021-09-29 15:11:54 2021-09-30 03:23:25 12:11:31 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason:

hit max job timeout

fail 6413620 2021-09-29 14:51:56 2021-09-29 15:12:55 2021-09-29 15:32:51 0:19:56 0:07:04 0:12:52 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi064 with status 1: 'sudo kubeadm init --node-name smithi064 --token abcdef.u8icl9p87eln0q98 --pod-network-cidr 10.249.248.0/21'

fail 6413621 2021-09-29 14:51:57 2021-09-29 15:13:05 2021-09-29 15:32:25 0:19:20 0:09:40 0:09:40 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/module_selftest} 2
Failure Reason:

Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)