Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 6389780 2021-09-14 16:06:06 2021-09-14 16:06:50 2021-09-14 16:45:39 0:38:49 0:29:48 0:09:01 smithi master centos 8.3 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-bitmap supported-random-distro$/{centos_8} tasks/module_selftest} 2
pass 6389781 2021-09-14 16:06:07 2021-09-14 16:06:50 2021-09-14 16:42:09 0:35:19 0:25:50 0:09:29 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
fail 6389782 2021-09-14 16:06:08 2021-09-14 16:06:50 2021-09-14 18:56:33 2:49:43 2:38:46 0:10:57 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason: 'check osd count' reached maximum tries (90) after waiting for 900 seconds

fail 6389783 2021-09-14 16:06:09 2021-09-14 16:06:51 2021-09-14 16:33:58 0:27:07 0:17:42 0:09:25 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi119 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5c916896e8b781d418cacaaf8882eed948c66b9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6389784 2021-09-14 16:06:10 2021-09-14 16:06:51 2021-09-14 16:40:37 0:33:46 0:21:35 0:12:11 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/1.7.0} 3
Failure Reason: 'wait for operator' reached maximum tries (90) after waiting for 900 seconds

pass 6389785 2021-09-14 16:06:11 2021-09-14 16:06:51 2021-09-14 16:32:27 0:25:36 0:18:24 0:07:12 smithi master rhel 8.4 rados/cephadm/smoke/{distro/rhel_8.4_container_tools_3.0 fixed-2 mon_election/classic start} 2
pass 6389786 2021-09-14 16:06:12 2021-09-14 16:06:51 2021-09-14 16:54:11 0:47:20 0:36:41 0:10:39 smithi master centos 8.3 rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on distro$/{centos_latest} mon_election/classic msgr-failures/fastclose rados thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 3
pass 6389787 2021-09-14 16:06:13 2021-09-14 16:06:52 2021-09-14 16:42:04 0:35:12 0:25:15 0:09:57 smithi master centos 8.2 rados/cephadm/mgr-nfs-upgrade/{0-centos_8.2_container_tools_3.0 1-bootstrap/16.2.5 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 6389788 2021-09-14 16:06:14 2021-09-14 16:06:53 2021-09-14 16:48:13 0:41:20 0:31:31 0:09:49 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
fail 6389789 2021-09-14 16:06:15 2021-09-14 16:06:53 2021-09-14 16:33:56 0:27:03 0:17:31 0:09:32 smithi master centos 8.2 rados/dashboard/{centos_8.2_container_tools_3.0 debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5c916896e8b781d418cacaaf8882eed948c66b9b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'