Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 6334924 2021-08-11 14:28:42 2021-08-11 15:41:16 2021-08-11 15:56:46 0:15:30 0:06:59 0:08:31 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

Command failed on smithi104 with status 1: 'sudo kubeadm init --node-name smithi104 --token abcdef.zoy5wvk942j3ymd8 --pod-network-cidr 10.251.56.0/21'

fail 6334925 2021-08-11 14:28:43 2021-08-11 15:41:36 2021-08-11 16:02:50 0:21:14 0:10:19 0:10:55 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/connectivity objectstore/bluestore-comp-zstd supported-random-distro$/{ubuntu_latest} tasks/module_selftest} 2
Failure Reason:

Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)

fail 6334926 2021-08-11 14:28:44 2021-08-11 15:41:36 2021-08-11 16:13:26 0:31:50 0:20:03 0:11:47 smithi master centos 8.2 rados/cephadm/dashboard/{0-distro/centos_8.2_kubic_stable task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi042 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f4c64d5e42cf943a0b09f9b8bf18e4b7e556ce8b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6334927 2021-08-11 14:28:45 2021-08-11 15:41:47 2021-08-11 16:09:18 0:27:31 0:17:23 0:10:08 smithi master centos 8.2 rados/singleton/{all/rebuild-mondb mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed on smithi114 with status 124: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json'

fail 6334928 2021-08-11 14:28:46 2021-08-11 15:42:37 2021-08-11 15:57:21 0:14:44 0:06:44 0:08:00 smithi master ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

Command failed on smithi013 with status 1: 'sudo kubeadm init --node-name smithi013 --token abcdef.2kndet48e30m4wv2 --pod-network-cidr 10.248.96.0/21'

fail 6334929 2021-08-11 14:28:47 2021-08-11 15:42:47 2021-08-11 16:13:08 0:30:21 0:19:11 0:11:10 smithi master centos 8.2 rados/cephadm/dashboard/{0-distro/centos_8.2_kubic_stable task/test_e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi167 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f4c64d5e42cf943a0b09f9b8bf18e4b7e556ce8b TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6334930 2021-08-11 14:28:49 2021-08-11 15:44:08 2021-08-11 16:05:13 0:21:05 0:07:25 0:13:40 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi085 with status 1: 'sudo kubeadm init --node-name smithi085 --token abcdef.w8lnm6tfyczupg9g --pod-network-cidr 10.250.160.0/21'

fail 6334931 2021-08-11 14:28:50 2021-08-11 15:44:58 2021-08-11 16:07:11 0:22:13 0:10:13 0:12:00 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/module_selftest} 2
Failure Reason:

Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)

pass 6334932 2021-08-11 14:28:51 2021-08-11 15:45:58 2021-08-11 16:16:46 0:30:48 0:21:25 0:09:23 smithi master rhel 8.4 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 6334933 2021-08-11 14:28:52 2021-08-11 15:47:39 2021-08-11 16:56:43 1:09:04 0:58:32 0:10:32 smithi master centos 8.2 rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
pass 6334934 2021-08-11 14:28:53 2021-08-11 15:47:59 2021-08-11 16:19:34 0:31:35 0:21:15 0:10:20 smithi master centos 8.2 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/small-objects-localized} 2
pass 6334935 2021-08-11 14:28:54 2021-08-11 15:48:39 2021-08-11 17:04:00 1:15:21 1:06:15 0:09:06 smithi master rhel 8.4 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3