Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 6268827 2021-07-13 23:16:32 2021-07-13 23:16:33 2021-07-13 23:37:21 0:20:48 0:12:08 0:08:40 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

Command failed on smithi018 with status 22: 'kubectl -n rook-ceph exec rook-ceph-tools-78cdfd976c-ck7vc -- ceph orch ps'
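
The same check can be rerun by hand against the Rook toolbox to see why the ceph CLI rejected the call (exit status 22 is EINVAL, which typically means the command was refused, e.g. no orchestrator backend configured yet). A minimal sketch, assuming the stock toolbox deployment name and labels from Rook's toolbox.yaml; the hashed pod name in the log is specific to this run:

# Find the toolbox pod instead of hard-coding its hashed name
kubectl -n rook-ceph get pods -l app=rook-ceph-tools
# Re-run the failing orchestrator query and check orchestrator state
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph orch status
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph orch ps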

fail 6268828 2021-07-13 23:16:33 2021-07-13 23:16:33 2021-07-13 23:38:54 0:22:21 0:10:14 0:12:07 smithi master centos 8.2 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi090 with status 127: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3140715d575c26ebd30aae7249fe0159e151dc95 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
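
Exit status 127 from a workunit usually means a command invoked inside the script was not found on the test node. The workunit can be rerun outside teuthology for debugging; a minimal sketch, assuming a local Ceph source checkout at ~/ceph and a running test cluster the ceph CLI can reach:

# test_envlibrados_for_rocksdb.sh builds RocksDB against the librados env plugin,
# so missing build tooling (cmake, compiler) is a plausible source of the 127
cd ~/ceph/qa/workunits/rados
bash ./test_envlibrados_for_rocksdb.sh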

fail 6268829 2021-07-13 23:16:34 2021-07-13 23:16:35 2021-07-13 23:37:55 0:21:20 0:11:07 0:10:13 smithi master ubuntu 18.04 rados/rook/smoke/{0-distro/ubuntu_18.04 0-kubeadm 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.6.2} 1
Failure Reason:

Command failed on smithi175 with status 22: 'kubectl -n rook-ceph exec rook-ceph-tools-7467d8bf8-q92nf -- ceph orch ps'

pass 6268830 2021-07-13 23:16:35 2021-07-13 23:16:35 2021-07-14 00:53:56 1:37:21 1:29:55 0:07:26 smithi master rhel 8.4 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-radosbench} 2
fail 6268831 2021-07-13 23:16:36 2021-07-13 23:16:37 2021-07-13 23:43:56 0:27:19 0:15:16 0:12:03 smithi master ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi057 with status 22: 'kubectl -n rook-ceph exec rook-ceph-tools-78cdfd976c-whzj5 -- ceph orch ps'

fail 6268832 2021-07-13 23:16:37 2021-07-13 23:16:37 2021-07-13 23:37:54 0:21:17 0:10:07 0:11:10 smithi master ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mon_election/classic objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} tasks/module_selftest} 2
Failure Reason:

Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)
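
This check corresponds to the mgr selftest harness exercising the diskprediction_local module, and it can be reproduced by hand on a cluster. A minimal sketch, assuming the selftest mgr module's self-test command (treat the exact command form as an assumption):

# Enable the module under test plus the selftest harness
ceph mgr module enable diskprediction_local
ceph mgr module enable selftest
# Ask the selftest harness to exercise the module, roughly what
# tasks.mgr.test_module_selftest.TestModuleSelftest.test_diskprediction_local does
ceph mgr self-test module diskprediction_local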

pass 6268833 2021-07-13 23:16:38 2021-07-13 23:16:39 2021-07-14 02:43:55 3:27:16 3:20:25 0:06:51 smithi master rhel 8.4 rados/standalone/{supported-random-distro$/{rhel_8} workloads/osd} 1
fail 6268834 2021-07-13 23:16:39 2021-07-13 23:16:39 2021-07-13 23:59:44 0:43:05 0:34:18 0:08:47 smithi master ubuntu 20.04 rados/singleton/{all/mon-config-keys mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3140715d575c26ebd30aae7249fe0159e151dc95 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_config_key.py'
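
The crashed script exercises the monitor's config-key store; the same interface can be poked directly with the ceph CLI to confirm the mon side is healthy. A minimal sketch (the key name is purely illustrative):

# Round-trip a value through the mon config-key store, the interface that
# qa/workunits/mon/test_mon_config_key.py stresses with randomized keys and payloads
ceph config-key set example/key somevalue
ceph config-key get example/key
ceph config-key ls
ceph config-key rm example/key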