Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 6932139 2022-07-15 14:46:42 2022-07-15 14:48:07 2022-07-15 15:29:08 0:41:01 0:27:47 0:13:14 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/3-node k8s/1.21 net/host rook/1.7.2} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

fail 6932140 2022-07-15 14:46:44 2022-07-15 14:48:07 2022-07-15 15:33:30 0:45:23 0:37:41 0:07:42 smithi main rhel 8.6 rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_3.0} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi086 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6932141 2022-07-15 14:46:46 2022-07-15 14:48:08 2022-07-15 15:45:13 0:57:05 0:50:17 0:06:48 smithi main rhel 8.6 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi089 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66bea86ab447b2db8a948c7913dc6c7a4a995c31 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 6932142 2022-07-15 14:46:47 2022-07-15 14:48:08 2022-07-15 15:17:08 0:29:00 0:22:26 0:06:34 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-stupid rados tasks/rados_cls_all validater/lockdep} 2
pass 6932143 2022-07-15 14:46:49 2022-07-15 14:48:08 2022-07-15 15:23:05 0:34:57 0:27:58 0:06:59 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 6932144 2022-07-15 14:46:51 2022-07-15 14:48:09 2022-07-15 15:21:20 0:33:11 0:23:50 0:09:21 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/1-node k8s/1.21 net/calico rook/master} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6932146 2022-07-15 14:46:52 2022-07-15 14:48:09 2022-07-15 15:49:42 1:01:33 0:53:43 0:07:50 smithi main centos 8.stream rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi187 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66bea86ab447b2db8a948c7913dc6c7a4a995c31 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 6932148 2022-07-15 14:46:54 2022-07-15 14:48:10 2022-07-15 15:11:23 0:23:13 0:14:28 0:08:45 smithi main rados/cephadm/workunits/{agent/off mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi106 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66bea86ab447b2db8a948c7913dc6c7a4a995c31 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'

fail 6932150 2022-07-15 14:46:55 2022-07-15 14:48:10 2022-07-15 15:28:11 0:40:01 0:29:02 0:10:59 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/3-node k8s/1.21 net/flannel rook/1.7.2} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

fail 6932152 2022-07-15 14:46:57 2022-07-15 14:48:10 2022-07-15 15:31:04 0:42:54 0:35:56 0:06:58 smithi main rhel 8.6 rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_rhel8} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_rgw.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=pacific TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_rgw.sh'

fail 6932154 2022-07-15 14:46:58 2022-07-15 14:48:21 2022-07-15 15:48:41 1:00:20 0:49:58 0:10:22 smithi main ubuntu 20.04 rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi072 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66bea86ab447b2db8a948c7913dc6c7a4a995c31 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 6932156 2022-07-15 14:47:00 2022-07-15 14:48:21 2022-07-15 15:32:48 0:44:27 0:38:56 0:05:31 smithi main centos 8.stream rados/valgrind-leaks/{1-start 2-inject-leak/osd centos_latest} 1
Failure Reason:

"2022-07-15T15:26:47.122106+0000 mon.a (mon.0) 115 : cluster [WRN] Health check failed: 2 osds down (OSD_DOWN)" in cluster log

pass 6932158 2022-07-15 14:47:02 2022-07-15 14:48:22 2022-07-15 15:14:47 0:26:25 0:16:40 0:09:45 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/rados_python} 2
fail 6932160 2022-07-15 14:47:03 2022-07-15 14:49:12 2022-07-15 15:21:59 0:32:47 0:22:43 0:10:04 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/1-node k8s/1.21 net/host rook/master} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

fail 6932163 2022-07-15 14:47:05 2022-07-15 14:49:12 2022-07-15 15:46:06 0:56:54 0:50:17 0:06:37 smithi main rhel 8.6 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi196 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66bea86ab447b2db8a948c7913dc6c7a4a995c31 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 6932165 2022-07-15 14:47:06 2022-07-15 14:49:13 2022-07-15 17:57:06 3:07:53 3:01:45 0:06:08 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/scrub} 1
fail 6932167 2022-07-15 14:47:08 2022-07-15 14:49:13 2022-07-15 15:49:57 1:00:44 0:53:22 0:07:22 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi035 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=66bea86ab447b2db8a948c7913dc6c7a4a995c31 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'