Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 6884368 2022-06-17 13:55:42 2022-06-17 13:56:33 2022-06-17 14:27:44 0:31:11 0:22:21 0:08:50 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds
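
This message format matches teuthology's safe_while retry helper (teuthology.contextutil), which the rook smoke task appears to use to poll until the rook operator is up. A minimal sketch of that pattern, assuming a 10-second sleep and 90 tries (90 × 10 s = 900 s, matching the message); the readiness probe here is hypothetical. The 'check osd count' timeouts further down read as the same mechanism with a different action label.

```python
from teuthology.contextutil import safe_while

def operator_is_ready():
    # Hypothetical probe; the real task shells out to kubectl to
    # inspect the rook-ceph operator pod.
    return False

# 90 tries with a 10-second sleep = 900 seconds, matching the
# "reached maximum tries (90) after waiting for 900 seconds" message.
with safe_while(sleep=10, tries=90, action='wait for operator') as proceed:
    while proceed():
        if operator_is_ready():
            break
```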

pass 6884369 2022-06-17 13:55:43 2022-06-17 13:56:33 2022-06-17 14:29:42 0:33:09 0:27:18 0:05:51 smithi main rhel 8.4 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{rhel_8} tasks/insights} 2
fail 6884370 2022-06-17 13:55:45 2022-06-17 13:56:54 2022-06-17 14:26:51 0:29:57 0:21:51 0:08:06 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1173fc43a300ed0562317339f06958aa5c7aaec2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6884371 2022-06-17 13:55:46 2022-06-17 13:57:44 2022-06-17 14:38:27 0:40:43 0:29:17 0:11:26 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

pass 6884372 2022-06-17 13:55:47 2022-06-17 13:59:05 2022-06-17 15:38:27 1:39:22 1:30:29 0:08:53 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-radosbench} 2
fail 6884373 2022-06-17 13:55:48 2022-06-17 13:59:05 2022-06-17 14:46:11 0:47:06 0:37:26 0:09:40 smithi main rados/cephadm/workunits/{agent/on mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_create_delete_cluster_idempotency (tasks.cephfs.test_nfs.TestNFS)
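
This test lives in qa/tasks/cephfs/test_nfs.py; judging by its name, it exercises the idempotency of NFS cluster creation and deletion via the ceph CLI. A hedged sketch of that sequence, assuming a repeated create (or rm) for the same cluster id is expected to succeed rather than error; the cluster id is made up, and the commands follow the ceph nfs docs (cluster create/ls/rm) rather than the test's internals.

```python
import subprocess

def ceph(*args):
    # Thin wrapper over the ceph CLI; raises if the command fails.
    return subprocess.run(("ceph",) + args, check=True)

CLUSTER = "mynfs"  # hypothetical cluster id, for illustration only

# Idempotency as the test name suggests: repeating either operation
# for the same cluster id is assumed to be a no-op, not an error.
ceph("nfs", "cluster", "create", CLUSTER)
ceph("nfs", "cluster", "create", CLUSTER)  # second create: assumed no-op
ceph("nfs", "cluster", "ls")
ceph("nfs", "cluster", "rm", CLUSTER)
ceph("nfs", "cluster", "rm", CLUSTER)      # second rm: assumed no-op
```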

pass 6884374 2022-06-17 13:55:50 2022-06-17 13:59:06 2022-06-17 14:32:00 0:32:54 0:26:57 0:05:57 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zlib rados tasks/rados_api_tests validater/lockdep} 2
fail 6884375 2022-06-17 13:55:51 2022-06-17 13:59:06 2022-06-17 14:29:58 0:30:52 0:22:32 0:08:20 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

'wait for operator' reached maximum tries (90) after waiting for 900 seconds

pass 6884376 2022-06-17 13:55:52 2022-06-17 13:59:07 2022-06-17 16:28:27 2:29:20 2:22:49 0:06:31 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
fail 6884377 2022-06-17 13:55:53 2022-06-17 13:59:07 2022-06-17 14:29:25 0:30:18 0:23:10 0:07:08 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1173fc43a300ed0562317339f06958aa5c7aaec2 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 6884378 2022-06-17 13:55:55 2022-06-17 13:59:08 2022-06-17 14:38:54 0:39:46 0:30:16 0:09:30 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds
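
For reference, the timing columns above are self-consistent: Runtime is the span from Started to Updated, and it decomposes into Duration (time the job was actually running) plus In Waiting (read here as time spent queued before the job ran, which is an assumption about the column). A quick check against the first row (job 6884368):

```python
from datetime import timedelta

# Job 6884368: Runtime 0:31:11, Duration 0:22:21, In Waiting 0:08:50.
runtime    = timedelta(minutes=31, seconds=11)
duration   = timedelta(minutes=22, seconds=21)
in_waiting = timedelta(minutes=8,  seconds=50)

assert runtime == duration + in_waiting  # 0:22:21 + 0:08:50 == 0:31:11
```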