Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7047590 2022-09-29 15:12:14 2022-09-29 15:13:04 2022-09-29 15:48:29 0:35:25 0:24:46 0:10:39 smithi main ubuntu 20.04 rados/cephadm/smoke/{0-distro/ubuntu_20.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
fail 7047591 2022-09-29 15:12:15 2022-09-29 15:13:05 2022-09-29 15:30:36 0:17:31 0:07:14 0:10:17 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:
Command failed on smithi188 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

dead 7047592 2022-09-29 15:12:16 2022-09-29 15:13:05 2022-09-30 03:24:47 12:11:42 smithi main centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:
hit max job timeout

fail 7047593 2022-09-29 15:12:17 2022-09-29 15:13:05 2022-09-29 15:44:00 0:30:55 0:23:15 0:07:40 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} tasks/e2e} 2
Failure Reason:
Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi040 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=057e804372afec7e777c98460914a8ec1936cb20 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7047594 2022-09-29 15:12:19 2022-09-29 15:13:06 2022-09-29 15:50:47 0:37:41 0:27:57 0:09:44 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:
'check osd count' reached maximum tries (90) after waiting for 900 seconds

pass 7047595 2022-09-29 15:12:20 2022-09-29 15:13:06 2022-09-29 15:53:57 0:40:51 0:35:24 0:05:27 smithi main ubuntu 20.04 rados/singleton/{all/thrash-eio mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 2
pass 7047596 2022-09-29 15:12:21 2022-09-29 15:13:07 2022-09-29 15:48:45 0:35:38 0:28:04 0:07:34 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7047597 2022-09-29 15:12:23 2022-09-29 15:13:07 2022-09-29 15:51:02 0:37:55 0:31:21 0:06:34 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 7047598 2022-09-29 15:12:24 2022-09-29 15:13:07 2022-09-29 15:29:59 0:16:52 0:07:27 0:09:25 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:
Command failed on smithi035 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

fail 7047599 2022-09-29 15:12:25 2022-09-29 15:13:08 2022-09-29 15:34:12 0:21:04 0:15:13 0:05:51 smithi main rhel 8.6 rados/upgrade/parallel/{0-random-distro$/{rhel_8.6_container_tools_3.0} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:
Command failed on smithi106 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/daemon-base:latest-pacific shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c7742474-400b-11ed-8432-001a4aab830c -- ceph mon dump -f json'

fail 7047600 2022-09-29 15:12:27 2022-09-29 15:13:08 2022-09-29 15:26:07 0:12:59 0:07:06 0:05:53 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason:
Command failed on smithi061 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 7047601 2022-09-29 15:12:28 2022-09-29 15:13:09 2022-09-29 15:52:40 0:39:31 0:18:47 0:20:44 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} 1
Failure Reason:
Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)

fail 7047602 2022-09-29 15:12:29 2022-09-29 15:23:11 2022-09-29 18:02:08 2:38:57 2:19:05 0:19:52 smithi main rhel 8.6 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{rhel_8}} 1
Failure Reason:
Command failed on smithi179 with status 1: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c \'mkdir $TESTDIR/archive/ostest && cd $TESTDIR/archive/ostest && ulimit -Sn 16384 && CEPH_ARGS="--no-log-to-stderr --log-file $TESTDIR/archive/ceph_test_objectstore.log --debug-bluestore 20" ceph_test_objectstore --gtest_filter=*/2:-*SyntheticMatrixC* --gtest_catch_exceptions=0\''

dead 7047603 2022-09-29 15:12:30 2022-09-29 15:23:12 2022-09-30 03:32:54 12:09:42 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{rhel_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
Failure Reason:
hit max job timeout

pass 7047604 2022-09-29 15:12:32 2022-09-29 15:23:32 2022-09-29 15:54:55 0:31:23 0:23:51 0:07:32 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} 2
fail 7047605 2022-09-29 15:12:33 2022-09-29 15:24:03 2022-09-29 16:02:22 0:38:19 0:28:18 0:10:01 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:
'check osd count' reached maximum tries (90) after waiting for 900 seconds