Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
(Runtime = Updated - Started, and Duration + In Waiting = Runtime; see the sanity check after the table.)
pass 7261635 2023-05-03 19:31:03 2023-05-03 20:04:57 2023-05-03 20:44:53 0:39:56 0:29:36 0:10:20 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 7261636 2023-05-03 19:31:04 2023-05-03 20:04:57 2023-05-03 20:41:21 0:36:24 0:24:35 0:11:49 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 2
Failure Reason:

"1683146218.0263522 osd.5 (osd.5) 84 : cluster [WRN] WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event" in cluster log

pass 7261637 2023-05-03 19:31:05 2023-05-03 20:05:18 2023-05-03 23:04:29 2:59:11 2:52:37 0:06:34 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/scrub} 1
pass 7261638 2023-05-03 19:31:06 2023-05-03 20:05:18 2023-05-03 20:58:41 0:53:23 0:41:45 0:11:38 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 7261639 2023-05-03 19:31:07 2023-05-03 20:05:48 2023-05-03 20:22:58 0:17:10 0:06:23 0:10:47 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi082 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7261640 2023-05-03 19:31:07 2023-05-03 20:06:59 2023-05-03 20:48:28 0:41:29 0:29:54 0:11:35 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-low-osd-mem-target} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=44b0182202b1b7612e75a17bf59ed1658114c7e9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7261641 2023-05-03 19:31:08 2023-05-03 20:07:09 2023-05-03 20:25:58 0:18:49 0:06:28 0:12:21 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi101 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7261642 2023-05-03 19:31:09 2023-05-03 20:08:00 2023-05-03 20:34:25 0:26:25 0:17:37 0:08:48 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7261643 2023-05-03 19:31:10 2023-05-03 20:08:20 2023-05-03 20:51:03 0:42:43 0:35:20 0:07:23 smithi main centos 8.stream rados/singleton-bluestore/{all/cephtool mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 1
fail 7261644 2023-05-03 19:31:11 2023-05-03 20:08:21 2023-05-03 20:24:07 0:15:46 0:06:21 0:09:25 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi142 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7261645 2023-05-03 19:31:12 2023-05-03 20:08:21 2023-05-03 20:29:39 0:21:18 0:12:03 0:09:15 smithi main ubuntu 22.04 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi132 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=44b0182202b1b7612e75a17bf59ed1658114c7e9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 7261646 2023-05-03 19:31:12 2023-05-03 20:08:21 2023-05-03 20:31:29 0:23:08 0:16:22 0:06:46 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
fail 7261647 2023-05-03 19:31:13 2023-05-03 20:08:22 2023-05-03 20:39:23 0:31:01 0:22:20 0:08:41 smithi main centos 8.stream rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi046 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=44b0182202b1b7612e75a17bf59ed1658114c7e9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 7261648 2023-05-03 19:31:14 2023-05-03 20:10:12 2023-05-03 21:00:57 0:50:45 0:33:41 0:17:04 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} 5
fail 7261649 2023-05-03 19:31:15 2023-05-03 20:19:04 2023-05-03 20:56:33 0:37:29 0:30:44 0:06:45 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi032 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=44b0182202b1b7612e75a17bf59ed1658114c7e9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7261650 2023-05-03 19:31:16 2023-05-03 20:19:24 2023-05-03 20:36:33 0:17:09 0:06:20 0:10:49 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi070 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
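
A quick sanity check of the three time columns, using the values from job 7261635 in the first row. This assumes the usual pulpito convention that Runtime is the wall-clock span from Started to Updated and that In Waiting is the portion of that span not spent running the test itself; the snippet below is an illustrative sketch, not part of the run output.

from datetime import datetime, timedelta

FMT = "%Y-%m-%d %H:%M:%S"

# Values copied from job 7261635 above.
started = datetime.strptime("2023-05-03 20:04:57", FMT)
updated = datetime.strptime("2023-05-03 20:44:53", FMT)
duration = timedelta(minutes=29, seconds=36)   # Duration column: 0:29:36

runtime = updated - started                    # expect 0:39:56 (Runtime column)
in_waiting = runtime - duration                # expect 0:10:20 (In Waiting column)

print(runtime, duration, in_waiting)           # -> 0:39:56 0:29:36 0:10:20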