Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
pass 7257771 2023-04-28 19:32:59 2023-04-28 19:45:21 2023-04-28 21:02:45 1:17:24 0:48:09 0:29:15 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
fail 7257772 2023-04-28 19:33:00 2023-04-28 19:45:21 2023-04-28 20:11:37 0:26:16 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus-v1only backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Command failed on smithi187 with status 1: 'sudo yum install -y kernel'

pass 7257773 2023-04-28 19:33:00 2023-04-28 19:46:32 2023-04-28 21:46:13 1:59:41 1:29:10 0:30:31 smithi main centos 8.stream rados/singleton-nomsgr/{all/full-tiering mon_election/classic rados supported-random-distro$/{centos_8}} 1
fail 7257774 2023-04-28 19:33:01 2023-04-28 19:46:52 2023-04-28 20:12:16 0:25:24 0:18:14 0:07:10 smithi main rhel 8.6 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi202 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f3373c7976e8e598ac4c962abd6b6e1dbee9b11d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7257775 2023-04-28 19:33:02 2023-04-28 19:47:42 2023-04-28 20:12:46 0:25:04 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8} tasks/repair_test} 2
Failure Reason:

Command failed on smithi038 with status 1: 'sudo yum install -y kernel'

fail 7257776 2023-04-28 19:33:02 2023-04-28 19:47:53 2023-04-28 21:49:42 2:01:49 1:26:28 0:35:21 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed on smithi023 with status 1: 'sudo yum -y install ceph-mgr-diskprediction-local'

fail 7257777 2023-04-28 19:33:03 2023-04-28 19:51:24 2023-04-28 20:10:14 0:18:50 0:06:18 0:12:32 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/master} 1
Failure Reason:

Command failed on smithi158 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7257778 2023-04-28 19:33:04 2023-04-28 19:51:24 2023-04-28 22:13:44 2:22:20 1:49:23 0:32:57 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/lockdep} 2
pass 7257779 2023-04-28 19:33:05 2023-04-28 19:54:35 2023-04-28 22:03:57 2:09:22 1:39:22 0:30:00 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_orch_cli} 1
fail 7257780 2023-04-28 19:33:05 2023-04-28 19:54:35 2023-04-28 20:13:09 0:18:34 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/nautilus backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/few rados thrashers/pggrow thrashosds-health workloads/test_rbd_api} 3
Failure Reason:

Command failed on smithi188 with status 1: 'sudo yum install -y kernel'

pass 7257781 2023-04-28 19:33:06 2023-04-28 19:55:16 2023-04-28 20:18:04 0:22:48 0:17:45 0:05:03 smithi main rhel 8.6 rados/singleton-nomsgr/{all/version-number-sanity mon_election/classic rados supported-random-distro$/{rhel_8}} 1
fail 7257782 2023-04-28 19:33:07 2023-04-28 19:55:36 2023-04-28 20:11:49 0:16:13 0:06:16 0:09:57 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi046 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7257783 2023-04-28 19:33:08 2023-04-28 19:55:36 2023-04-28 23:18:58 3:23:22 2:48:59 0:34:23 smithi main centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
pass 7257784 2023-04-28 19:33:08 2023-04-28 19:57:47 2023-04-28 20:22:40 0:24:53 0:19:45 0:05:08 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} 1
pass 7257785 2023-04-28 19:33:09 2023-04-28 19:57:47 2023-04-28 22:13:40 2:15:53 1:44:55 0:30:58 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/small-objects-localized} 2
pass 7257786 2023-04-28 19:33:10 2023-04-28 19:58:28 2023-04-28 22:18:10 2:19:42 1:42:06 0:37:36 smithi main centos 8.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7257787 2023-04-28 19:33:11 2023-04-28 20:06:39 2023-04-28 22:44:04 2:37:25 2:06:42 0:30:43 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi057 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f3373c7976e8e598ac4c962abd6b6e1dbee9b11d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7257788 2023-04-28 19:33:11 2023-04-28 20:06:40 2023-04-28 20:25:30 0:18:50 0:06:27 0:12:23 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi132 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7257789 2023-04-28 19:33:12 2023-04-28 20:07:20 2023-04-28 20:44:46 0:37:26 0:24:20 0:13:06 smithi main ubuntu 22.04 rados/singleton/{all/recovery-preemption mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

"1682714131.816314 osd.3 (osd.3) 155 : cluster [WRN] WaitReplicas::react(const DigestUpdate&): Unexpected DigestUpdate event" in cluster log

pass 7257790 2023-04-28 19:33:13 2023-04-28 20:10:21 2023-04-28 20:50:50 0:40:29 0:31:19 0:09:10 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 7257791 2023-04-28 19:33:13 2023-04-28 20:11:41 2023-04-28 20:27:19 0:15:38 0:06:17 0:09:21 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi187 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7257792 2023-04-28 19:33:14 2023-04-28 20:11:42 2023-04-28 20:33:02 0:21:20 0:11:29 0:09:51 smithi main ubuntu 22.04 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi086 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f3373c7976e8e598ac4c962abd6b6e1dbee9b11d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7257793 2023-04-28 19:33:15 2023-04-28 20:11:42 2023-04-28 21:16:52 1:05:10 0:36:22 0:28:48 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason:

Command failed on smithi046 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

pass 7257794 2023-04-28 19:33:16 2023-04-28 20:11:52 2023-04-28 22:26:28 2:14:36 1:43:05 0:31:31 smithi main centos 8.stream rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_8} tasks/mon_clock_no_skews} 3
pass 7257795 2023-04-28 19:33:16 2023-04-28 20:12:53 2023-04-28 22:23:58 2:11:05 1:41:44 0:29:21 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7257796 2023-04-28 19:33:17 2023-04-28 20:12:53 2023-04-28 22:26:39 2:13:46 1:44:21 0:29:25 smithi main centos 8.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 7257797 2023-04-28 19:33:18 2023-04-28 20:13:14 2023-04-28 22:25:47 2:12:33 1:40:08 0:32:25 smithi main centos 8.stream rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 2
pass 7257798 2023-04-28 19:33:19 2023-04-28 20:15:54 2023-04-28 22:26:27 2:10:33 1:39:43 0:30:50 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-small-objects-many-deletes} 2