Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7195492 2023-03-06 22:10:29 2023-03-06 22:11:32 2023-03-06 22:38:25 0:26:53 0:17:24 0:09:29 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_orch_cli} 1
Failure Reason: Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)

fail 7195493 2023-03-06 22:10:31 2023-03-06 22:11:32 2023-03-06 22:27:49 0:16:17 0:05:56 0:10:21 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason: Command failed on smithi161 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7195494 2023-03-06 22:10:32 2023-03-06 22:11:33 2023-03-06 22:37:31 0:25:58 0:18:59 0:06:59 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} 1
pass 7195495 2023-03-06 22:10:33 2023-03-06 22:12:33 2023-03-06 22:46:29 0:33:56 0:26:38 0:07:18 smithi main rhel 8.6 rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8} tasks/mon_recovery} 3
fail 7195496 2023-03-06 22:10:34 2023-03-06 22:12:43 2023-03-06 22:34:44 0:22:01 0:06:15 0:15:46 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason: Command failed on smithi029 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7195497 2023-03-06 22:10:35 2023-03-06 22:15:14 2023-03-06 22:58:19 0:43:05 0:32:15 0:10:50 smithi main centos 8.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
pass 7195498 2023-03-06 22:10:36 2023-03-06 22:17:15 2023-03-06 23:26:27 1:09:12 0:59:16 0:09:56 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_osdmap_prune} 2
fail 7195499 2023-03-06 22:10:38 2023-03-06 22:17:25 2023-03-06 22:34:20 0:16:55 0:05:56 0:10:59 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason: Command failed on smithi061 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7195500 2023-03-06 22:10:39 2023-03-06 22:18:26 2023-03-06 22:37:19 0:18:53 0:13:23 0:05:30 smithi main rhel 8.6 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 1
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi064 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5b264966285032609684610d5490bbbef09c1433 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 7195501 2023-03-06 22:10:40 2023-03-06 22:18:26 2023-03-06 22:57:07 0:38:41 0:30:14 0:08:27 smithi main rhel 8.6 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-zstd rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7195502 2023-03-06 22:10:41 2023-03-06 22:19:26 2023-03-06 22:37:48 0:18:22 0:07:16 0:11:06 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason: Command failed on smithi044 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 7195503 2023-03-06 22:10:42 2023-03-06 22:21:07 2023-03-06 22:40:54 0:19:47 0:09:13 0:10:34 smithi main ubuntu 20.04 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason: Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi133 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5b264966285032609684610d5490bbbef09c1433 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7195504 2023-03-06 22:10:44 2023-03-06 22:21:17 2023-03-06 22:38:15 0:16:58 0:06:02 0:10:56 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason: Command failed on smithi018 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'