Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 7200981 2023-03-10 15:01:40 2023-03-10 15:03:46 2023-03-10 15:34:43 0:30:57 0:25:05 0:05:52 smithi main rhel 8.4 rados/cephadm/workunits/{0-distro/rhel_8.4_container_tools_rhel8 agent/on mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)
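Note: this is a Python unittest from the Ceph QA tree (qa/tasks/cephfs/test_nfs.py). A hedged local repro sketch, assuming a source checkout with a running vstart cluster and the vstart_runner harness described in the Ceph developer docs (not part of this run's output):

    # From the build directory of a Ceph checkout, re-run only the
    # failing case against the local vstart cluster (assumption: the
    # nfs mgr module behaves under vstart as it does on the test node):
    python3 ../qa/tasks/vstart_runner.py \
        tasks.cephfs.test_nfs.TestNFS.test_non_existent_cluster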

fail 7200983 2023-03-10 15:01:41 2023-03-10 15:03:46 2023-03-10 15:33:33 0:29:47 0:19:01 0:10:46 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi032 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0b8d45143e005142a4bb7830803ffffe8fcff26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
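Note: the long command above is the standard teuthology workunit wrapper; everything before the script path sets up a scratch directory, exports the CEPH_* environment, and applies a 3-hour timeout. A hedged sketch of a minimal manual equivalent (assumes a local checkout and a reachable test cluster; the environment variables are taken from the log itself):

    # Run the same workunit script by hand, keeping only the essential
    # environment from the wrapper (hypothetical local paths):
    cd ceph/qa/workunits
    CEPH_ARGS="--cluster ceph" CEPH_ID="0" \
        timeout 3h ./cephadm/test_dashboard_e2e.sh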

fail 7200985 2023-03-10 15:01:42 2023-03-10 15:04:27 2023-03-10 15:20:29 0:16:02 0:06:04 0:09:58 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/master} 1
Failure Reason:

Command failed on smithi100 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
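Note: every rados/rook/smoke job in this run fails at this same pre-Rook bootstrap step, before any Ceph component is involved. A hedged triage sketch that splits the compound command so the failing half is visible (the images-list step is an added check, not from the run):

    # On the affected node, run the two halves separately:
    sudo systemctl enable --now kubelet   # first half of the failing command
    sudo kubeadm config images list       # added check: which images/registries pull would use
    sudo kubeadm config images pull       # second half of the failing command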

fail 7200987 2023-03-10 15:01:43 2023-03-10 15:04:37 2023-03-10 15:20:21 0:15:44 0:06:10 0:09:34 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi179 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7200989 2023-03-10 15:01:43 2023-03-10 15:04:38 2023-03-10 15:49:46 0:45:08 0:29:51 0:15:17 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)

fail 7200991 2023-03-10 15:01:44 2023-03-10 15:10:09 2023-03-10 15:28:56 0:18:47 0:12:42 0:06:05 smithi main rhel 8.4 rados/singleton/{all/test_envlibrados_for_rocksdb mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/filestore-xfs rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0b8d45143e005142a4bb7830803ffffe8fcff26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7200993 2023-03-10 15:01:45 2023-03-10 15:10:09 2023-03-10 15:40:37 0:30:28 0:19:55 0:10:33 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi035 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0b8d45143e005142a4bb7830803ffffe8fcff26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7200995 2023-03-10 15:01:46 2023-03-10 15:11:50 2023-03-10 15:32:28 0:20:38 0:06:20 0:14:18 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none 3-final cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi099 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7200997 2023-03-10 15:01:47 2023-03-10 15:14:30 2023-03-10 15:30:21 0:15:51 0:06:11 0:09:40 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench 3-final cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi117 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7200999 2023-03-10 15:01:47 2023-03-10 15:14:31 2023-03-10 16:00:12 0:45:41 0:36:32 0:09:09 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/rados_api_tests} 2
pass 7201001 2023-03-10 15:01:48 2023-03-10 15:14:51 2023-03-10 15:39:00 0:24:09 0:13:07 0:11:02 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/readwrite} 2
pass 7201003 2023-03-10 15:01:49 2023-03-10 15:14:51 2023-03-10 15:55:08 0:40:17 0:30:33 0:09:44 smithi main rhel 8.4 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7201005 2023-03-10 15:01:50 2023-03-10 15:18:02 2023-03-10 15:45:38 0:27:36 0:18:36 0:09:00 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/repair_test} 2
fail 7201007 2023-03-10 15:01:50 2023-03-10 15:18:33 2023-03-10 15:53:16 0:34:43 0:24:12 0:10:31 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_non_existent_cluster (tasks.cephfs.test_nfs.TestNFS)

fail 7201010 2023-03-10 15:01:51 2023-03-10 15:20:23 2023-03-10 16:32:04 1:11:41 1:00:43 0:10:58 smithi main rhel 8.4 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8} thrashers/many workloads/rados_mon_osdmap_prune} 2
Failure Reason:

Command failed (workunit test mon/test_mon_osdmap_prune.sh) on smithi190 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f0b8d45143e005142a4bb7830803ffffe8fcff26 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/mon/test_mon_osdmap_prune.sh'

pass 7201011 2023-03-10 15:01:52 2023-03-10 15:24:54 2023-03-10 15:49:16 0:24:22 0:12:44 0:11:38 smithi main centos 8.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-lz4} supported-random-distro$/{centos_8} tasks/workunits} 2