Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7231066 2023-04-04 15:18:56 2023-04-04 15:20:21 2023-04-04 15:42:36 0:22:15 0:10:53 0:11:22 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

fail 7231067 2023-04-04 15:18:57 2023-04-04 15:20:21 2023-04-04 15:36:43 0:16:22 0:06:16 0:10:06 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi203 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7231068 2023-04-04 15:18:58 2023-04-04 15:20:31 2023-04-04 15:55:10 0:34:39 0:23:10 0:11:29 smithi main ubuntu 22.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} 2
Failure Reason:

"1680623266.323506 mon.a (mon.0) 511 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7231069 2023-04-04 15:18:58 2023-04-04 15:21:22 2023-04-04 17:38:38 2:17:16 1:47:41 0:29:35 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi003 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=33b501cea07585ff7fe78db2ee3a55571b1f3e5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7231070 2023-04-04 15:18:59 2023-04-04 15:21:32 2023-04-04 15:40:34 0:19:02 0:06:32 0:12:30 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi040 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7231071 2023-04-04 15:19:00 2023-04-04 15:22:13 2023-04-04 17:09:58 1:47:45 1:18:33 0:29:12 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi170.front.sepia.ceph.com: ['type=AVC msg=audit(1680628086.870:19758): avc: denied { ioctl } for pid=155771 comm="iptables" path="/var/lib/containers/storage/overlay/fc55666ab11abce6739df1008f3f08110fc4bcee610669a4f0aef013f17675d5/merged" dev="overlay" ino=3411043 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

pass 7231072 2023-04-04 15:19:01 2023-04-04 15:22:43 2023-04-04 16:44:09 1:21:26 1:11:19 0:10:07 smithi main ubuntu 22.04 rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/misc} 1
pass 7231073 2023-04-04 15:19:01 2023-04-04 15:22:43 2023-04-04 17:55:18 2:32:35 2:00:46 0:31:49 smithi main centos 8.stream rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_osdmap_prune} 2
fail 7231074 2023-04-04 15:19:02 2023-04-04 15:22:54 2023-04-04 15:38:48 0:15:54 0:06:16 0:09:38 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi179 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7231075 2023-04-04 15:19:03 2023-04-04 15:22:54 2023-04-04 15:44:38 0:21:44 0:11:55 0:09:49 smithi main ubuntu 22.04 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi136 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=33b501cea07585ff7fe78db2ee3a55571b1f3e5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7231076 2023-04-04 15:19:04 2023-04-04 15:23:04 2023-04-04 17:26:35 2:03:31 1:34:23 0:29:08 smithi main centos 8.stream rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi112 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=33b501cea07585ff7fe78db2ee3a55571b1f3e5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7231077 2023-04-04 15:19:05 2023-04-04 15:23:15 2023-04-04 17:43:27 2:20:12 1:47:35 0:32:37 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=33b501cea07585ff7fe78db2ee3a55571b1f3e5f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7231078 2023-04-04 15:19:05 2023-04-04 15:23:45 2023-04-04 15:40:23 0:16:38 0:06:20 0:10:18 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi083 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'