Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7101830 2022-12-02 20:41:24 2022-12-03 01:18:25 2022-12-03 01:55:43 0:37:18 0:29:40 0:07:38 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8} tasks/rados_api_tests} 2
pass 7101831 2022-12-02 20:41:25 2022-12-03 01:18:26 2022-12-03 01:50:41 0:32:15 0:22:00 0:10:15 smithi main ubuntu 20.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/filestore-xfs rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7101832 2022-12-02 20:41:26 2022-12-03 01:19:17 2022-12-03 01:34:14 0:14:57 0:06:55 0:08:02 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi003 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

fail 7101833 2022-12-02 20:41:28 2022-12-03 01:19:17 2022-12-03 01:47:48 0:28:31 0:19:53 0:08:38 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi038.front.sepia.ceph.com: ['type=AVC msg=audit(1670031864.762:19257): avc: denied { ioctl } for pid=112869 comm="iptables" path="/var/lib/containers/storage/overlay/5614d62792e30c52265d4a41b5c7de2e1abf512d120a7039e36268e1faef9dce/merged" dev="overlay" ino=3540579 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1', 'type=AVC msg=audit(1670031864.809:19259): avc: denied { ioctl } for pid=112887 comm="iptables" path="/var/lib/containers/storage/overlay/5614d62792e30c52265d4a41b5c7de2e1abf512d120a7039e36268e1faef9dce/merged" dev="overlay" ino=3540579 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

pass 7101834 2022-12-02 20:41:29 2022-12-03 01:21:08 2022-12-03 01:36:50 0:15:42 0:05:56 0:09:46 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_cephadm_repos} 1
pass 7101835 2022-12-02 20:41:30 2022-12-03 01:21:18 2022-12-03 01:41:28 0:20:10 0:08:30 0:11:40 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4K_seq_read} 1
pass 7101836 2022-12-02 20:41:31 2022-12-03 01:21:18 2022-12-03 02:02:49 0:41:31 0:28:48 0:12:43 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async objectstore/bluestore-bitmap rados supported-random-distro$/{ubuntu_latest} thrashers/many workloads/rados_mon_workunits} 2
fail 7101837 2022-12-02 20:41:32 2022-12-03 01:23:09 2022-12-03 01:42:33 0:19:24 0:12:50 0:06:34 smithi main centos 8.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi189 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0a30a5d92d67eb7011204915623f977744e8b400 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

pass 7101838 2022-12-02 20:41:34 2022-12-03 01:23:09 2022-12-03 01:44:39 0:21:30 0:09:10 0:12:20 smithi main ubuntu 20.04 rados/singleton/{all/resolve_stuck_peering mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest}} 2
pass 7101839 2022-12-02 20:41:35 2022-12-03 01:25:30 2022-12-03 02:05:19 0:39:49 0:31:51 0:07:58 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7101840 2022-12-02 20:41:36 2022-12-03 01:26:11 2022-12-03 02:02:40 0:36:29 0:25:38 0:10:51 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} 2
pass 7101841 2022-12-02 20:41:37 2022-12-03 01:27:21 2022-12-03 01:46:21 0:19:00 0:09:08 0:09:52 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/crushdiff mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7101842 2022-12-02 20:41:38 2022-12-03 01:27:32 2022-12-03 01:55:04 0:27:32 0:18:03 0:09:29 smithi main ubuntu 20.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_stress_watch} 2
fail 7101843 2022-12-02 20:41:40 2022-12-03 01:27:32 2022-12-03 01:54:27 0:26:55 0:20:00 0:06:55 smithi main centos 8.stream rados/singleton/{all/test-crash mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test rados/test_crash.sh) on smithi132 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0a30a5d92d67eb7011204915623f977744e8b400 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_crash.sh'

pass 7101844 2022-12-02 20:41:41 2022-12-03 01:27:52 2022-12-03 02:06:50 0:38:58 0:33:18 0:05:40 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
pass 7101845 2022-12-02 20:41:42 2022-12-03 01:28:13 2022-12-03 01:45:34 0:17:21 0:08:45 0:08:36 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-bitmap openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_rand_read} 1
pass 7101846 2022-12-02 20:41:43 2022-12-03 01:28:33 2022-12-03 01:49:07 0:20:34 0:08:37 0:11:57 smithi main ubuntu 20.04 rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} tasks/mon_clock_with_skews} 2
fail 7101847 2022-12-02 20:41:45 2022-12-03 01:29:04 2022-12-03 01:54:50 0:25:46 0:17:33 0:08:13 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)

fail 7101848 2022-12-02 20:41:46 2022-12-03 01:29:14 2022-12-03 02:07:55 0:38:41 0:27:12 0:11:29 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds

fail 7101849 2022-12-02 20:41:47 2022-12-03 01:30:15 2022-12-03 01:53:21 0:23:06 0:17:19 0:05:47 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi134.front.sepia.ceph.com: ['type=AVC msg=audit(1670032300.207:19364): avc: denied { ioctl } for pid=123661 comm="iptables" path="/var/lib/containers/storage/overlay/d5661490182e92735098c4a6dcb6f2a3753ff3e7005fe77f75e10cc646c614ee/merged" dev="overlay" ino=3412491 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

fail 7101850 2022-12-02 20:41:48 2022-12-03 01:30:15 2022-12-03 01:47:36 0:17:21 0:07:10 0:10:11 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi093 with status 1: 'kubectl create -f rook/cluster/examples/kubernetes/ceph/crds.yaml -f rook/cluster/examples/kubernetes/ceph/common.yaml -f operator.yaml'

pass 7101851 2022-12-02 20:41:49 2022-12-03 01:30:16 2022-12-03 01:50:07 0:19:51 0:08:43 0:11:08 smithi main ubuntu 20.04 rados/perf/{ceph mon_election/connectivity objectstore/bluestore-bitmap openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/sample_radosbench} 1
fail 7101852 2022-12-02 20:41:51 2022-12-03 01:30:46 2022-12-03 01:45:15 0:14:29 0:07:30 0:06:59 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/connectivity task/test_iscsi_pids_limit/{centos_8.stream_container_tools test_iscsi_pids_limit}} 1
Failure Reason:

Command failed on smithi002 with status 1: 'TESTDIR=/home/ubuntu/cephtest bash -s'

fail 7101853 2022-12-02 20:41:52 2022-12-03 01:30:46 2022-12-03 01:58:01 0:27:15 0:19:44 0:07:31 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_nfs} 1
Failure Reason:

Test failure: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)

fail 7101854 2022-12-02 20:41:53 2022-12-03 01:30:57 2022-12-03 01:50:12 0:19:15 0:10:10 0:09:05 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi055 with status 255: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0a30a5d92d67eb7011204915623f977744e8b400 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

fail 7101855 2022-12-02 20:41:54 2022-12-03 01:30:57 2022-12-03 02:09:42 0:38:45 0:28:00 0:10:45 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

'check osd count' reached maximum tries (90) after waiting for 900 seconds