Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7199824 2023-03-09 16:26:57 2023-03-09 16:27:07 2023-03-09 17:10:15 0:43:08 0:30:07 0:13:01 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7199825 2023-03-09 16:26:58 2023-03-09 16:27:28 2023-03-09 17:23:35 0:56:07 0:48:33 0:07:34 smithi main rhel 8.6 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{rhel_8} thrashers/force-sync-many workloads/rados_mon_osdmap_prune} 2
fail 7199826 2023-03-09 16:26:58 2023-03-09 16:28:48 2023-03-09 16:53:36 0:24:48 0:16:31 0:08:17 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Test failure: test_cephfs_mirror (tasks.cephadm_cases.test_cli.TestCephadmCLI)

fail 7199827 2023-03-09 16:26:59 2023-03-09 16:28:48 2023-03-09 16:44:33 0:15:45 0:06:06 0:09:39 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi184 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'
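
The kubeadm bootstrap failure above recurs verbatim in jobs 7199829, 7199832, and 7199849 below, which points at the environment (kubelet setup or image registry reachability) rather than anything test-specific. A minimal sketch of reproducing the failing step by hand on the affected node, splitting the compound command to isolate which half fails; the verbosity flag is an assumption, not part of the original invocation:

    # Run on the affected smithi node to isolate the failing half of the command.
    sudo systemctl enable --now kubelet          # half 1: enable and start the kubelet unit
    sudo systemctl status kubelet --no-pager     # confirm the unit is actually running
    sudo kubeadm config images pull -v=5         # half 2: pull control-plane images with verbose logging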

fail 7199828 2023-03-09 16:27:00 2023-03-09 16:28:49 2023-03-09 17:05:56 0:37:07 0:27:09 0:09:58 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-bitmap} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi046 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ac2fb4189adf4bb2da55776a7705f3862d8b7773 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'
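
For jobs that fail inside a workunit, the quoted command above is the full teuthology harness invocation. A hedged sketch of rerunning just the failing script by hand, reusing the CEPH_REF and clone layout from the failure message; the trimmed-down environment is an assumption and omits the harness's ceph-coverage and adjust-ulimits wrappers:

    # Rerun the dashboard e2e workunit directly (sketch; assumes the standard
    # /home/ubuntu/cephtest layout quoted in the failure reason above).
    cd /home/ubuntu/cephtest/clone.client.0
    CEPH_REF=ac2fb4189adf4bb2da55776a7705f3862d8b7773 \
    CEPH_ARGS="--cluster ceph" CEPH_ID=0 \
    timeout 3h bash qa/workunits/cephadm/test_dashboard_e2e.sh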

fail 7199829 2023-03-09 16:27:01 2023-03-09 16:28:49 2023-03-09 16:44:43 0:15:54 0:06:16 0:09:38 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi084 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7199830 2023-03-09 16:27:01 2023-03-09 16:28:49 2023-03-09 18:19:54 1:51:05 1:38:45 0:12:20 smithi main ubuntu 20.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-radosbench} 2
fail 7199831 2023-03-09 16:27:02 2023-03-09 16:29:10 2023-03-09 16:56:32 0:27:22 0:17:53 0:09:29 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi053.front.sepia.ceph.com: ['type=AVC msg=audit(1678380830.726:19525): avc: denied { ioctl } for pid=124850 comm="iptables" path="/var/lib/containers/storage/overlay/958e0c7a8d478eb54ebe6fca664094cc1b522edda270ae5a99dfa808f0da6455/merged" dev="overlay" ino=3540997 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']
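
The AVC above was recorded with permissive=1, so SELinux logged the ioctl but did not block it; the job fails because teuthology flags any denial it finds. A short sketch of how such a denial is usually inspected on the node, assuming the standard audit tooling (ausearch, audit2allow) is installed:

    # Show recent AVC denials attributed to the iptables binary.
    sudo ausearch -m avc -c iptables --start recent
    # Feed the denials to audit2allow to see what policy rule they would imply.
    sudo ausearch -m avc -c iptables --start recent | audit2allow -m iptables_overlay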

fail 7199832 2023-03-09 16:27:03 2023-03-09 16:29:20 2023-03-09 16:45:36 0:16:16 0:06:03 0:10:13 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi102 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7199833 2023-03-09 16:27:04 2023-03-09 16:29:20 2023-03-09 17:13:02 0:43:42 0:33:10 0:10:32 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/mapgap thrashosds-health workloads/cache-pool-snaps} 2
fail 7199834 2023-03-09 16:27:05 2023-03-09 16:30:23 2023-03-09 16:54:24 0:24:01 0:13:50 0:10:11 smithi main centos 8.stream rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi029 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ac2fb4189adf4bb2da55776a7705f3862d8b7773 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'
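
The same test_envlibrados_for_rocksdb.sh failure, also with exit status 2, shows up again in job 7199840 below on a different node, which suggests the workunit itself (or the rocksdb-against-librados build step inside it) rather than a flaky machine. A minimal sketch of running the workunit standalone; the fresh checkout is an assumption, with the ref taken from CEPH_REF in the failure message:

    # Run the envlibrados-for-rocksdb workunit outside the harness (sketch).
    git clone https://github.com/ceph/ceph.git && cd ceph
    git checkout ac2fb4189adf4bb2da55776a7705f3862d8b7773   # CEPH_REF from the failure above
    CEPH_ARGS="--cluster ceph" bash qa/workunits/rados/test_envlibrados_for_rocksdb.sh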

pass 7199835 2023-03-09 16:27:05 2023-03-09 16:30:24 2023-03-09 17:05:25 0:35:01 0:26:37 0:08:24 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-small-objects-fast-read} 2
pass 7199836 2023-03-09 16:27:06 2023-03-09 16:32:24 2023-03-09 17:17:20 0:44:56 0:29:47 0:15:09 smithi main ubuntu 20.04 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_latest} thrashers/one workloads/rados_mon_workunits} 2
pass 7199837 2023-03-09 16:27:07 2023-03-09 16:33:55 2023-03-09 17:11:31 0:37:36 0:22:53 0:14:43 smithi main centos 8.stream rados/singleton/{all/thrash-rados/{thrash-rados thrashosds-health} mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8}} 2
pass 7199838 2023-03-09 16:27:08 2023-03-09 16:38:16 2023-03-09 16:59:24 0:21:08 0:11:21 0:09:47 smithi main centos 8.stream rados/singleton/{all/watch-notify-same-primary mon_election/classic msgr-failures/none msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_8}} 1
pass 7199839 2023-03-09 16:27:09 2023-03-09 16:38:16 2023-03-09 17:03:22 0:25:06 0:17:08 0:07:58 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
fail 7199840 2023-03-09 16:27:09 2023-03-09 16:45:45 2023-03-09 17:15:22 0:29:37 0:17:01 0:12:36 smithi main centos 8.stream rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi176 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ac2fb4189adf4bb2da55776a7705f3862d8b7773 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

pass 7199841 2023-03-09 16:27:10 2023-03-09 16:48:06 2023-03-09 17:18:34 0:30:28 0:19:23 0:11:05 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/on mon_election/connectivity task/test_orch_cli} 1
pass 7199842 2023-03-09 16:27:11 2023-03-09 16:48:06 2023-03-09 17:22:24 0:34:18 0:19:22 0:14:56 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/test_rbd_api} 3
pass 7199843 2023-03-09 16:27:12 2023-03-09 16:52:17 2023-03-09 17:37:33 0:45:16 0:39:07 0:06:09 smithi main rhel 8.6 rados/singleton/{all/backfill-toofull mon_election/connectivity msgr-failures/none msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 1
pass 7199844 2023-03-09 16:27:12 2023-03-09 16:52:17 2023-03-09 17:13:19 0:21:02 0:09:58 0:11:04 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/health-warnings mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7199845 2023-03-09 16:27:13 2023-03-09 16:53:38 2023-03-09 17:16:21 0:22:43 0:13:45 0:08:58 smithi main rhel 8.6 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/scrub_test} 2
pass 7199846 2023-03-09 16:27:14 2023-03-09 16:54:48 2023-03-09 17:35:25 0:40:37 0:24:53 0:15:44 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_orch_cli_mon} 5
pass 7199847 2023-03-09 16:27:14 2023-03-09 17:06:07 2023-03-09 17:40:25 0:34:18 0:27:34 0:06:44 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/radosbench-high-concurrency} 2
fail 7199848 2023-03-09 16:27:15 2023-03-09 17:07:27 2023-03-09 17:42:55 0:35:28 0:26:16 0:09:12 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi139 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=ac2fb4189adf4bb2da55776a7705f3862d8b7773 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7199849 2023-03-09 16:27:16 2023-03-09 17:07:48 2023-03-09 17:24:28 0:16:40 0:06:16 0:10:24 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi003 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'