Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7221511 2023-03-27 13:47:06 2023-03-27 17:57:06 2023-03-27 19:40:36 1:43:30 0:28:50 1:14:40 smithi main ubuntu 22.04 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
fail 7221512 2023-03-27 13:47:07 2023-03-27 19:41:00 2023-03-27 20:00:24 0:19:24 0:10:38 0:08:46 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

pass 7221513 2023-03-27 13:47:08 2023-03-27 19:41:00 2023-03-27 22:44:26 3:03:26 2:57:32 0:05:54 smithi main rhel 8.6 rados/standalone/{supported-random-distro$/{rhel_8} workloads/osd} 1
pass 7221514 2023-03-27 13:47:09 2023-03-27 19:41:01 2023-03-27 20:27:35 0:46:34 0:17:26 0:29:08 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
pass 7221515 2023-03-27 13:47:09 2023-03-27 20:04:57 2023-03-27 20:51:43 0:46:46 0:12:48 0:33:58 smithi main ubuntu 20.04 rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_20.04} tasks/mon_recovery} 2
pass 7221516 2023-03-27 13:47:10 2023-03-27 20:52:02 2023-03-27 21:09:51 0:17:49 0:08:04 0:09:45 smithi main ubuntu 20.04 rados/objectstore/{backends/objectstore-memstore supported-random-distro$/{ubuntu_20.04}} 1
pass 7221517 2023-03-27 13:47:11 2023-03-27 20:52:03 2023-03-27 22:09:03 1:17:00 0:53:06 0:23:54 smithi main rhel 8.6 rados/singleton/{all/ec-inconsistent-hinfo mon_election/connectivity msgr-failures/many msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{rhel_8}} 1
pass 7221518 2023-03-27 13:47:12 2023-03-27 21:10:09 2023-03-27 21:48:43 0:38:34 0:09:47 0:28:47 smithi main ubuntu 20.04 rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/connectivity random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{ubuntu_20.04} tasks/workunits} 2
pass 7221519 2023-03-27 13:47:12 2023-03-27 21:29:15 2023-03-27 21:46:06 0:16:51 0:11:44 0:05:07 smithi main rhel 8.6 rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{rhel_8}} 1
pass 7221520 2023-03-27 13:47:13 2023-03-27 21:29:15 2023-03-27 21:50:20 0:21:05 0:12:46 0:08:19 smithi main rhel 8.6 rados/multimon/{clusters/9 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-stupid rados supported-random-distro$/{rhel_8} tasks/mon_clock_no_skews} 3
dead 7221521 2023-03-27 13:47:14 2023-03-27 21:30:29 2023-03-28 09:41:16 12:10:47 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-zlib rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/minsize_recovery thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
Failure Reason:

hit max job timeout

fail 7221522 2023-03-27 13:47:15 2023-03-27 21:31:44 2023-03-27 21:51:35 0:19:51 0:06:10 0:13:41 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi002 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7221523 2023-03-27 13:47:15 2023-03-27 21:36:08 2023-03-27 22:00:50 0:24:42 0:19:10 0:05:32 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi007.front.sepia.ceph.com: ['type=AVC msg=audit(1679954319.469:19943): avc: denied { ioctl } for pid=109368 comm="iptables" path="/var/lib/containers/storage/overlay/232ce9aaafbe32ccac072ab15d6b6587e5ed281a2e6cafa56d4cb18d37dbe277/merged" dev="overlay" ino=3278941 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

fail 7221524 2023-03-27 13:47:16 2023-03-27 21:36:09 2023-03-27 22:10:40 0:34:31 0:22:53 0:11:38 smithi main ubuntu 22.04 rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest} tasks/rados_cls_all} 2
Failure Reason:

"1679954614.730337 mon.a (mon.0) 573 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7221525 2023-03-27 13:47:17 2023-03-27 21:36:09 2023-03-27 22:27:28 0:51:19 0:42:27 0:08:52 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/on mon_election/connectivity task/test_nfs} 1
Failure Reason:

Test failure: test_update_export_with_invalid_values (tasks.cephfs.test_nfs.TestNFS)

pass 7221526 2023-03-27 13:47:18 2023-03-27 21:36:09 2023-03-27 22:01:27 0:25:18 0:16:34 0:08:44 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
fail 7221527 2023-03-27 13:47:18 2023-03-27 21:38:44 2023-03-27 22:15:52 0:37:08 0:26:48 0:10:20 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi038 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5e717292106ca2d310770101bfebb345837be8e1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7221528 2023-03-27 13:47:19 2023-03-27 21:38:55 2023-03-27 21:55:17 0:16:22 0:06:18 0:10:04 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/master} 3
Failure Reason:

Command failed on smithi043 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

fail 7221529 2023-03-27 13:47:20 2023-03-27 21:39:25 2023-03-27 22:00:16 0:20:51 0:10:39 0:10:12 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

pass 7221530 2023-03-27 13:47:20 2023-03-27 21:41:11 2023-03-27 22:15:47 0:34:36 0:24:03 0:10:33 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
pass 7221531 2023-03-27 13:47:21 2023-03-27 21:42:25 2023-03-27 22:07:25 0:25:00 0:18:40 0:06:20 smithi main rhel 8.6 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{rhel_8} thrashers/none thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7221532 2023-03-27 13:47:22 2023-03-27 21:43:40 2023-03-27 22:10:36 0:26:56 0:17:04 0:09:52 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/on mon_election/connectivity task/test_cephadm} 1
pass 7221533 2023-03-27 13:47:23 2023-03-27 21:43:40 2023-03-27 22:09:27 0:25:47 0:15:51 0:09:56 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
fail 7221534 2023-03-27 13:47:23 2023-03-27 21:44:01 2023-03-27 21:59:45 0:15:44 0:06:12 0:09:32 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/host rook/1.7.2} 1
Failure Reason:

Command failed on smithi073 with status 1: 'sudo systemctl enable --now kubelet && sudo kubeadm config images pull'

pass 7221535 2023-03-27 13:47:24 2023-03-27 21:44:01 2023-03-27 22:13:34 0:29:33 0:20:16 0:09:17 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 7221536 2023-03-27 13:47:25 2023-03-27 21:44:11 2023-03-27 22:06:49 0:22:38 0:11:43 0:10:55 smithi main ubuntu 22.04 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/centos_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi129 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5e717292106ca2d310770101bfebb345837be8e1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

fail 7221537 2023-03-27 13:47:26 2023-03-27 21:44:12 2023-03-27 22:06:38 0:22:26 0:11:30 0:10:56 smithi main ubuntu 22.04 rados/singleton/{all/test_envlibrados_for_rocksdb/{supported/rhel_latest test_envlibrados_for_rocksdb} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test rados/test_envlibrados_for_rocksdb.sh) on smithi136 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5e717292106ca2d310770101bfebb345837be8e1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_envlibrados_for_rocksdb.sh'

dead 7221538 2023-03-27 13:47:26 2023-03-27 21:45:16 2023-03-27 21:53:48 0:08:32 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

SSH connection to smithi123 was lost: 'sudo apt-get update'

fail 7221539 2023-03-27 13:47:27 2023-03-27 21:45:16 2023-03-27 21:54:46 0:09:30 smithi main ubuntu 20.04 rados/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/master} 3
Failure Reason:

Command failed on smithi123 with status 100: 'sudo apt-get clean'