Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7353053 2023-07-26 14:35:04 2023-07-26 20:14:56 2023-07-26 22:47:45 2:32:49 2:19:56 0:12:53 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

"2023-07-26T20:59:24.757144+0000 mon.a (mon.0) 291 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7353055 2023-07-26 14:35:09 2023-07-26 20:16:37 2023-07-27 03:05:00 6:48:23 6:35:29 0:12:54 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi093 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c1444501ab7918ce42bdc26b9d860ad26e34dd69 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 7353056 2023-07-26 14:35:13 2023-07-26 21:10:01 0:36:57 smithi main centos 8.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_8} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 2
pass 7353058 2023-07-26 14:35:19 2023-07-26 21:02:57 0:22:43 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{default} supported-random-distro$/{ubuntu_20.04} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 7353059 2023-07-26 14:35:25 2023-07-26 20:24:59 2023-07-26 22:10:19 1:45:20 1:34:07 0:11:13 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/erasure-code} 1
dead 7353061 2023-07-26 14:35:35 2023-07-26 20:25:18 2023-07-26 20:51:21 0:26:03 smithi main rhel 8.6 rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

pass 7353063 2023-07-26 14:35:46 2023-07-26 20:27:04 2023-07-26 20:55:45 0:28:41 0:20:12 0:08:29 smithi main rhel 8.6 rados/singleton-nomsgr/{all/health-warnings mon_election/classic rados supported-random-distro$/{rhel_8}} 1
pass 7353066 2023-07-26 14:35:52 2023-07-26 20:28:38 2023-07-26 21:13:51 0:45:13 0:31:18 0:13:55 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_8} thrashers/none thrashosds-health workloads/small-objects-localized} 2
dead 7353069 2023-07-26 14:35:58 2023-07-26 20:31:30 2023-07-26 20:46:25 0:14:55 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Error reimaging machines: Cannot connect to remote host smithi033

dead 7353072 2023-07-26 14:36:00 2023-07-26 20:35:46 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi097

pass 7353075 2023-07-26 14:36:06 2023-07-26 21:05:36 0:18:03 smithi main centos 9.stream rados/multimon/{clusters/3 mon_election/connectivity msgr-failures/many msgr/async no_pools objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_latest} tasks/mon_clock_no_skews} 2
fail 7353076 2023-07-26 14:36:22 2023-07-26 20:35:09 2023-07-26 21:00:39 0:25:30 0:12:09 0:13:21 smithi main centos 9.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

pass 7353080 2023-07-26 14:36:27 2023-07-26 20:37:37 2023-07-26 21:10:49 0:33:12 0:22:49 0:10:23 smithi main centos 8.stream rados/singleton-nomsgr/{all/librados_hello_world mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7353082 2023-07-26 14:36:28 2023-07-26 20:38:37 2023-07-26 21:11:04 0:32:27 0:19:16 0:13:11 smithi main ubuntu 22.04 rados/singleton/{all/pg-autoscaler mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
pass 7353085 2023-07-26 14:36:29 2023-07-26 20:40:04 2023-07-26 21:29:17 0:49:13 0:35:55 0:13:18 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_iscsi_container/{centos_8.stream_container_tools test_iscsi_container}} 1
pass 7353088 2023-07-26 14:36:29 2023-07-26 20:41:27 2023-07-26 21:18:20 0:36:53 0:23:38 0:13:15 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/msgr mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
fail 7353091 2023-07-26 14:36:35 2023-07-26 20:44:19 2023-07-26 21:54:14 1:09:55 0:56:52 0:13:03 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

saw valgrind issues

fail 7353094 2023-07-26 14:36:36 2023-07-26 20:46:56 2023-07-26 22:18:04 1:31:08 1:18:07 0:13:01 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

saw valgrind issues

dead 7353097 2023-07-26 14:36:52 2023-07-26 20:49:36 2023-07-26 20:52:40 0:03:04 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/rbd_cls} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi116

pass 7353102 2023-07-26 14:37:03 2023-07-26 20:55:25 2023-07-26 21:29:04 0:33:39 0:22:50 0:10:49 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7353105 2023-07-26 14:37:14 2023-07-26 20:55:52 2023-07-26 21:44:40 0:48:48 0:33:14 0:15:34 smithi main ubuntu 22.04 rados/singleton/{all/random-eio mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest}} 2
pass 7353108 2023-07-26 14:37:20 2023-07-26 21:00:03 2023-07-26 22:02:19 1:02:16 0:46:18 0:15:58 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7353111 2023-07-26 14:37:25 2023-07-26 21:03:18 2023-07-26 21:59:06 0:55:48 0:33:07 0:22:41 smithi main centos 8.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_8}} 1
pass 7353114 2023-07-26 14:37:31 2023-07-26 21:05:36 2023-07-26 22:13:53 1:08:17 0:44:34 0:23:43 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
pass 7353117 2023-07-26 14:37:37 2023-07-26 21:09:20 2023-07-26 22:05:05 0:55:45 0:34:51 0:20:54 smithi main centos 8.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
pass 7353120 2023-07-26 14:37:46 2023-07-26 21:11:07 2023-07-26 21:47:58 0:36:51 0:23:52 0:12:59 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/fastclose msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{ubuntu_20.04} thrashers/morepggrow thrashosds-health workloads/cache-snaps-balanced} 2
pass 7353122 2023-07-26 14:37:57 2023-07-26 21:12:29 2023-07-26 21:46:53 0:34:24 0:16:34 0:17:50 smithi main centos 8.stream rados/cephadm/osds/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
fail 7353125 2023-07-26 14:38:07 2023-07-26 23:52:00 2:22:39 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

"2023-07-26T22:53:55.524023+0000 mon.a (mon.0) 938 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

pass 7353132 2023-07-26 14:38:08 2023-07-26 21:59:21 0:17:50 smithi main centos 9.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-zstd} supported-random-distro$/{centos_latest} tasks/prometheus} 2
pass 7353133 2023-07-26 14:38:19 2023-07-26 21:29:28 2023-07-26 22:10:07 0:40:39 0:33:59 0:06:40 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_adoption} 1
pass 7353138 2023-07-26 14:38:24 2023-07-26 22:33:41 0:44:43 smithi main centos 8.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_8} thrashers/default thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7353140 2023-07-26 14:38:25 2023-07-26 21:37:12 2023-07-26 22:17:08 0:39:56 0:31:19 0:08:37 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

SELinux denials found on ubuntu@smithi163.front.sepia.ceph.com: ['type=AVC msg=audit(1690409663.362:20749): avc: denied { ioctl } for pid=112684 comm="iptables" path="/var/lib/containers/storage/overlay/7ad4d8c21ec4ddefb3e161835936fbe86a57301cf46fa1920705f578cb08f9a4/merged" dev="overlay" ino=3408726 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:container_file_t:s0:c1022,c1023 tclass=dir permissive=1']

fail 7353143 2023-07-26 14:38:31 2023-07-26 21:38:35 2023-07-26 22:59:05 1:20:30 1:02:01 0:18:29 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-snappy} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi150 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c1444501ab7918ce42bdc26b9d860ad26e34dd69 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7353145 2023-07-26 14:38:37 2023-07-26 21:41:12 2023-07-27 04:33:01 6:51:49 6:37:39 0:14:10 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi066 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c1444501ab7918ce42bdc26b9d860ad26e34dd69 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 7353148 2023-07-26 14:38:52 2023-07-26 21:46:39 2023-07-26 22:21:09 0:34:30 0:20:52 0:13:38 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/librados_hello_world mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1