Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
pass 7412025 2023-10-05 21:44:54 2023-10-05 23:58:36 2023-10-06 00:31:49 0:33:13 0:23:32 0:09:41 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7412026 2023-10-05 21:44:54 2023-10-05 23:58:47 2023-10-06 00:26:54 0:28:07 0:18:07 0:10:00 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
fail 7412027 2023-10-05 21:44:55 2023-10-06 00:01:58 2023-10-06 02:02:14 2:00:16 1:50:05 0:10:11 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

saw valgrind issues

pass 7412028 2023-10-05 21:44:56 2023-10-06 00:02:18 2023-10-06 00:53:46 0:51:28 0:37:11 0:14:17 smithi main centos 8.stream rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} thrashers/many workloads/rados_mon_workunits} 2
pass 7412029 2023-10-05 21:44:57 2023-10-06 00:06:39 2023-10-06 00:33:02 0:26:23 0:13:29 0:12:54 smithi main centos 8.stream rados/singleton/{all/erasure-code-nonregression mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{centos_8}} 1
dead 7412030 2023-10-05 21:44:58 2023-10-06 00:07:40 2023-10-06 12:18:09 12:10:29 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

fail 7412031 2023-10-05 21:44:59 2023-10-06 00:07:40 2023-10-06 04:30:45 4:23:05 4:09:44 0:13:21 smithi main centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi019 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

fail 7412032 2023-10-05 21:44:59 2023-10-06 00:09:20 2023-10-06 00:56:43 0:47:23 0:32:23 0:15:00 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7412033 2023-10-05 21:45:00 2023-10-06 00:09:21 2023-10-06 01:07:39 0:58:18 0:46:14 0:12:04 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e9a2b0ec5f84467c5d77f873bea81ac8580f8ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 7412034 2023-10-05 21:45:01 2023-10-06 00:12:32 2023-10-06 00:41:37 0:29:05 0:23:03 0:06:02 smithi main rhel 8.6 rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{rhel_8} thrashers/sync-many workloads/pool-create-delete} 2
pass 7412035 2023-10-05 21:45:02 2023-10-06 00:12:32 2023-10-06 00:48:47 0:36:15 0:25:07 0:11:08 smithi main ubuntu 22.04 rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async-v2only no_pools objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} tasks/mon_recovery} 3
pass 7412036 2023-10-05 21:45:03 2023-10-06 00:15:13 2023-10-06 00:47:33 0:32:20 0:21:07 0:11:13 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/pggrow thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 7412037 2023-10-05 21:45:03 2023-10-06 00:15:33 2023-10-06 00:57:03 0:41:30 0:28:47 0:12:43 smithi main ubuntu 22.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7412038 2023-10-05 21:45:04 2023-10-06 00:17:04 2023-10-06 00:45:41 0:28:37 0:17:39 0:10:58 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-lz4 rados tasks/rados_cls_all validater/lockdep} 2
pass 7412039 2023-10-05 21:45:05 2023-10-06 00:18:04 2023-10-06 00:50:42 0:32:38 0:19:32 0:13:06 smithi main ubuntu 20.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_20.04} thrashers/none thrashosds-health workloads/small-objects-localized} 2
fail 7412040 2023-10-05 21:45:06 2023-10-06 00:20:45 2023-10-06 01:12:36 0:51:51 0:28:48 0:23:03 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi111 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e9a2b0ec5f84467c5d77f873bea81ac8580f8ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7412041 2023-10-05 21:45:07 2023-10-06 00:20:55 2023-10-06 00:46:00 0:25:05 0:18:00 0:07:05 smithi main rhel 8.6 rados/objectstore/{backends/keyvaluedb supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi184 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'mkdir $TESTDIR/kvtest && cd $TESTDIR/kvtest && ceph_test_keyvaluedb'"

pass 7412042 2023-10-05 21:45:07 2023-10-06 00:20:55 2023-10-06 01:55:17 1:34:22 1:24:44 0:09:38 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-radosbench} 2
fail 7412043 2023-10-05 21:45:08 2023-10-06 00:22:06 2023-10-06 01:01:50 0:39:44 0:26:19 0:13:25 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

saw valgrind issues

fail 7412044 2023-10-05 21:45:09 2023-10-06 00:26:57 2023-10-06 01:12:25 0:45:28 0:32:17 0:13:11 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

saw valgrind issues

dead 7412045 2023-10-05 21:45:10 2023-10-06 00:29:18 2023-10-06 12:39:31 12:10:13 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

fail 7412046 2023-10-05 21:45:11 2023-10-06 00:29:18 2023-10-06 04:48:08 4:18:50 4:08:01 0:10:49 smithi main centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools_crun} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

Command failed (workunit test cls/test_cls_2pc_queue.sh) on smithi150 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=quincy TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_2pc_queue.sh'

fail 7412047 2023-10-05 21:45:12 2023-10-06 00:30:28 2023-10-06 02:24:05 1:53:37 1:43:09 0:10:28 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

"2023-10-06T01:13:58.771243+0000 mon.a (mon.0) 402 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7412048 2023-10-05 21:45:13 2023-10-06 00:30:59 2023-10-06 01:10:10 0:39:11 0:28:43 0:10:28 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi105 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0e9a2b0ec5f84467c5d77f873bea81ac8580f8ac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'