Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7479195 2023-12-05 22:28:56 2023-12-05 22:30:26 2023-12-05 22:55:35 0:25:09 0:18:33 0:06:36 smithi main rhel 8.6 rados/cephadm/smoke/{0-distro/rhel_8.6_container_tools_rhel8 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
dead 7479196 2023-12-05 22:28:57 2023-12-05 22:30:26 2023-12-05 22:48:59 0:18:33 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

pass 7479197 2023-12-05 22:28:57 2023-12-05 22:30:47 2023-12-05 23:01:27 0:30:40 0:21:21 0:09:19 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/mon centos_latest} 1
fail 7479198 2023-12-05 22:28:58 2023-12-05 22:30:47 2023-12-05 22:49:26 0:18:39 0:08:44 0:09:55 smithi main ubuntu 20.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{ubuntu_20.04}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi142 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=244b703b22e9d7c48e37291bfeaf4b15a97cc628 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

fail 7479199 2023-12-05 22:28:59 2023-12-05 22:31:27 2023-12-06 05:04:53 6:33:26 6:23:10 0:10:16 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi057 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=244b703b22e9d7c48e37291bfeaf4b15a97cc628 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 7479200 2023-12-05 22:29:00 2023-12-05 22:31:28 2023-12-05 22:59:44 0:28:16 0:19:54 0:08:22 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-active-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
fail 7479201 2023-12-05 22:29:01 2023-12-05 22:32:48 2023-12-05 23:15:45 0:42:57 0:30:17 0:12:40 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-stupid} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi049 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=244b703b22e9d7c48e37291bfeaf4b15a97cc628 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7479202 2023-12-05 22:29:02 2023-12-05 22:32:49 2023-12-05 23:16:29 0:43:40 0:33:24 0:10:16 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{default} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/classic msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/snaps-few-objects-balanced} 2
fail 7479203 2023-12-05 22:29:03 2023-12-05 22:33:19 2023-12-05 23:08:18 0:34:59 0:26:36 0:08:23 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

saw valgrind issues

fail 7479204 2023-12-05 22:29:03 2023-12-05 22:33:19 2023-12-05 23:14:25 0:41:06 0:31:24 0:09:42 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

saw valgrind issues

dead 7479205 2023-12-05 22:29:04 2023-12-05 22:33:20 2023-12-05 22:36:06 0:02:46 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

Error reimaging machines: 503 Server Error: Service Unavailable for url: http://fog.front.sepia.ceph.com/fog/task/active

pass 7479206 2023-12-05 22:29:05 2023-12-05 22:35:10 2023-12-06 01:15:26 2:40:16 2:13:10 0:27:06 smithi main ubuntu 20.04 rados/objectstore/{backends/objectstore-bluestore-a supported-random-distro$/{ubuntu_20.04}} 1
fail 7479207 2023-12-05 22:29:06 2023-12-05 22:35:11 2023-12-05 22:54:37 0:19:26 0:08:54 0:10:32 smithi main centos 9.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi088 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=244b703b22e9d7c48e37291bfeaf4b15a97cc628 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

pass 7479208 2023-12-05 22:29:07 2023-12-05 22:35:11 2023-12-06 01:19:44 2:44:33 2:34:36 0:09:57 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/osd-backfill} 1
fail 7479209 2023-12-05 22:29:08 2023-12-05 22:35:52 2023-12-05 23:17:28 0:41:36 0:31:00 0:10:36 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi032 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=244b703b22e9d7c48e37291bfeaf4b15a97cc628 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7479210 2023-12-05 22:29:09 2023-12-05 22:36:22 2023-12-05 22:55:20 0:18:58 0:09:24 0:09:34 smithi main centos 9.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

fail 7479211 2023-12-05 22:29:09 2023-12-05 22:36:32 2023-12-06 05:09:54 6:33:22 6:23:10 0:10:12 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi050 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=244b703b22e9d7c48e37291bfeaf4b15a97cc628 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 ALLOW_TIMEOUTS=1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'

pass 7479212 2023-12-05 22:29:10 2023-12-05 22:36:33 2023-12-06 01:28:08 2:51:35 2:42:15 0:09:20 smithi main centos 8.stream rados/standalone/{supported-random-distro$/{centos_8} workloads/osd} 1