Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7493218 2023-12-15 15:27:52 2023-12-15 18:08:02 2023-12-15 18:35:33 0:27:31 0:17:32 0:09:59 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi033 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:c56ad3eaf0784383bf22c04e35d26e15c4baeafe shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid eb5f2fa0-9b77-11ee-95a5-87774f69a715 -- ceph rgw realm bootstrap -i -'

fail 7493219 2023-12-15 15:27:52 2023-12-15 18:08:03 2023-12-15 18:51:20 0:43:17 0:32:41 0:10:36 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate

pass 7493220 2023-12-15 15:27:53 2023-12-15 18:08:03 2023-12-15 18:29:08 0:21:05 0:14:35 0:06:30 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_adoption} 1
fail 7493221 2023-12-15 15:27:54 2023-12-15 18:08:03 2023-12-15 18:49:49 0:41:46 0:31:20 0:10:26 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi037 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c56ad3eaf0784383bf22c04e35d26e15c4baeafe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

pass 7493222 2023-12-15 15:27:55 2023-12-15 18:08:44 2023-12-15 18:34:10 0:25:26 0:16:11 0:09:15 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_ca_signed_key} 2
fail 7493223 2023-12-15 15:27:56 2023-12-15 18:10:14 2023-12-15 18:29:56 0:19:42 0:09:36 0:10:06 smithi main centos 9.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

fail 7493224 2023-12-15 15:27:57 2023-12-15 18:10:55 2023-12-15 20:04:00 1:53:05 1:43:27 0:09:38 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate

pass 7493225 2023-12-15 15:27:57 2023-12-15 18:10:55 2023-12-15 18:42:30 0:31:35 0:23:27 0:08:08 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/classic task/test_host_drain} 3
dead 7493226 2023-12-15 15:27:58 2023-12-15 18:13:06 2023-12-16 06:23:17 12:10:11 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

dead 7493227 2023-12-15 15:27:59 2023-12-15 18:13:06 2023-12-16 06:25:42 12:12:36 smithi main ubuntu 20.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

hit max job timeout

fail 7493228 2023-12-15 15:28:00 2023-12-15 18:15:07 2023-12-15 18:44:04 0:28:57 0:21:06 0:07:51 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi088 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:c56ad3eaf0784383bf22c04e35d26e15c4baeafe shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 04498636-9b79-11ee-95a5-87774f69a715 -- ceph rgw realm bootstrap -i -'

fail 7493229 2023-12-15 15:28:01 2023-12-15 18:16:37 2023-12-15 18:37:50 0:21:13 0:08:49 0:12:24 smithi main centos 9.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi176 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c56ad3eaf0784383bf22c04e35d26e15c4baeafe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

fail 7493230 2023-12-15 15:28:02 2023-12-15 18:18:58 2023-12-15 19:26:06 1:07:08 0:58:04 0:09:04 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate

pass 7493231 2023-12-15 15:28:03 2023-12-15 18:19:18 2023-12-15 18:51:47 0:32:29 0:24:00 0:08:29 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_set_mon_crush_locations} 3
fail 7493232 2023-12-15 15:28:03 2023-12-15 18:19:59 2023-12-15 18:39:49 0:19:50 0:10:30 0:09:20 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c56ad3eaf0784383bf22c04e35d26e15c4baeafe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

fail 7493233 2023-12-15 15:28:04 2023-12-15 18:19:59 2023-12-15 19:01:06 0:41:07 0:31:49 0:09:18 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi052 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c56ad3eaf0784383bf22c04e35d26e15c4baeafe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7493234 2023-12-15 15:28:05 2023-12-15 18:19:59 2023-12-15 18:42:19 0:22:20 0:11:59 0:10:21 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

pass 7493235 2023-12-15 15:28:06 2023-12-15 18:20:30 2023-12-15 18:48:05 0:27:35 0:20:39 0:06:56 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/connectivity task/test_extra_daemon_features} 2
fail 7493236 2023-12-15 15:28:07 2023-12-15 18:21:00 2023-12-15 18:49:52 0:28:52 0:18:45 0:10:07 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate

fail 7493237 2023-12-15 15:28:08 2023-12-15 18:21:01 2023-12-15 18:58:30 0:37:29 0:27:03 0:10:26 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate

pass 7493238 2023-12-15 15:28:08 2023-12-15 18:21:21 2023-12-15 18:51:21 0:30:00 0:22:11 0:07:49 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_orch_cli} 1
dead 7493239 2023-12-15 15:28:09 2023-12-15 18:22:31 2023-12-16 06:31:52 12:09:21 smithi main centos 9.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

hit max job timeout

dead 7493240 2023-12-15 15:28:10 2023-12-15 18:22:32 2023-12-16 06:34:52 12:12:20 smithi main centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

hit max job timeout

pass 7493241 2023-12-15 15:28:11 2023-12-15 18:26:23 2023-12-15 19:03:54 0:37:31 0:28:39 0:08:52 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_orch_cli_mon} 5
fail 7493242 2023-12-15 15:28:12 2023-12-15 18:28:33 2023-12-15 18:57:07 0:28:34 0:17:58 0:10:36 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi093 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:c56ad3eaf0784383bf22c04e35d26e15c4baeafe shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5d363c70-9b7a-11ee-95a5-87774f69a715 -- ceph rgw realm bootstrap -i -'

fail 7493243 2023-12-15 15:28:13 2023-12-15 18:28:44 2023-12-15 20:32:13 2:03:29 1:51:04 0:12:25 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate

fail 7493244 2023-12-15 15:28:13 2023-12-15 18:29:04 2023-12-15 18:48:05 0:19:01 0:08:59 0:10:02 smithi main centos 9.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi106 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c56ad3eaf0784383bf22c04e35d26e15c4baeafe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'