Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7487252 2023-12-11 16:43:13 2023-12-11 19:37:24 2023-12-11 20:08:57 0:31:33 0:18:26 0:13:07 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi019 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:c56ad3eaf0784383bf22c04e35d26e15c4baeafe shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 222f8ff4-9860-11ee-95a3-87774f69a715 -- ceph rgw realm bootstrap -i -'

fail 7487253 2023-12-11 16:43:14 2023-12-11 19:39:15 2023-12-11 20:22:25 0:43:10 0:33:17 0:09:53 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-zstd rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate

fail 7487254 2023-12-11 16:43:15 2023-12-11 19:39:15 2023-12-11 19:56:32 0:17:17 0:11:17 0:06:00 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_adoption} 1
Failure Reason:

Command failed on smithi156 with status 1: 'sudo yum -y install ceph'

fail 7487255 2023-12-11 16:43:16 2023-12-11 19:39:15 2023-12-11 20:21:29 0:42:14 0:29:41 0:12:33 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-lz4} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi123 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c56ad3eaf0784383bf22c04e35d26e15c4baeafe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7487256 2023-12-11 16:43:16 2023-12-11 19:41:56 2023-12-11 19:59:54 0:17:58 0:11:13 0:06:45 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_ca_signed_key} 2
Failure Reason:

Command failed on smithi114 with status 1: 'sudo yum -y install ceph'

pass 7487257 2023-12-11 16:43:17 2023-12-11 19:42:47 2023-12-11 20:22:16 0:39:29 0:28:08 0:11:21 smithi main ubuntu 22.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/fastclose rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-snaps-few-objects-overwrites} 2
fail 7487258 2023-12-11 16:43:18 2023-12-11 19:43:27 2023-12-11 20:02:32 0:19:05 0:09:38 0:09:27 smithi main centos 9.stream rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

fail 7487259 2023-12-11 16:43:19 2023-12-11 19:43:28 2023-12-11 21:43:44 2:00:16 1:49:48 0:10:28 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate

fail 7487260 2023-12-11 16:43:20 2023-12-11 19:44:28 2023-12-11 20:04:56 0:20:28 0:12:06 0:08:22 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/on mon_election/classic task/test_host_drain} 3
Failure Reason:

Command failed on smithi115 with status 1: 'sudo yum -y install ceph'

dead 7487261 2023-12-11 16:43:21 2023-12-11 19:45:39 2023-12-12 07:56:11 12:10:32 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

dead 7487262 2023-12-11 16:43:21 2023-12-11 19:45:59 2023-12-12 07:59:43 12:13:44 smithi main ubuntu 20.04 rados/upgrade/parallel/{0-random-distro$/{ubuntu_20.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

hit max job timeout

fail 7487263 2023-12-11 16:43:22 2023-12-11 19:46:19 2023-12-11 20:09:57 0:23:38 0:13:44 0:09:54 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi046 with status 1: 'sudo yum -y install ceph'

fail 7487264 2023-12-11 16:43:23 2023-12-11 19:48:40 2023-12-11 20:07:46 0:19:06 0:08:46 0:10:20 smithi main centos 9.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/classic rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi078 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c56ad3eaf0784383bf22c04e35d26e15c4baeafe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

fail 7487265 2023-12-11 16:43:24 2023-12-11 19:48:41 2023-12-11 21:00:13 1:11:32 1:02:34 0:08:58 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate

fail 7487266 2023-12-11 16:43:25 2023-12-11 19:49:01 2023-12-11 20:09:18 0:20:17 0:13:04 0:07:13 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_set_mon_crush_locations} 3
Failure Reason:

Command failed on smithi005 with status 1: 'sudo yum -y install ceph'

fail 7487267 2023-12-11 16:43:26 2023-12-11 19:50:21 2023-12-11 20:10:26 0:20:05 0:10:26 0:09:39 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_adoption} 1
Failure Reason:

Command failed (workunit test cephadm/test_adoption.sh) on smithi112 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c56ad3eaf0784383bf22c04e35d26e15c4baeafe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_adoption.sh'

fail 7487268 2023-12-11 16:43:26 2023-12-11 19:50:32 2023-12-11 20:30:51 0:40:19 0:29:48 0:10:31 smithi main centos 8.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} tasks/e2e} 2
Failure Reason:

Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi063 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c56ad3eaf0784383bf22c04e35d26e15c4baeafe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'

fail 7487269 2023-12-11 16:43:27 2023-12-11 19:51:32 2023-12-11 20:13:34 0:22:02 0:11:37 0:10:25 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

fail 7487270 2023-12-11 16:43:28 2023-12-11 19:51:33 2023-12-11 20:11:19 0:19:46 0:12:54 0:06:52 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/connectivity task/test_extra_daemon_features} 2
Failure Reason:

Command failed on smithi154 with status 1: 'sudo yum -y install ceph'

fail 7487271 2023-12-11 16:43:29 2023-12-11 19:51:53 2023-12-11 20:23:01 0:31:08 0:20:59 0:10:09 smithi main centos 9.stream rados/valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate

fail 7487272 2023-12-11 16:43:30 2023-12-11 19:52:03 2023-12-11 20:53:16 1:01:13 0:52:02 0:09:11 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7487273 2023-12-11 16:43:31 2023-12-11 19:52:04 2023-12-11 20:10:45 0:18:41 0:12:48 0:05:53 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

Command failed on smithi179 with status 1: 'sudo yum -y install ceph'

dead 7487274 2023-12-11 16:43:31 2023-12-11 19:52:04 2023-12-12 08:01:52 12:09:48 smithi main centos 9.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

hit max job timeout

dead 7487275 2023-12-11 16:43:32 2023-12-11 19:52:14 2023-12-12 08:01:24 12:09:10 smithi main centos 8.stream rados/upgrade/parallel/{0-random-distro$/{centos_8.stream_container_tools} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

hit max job timeout

fail 7487276 2023-12-11 16:43:33 2023-12-11 19:52:15 2023-12-11 20:13:58 0:21:43 0:12:26 0:09:17 smithi main rhel 8.6 rados/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/classic task/test_orch_cli_mon} 5
Failure Reason:

Command failed on smithi144 with status 1: 'sudo yum -y install ceph'

fail 7487277 2023-12-11 16:43:34 2023-12-11 19:54:16 2023-12-11 20:23:49 0:29:33 0:18:21 0:11:12 smithi main ubuntu 20.04 rados/cephadm/workunits/{0-distro/ubuntu_20.04 agent/off mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi045 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:c56ad3eaf0784383bf22c04e35d26e15c4baeafe shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c6af5e28-9861-11ee-95a3-87774f69a715 -- ceph rgw realm bootstrap -i -'

fail 7487278 2023-12-11 16:43:35 2023-12-11 19:55:16 2023-12-11 21:48:36 1:53:20 1:43:06 0:10:14 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) allocate allocate

fail 7487279 2023-12-11 16:43:36 2023-12-11 19:55:27 2023-12-11 20:14:20 0:18:53 0:08:56 0:09:57 smithi main centos 9.stream rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test post-file.sh) on smithi170 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c56ad3eaf0784383bf22c04e35d26e15c4baeafe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/post-file.sh'

pass 7487280 2023-12-11 16:43:37 2023-12-11 19:55:27 2023-12-11 20:17:31 0:22:04 0:09:55 0:12:09 smithi main centos 9.stream rados/multimon/{clusters/21 mon_election/classic msgr-failures/few msgr/async no_pools objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} tasks/mon_clock_no_skews} 3