Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7898527 2024-09-09 22:20:04 2024-09-10 00:25:55 2024-09-10 01:56:09 1:30:14 1:18:51 0:11:23 smithi main centos 9.stream rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} 1
Failure Reason:

Command failed (workunit test scrub/osd-scrub-repair.sh) on smithi191 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f03a10869dbcdcf78fc7c60470b0f6dfddc7d42e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh'

dead 7898528 2024-09-09 22:20:05 2024-09-10 00:27:15 2024-09-10 08:37:45 8:10:30 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

fail 7898529 2024-09-09 22:20:06 2024-09-10 00:27:16 2024-09-10 02:30:53 2:03:37 1:53:42 0:09:55 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream_runc} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-09-10T00:47:55.502926+0000 mon.a (mon.0) 665 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7898530 2024-09-09 22:20:07 2024-09-10 00:27:46 2024-09-10 01:08:35 0:40:49 0:27:48 0:13:01 smithi main centos 9.stream rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/2-size-2-min-size 1-install/quincy backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

"2024-09-10T01:00:00.000107+0000 mon.a (mon.0) 1786 : cluster [WRN] Health detail: HEALTH_WARN 1 osds down; Degraded data redundancy: 52/1290 objects degraded (4.031%), 2 pgs degraded" in cluster log

fail 7898531 2024-09-09 22:20:08 2024-09-10 00:31:48 2024-09-10 01:03:31 0:31:43 0:21:27 0:10:16 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} 2
Failure Reason:

"2024-09-10T01:01:06.071975+0000 mon.a (mon.0) 649 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7898532 2024-09-09 22:20:09 2024-09-10 00:31:48 2024-09-10 01:57:23 1:25:35 1:12:20 0:13:15 smithi main centos 9.stream rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

"2024-09-10T01:00:00.000121+0000 mon.a (mon.0) 1191 : cluster [WRN] Health detail: HEALTH_WARN noscrub,nodeep-scrub flag(s) set; Degraded data redundancy: 4830/188553 objects degraded (2.562%), 2 pgs degraded, 5 pgs undersized" in cluster log

pass 7898533 2024-09-09 22:20:09 2024-09-10 00:33:49 2024-09-10 02:24:55 1:51:06 1:41:34 0:09:32 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-radosbench} 2
fail 7898534 2024-09-09 22:20:10 2024-09-10 00:33:49 2024-09-10 01:16:24 0:42:35 0:31:10 0:11:25 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} 2
Failure Reason:

Command failed on smithi165 with status 32: 'sync && sudo umount -f /var/lib/ceph/osd/ceph-6'

fail 7898535 2024-09-09 22:20:11 2024-09-10 00:35:10 2024-09-10 00:57:42 0:22:32 0:13:21 0:09:11 smithi main centos 9.stream rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed (workunit test rados/test_rados_tool.sh) on smithi028 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=f03a10869dbcdcf78fc7c60470b0f6dfddc7d42e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_rados_tool.sh'

pass 7898536 2024-09-09 22:20:12 2024-09-10 00:35:30 2024-09-10 01:13:43 0:38:13 0:28:44 0:09:29 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/rados_api_tests} 2
fail 7898537 2024-09-09 22:20:13 2024-09-10 00:36:10 2024-09-10 00:54:00 0:17:50 0:07:42 0:10:08 smithi main centos 9.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{centos_latest} tasks/progress} 2
Failure Reason:

Test failure: test_default_progress_test (tasks.mgr.test_progress.TestProgress)

dead 7898538 2024-09-09 22:20:14 2024-09-10 00:36:51 2024-09-10 08:47:08 8:10:17 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

hit max job timeout

fail 7898539 2024-09-09 22:20:15 2024-09-10 00:36:51 2024-09-10 02:42:34 2:05:43 1:55:13 0:10:30 smithi main centos 9.stream rados/upgrade/parallel/{0-random-distro$/{centos_9.stream} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"2024-09-10T00:57:56.115916+0000 mon.a (mon.0) 663 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log

fail 7898540 2024-09-09 22:20:16 2024-09-10 00:37:22 2024-09-10 00:59:33 0:22:11 0:11:58 0:10:13 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi040 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:f03a10869dbcdcf78fc7c60470b0f6dfddc7d42e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 2310d5b4-6f0f-11ef-bcea-c7b262605968 -- bash -c \'set -e\nset -x\nwhile true; do TOKEN=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\'); echo $TOKEN; if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi; sleep 5; done\nTOKENS=$(ceph rgw realm tokens)\necho $TOKENS | jq --exit-status \'"\'"\'.[0].realm == "myrealm1"\'"\'"\'\necho $TOKENS | jq --exit-status \'"\'"\'.[0].token\'"\'"\'\nTOKEN_JSON=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\' | base64 --decode)\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_name == "myrealm1"\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.endpoint | test("http://.+:\\\\d+")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.access_key\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.secret\'"\'"\'\n\''

fail 7898541 2024-09-09 22:20:17 2024-09-10 00:38:32 2024-09-10 01:21:55 0:43:23 0:31:56 0:11:27 smithi main centos 9.stream rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/connectivity msgr-failures/few rados thrashers/morepggrow thrashosds-health workloads/snaps-few-objects} 3
Failure Reason:

"2024-09-10T01:10:00.000114+0000 mon.a (mon.0) 2171 : cluster [WRN] Health detail: HEALTH_WARN 1 osds down; Degraded data redundancy: 69/696 objects degraded (9.914%), 6 pgs degraded, 6 pgs undersized" in cluster log

fail 7898542 2024-09-09 22:20:17 2024-09-10 00:38:43 2024-09-10 01:11:28 0:32:45 0:21:07 0:11:38 smithi main centos 9.stream rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zlib} tasks/e2e} 2
Failure Reason:

"2024-09-10T01:09:02.106212+0000 mon.a (mon.0) 611 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log

fail 7898543 2024-09-09 22:20:18 2024-09-10 00:40:13 2024-09-10 01:42:43 1:02:30 0:50:24 0:12:06 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

"2024-09-10T01:12:56.446529+0000 mon.a (mon.0) 1234 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log