User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
yuriw | 2024-08-30 15:04:49 | 2024-08-30 18:42:20 | 2024-08-31 02:58:12 | 8:15:52 | rados | wip-yuri8-testing-2024-08-28-1632-squid | smithi | 1ef8645 | 10 | 12 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7881988 | 2024-08-30 15:06:03 | 2024-08-30 18:42:19 | 2024-08-30 19:41:57 | 0:59:38 | 0:49:18 | 0:10:20 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} | 1 | |
Failure Reason: Command failed (workunit test osd/osd-bluefs-volume-ops.sh) on smithi079 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1ef864504b8875c83ee6c2c5fedc13315bebf7f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh'
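Both standalone failures in this run (osd-bluefs-volume-ops.sh here, osd-scrub-test.sh in job 7881990 below) exercise scripts under qa/standalone that can be rerun outside teuthology. A minimal sketch, assuming a compiled ceph.git checkout and the stock qa/run-standalone.sh helper:

```sh
# Rerun the failing standalone test locally from a Ceph build tree.
# Assumes ceph.git is cloned and built in ./build (upstream source layout).
cd ceph/build
../qa/run-standalone.sh osd-bluefs-volume-ops.sh   # or osd-scrub-test.sh for job 7881990
```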
pass | 7881989 | 2024-08-30 15:06:05 | 2024-08-30 18:42:19 | 2024-08-30 19:11:51 | 0:29:32 | 0:20:58 | 0:08:34 | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/osd_stale_reads mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
fail | 7881990 | 2024-08-30 15:06:06 | 2024-08-30 18:42:20 | 2024-08-30 20:51:05 | 2:08:45 | 2:03:17 | 0:05:28 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} | 1 | |
Failure Reason: Command failed (workunit test scrub/osd-scrub-test.sh) on smithi017 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1ef864504b8875c83ee6c2c5fedc13315bebf7f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-test.sh'
fail | 7881991 | 2024-08-30 15:06:07 | 2024-08-30 18:42:20 | 2024-08-30 18:55:40 | 0:13:20 | 0:07:06 | 0:06:14 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason: Command failed on smithi045 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd pool create unique_pool_0 16 16 erasure jerasure21profile'
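Jobs 7881991, 7881999, and 7882009 all die on this same pool creation with status 22 (EINVAL), which is consistent with the named erasure-code profile being missing or rejected at creation time. A sketch of the expected sequence, with the profile parameters assumed from the workload name (k=4, m=2) rather than taken from the job logs:

```sh
# The profile must exist before a pool can reference it; EINVAL from
# `osd pool create ... erasure <profile>` points at the profile step.
# Parameter values below are illustrative assumptions.
ceph osd erasure-code-profile set jerasure21profile \
  plugin=jerasure k=4 m=2 crush-failure-domain=osd
ceph osd pool create unique_pool_0 16 16 erasure jerasure21profile
```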
dead | 7881992 | 2024-08-30 15:06:08 | 2024-08-30 18:42:20 | 2024-08-31 02:51:50 | 8:09:30 | | | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: hit max job timeout
fail | 7881993 | 2024-08-30 15:06:09 | 2024-08-30 18:42:21 | 2024-08-30 21:11:05 | 2:28:44 | 2:17:41 | 0:11:03 | smithi | main | ubuntu | 22.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: "2024-08-30T19:11:18.266957+0000 mon.a (mon.0) 568 : cluster [WRN] Health check failed: Degraded data redundancy: 2/6 objects degraded (33.333%), 1 pg degraded (PG_DEGRADED)" in cluster log
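Jobs 7881993 and 7882005 fail not because a command errored but because teuthology scrapes the cluster log after the run and fails the job on any [WRN]/[ERR] line the suite's ignorelist does not cover. A rough shell approximation of that check, with the log path and ignorelist file as placeholder assumptions:

```sh
# Approximation of the cluster-log scrape: any warning/error line not matched
# by the suite's ignorelist patterns fails the job. Paths are placeholders.
grep -E '\[(WRN|ERR)\]' /var/log/ceph/ceph.log | grep -v -f ignorelist.txt && echo "job fails"
```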
pass | 7881994 | 2024-08-30 15:06:11 | 2024-08-30 18:43:21 | 2024-08-30 19:08:56 | 0:25:35 | 0:19:04 | 0:06:31 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/small-objects} | 2 | |
pass | 7881995 | 2024-08-30 15:06:12 | 2024-08-30 18:43:42 | 2024-08-30 19:45:33 | 1:01:51 | 0:54:03 | 0:07:48 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-bitmap rados tasks/rados_api_tests validater/valgrind} | 2 | |
pass | 7881996 | 2024-08-30 15:06:13 | 2024-08-30 18:45:02 | 2024-08-30 19:01:02 | 0:16:00 | 0:08:53 | 0:07:07 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} | 4 | |
fail | 7881997 | 2024-08-30 15:06:14 | 2024-08-30 18:45:43 | 2024-08-30 19:12:58 | 0:27:15 | 0:20:58 | 0:06:17 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2 | |
Failure Reason: "2024-08-30T19:10:54.915191+0000 mon.a (mon.0) 648 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 7881998 | 2024-08-30 15:06:16 | 2024-08-30 18:45:54 | 2024-08-30 20:18:33 | 1:32:39 | 1:23:55 | 0:08:44 | smithi | main | centos | 9.stream | rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} | 3 | |
fail | 7881999 | 2024-08-30 15:06:17 | 2024-08-30 18:46:24 | 2024-08-30 19:00:29 | 0:14:05 | 0:07:05 | 0:07:00 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason: Command failed on smithi005 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd pool create unique_pool_0 16 16 erasure jerasure21profile'
pass | 7882000 | 2024-08-30 15:06:18 | 2024-08-30 18:47:35 | 2024-08-30 19:13:00 | 0:25:25 | 0:17:58 | 0:07:27 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/cache-pool-snaps} | 2 | |
pass | 7882001 | 2024-08-30 15:06:19 | 2024-08-30 18:47:46 | 2024-08-30 19:21:49 | 0:34:03 | 0:25:41 | 0:08:22 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-snappy rados tasks/mon_recovery validater/valgrind} | 2 | |
fail | 7882002 | 2024-08-30 15:06:20 | 2024-08-30 18:48:46 | 2024-08-30 19:14:17 | 0:25:31 | 0:17:02 | 0:08:29 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: Command failed (workunit test rados/test_rados_tool.sh) on smithi100 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1ef864504b8875c83ee6c2c5fedc13315bebf7f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_rados_tool.sh'
pass | 7882003 | 2024-08-30 15:06:21 | 2024-08-30 18:48:57 | 2024-08-30 19:15:52 | 0:26:55 | 0:21:24 | 0:05:31 | smithi | main | centos | 9.stream | rados/objectstore/{backends/objectcacher-stress supported-random-distro$/{centos_latest}} | 1 | |
dead | 7882004 | 2024-08-30 15:06:23 | 2024-08-30 18:48:57 | 2024-08-31 02:58:12 | 8:09:15 | | | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason: hit max job timeout
fail | 7882005 | 2024-08-30 15:06:24 | 2024-08-30 18:48:57 | 2024-08-30 21:18:54 | 2:29:57 | 2:17:37 | 0:12:20 | smithi | main | ubuntu | 22.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason: "2024-08-30T19:15:55.206295+0000 mon.a (mon.0) 369 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
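POOL_APP_NOT_ENABLED is raised when a pool carries no application tag, which a pool created mid-upgrade by the workload can trip. The standard remediation, with pool and application names as placeholders:

```sh
# Tag the offending pool with an application to clear POOL_APP_NOT_ENABLED.
ceph osd pool application enable mypool rgw
ceph health detail   # confirm the warning has cleared
```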
fail | 7882006 | 2024-08-30 15:06:25 | 2024-08-30 18:51:08 | 2024-08-30 19:08:17 | 0:17:09 | 0:10:17 | 0:06:52 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason: Command failed on smithi022 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:1ef864504b8875c83ee6c2c5fedc13315bebf7f5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 437e148e-6702-11ef-bcd4-c7b262605968 -- bash -c \'set -e\nset -x\nwhile true; do TOKEN=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\'); echo $TOKEN; if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi; sleep 5; done\nTOKENS=$(ceph rgw realm tokens)\necho $TOKENS | jq --exit-status \'"\'"\'.[0].realm == "myrealm1"\'"\'"\'\necho $TOKENS | jq --exit-status \'"\'"\'.[0].token\'"\'"\'\nTOKEN_JSON=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\' | base64 --decode)\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_name == "myrealm1"\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.endpoint | test("http://.+:\\\\d+")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.access_key\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.secret\'"\'"\'\n\''
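The failure reason above is hard to read through the layered shell quoting. De-escaped, the script that cephadm ran inside the shell container is approximately:

```sh
set -e
set -x
# Poll until the realm token is published (the master zone needs an endpoint first).
while true; do
  TOKEN=$(ceph rgw realm tokens | jq -r '.[0].token')
  echo $TOKEN
  if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi
  sleep 5
done
TOKENS=$(ceph rgw realm tokens)
echo $TOKENS | jq --exit-status '.[0].realm == "myrealm1"'
echo $TOKENS | jq --exit-status '.[0].token'
# The token is base64-encoded JSON; decode it and validate its fields.
TOKEN_JSON=$(ceph rgw realm tokens | jq -r '.[0].token' | base64 --decode)
echo $TOKEN_JSON | jq --exit-status '.realm_name == "myrealm1"'
echo $TOKEN_JSON | jq --exit-status '.endpoint | test("http://.+:\\d+")'
echo $TOKEN_JSON | jq --exit-status '.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")'
echo $TOKEN_JSON | jq --exit-status '.access_key'
echo $TOKEN_JSON | jq --exit-status '.secret'
```

With `set -e` in effect, the first `jq --exit-status` assertion that fails aborts the script with status 1, which matches the reported failure.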
fail | 7882007 | 2024-08-30 15:06:26 | 2024-08-30 18:51:19 | 2024-08-30 22:45:29 | 3:54:10 | 3:47:07 | 0:07:03 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
Failure Reason: Command failed (workunit test cls/test_cls_lock.sh) on smithi189 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1ef864504b8875c83ee6c2c5fedc13315bebf7f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cls/test_cls_lock.sh'
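Status 124 is GNU `timeout` reporting that its limit expired, so this valgrind-wrapped cls run simply exceeded the 3-hour workunit cap rather than failing an assertion. For reference:

```sh
# GNU coreutils `timeout` exits 124 when the wrapped command is still running
# at the deadline; otherwise it propagates the command's own exit status.
timeout 3h ./qa/workunits/cls/test_cls_lock.sh
echo $?   # 124 here means the 3h cap was hit
```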
pass | 7882008 | 2024-08-30 15:06:28 | 2024-08-30 18:52:09 | 2024-08-30 20:59:59 | 2:07:50 | 1:57:57 | 0:09:53 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/thrash-backfill-full mon_election/connectivity msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} | 2 | |
fail | 7882009 | 2024-08-30 15:06:29 | 2024-08-30 18:52:10 | 2024-08-30 19:16:05 | 0:23:55 | 0:12:28 | 0:11:27 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
Failure Reason: Command failed on smithi067 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd pool create unique_pool_0 16 16 erasure jerasure21profile'
pass | 7882010 | 2024-08-30 15:06:30 | 2024-08-30 18:54:10 | 2024-08-30 19:25:31 | 0:31:21 | 0:22:53 | 0:08:28 | smithi | main | centos | 9.stream | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/snaps-few-objects-balanced} | 2 | |
fail | 7882011 | 2024-08-30 15:06:31 | 2024-08-30 18:54:41 | 2024-08-30 19:16:11 | 0:21:30 | 0:12:13 | 0:09:17 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e} | 2 | |
Failure Reason: Command failed (workunit test cephadm/test_dashboard_e2e.sh) on smithi033 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=1ef864504b8875c83ee6c2c5fedc13315bebf7f5 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_dashboard_e2e.sh'