User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
lflores | 2024-09-04 18:54:56 | 2024-09-04 18:56:47 | 2024-09-05 03:11:43 | 8:14:56 | rados | wip-yuri8-testing-2024-08-28-1632-squid | smithi | 1ef8645 | 5 | 7 | 2 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7889511 | 2024-09-04 18:55:56 | 2024-09-04 18:56:47 | 2024-09-04 19:54:12 | 0:57:25 | 0:48:17 | 0:09:08 | smithi | main | ubuntu | 22.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest} workloads/osd} | 1 | |
Failure Reason:
Command failed (workunit test osd/osd-bluefs-volume-ops.sh) on smithi174 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=055b029e365e266078e29644ed3ff217dfe73d04 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/osd/osd-bluefs-volume-ops.sh'
pass | 7889512 | 2024-09-04 18:55:57 | 2024-09-04 18:56:47 | 2024-09-04 21:30:00 | 2:33:13 | 2:26:07 | 0:07:06 | smithi | main | centos | 9.stream | rados/standalone/{supported-random-distro$/{centos_latest} workloads/scrub} | 1 | |
pass | 7889513 | 2024-09-04 18:55:58 | 2024-09-04 18:56:47 | 2024-09-04 19:28:25 | 0:31:38 | 0:24:35 | 0:07:03 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
dead | 7889514 | 2024-09-04 18:55:59 | 2024-09-04 18:57:18 | 2024-09-05 03:08:02 | 8:10:44 | | | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
hit max job timeout
fail | 7889515 | 2024-09-04 18:56:00 | 2024-09-04 18:58:28 | 2024-09-04 21:25:34 | 2:27:06 | 2:16:05 | 0:11:01 | smithi | main | ubuntu | 22.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
"2024-09-04T19:29:35.739700+0000 mon.a (mon.0) 675 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log |
fail | 7889516 | 2024-09-04 18:56:01 | 2024-09-04 18:59:29 | 2024-09-04 19:26:54 | 0:27:25 | 0:21:28 | 0:05:57 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/connectivity random-objectstore$/{bluestore-hybrid} tasks/e2e} | 2 | |
Failure Reason:
"2024-09-04T19:24:07.487711+0000 mon.a (mon.0) 622 : cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log |
pass | 7889517 | 2024-09-04 18:56:02 | 2024-09-04 18:59:59 | 2024-09-04 19:33:26 | 0:33:27 | 0:24:07 | 0:09:20 | smithi | main | centos | 9.stream | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 7889518 | 2024-09-04 18:56:03 | 2024-09-04 19:02:10 | 2024-09-04 19:29:51 | 0:27:41 | 0:17:12 | 0:10:29 | smithi | main | ubuntu | 22.04 | rados/singleton/{all/radostool mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
Command failed (workunit test rados/test_rados_tool.sh) on smithi115 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=055b029e365e266078e29644ed3ff217dfe73d04 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_rados_tool.sh'
dead | 7889519 | 2024-09-04 18:56:03 | 2024-09-04 19:02:10 | 2024-09-05 03:11:43 | 8:09:33 | | | smithi | main | ubuntu | 22.04 | rados/singleton-nomsgr/{all/admin_socket_output mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} | 1 | |
Failure Reason:
hit max job timeout
fail | 7889520 | 2024-09-04 18:56:04 | 2024-09-04 19:02:21 | 2024-09-04 21:28:21 | 2:26:00 | 2:16:43 | 0:09:17 | smithi | main | ubuntu | 22.04 | rados/upgrade/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/connectivity upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} | 2 | |
Failure Reason:
"2024-09-04T19:32:21.274042+0000 mon.a (mon.0) 682 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log |
fail | 7889521 | 2024-09-04 18:56:05 | 2024-09-04 19:02:41 | 2024-09-04 19:22:03 | 0:19:22 | 0:11:41 | 0:07:41 | smithi | main | centos | 9.stream | rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi005 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:1ef864504b8875c83ee6c2c5fedc13315bebf7f5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ef0ba0c6-6af1-11ef-bcd6-c7b262605968 -- bash -c \'set -e\nset -x\nwhile true; do TOKEN=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\'); echo $TOKEN; if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi; sleep 5; done\nTOKENS=$(ceph rgw realm tokens)\necho $TOKENS | jq --exit-status \'"\'"\'.[0].realm == "myrealm1"\'"\'"\'\necho $TOKENS | jq --exit-status \'"\'"\'.[0].token\'"\'"\'\nTOKEN_JSON=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\' | base64 --decode)\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_name == "myrealm1"\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.endpoint | test("http://.+:\\\\d+")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.access_key\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.secret\'"\'"\'\n\'' |
pass | 7889522 | 2024-09-04 18:56:06 | 2024-09-04 19:03:11 | 2024-09-04 20:58:51 | 1:55:40 | 1:44:41 | 0:10:59 | smithi | main | centos | 9.stream | rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async-v1only objectstore/bluestore-comp-zstd rados tasks/rados_cls_all validater/valgrind} | 2 | |
pass | 7889523 | 2024-09-04 18:56:07 | 2024-09-04 19:07:42 | 2024-09-04 19:49:16 | 0:41:34 | 0:27:10 | 0:14:24 | smithi | main | ubuntu | 22.04 | rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{default} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} | 3 | |
fail | 7889524 | 2024-09-04 18:56:08 | 2024-09-04 19:11:23 | 2024-09-04 19:43:55 | 0:32:32 | 0:22:10 | 0:10:22 | smithi | main | centos | 9.stream | rados/dashboard/{0-single-container-host debug/mgr mon_election/classic random-objectstore$/{bluestore-comp-zstd} tasks/e2e} | 2 | |
Failure Reason:
"2024-09-04T19:41:19.873398+0000 mon.a (mon.0) 699 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log |