User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
gabrioux | 2024-09-18 12:14:13 | 2024-09-18 12:17:11 | 2024-09-18 13:52:58 | 1:35:47 | orch:cephadm | wip-guits-main-2024-09-17-1213 | smithi | e7fb7b5 | 6 | 13 | 1 |
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7911374 | 2024-09-18 12:14:24 | 2024-09-18 12:17:11 | 2024-09-18 12:33:10 | 0:15:59 | 0:00:33 | 0:15:26 | smithi | main | centos | 9.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason:
['ubuntu@smithi179.front.sepia.ceph.com: Permission denied (publickey).']
fail | 7911375 | 2024-09-18 12:14:26 | 2024-09-18 12:17:41 | 2024-09-18 12:45:33 | 0:27:52 | 0:16:14 | 0:11:38 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_ctdb_res_ips} | 4 | |
Failure Reason:
SELinux denials found on ubuntu@smithi092.front.sepia.ceph.com: ['type=AVC msg=audit(1726663239.520:10838): avc: denied { nlmsg_read } for pid=60589 comm="ss" scontext=system_u:system_r:container_t:s0:c343,c808 tcontext=system_u:system_r:container_t:s0:c343,c808 tclass=netlink_tcpdiag_socket permissive=1']
fail | 7911376 | 2024-09-18 12:14:27 | 2024-09-18 12:18:32 | 2024-09-18 12:54:17 | 0:35:45 | 0:26:05 | 0:09:40 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
Failure Reason:
Command failed on smithi003 with status 5: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid bf488d62-75ba-11ef-bceb-c7b262605968 -- bash -c \'set -e\nset -x\nceph orch apply node-exporter\nceph orch apply grafana\nceph orch apply alertmanager\nceph orch apply prometheus\nsleep 240\nceph orch ls\nceph orch ps\nceph orch host ls\nMON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r \'"\'"\'last | .daemon_name\'"\'"\')\nGRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nPROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nGRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" \'"\'"\'.[] | select(.hostname==$GRAFANA_HOST) | .addr\'"\'"\')\nPROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" \'"\'"\'.[] | select(.hostname==$PROM_HOST) | .addr\'"\'"\')\nALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" \'"\'"\'.[] | select(.hostname==$ALERTM_HOST) | .addr\'"\'"\')\n# check each host node-exporter metrics endpoint is responsive\nALL_HOST_IPS=$(ceph orch host ls -f json | jq -r \'"\'"\'.[] | .addr\'"\'"\')\nfor ip in $ALL_HOST_IPS; do\n curl -s http://${ip}:9100/metric\ndone\n# check grafana endpoints are responsive and database health is okay\ncurl -k -s https://${GRAFANA_IP}:3000/api/health\ncurl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e \'"\'"\'.database == "ok"\'"\'"\'\n# stop mon daemon in order to trigger an alert\nceph orch daemon stop $MON_DAEMON\nsleep 120\n# check prometheus endpoints are responsive and mon down alert is firing\ncurl -s http://${PROM_IP}:9095/api/v1/status/config\ncurl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e \'"\'"\'.status == "success"\'"\'"\'\ncurl -s http://${PROM_IP}:9095/api/v1/alerts\ncurl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e \'"\'"\'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"\'"\'"\'\n# check alertmanager endpoints are responsive and mon down alert is active\ncurl -s http://${ALERTM_IP}:9093/api/v1/status\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e \'"\'"\'.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"\'"\'"\'\n\'' |
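The escaped one-liner above is hard to read through teuthology's nested quoting. De-escaped and reflowed for readability, the monitoring-stack check that job 7911376 runs inside `cephadm shell` is roughly the following (a reconstruction of the command in the log line, not the qa suite source):

```bash
# Reconstructed script from job 7911376's failure reason (de-escaped).
set -e
set -x
ceph orch apply node-exporter
ceph orch apply grafana
ceph orch apply alertmanager
ceph orch apply prometheus
sleep 240
ceph orch ls
ceph orch ps
ceph orch host ls
MON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r 'last | .daemon_name')
GRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e '.[]' | jq -r '.hostname')
PROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e '.[]' | jq -r '.hostname')
ALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e '.[]' | jq -r '.hostname')
GRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" '.[] | select(.hostname==$GRAFANA_HOST) | .addr')
PROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" '.[] | select(.hostname==$PROM_HOST) | .addr')
ALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" '.[] | select(.hostname==$ALERTM_HOST) | .addr')
# check each host node-exporter metrics endpoint is responsive
ALL_HOST_IPS=$(ceph orch host ls -f json | jq -r '.[] | .addr')
for ip in $ALL_HOST_IPS; do
  curl -s http://${ip}:9100/metric
done
# check grafana endpoints are responsive and database health is okay
curl -k -s https://${GRAFANA_IP}:3000/api/health
curl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e '.database == "ok"'
# stop mon daemon in order to trigger an alert
ceph orch daemon stop $MON_DAEMON
sleep 120
# check prometheus endpoints are responsive and mon down alert is firing
curl -s http://${PROM_IP}:9095/api/v1/status/config
curl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e '.status == "success"'
curl -s http://${PROM_IP}:9095/api/v1/alerts
curl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e '.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"'
# check alertmanager endpoints are responsive and mon down alert is active
curl -s http://${ALERTM_IP}:9093/api/v1/status
curl -s http://${ALERTM_IP}:9093/api/v1/alerts
curl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e '.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"'
```

The reported exit status 5 is the status of this script as run by `cephadm shell`; the `jq -e` checks exit non-zero whenever an expression evaluates to false or null.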
fail | 7911377 | 2024-09-18 12:14:28 | 2024-09-18 12:18:42 | 2024-09-18 13:20:23 | 1:01:41 | 0:51:35 | 0:10:06 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds
fail | 7911378 | 2024-09-18 12:14:30 | 2024-09-18 13:12:44 | 2617 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | ||||
Failure Reason:
Command failed on smithi017 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0fc6f306-75ba-11ef-bceb-c7b262605968 -e sha1=e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\'' |
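Jobs 7911378, 7911386, and 7911393 all fail on the same assertion. With the nested quoting undone, the failing command from the log line above is roughly this (equivalent quoting; the fsid and sha1 shown are specific to job 7911378). It passes only if `ceph versions` reports a single distinct RGW version, i.e. all RGW daemons finished the staggered upgrade:

```bash
# De-escaped form of the failing check; jq -e exits non-zero when the
# expression is false, which is what fails the teuthology task.
sudo /home/ubuntu/cephtest/cephadm \
  --image quay.ceph.io/ceph-ci/ceph:squid shell \
  -c /etc/ceph/ceph.conf \
  -k /etc/ceph/ceph.client.admin.keyring \
  --fsid 0fc6f306-75ba-11ef-bceb-c7b262605968 \
  -e sha1=e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e \
  -- bash -c 'ceph versions | jq -e ".rgw | length == 1"'
```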
fail | 7911379 | 2024-09-18 12:14:31 | 2024-09-18 12:19:23 | 2024-09-18 12:42:02 | 0:22:39 | 0:12:06 | 0:10:33 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi120 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 87e1c7c6-75ba-11ef-bceb-c7b262605968 -- bash -c \'set -e\nset -x\nwhile true; do TOKEN=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\'); echo $TOKEN; if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi; sleep 5; done\nTOKENS=$(ceph rgw realm tokens)\necho $TOKENS | jq --exit-status \'"\'"\'.[0].realm == "myrealm1"\'"\'"\'\necho $TOKENS | jq --exit-status \'"\'"\'.[0].token\'"\'"\'\nTOKEN_JSON=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\' | base64 --decode)\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_name == "myrealm1"\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.endpoint | test("http://.+:\\\\d+")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.access_key\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.secret\'"\'"\'\n\'' |
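Likewise, the test_rgw_multisite check in job 7911379 is easier to follow with the quoting undone. The script passed to `cephadm shell ... -- bash -c` is roughly the following (a de-escaped reconstruction of the log line, reflowed for readability): it polls `ceph rgw realm tokens` until the master zone publishes an endpoint, then validates the fields of the base64-decoded realm token.

```bash
# Reconstructed script from job 7911379's failure reason (de-escaped).
set -e
set -x
# wait for the realm token to become available
while true; do
  TOKEN=$(ceph rgw realm tokens | jq -r '.[0].token')
  echo $TOKEN
  if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi
  sleep 5
done
TOKENS=$(ceph rgw realm tokens)
echo $TOKENS | jq --exit-status '.[0].realm == "myrealm1"'
echo $TOKENS | jq --exit-status '.[0].token'
# decode the token and validate its fields
TOKEN_JSON=$(ceph rgw realm tokens | jq -r '.[0].token' | base64 --decode)
echo $TOKEN_JSON | jq --exit-status '.realm_name == "myrealm1"'
echo $TOKEN_JSON | jq --exit-status '.endpoint | test("http://.+:\\d+")'
echo $TOKEN_JSON | jq --exit-status '.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")'
echo $TOKEN_JSON | jq --exit-status '.access_key'
echo $TOKEN_JSON | jq --exit-status '.secret'
```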
pass | 7911380 | 2024-09-18 12:14:32 | 2024-09-18 12:20:33 | 2024-09-18 13:06:45 | 0:46:12 | 0:32:54 | 0:13:18 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7911381 | 2024-09-18 12:14:34 | 2024-09-18 12:23:44 | 2024-09-18 13:21:21 | 0:57:37 | 0:48:14 | 0:09:23 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
fail | 7911382 | 2024-09-18 12:14:35 | 2024-09-18 12:23:45 | 2024-09-18 12:49:12 | 0:25:27 | 0:14:26 | 0:11:01 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds
fail | 7911383 | 2024-09-18 12:14:37 | 2024-09-18 12:24:15 | 2024-09-18 13:25:18 | 1:01:03 | 0:52:10 | 0:08:53 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds
pass | 7911384 | 2024-09-18 12:14:38 | 2024-09-18 12:24:15 | 2024-09-18 13:09:16 | 0:45:01 | 0:33:07 | 0:11:54 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
fail | 7911385 | 2024-09-18 12:14:40 | 2024-09-18 12:26:16 | 2024-09-18 13:14:18 | 0:48:02 | 0:37:05 | 0:10:57 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds
fail | 7911386 | 2024-09-18 12:14:41 | 2024-09-18 12:27:17 | 2024-09-18 13:19:17 | 0:52:00 | 0:42:22 | 0:09:38 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi082 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 33c70ca4-75bb-11ef-bceb-c7b262605968 -e sha1=e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\'' |
pass | 7911387 | 2024-09-18 12:14:42 | 2024-09-18 12:28:17 | 2024-09-18 13:07:23 | 0:39:06 | 0:27:11 | 0:11:55 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_ctdb_res_dom} | 4 | |
pass | 7911388 | 2024-09-18 12:14:44 | 2024-09-18 12:29:38 | 2024-09-18 13:13:02 | 0:43:24 | 0:33:17 | 0:10:07 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
fail | 7911389 | 2024-09-18 12:14:45 | 2024-09-18 12:29:38 | 2024-09-18 12:57:37 | 0:27:59 | 0:16:30 | 0:11:29 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_ctdb_res_ips} | 4 | |
Failure Reason:
"2024-09-18T12:54:28.042466+0000 mon.a (mon.0) 787 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
||||||||||||||
pass | 7911390 | 2024-09-18 12:14:47 | 2024-09-18 12:30:28 | 2024-09-18 13:03:50 | 0:33:22 | 0:23:28 | 0:09:54 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
dead | 7911391 | 2024-09-18 12:14:48 | 2024-09-18 12:31:09 | 2024-09-18 12:42:23 | 0:11:14 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} | 3 | |||
Failure Reason:
SSH connection to smithi179 was lost: 'sudo grub2-mkconfig -o /boot/grub2/grub.cfg'
fail | 7911392 | 2024-09-18 12:14:49 | 2024-09-18 12:34:50 | 2024-09-18 13:24:16 | 0:49:26 | 0:37:41 | 0:11:45 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds
fail | 7911393 | 2024-09-18 12:14:51 | 2024-09-18 12:36:50 | 2024-09-18 13:52:58 | 1:16:08 | 1:05:01 | 0:11:07 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi067 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 081ae3f8-75bd-11ef-bceb-c7b262605968 -e sha1=e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\'' |