User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
gabrioux | 2024-09-13 12:30:34 | 2024-09-13 12:31:44 | 2024-09-13 21:36:12 | 9:04:28 | orch:cephadm | wip-guits-main-2024-09-13-0840 | smithi | b0201b8 | 96 | 19 | 3 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7903571 | 2024-09-13 12:30:39 | 2024-09-13 12:31:44 | 2024-09-13 13:02:46 | 0:31:02 | 0:21:29 | 0:09:33 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
pass | 7903572 | 2024-09-13 12:30:40 | 2024-09-13 12:31:44 | 2024-09-13 12:56:44 | 0:25:00 | 0:15:31 | 0:09:29 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} | 3 | |
pass | 7903573 | 2024-09-13 12:30:42 | 2024-09-13 12:31:45 | 2024-09-13 13:14:48 | 0:43:03 | 0:32:40 | 0:10:23 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7903574 | 2024-09-13 12:30:43 | 2024-09-13 12:31:45 | 2024-09-13 13:18:49 | 0:47:04 | 0:36:07 | 0:10:57 | smithi | main | centos | 9.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
pass | 7903575 | 2024-09-13 12:30:44 | 2024-09-13 12:31:46 | 2024-09-13 13:44:03 | 1:12:17 | 1:02:53 | 0:09:24 | smithi | main | ubuntu | 22.04 | orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignore_mgr_down ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} | 1 | |
pass | 7903576 | 2024-09-13 12:30:46 | 2024-09-13 12:31:46 | 2024-09-13 12:55:15 | 0:23:29 | 0:13:40 | 0:09:49 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} | 1 | |
pass | 7903577 | 2024-09-13 12:30:47 | 2024-09-13 12:32:26 | 2024-09-13 12:58:13 | 0:25:47 | 0:15:13 | 0:10:34 | smithi | main | ubuntu | 22.04 | orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/off orchestrator_cli} | 2 | |
pass | 7903578 | 2024-09-13 12:30:48 | 2024-09-13 12:32:27 | 2024-09-13 13:03:34 | 0:31:07 | 0:20:48 | 0:10:19 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
fail | 7903579 | 2024-09-13 12:30:50 | 2024-09-13 12:33:58 | 2024-09-13 12:51:33 | 0:17:35 | 0:06:49 | 0:10:46 | smithi | main | centos | 9.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} | 3 | |
Failure Reason:
Command failed on smithi152 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:b0201b8c79733293453e7f10a10c7fa43119222b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7f2a7302-71ce-11ef-bceb-c7b262605968 -- ceph orch host add smithi179' |
pass | 7903580 | 2024-09-13 12:30:51 | 2024-09-13 12:33:58 | 2024-09-13 12:59:34 | 0:25:36 | 0:15:28 | 0:10:08 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
pass | 7903581 | 2024-09-13 12:30:52 | 2024-09-13 12:34:39 | 2024-09-13 13:04:10 | 0:29:31 | 0:19:40 | 0:09:51 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_22.04} 1-start 2-services/basic 3-final} | 1 | |
pass | 7903582 | 2024-09-13 12:30:54 | 2024-09-13 12:34:39 | 2024-09-13 12:59:52 | 0:25:13 | 0:10:23 | 0:14:50 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 3 | |
pass | 7903583 | 2024-09-13 12:30:55 | 2024-09-13 12:37:00 | 2024-09-13 13:45:19 | 1:08:19 | 0:57:19 | 0:11:00 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
pass | 7903584 | 2024-09-13 12:30:56 | 2024-09-13 12:38:00 | 2024-09-13 13:22:59 | 0:44:59 | 0:34:44 | 0:10:15 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
fail | 7903585 | 2024-09-13 12:30:58 | 2024-09-13 12:38:01 | 2024-09-13 13:06:44 | 0:28:43 | 0:16:43 | 0:12:00 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_ctdb_res_ips} | 4 | |
Failure Reason:
SELinux denials found on ubuntu@smithi043.front.sepia.ceph.com: ['type=AVC msg=audit(1726232569.614:10931): avc: denied { nlmsg_read } for pid=61998 comm="ss" scontext=system_u:system_r:container_t:s0:c31,c789 tcontext=system_u:system_r:container_t:s0:c31,c789 tclass=netlink_tcpdiag_socket permissive=1'] |
pass | 7903586 | 2024-09-13 12:30:59 | 2024-09-13 12:38:01 | 2024-09-13 13:13:11 | 0:35:10 | 0:24:29 | 0:10:41 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 | |
pass | 7903587 | 2024-09-13 12:31:01 | 2024-09-13 12:38:22 | 2024-09-13 13:02:01 | 0:23:39 | 0:13:43 | 0:09:56 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
pass | 7903588 | 2024-09-13 12:31:02 | 2024-09-13 12:38:23 | 2024-09-13 13:03:54 | 0:25:31 | 0:15:35 | 0:09:56 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
pass | 7903589 | 2024-09-13 12:31:03 | 2024-09-13 12:38:43 | 2024-09-13 13:10:34 | 0:31:51 | 0:21:34 | 0:10:17 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
pass | 7903590 | 2024-09-13 12:31:05 | 2024-09-13 12:38:44 | 2024-09-13 12:59:49 | 0:21:05 | 0:12:36 | 0:08:29 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7903591 | 2024-09-13 12:31:06 | 2024-09-13 12:38:44 | 2024-09-13 13:07:52 | 0:29:08 | 0:19:31 | 0:09:37 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_domain} | 2 | |
pass | 7903592 | 2024-09-13 12:31:08 | 2024-09-13 12:38:44 | 2024-09-13 13:14:30 | 0:35:46 | 0:26:37 | 0:09:09 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 7903593 | 2024-09-13 12:31:09 | 2024-09-13 12:38:45 | 2024-09-13 13:18:10 | 0:39:25 | 0:29:03 | 0:10:22 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
pass | 7903594 | 2024-09-13 12:31:10 | 2024-09-13 12:38:45 | 2024-09-13 13:35:23 | 0:56:38 | 0:45:11 | 0:11:27 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
fail | 7903595 | 2024-09-13 12:31:12 | 2024-09-13 12:39:36 | 2024-09-13 13:16:11 | 0:36:35 | 0:26:29 | 0:10:06 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
Failure Reason:
Command failed on smithi007 with status 5: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:b0201b8c79733293453e7f10a10c7fa43119222b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fa2bba74-71cf-11ef-bceb-c7b262605968 -- bash -c \'set -e\nset -x\nceph orch apply node-exporter\nceph orch apply grafana\nceph orch apply alertmanager\nceph orch apply prometheus\nsleep 240\nceph orch ls\nceph orch ps\nceph orch host ls\nMON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r \'"\'"\'last | .daemon_name\'"\'"\')\nGRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nPROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nGRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" \'"\'"\'.[] | select(.hostname==$GRAFANA_HOST) | .addr\'"\'"\')\nPROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" \'"\'"\'.[] | select(.hostname==$PROM_HOST) | .addr\'"\'"\')\nALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" \'"\'"\'.[] | select(.hostname==$ALERTM_HOST) | .addr\'"\'"\')\n# check each host node-exporter metrics endpoint is responsive\nALL_HOST_IPS=$(ceph orch host ls -f json | jq -r \'"\'"\'.[] | .addr\'"\'"\')\nfor ip in $ALL_HOST_IPS; do\n curl -s http://${ip}:9100/metric\ndone\n# check grafana endpoints are responsive and database health is okay\ncurl -k -s https://${GRAFANA_IP}:3000/api/health\ncurl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e \'"\'"\'.database == "ok"\'"\'"\'\n# stop mon daemon in order to trigger an alert\nceph orch daemon stop $MON_DAEMON\nsleep 120\n# check prometheus endpoints are responsive and mon down alert is firing\ncurl -s http://${PROM_IP}:9095/api/v1/status/config\ncurl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e \'"\'"\'.status == "success"\'"\'"\'\ncurl -s http://${PROM_IP}:9095/api/v1/alerts\ncurl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e \'"\'"\'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"\'"\'"\'\n# check alertmanager endpoints are responsive and mon down alert is active\ncurl -s http://${ALERTM_IP}:9093/api/v1/status\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e \'"\'"\'.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"\'"\'"\'\n\'' |
fail | 7903596 | 2024-09-13 12:31:13 | 2024-09-13 12:39:36 | 2024-09-13 13:44:41 | 1:05:05 | 0:53:10 | 0:11:55 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds |
pass | 7903597 | 2024-09-13 12:31:15 | 2024-09-13 12:40:47 | 2024-09-13 13:05:40 | 0:24:53 | 0:15:11 | 0:09:42 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
fail | 7903598 | 2024-09-13 12:31:16 | 2024-09-13 12:40:47 | 2024-09-13 13:37:16 | 0:56:29 | 0:42:55 | 0:13:34 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
Failure Reason:
Command failed on smithi018 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9f4b6690-71cf-11ef-bceb-c7b262605968 -e sha1=b0201b8c79733293453e7f10a10c7fa43119222b -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\'' |
pass | 7903599 | 2024-09-13 12:31:17 | 2024-09-13 12:42:38 | 2024-09-13 13:17:40 | 0:35:02 | 0:23:51 | 0:11:11 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7903600 | 2024-09-13 12:31:19 | 2024-09-13 12:43:59 | 2024-09-13 13:08:23 | 0:24:24 | 0:12:36 | 0:11:48 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7903601 | 2024-09-13 12:31:20 | 2024-09-13 12:45:10 | 2024-09-13 13:04:07 | 0:18:57 | 0:10:12 | 0:08:45 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_basic} | 2 | |
fail | 7903602 | 2024-09-13 12:31:21 | 2024-09-13 12:45:10 | 2024-09-13 13:20:40 | 0:35:30 | 0:22:03 | 0:13:27 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason:
"2024-09-13T13:16:10.490033+0000 mon.a (mon.0) 1439 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
fail | 7903603 | 2024-09-13 12:31:23 | 2024-09-13 12:47:01 | 2024-09-13 13:10:23 | 0:23:22 | 0:13:17 | 0:10:05 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason:
Command failed on smithi071 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:b0201b8c79733293453e7f10a10c7fa43119222b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 5b824d60-71d0-11ef-bceb-c7b262605968 -- bash -c \'set -e\nset -x\nwhile true; do TOKEN=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\'); echo $TOKEN; if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi; sleep 5; done\nTOKENS=$(ceph rgw realm tokens)\necho $TOKENS | jq --exit-status \'"\'"\'.[0].realm == "myrealm1"\'"\'"\'\necho $TOKENS | jq --exit-status \'"\'"\'.[0].token\'"\'"\'\nTOKEN_JSON=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\' | base64 --decode)\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_name == "myrealm1"\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.endpoint | test("http://.+:\\\\d+")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.access_key\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.secret\'"\'"\'\n\'' |
pass | 7903604 | 2024-09-13 12:31:24 | 2024-09-13 12:47:21 | 2024-09-13 13:19:11 | 0:31:50 | 0:21:31 | 0:10:19 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
pass | 7903605 | 2024-09-13 12:31:25 | 2024-09-13 12:47:22 | 2024-09-13 13:10:56 | 0:23:34 | 0:13:14 | 0:10:20 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7903606 | 2024-09-13 12:31:26 | 2024-09-13 12:47:42 | 2024-09-13 13:47:07 | 0:59:25 | 0:46:44 | 0:12:41 | smithi | main | ubuntu | 22.04 | orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
fail | 7903607 | 2024-09-13 12:31:28 | 2024-09-13 12:49:13 | 2024-09-13 13:32:25 | 0:43:12 | 0:32:21 | 0:10:51 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
"2024-09-13T13:20:00.000179+0000 mon.smithi184 (mon.0) 419 : cluster [WRN] osd.3 (root=default,host=smithi203) is down" in cluster log |
dead | 7903608 | 2024-09-13 12:31:29 | 2024-09-13 12:49:13 | 2024-09-13 20:59:02 | 8:09:49 | | | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
hit max job timeout |
pass | 7903609 | 2024-09-13 12:31:30 | 2024-09-13 12:49:54 | 2024-09-13 13:13:10 | 0:23:16 | 0:12:53 | 0:10:23 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 | |
pass | 7903610 | 2024-09-13 12:31:32 | 2024-09-13 12:49:55 | 2024-09-13 13:48:23 | 0:58:28 | 0:48:58 | 0:09:30 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
pass | 7903611 | 2024-09-13 12:31:33 | 2024-09-13 12:50:55 | 2024-09-13 13:11:49 | 0:20:54 | 0:12:02 | 0:08:52 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_res_dom} | 2 | |
pass | 7903612 | 2024-09-13 12:31:34 | 2024-09-13 12:50:56 | 2024-09-13 13:24:52 | 0:33:56 | 0:23:23 | 0:10:33 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
pass | 7903613 | 2024-09-13 12:31:36 | 2024-09-13 12:51:46 | 2024-09-13 13:20:08 | 0:28:22 | 0:15:22 | 0:13:00 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
pass | 7903614 | 2024-09-13 12:31:37 | 2024-09-13 12:52:47 | 2024-09-13 13:17:57 | 0:25:10 | 0:15:34 | 0:09:36 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
pass | 7903615 | 2024-09-13 12:31:38 | 2024-09-13 12:52:57 | 2024-09-13 13:16:37 | 0:23:40 | 0:13:21 | 0:10:19 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_adoption} | 1 | |
fail | 7903616 | 2024-09-13 12:31:39 | 2024-09-13 12:53:08 | 2024-09-13 13:18:29 | 0:25:21 | 0:14:56 | 0:10:25 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} | 2 | |
Failure Reason:
reached maximum tries (120) after waiting for 120 seconds |
pass | 7903617 | 2024-09-13 12:31:41 | 2024-09-13 12:53:18 | 2024-09-13 13:15:43 | 0:22:25 | 0:11:48 | 0:10:37 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} | 2 | |
pass | 7903618 | 2024-09-13 12:31:42 | 2024-09-13 12:54:39 | 2024-09-13 13:19:29 | 0:24:50 | 0:14:11 | 0:10:39 | smithi | main | centos | 9.stream | orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7903619 | 2024-09-13 12:31:43 | 2024-09-13 12:54:39 | 2024-09-13 13:25:26 | 0:30:47 | 0:19:35 | 0:11:12 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
pass | 7903620 | 2024-09-13 12:31:45 | 2024-09-13 12:55:30 | 2024-09-13 13:38:58 | 0:43:28 | 0:33:44 | 0:09:44 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
pass | 7903621 | 2024-09-13 12:31:46 | 2024-09-13 12:56:01 | 2024-09-13 13:17:21 | 0:21:20 | 0:10:44 | 0:10:36 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} | 2 | |
fail | 7903622 | 2024-09-13 12:31:47 | 2024-09-13 12:56:21 | 2024-09-13 14:13:26 | 1:17:05 | 1:06:19 | 0:10:46 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds |
pass | 7903623 | 2024-09-13 12:31:49 | 2024-09-13 12:56:21 | 2024-09-13 13:20:01 | 0:23:40 | 0:12:37 | 0:11:03 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
pass | 7903624 | 2024-09-13 12:31:50 | 2024-09-13 12:56:42 | 2024-09-13 14:14:47 | 1:18:05 | 1:08:15 | 0:09:50 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
pass | 7903625 | 2024-09-13 12:31:51 | 2024-09-13 12:56:42 | 2024-09-13 13:24:38 | 0:27:56 | 0:18:15 | 0:09:41 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} | 2 | |
pass | 7903626 | 2024-09-13 12:31:53 | 2024-09-13 12:57:03 | 2024-09-13 13:42:25 | 0:45:22 | 0:32:46 | 0:12:36 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
pass | 7903627 | 2024-09-13 12:31:54 | 2024-09-13 12:58:34 | 2024-09-13 13:29:04 | 0:30:30 | 0:20:01 | 0:10:29 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
pass | 7903628 | 2024-09-13 12:31:55 | 2024-09-13 12:59:44 | 2024-09-13 13:21:05 | 0:21:21 | 0:12:33 | 0:08:48 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
pass | 7903629 | 2024-09-13 12:31:56 | 2024-09-13 13:00:05 | 2024-09-13 13:29:50 | 0:29:45 | 0:18:45 | 0:11:00 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7903630 | 2024-09-13 12:31:58 | 2024-09-13 13:00:05 | 2024-09-13 14:12:50 | 1:12:45 | 1:02:40 | 0:10:05 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
pass | 7903631 | 2024-09-13 12:31:59 | 2024-09-13 13:00:06 | 2024-09-13 13:35:36 | 0:35:30 | 0:22:17 | 0:13:13 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
pass | 7903632 | 2024-09-13 12:32:00 | 2024-09-13 13:02:06 | 2024-09-13 13:21:45 | 0:19:39 | 0:10:07 | 0:09:32 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_basic} | 2 | |
pass | 7903633 | 2024-09-13 12:32:02 | 2024-09-13 13:02:07 | 2024-09-13 13:36:52 | 0:34:45 | 0:23:17 | 0:11:28 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rotate-keys} | 2 | |
pass | 7903634 | 2024-09-13 12:32:03 | 2024-09-13 13:03:07 | 2024-09-13 13:17:57 | 0:14:50 | 0:05:38 | 0:09:12 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7903635 | 2024-09-13 12:32:04 | 2024-09-13 13:03:08 | 2024-09-13 13:46:34 | 0:43:26 | 0:33:14 | 0:10:12 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7903636 | 2024-09-13 12:32:06 | 2024-09-13 13:03:38 | 2024-09-13 13:26:38 | 0:23:00 | 0:12:38 | 0:10:22 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} | 1 | |
pass | 7903637 | 2024-09-13 12:32:07 | 2024-09-13 13:03:49 | 2024-09-13 13:27:25 | 0:23:36 | 0:14:57 | 0:08:39 | smithi | main | ubuntu | 22.04 | orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/on orchestrator_cli} | 2 | |
pass | 7903638 | 2024-09-13 12:32:08 | 2024-09-13 13:03:49 | 2024-09-13 13:27:26 | 0:23:37 | 0:13:03 | 0:10:34 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
pass | 7903639 | 2024-09-13 12:32:10 | 2024-09-13 13:04:00 | 2024-09-13 13:33:31 | 0:29:31 | 0:20:48 | 0:08:43 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_22.04} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7903640 | 2024-09-13 12:32:11 | 2024-09-13 13:04:00 | 2024-09-13 13:25:00 | 0:21:00 | 0:10:48 | 0:10:12 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 | |
pass | 7903641 | 2024-09-13 12:32:12 | 2024-09-13 13:04:11 | 2024-09-13 13:47:19 | 0:43:08 | 0:33:18 | 0:09:50 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
pass | 7903642 | 2024-09-13 12:32:14 | 2024-09-13 13:04:21 | 2024-09-13 13:27:43 | 0:23:22 | 0:13:43 | 0:09:39 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 | |
pass | 7903643 | 2024-09-13 12:32:15 | 2024-09-13 13:04:42 | 2024-09-13 13:27:56 | 0:23:14 | 0:12:36 | 0:10:38 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
fail | 7903644 | 2024-09-13 12:32:16 | 2024-09-13 13:04:42 | 2024-09-13 13:31:53 | 0:27:11 | 0:15:19 | 0:11:52 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_ctdb_res_basic} | 4 | |
Failure Reason:
"2024-09-13T13:30:01.148598+0000 mon.a (mon.0) 824 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
pass | 7903645 | 2024-09-13 12:32:17 | 2024-09-13 13:06:03 | 2024-09-13 13:28:58 | 0:22:55 | 0:14:15 | 0:08:40 | smithi | main | centos | 9.stream | orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
pass | 7903646 | 2024-09-13 12:32:19 | 2024-09-13 13:06:03 | 2024-09-13 13:45:56 | 0:39:53 | 0:29:09 | 0:10:44 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
pass | 7903647 | 2024-09-13 12:32:20 | 2024-09-13 13:06:54 | 2024-09-13 13:50:20 | 0:43:26 | 0:33:36 | 0:09:50 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
pass | 7903648 | 2024-09-13 12:32:21 | 2024-09-13 13:07:04 | 2024-09-13 13:28:40 | 0:21:36 | 0:12:38 | 0:08:58 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} | 2 | |
pass | 7903649 | 2024-09-13 12:32:23 | 2024-09-13 13:08:15 | 2024-09-13 13:40:36 | 0:32:21 | 0:21:46 | 0:10:35 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
fail | 7903650 | 2024-09-13 12:32:24 | 2024-09-13 13:08:45 | 2024-09-13 13:57:37 | 0:48:52 | 0:37:16 | 0:11:36 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds |
pass | 7903651 | 2024-09-13 12:32:26 | 2024-09-13 13:10:36 | 2024-09-13 13:33:51 | 0:23:15 | 0:13:39 | 0:09:36 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
fail | 7903652 | 2024-09-13 12:32:27 | 2024-09-13 13:10:57 | 2024-09-13 14:04:29 | 0:53:32 | 0:41:53 | 0:11:39 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi154 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8365424e-71d3-11ef-bceb-c7b262605968 -e sha1=b0201b8c79733293453e7f10a10c7fa43119222b -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\'' |
fail | 7903653 | 2024-09-13 12:32:28 | 2024-09-13 13:11:07 | 2024-09-13 13:54:30 | 0:43:23 | 0:31:18 | 0:12:05 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_ctdb_res_dom} | 4 | |
Failure Reason:
Command failed on smithi012 with status 22: "sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:b0201b8c79733293453e7f10a10c7fa43119222b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 56b16aba-71d4-11ef-bceb-c7b262605968 -- bash -c 'ceph smb apply -i -'" |
pass | 7903654 | 2024-09-13 12:32:30 | 2024-09-13 13:12:58 | 2024-09-13 13:58:44 | 0:45:46 | 0:34:59 | 0:10:47 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
pass | 7903655 | 2024-09-13 12:32:31 | 2024-09-13 13:13:08 | 2024-09-13 13:51:00 | 0:37:52 | 0:28:35 | 0:09:17 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} | 3 | |
pass | 7903656 | 2024-09-13 12:32:32 | 2024-09-13 13:13:29 | 2024-09-13 13:37:34 | 0:24:05 | 0:13:20 | 0:10:45 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} | 1 | |
pass | 7903657 | 2024-09-13 12:32:33 | 2024-09-13 13:13:29 | 2024-09-13 13:38:05 | 0:24:36 | 0:12:40 | 0:11:56 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7903658 | 2024-09-13 12:32:35 | 2024-09-13 13:14:40 | 2024-09-13 13:39:38 | 0:24:58 | 0:15:27 | 0:09:31 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
pass | 7903659 | 2024-09-13 12:32:36 | 2024-09-13 13:14:40 | 2024-09-13 14:14:06 | 0:59:26 | 0:49:17 | 0:10:09 | smithi | main | ubuntu | 22.04 | orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
fail | 7903660 | 2024-09-13 12:32:37 | 2024-09-13 13:14:41 | 2024-09-13 13:41:07 | 0:26:26 | 0:16:27 | 0:09:59 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_ctdb_res_ips} | 4 | |
Failure Reason:
SELinux denials found on ubuntu@smithi006.front.sepia.ceph.com: ['type=AVC msg=audit(1726234669.466:10133): avc: denied { nlmsg_read } for pid=57100 comm="ss" scontext=system_u:system_r:container_t:s0:c774,c872 tcontext=system_u:system_r:container_t:s0:c774,c872 tclass=netlink_tcpdiag_socket permissive=1'] |
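The denial above is teuthology's SELinux scraper tripping on `ss` reading netlink tcpdiag sockets from a container context. A sketch of pulling the interesting fields out of such a line (the field names are standard auditd AVC output; the parser itself is only an illustration, not teuthology code):

```python
import re

# Extract the denied permission, the offending command, and the target class
# from an auditd AVC record like the one logged by this job.
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{ (?P<perm>[^}]+) \}"   # denied permission(s)
    r'.*?comm="(?P<comm>[^"]+)"'               # command that triggered it
    r".*?tclass=(?P<tclass>\S+)"               # target object class
)

def parse_avc(line: str) -> dict:
    m = AVC_RE.search(line)
    return m.groupdict() if m else {}

denial = ('type=AVC msg=audit(1726234669.466:10133): avc: denied '
          '{ nlmsg_read } for pid=57100 comm="ss" '
          'scontext=system_u:system_r:container_t:s0:c774,c872 '
          'tcontext=system_u:system_r:container_t:s0:c774,c872 '
          'tclass=netlink_tcpdiag_socket permissive=1')
print(parse_avc(denial))
```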
pass | 7903661 | 2024-09-13 12:32:39 | 2024-09-13 13:16:02 | 2024-09-13 13:51:15 | 0:35:13 | 0:25:34 | 0:09:39 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
pass | 7903662 | 2024-09-13 12:32:40 | 2024-09-13 13:16:22 | 2024-09-13 13:39:36 | 0:23:14 | 0:12:37 | 0:10:37 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
pass | 7903663 | 2024-09-13 12:32:42 | 2024-09-13 13:16:22 | 2024-09-13 13:52:27 | 0:36:05 | 0:25:33 | 0:10:32 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
fail | 7903664 | 2024-09-13 12:32:43 | 2024-09-13 13:16:53 | 2024-09-13 14:02:37 | 0:45:44 | 0:35:38 | 0:10:06 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
"2024-09-13T13:50:00.000128+0000 mon.smithi086 (mon.0) 372 : cluster [WRN] osd.2 (root=default,host=smithi086) is down" in cluster log |
pass | 7903665 | 2024-09-13 12:32:44 | 2024-09-13 13:17:34 | 2024-09-13 13:40:53 | 0:23:19 | 0:13:41 | 0:09:38 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
pass | 7903666 | 2024-09-13 12:32:46 | 2024-09-13 13:17:54 | 2024-09-13 13:41:34 | 0:23:40 | 0:12:38 | 0:11:02 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 | |
pass | 7903667 | 2024-09-13 12:32:47 | 2024-09-13 13:17:55 | 2024-09-13 14:17:54 | 0:59:59 | 0:50:40 | 0:09:19 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
pass | 7903668 | 2024-09-13 12:32:48 | 2024-09-13 13:18:15 | 2024-09-13 13:50:22 | 0:32:07 | 0:21:14 | 0:10:53 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7903669 | 2024-09-13 12:32:50 | 2024-09-13 13:18:26 | 2024-09-13 13:40:06 | 0:21:40 | 0:11:54 | 0:09:46 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_domain} | 2 | |
pass | 7903670 | 2024-09-13 12:32:51 | 2024-09-13 13:18:46 | 2024-09-13 13:56:45 | 0:37:59 | 0:28:01 | 0:09:58 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7903671 | 2024-09-13 12:32:52 | 2024-09-13 13:19:07 | 2024-09-13 14:04:44 | 0:45:37 | 0:35:12 | 0:10:25 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
fail | 7903672 | 2024-09-13 12:32:54 | 2024-09-13 13:19:27 | 2024-09-13 13:48:44 | 0:29:17 | 0:20:07 | 0:09:10 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} | 3 | |
Failure Reason:
Command failed on smithi102 with status 5: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:b0201b8c79733293453e7f10a10c7fa43119222b shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ee9fed4c-71d4-11ef-bceb-c7b262605968 -- bash -c \'set -e\nset -x\nceph orch apply node-exporter\nceph orch apply grafana\nceph orch apply alertmanager\nceph orch apply prometheus\nsleep 240\nceph orch ls\nceph orch ps\nceph orch host ls\nMON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r \'"\'"\'last | .daemon_name\'"\'"\')\nGRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nPROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nGRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" \'"\'"\'.[] | select(.hostname==$GRAFANA_HOST) | .addr\'"\'"\')\nPROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" \'"\'"\'.[] | select(.hostname==$PROM_HOST) | .addr\'"\'"\')\nALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" \'"\'"\'.[] | select(.hostname==$ALERTM_HOST) | .addr\'"\'"\')\n# check each host node-exporter metrics endpoint is responsive\nALL_HOST_IPS=$(ceph orch host ls -f json | jq -r \'"\'"\'.[] | .addr\'"\'"\')\nfor ip in $ALL_HOST_IPS; do\n curl -s http://${ip}:9100/metric\ndone\n# check grafana endpoints are responsive and database health is okay\ncurl -k -s https://${GRAFANA_IP}:3000/api/health\ncurl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e \'"\'"\'.database == "ok"\'"\'"\'\n# stop mon daemon in order to trigger an alert\nceph orch daemon stop $MON_DAEMON\nsleep 120\n# check prometheus endpoints are responsive and mon down alert is firing\ncurl -s http://${PROM_IP}:9095/api/v1/status/config\ncurl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e \'"\'"\'.status == "success"\'"\'"\'\ncurl -s http://${PROM_IP}:9095/api/v1/alerts\ncurl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e \'"\'"\'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"\'"\'"\'\n# check alertmanager endpoints are responsive and mon down alert is active\ncurl -s http://${ALERTM_IP}:9093/api/v1/status\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e \'"\'"\'.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"\'"\'"\'\n\'' |
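The last steps of the embedded script above assert, via jq, that a `CephMonDown` alert is firing in the body returned by Prometheus's `/api/v1/alerts` after the mon daemon is stopped. The same predicate sketched in Python against an illustrative payload (not captured from this run):

```python
import json

def mon_down_firing(alerts_body: str) -> bool:
    """True if any alert named CephMonDown is in the firing state."""
    data = json.loads(alerts_body)
    return any(
        a.get("labels", {}).get("alertname") == "CephMonDown"
        and a.get("state") == "firing"
        for a in data.get("data", {}).get("alerts", [])
    )

# Illustrative /api/v1/alerts body: one pending alert, one firing alert.
body = json.dumps({"status": "success", "data": {"alerts": [
    {"labels": {"alertname": "CephHealthWarning"}, "state": "pending"},
    {"labels": {"alertname": "CephMonDown"}, "state": "firing"},
]}})
print(mon_down_firing(body))  # True
```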
pass | 7903673 | 2024-09-13 12:32:55 | 2024-09-13 13:19:48 | 2024-09-13 13:45:41 | 0:25:53 | 0:15:51 | 0:10:02 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
pass | 7903674 | 2024-09-13 12:32:57 | 2024-09-13 13:20:18 | 2024-09-13 14:13:30 | 0:53:12 | 0:40:42 | 0:12:30 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7903675 | 2024-09-13 12:32:58 | 2024-09-13 13:20:59 | 2024-09-13 13:50:55 | 0:29:56 | 0:20:24 | 0:09:32 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7903676 | 2024-09-13 12:32:59 | 2024-09-13 13:21:19 | 2024-09-13 13:52:14 | 0:30:55 | 0:18:38 | 0:12:17 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_res_basic} | 2 | |
pass | 7903677 | 2024-09-13 12:33:00 | 2024-09-13 13:22:00 | 2024-09-13 14:04:56 | 0:42:56 | 0:32:44 | 0:10:12 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
dead | 7903678 | 2024-09-13 12:33:02 | 2024-09-13 13:23:20 | 2024-09-13 21:34:31 | 8:11:11 | | | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} | 3 | |
Failure Reason:
hit max job timeout |
fail | 7903679 | 2024-09-13 12:33:03 | 2024-09-13 13:25:01 | 2024-09-13 14:11:59 | 0:46:58 | 0:37:58 | 0:09:00 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds |
pass | 7903680 | 2024-09-13 12:33:05 | 2024-09-13 13:25:12 | 2024-09-13 13:48:02 | 0:22:50 | 0:12:29 | 0:10:21 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
fail | 7903681 | 2024-09-13 12:33:06 | 2024-09-13 13:25:12 | 2024-09-13 14:39:27 | 1:14:15 | 1:04:44 | 0:09:31 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi062 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid ff1149ea-71d5-11ef-bceb-c7b262605968 -e sha1=b0201b8c79733293453e7f10a10c7fa43119222b -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\'' |
pass | 7903682 | 2024-09-13 12:33:07 | 2024-09-13 13:25:23 | 2024-09-13 13:49:58 | 0:24:35 | 0:13:21 | 0:11:14 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7903683 | 2024-09-13 12:33:09 | 2024-09-13 13:26:54 | 2024-09-13 14:34:38 | 1:07:44 | 0:58:42 | 0:09:02 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 | |
dead | 7903684 | 2024-09-13 12:33:10 | 2024-09-13 13:27:44 | 2024-09-13 21:36:12 | 8:08:28 | | | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
hit max job timeout |
pass | 7903685 | 2024-09-13 12:33:11 | 2024-09-13 13:27:45 | 2024-09-13 13:49:10 | 0:21:25 | 0:11:13 | 0:10:12 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_dom} | 2 | |
pass | 7903686 | 2024-09-13 12:33:12 | 2024-09-13 13:28:05 | 2024-09-13 14:01:35 | 0:33:30 | 0:23:59 | 0:09:31 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
pass | 7903687 | 2024-09-13 12:33:14 | 2024-09-13 13:28:16 | 2024-09-13 13:54:06 | 0:25:50 | 0:15:55 | 0:09:55 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_set_mon_crush_locations} | 3 | |
pass | 7903688 | 2024-09-13 12:33:15 | 2024-09-13 13:29:16 | 2024-09-13 14:00:53 | 0:31:37 | 0:22:16 | 0:09:21 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 |