User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
gabrioux | 2024-09-16 07:44:22 | 2024-09-16 07:56:38 | 2024-09-16 17:11:34 | 9:14:56 | orch:cephadm | wip-guits-main-2024-09-13-1248 | smithi | 8293d73 | 96 | 18 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fail | 7906913 | 2024-09-16 07:44:27 | 2024-09-16 07:50:43 | 2024-09-16 08:23:17 | 0:32:34 | 0:21:23 | 0:11:11 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: "2024-09-16T08:19:47.043949+0000 mon.a (mon.0) 1353 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
pass | 7906914 | 2024-09-16 07:44:29 | 2024-09-16 07:52:03 | 2024-09-16 08:17:24 | 0:25:21 | 0:15:15 | 0:10:06 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} | 3 | |
pass | 7906915 | 2024-09-16 07:44:30 | 2024-09-16 07:52:24 | 2024-09-16 08:33:50 | 0:41:26 | 0:32:22 | 0:09:04 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7906916 | 2024-09-16 07:44:32 | 2024-09-16 07:52:45 | 2024-09-16 08:35:47 | 0:43:02 | 0:32:43 | 0:10:19 | smithi | main | centos | 9.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
pass | 7906917 | 2024-09-16 07:44:33 | 2024-09-16 07:52:45 | 2024-09-16 08:37:12 | 0:44:27 | 0:34:47 | 0:09:40 | smithi | main | centos | 9.stream | orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignore_mgr_down ignorelist_health pg_health} supported-random-distros$/{centos_latest} tasks/nfs} | 1 | |
pass | 7906918 | 2024-09-16 07:44:34 | 2024-09-16 07:52:46 | 2024-09-16 08:18:01 | 0:25:15 | 0:13:17 | 0:11:58 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} | 1 | |
pass | 7906919 | 2024-09-16 07:44:36 | 2024-09-16 07:54:56 | 2024-09-16 08:16:26 | 0:21:30 | 0:11:22 | 0:10:08 | smithi | main | centos | 9.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream} 2-node-mgr agent/off orchestrator_cli} | 2 | |
pass | 7906920 | 2024-09-16 07:44:37 | 2024-09-16 07:55:27 | 2024-09-16 08:26:37 | 0:31:10 | 0:19:42 | 0:11:28 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
fail | 7906921 | 2024-09-16 07:44:38 | 2024-09-16 07:56:38 | 2024-09-16 08:30:06 | 0:33:28 | 0:23:15 | 0:10:13 | smithi | main | centos | 9.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} | 3 | |
Failure Reason: "2024-09-16T08:12:33.075693+0000 mon.a (mon.0) 208 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
pass | 7906922 | 2024-09-16 07:44:40 | 2024-09-16 07:57:08 | 2024-09-16 08:22:27 | 0:25:19 | 0:15:13 | 0:10:06 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
pass | 7906923 | 2024-09-16 07:44:41 | 2024-09-16 07:57:28 | 2024-09-16 08:16:28 | 0:19:00 | 0:10:24 | 0:08:36 | smithi | main | centos | 9.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} | 1 | |
pass | 7906924 | 2024-09-16 07:44:43 | 2024-09-16 07:57:29 | 2024-09-16 08:19:16 | 0:21:47 | 0:10:26 | 0:11:21 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 3 | |
pass | 7906925 | 2024-09-16 07:44:44 | 2024-09-16 07:58:29 | 2024-09-16 09:11:31 | 1:13:02 | 1:03:56 | 0:09:06 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
fail | 7906926 | 2024-09-16 07:44:46 | 2024-09-16 07:58:30 | 2024-09-16 08:29:57 | 0:31:27 | 0:20:18 | 0:11:09 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi090 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 8b378aa4-7403-11ef-bceb-c7b262605968 -e sha1=8293d73f8690540e843a81caec373f9cc29cf705 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
fail | 7906927 | 2024-09-16 07:44:47 | 2024-09-16 08:00:10 | 2024-09-16 08:28:02 | 0:27:52 | 0:16:38 | 0:11:14 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_ctdb_res_ips} | 4 | |
Failure Reason: SELinux denials found on ubuntu@smithi006.front.sepia.ceph.com: ['type=AVC msg=audit(1726475001.099:10876): avc: denied { nlmsg_read } for pid=60641 comm="ss" scontext=system_u:system_r:container_t:s0:c416,c561 tcontext=system_u:system_r:container_t:s0:c416,c561 tclass=netlink_tcpdiag_socket permissive=1']
pass | 7906928 | 2024-09-16 07:44:48 | 2024-09-16 08:01:01 | 2024-09-16 08:34:59 | 0:33:58 | 0:22:58 | 0:11:00 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 | |
pass | 7906929 | 2024-09-16 07:44:50 | 2024-09-16 08:01:11 | 2024-09-16 08:25:18 | 0:24:07 | 0:12:46 | 0:11:21 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
pass | 7906930 | 2024-09-16 07:44:51 | 2024-09-16 08:01:12 | 2024-09-16 08:27:09 | 0:25:57 | 0:15:22 | 0:10:35 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
pass | 7906931 | 2024-09-16 07:44:52 | 2024-09-16 08:01:12 | 2024-09-16 08:33:07 | 0:31:55 | 0:21:31 | 0:10:24 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
pass | 7906932 | 2024-09-16 07:44:54 | 2024-09-16 08:01:22 | 2024-09-16 08:23:25 | 0:22:03 | 0:12:08 | 0:09:55 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7906933 | 2024-09-16 07:44:55 | 2024-09-16 08:02:03 | 2024-09-16 08:32:07 | 0:30:04 | 0:20:23 | 0:09:41 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_domain} | 2 | |
dead | 7906934 | 2024-09-16 07:44:57 | 2024-09-16 08:02:13 | 2024-09-16 08:22:09 | 0:19:56 | 0:07:40 | 0:12:16 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: ['ubuntu@smithi179.front.sepia.ceph.com: Permission denied (publickey).']
pass | 7906935 | 2024-09-16 07:44:58 | 2024-09-16 08:02:44 | 2024-09-16 08:43:20 | 0:40:36 | 0:29:18 | 0:11:18 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
pass | 7906936 | 2024-09-16 07:44:59 | 2024-09-16 08:04:24 | 2024-09-16 08:58:11 | 0:53:47 | 0:44:21 | 0:09:26 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
fail | 7906937 | 2024-09-16 07:45:01 | 2024-09-16 08:04:25 | 2024-09-16 08:40:30 | 0:36:05 | 0:26:12 | 0:09:53 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
Failure Reason: Command failed on smithi003 with status 5: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:8293d73f8690540e843a81caec373f9cc29cf705 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid eff20752-7404-11ef-bceb-c7b262605968 -- bash -c \'set -e\nset -x\nceph orch apply node-exporter\nceph orch apply grafana\nceph orch apply alertmanager\nceph orch apply prometheus\nsleep 240\nceph orch ls\nceph orch ps\nceph orch host ls\nMON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r \'"\'"\'last | .daemon_name\'"\'"\')\nGRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nPROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nGRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" \'"\'"\'.[] | select(.hostname==$GRAFANA_HOST) | .addr\'"\'"\')\nPROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" \'"\'"\'.[] | select(.hostname==$PROM_HOST) | .addr\'"\'"\')\nALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" \'"\'"\'.[] | select(.hostname==$ALERTM_HOST) | .addr\'"\'"\')\n# check each host node-exporter metrics endpoint is responsive\nALL_HOST_IPS=$(ceph orch host ls -f json | jq -r \'"\'"\'.[] | .addr\'"\'"\')\nfor ip in $ALL_HOST_IPS; do\n curl -s http://${ip}:9100/metric\ndone\n# check grafana endpoints are responsive and database health is okay\ncurl -k -s https://${GRAFANA_IP}:3000/api/health\ncurl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e \'"\'"\'.database == "ok"\'"\'"\'\n# stop mon daemon in order to trigger an alert\nceph orch daemon stop $MON_DAEMON\nsleep 120\n# check prometheus endpoints are responsive and mon down alert is firing\ncurl -s http://${PROM_IP}:9095/api/v1/status/config\ncurl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e \'"\'"\'.status == "success"\'"\'"\'\ncurl -s http://${PROM_IP}:9095/api/v1/alerts\ncurl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e \'"\'"\'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"\'"\'"\'\n# check alertmanager endpoints are responsive and mon down alert is active\ncurl -s http://${ALERTM_IP}:9093/api/v1/status\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e \'"\'"\'.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"\'"\'"\'\n\''
fail | 7906938 | 2024-09-16 07:45:02 | 2024-09-16 08:04:55 | 2024-09-16 09:22:54 | 1:17:59 | 1:07:12 | 0:10:47 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
pass | 7906939 | 2024-09-16 07:45:04 | 2024-09-16 08:05:56 | 2024-09-16 08:31:10 | 0:25:14 | 0:13:47 | 0:11:27 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
fail | 7906940 | 2024-09-16 07:45:05 | 2024-09-16 08:08:16 | 2024-09-16 09:03:43 | 0:55:27 | 0:43:26 | 0:12:01 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
Failure Reason: Command failed on smithi026 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c216db1e-7404-11ef-bceb-c7b262605968 -e sha1=8293d73f8690540e843a81caec373f9cc29cf705 -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\''
pass | 7906941 | 2024-09-16 07:45:07 | 2024-09-16 08:08:27 | 2024-09-16 08:44:43 | 0:36:16 | 0:24:45 | 0:11:31 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7906942 | 2024-09-16 07:45:08 | 2024-09-16 08:09:07 | 2024-09-16 08:32:01 | 0:22:54 | 0:12:29 | 0:10:25 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7906943 | 2024-09-16 07:45:09 | 2024-09-16 08:11:03 | 2024-09-16 08:32:26 | 0:21:23 | 0:10:14 | 0:11:09 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_basic} | 2 | |
fail | 7906944 | 2024-09-16 07:45:11 | 2024-09-16 08:11:04 | 2024-09-16 08:42:25 | 0:31:21 | 0:21:53 | 0:09:28 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason: "2024-09-16T08:39:07.860254+0000 mon.a (mon.0) 1426 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
fail | 7906945 | 2024-09-16 07:45:12 | 2024-09-16 08:11:24 | 2024-09-16 08:34:45 | 0:23:21 | 0:12:44 | 0:10:37 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason: Command failed on smithi002 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:8293d73f8690540e843a81caec373f9cc29cf705 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 67d05198-7405-11ef-bceb-c7b262605968 -- bash -c \'set -e\nset -x\nwhile true; do TOKEN=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\'); echo $TOKEN; if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi; sleep 5; done\nTOKENS=$(ceph rgw realm tokens)\necho $TOKENS | jq --exit-status \'"\'"\'.[0].realm == "myrealm1"\'"\'"\'\necho $TOKENS | jq --exit-status \'"\'"\'.[0].token\'"\'"\'\nTOKEN_JSON=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\' | base64 --decode)\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_name == "myrealm1"\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.endpoint | test("http://.+:\\\\d+")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.access_key\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.secret\'"\'"\'\n\''
pass | 7906946 | 2024-09-16 07:45:13 | 2024-09-16 08:11:45 | 2024-09-16 08:44:16 | 0:32:31 | 0:21:05 | 0:11:26 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
pass | 7906947 | 2024-09-16 07:45:15 | 2024-09-16 08:12:45 | 2024-09-16 08:36:19 | 0:23:34 | 0:13:22 | 0:10:12 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7906948 | 2024-09-16 07:45:16 | 2024-09-16 08:12:56 | 2024-09-16 09:12:35 | 0:59:39 | 0:48:28 | 0:11:11 | smithi | main | ubuntu | 22.04 | orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
pass | 7906949 | 2024-09-16 07:45:17 | 2024-09-16 08:12:56 | 2024-09-16 08:55:21 | 0:42:25 | 0:32:34 | 0:09:51 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
dead | 7906950 | 2024-09-16 07:45:19 | 2024-09-16 08:14:27 | 2024-09-16 16:23:28 | 8:09:01 | | | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: hit max job timeout
pass | 7906951 | 2024-09-16 07:45:20 | 2024-09-16 08:14:47 | 2024-09-16 08:34:36 | 0:19:49 | 0:10:30 | 0:09:19 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 | |
pass | 7906952 | 2024-09-16 07:45:21 | 2024-09-16 08:15:38 | 2024-09-16 09:16:08 | 1:00:30 | 0:49:38 | 0:10:52 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
pass | 7906953 | 2024-09-16 07:45:23 | 2024-09-16 08:16:38 | 2024-09-16 08:37:47 | 0:21:09 | 0:11:00 | 0:10:09 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_res_dom} | 2 | |
pass | 7906954 | 2024-09-16 07:45:24 | 2024-09-16 08:16:49 | 2024-09-16 08:50:40 | 0:33:51 | 0:23:13 | 0:10:38 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
pass | 7906955 | 2024-09-16 07:45:26 | 2024-09-16 08:17:39 | 2024-09-16 08:42:40 | 0:25:01 | 0:15:29 | 0:09:32 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
pass | 7906956 | 2024-09-16 07:45:27 | 2024-09-16 08:17:40 | 2024-09-16 08:44:44 | 0:27:04 | 0:14:56 | 0:12:08 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
pass | 7906957 | 2024-09-16 07:45:28 | 2024-09-16 08:19:30 | 2024-09-16 08:40:46 | 0:21:16 | 0:12:50 | 0:08:26 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_adoption} | 1 | |
pass | 7906958 | 2024-09-16 07:45:30 | 2024-09-16 08:19:31 | 2024-09-16 08:40:39 | 0:21:08 | 0:11:59 | 0:09:09 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} | 2 | |
pass | 7906959 | 2024-09-16 07:45:31 | 2024-09-16 08:19:41 | 2024-09-16 08:43:55 | 0:24:14 | 0:11:49 | 0:12:25 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} | 2 | |
pass | 7906960 | 2024-09-16 07:45:32 | 2024-09-16 08:22:22 | 2024-09-16 08:47:40 | 0:25:18 | 0:14:31 | 0:10:47 | smithi | main | centos | 9.stream | orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7906961 | 2024-09-16 07:45:34 | 2024-09-16 08:22:42 | 2024-09-16 08:52:45 | 0:30:03 | 0:20:06 | 0:09:57 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
pass | 7906962 | 2024-09-16 07:45:35 | 2024-09-16 08:23:03 | 2024-09-16 09:06:48 | 0:43:45 | 0:33:50 | 0:09:55 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
pass | 7906963 | 2024-09-16 07:45:37 | 2024-09-16 08:23:33 | 2024-09-16 08:45:05 | 0:21:32 | 0:10:14 | 0:11:18 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} | 2 | |
fail | 7906964 | 2024-09-16 07:45:38 | 2024-09-16 08:23:44 | 2024-09-16 09:28:31 | 1:04:47 | 0:52:04 | 0:12:43 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
pass | 7906965 | 2024-09-16 07:45:40 | 2024-09-16 08:25:14 | 2024-09-16 08:49:02 | 0:23:48 | 0:12:16 | 0:11:32 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
fail | 7906966 | 2024-09-16 07:45:41 | 2024-09-16 08:25:35 | 2024-09-16 09:10:29 | 0:44:54 | 0:32:39 | 0:12:15 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
Failure Reason: Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f3c58644-7407-11ef-bceb-c7b262605968 -e sha1=8293d73f8690540e843a81caec373f9cc29cf705 -- bash -c \'ceph versions | jq -e \'"\'"\'.mgr | length == 2\'"\'"\'\''
pass | 7906967 | 2024-09-16 07:45:42 | 2024-09-16 08:25:35 | 2024-09-16 08:53:15 | 0:27:40 | 0:18:33 | 0:09:07 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} | 2 | |
pass | 7906968 | 2024-09-16 07:45:44 | 2024-09-16 08:25:36 | 2024-09-16 09:12:57 | 0:47:21 | 0:35:53 | 0:11:28 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
pass | 7906969 | 2024-09-16 07:45:45 | 2024-09-16 08:26:56 | 2024-09-16 08:59:04 | 0:32:08 | 0:23:03 | 0:09:05 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
pass | 7906970 | 2024-09-16 07:45:47 | 2024-09-16 08:27:27 | 2024-09-16 08:51:21 | 0:23:54 | 0:13:08 | 0:10:46 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
pass | 7906971 | 2024-09-16 07:45:48 | 2024-09-16 08:28:17 | 2024-09-16 09:00:00 | 0:31:43 | 0:22:09 | 0:09:34 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7906972 | 2024-09-16 07:45:49 | 2024-09-16 08:28:18 | 2024-09-16 09:54:36 | 1:26:18 | 1:14:33 | 0:11:45 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
pass | 7906973 | 2024-09-16 07:45:51 | 2024-09-16 08:28:38 | 2024-09-16 09:03:13 | 0:34:35 | 0:23:46 | 0:10:49 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
pass | 7906974 | 2024-09-16 07:45:52 | 2024-09-16 08:29:29 | 2024-09-16 08:49:02 | 0:19:33 | 0:10:02 | 0:09:31 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_basic} | 2 | |
pass | 7906975 | 2024-09-16 07:45:54 | 2024-09-16 08:30:09 | 2024-09-16 09:05:53 | 0:35:44 | 0:25:44 | 0:10:00 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rotate-keys} | 2 | |
pass | 7906976 | 2024-09-16 07:45:55 | 2024-09-16 08:30:09 | 2024-09-16 08:44:38 | 0:14:29 | 0:05:39 | 0:08:50 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
pass | 7906977 | 2024-09-16 07:45:56 | 2024-09-16 08:30:20 | 2024-09-16 09:14:21 | 0:44:01 | 0:33:30 | 0:10:31 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7906978 | 2024-09-16 07:45:58 | 2024-09-16 08:31:30 | 2024-09-16 08:55:01 | 0:23:31 | 0:12:45 | 0:10:46 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} | 1 | |
pass | 7906979 | 2024-09-16 07:45:59 | 2024-09-16 08:32:11 | 2024-09-16 08:53:34 | 0:21:23 | 0:11:34 | 0:09:49 | smithi | main | centos | 9.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/on orchestrator_cli} | 2 | |
pass | 7906980 | 2024-09-16 07:46:00 | 2024-09-16 08:32:21 | 2024-09-16 08:58:00 | 0:25:39 | 0:15:43 | 0:09:56 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
pass | 7906981 | 2024-09-16 07:46:02 | 2024-09-16 08:32:42 | 2024-09-16 08:53:33 | 0:20:51 | 0:11:50 | 0:09:01 | smithi | main | centos | 9.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7906982 | 2024-09-16 07:46:03 | 2024-09-16 08:32:42 | 2024-09-16 08:57:14 | 0:24:32 | 0:13:42 | 0:10:50 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 | |
pass | 7906983 | 2024-09-16 07:46:04 | 2024-09-16 08:34:12 | 2024-09-16 09:23:24 | 0:49:12 | 0:40:06 | 0:09:06 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
pass | 7906984 | 2024-09-16 07:46:06 | 2024-09-16 08:34:13 | 2024-09-16 09:01:24 | 0:27:11 | 0:17:10 | 0:10:01 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 | |
pass | 7906985 | 2024-09-16 07:46:07 | 2024-09-16 08:34:23 | 2024-09-16 08:59:52 | 0:25:29 | 0:15:11 | 0:10:18 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 7906986 | 2024-09-16 07:46:09 | 2024-09-16 08:34:54 | 2024-09-16 09:02:06 | 0:27:12 | 0:17:28 | 0:09:44 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_ctdb_res_basic} | 4 | |
pass | 7906987 | 2024-09-16 07:46:10 | 2024-09-16 08:35:04 | 2024-09-16 09:00:58 | 0:25:54 | 0:14:02 | 0:11:52 | smithi | main | centos | 9.stream | orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
pass | 7906988 | 2024-09-16 07:46:11 | 2024-09-16 08:35:15 | 2024-09-16 09:13:07 | 0:37:52 | 0:28:23 | 0:09:29 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
pass | 7906989 | 2024-09-16 07:46:13 | 2024-09-16 08:36:05 | 2024-09-16 09:20:15 | 0:44:10 | 0:34:00 | 0:10:10 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
pass | 7906990 | 2024-09-16 07:46:14 | 2024-09-16 08:36:36 | 2024-09-16 09:00:18 | 0:23:42 | 0:12:44 | 0:10:58 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} | 2 | |
pass | 7906991 | 2024-09-16 07:46:15 | 2024-09-16 08:36:56 | 2024-09-16 09:09:31 | 0:32:35 | 0:21:50 | 0:10:45 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
fail | 7906992 | 2024-09-16 07:46:17 | 2024-09-16 08:37:57 | 2024-09-16 09:27:56 | 0:49:59 | 0:39:28 | 0:10:31 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
pass | 7906993 | 2024-09-16 07:46:18 | 2024-09-16 08:38:37 | 2024-09-16 09:03:27 | 0:24:50 | 0:13:42 | 0:11:08 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
fail | 7906994 | 2024-09-16 07:46:20 | 2024-09-16 08:40:38 | 2024-09-16 09:33:44 | 0:53:06 | 0:42:34 | 0:10:32 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi003 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1ff68320-7409-11ef-bceb-c7b262605968 -e sha1=8293d73f8690540e843a81caec373f9cc29cf705 -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\''
pass | 7906995 | 2024-09-16 07:46:21 | 2024-09-16 08:40:49 | 2024-09-16 09:26:12 | 0:45:23 | 0:33:43 | 0:11:40 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_ctdb_res_dom} | 4 | |
pass | 7906996 | 2024-09-16 07:46:22 | 2024-09-16 08:42:39 | 2024-09-16 09:30:15 | 0:47:36 | 0:37:51 | 0:09:45 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
pass | 7906997 | 2024-09-16 07:46:24 | 2024-09-16 08:42:50 | 2024-09-16 09:21:06 | 0:38:16 | 0:27:42 | 0:10:34 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} | 3 | |
pass | 7906998 | 2024-09-16 07:46:25 | 2024-09-16 08:43:30 | 2024-09-16 09:11:48 | 0:28:18 | 0:17:37 | 0:10:41 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} | 1 | |
pass | 7906999 | 2024-09-16 07:46:26 | 2024-09-16 08:44:11 | 2024-09-16 09:07:35 | 0:23:24 | 0:12:40 | 0:10:44 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7907000 | 2024-09-16 07:46:28 | 2024-09-16 08:44:31 | 2024-09-16 09:09:46 | 0:25:15 | 0:15:38 | 0:09:37 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
pass | 7907001 | 2024-09-16 07:46:29 | 2024-09-16 08:44:51 | 2024-09-16 09:46:47 | 1:01:56 | 0:50:34 | 0:11:22 | smithi | main | ubuntu | 22.04 | orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
fail | 7907002 | 2024-09-16 07:46:31 | 2024-09-16 08:45:02 | 2024-09-16 09:14:42 | 0:29:40 | 0:19:58 | 0:09:42 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_ctdb_res_ips} | 4 | |
Failure Reason: SELinux denials found on ubuntu@smithi007.front.sepia.ceph.com: ['type=AVC msg=audit(1726477874.920:10100): avc: denied { nlmsg_read } for pid=56728 comm="ss" scontext=system_u:system_r:container_t:s0:c609,c991 tcontext=system_u:system_r:container_t:s0:c609,c991 tclass=netlink_tcpdiag_socket permissive=1']
pass | 7907003 | 2024-09-16 07:46:32 | 2024-09-16 08:45:02 | 2024-09-16 09:22:46 | 0:37:44 | 0:28:11 | 0:09:33 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |
pass | 7907004 | 2024-09-16 07:46:33 | 2024-09-16 08:45:23 | 2024-09-16 09:08:38 | 0:23:15 | 0:13:47 | 0:09:28 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
pass | 7907005 | 2024-09-16 07:46:35 | 2024-09-16 08:45:23 | 2024-09-16 09:23:32 | 0:38:09 | 0:26:27 | 0:11:42 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
fail | 7907006 | 2024-09-16 07:46:36 | 2024-09-16 08:47:54 | 2024-09-16 09:30:14 | 0:42:20 | 0:32:20 | 0:10:00 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: "2024-09-16T09:20:00.000239+0000 mon.smithi119 (mon.0) 513 : cluster [WRN] pg 2.4 is active+undersized+degraded, acting [1,0]" in cluster log
pass | 7907007 | 2024-09-16 07:46:37 | 2024-09-16 08:49:14 | 2024-09-16 09:16:47 | 0:27:33 | 0:18:11 | 0:09:22 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
pass | 7907008 | 2024-09-16 07:46:39 | 2024-09-16 08:49:25 | 2024-09-16 09:14:14 | 0:24:49 | 0:13:25 | 0:11:24 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 | |
pass | 7907009 | 2024-09-16 07:46:40 | 2024-09-16 08:51:06 | 2024-09-16 09:52:34 | 1:01:28 | 0:50:02 | 0:11:26 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
pass | 7907010 | 2024-09-16 07:46:42 | 2024-09-16 08:52:56 | 2024-09-16 09:23:17 | 0:30:21 | 0:19:26 | 0:10:55 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7907011 | 2024-09-16 07:46:43 | 2024-09-16 08:53:27 | 2024-09-16 09:15:00 | 0:21:33 | 0:10:50 | 0:10:43 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_domain} | 2 | |
pass | 7907012 | 2024-09-16 07:46:44 | 2024-09-16 08:53:47 | 2024-09-16 09:29:26 | 0:35:39 | 0:26:07 | 0:09:32 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7907013 | 2024-09-16 07:46:46 | 2024-09-16 08:53:48 | 2024-09-16 09:38:38 | 0:44:50 | 0:34:05 | 0:10:45 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
fail | 7907014 | 2024-09-16 07:46:47 | 2024-09-16 08:55:38 | 2024-09-16 09:23:49 | 0:28:11 | 0:18:15 | 0:09:56 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} | 3 | |
Failure Reason: Command failed on smithi083 with status 5: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:8293d73f8690540e843a81caec373f9cc29cf705 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid b9b10f10-740b-11ef-bceb-c7b262605968 -- bash -c \'set -e\nset -x\nceph orch apply node-exporter\nceph orch apply grafana\nceph orch apply alertmanager\nceph orch apply prometheus\nsleep 240\nceph orch ls\nceph orch ps\nceph orch host ls\nMON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r \'"\'"\'last | .daemon_name\'"\'"\')\nGRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nPROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nGRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" \'"\'"\'.[] | select(.hostname==$GRAFANA_HOST) | .addr\'"\'"\')\nPROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" \'"\'"\'.[] | select(.hostname==$PROM_HOST) | .addr\'"\'"\')\nALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" \'"\'"\'.[] | select(.hostname==$ALERTM_HOST) | .addr\'"\'"\')\n# check each host node-exporter metrics endpoint is responsive\nALL_HOST_IPS=$(ceph orch host ls -f json | jq -r \'"\'"\'.[] | .addr\'"\'"\')\nfor ip in $ALL_HOST_IPS; do\n curl -s http://${ip}:9100/metric\ndone\n# check grafana endpoints are responsive and database health is okay\ncurl -k -s https://${GRAFANA_IP}:3000/api/health\ncurl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e \'"\'"\'.database == "ok"\'"\'"\'\n# stop mon daemon in order to trigger an alert\nceph orch daemon stop $MON_DAEMON\nsleep 120\n# check prometheus endpoints are responsive and mon down alert is firing\ncurl -s http://${PROM_IP}:9095/api/v1/status/config\ncurl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e \'"\'"\'.status == "success"\'"\'"\'\ncurl -s http://${PROM_IP}:9095/api/v1/alerts\ncurl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e \'"\'"\'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"\'"\'"\'\n# check alertmanager endpoints are responsive and mon down alert is active\ncurl -s http://${ALERTM_IP}:9093/api/v1/status\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e \'"\'"\'.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"\'"\'"\'\n\''
pass | 7907015 | 2024-09-16 07:46:48 | 2024-09-16 08:56:49 | 2024-09-16 09:19:45 | 0:22:56 | 0:13:37 | 0:09:19 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
pass | 7907016 | 2024-09-16 07:46:50 | 2024-09-16 08:56:49 | 2024-09-16 09:50:04 | 0:53:15 | 0:40:46 | 0:12:29 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7907017 | 2024-09-16 07:46:51 | 2024-09-16 08:58:10 | 2024-09-16 09:27:55 | 0:29:45 | 0:20:35 | 0:09:10 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7907018 | 2024-09-16 07:46:53 | 2024-09-16 08:58:20 | 2024-09-16 09:27:04 | 0:28:44 | 0:18:38 | 0:10:06 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_res_basic} | 2 | |
pass | 7907019 | 2024-09-16 07:46:54 | 2024-09-16 08:59:21 | 2024-09-16 09:41:43 | 0:42:22 | 0:33:07 | 0:09:15 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
dead | 7907020 | 2024-09-16 07:46:55 | 2024-09-16 09:00:11 | 2024-09-16 17:10:28 | 8:10:17 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} | 3 | |||
Failure Reason: hit max job timeout
fail | 7907021 | 2024-09-16 07:46:57 | 2024-09-16 09:00:32 | 2024-09-16 09:48:58 | 0:48:26 | 0:37:10 | 0:11:16 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
pass | 7907022 | 2024-09-16 07:46:58 | 2024-09-16 09:01:12 | 2024-09-16 09:24:33 | 0:23:21 | 0:13:01 | 0:10:20 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
fail | 7907023 | 2024-09-16 07:46:59 | 2024-09-16 09:01:33 | 2024-09-16 10:15:22 | 1:13:49 | 1:04:19 | 0:09:30 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi017 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a843926a-740c-11ef-bceb-c7b262605968 -e sha1=8293d73f8690540e843a81caec373f9cc29cf705 -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\''
pass | 7907024 | 2024-09-16 07:47:01 | 2024-09-16 09:01:43 | 2024-09-16 09:24:33 | 0:22:50 | 0:13:34 | 0:09:16 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7907025 | 2024-09-16 07:47:02 | 2024-09-16 09:01:43 | 2024-09-16 10:06:28 | 1:04:45 | 0:53:00 | 0:11:45 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 | |
dead | 7907026 | 2024-09-16 07:47:03 | 2024-09-16 09:03:34 | 2024-09-16 17:11:34 | 8:08:00 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |||
Failure Reason: hit max job timeout
pass | 7907027 | 2024-09-16 07:47:05 | 2024-09-16 09:03:45 | 2024-09-16 09:25:13 | 0:21:28 | 0:10:53 | 0:10:35 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_dom} | 2 | |
pass | 7907028 | 2024-09-16 07:47:06 | 2024-09-16 09:03:55 | 2024-09-16 09:39:45 | 0:35:50 | 0:23:18 | 0:12:32 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | |
pass | 7907029 | 2024-09-16 07:47:07 | 2024-09-16 09:32:19 | 874 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_set_mon_crush_locations} | 3 | ||||
pass | 7907030 | 2024-09-16 07:47:09 | 2024-09-16 09:07:06 | 2024-09-16 09:39:17 | 0:32:11 | 0:21:34 | 0:10:37 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 |