User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
gabrioux | 2024-09-18 06:59:44 | 2024-09-18 07:01:31 | 2024-09-18 16:27:19 | 9:25:48 | orch:cephadm | wip-guits-main-2024-09-17-1213 | smithi | e7fb7b5 | 95 | 19 | 4 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7910281 | 2024-09-18 06:59:49 | 2024-09-18 07:01:31 | 2024-09-18 07:32:22 | 0:30:51 | 0:21:09 | 0:09:42 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
pass | 7910282 | 2024-09-18 06:59:50 | 2024-09-18 07:01:31 | 2024-09-18 07:27:46 | 0:26:15 | 0:14:50 | 0:11:25 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} | 3 | |
pass | 7910283 | 2024-09-18 06:59:51 | 2024-09-18 07:02:22 | 2024-09-18 07:46:54 | 0:44:32 | 0:32:08 | 0:12:24 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7910284 | 2024-09-18 06:59:53 | 2024-09-18 07:03:52 | 2024-09-18 07:48:13 | 0:44:21 | 0:32:39 | 0:11:42 | smithi | main | centos | 9.stream | orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
pass | 7910285 | 2024-09-18 06:59:54 | 2024-09-18 07:05:13 | 2024-09-18 08:18:56 | 1:13:43 | 1:03:08 | 0:10:35 | smithi | main | ubuntu | 22.04 | orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignore_mgr_down ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} | 1 | |
pass | 7910286 | 2024-09-18 06:59:55 | 2024-09-18 07:05:13 | 2024-09-18 07:28:28 | 0:23:15 | 0:13:26 | 0:09:49 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} | 1 | |
pass | 7910287 | 2024-09-18 06:59:57 | 2024-09-18 07:05:23 | 2024-09-18 07:32:17 | 0:26:54 | 0:14:57 | 0:11:57 | smithi | main | ubuntu | 22.04 | orch:cephadm/orchestrator_cli/{0-random-distro$/{ubuntu_22.04} 2-node-mgr agent/off orchestrator_cli} | 2 | |
pass | 7910288 | 2024-09-18 06:59:58 | 2024-09-18 07:06:14 | 2024-09-18 07:36:05 | 0:29:51 | 0:20:25 | 0:09:26 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7910289 | 2024-09-18 06:59:59 | 2024-09-18 07:06:34 | 2024-09-18 07:40:33 | 0:33:59 | 0:23:19 | 0:10:40 | smithi | main | centos | 9.stream | orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} | 3 | |
pass | 7910290 | 2024-09-18 07:00:00 | 2024-09-18 07:07:05 | 2024-09-18 07:32:05 | 0:25:00 | 0:14:55 | 0:10:05 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
fail | 7910291 | 2024-09-18 07:00:02 | 2024-09-18 07:07:05 | 2024-09-18 07:28:00 | 0:20:55 | 0:10:26 | 0:10:29 | smithi | main | centos | 9.stream | orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/basic 3-final} | 1 | |
Failure Reason: "2024-09-18T07:22:49.846509+0000 mon.smithi143 (mon.0) 253 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
pass | 7910292 | 2024-09-18 07:00:03 | 2024-09-18 07:07:15 | 2024-09-18 07:28:54 | 0:21:39 | 0:10:40 | 0:10:59 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 3 | |
pass | 7910293 | 2024-09-18 07:00:04 | 2024-09-18 07:07:16 | 2024-09-18 08:16:44 | 1:09:28 | 0:57:48 | 0:11:40 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
pass | 7910294 | 2024-09-18 07:00:06 | 2024-09-18 07:09:36 | 2024-09-18 07:52:59 | 0:43:23 | 0:32:42 | 0:10:41 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
fail | 7910295 | 2024-09-18 07:00:07 | 2024-09-18 07:10:07 | 2024-09-18 07:36:06 | 0:25:59 | 0:15:50 | 0:10:09 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_ctdb_res_ips} | 4 | |
Failure Reason: SELinux denials found on ubuntu@smithi079.front.sepia.ceph.com: ['type=AVC msg=audit(1726644757.181:10851): avc: denied { nlmsg_read } for pid=60720 comm="ss" scontext=system_u:system_r:container_t:s0:c332,c768 tcontext=system_u:system_r:container_t:s0:c332,c768 tclass=netlink_tcpdiag_socket permissive=1']
pass | 7910296 | 2024-09-18 07:00:08 | 2024-09-18 07:11:07 | 2024-09-18 07:45:23 | 0:34:16 | 0:23:20 | 0:10:56 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 | |
pass | 7910297 | 2024-09-18 07:00:10 | 2024-09-18 07:12:18 | 2024-09-18 07:35:27 | 0:23:09 | 0:13:06 | 0:10:03 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
pass | 7910298 | 2024-09-18 07:00:11 | 2024-09-18 07:12:38 | 2024-09-18 07:39:14 | 0:26:36 | 0:15:15 | 0:11:21 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
pass | 7910299 | 2024-09-18 07:00:12 | 2024-09-18 07:12:49 | 2024-09-18 07:46:11 | 0:33:22 | 0:22:00 | 0:11:22 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
pass | 7910300 | 2024-09-18 07:00:14 | 2024-09-18 07:14:39 | 2024-09-18 07:36:09 | 0:21:30 | 0:11:58 | 0:09:32 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7910301 | 2024-09-18 07:00:15 | 2024-09-18 07:15:20 | 2024-09-18 07:45:28 | 0:30:08 | 0:19:38 | 0:10:30 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_domain} | 2 | |
pass | 7910302 | 2024-09-18 07:00:16 | 2024-09-18 07:15:20 | 2024-09-18 07:51:55 | 0:36:35 | 0:26:27 | 0:10:08 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
pass | 7910303 | 2024-09-18 07:00:18 | 2024-09-18 07:16:11 | 2024-09-18 07:55:23 | 0:39:12 | 0:29:48 | 0:09:24 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} | 2 | |
pass | 7910304 | 2024-09-18 07:00:19 | 2024-09-18 07:16:31 | 2024-09-18 08:10:19 | 0:53:48 | 0:44:44 | 0:09:04 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
fail | 7910305 | 2024-09-18 07:00:20 | 2024-09-18 07:16:51 | 2024-09-18 07:52:34 | 0:35:43 | 0:25:54 | 0:09:49 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
Failure Reason:
Command failed on smithi017 with status 5: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 86590948-7590-11ef-bceb-c7b262605968 -- bash -c \'set -e\nset -x\nceph orch apply node-exporter\nceph orch apply grafana\nceph orch apply alertmanager\nceph orch apply prometheus\nsleep 240\nceph orch ls\nceph orch ps\nceph orch host ls\nMON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r \'"\'"\'last | .daemon_name\'"\'"\')\nGRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nPROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nGRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" \'"\'"\'.[] | select(.hostname==$GRAFANA_HOST) | .addr\'"\'"\')\nPROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" \'"\'"\'.[] | select(.hostname==$PROM_HOST) | .addr\'"\'"\')\nALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" \'"\'"\'.[] | select(.hostname==$ALERTM_HOST) | .addr\'"\'"\')\n# check each host node-exporter metrics endpoint is responsive\nALL_HOST_IPS=$(ceph orch host ls -f json | jq -r \'"\'"\'.[] | .addr\'"\'"\')\nfor ip in $ALL_HOST_IPS; do\n curl -s http://${ip}:9100/metric\ndone\n# check grafana endpoints are responsive and database health is okay\ncurl -k -s https://${GRAFANA_IP}:3000/api/health\ncurl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e \'"\'"\'.database == "ok"\'"\'"\'\n# stop mon daemon in order to trigger an alert\nceph orch daemon stop $MON_DAEMON\nsleep 120\n# check prometheus endpoints are responsive and mon down alert is firing\ncurl -s 
http://${PROM_IP}:9095/api/v1/status/config\ncurl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e \'"\'"\'.status == "success"\'"\'"\'\ncurl -s http://${PROM_IP}:9095/api/v1/alerts\ncurl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e \'"\'"\'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"\'"\'"\'\n# check alertmanager endpoints are responsive and mon down alert is active\ncurl -s http://${ALERTM_IP}:9093/api/v1/status\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e \'"\'"\'.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"\'"\'"\'\n\'' |
fail | 7910306 | 2024-09-18 07:00:21 | 2024-09-18 07:17:02 | 2024-09-18 08:18:34 | 1:01:32 | 0:52:33 | 0:08:59 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
pass | 7910307 | 2024-09-18 07:00:23 | 2024-09-18 07:17:32 | 2024-09-18 07:41:18 | 0:23:46 | 0:14:46 | 0:09:00 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
fail | 7910308 | 2024-09-18 07:00:24 | 2024-09-18 07:18:33 | 2024-09-18 08:11:37 | 0:53:04 | 0:42:47 | 0:10:17 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
Failure Reason: Command failed on smithi121 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f377a918-758f-11ef-bceb-c7b262605968 -e sha1=e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\''
pass | 7910309 | 2024-09-18 07:00:25 | 2024-09-18 07:18:43 | 2024-09-18 07:56:10 | 0:37:27 | 0:24:06 | 0:13:21 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7910310 | 2024-09-18 07:00:27 | 2024-09-18 07:22:55 | 2024-09-18 07:47:15 | 0:24:20 | 0:12:42 | 0:11:38 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7910311 | 2024-09-18 07:00:28 | 2024-09-18 07:24:15 | 2024-09-18 07:46:07 | 0:21:52 | 0:10:49 | 0:11:03 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_basic} | 2 | |
pass | 7910312 | 2024-09-18 07:00:29 | 2024-09-18 07:24:25 | 2024-09-18 07:56:07 | 0:31:42 | 0:22:19 | 0:09:23 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} | 2 | |
fail | 7910313 | 2024-09-18 07:00:31 | 2024-09-18 07:25:26 | 2024-09-18 07:50:50 | 0:25:24 | 0:13:46 | 0:11:38 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason: Command failed on smithi063 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 783c356e-7591-11ef-bceb-c7b262605968 -- bash -c \'set -e\nset -x\nwhile true; do TOKEN=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\'); echo $TOKEN; if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi; sleep 5; done\nTOKENS=$(ceph rgw realm tokens)\necho $TOKENS | jq --exit-status \'"\'"\'.[0].realm == "myrealm1"\'"\'"\'\necho $TOKENS | jq --exit-status \'"\'"\'.[0].token\'"\'"\'\nTOKEN_JSON=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\' | base64 --decode)\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_name == "myrealm1"\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.endpoint | test("http://.+:\\\\d+")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.access_key\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.secret\'"\'"\'\n\''
pass | 7910314 | 2024-09-18 07:00:32 | 2024-09-18 07:25:26 | 2024-09-18 07:57:57 | 0:32:31 | 0:20:52 | 0:11:39 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
pass | 7910315 | 2024-09-18 07:00:33 | 2024-09-18 07:25:37 | 2024-09-18 07:48:34 | 0:22:57 | 0:13:43 | 0:09:14 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7910316 | 2024-09-18 07:00:35 | 2024-09-18 07:25:37 | 2024-09-18 08:24:07 | 0:58:30 | 0:47:02 | 0:11:28 | smithi | main | ubuntu | 22.04 | orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
fail | 7910317 | 2024-09-18 07:00:36 | 2024-09-18 07:26:18 | 2024-09-18 08:11:19 | 0:45:01 | 0:34:19 | 0:10:42 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: "2024-09-18T08:00:00.000191+0000 mon.smithi103 (mon.0) 363 : cluster [WRN] osd.2 (root=default,host=smithi103) is down" in cluster log
dead | 7910318 | 2024-09-18 07:00:37 | 2024-09-18 07:27:58 | 2024-09-18 15:36:56 | 8:08:58 | | | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: hit max job timeout
pass | 7910319 | 2024-09-18 07:00:38 | 2024-09-18 07:27:58 | 2024-09-18 07:51:12 | 0:23:14 | 0:12:11 | 0:11:03 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 | |
fail | 7910320 | 2024-09-18 07:00:40 | 2024-09-18 07:27:59 | 2024-09-18 08:13:36 | 0:45:37 | 0:33:32 | 0:12:05 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi042 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0abf29e6-7592-11ef-bceb-c7b262605968 -e sha1=e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 7910321 | 2024-09-18 07:00:41 | 2024-09-18 07:27:59 | 2024-09-18 07:48:55 | 0:20:56 | 0:12:16 | 0:08:40 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_res_dom} | 2 | |
pass | 7910322 | 2024-09-18 07:00:42 | 2024-09-18 07:28:10 | 2024-09-18 08:01:06 | 0:32:56 | 0:24:36 | 0:08:20 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
pass | 7910323 | 2024-09-18 07:00:44 | 2024-09-18 07:28:10 | 2024-09-18 07:53:44 | 0:25:34 | 0:15:11 | 0:10:23 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
pass | 7910324 | 2024-09-18 07:00:45 | 2024-09-18 07:28:30 | 2024-09-18 07:56:36 | 0:28:06 | 0:16:15 | 0:11:51 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
pass | 7910325 | 2024-09-18 07:00:46 | 2024-09-18 07:29:11 | 2024-09-18 07:50:51 | 0:21:40 | 0:12:57 | 0:08:43 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_adoption} | 1 | |
fail | 7910326 | 2024-09-18 07:00:47 | 2024-09-18 07:29:21 | 2024-09-18 07:55:17 | 0:25:56 | 0:14:14 | 0:11:42 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} | 2 | |
Failure Reason: reached maximum tries (120) after waiting for 120 seconds
pass | 7910327 | 2024-09-18 07:00:49 | 2024-09-18 07:32:22 | 2024-09-18 07:53:30 | 0:21:08 | 0:11:56 | 0:09:12 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} | 2 | |
pass | 7910328 | 2024-09-18 07:00:50 | 2024-09-18 07:32:32 | 2024-09-18 07:58:01 | 0:25:29 | 0:14:31 | 0:10:58 | smithi | main | centos | 9.stream | orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7910329 | 2024-09-18 07:00:51 | 2024-09-18 07:32:33 | 2024-09-18 08:04:53 | 0:32:20 | 0:19:29 | 0:12:51 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
pass | 7910330 | 2024-09-18 07:00:53 | 2024-09-18 07:35:13 | 2024-09-18 08:18:14 | 0:43:01 | 0:33:58 | 0:09:03 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
pass | 7910331 | 2024-09-18 07:00:54 | 2024-09-18 07:35:14 | 2024-09-18 07:57:18 | 0:22:04 | 0:10:43 | 0:11:21 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} | 2 | |
fail | 7910332 | 2024-09-18 07:00:55 | 2024-09-18 07:36:14 | 2024-09-18 08:37:22 | 1:01:08 | 0:51:21 | 0:09:47 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
pass | 7910333 | 2024-09-18 07:00:57 | 2024-09-18 07:36:25 | 2024-09-18 07:59:25 | 0:23:00 | 0:12:13 | 0:10:47 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
pass | 7910334 | 2024-09-18 07:00:58 | 2024-09-18 07:36:25 | 2024-09-18 08:53:55 | 1:17:30 | 1:09:11 | 0:08:19 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
pass | 7910335 | 2024-09-18 07:00:59 | 2024-09-18 07:36:26 | 2024-09-18 08:08:41 | 0:32:15 | 0:19:15 | 0:13:00 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} | 2 | |
pass | 7910336 | 2024-09-18 07:01:01 | 2024-09-18 07:38:56 | 2024-09-18 08:21:01 | 0:42:05 | 0:32:22 | 0:09:43 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
pass | 7910337 | 2024-09-18 07:01:02 | 2024-09-18 07:39:27 | 2024-09-18 08:12:37 | 0:33:10 | 0:20:49 | 0:12:21 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm} | 1 | |
pass | 7910338 | 2024-09-18 07:01:03 | 2024-09-18 07:39:27 | 2024-09-18 08:02:46 | 0:23:19 | 0:13:16 | 0:10:03 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
pass | 7910339 | 2024-09-18 07:01:05 | 2024-09-18 07:40:28 | 2024-09-18 08:10:55 | 0:30:27 | 0:19:06 | 0:11:21 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7910340 | 2024-09-18 07:01:06 | 2024-09-18 07:40:48 | 2024-09-18 08:46:12 | 1:05:24 | 0:56:01 | 0:09:23 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
pass | 7910341 | 2024-09-18 07:01:07 | 2024-09-18 07:41:28 | 2024-09-18 08:17:49 | 0:36:21 | 0:23:04 | 0:13:17 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
pass | 7910342 | 2024-09-18 07:01:09 | 2024-09-18 07:44:09 | 2024-09-18 08:05:12 | 0:21:03 | 0:09:57 | 0:11:06 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_basic} | 2 | |
pass | 7910343 | 2024-09-18 07:01:10 | 2024-09-18 07:45:40 | 2024-09-18 08:18:54 | 0:33:14 | 0:23:14 | 0:10:00 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rotate-keys} | 2 | |
pass | 7910344 | 2024-09-18 07:01:12 | 2024-09-18 07:45:40 | 2024-09-18 08:00:53 | 0:15:13 | 0:05:37 | 0:09:36 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} | 1 | |
fail | 7910345 | 2024-09-18 07:01:13 | 2024-09-18 07:45:50 | 2024-09-18 08:21:36 | 0:35:46 | 0:22:25 | 0:13:21 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Command failed on smithi003 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v18.2.1 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 7c02829a-7594-11ef-bceb-c7b262605968 -e sha1=e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
pass | 7910346 | 2024-09-18 07:01:14 | 2024-09-18 07:46:31 | 2024-09-18 08:10:08 | 0:23:37 | 0:13:13 | 0:10:24 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} | 1 | |
pass | 7910347 | 2024-09-18 07:01:16 | 2024-09-18 07:46:31 | 2024-09-18 08:07:44 | 0:21:13 | 0:11:24 | 0:09:49 | smithi | main | centos | 9.stream | orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream} 2-node-mgr agent/on orchestrator_cli} | 2 | |
pass | 7910348 | 2024-09-18 07:01:17 | 2024-09-18 07:46:42 | 2024-09-18 08:10:12 | 0:23:30 | 0:13:06 | 0:10:24 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
pass | 7910349 | 2024-09-18 07:01:18 | 2024-09-18 07:47:12 | 2024-09-18 08:16:34 | 0:29:22 | 0:20:43 | 0:08:39 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-singlehost/{0-random-distro$/{ubuntu_22.04} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7910350 | 2024-09-18 07:01:20 | 2024-09-18 07:47:12 | 2024-09-18 08:09:23 | 0:22:11 | 0:11:10 | 0:11:01 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 | |
pass | 7910351 | 2024-09-18 07:01:21 | 2024-09-18 07:48:23 | 2024-09-18 08:31:43 | 0:43:20 | 0:33:21 | 0:09:59 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
pass | 7910352 | 2024-09-18 07:01:22 | 2024-09-18 07:48:43 | 2024-09-18 08:12:03 | 0:23:20 | 0:14:00 | 0:09:20 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 | |
pass | 7910353 | 2024-09-18 07:01:24 | 2024-09-18 07:49:14 | 2024-09-18 08:14:01 | 0:24:47 | 0:12:43 | 0:12:04 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 7910354 | 2024-09-18 07:01:25 | 2024-09-18 07:51:04 | 2024-09-18 08:16:35 | 0:25:31 | 0:15:13 | 0:10:18 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_ctdb_res_basic} | 4 | |
pass | 7910355 | 2024-09-18 07:01:26 | 2024-09-18 07:51:25 | 2024-09-18 08:14:47 | 0:23:22 | 0:14:20 | 0:09:02 | smithi | main | centos | 9.stream | orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
pass | 7910356 | 2024-09-18 07:01:28 | 2024-09-18 07:51:25 | 2024-09-18 08:31:15 | 0:39:50 | 0:29:49 | 0:10:01 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
pass | 7910357 | 2024-09-18 07:01:29 | 2024-09-18 07:52:16 | 2024-09-18 08:35:42 | 0:43:26 | 0:34:14 | 0:09:12 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} | 2 | |
pass | 7910358 | 2024-09-18 07:01:30 | 2024-09-18 07:52:56 | 2024-09-18 08:16:24 | 0:23:28 | 0:12:44 | 0:10:44 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} | 2 | |
pass | 7910359 | 2024-09-18 07:01:32 | 2024-09-18 07:53:17 | 2024-09-18 08:25:22 | 0:32:05 | 0:22:07 | 0:09:58 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
fail | 7910360 | 2024-09-18 07:01:33 | 2024-09-18 07:53:47 | 2024-09-18 08:40:59 | 0:47:12 | 0:36:56 | 0:10:16 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (50) after waiting for 300 seconds
pass | 7910361 | 2024-09-18 07:01:34 | 2024-09-18 07:53:57 | 2024-09-18 08:17:11 | 0:23:14 | 0:13:28 | 0:09:46 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
fail | 7910362 | 2024-09-18 07:01:36 | 2024-09-18 07:53:58 | 2024-09-18 08:48:40 | 0:54:42 | 0:42:52 | 0:11:50 | smithi | main | centos | 9.stream | orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason: Command failed on smithi047 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 15eeb70c-7595-11ef-bceb-c7b262605968 -e sha1=e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\''
fail | 7910363 | 2024-09-18 07:01:37 | 2024-09-18 07:55:28 | 2024-09-18 08:09:49 | 0:14:21 | | | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_ctdb_res_dom} | 4 | |
Failure Reason: failed to install new kernel version within timeout
fail | 7910364 | 2024-09-18 07:01:38 | 2024-09-18 07:56:19 | 2024-09-18 08:04:03 | 0:07:44 | | | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason: Command failed on smithi192 with status 100: 'sudo apt-get clean'
pass | 7910365 | 2024-09-18 07:01:40 | 2024-09-18 07:56:19 | 2024-09-18 08:30:04 | 0:33:45 | 0:23:49 | 0:09:56 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_host_drain} | 3 | |
pass | 7910366 | 2024-09-18 07:01:41 | 2024-09-18 07:56:20 | 2024-09-18 08:18:37 | 0:22:17 | 0:14:01 | 0:08:16 | smithi | main | centos | 9.stream | orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} | 1 | |
pass | 7910367 | 2024-09-18 07:01:42 | 2024-09-18 07:56:20 | 2024-09-18 08:19:33 | 0:23:13 | 0:13:15 | 0:09:58 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7910368 | 2024-09-18 07:01:44 | 2024-09-18 07:56:20 | 2024-09-18 08:21:49 | 0:25:29 | 0:15:46 | 0:09:43 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
pass | 7910369 | 2024-09-18 07:01:45 | 2024-09-18 07:56:51 | 2024-09-18 08:55:04 | 0:58:13 | 0:48:33 | 0:09:40 | smithi | main | ubuntu | 22.04 | orch:cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
fail | 7910370 | 2024-09-18 07:01:47 | 2024-09-18 07:57:31 | 2024-09-18 08:24:09 | 0:26:38 | 0:16:11 | 0:10:27 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_ctdb_res_ips} | 4 | |
Failure Reason:
"2024-09-18T08:22:46.628872+0000 mon.a (mon.0) 813 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log |
dead | 7910371 | 2024-09-18 07:01:48 | 2024-09-18 07:58:22 | 2024-09-18 08:04:45 | 0:06:23 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} | 2 | |||
Failure Reason:
Error reimaging machines: This operation would block forever Hub: <Hub '' at 0x7ff68afc0270 epoll default pending=0 ref=0 fileno=4 thread_ident=0x7ff68d739740> Handles: [] |
pass | 7910372 | 2024-09-18 07:01:49 | 2024-09-18 07:58:52 | 2024-09-18 08:19:52 | 0:21:00 | 0:12:23 | 0:08:37 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
pass | 7910373 | 2024-09-18 07:01:51 | 2024-09-18 07:58:53 | 2024-09-18 08:33:08 | 0:34:15 | 0:24:59 | 0:09:16 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
pass | 7910374 | 2024-09-18 07:01:52 | 2024-09-18 07:59:33 | 2024-09-18 08:44:25 | 0:44:52 | 0:33:10 | 0:11:42 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7910375 | 2024-09-18 07:01:53 | 2024-09-18 08:01:24 | 2024-09-18 08:25:03 | 0:23:39 | 0:13:22 | 0:10:17 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
pass | 7910376 | 2024-09-18 07:01:55 | 2024-09-18 08:01:54 | 2024-09-18 08:23:44 | 0:21:50 | 0:11:14 | 0:10:36 | smithi | main | centos | 9.stream | orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 | |
pass | 7910377 | 2024-09-18 07:01:56 | 2024-09-18 08:03:05 | 2024-09-18 09:03:03 | 0:59:58 | 0:49:23 | 0:10:35 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
pass | 7910378 | 2024-09-18 07:01:57 | 2024-09-18 08:03:25 | 2024-09-18 08:33:51 | 0:30:26 | 0:19:54 | 0:10:32 | smithi | main | ubuntu | 22.04 | orch:cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7910379 | 2024-09-18 07:01:59 | 2024-09-18 08:04:16 | 2024-09-18 08:26:20 | 0:22:04 | 0:10:43 | 0:11:21 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_domain} | 2 | |
pass | 7910380 | 2024-09-18 07:02:00 | 2024-09-18 08:05:16 | 2024-09-18 08:41:35 | 0:36:19 | 0:26:11 | 0:10:08 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7910381 | 2024-09-18 07:02:01 | 2024-09-18 08:05:26 | 2024-09-18 08:53:09 | 0:47:43 | 0:34:59 | 0:12:44 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} | 2 | |
fail | 7910382 | 2024-09-18 07:02:03 | 2024-09-18 08:08:07 | 2024-09-18 08:40:43 | 0:32:36 | 0:20:12 | 0:12:24 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} | 3 | |
Failure Reason:
Command failed on smithi096 with status 5: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 9ae72dca-7597-11ef-bceb-c7b262605968 -- bash -c \'set -e\nset -x\nceph orch apply node-exporter\nceph orch apply grafana\nceph orch apply alertmanager\nceph orch apply prometheus\nsleep 240\nceph orch ls\nceph orch ps\nceph orch host ls\nMON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r \'"\'"\'last | .daemon_name\'"\'"\')\nGRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nPROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nGRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" \'"\'"\'.[] | select(.hostname==$GRAFANA_HOST) | .addr\'"\'"\')\nPROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" \'"\'"\'.[] | select(.hostname==$PROM_HOST) | .addr\'"\'"\')\nALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" \'"\'"\'.[] | select(.hostname==$ALERTM_HOST) | .addr\'"\'"\')\n# check each host node-exporter metrics endpoint is responsive\nALL_HOST_IPS=$(ceph orch host ls -f json | jq -r \'"\'"\'.[] | .addr\'"\'"\')\nfor ip in $ALL_HOST_IPS; do\n curl -s http://${ip}:9100/metric\ndone\n# check grafana endpoints are responsive and database health is okay\ncurl -k -s https://${GRAFANA_IP}:3000/api/health\ncurl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e \'"\'"\'.database == "ok"\'"\'"\'\n# stop mon daemon in order to trigger an alert\nceph orch daemon stop $MON_DAEMON\nsleep 120\n# check prometheus endpoints are responsive and mon down alert is firing\ncurl -s http://${PROM_IP}:9095/api/v1/status/config\ncurl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e \'"\'"\'.status == "success"\'"\'"\'\ncurl -s http://${PROM_IP}:9095/api/v1/alerts\ncurl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e \'"\'"\'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"\'"\'"\'\n# check alertmanager endpoints are responsive and mon down alert is active\ncurl -s http://${ALERTM_IP}:9093/api/v1/status\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e \'"\'"\'.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"\'"\'"\'\n\'' |
pass | 7910383 | 2024-09-18 07:02:04 | 2024-09-18 08:11:34 | 2024-09-18 08:34:54 | 0:23:20 | 0:14:04 | 0:09:16 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
pass | 7910384 | 2024-09-18 07:02:05 | 2024-09-18 08:11:54 | 2024-09-18 09:07:03 | 0:55:09 | 0:40:55 | 0:14:14 | smithi | main | ubuntu | 22.04 | orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} | 5 | |
pass | 7910385 | 2024-09-18 07:02:07 | 2024-09-18 08:13:55 | 2024-09-18 08:43:50 | 0:29:55 | 0:20:18 | 0:09:37 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7910386 | 2024-09-18 07:02:08 | 2024-09-18 08:14:15 | 2024-09-18 08:43:05 | 0:28:50 | 0:18:26 | 0:10:24 | smithi | main | ubuntu | 22.04 | orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_res_basic} | 2 | |
pass | 7910387 | 2024-09-18 07:02:09 | 2024-09-18 08:15:06 | 2024-09-18 09:00:22 | 0:45:16 | 0:33:55 | 0:11:21 | smithi | main | ubuntu | 22.04 | orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} | 2 | |
dead | 7910388 | 2024-09-18 07:02:11 | 2024-09-18 08:16:46 | 2024-09-18 16:26:12 | 8:09:26 | smithi | main | ubuntu | 22.04 | orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} | 3 | |||
Failure Reason:
hit max job timeout |
fail | 7910389 | 2024-09-18 07:02:12 | 2024-09-18 08:16:47 | 2024-09-18 09:04:22 | 0:47:35 | 0:35:57 | 0:11:38 | smithi | main | centos | 9.stream | orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (50) after waiting for 300 seconds |
pass | 7910390 | 2024-09-18 07:02:13 | 2024-09-18 08:16:57 | 2024-09-18 08:40:11 | 0:23:14 | 0:13:01 | 0:10:13 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
fail | 7910391 | 2024-09-18 07:02:15 | 2024-09-18 08:16:58 | 2024-09-18 09:33:23 | 1:16:25 | 1:05:33 | 0:10:52 | smithi | main | ubuntu | 22.04 | orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
Failure Reason:
Command failed on smithi078 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c5c73ad4-7598-11ef-bceb-c7b262605968 -e sha1=e7fb7b56dbae911c82c9a0310bb8cf8c37e8363e -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\'' |
pass | 7910392 | 2024-09-18 07:02:16 | 2024-09-18 08:17:28 | 2024-09-18 08:40:53 | 0:23:25 | 0:14:01 | 0:09:24 | smithi | main | centos | 9.stream | orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7910393 | 2024-09-18 07:02:18 | 2024-09-18 08:17:58 | 2024-09-18 09:29:37 | 1:11:39 | 1:00:48 | 0:10:51 | smithi | main | centos | 9.stream | orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} | 2 | |
dead | 7910394 | 2024-09-18 07:02:19 | 2024-09-18 08:18:29 | 2024-09-18 16:27:19 | 8:08:50 | smithi | main | centos | 9.stream | orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |||
Failure Reason:
hit max job timeout |
pass | 7910395 | 2024-09-18 07:02:20 | 2024-09-18 08:18:49 | 2024-09-18 08:40:09 | 0:21:20 | 0:10:48 | 0:10:32 | smithi | main | centos | 9.stream | orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_dom} | 2 | |
pass | 7910396 | 2024-09-18 07:02:22 | 2024-09-18 08:52:11 | 1411 | smithi | main | centos | 9.stream | orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} | 2 | ||||
pass | 7910397 | 2024-09-18 07:02:23 | 2024-09-18 08:19:10 | 2024-09-18 08:45:09 | 0:25:59 | 0:16:10 | 0:09:49 | smithi | main | centos | 9.stream | orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_set_mon_crush_locations} | 3 | |
pass | 7910398 | 2024-09-18 07:02:25 | 2024-09-18 08:20:01 | 2024-09-18 08:52:16 | 0:32:15 | 0:22:04 | 0:10:11 | smithi | main | ubuntu | 22.04 | orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 |