Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7889153 2024-09-04 15:42:23 2024-09-04 15:43:47 2024-09-04 16:12:33 0:28:46 0:21:51 0:06:55 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_python} 2
pass 7889154 2024-09-04 15:42:25 2024-09-04 15:43:47 2024-09-04 16:04:57 0:21:10 0:15:02 0:06:08 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
fail 7889155 2024-09-04 15:42:26 2024-09-04 15:43:48 2024-09-04 16:12:59 0:29:11 0:23:23 0:05:48 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi005 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:reef shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 1aabefea-6ad6-11ef-bcd6-c7b262605968 -e sha1=f9fcca5273b6971f640393d33a94730179073754 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''

pass 7889156 2024-09-04 15:42:27 2024-09-04 15:44:18 2024-09-04 16:24:04 0:39:46 0:32:35 0:07:11 smithi main centos 9.stream orch:cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
pass 7889157 2024-09-04 15:42:28 2024-09-04 15:45:09 2024-09-04 16:25:50 0:40:41 0:34:27 0:06:14 smithi main centos 9.stream orch:cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignore_mgr_down ignorelist_health pg_health} supported-random-distros$/{centos_latest} tasks/nfs} 1
pass 7889158 2024-09-04 15:42:30 2024-09-04 15:45:09 2024-09-04 16:03:50 0:18:41 0:13:10 0:05:31 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli} 1
pass 7889159 2024-09-04 15:42:31 2024-09-04 15:45:09 2024-09-04 16:14:34 0:29:25 0:22:49 0:06:36 smithi main centos 9.stream orch:cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
pass 7889160 2024-09-04 15:42:32 2024-09-04 15:45:20 2024-09-04 16:06:33 0:21:13 0:15:08 0:06:05 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
pass 7889161 2024-09-04 15:42:34 2024-09-04 15:45:20 2024-09-04 16:02:28 0:17:08 0:10:44 0:06:24 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/basic 3-final} 1
pass 7889162 2024-09-04 15:42:35 2024-09-04 15:45:30 2024-09-04 16:03:20 0:17:50 0:10:48 0:07:02 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 3
fail 7889163 2024-09-04 15:42:36 2024-09-04 15:46:11 2024-09-04 16:01:27 0:15:16 0:07:59 0:07:17 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} 2
Failure Reason:

Command failed on smithi057 with status 5: 'sudo systemctl stop ceph-8dd0a3da-6ad6-11ef-bcd6-c7b262605968@mon.a'

pass 7889164 2024-09-04 15:42:38 2024-09-04 15:46:12 2024-09-04 16:28:32 0:42:20 0:34:55 0:07:25 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} 2
fail 7889165 2024-09-04 15:42:39 2024-09-04 15:47:32 2024-09-04 16:12:11 0:24:39 0:15:33 0:09:06 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_ctdb_res_ips} 4
Failure Reason:

"2024-09-04T16:09:15.471263+0000 mon.a (mon.0) 784 : cluster [WRN] Health check failed: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

pass 7889166 2024-09-04 15:42:40 2024-09-04 15:48:33 2024-09-04 16:17:37 0:29:04 0:22:39 0:06:25 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rotate-keys} 2
pass 7889167 2024-09-04 15:42:42 2024-09-04 15:48:33 2024-09-04 16:07:26 0:18:53 0:12:13 0:06:40 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
pass 7889168 2024-09-04 15:42:43 2024-09-04 15:48:33 2024-09-04 16:09:43 0:21:10 0:14:50 0:06:20 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
pass 7889169 2024-09-04 15:42:45 2024-09-04 15:48:44 2024-09-04 16:09:54 0:21:10 0:12:05 0:09:05 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
pass 7889170 2024-09-04 15:42:46 2024-09-04 15:50:54 2024-09-04 16:26:36 0:35:42 0:28:56 0:06:46 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v2only root} 2
fail 7889171 2024-09-04 15:42:48 2024-09-04 15:51:45 2024-09-04 16:51:10 0:59:25 0:51:34 0:07:51 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (50) after waiting for 300 seconds

fail 7889172 2024-09-04 15:42:49 2024-09-04 15:51:45 2024-09-04 16:03:53 0:12:08 0:05:19 0:06:49 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

Command failed on smithi007 with status 5: 'sudo systemctl stop ceph-112f055a-6ad7-11ef-bcd6-c7b262605968@mon.smithi007'

fail 7889173 2024-09-04 15:42:51 2024-09-04 15:53:06 2024-09-04 16:41:57 0:48:51 0:41:36 0:07:15 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
Failure Reason:

Command failed on smithi084 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 079e25c0-6ad7-11ef-bcd6-c7b262605968 -e sha1=f9fcca5273b6971f640393d33a94730179073754 -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\''

pass 7889174 2024-09-04 15:42:52 2024-09-04 15:53:06 2024-09-04 16:24:37 0:31:31 0:22:54 0:08:37 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} 5
pass 7889175 2024-09-04 15:42:53 2024-09-04 15:54:57 2024-09-04 16:14:02 0:19:05 0:12:29 0:06:36 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} 2
pass 7889176 2024-09-04 15:42:55 2024-09-04 15:55:07 2024-09-04 16:13:57 0:18:50 0:09:44 0:09:06 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_basic} 2
pass 7889177 2024-09-04 15:42:56 2024-09-04 15:55:48 2024-09-04 16:26:48 0:31:00 0:21:11 0:09:49 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_python} 2
fail 7889178 2024-09-04 15:42:57 2024-09-04 16:00:08 2024-09-04 16:20:44 0:20:36 0:13:11 0:07:25 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi057 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:f9fcca5273b6971f640393d33a94730179073754 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid a804aec0-6ad8-11ef-bcd6-c7b262605968 -- bash -c \'set -e\nset -x\nwhile true; do TOKEN=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\'); echo $TOKEN; if [ "$TOKEN" != "master zone has no endpoint" ]; then break; fi; sleep 5; done\nTOKENS=$(ceph rgw realm tokens)\necho $TOKENS | jq --exit-status \'"\'"\'.[0].realm == "myrealm1"\'"\'"\'\necho $TOKENS | jq --exit-status \'"\'"\'.[0].token\'"\'"\'\nTOKEN_JSON=$(ceph rgw realm tokens | jq -r \'"\'"\'.[0].token\'"\'"\' | base64 --decode)\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_name == "myrealm1"\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.endpoint | test("http://.+:\\\\d+")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.realm_id | test("^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.access_key\'"\'"\'\necho $TOKEN_JSON | jq --exit-status \'"\'"\'.secret\'"\'"\'\n\''

pass 7889179 2024-09-04 15:42:59 2024-09-04 16:01:39 2024-09-04 16:20:53 0:19:14 0:13:13 0:06:01 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
fail 7889180 2024-09-04 15:43:00 2024-09-04 16:01:49 2024-09-04 16:41:10 0:39:21 0:32:10 0:07:11 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2024-09-04T16:30:00.000140+0000 mon.smithi105 (mon.0) 467 : cluster [WRN] osd.4 (root=default,host=smithi183) is down" in cluster log

fail 7889181 2024-09-04 15:43:02 2024-09-04 16:02:10 2024-09-04 16:14:23 0:12:13 0:05:10 0:07:03 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

Command failed on smithi062 with status 5: 'sudo systemctl stop ceph-863a2a18-6ad8-11ef-bcd6-c7b262605968@mon.smithi062'

pass 7889182 2024-09-04 15:43:03 2024-09-04 16:03:20 2024-09-04 16:20:50 0:17:30 0:10:22 0:07:08 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 3
pass 7889183 2024-09-04 15:43:04 2024-09-04 16:03:41 2024-09-04 16:21:00 0:17:19 0:11:05 0:06:14 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_res_dom} 2
pass 7889184 2024-09-04 15:43:06 2024-09-04 16:04:11 2024-09-04 16:33:13 0:29:02 0:22:40 0:06:22 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} 2
pass 7889185 2024-09-04 15:43:07 2024-09-04 16:04:11 2024-09-04 16:25:15 0:21:04 0:15:11 0:05:53 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
pass 7889186 2024-09-04 15:43:08 2024-09-04 16:04:22 2024-09-04 16:25:28 0:21:06 0:14:42 0:06:24 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} 2
fail 7889187 2024-09-04 15:43:10 2024-09-04 16:04:33 2024-09-04 16:26:12 0:21:39 0:14:33 0:07:06 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} 2
Failure Reason:

reached maximum tries (120) after waiting for 120 seconds

pass 7889188 2024-09-04 15:43:11 2024-09-04 16:05:03 2024-09-04 16:24:26 0:19:23 0:11:28 0:07:55 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} 2
pass 7889189 2024-09-04 15:43:13 2024-09-04 16:05:13 2024-09-04 16:27:01 0:21:48 0:14:25 0:07:23 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
pass 7889190 2024-09-04 15:43:14 2024-09-04 16:05:44 2024-09-04 16:45:55 0:40:11 0:33:10 0:07:01 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
pass 7889191 2024-09-04 15:43:15 2024-09-04 16:06:44 2024-09-04 16:24:03 0:17:19 0:10:29 0:06:50 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} 2
fail 7889192 2024-09-04 15:43:17 2024-09-04 16:06:55 2024-09-04 17:05:38 0:58:43 0:50:57 0:07:46 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (50) after waiting for 300 seconds

pass 7889193 2024-09-04 15:43:18 2024-09-04 16:08:35 2024-09-04 16:28:54 0:20:19 0:11:57 0:08:22 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} 2
pass 7889194 2024-09-04 15:43:19 2024-09-04 16:09:56 2024-09-04 16:29:19 0:19:23 0:12:28 0:06:55 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
pass 7889195 2024-09-04 15:43:21 2024-09-04 16:10:06 2024-09-04 17:13:14 1:03:08 0:55:49 0:07:19 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
pass 7889196 2024-09-04 15:43:22 2024-09-04 16:10:07 2024-09-04 16:29:58 0:19:51 0:09:48 0:10:03 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_basic} 2
pass 7889197 2024-09-04 15:43:24 2024-09-04 16:12:27 2024-09-04 16:43:21 0:30:54 0:23:48 0:07:06 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rotate-keys} 2
pass 7889198 2024-09-04 15:43:25 2024-09-04 16:12:28 2024-09-04 16:25:15 0:12:47 0:05:30 0:07:17 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_cephadm_repos} 1
fail 7889199 2024-09-04 15:43:26 2024-09-04 16:12:28 2024-09-04 16:25:21 0:12:53 0:05:51 0:07:02 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

cannot pull file with status: requested

pass 7889200 2024-09-04 15:43:28 2024-09-04 16:12:48 2024-09-04 16:31:42 0:18:54 0:12:55 0:05:59 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} 1
pass 7889201 2024-09-04 15:43:29 2024-09-04 16:12:49 2024-09-04 16:32:11 0:19:22 0:11:56 0:07:26 smithi main centos 9.stream orch:cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/on orchestrator_cli} 2
pass 7889202 2024-09-04 15:43:30 2024-09-04 16:13:19 2024-09-04 16:33:11 0:19:52 0:12:27 0:07:25 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} 2
pass 7889203 2024-09-04 15:43:32 2024-09-04 16:14:10 2024-09-04 16:31:15 0:17:05 0:11:22 0:05:43 smithi main centos 9.stream orch:cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/rgw 3-final} 1
pass 7889204 2024-09-04 15:43:33 2024-09-04 16:14:20 2024-09-04 16:32:25 0:18:05 0:10:33 0:07:32 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} 3
fail 7889205 2024-09-04 15:43:35 2024-09-04 16:14:30 2024-09-04 16:25:37 0:11:07 0:03:42 0:07:25 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-reef 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason:

cannot pull file with status: requested

pass 7889206 2024-09-04 15:43:36 2024-09-04 16:14:41 2024-09-04 16:35:36 0:20:55 0:14:16 0:06:39 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} 2
pass 7889207 2024-09-04 15:43:37 2024-09-04 16:14:41 2024-09-04 16:34:02 0:19:21 0:12:56 0:06:25 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7889208 2024-09-04 15:43:39 2024-09-04 16:14:52 2024-09-04 16:38:46 0:23:54 0:14:47 0:09:07 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_ctdb_res_basic} 4
pass 7889209 2024-09-04 15:43:40 2024-09-04 16:17:32 2024-09-04 16:36:44 0:19:12 0:13:24 0:05:48 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
pass 7889210 2024-09-04 15:43:41 2024-09-04 16:17:43 2024-09-04 16:52:58 0:35:15 0:28:33 0:06:42 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
pass 7889211 2024-09-04 15:43:43 2024-09-04 16:17:53 2024-09-04 16:57:53 0:40:00 0:33:20 0:06:40 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_api_tests} 2
pass 7889212 2024-09-04 15:43:44 2024-09-04 16:18:23 2024-09-04 16:38:35 0:20:12 0:12:02 0:08:10 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_extra_daemon_features} 2
fail 7889213 2024-09-04 15:43:46 2024-09-04 16:19:44 2024-09-04 17:03:54 0:44:10 0:36:25 0:07:45 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (50) after waiting for 300 seconds

pass 7889214 2024-09-04 15:43:47 2024-09-04 16:20:54 2024-09-04 16:42:04 0:21:10 0:13:56 0:07:14 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
fail 7889215 2024-09-04 15:43:48 2024-09-04 16:20:55 2024-09-04 17:10:12 0:49:17 0:42:35 0:06:42 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream-squid 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

Command failed on smithi100 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:squid shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fa5d394c-6ada-11ef-bcd6-c7b262605968 -e sha1=f9fcca5273b6971f640393d33a94730179073754 -- bash -c \'ceph versions | jq -e \'"\'"\'.rgw | length == 1\'"\'"\'\''

pass 7889216 2024-09-04 15:43:50 2024-09-04 16:21:05 2024-09-04 16:42:04 0:20:59 0:13:27 0:07:32 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} 1
pass 7889217 2024-09-04 15:43:51 2024-09-04 16:21:06 2024-09-04 16:40:09 0:19:03 0:12:31 0:06:32 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7889218 2024-09-04 15:43:53 2024-09-04 16:21:06 2024-09-04 16:41:56 0:20:50 0:15:21 0:05:29 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
fail 7889219 2024-09-04 15:43:54 2024-09-04 16:21:16 2024-09-04 16:47:23 0:26:07 0:15:24 0:10:43 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_ctdb_res_ips} 4
Failure Reason:

SELinux denials found on ubuntu@smithi092.front.sepia.ceph.com: ['type=AVC msg=audit(1725468164.361:10882): avc: denied { nlmsg_read } for pid=60732 comm="ss" scontext=system_u:system_r:container_t:s0:c467,c895 tcontext=system_u:system_r:container_t:s0:c467,c895 tclass=netlink_tcpdiag_socket permissive=1']

pass 7889220 2024-09-04 15:43:55 2024-09-04 16:24:17 2024-09-04 16:53:40 0:29:23 0:23:17 0:06:06 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys} 2
pass 7889221 2024-09-04 15:43:57 2024-09-04 16:24:38 2024-09-04 16:45:03 0:20:25 0:12:52 0:07:33 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
pass 7889222 2024-09-04 15:43:58 2024-09-04 16:24:48 2024-09-04 17:06:09 0:41:21 0:34:14 0:07:07 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
pass 7889223 2024-09-04 15:43:59 2024-09-04 16:24:48 2024-09-04 16:44:18 0:19:30 0:13:08 0:06:22 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
pass 7889224 2024-09-04 15:44:01 2024-09-04 16:24:59 2024-09-04 16:42:19 0:17:20 0:10:13 0:07:07 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
pass 7889225 2024-09-04 15:44:02 2024-09-04 16:25:29 2024-09-04 16:42:35 0:17:06 0:10:46 0:06:20 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_domain} 2
pass 7889226 2024-09-04 15:44:03 2024-09-04 16:25:40 2024-09-04 17:06:26 0:40:46 0:34:02 0:06:44 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
fail 7889227 2024-09-04 15:44:05 2024-09-04 16:25:40 2024-09-04 16:50:46 0:25:06 0:18:25 0:06:41 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} 3
Failure Reason:

Command failed on smithi071 with status 5: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:f9fcca5273b6971f640393d33a94730179073754 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 0a105cb0-6adc-11ef-bcd6-c7b262605968 -- bash -c \'set -e\nset -x\nceph orch apply node-exporter\nceph orch apply grafana\nceph orch apply alertmanager\nceph orch apply prometheus\nsleep 240\nceph orch ls\nceph orch ps\nceph orch host ls\nMON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r \'"\'"\'last | .daemon_name\'"\'"\')\nGRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nPROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e \'"\'"\'.[]\'"\'"\' | jq -r \'"\'"\'.hostname\'"\'"\')\nGRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" \'"\'"\'.[] | select(.hostname==$GRAFANA_HOST) | .addr\'"\'"\')\nPROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" \'"\'"\'.[] | select(.hostname==$PROM_HOST) | .addr\'"\'"\')\nALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" \'"\'"\'.[] | select(.hostname==$ALERTM_HOST) | .addr\'"\'"\')\n# check each host node-exporter metrics endpoint is responsive\nALL_HOST_IPS=$(ceph orch host ls -f json | jq -r \'"\'"\'.[] | .addr\'"\'"\')\nfor ip in $ALL_HOST_IPS; do\n curl -s http://${ip}:9100/metric\ndone\n# check grafana endpoints are responsive and database health is okay\ncurl -k -s https://${GRAFANA_IP}:3000/api/health\ncurl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e \'"\'"\'.database == "ok"\'"\'"\'\n# stop mon daemon in order to trigger an alert\nceph orch daemon stop $MON_DAEMON\nsleep 120\n# check prometheus endpoints are responsive and mon down alert is firing\ncurl -s http://${PROM_IP}:9095/api/v1/status/config\ncurl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e \'"\'"\'.status == "success"\'"\'"\'\ncurl -s http://${PROM_IP}:9095/api/v1/alerts\ncurl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e \'"\'"\'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"\'"\'"\'\n# check alertmanager endpoints are responsive and mon down alert is active\ncurl -s http://${ALERTM_IP}:9093/api/v1/status\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts\ncurl -s http://${ALERTM_IP}:9093/api/v1/alerts | jq -e \'"\'"\'.data | .[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"\'"\'"\'\n\''

pass 7889228 2024-09-04 15:44:06 2024-09-04 16:25:50 2024-09-04 16:47:32 0:21:42 0:14:04 0:07:38 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} 2
fail 7889229 2024-09-04 15:44:07 2024-09-04 16:26:31 2024-09-04 17:09:49 0:43:18 0:36:55 0:06:23 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/squid 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (50) after waiting for 300 seconds

pass 7889230 2024-09-04 15:44:09 2024-09-04 16:26:51 2024-09-04 16:45:43 0:18:52 0:13:16 0:05:36 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
pass 7889231 2024-09-04 15:44:10 2024-09-04 16:27:02 2024-09-04 16:45:58 0:18:56 0:13:09 0:05:47 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
pass 7889232 2024-09-04 15:44:11 2024-09-04 16:27:12 2024-09-04 17:24:39 0:57:27 0:49:49 0:07:38 smithi main centos 9.stream orch:cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async root} 2
dead 7889233 2024-09-04 15:44:13 2024-09-04 16:27:22 2024-09-05 00:37:14 8:09:52 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
Failure Reason:

hit max job timeout

pass 7889234 2024-09-04 15:44:14 2024-09-04 16:28:43 2024-09-04 16:45:55 0:17:12 0:10:34 0:06:38 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_dom} 2
pass 7889235 2024-09-04 15:44:15 2024-09-04 16:29:03 2024-09-04 16:58:47 0:29:44 0:22:47 0:06:57 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} 2
pass 7889236 2024-09-04 15:44:17 2024-09-04 16:29:34 2024-09-04 16:51:45 0:22:11 0:14:00 0:08:11 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_set_mon_crush_locations} 3