User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
teuthology | 2024-05-20 20:08:15 | 2024-05-27 13:10:31 | 2024-05-28 03:32:24 | 14:21:53 | orch | main | smithi | 5e689ef | 118 | 53 | 21 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 7716331 | 2024-05-20 20:10:09 | 2024-05-27 13:10:31 | 2024-05-27 13:35:28 | 0:24:57 | 0:14:37 | 0:10:20 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
fail | 7716332 | 2024-05-20 20:10:10 | 2024-05-27 13:10:51 | 2024-05-27 13:31:16 | 0:20:25 | 0:07:27 | 0:12:58 | smithi | main | ubuntu | 22.04 | orch/rook/smoke/{0-distro/ubuntu_22.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/3-node k8s/1.21 net/flannel rook/1.7.2} | 3 | |
Failure Reason:
Command failed on smithi111 with status 100: "sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt update && sudo apt install -y kubelet kubeadm kubectl bridge-utils" |
fail | 7716333 | 2024-05-20 20:10:11 | 2024-05-27 13:13:12 | 2024-05-27 13:37:54 | 0:24:42 | 0:14:02 | 0:10:40 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason:
"2024-05-27T13:30:20.492180+0000 mon.smithi069 (mon.0) 161 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log |
pass | 7716334 | 2024-05-20 20:10:12 | 2024-05-27 13:17:07 | 2024-05-27 13:37:43 | 0:20:36 | 0:11:39 | 0:08:57 | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_dom} | 2 | |
dead | 7716335 | 2024-05-20 20:10:13 | 2024-05-27 13:17:07 | 2024-05-28 01:28:22 | 12:11:15 | | | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
hit max job timeout |
pass | 7716336 | 2024-05-20 20:10:14 | 2024-05-27 13:18:48 | 2024-05-27 13:55:19 | 0:36:31 | 0:26:26 | 0:10:05 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
pass | 7716337 | 2024-05-20 20:10:15 | 2024-05-27 13:19:48 | 2024-05-27 13:42:48 | 0:23:00 | 0:13:22 | 0:09:38 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
pass | 7716338 | 2024-05-20 20:10:16 | 2024-05-27 13:19:48 | 2024-05-27 14:20:16 | 1:00:28 | 0:49:07 | 0:11:21 | smithi | main | ubuntu | 22.04 | orch/cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async root} | 2 | |
pass | 7716339 | 2024-05-20 20:10:17 | 2024-05-27 13:20:29 | 2024-05-27 13:52:27 | 0:31:58 | 0:21:11 | 0:10:47 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
fail | 7716340 | 2024-05-20 20:10:18 | 2024-05-27 13:20:59 | 2024-05-27 13:51:23 | 0:30:24 | 0:18:35 | 0:11:49 | smithi | main | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed (workunit test rados/test_python.sh) on smithi143 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acdd6bc00bfd8e7bf10533befba15ce193d11b90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh' |
pass | 7716341 | 2024-05-20 20:10:19 | 2024-05-27 13:21:40 | 2024-05-27 14:02:41 | 0:41:01 | 0:32:50 | 0:08:11 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7716342 | 2024-05-20 20:10:20 | 2024-05-27 13:21:50 | 2024-05-27 14:07:52 | 0:46:02 | 0:34:42 | 0:11:20 | smithi | main | centos | 9.stream | orch/cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
fail | 7716343 | 2024-05-20 20:10:21 | 2024-05-27 13:22:41 | 2024-05-27 14:11:59 | 0:49:18 | 0:39:37 | 0:09:41 | smithi | main | centos | 9.stream | orch/cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignore_mgr_down ignorelist_health pg_health} supported-random-distros$/{centos_latest} tasks/nfs} | 1 | |
Failure Reason:
"2024-05-27T13:55:36.893305+0000 mds.nfs-cephfs.smithi177.pvsfjg (mds.0) 1 : cluster [WRN] client.15217 isn't responding to mclientcaps(revoke), ino 0x1 pending pAsLsXs issued pAsLsXsFs, sent 63.576792 seconds ago" in cluster log |
pass | 7716344 | 2024-05-20 20:10:22 | 2024-05-27 13:23:11 | 2024-05-27 13:42:11 | 0:19:00 | 0:09:58 | 0:09:02 | smithi | main | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_adoption} | 1 | |
pass | 7716345 | 2024-05-20 20:10:23 | 2024-05-27 13:23:11 | 2024-05-27 13:44:20 | 0:21:09 | 0:12:22 | 0:08:47 | smithi | main | centos | 9.stream | orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream_runc} 2-node-mgr agent/off orchestrator_cli} | 2 | |
pass | 7716346 | 2024-05-20 20:10:24 | 2024-05-27 13:23:32 | 2024-05-27 13:48:16 | 0:24:44 | 0:13:34 | 0:11:10 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/deploy-raw} | 2 | |
fail | 7716347 | 2024-05-20 20:10:25 | 2024-05-27 13:25:02 | 2024-05-27 14:02:34 | 0:37:32 | 0:25:50 | 0:11:42 | smithi | main | centos | 9.stream | orch/cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} | 3 | |
Failure Reason:
"2024-05-27T13:43:19.971155+0000 mon.a (mon.0) 207 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log |
pass | 7716348 | 2024-05-20 20:10:26 | 2024-05-27 13:27:43 | 2024-05-27 13:50:58 | 0:23:15 | 0:12:50 | 0:10:25 | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_basic} | 2 | |
pass | 7716349 | 2024-05-20 20:10:27 | 2024-05-27 13:27:54 | 2024-05-27 13:50:52 | 0:22:58 | 0:13:04 | 0:09:54 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
pass | 7716350 | 2024-05-20 20:10:28 | 2024-05-27 13:27:54 | 2024-05-27 13:49:47 | 0:21:53 | 0:11:47 | 0:10:06 | smithi | main | centos | 9.stream | orch/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/basic 3-final} | 1 | |
pass | 7716351 | 2024-05-20 20:10:29 | 2024-05-27 13:28:24 | 2024-05-27 13:49:27 | 0:21:03 | 0:11:30 | 0:09:33 | smithi | main | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 3 | |
fail | 7716352 | 2024-05-20 20:10:30 | 2024-05-27 13:28:35 | 2024-05-27 14:05:42 | 0:37:07 | 0:28:05 | 0:09:02 | smithi | main | centos | 9.stream | orch/cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
Failure Reason:
"2024-05-27T13:53:40.348837+0000 mon.a (mon.0) 989 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log |
fail | 7716353 | 2024-05-20 20:10:31 | 2024-05-27 13:28:45 | 2024-05-27 13:58:56 | 0:30:11 | 0:19:09 | 0:11:02 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/classic task/test_monitoring_stack_basic} | 3 | |
Failure Reason:
"2024-05-27T13:52:46.397313+0000 mon.a (mon.0) 725 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log |
pass | 7716354 | 2024-05-20 20:10:32 | 2024-05-27 13:28:55 | 2024-05-27 13:54:25 | 0:25:30 | 0:14:02 | 0:11:28 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
pass | 7716355 | 2024-05-20 20:10:34 | 2024-05-27 13:29:16 | 2024-05-27 14:00:55 | 0:31:39 | 0:20:07 | 0:11:32 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
pass | 7716356 | 2024-05-20 20:10:35 | 2024-05-27 13:31:17 | 2024-05-27 13:54:58 | 0:23:41 | 0:13:28 | 0:10:13 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7716357 | 2024-05-20 20:10:36 | 2024-05-27 13:31:17 | 2024-05-27 13:52:50 | 0:21:33 | 0:11:45 | 0:09:48 | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_domain} | 2 | |
pass | 7716358 | 2024-05-20 20:10:37 | 2024-05-27 13:31:57 | 2024-05-27 14:03:47 | 0:31:50 | 0:19:57 | 0:11:53 | smithi | main | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/connectivity task/test_rgw_multisite} | 3 | |
fail | 7716359 | 2024-05-20 20:10:38 | 2024-05-27 13:33:38 | 2024-05-27 13:58:28 | 0:24:50 | 0:14:47 | 0:10:03 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason:
"2024-05-27T13:56:13.184801+0000 mon.smithi081 (mon.0) 822 : cluster [WRN] Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log |
fail | 7716360 | 2024-05-20 20:10:39 | 2024-05-27 13:35:29 | 2024-05-27 14:33:06 | 0:57:37 | 0:48:39 | 0:08:58 | smithi | main | ubuntu | 22.04 | orch/cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v1only root} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi092 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acdd6bc00bfd8e7bf10533befba15ce193d11b90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
fail | 7716361 | 2024-05-20 20:10:40 | 2024-05-27 13:35:29 | 2024-05-27 14:24:58 | 0:49:29 | 0:39:00 | 0:10:29 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (51) after waiting for 300 seconds |
pass | 7716362 | 2024-05-20 20:10:41 | 2024-05-27 13:36:00 | 2024-05-27 13:58:31 | 0:22:31 | 0:14:17 | 0:08:14 | smithi | main | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_cephadm_timeout} | 1 | |
fail | 7716363 | 2024-05-20 20:10:42 | 2024-05-27 13:36:00 | 2024-05-27 13:43:39 | 0:07:39 | | | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
Failure Reason:
Command failed on smithi204 with status 1: 'sudo yum install -y kernel' |
pass | 7716364 | 2024-05-20 20:10:43 | 2024-05-27 13:36:20 | 2024-05-27 14:09:48 | 0:33:28 | 0:23:34 | 0:09:54 | smithi | main | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/connectivity msgr/async start tasks/rotate-keys} | 2 | |
fail | 7716365 | 2024-05-20 20:10:44 | 2024-05-27 13:36:21 | 2024-05-27 14:01:40 | 0:25:19 | 0:14:46 | 0:10:33 | smithi | main | centos | 9.stream | orch/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason:
"2024-05-27T13:57:39.129452+0000 mon.a (mon.0) 812 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log |
pass | 7716366 | 2024-05-20 20:10:45 | 2024-05-27 13:36:21 | 2024-05-27 14:04:37 | 0:28:16 | 0:16:48 | 0:11:28 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/classic task/test_set_mon_crush_locations} | 3 | |
pass | 7716367 | 2024-05-20 20:10:46 | 2024-05-27 13:37:52 | 2024-05-27 14:09:38 | 0:31:46 | 0:21:46 | 0:10:00 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 | |
pass | 7716368 | 2024-05-20 20:10:47 | 2024-05-27 13:38:02 | 2024-05-27 14:07:41 | 0:29:39 | 0:19:55 | 0:09:44 | smithi | main | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 7716369 | 2024-05-20 20:10:48 | 2024-05-27 13:38:02 | 2024-05-27 14:08:50 | 0:30:48 | 0:18:52 | 0:11:56 | smithi | main | ubuntu | 22.04 | orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_basic} | 2 | |
pass | 7716370 | 2024-05-20 20:10:50 | 2024-05-27 13:38:23 | 2024-05-27 14:03:36 | 0:25:13 | 0:15:19 | 0:09:54 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
fail | 7716371 | 2024-05-20 20:10:51 | 2024-05-27 13:38:43 | 2024-05-27 14:00:52 | 0:22:09 | 0:12:24 | 0:09:45 | smithi | main | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 | |
Failure Reason:
"2024-05-27T13:57:11.491724+0000 mon.a (mon.0) 446 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log |
pass | 7716372 | 2024-05-20 20:10:52 | 2024-05-27 13:39:34 | 2024-05-27 14:27:08 | 0:47:34 | 0:38:37 | 0:08:57 | smithi | main | ubuntu | 22.04 | orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
pass | 7716373 | 2024-05-20 20:10:53 | 2024-05-27 13:39:34 | 2024-05-27 14:00:32 | 0:20:58 | 0:11:32 | 0:09:26 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/connectivity task/test_ca_signed_key} | 2 | |
pass | 7716374 | 2024-05-20 20:10:54 | 2024-05-27 13:39:34 | 2024-05-27 14:06:03 | 0:26:29 | 0:15:55 | 0:10:34 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
pass | 7716375 | 2024-05-20 20:10:55 | 2024-05-27 13:41:05 | 2024-05-27 14:41:58 | 1:00:53 | 0:50:46 | 0:10:07 | smithi | main | centos | 9.stream | orch/cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} | 2 | |
pass | 7716376 | 2024-05-20 20:10:56 | 2024-05-27 13:42:56 | 2024-05-27 14:27:22 | 0:44:26 | 0:33:13 | 0:11:13 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7716377 | 2024-05-20 20:10:57 | 2024-05-27 13:44:26 | 2024-05-27 14:18:06 | 0:33:40 | 0:23:41 | 0:09:59 | smithi | main | ubuntu | 22.04 | orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli} | 1 | |
pass | 7716378 | 2024-05-20 20:10:58 | 2024-05-27 13:44:27 | 2024-05-27 14:07:58 | 0:23:31 | 0:13:40 | 0:09:51 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
fail | 7716379 | 2024-05-20 20:10:59 | 2024-05-27 13:45:17 | 2024-05-27 14:02:45 | 0:17:28 | 0:07:16 | 0:10:12 | smithi | main | ubuntu | 22.04 | orch/rook/smoke/{0-distro/ubuntu_22.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/1-node k8s/1.21 net/host rook/master} | 1 | |
Failure Reason:
Command failed on smithi087 with status 100: "sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt update && sudo apt install -y kubelet kubeadm kubectl bridge-utils" |
pass | 7716380 | 2024-05-20 20:11:00 | 2024-05-27 13:45:17 | 2024-05-27 14:09:26 | 0:24:09 | 0:11:38 | 0:12:31 | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_domain} | 2 | |
fail | 7716381 | 2024-05-20 20:11:01 | 2024-05-27 13:48:18 | 2024-05-27 14:23:17 | 0:34:59 | 0:23:59 | 0:11:00 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason:
"2024-05-27T14:15:43.565777+0000 mon.smithi080 (mon.0) 865 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log |
fail | 7716382 | 2024-05-20 20:11:03 | 2024-05-27 13:49:29 | 2024-05-27 14:41:08 | 0:51:39 | 0:41:35 | 0:10:04 | smithi | main | ubuntu | 22.04 | orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_api_tests} | 2 | |
Failure Reason:
Command failed (workunit test rados/test.sh) on smithi077 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acdd6bc00bfd8e7bf10533befba15ce193d11b90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh' |
pass | 7716383 | 2024-05-20 20:11:04 | 2024-05-27 13:49:29 | 2024-05-27 14:15:08 | 0:25:39 | 0:14:37 | 0:11:02 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/classic task/test_cephadm} | 1 | |
pass | 7716384 | 2024-05-20 20:11:05 | 2024-05-27 13:49:50 | 2024-05-27 14:18:27 | 0:28:37 | 0:16:59 | 0:11:38 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
fail | 7716385 | 2024-05-20 20:11:06 | 2024-05-27 13:50:30 | 2024-05-27 14:16:05 | 0:25:35 | 0:15:08 | 0:10:27 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
Failure Reason:
"2024-05-27T14:11:24.809995+0000 mon.smithi110 (mon.0) 839 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log |
fail | 7716386 | 2024-05-20 20:11:07 | 2024-05-27 13:51:01 | 2024-05-27 14:16:01 | 0:25:00 | 0:15:59 | 0:09:01 | smithi | main | centos | 9.stream | orch/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 2 | |
Failure Reason:
"2024-05-27T14:09:13.293562+0000 mon.a (mon.0) 572 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log |
pass | 7716387 | 2024-05-20 20:11:08 | 2024-05-27 13:51:01 | 2024-05-27 14:15:14 | 0:24:13 | 0:13:15 | 0:10:58 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7716388 | 2024-05-20 20:11:09 | 2024-05-27 13:51:31 | 2024-05-27 14:13:32 | 0:22:01 | 0:11:10 | 0:10:51 | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_res_basic} | 2 | |
fail | 7716389 | 2024-05-20 20:11:10 | 2024-05-27 13:54:37 | 2024-05-27 14:15:10 | 0:20:33 | 0:08:50 | 0:11:43 | smithi | main | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/connectivity task/test_cephadm_repos} | 1 | |
Failure Reason:
Command failed (workunit test cephadm/test_repos.sh) on smithi111 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acdd6bc00bfd8e7bf10533befba15ce193d11b90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_repos.sh' |
fail | 7716390 | 2024-05-20 20:11:11 | 2024-05-27 13:55:07 | 2024-05-27 14:27:00 | 0:31:53 | 0:21:09 | 0:10:44 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason:
"2024-05-27T14:21:35.535335+0000 mon.smithi062 (mon.0) 790 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log |
pass | 7716391 | 2024-05-20 20:11:12 | 2024-05-27 13:55:17 | 2024-05-27 14:39:38 | 0:44:21 | 0:31:09 | 0:13:12 | smithi | main | centos | 9.stream | orch/cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} | 2 | |
fail | 7716392 | 2024-05-20 20:11:13 | 2024-05-27 13:58:38 | 2024-05-27 14:47:10 | 0:48:32 | 0:39:38 | 0:08:54 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (51) after waiting for 300 seconds |
pass | 7716393 | 2024-05-20 20:11:14 | 2024-05-27 13:58:39 | 2024-05-27 14:38:29 | 0:39:50 | 0:26:13 | 0:13:37 | smithi | main | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli_mon} | 5 | |
pass | 7716394 | 2024-05-20 20:11:15 | 2024-05-27 14:02:41 | 2024-05-27 14:25:20 | 0:22:39 | 0:13:35 | 0:09:04 | smithi | main | centos | 9.stream | orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream} 2-node-mgr agent/on orchestrator_cli} | 2 | |
pass | 7716395 | 2024-05-20 20:11:16 | 2024-05-27 14:02:41 | 2024-05-27 14:25:48 | 0:23:07 | 0:13:36 | 0:09:31 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7716396 | 2024-05-20 20:11:17 | 2024-05-27 14:02:41 | 2024-05-27 14:23:50 | 0:21:09 | 0:12:39 | 0:08:30 | smithi | main | centos | 9.stream | orch/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/rgw 3-final} | 1 | |
fail | 7716397 | 2024-05-20 20:11:18 | 2024-05-27 14:02:52 | 2024-05-27 14:25:15 | 0:22:23 | 0:12:26 | 0:09:57 | smithi | main | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 | |
Failure Reason:
"2024-05-27T14:21:41.996939+0000 mon.a (mon.0) 449 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log |
fail | 7716398 | 2024-05-20 20:11:19 | 2024-05-27 14:03:52 | 2024-05-27 14:29:21 | 0:25:29 | 0:15:02 | 0:10:27 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/classic task/test_extra_daemon_features} | 2 | |
Failure Reason:
"2024-05-27T14:24:33.555296+0000 mon.a (mon.0) 346 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log |
pass | 7716399 | 2024-05-20 20:11:20 | 2024-05-27 14:03:53 | 2024-05-27 14:27:41 | 0:23:48 | 0:14:38 | 0:09:10 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
pass | 7716400 | 2024-05-20 20:11:21 | 2024-05-27 14:04:43 | 2024-05-27 14:37:23 | 0:32:40 | 0:21:05 | 0:11:35 | smithi | main | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
pass | 7716401 | 2024-05-20 20:11:22 | 2024-05-27 14:05:44 | 2024-05-27 14:35:17 | 0:29:33 | 0:19:06 | 0:10:27 | smithi | main | ubuntu | 22.04 | orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_res_dom} | 2 | |
fail | 7716402 | 2024-05-20 20:11:23 | 2024-05-27 14:05:44 | 2024-05-27 14:34:58 | 0:29:14 | 0:18:48 | 0:10:26 | smithi | main | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed (workunit test rados/test_python.sh) on smithi179 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acdd6bc00bfd8e7bf10533befba15ce193d11b90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh' |
fail | 7716403 | 2024-05-20 20:11:24 | 2024-05-27 14:06:04 | 2024-05-27 15:10:22 | 1:04:18 | 0:53:32 | 0:10:46 | smithi | main | centos | 9.stream | orch/cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
Failure Reason:
"2024-05-27T14:27:07.876186+0000 mon.a (mon.0) 851 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log |
dead | 7716404 | 2024-05-20 20:11:25 | 2024-05-27 14:07:25 | 2024-05-28 02:17:38 | 12:10:13 | | | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason:
hit max job timeout |
pass | 7716405 | 2024-05-20 20:11:26 | 2024-05-27 14:07:45 | 2024-05-27 14:33:12 | 0:25:27 | 0:15:35 | 0:09:52 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/connectivity task/test_host_drain} | 3 | |
pass | 7716406 | 2024-05-20 20:11:27 | 2024-05-27 14:08:06 | 2024-05-27 14:33:28 | 0:25:22 | 0:15:47 | 0:09:35 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
fail | 7716407 | 2024-05-20 20:11:28 | 2024-05-27 14:08:56 | 2024-05-27 14:58:50 | 0:49:54 | 0:39:31 | 0:10:23 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (51) after waiting for 300 seconds |
pass | 7716408 | 2024-05-20 20:11:29 | 2024-05-27 14:09:27 | 2024-05-27 14:29:05 | 0:19:38 | 0:10:04 | 0:09:34 | smithi | main | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_adoption} | 1 | |
pass | 7716409 | 2024-05-20 20:11:30 | 2024-05-27 14:09:27 | 2024-05-27 14:39:27 | 0:30:00 | 0:19:03 | 0:10:57 | smithi | main | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/deploy-raw} | 2 | |
pass | 7716410 | 2024-05-20 20:11:31 | 2024-05-27 14:09:47 | 2024-05-27 14:39:54 | 0:30:07 | 0:19:04 | 0:11:03 | smithi | main | ubuntu | 22.04 | orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_basic} | 2 | |
pass | 7716411 | 2024-05-20 20:11:32 | 2024-05-27 14:09:58 | 2024-05-27 14:47:23 | 0:37:25 | 0:24:55 | 0:12:30 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 2 | |
pass | 7716412 | 2024-05-20 20:11:33 | 2024-05-27 14:13:39 | 2024-05-27 14:36:38 | 0:22:59 | 0:13:33 | 0:09:26 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
pass | 7716413 | 2024-05-20 20:11:34 | 2024-05-27 14:13:39 | 2024-05-27 15:15:15 | 1:01:36 | 0:49:58 | 0:11:38 | smithi | main | ubuntu | 22.04 | orch/cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v1only root} | 2 | |
pass | 7716414 | 2024-05-20 20:11:35 | 2024-05-27 14:15:10 | 2024-05-27 14:47:39 | 0:32:29 | 0:20:10 | 0:12:19 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
pass | 7716415 | 2024-05-20 20:11:36 | 2024-05-27 14:15:20 | 2024-05-27 14:38:01 | 0:22:41 | 0:13:37 | 0:09:04 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
dead | 7716416 | 2024-05-20 20:11:37 | 2024-05-27 14:15:20 | 2024-05-27 14:16:55 | 0:01:35 | | | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
Failure Reason: Error reimaging machines: Failed to power on smithi119
dead | 7716417 | 2024-05-20 20:11:38 | 2024-05-27 14:15:51 | 2024-05-28 02:23:50 | 12:07:59 | | | smithi | main | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 | |
Failure Reason: hit max job timeout
pass | 7716418 | 2024-05-20 20:11:39 | 2024-05-27 14:16:11 | 2024-05-27 14:51:22 | 0:35:11 | 0:24:03 | 0:11:08 | smithi | main | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/classic msgr/async-v2only start tasks/rotate-keys} | 2 | |
pass | 7716419 | 2024-05-20 20:11:40 | 2024-05-27 14:18:12 | 2024-05-27 14:42:24 | 0:24:12 | 0:13:20 | 0:10:52 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7716420 | 2024-05-20 20:11:41 | 2024-05-27 14:18:32 | 2024-05-27 14:41:32 | 0:23:00 | 0:11:13 | 0:11:47 | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_domain} | 2 | |
fail | 7716421 | 2024-05-20 20:11:42 | 2024-05-27 14:23:58 | 2024-05-27 14:49:56 | 0:25:58 | 0:14:39 | 0:11:19 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason: "2024-05-27T14:45:47.863258+0000 mon.smithi059 (mon.0) 820 : cluster [WRN] Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
pass | 7716422 | 2024-05-20 20:11:43 | 2024-05-27 14:24:59 | 2024-05-27 14:54:41 | 0:29:42 | 0:18:40 | 0:11:02 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/connectivity task/test_monitoring_stack_basic} | 3 | |
pass | 7716423 | 2024-05-20 20:11:44 | 2024-05-27 14:25:19 | 2024-05-27 15:08:03 | 0:42:44 | 0:34:10 | 0:08:34 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7716424 | 2024-05-20 20:11:45 | 2024-05-27 14:25:30 | 2024-05-27 14:55:25 | 0:29:55 | 0:19:52 | 0:10:03 | smithi | main | ubuntu | 22.04 | orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_cephadm_timeout} | 1 | |
pass | 7716425 | 2024-05-20 20:11:46 | 2024-05-27 14:25:50 | 2024-05-27 14:59:12 | 0:33:22 | 0:20:45 | 0:12:37 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
fail | 7716426 | 2024-05-20 20:11:47 | 2024-05-27 14:26:00 | 2024-05-27 14:43:51 | 0:17:51 | 0:07:16 | 0:10:35 | smithi | main | ubuntu | 22.04 | orch/rook/smoke/{0-distro/ubuntu_22.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/1-node k8s/1.21 net/calico rook/1.7.2} | 1 | |
Failure Reason: Command failed on smithi119 with status 100: "sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt update && sudo apt install -y kubelet kubeadm kubectl bridge-utils"
fail | 7716427 | 2024-05-20 20:11:48 | 2024-05-27 14:26:31 | 2024-05-27 15:11:14 | 0:44:43 | 0:36:06 | 0:08:37 | smithi | main | centos | 9.stream | orch/cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async-v2only root} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi007 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acdd6bc00bfd8e7bf10533befba15ce193d11b90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
dead | 7716428 | 2024-05-20 20:11:49 | 2024-05-27 14:26:31 | 2024-05-28 02:33:12 | 12:06:41 | | | smithi | main | ubuntu | 22.04 | orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/classic} | 2 | |
pass | 7716429 | 2024-05-20 20:11:50 | 2024-05-27 14:26:32 | 2024-05-27 14:52:03 | 0:25:31 | 0:15:28 | 0:10:03 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 | |
fail | 7716430 | 2024-05-20 20:11:51 | 2024-05-27 14:27:12 | 2024-05-27 14:50:44 | 0:23:32 | 0:13:40 | 0:09:52 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/classic task/test_rgw_multisite} | 3 | |
Failure Reason: "2024-05-27T14:47:37.319178+0000 mon.a (mon.0) 506 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
pass | 7716431 | 2024-05-20 20:11:52 | 2024-05-27 14:27:43 | 2024-05-27 14:50:57 | 0:23:14 | 0:13:58 | 0:09:16 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
dead | 7716432 | 2024-05-20 20:11:53 | 2024-05-27 14:28:03 | 2024-05-27 14:30:27 | 0:02:24 | | | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_basic} | 2 | |
Failure Reason: Error reimaging machines: Failed to power on smithi193
pass | 7716433 | 2024-05-20 20:11:54 | 2024-05-27 14:29:24 | 2024-05-27 14:57:44 | 0:28:20 | 0:14:25 | 0:13:55 | smithi | main | centos | 9.stream | orch/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
pass | 7716434 | 2024-05-20 20:11:56 | 2024-05-27 14:32:14 | 2024-05-27 14:58:25 | 0:26:11 | 0:15:40 | 0:10:31 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
dead | 7716435 | 2024-05-20 20:11:57 | 2024-05-27 14:33:15 | 2024-05-27 14:34:19 | 0:01:04 | | | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
Failure Reason: Error reimaging machines: Failed to power on smithi123
fail | 7716436 | 2024-05-20 20:11:58 | 2024-05-27 14:33:15 | 2024-05-27 15:24:56 | 0:51:41 | 0:41:51 | 0:09:50 | smithi | main | ubuntu | 22.04 | orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi031 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acdd6bc00bfd8e7bf10533befba15ce193d11b90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 7716437 | 2024-05-20 20:11:59 | 2024-05-27 14:33:36 | 2024-05-27 15:06:29 | 0:32:53 | 0:22:31 | 0:10:22 | smithi | main | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/connectivity task/test_set_mon_crush_locations} | 3 | |
fail | 7716438 | 2024-05-20 20:12:00 | 2024-05-27 14:35:06 | 2024-05-27 15:24:29 | 0:49:23 | 0:38:22 | 0:11:01 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
pass | 7716439 | 2024-05-20 20:12:01 | 2024-05-27 14:35:27 | 2024-05-27 15:21:41 | 0:46:14 | 0:34:22 | 0:11:52 | smithi | main | centos | 9.stream | orch/cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} | 2 | |
fail | 7716440 | 2024-05-20 20:12:02 | 2024-05-27 14:36:47 | 2024-05-27 15:26:28 | 0:49:41 | 0:39:23 | 0:10:18 | smithi | main | centos | 9.stream | orch/cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignore_mgr_down ignorelist_health pg_health} supported-random-distros$/{centos_latest} tasks/nfs} | 1 | |
Failure Reason: "2024-05-27T15:09:24.896678+0000 mds.nfs-cephfs.smithi188.ntfuub (mds.0) 1 : cluster [WRN] client.15217 isn't responding to mclientcaps(revoke), ino 0x1 pending pAsLsXs issued pAsLsXsFs, sent 60.017393 seconds ago" in cluster log
pass | 7716441 | 2024-05-20 20:12:03 | 2024-05-27 14:37:28 | 2024-05-27 15:02:09 | 0:24:41 | 0:14:38 | 0:10:03 | smithi | main | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_orch_cli} | 1 | |
pass | 7716442 | 2024-05-20 20:12:04 | 2024-05-27 14:37:28 | 2024-05-27 14:58:53 | 0:21:25 | 0:12:15 | 0:09:10 | smithi | main | centos | 9.stream | orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream} 2-node-mgr agent/off orchestrator_cli} | 2 | |
pass | 7716443 | 2024-05-20 20:12:05 | 2024-05-27 14:38:08 | 2024-05-27 15:08:23 | 0:30:15 | 0:19:10 | 0:11:05 | smithi | main | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
fail | 7716444 | 2024-05-20 20:12:06 | 2024-05-27 14:38:39 | 2024-05-27 15:16:30 | 0:37:51 | 0:25:53 | 0:11:58 | smithi | main | centos | 9.stream | orch/cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} | 3 | |
Failure Reason: "2024-05-27T14:56:38.157591+0000 mon.a (mon.0) 206 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log
dead | 7716445 | 2024-05-20 20:12:07 | 2024-05-27 14:38:39 | 2024-05-27 14:40:33 | 0:01:54 | | | smithi | main | ubuntu | 22.04 | orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_domain} | 2 | |
Failure Reason: Error reimaging machines: Failed to power on smithi069
fail | 7716446 | 2024-05-20 20:12:08 | 2024-05-27 14:39:30 | 2024-05-27 15:04:36 | 0:25:06 | 0:16:48 | 0:08:18 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason: "2024-05-27T14:59:42.295307+0000 mon.smithi081 (mon.0) 850 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
pass | 7716447 | 2024-05-20 20:12:09 | 2024-05-27 14:39:40 | 2024-05-27 15:01:09 | 0:21:29 | 0:11:51 | 0:09:38 | smithi | main | centos | 9.stream | orch/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream_runc} 1-start 2-services/basic 3-final} | 1 | |
dead | 7716448 | 2024-05-20 20:12:10 | 2024-05-27 14:40:00 | 2024-05-27 14:42:15 | 0:02:15 | | | smithi | main | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 3 | |
Failure Reason: Error reimaging machines: Failed to power on smithi077
pass | 7716449 | 2024-05-20 20:12:12 | 2024-05-27 14:41:11 | 2024-05-27 15:08:38 | 0:27:27 | 0:16:38 | 0:10:49 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |
fail | 7716450 | 2024-05-20 20:12:13 | 2024-05-27 14:41:41 | 2024-05-27 15:03:45 | 0:22:04 | 0:11:25 | 0:10:39 | smithi | main | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/classic task/test_ca_signed_key} | 2 | |
Failure Reason: Command failed on smithi165 with status 5: 'sudo systemctl stop ceph-d40b0bb0-1c39-11ef-bc9b-c7b262605968@mon.a'
fail | 7716451 | 2024-05-20 20:12:14 | 2024-05-27 14:42:02 | 2024-05-27 15:11:56 | 0:29:54 | 0:20:39 | 0:09:15 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
Failure Reason: "2024-05-27T15:07:31.949649+0000 mon.smithi105 (mon.0) 832 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
pass | 7716452 | 2024-05-20 20:12:15 | 2024-05-27 14:42:12 | 2024-05-27 15:05:04 | 0:22:52 | 0:13:18 | 0:09:34 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7716453 | 2024-05-20 20:12:16 | 2024-05-27 14:42:12 | 2024-05-27 15:03:55 | 0:21:43 | 0:10:55 | 0:10:48 | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_basic} | 2 | |
fail | 7716454 | 2024-05-20 20:12:17 | 2024-05-27 14:42:13 | 2024-05-27 15:05:14 | 0:23:01 | 0:14:08 | 0:08:53 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason: "2024-05-27T15:02:11.180669+0000 mon.smithi112 (mon.0) 804 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
pass | 7716455 | 2024-05-20 20:12:18 | 2024-05-27 14:42:13 | 2024-05-27 15:05:09 | 0:22:56 | 0:14:35 | 0:08:21 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/connectivity task/test_cephadm} | 1 | |
pass | 7716456 | 2024-05-20 20:12:19 | 2024-05-27 14:42:14 | 2024-05-27 15:41:59 | 0:59:45 | 0:49:08 | 0:10:37 | smithi | main | centos | 9.stream | orch/cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} | 2 | |
fail | 7716457 | 2024-05-20 20:12:20 | 2024-05-27 15:10:43 | | 964 | | | smithi | main | centos | 9.stream | orch/cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 2 | |
Failure Reason: "2024-05-27T15:02:04.234724+0000 mon.a (mon.0) 450 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
fail | 7716458 | 2024-05-20 20:12:21 | 2024-05-27 14:42:54 | 2024-05-27 15:16:20 | 0:33:26 | 0:18:31 | 0:14:55 | smithi | main | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/root mon_election/classic msgr/async start tasks/rados_python} | 2 | |
Failure Reason: Command failed (workunit test rados/test_python.sh) on smithi012 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acdd6bc00bfd8e7bf10533befba15ce193d11b90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'
pass | 7716459 | 2024-05-20 20:12:22 | 2024-05-27 14:47:15 | 2024-05-27 15:30:32 | 0:43:17 | 0:33:06 | 0:10:11 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7716460 | 2024-05-20 20:12:23 | 2024-05-27 14:47:26 | 2024-05-27 15:23:53 | 0:36:27 | 0:25:00 | 0:11:27 | smithi | main | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli_mon} | 5 | |
pass | 7716461 | 2024-05-20 20:12:24 | 2024-05-27 14:48:16 | 2024-05-27 15:14:50 | 0:26:34 | 0:14:03 | 0:12:31 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 | |
pass | 7716462 | 2024-05-20 20:12:25 | 2024-05-27 14:49:57 | 2024-05-27 15:44:10 | 0:54:13 | 0:44:33 | 0:09:40 | smithi | main | ubuntu | 22.04 | orch/cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} | 2 | |
dead | 7716463 | 2024-05-20 20:12:26 | 2024-05-27 14:50:47 | 2024-05-27 14:51:51 | 0:01:04 | | | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nvmeof 3-final} | 2 | |
Failure Reason: Error reimaging machines: Failed to power on smithi181
pass | 7716464 | 2024-05-20 20:12:27 | 2024-05-27 14:50:48 | 2024-05-27 15:05:46 | 0:14:58 | 0:06:21 | 0:08:37 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/classic task/test_cephadm_repos} | 1 | |
dead | 7716465 | 2024-05-20 20:12:28 | 2024-05-27 14:50:58 | 2024-05-28 02:59:11 | 12:08:13 | | | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} | 2 | |
Failure Reason: hit max job timeout
pass | 7716466 | 2024-05-20 20:12:29 | 2024-05-27 14:51:28 | 2024-05-27 15:13:09 | 0:21:41 | 0:12:01 | 0:09:40 | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_res_dom} | 2 | |
dead | 7716467 | 2024-05-20 20:12:30 | 2024-05-27 14:52:09 | 2024-05-28 03:01:51 | 12:09:42 | | | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} | 2 | |
Failure Reason: hit max job timeout
fail | 7716468 | 2024-05-20 20:12:31 | 2024-05-27 14:52:29 | 2024-05-27 15:16:15 | 0:23:46 | 0:12:25 | 0:11:21 | smithi | main | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} | 3 | |
Failure Reason: "2024-05-27T15:12:37.437588+0000 mon.a (mon.0) 473 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
pass | 7716469 | 2024-05-20 20:12:32 | 2024-05-27 14:54:50 | 2024-05-27 15:22:09 | 0:27:19 | 0:15:41 | 0:11:38 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw 3-final} | 2 | |
dead | 7716470 | 2024-05-20 20:12:33 | 2024-05-27 14:57:11 | 2024-05-27 14:58:55 | 0:01:44 | | | smithi | main | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/connectivity task/test_extra_daemon_features} | 2 | |
Failure Reason: Error reimaging machines: Failed to power on smithi092
dead | 7716471 | 2024-05-20 20:12:34 | 2024-05-27 14:57:51 | 2024-05-27 14:58:55 | 0:01:04 | | | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason: Error reimaging machines: Failed to power on smithi079
dead | 7716472 | 2024-05-20 20:12:35 | 2024-05-27 14:57:52 | 2024-05-27 14:58:56 | 0:01:04 | | | smithi | main | ubuntu | 22.04 | orch/rook/smoke/{0-distro/ubuntu_22.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/3-node k8s/1.21 net/flannel rook/master} | 3 | |
Failure Reason: Error reimaging machines: Failed to power on smithi132
pass | 7716473 | 2024-05-20 20:12:36 | 2024-05-27 14:57:52 | 2024-05-27 15:19:36 | 0:21:44 | 0:12:45 | 0:08:59 | smithi | main | ubuntu | 22.04 | orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_adoption} | 1 | |
pass | 7716474 | 2024-05-20 20:12:37 | 2024-05-27 14:57:52 | 2024-05-27 15:21:17 | 0:23:25 | 0:13:23 | 0:10:02 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} | 2 | |
pass | 7716475 | 2024-05-20 20:12:38 | 2024-05-27 14:57:53 | 2024-05-27 15:22:11 | 0:24:18 | 0:12:47 | 0:11:31 | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_basic} | 2 | |
dead | 7716476 | 2024-05-20 20:12:39 | 2024-05-27 14:58:54 | 2024-05-27 15:00:18 | 0:01:24 | | | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} | 2 | |
Failure Reason: Error reimaging machines: Failed to power on smithi027
pass | 7716477 | 2024-05-20 20:12:40 | 2024-05-27 14:59:15 | 2024-05-27 15:32:28 | 0:33:13 | 0:24:20 | 0:08:53 | smithi | main | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/connectivity msgr/async-v1only start tasks/rotate-keys} | 2 | |
dead | 7716478 | 2024-05-20 20:12:41 | 2024-05-27 14:59:35 | 2024-05-27 15:19:13 | 0:19:38 | | | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/client-keyring 3-final} | 2 | |
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds
pass | 7716479 | 2024-05-20 20:12:42 | 2024-05-27 15:00:05 | 2024-05-27 15:47:27 | 0:47:22 | 0:35:09 | 0:12:13 | smithi | main | centos | 9.stream | orch/cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/snaps-few-objects fixed-2 msgr/async-v2only root} | 2 | |
fail | 7716480 | 2024-05-20 20:12:43 | 2024-05-27 15:02:16 | 2024-05-27 15:29:10 | 0:26:54 | 0:15:53 | 0:11:01 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/classic task/test_host_drain} | 3 | |
Failure Reason: "2024-05-27T15:24:11.344719+0000 mon.a (mon.0) 465 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
pass | 7716481 | 2024-05-20 20:12:44 | 2024-05-27 15:03:57 | 2024-05-27 15:38:15 | 0:34:18 | 0:24:45 | 0:09:33 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} | 2 | |
pass | 7716482 | 2024-05-20 20:12:45 | 2024-05-27 15:04:37 | 2024-05-27 15:30:25 | 0:25:48 | 0:14:20 | 0:11:28 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} | 2 | |
pass | 7716483 | 2024-05-20 20:12:46 | 2024-05-27 15:04:48 | 2024-05-27 16:10:46 | 1:05:58 | 0:56:38 | 0:09:20 | smithi | main | ubuntu | 22.04 | orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} | 2 | |
pass | 7716484 | 2024-05-20 20:12:47 | 2024-05-27 15:05:08 | 2024-05-27 15:33:02 | 0:27:54 | 0:18:39 | 0:09:15 | smithi | main | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} | 2 | |
pass | 7716485 | 2024-05-20 20:12:48 | 2024-05-27 15:05:18 | 2024-05-27 15:34:59 | 0:29:41 | 0:18:52 | 0:10:49 | smithi | main | ubuntu | 22.04 | orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_domain} | 2 | |
fail | 7716486 | 2024-05-20 20:12:49 | 2024-05-27 15:05:29 | 2024-05-27 15:38:01 | 0:32:32 | 0:21:48 | 0:10:44 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/jaeger 3-final} | 2 | |
Failure Reason: "2024-05-27T15:32:32.852537+0000 mon.smithi045 (mon.0) 781 : cluster [WRN] Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
pass | 7716487 | 2024-05-20 20:12:50 | 2024-05-27 15:06:29 | 2024-05-27 15:29:22 | 0:22:53 | 0:13:58 | 0:08:55 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/connectivity task/test_iscsi_container/{centos_9.stream test_iscsi_container}} | 1 | |
pass | 7716488 | 2024-05-20 20:12:51 | 2024-05-27 15:06:29 | 2024-05-27 15:52:30 | 0:46:01 | 0:34:38 | 0:11:23 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
pass | 7716489 | 2024-05-20 20:12:52 | 2024-05-27 15:07:10 | 2024-05-27 15:30:46 | 0:23:36 | 0:14:45 | 0:08:51 | smithi | main | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_cephadm_timeout} | 1 | |
pass | 7716490 | 2024-05-20 20:12:53 | 2024-05-27 15:08:10 | 2024-05-27 15:29:35 | 0:21:25 | 0:11:49 | 0:09:36 | smithi | main | centos | 9.stream | orch/cephadm/orchestrator_cli/{0-random-distro$/{centos_9.stream} 2-node-mgr agent/on orchestrator_cli} | 2 | |
pass | 7716491 | 2024-05-20 20:12:54 | 2024-05-27 15:08:31 | 2024-05-27 15:31:42 | 0:23:11 | 0:14:45 | 0:08:26 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/mirror 3-final} | 2 | |
pass | 7716492 | 2024-05-20 20:12:55 | 2024-05-27 15:08:41 | 2024-05-27 15:29:21 | 0:20:40 | 0:12:46 | 0:07:54 | smithi | main | centos | 9.stream | orch/cephadm/smoke-singlehost/{0-random-distro$/{centos_9.stream} 1-start 2-services/rgw 3-final} | 1 | |
pass | 7716493 | 2024-05-20 20:12:56 | 2024-05-27 15:08:41 | 2024-05-27 15:31:45 | 0:23:04 | 0:11:55 | 0:11:09 | smithi | main | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} | 3 | |
pass | 7716494 | 2024-05-20 20:12:57 | 2024-05-27 15:10:52 | 2024-05-27 15:36:07 | 0:25:15 | 0:15:27 | 0:09:48 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} | 2 | |
pass | 7716495 | 2024-05-20 20:12:58 | 2024-05-27 15:11:23 | 2024-05-27 15:35:01 | 0:23:38 | 0:13:19 | 0:10:19 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} | 2 | |
pass | 7716496 | 2024-05-20 20:12:59 | 2024-05-27 15:12:03 | 2024-05-27 15:34:08 | 0:22:05 | 0:11:26 | 0:10:39 | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_basic} | 2 | |
fail | 7716497 | 2024-05-20 20:13:00 | 2024-05-27 15:13:14 | 2024-05-27 15:58:37 | 0:45:23 | 0:35:36 | 0:09:47 | smithi | main | centos | 9.stream | orch/cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/rados_api_tests fixed-2 msgr/async root} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi092 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acdd6bc00bfd8e7bf10533befba15ce193d11b90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
fail | 7716498 | 2024-05-20 20:13:01 | 2024-05-27 15:13:34 | 2024-05-27 15:56:26 | 0:42:52 | 0:32:41 | 0:10:11 | smithi | main | centos | 9.stream | orch/cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_api_tests} | 2 | |
Failure Reason: Command failed (workunit test rados/test.sh) on smithi059 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acdd6bc00bfd8e7bf10533befba15ce193d11b90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
pass | 7716499 | 2024-05-20 20:13:02 | 2024-05-27 15:13:34 | 2024-05-27 15:49:06 | 0:35:32 | 0:25:59 | 0:09:33 | smithi | main | ubuntu | 22.04 | orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/classic task/test_monitoring_stack_basic} | 3 | |
dead | 7716500 | 2024-05-20 20:13:03 | 2024-05-27 15:13:35 | 2024-05-27 15:19:55 | 0:06:20 | | | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} | 2 | |
Failure Reason: Error reimaging machines: Expected smithi027's OS to be ubuntu 22.04 but found centos 9
pass | 7716501 | 2024-05-20 20:13:04 | 2024-05-27 15:13:35 | 2024-05-27 15:41:56 | 0:28:21 | 0:16:19 | 0:12:02 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} | 2 | |
pass | 7716502 | 2024-05-20 20:13:06 | 2024-05-27 15:13:35 | 2024-05-27 15:40:56 | 0:27:21 | 0:13:44 | 0:13:37 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/connectivity task/test_rgw_multisite} | 3 | |
fail | 7716503 | 2024-05-20 20:13:07 | 2024-05-27 15:15:16 | 2024-05-27 16:05:51 | 0:50:35 | 0:38:57 | 0:11:38 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |
Failure Reason:
reached maximum tries (51) after waiting for 300 seconds
pass | 7716504 | 2024-05-20 20:13:08 | 2024-05-27 15:16:17 | 2024-05-27 15:41:10 | 0:24:53 | 0:15:14 | 0:09:39 | smithi | main | centos | 9.stream | orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli} | 1 | |
pass | 7716505 | 2024-05-20 20:13:09 | 2024-05-27 15:16:17 | 2024-05-27 15:41:28 | 0:25:11 | 0:14:23 | 0:10:48 | smithi | main | centos | 9.stream | orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} | 2 | |
pass | 7716506 | 2024-05-20 20:13:10 | 2024-05-27 15:16:27 | 2024-05-27 15:38:22 | 0:21:55 | 0:12:30 | 0:09:25 | smithi | main | centos | 9.stream | orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_domain} | 2 | |
fail | 7716507 | 2024-05-20 20:13:11 | 2024-05-27 15:16:38 | 2024-05-27 15:43:34 | 0:26:56 | 0:16:54 | 0:10:02 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} | 2 | |
Failure Reason:
"2024-05-27T15:36:40.130111+0000 mon.smithi046 (mon.0) 840 : cluster [WRN] Health check failed: Failed to place 2 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
dead | 7716508 | 2024-05-20 20:13:12 | 2024-05-27 15:16:38 | 2024-05-27 15:22:23 | 0:05:45 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} | 2 | |||
Failure Reason:
Error reimaging machines: Failed to power on smithi157
fail | 7716509 | 2024-05-20 20:13:12 | 2024-05-27 15:21:19 | 2024-05-27 16:02:44 | 0:41:25 | 0:30:58 | 0:10:27 | smithi | main | centos | 9.stream | orch/cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} | 2 | |
Failure Reason:
"2024-05-27T15:48:35.849124+0000 mon.a (mon.0) 1699 : cluster [WRN] Health check failed: 1 Cephadm Agent(s) are not reporting. Hosts may be offline (CEPHADM_AGENT_DOWN)" in cluster log
fail | 7716510 | 2024-05-20 20:13:13 | 2024-05-27 15:21:50 | 2024-05-27 15:49:20 | 0:27:30 | 0:16:08 | 0:11:22 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/classic task/test_set_mon_crush_locations} | 3 | |
Failure Reason:
"2024-05-27T15:42:04.527993+0000 mon.a (mon.0) 399 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log
pass | 7716511 | 2024-05-20 20:13:14 | 2024-05-27 15:22:10 | 2024-05-27 16:36:06 | 1:13:56 | 1:02:41 | 0:11:15 | smithi | main | ubuntu | 22.04 | orch/cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v1only root} | 2 | |
fail | 7716512 | 2024-05-20 20:13:15 | 2024-05-27 15:22:21 | 2024-05-27 15:45:16 | 0:22:55 | 0:14:41 | 0:08:14 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} | 2 | |
Failure Reason:
"2024-05-27T15:42:11.127130+0000 mon.smithi053 (mon.0) 853 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
dead | 7716513 | 2024-05-20 20:13:16 | 2024-05-27 15:22:31 | 2024-05-28 03:32:24 | 12:09:53 | smithi | main | centos | 9.stream | orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} | 3 | |||
Failure Reason:
hit max job timeout
fail | 7716514 | 2024-05-20 20:13:17 | 2024-05-27 15:24:02 | 2024-05-27 16:05:28 | 0:41:26 | 0:32:44 | 0:08:42 | smithi | main | ubuntu | 22.04 | orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/packaged mon_election/connectivity msgr/async-v2only start tasks/rados_python} | 2 | |
Failure Reason:
Command failed (workunit test rados/test_python.sh) on smithi042 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=acdd6bc00bfd8e7bf10533befba15ce193d11b90 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 1h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test_python.sh'
pass | 7716515 | 2024-05-20 20:13:18 | 2024-05-27 15:24:02 | 2024-05-27 15:54:16 | 0:30:14 | 0:19:00 | 0:11:14 | smithi | main | ubuntu | 22.04 | orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-wait} | 2 | |
pass | 7716516 | 2024-05-20 20:13:19 | 2024-05-27 15:24:33 | 2024-05-27 15:52:16 | 0:27:43 | 0:18:25 | 0:09:18 | smithi | main | ubuntu | 22.04 | orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_res_basic} | 2 | |
fail | 7716517 | 2024-05-20 20:13:20 | 2024-05-27 15:25:03 | 2024-05-27 15:54:09 | 0:29:06 | 0:14:26 | 0:14:40 | smithi | main | centos | 9.stream | orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} | 2 | |
Failure Reason:
"2024-05-27T15:51:05.303965+0000 mon.smithi121 (mon.0) 797 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log
fail | 7716518 | 2024-05-20 20:13:21 | 2024-05-27 15:29:04 | 2024-05-27 15:46:49 | 0:17:45 | 0:07:23 | 0:10:22 | smithi | main | ubuntu | 22.04 | orch/rook/smoke/{0-distro/ubuntu_22.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/1-node k8s/1.21 net/host rook/1.7.2} | 1 | |
Failure Reason:
Command failed on smithi188 with status 100: "sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt update && sudo apt install -y kubelet kubeadm kubectl bridge-utils"
pass | 7716519 | 2024-05-20 20:13:22 | 2024-05-27 15:29:04 | 2024-05-27 15:50:37 | 0:21:33 | 0:11:47 | 0:09:46 | smithi | main | centos | 9.stream | orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/connectivity task/test_ca_signed_key} | 2 | |
dead | 7716520 | 2024-05-20 20:13:23 | 2024-05-27 15:29:05 | 2024-05-27 15:30:08 | 0:01:03 | smithi | main | centos | 9.stream | orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} | 2 | |||
Failure Reason:
Error reimaging machines: Failed to power on smithi055
pass | 7716521 | 2024-05-20 20:13:24 | 2024-05-27 15:29:05 | 2024-05-27 16:23:43 | 0:54:38 | 0:43:02 | 0:11:36 | smithi | main | ubuntu | 22.04 | orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_orch_cli_mon} | 5 | |
pass | 7716522 | 2024-05-20 20:13:25 | 2024-05-27 15:29:05 | 2024-05-27 15:59:19 | 0:30:14 | 0:19:47 | 0:10:27 | smithi | main | ubuntu | 22.04 | orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs2 3-final} | 2 |