Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi027.front.sepia.ceph.com smithi True True 2024-04-23 13:17:39.740759 adking@teuthology centos 9 x86_64 None
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7669509 2024-04-23 09:50:08 2024-04-23 09:51:10 2024-04-23 10:16:13 0:25:03 0:14:00 0:11:03 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/with-quiesce 2-workunit/suites/pjd}} 2
pass 7669475 2024-04-23 05:01:23 2024-04-23 05:01:23 2024-04-23 05:22:00 0:20:37 0:10:37 0:10:00 smithi main centos 9.stream smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{centos_latest} tasks/{0-install test/kclient_workunit_suites_fsstress}} 3
fail 7669255 2024-04-22 22:47:11 2024-04-22 23:34:36 2024-04-22 23:57:44 0:23:08 0:09:38 0:13:30 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} 3
Failure Reason:

Command failed on smithi027 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 62b1ddaa-0103-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
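
For reference, a hedged annotation of the failing invocation, with the image and FSID copied from the log above (reading exit status 2 as ceph-volume refusing the zap, e.g. the LV being absent or still held open, is an inference, not something stated in the job log):

    # cephadm shell starts a one-off container with the cluster's config and
    # keyring mounted, then runs the trailing command inside it:
    sudo /home/ubuntu/cephtest/cephadm \
        --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 62b1ddaa-0103-11ef-bc93-c7b262605968 \
        -- ceph-volume lvm zap /dev/vg_nvme/lv_4
    # To check whether the target LV still exists on the node:
    sudo lvs vg_nvme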

fail 7669168 2024-04-22 22:45:39 2024-04-22 22:49:56 2024-04-22 23:33:43 0:43:47 0:28:03 0:15:44 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

"2024-04-22T23:14:27.340558+0000 mon.smithi027 (mon.0) 121 : cluster [WRN] Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)" in cluster log

pass 7669158 2024-04-22 22:11:19 2024-04-23 02:16:20 2024-04-23 02:40:59 0:24:39 0:13:31 0:11:08 smithi main centos 8.stream orch/cephadm/smoke-small/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
fail 7669105 2024-04-22 22:10:29 2024-04-23 01:45:46 2024-04-23 02:06:28 0:20:42 0:06:02 0:14:40 smithi main ubuntu 20.04 orch/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/1-node k8s/1.21 net/calico rook/1.7.2} 1
Failure Reason:

Command failed on smithi027 with status 100: "sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt update && sudo apt install -y kubelet kubeadm kubectl bridge-utils"
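
The same command fails identically in job 7668884 below. A likely cause (an inference, not stated in the log): the legacy apt.kubernetes.io / packages.cloud.google.com repositories were shut down in early 2024, so the first "apt update" exits with status 100. A sketch of the community-owned replacement repository; note it only publishes Kubernetes v1.24 and later, so this 1.21-pinned job would also need a version bump:

    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.24/deb/Release.key \
        | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.24/deb/ /' \
        | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt update && sudo apt install -y kubelet kubeadm kubectl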

pass 7669072 2024-04-22 22:09:58 2024-04-23 01:14:33 2024-04-23 01:47:10 0:32:37 0:22:31 0:10:06 smithi main rhel 8.6 orch/cephadm/smoke-roleless/{0-distro/rhel_8.6_container_tools_3.0 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
pass 7669013 2024-04-22 22:09:02 2024-04-23 00:37:04 2024-04-23 01:16:42 0:39:38 0:30:22 0:09:16 smithi main rhel 8.6 orch/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
pass 7668969 2024-04-22 21:32:59 2024-04-23 00:06:31 2024-04-23 00:37:33 0:31:02 0:15:31 0:15:31 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-lz4 powercycle/default supported-distros/ubuntu_latest tasks/admin_socket_objecter_requests thrashosds-health} 4
fail 7668884 2024-04-22 21:11:10 2024-04-22 22:21:47 2024-04-22 22:45:26 0:23:39 0:07:33 0:16:06 smithi main ubuntu 22.04 orch/rook/smoke/{0-distro/ubuntu_22.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/1.7.2} 3
Failure Reason:

Command failed on smithi027 with status 100: "sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt update && sudo apt install -y kubelet kubeadm kubectl bridge-utils"

fail 7668813 2024-04-22 21:09:57 2024-04-22 21:50:16 2024-04-22 22:06:49 0:16:33 0:06:06 0:10:27 smithi main centos 9.stream orch/cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
Failure Reason:

Command failed on smithi027 with status 1: 'sudo cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'
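
This fails at the image-pull step, before any test work starts; job 7668769 below fails the same way. A hedged way to retry the same pull by hand on the node (podman is the container engine cephadm drives on CentOS 9):

    sudo podman pull quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f
    # A non-zero exit here would point at the lab-internal quay mirror being
    # unreachable or the tag missing, rather than at the test itself.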

fail 7668769 2024-04-22 21:09:13 2024-04-22 21:29:47 2024-04-22 21:42:50 0:13:03 0:03:40 0:09:23 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/jaeger 3-final} 2
Failure Reason:

Command failed on smithi027 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'

pass 7668684 2024-04-22 20:12:33 2024-04-23 02:39:40 2024-04-23 03:17:25 0:37:45 0:28:36 0:09:09 smithi main centos 9.stream orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli_mon} 5
pass 7668636 2024-04-22 20:11:46 2024-04-22 20:57:48 2024-04-22 21:30:19 0:32:31 0:18:39 0:13:52 smithi main ubuntu 22.04 orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/deploy-raw} 2
pass 7668600 2024-04-22 20:11:12 2024-04-22 20:34:31 2024-04-22 20:59:02 0:24:31 0:15:16 0:09:15 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
pass 7668553 2024-04-22 20:10:29 2024-04-22 20:12:00 2024-04-22 20:34:34 0:22:34 0:14:17 0:08:17 smithi main centos 9.stream orch/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
pass 7668481 2024-04-22 19:31:35 2024-04-22 19:32:33 2024-04-22 20:04:59 0:32:26 0:16:15 0:16:11 smithi main ubuntu 22.04 rgw/lua/{beast bluestore-bitmap fixed-2 ignore-pg-availability overrides supported-distros/{ubuntu_latest} tasks/{0-install test_lua}} 2
pass 7668428 2024-04-22 18:21:24 2024-04-22 18:22:00 2024-04-22 18:43:44 0:21:44 0:12:56 0:08:48 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados tasks/mon_recovery validater/lockdep} 2
fail 7668390 2024-04-22 14:52:07 2024-04-22 15:25:29 2024-04-22 16:11:47 0:46:18 0:37:12 0:09:06 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds
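
The wording matches teuthology's safe_while retry helper (an attribution from the message format, not from the job log): 51 tries separated by a 6-second sleep is 50 waits, i.e. 300 seconds. A shell equivalent of that retry budget, with a hypothetical probe standing in for the real upgrade check:

    for try in $(seq 1 51); do
        check_upgrade_done && break   # hypothetical probe; the real task polls upgrade status
        if [ "$try" -eq 51 ]; then
            echo "reached maximum tries (51) after waiting for 300 seconds" >&2
            exit 1
        fi
        sleep 6
    done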

fail 7668323 2024-04-22 14:50:58 2024-04-22 14:52:10 2024-04-22 15:20:07 0:27:57 0:17:37 0:10:20 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
Failure Reason:

"2024-04-22T15:13:51.223434+0000 mon.a (mon.0) 445 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.1 on smithi027 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log