Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi188.front.sepia.ceph.com smithi True True 2024-04-23 20:19:20.731223 scheduled_teuthology@teuthology ubuntu 22.04 x86_64 /home/teuthworker/archive/teuthology-2024-04-23_20:16:13-rbd-main-distro-default-smithi/7670143
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7670143 2024-04-23 20:18:14 2024-04-23 20:19:20 2024-04-23 20:40:57 0:23:15 smithi main ubuntu 22.04 rbd/encryption/{cache/writearound clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/ec features/defaults msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} workloads/qemu_xfstests_luks2_luks1} 3
fail 7670101 2024-04-23 17:45:32 2024-04-23 17:54:01 2024-04-23 19:12:20 1:18:19 1:10:28 0:07:51 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi050 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=57e0caf76fdb7ab8b1358588f292d08519163844 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'
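Stripped of the teuthology environment setup, the failing step reduces to running the dbench workunit script under a 3h timeout. A simplified sketch of the core invocation, assuming the qa clone and mount are still in place on smithi050:

    cd /home/ubuntu/cephtest/mnt.0/client.0/tmp
    timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh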

pass 7670037 2024-04-23 17:18:56 2024-04-23 17:36:29 2024-04-23 17:55:15 0:18:46 0:12:18 0:06:28 smithi main centos 9.stream rgw/lua/{beast bluestore-bitmap fixed-2 ignore-pg-availability overrides supported-distros/{centos_latest} tasks/{0-install test_lua}} 2
fail 7669895 2024-04-23 14:20:41 2024-04-23 17:21:58 2024-04-23 17:35:44 0:13:46 0:05:19 0:08:27 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/small-objects-balanced} 4
Failure Reason:

Command failed on smithi145 with status 1: 'sudo yum -y install ceph-mgr-dashboard'
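Exit status 1 from yum here usually means the package could not be resolved from the repos configured on the node. A quick manual check on smithi145 (a sketch, not part of the job itself):

    sudo yum repolist enabled
    sudo yum provides ceph-mgr-dashboard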

fail 7669729 2024-04-23 14:17:45 2024-04-23 15:48:06 2024-04-23 17:12:41 1:24:35 1:13:54 0:10:41 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
Failure Reason:

"2024-04-23T16:30:00.000089+0000 mon.a (mon.0) 1644 : cluster [WRN] Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log

pass 7669681 2024-04-23 14:16:54 2024-04-23 15:16:53 2024-04-23 15:49:02 0:32:09 0:20:19 0:11:50 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/redirect} 4
pass 7669464 2024-04-23 01:24:13 2024-04-23 01:33:34 2024-04-23 02:01:08 0:27:34 0:20:02 0:07:32 smithi main centos 9.stream crimson-rados/rbd/{clusters/fixed-1 crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph tasks/rbd_python_api_tests_old_format} 1
fail 7669260 2024-04-22 22:47:16 2024-04-22 23:39:18 2024-04-23 00:05:11 0:25:53 0:14:13 0:11:40 smithi main ubuntu 22.04 orch:cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi167 with status 2: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:43be020184947e53516056c9931e1ac5bdbbb1a5 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 575c0a38-0104-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4'
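Exit status 2 is propagated from ceph-volume running inside the cephadm shell. The zap could be retried by hand on smithi167 (a sketch, assuming the cluster with that fsid is still deployed on the node):

    sudo cephadm shell --fsid 575c0a38-0104-11ef-bc93-c7b262605968 -- ceph-volume lvm zap /dev/vg_nvme/lv_4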

fail 7669209 2024-04-22 22:46:22 2024-04-22 23:08:15 2024-04-22 23:30:20 0:22:05 0:13:13 0:08:52 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-04-22T23:21:50.319982+0000 mon.a (mon.0) 330 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon osd.0 on smithi019 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

pass 7669131 2024-04-22 22:10:53 2024-04-23 02:00:58 2024-04-23 02:46:10 0:45:12 0:35:18 0:09:54 smithi main centos 8.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
pass 7669045 2024-04-22 22:09:32 2024-04-23 00:58:47 2024-04-23 01:34:06 0:35:19 0:24:46 0:10:33 smithi main centos 8.stream orch/cephadm/with-work/{0-distro/centos_8.stream_container_tools fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_python} 2
fail 7669018 2024-04-22 22:09:06 2024-04-23 00:41:17 2024-04-23 00:57:14 0:15:57 0:06:02 0:09:55 smithi main ubuntu 20.04 orch/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/flannel rook/1.7.2} 3
Failure Reason:

Command failed on smithi044 with status 100: "sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt update && sudo apt install -y kubelet kubeadm kubectl bridge-utils"
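Exit status 100 is apt failing, most likely at one of the apt update steps. Note that the legacy Kubernetes repository at apt.kubernetes.io / packages.cloud.google.com has been deprecated in favor of pkgs.k8s.io, so a present-day equivalent of this setup would look roughly like the sketch below (the v1.28 channel is an arbitrary example, not what the k8s/1.21 and rook/1.7.2 facets of this job pin):

    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt update && sudo apt install -y kubelet kubeadm kubectl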

fail 7668974 2024-04-22 21:33:04 2024-04-23 00:10:04 2024-04-23 00:28:54 0:18:50 0:11:47 0:07:03 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_fsx thrashosds-health} 4
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi044 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c66b8bf2efd3f3988ac1851474c2f98eb2ca30d9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

pass 7668930 2024-04-22 21:32:21 2024-04-22 22:37:16 2024-04-22 23:08:07 0:30:51 0:22:59 0:07:52 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_ffsb thrashosds-health} 4
fail 7668856 2024-04-22 21:10:42 2024-04-22 22:05:54 2024-04-22 22:27:17 0:21:23 0:09:13 0:12:10 smithi main ubuntu 22.04 orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_extra_daemon_features} 2
Failure Reason:

Command failed on smithi167 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'

fail 7668824 2024-04-22 21:10:09 2024-04-22 21:50:20 2024-04-22 22:01:10 0:10:50 0:03:41 0:07:09 smithi main centos 9.stream orch/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 2
Failure Reason:

Command failed on smithi016 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f pull'
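This failure and the one for job 7668856 above reference the same ceph-ci image tag. A first check is whether the image is reachable and pullable from the node at all (a sketch, assuming podman as the container engine on these nodes):

    sudo podman pull quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:430e09df97c8fc7dc2b2ae424f68ed11366c540f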

fail 7668763 2024-04-22 21:09:07 2024-04-22 21:22:34 2024-04-22 21:48:24 0:25:50 0:17:14 0:08:36 smithi main centos 9.stream orch/cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/repo_digest 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

Command failed on smithi165 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid dccb09a4-00ef-11ef-bc93-c7b262605968 -e sha1=430e09df97c8fc7dc2b2ae424f68ed11366c540f -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''
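The layered shell quoting makes this hard to read; with the escapes unwrapped, the check that fails is essentially (a sketch, with $sha1 supplied via the -e sha1=... argument above):

    ceph versions | jq -e '.overall | keys' | grep $sha1

i.e. after the upgrade the cluster is expected to report the target build 430e09df97c8fc7dc2b2ae424f68ed11366c540f among its running versions.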

dead 7668737 2024-04-22 20:13:22 2024-04-23 03:09:56 2024-04-23 15:20:14 12:10:18 smithi main centos 9.stream orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
Failure Reason:

hit max job timeout

pass 7668691 2024-04-22 20:12:40 2024-04-23 02:46:03 2024-04-23 03:11:22 0:25:19 0:18:04 0:07:15 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 7668644 2024-04-22 20:11:54 2024-04-22 21:03:31 2024-04-22 21:22:26 0:18:55 0:12:05 0:06:50 smithi main centos 9.stream orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/repave-all} 2