Name: smithi081.front.sepia.ceph.com
Machine Type: smithi
Up: True
Locked: False
Locked Since:
Locked By:
OS Type: centos
OS Version: 9
Arch: x86_64
Description: /home/teuthworker/archive/mchangir-2024-04-16_16:32:10-fs-wip-mchangir-qa-debug-resource-temporarily-unavailable-issue-distro-default-smithi/7658442
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7658442 2024-04-16 16:33:16 2024-04-16 16:34:02 2024-04-16 17:31:12 0:57:10 0:46:16 0:10:54 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi040 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=b137e4a675398e295567f9652210865a4d573fac TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

pass 7658414 2024-04-16 14:55:20 2024-04-16 14:59:54 2024-04-16 15:58:15 0:58:21 0:44:16 0:14:05 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/quiesce} 2
fail 7658341 2024-04-16 13:03:51 2024-04-16 13:07:41 2024-04-16 13:56:07 0:48:26 0:36:02 0:12:24 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi081 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=9b42bfbeda298874861ece590aa565cf64ac8c45 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

pass 7658305 2024-04-16 12:54:46 2024-04-16 14:34:24 2024-04-16 14:59:58 0:25:34 0:14:53 0:10:41 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-keepalive-only 3-final} 2
pass 7658274 2024-04-16 12:54:11 2024-04-16 14:08:49 2024-04-16 14:34:30 0:25:41 0:14:23 0:11:18 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
dead 7658220 2024-04-16 12:38:56 2024-04-16 15:58:04 2024-04-16 15:59:09 0:01:05 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi081

pass 7658119 2024-04-16 12:37:37 2024-04-16 12:38:37 2024-04-16 13:08:26 0:29:49 0:19:53 0:09:56 smithi main centos 9.stream fs/cephadm/renamevolume/{0-start 1-rename distro/single-container-host overrides/{ignorelist_health pg_health}} 2
fail 7658105 2024-04-16 12:13:16 2024-04-16 12:14:16 2024-04-16 12:35:27 0:21:11 0:10:34 0:10:37 smithi main centos 9.stream fs:functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/admin} 2
Failure Reason:

Test failure: test_per_client_labeled_perf_counters_io (tasks.cephfs.test_admin.TestLabeledPerfCounters), test_per_client_labeled_perf_counters_io (tasks.cephfs.test_admin.TestLabeledPerfCounters)

fail 7658080 2024-04-16 11:21:00 2024-04-16 11:21:42 2024-04-16 11:42:29 0:20:47 0:10:23 0:10:24 smithi main centos 9.stream fs:functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/admin} 2
Failure Reason:

Test failure: test_per_client_labeled_perf_counters_io (tasks.cephfs.test_admin.TestLabeledPerfCounters), test_per_client_labeled_perf_counters_io (tasks.cephfs.test_admin.TestLabeledPerfCounters)

pass 7658065 2024-04-16 10:05:27 2024-04-16 10:06:08 2024-04-16 10:32:40 0:26:32 0:15:51 0:10:41 smithi main ubuntu 22.04 fs:functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/admin} 2
fail 7657960 2024-04-16 07:22:25 2024-04-16 07:23:07 2024-04-16 08:53:07 1:30:00 1:19:58 0:10:02 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
Failure Reason:

reached maximum tries (501) after waiting for 3000 seconds

pass 7657881 2024-04-16 05:01:29 2024-04-16 05:01:42 2024-04-16 05:47:57 0:46:15 0:33:56 0:12:19 smithi main ubuntu 22.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/libcephfs_interface_tests}} 3
fail 7657868 2024-04-16 00:31:19 2024-04-16 00:35:37 2024-04-16 03:18:43 2:43:06 2:34:20 0:08:46 smithi main centos 9.stream upgrade:reef-x:stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} 2
Failure Reason:

"2024-04-16T01:20:00.000154+0000 mon.a (mon.0) 1086 : cluster 3 [WRN] OSDMAP_FLAGS: noscrub flag(s) set" in cluster log

fail 7657721 2024-04-15 22:09:25 2024-04-16 00:04:43 2024-04-16 00:24:50 0:20:07 0:06:05 0:14:02 smithi main ubuntu 20.04 orch/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/none cluster/3-node k8s/1.21 net/calico rook/1.7.2} 3
Failure Reason:

Command failed on smithi062 with status 100: "sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt update && sudo apt install -y kubelet kubeadm kubectl bridge-utils"

pass 7657667 2024-04-15 21:33:09 2024-04-15 23:31:35 2024-04-16 00:05:57 0:34:22 0:23:37 0:10:45 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-distros/centos_latest tasks/snaps-many-objects thrashosds-health} 4
pass 7657640 2024-04-15 21:32:43 2024-04-15 23:03:09 2024-04-15 23:31:45 0:28:36 0:17:50 0:10:46 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-distros/ubuntu_latest tasks/admin_socket_objecter_requests thrashosds-health} 4
fail 7657601 2024-04-15 21:11:29 2024-04-15 22:32:17 2024-04-15 22:51:42 0:19:25 0:04:47 0:14:38 smithi main centos 9.stream orch/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:a9a752df26c63acad72e1b3569fd79a515ca0765 pull'

fail 7657536 2024-04-15 21:10:23 2024-04-15 22:01:14 2024-04-15 22:30:59 0:29:45 0:17:44 0:12:01 smithi main centos 9.stream orch/cephadm/mgr-nfs-upgrade/{0-centos_9.stream 1-bootstrap/17.2.0 1-start 2-nfs 3-upgrade-with-workload 4-final} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e7d16278-fb75-11ee-bc8f-c7b262605968 -e sha1=a9a752df26c63acad72e1b3569fd79a515ca0765 -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | keys\'"\'"\' | grep $sha1\''

fail 7657499 2024-04-15 21:09:46 2024-04-15 21:34:16 2024-04-15 21:50:03 0:15:47 0:04:27 0:11:20 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
Failure Reason:

Command failed on smithi081 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:a9a752df26c63acad72e1b3569fd79a515ca0765 pull'

pass 7657356 2024-04-15 20:11:45 2024-04-15 21:04:02 2024-04-15 21:34:29 0:30:27 0:19:42 0:10:45 smithi main ubuntu 22.04 orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2