Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi144.front.sepia.ceph.com smithi True True 2024-05-14 10:10:35.218925 scheduled_vshankar@teuthology ubuntu 22.04 x86_64 /home/teuthworker/archive/vshankar-2024-05-14_07:04:04-fs-wip-vshankar-testing-20240509.053109-debug-testing-default-smithi/7705703
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7705703 2024-05-14 07:05:08 2024-05-14 10:08:54 2024-05-14 10:37:18 0:28:24 0:14:55 0:13:29 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-3-client conf/{client mds mgr mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/ior-shared-file} 5
fail 7705677 2024-05-14 07:04:46 2024-05-14 09:08:19 2024-05-14 10:00:09 0:51:50 0:40:48 0:11:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/fs/norstats}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7705633 2024-05-14 06:00:21 2024-05-14 08:22:13 2024-05-14 09:06:09 0:43:56 0:33:16 0:10:40 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/packaged mon_election/classic msgr/async-v2only start tasks/rados_api_tests} 2
Failure Reason:

Command failed (workunit test rados/test.sh) on smithi039 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=18b668805c5d41fc898242192a532a221db3fc6f TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh'
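For readability, the same failed workunit invocation split into its steps; this is a reconstruction of the one-line command above, with the paths and CEPH_REF hash taken verbatim from the failure reason:

# Create and enter the per-client scratch directory, then run the workunit
# under the adjust-ulimits and ceph-coverage wrappers with a 3-hour timeout.
# CEPH_REF is the commit under test; the paths are the teuthology defaults from the log.
mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp &&
cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp &&
CEPH_CLI_TEST_DUP_COMMAND=1 \
CEPH_REF=18b668805c5d41fc898242192a532a221db3fc6f \
TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" \
PATH=$PATH:/usr/sbin \
CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 \
CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 \
CEPH_MNT=/home/ubuntu/cephtest/mnt.0 \
adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage \
    timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh

The dbench.sh and fsx.sh workunit failures further down follow the same invocation pattern, differing only in the script, the CEPH_REF hash, and the timeout (3h vs 6h).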

fail 7705603 2024-05-14 05:59:41 2024-05-14 07:51:18 2024-05-14 08:14:28 0:23:10 0:12:38 0:10:32 smithi main centos 9.stream orch:cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} 3
Failure Reason:

"2024-05-14T08:09:47.982855+0000 mon.a (mon.0) 475 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7705563 2024-05-14 05:58:49 2024-05-14 07:13:38 2024-05-14 07:42:13 0:28:35 0:15:01 0:13:34 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason:

"2024-05-14T07:37:58.625034+0000 mon.smithi144 (mon.0) 784 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7705511 2024-05-14 05:20:53 2024-05-14 05:46:42 2024-05-14 07:16:57 1:30:15 1:18:49 0:11:26 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/dbench}} 3
pass 7705489 2024-05-14 04:50:24 2024-05-14 05:17:21 2024-05-14 05:49:41 0:32:20 0:22:03 0:10:17 smithi main centos 9.stream rados:thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/cache-snaps-balanced} 4
fail 7705379 2024-05-14 00:31:43 2024-05-14 00:48:51 2024-05-14 01:41:58 0:53:07 0:42:25 0:10:42 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi045 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=2ba596c3826798bb3a2f22bc9b02848b92dbe579 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

pass 7705337 2024-05-13 22:11:17 2024-05-14 03:27:21 2024-05-14 04:08:46 0:41:25 0:34:57 0:06:28 smithi main rhel 8.6 orch/cephadm/with-work/{0-distro/rhel_8.6_container_tools_3.0 fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} 2
pass 7705287 2024-05-13 22:10:31 2024-05-14 02:56:09 2024-05-14 03:29:02 0:32:53 0:25:50 0:07:03 smithi main rhel 8.6 orch/cephadm/workunits/{0-distro/rhel_8.6_container_tools_3.0 agent/off mon_election/classic task/test_ca_signed_key} 2
fail 7705250 2024-05-13 22:09:57 2024-05-14 02:25:31 2024-05-14 02:43:46 0:18:15 0:06:04 0:12:11 smithi main ubuntu 20.04 orch/rook/smoke/{0-distro/ubuntu_20.04 0-kubeadm 0-nvme-loop 1-rook 2-workload/radosbench cluster/3-node k8s/1.21 net/calico rook/1.7.2} 3
Failure Reason:

Command failed on smithi026 with status 100: "sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl && sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg && echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt update && sudo apt install -y kubelet kubeadm kubectl bridge-utils"
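For readability, the failed kubeadm bootstrap step from the log line above, split into its individual commands; the package list, key URL, and repository line are verbatim from the failure reason:

# Prerequisites for fetching the repository key
sudo apt update &&
sudo apt install -y apt-transport-https ca-certificates curl &&
# Fetch the Google-hosted Kubernetes apt key and add the kubernetes-xenial repo
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg \
    https://packages.cloud.google.com/apt/doc/apt-key.gpg &&
echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main' |
    sudo tee /etc/apt/sources.list.d/kubernetes.list &&
# Install the kubeadm toolchain
sudo apt update &&
sudo apt install -y kubelet kubeadm kubectl bridge-utils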

pass 7705203 2024-05-13 22:09:14 2024-05-14 01:54:39 2024-05-14 02:27:37 0:32:58 0:26:18 0:06:40 smithi main rhel 8.6 orch/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
pass 7705184 2024-05-13 22:08:56 2024-05-14 00:21:55 2024-05-14 00:48:52 0:26:57 0:18:29 0:08:28 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/nfs-ingress2 3-final} 2
fail 7705150 2024-05-13 21:33:02 2024-05-13 23:55:07 2024-05-14 00:18:48 0:23:41 0:12:16 0:11:25 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_fsx thrashosds-health} 4
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi026 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c3e55145f4f4b3a500bb66ee08c938a215bc231d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'

pass 7705115 2024-05-13 21:32:28 2024-05-13 23:17:47 2024-05-13 23:55:28 0:37:41 0:28:26 0:09:15 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-distros/ubuntu_latest tasks/snaps-few-objects thrashosds-health} 4
pass 7705064 2024-05-13 21:11:10 2024-05-13 22:47:55 2024-05-13 23:17:36 0:29:41 0:20:53 0:08:48 smithi main ubuntu 22.04 orch/cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_basic} 2
pass 7704993 2024-05-13 21:10:00 2024-05-13 22:13:35 2024-05-13 22:47:48 0:34:13 0:20:29 0:13:44 smithi main centos 9.stream orch/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_host_drain} 3
pass 7704942 2024-05-13 21:09:11 2024-05-13 21:31:26 2024-05-13 22:01:14 0:29:48 0:17:50 0:11:58 smithi main centos 9.stream orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 3
pass 7704694 2024-05-13 18:29:12 2024-05-13 18:30:49 2024-05-13 18:56:35 0:25:46 0:13:45 0:12:01 smithi main ubuntu 22.04 krbd:unmap/{ceph/ceph clusters/separate-client conf kernels/single-major-on tasks/unmap} 2
fail 7704674 2024-05-13 07:44:03 2024-05-13 08:58:01 2024-05-13 10:02:39 1:04:38 0:53:24 0:11:14 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds