Name:          smithi016.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2024-05-14 13:00:36.285242
Locked By:     scheduled_vshankar@teuthology
OS Type:       centos
OS Version:    9
Arch:          x86_64
Description:   /home/teuthworker/archive/vshankar-2024-05-14_07:04:04-fs-wip-vshankar-testing-20240509.053109-debug-testing-default-smithi/7705820

Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7705899 2024-05-14 09:38:32 2024-05-14 09:40:54 2024-05-14 10:29:45 0:48:51 0:38:54 0:09:57 smithi main centos 9.stream rbd:nvmeof/{base/install centos_latest cluster/{fixed-4 openstack} conf/{disable-pool-app} workloads/nvmeof_initiator} 4
Failure Reason: reached maximum tries (301) after waiting for 300 seconds
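This is a bounded-retry timeout: a polled condition never became true within the allowed number of attempts. A minimal, generic sketch of that pattern follows; the helper name and the "condition" callable are hypothetical illustrations, not teuthology's actual implementation.

```python
import time

class MaxTriesReached(Exception):
    """Raised when a polled condition never becomes true."""

def wait_for(condition, tries=301, sleep=1.0):
    """Poll condition() until it returns True; give up after `tries` attempts."""
    for attempt in range(tries):
        if condition():
            return
        if attempt < tries - 1:
            # Sleep between attempts, not after the final one.
            time.sleep(sleep)
    raise MaxTriesReached(
        f"reached maximum tries ({tries}) after waiting for "
        f"{int((tries - 1) * sleep)} seconds"
    )
```

With 301 tries and a one-second sleep between them, the total wait in this sketch matches the 300 seconds reported above.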

running 7705820 2024-05-14 07:06:41 2024-05-14 13:00:36 2024-05-14 16:05:59 3:07:23 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/no 6-workunit/kernel_untar_build}} 3
pass 7705792 2024-05-14 07:06:19 2024-05-14 12:14:18 2024-05-14 13:01:01 0:46:43 0:37:08 0:09:35 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/fsstress}} 3
fail 7705757 2024-05-14 07:05:51 2024-05-14 11:27:37 2024-05-14 12:07:19 0:39:42 0:26:27 0:13:15 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/no 6-workunit/fs/misc}} 3
Failure Reason: No module named 'tasks.cephfs.fuse_mount'
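"No module named 'tasks.cephfs.fuse_mount'" is an ordinary Python import failure: the qa task package was not importable on the job's Python path. A quick, generic way to check whether the module resolves locally; the checkout path below is a placeholder, not taken from this run.

```python
import importlib.util
import sys

# Placeholder: point this at the qa/ directory of the ceph checkout under test.
sys.path.insert(0, "/path/to/ceph/qa")

try:
    spec = importlib.util.find_spec("tasks.cephfs.fuse_mount")
except ModuleNotFoundError as exc:
    # A missing parent package ("tasks" or "tasks.cephfs") raises here.
    spec = None
    print(f"parent package missing: {exc}")

if spec is None:
    print("tasks.cephfs.fuse_mount is not importable from the current sys.path")
else:
    print(f"module resolves to {spec.origin}")
```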

pass 7705728 2024-05-14 07:05:28 2024-05-14 10:38:29 2024-05-14 11:30:12 0:51:43 0:38:33 0:13:10 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/data-scan} 2
pass 7705679 2024-05-14 07:04:48 2024-05-14 09:08:20 2024-05-14 09:38:54 0:30:34 0:13:20 0:17:14 smithi main centos 8.stream fs/upgrade/nofs/{bluestore-bitmap centos_8.stream conf/{client mds mgr mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn pg_health} tasks/{0-pacific 1-upgrade}} 1
pass 7705660 2024-05-14 07:04:32 2024-05-14 08:48:21 2024-05-14 09:14:00 0:25:39 0:16:00 0:09:39 smithi main centos 9.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_pjd} 2
pass 7705630 2024-05-14 06:00:17 2024-05-14 08:18:21 2024-05-14 08:49:00 0:30:39 0:18:03 0:12:36 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_res_basic} 2
pass 7705589 2024-05-14 05:59:23 2024-05-14 07:39:41 2024-05-14 08:21:13 0:41:32 0:32:27 0:09:05 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rados_python} 2
pass 7705534 2024-05-14 05:21:00 2024-05-14 06:33:48 2024-05-14 07:39:31 1:05:43 0:52:53 0:12:50 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/direct_io}} 3
fail 7705460 2024-05-14 00:34:25 2024-05-14 01:54:24 2024-05-14 02:21:58 0:27:34 0:15:54 0:11:40 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason: "2024-05-14T02:12:42.809129+0000 mon.a (mon.0) 405 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7705376 2024-05-14 00:31:39 2024-05-14 00:46:49 2024-05-14 01:44:45 0:57:56 0:45:27 0:12:29 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-workunit/postgres}} 3
Failure Reason: "2024-05-14T01:21:07.975682+0000 mds.b (mds.0) 19 : cluster [WRN] Scrub error on inode 0x1000000000f (/volumes/qa/sv_1/0ffef377-1049-4fb2-bae8-a3a96102db42/postgres/data/pg_subtrans) see mds.b log and `damage ls` output for details" in cluster log

pass 7705333 2024-05-13 22:11:13 2024-05-14 03:23:59 2024-05-14 03:52:38 0:28:39 0:17:43 0:10:56 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools 0-nvme-loop 1-start 2-services/nfs 3-final} 2
pass 7705291 2024-05-13 22:10:35 2024-05-14 02:58:41 2024-05-14 03:24:53 0:26:12 0:16:28 0:09:44 smithi main centos 8.stream orch/cephadm/smoke-roleless/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop 1-start 2-services/iscsi 3-final} 2
pass 7705247 2024-05-13 22:09:54 2024-05-14 02:25:29 2024-05-14 02:59:06 0:33:37 0:26:30 0:07:07 smithi main rhel 8.6 orch/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/off mon_election/classic task/test_host_drain} 3
pass 7705147 2024-05-13 21:32:59 2024-05-13 23:53:26 2024-05-14 00:46:53 0:53:27 0:41:27 0:12:00 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zlib powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_misc thrashosds-health} 4
pass 7705068 2024-05-13 21:11:14 2024-05-13 22:48:37 2024-05-13 23:54:23 1:05:46 0:54:48 0:10:58 smithi main centos 9.stream orch/cephadm/thrash/{0-distro/centos_9.stream_runc 1-start 2-thrash 3-tasks/radosbench fixed-2 msgr/async-v2only root} 2
pass 7704915 2024-05-13 21:08:44 2024-05-13 21:31:16 2024-05-13 22:48:52 1:17:36 1:03:22 0:14:14 smithi main ubuntu 22.04 orch/cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/classic msgr/async-v1only start tasks/rados_api_tests} 2
pass 7704701 2024-05-13 18:34:03 2024-05-13 18:40:57 2024-05-13 19:26:58 0:46:01 0:25:15 0:20:46 smithi main ubuntu 22.04 krbd:fsx/{ceph/ceph clusters/3-node conf features/no-deep-flatten ms_mode$/{legacy} objectstore/bluestore-bitmap striping/fancy/{msgr-failures/few randomized-striping-on} tasks/fsx-1-client} 3
fail 7704646 2024-05-13 07:43:25 2024-05-13 08:39:58 2024-05-13 09:11:40 0:31:42 0:21:22 0:10:20 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-bucket 3-final} 2
Failure Reason: "2024-05-13T09:07:28.832996+0000 mon.smithi001 (mon.0) 934 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log