Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi037.front.sepia.ceph.com smithi True True 2024-07-27 07:31:35.471493 scheduled_teuthology@teuthology centos 9.stream x86_64 /home/teuthworker/archive/teuthology-2024-07-26_21:08:20-orch-squid-distro-default-smithi/7820182
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7821095 2024-07-26 23:51:12 2024-07-27 04:39:03 2024-07-27 06:02:49 1:23:46 1:13:09 0:10:37 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
Failure Reason:

error during quiesce thrashing: Error quiescing set '1b6feddd': 110 (ETIMEDOUT)

pass 7821033 2024-07-26 23:50:04 2024-07-27 03:37:55 2024-07-27 04:39:05 1:01:10 0:48:49 0:12:21 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/suites/ffsb}} 3
fail 7820900 2024-07-26 23:47:39 2024-07-27 01:33:27 2024-07-27 03:38:47 2:05:20 1:54:29 0:10:51 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/admin} 2
Failure Reason:

"2024-07-27T02:57:25.507137+0000 mds.c (mds.0) 1 : cluster [WRN] client could not reconnect as file system flag refuse_client_session is set" in cluster log

pass 7820835 2024-07-26 23:46:28 2024-07-27 00:45:14 2024-07-27 01:35:15 0:50:01 0:39:38 0:10:23 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/test_o_trunc}} 3
pass 7820623 2024-07-26 21:34:23 2024-07-27 00:06:08 2024-07-27 00:45:17 0:39:09 0:23:56 0:15:13 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 4
fail 7820562 2024-07-26 21:33:11 2024-07-26 23:34:37 2024-07-27 00:08:08 0:33:31 0:23:38 0:09:53 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
Failure Reason:

generator didn't yield

fail 7820515 2024-07-26 21:32:16 2024-07-26 23:07:00 2024-07-26 23:34:52 0:27:52 0:17:15 0:10:37 smithi main centos 9.stream rados/thrash-old-clients/{0-distro$/{centos_9.stream} 0-size-min-size-overrides/2-size-2-min-size 1-install/reef backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

generator didn't yield

fail 7820485 2024-07-26 21:31:40 2024-07-26 22:46:35 2024-07-26 23:07:36 0:21:01 0:11:57 0:09:04 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/deploy-raw} 2
Failure Reason:

generator didn't yield

pass 7820443 2024-07-26 21:30:48 2024-07-26 22:25:06 2024-07-26 22:46:47 0:21:41 0:12:29 0:09:12 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} tasks/libcephsqlite} 2
fail 7820396 2024-07-26 21:29:53 2024-07-26 21:58:43 2024-07-26 22:25:16 0:26:33 0:16:21 0:10:12 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/repave-all} 2
Failure Reason:

generator didn't yield

fail 7820365 2024-07-26 21:29:15 2024-07-26 21:37:35 2024-07-26 21:59:13 0:21:38 0:12:00 0:09:38 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

generator didn't yield

running 7820182 2024-07-26 21:09:37 2024-07-27 07:31:35 2024-07-27 07:52:33 0:21:09 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/rgw-ingress 3-final} 2
pass 7819743 2024-07-26 16:25:27 2024-07-27 07:03:05 2024-07-27 07:31:25 0:28:20 0:16:50 0:11:30 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-delay msgr/async-v1only objectstore/{bluestore-options/write$/{write_v2} bluestore/bluestore-comp-lz4} rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/write_fadvise_dontneed} 4
fail 7819685 2024-07-26 16:08:44 2024-07-26 16:37:20 2024-07-26 17:36:42 0:59:22 0:48:56 0:10:26 smithi main centos 9.stream fs:volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/clone}} 2
Failure Reason:

Test failure: test_subvolume_snapshot_info_if_clone_pending_for_no_group (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)

pass 7819666 2024-07-26 16:06:50 2024-07-26 16:12:29 2024-07-26 16:37:40 0:25:11 0:13:41 0:11:30 smithi main ubuntu 22.04 fs:volumes/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/volumes/{overrides test/finisher_per_module}} 2
pass 7819641 2024-07-26 16:05:28 2024-07-27 06:02:09 2024-07-27 06:22:40 0:20:31 0:10:49 0:09:42 smithi main ubuntu 20.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/filestore-xfs overrides rgw_pool_type/replicated tasks/rgw_bucket_quota ubuntu_latest} 2
pass 7819388 2024-07-26 13:26:10 2024-07-26 19:38:27 2024-07-26 21:38:15 1:59:48 1:50:48 0:09:00 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-radosbench} 2
fail 7819358 2024-07-26 13:25:54 2024-07-26 19:20:45 2024-07-26 19:39:06 0:18:21 0:08:17 0:10:04 smithi main ubuntu 20.04 rados/thrash-old-clients/{0-distro$/{ubuntu_20.04} 0-size-min-size-overrides/3-size-2-min-size 1-install/reef backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/osd-delay rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

no results found at https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F20.04%2Fx86_64&flavor=default&sha1=387735b04c21d62e21975a50a7f6c06a95b3cf6d

pass 7819306 2024-07-26 13:25:26 2024-07-26 18:46:01 2024-07-26 19:20:47 0:34:46 0:22:58 0:11:48 smithi main centos 9.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 3
pass 7819196 2024-07-26 13:24:36 2024-07-26 17:36:42 2024-07-26 18:47:23 1:10:41 1:00:20 0:10:21 smithi main ubuntu 22.04 rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/ec extra-conf/none min-compat-client/default msgr-failures/few objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} workloads/c_api_tests} 3