Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi165.front.sepia.ceph.com smithi True True 2024-07-27 07:36:51.438460 scheduled_teuthology@teuthology centos 9.stream x86_64 /home/teuthworker/archive/teuthology-2024-07-26_21:08:20-orch-squid-distro-default-smithi/7820192
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7821062 2024-07-26 23:50:36 2024-07-27 04:09:23 2024-07-27 07:36:49 3:27:26 3:17:38 0:09:48 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/kernel_untar_build}} 3
pass 7821037 2024-07-26 23:50:08 2024-07-27 03:40:17 2024-07-27 04:09:23 0:29:06 0:18:34 0:10:32 smithi main centos 9.stream fs/libcephfs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-1-client-coloc conf/{client mds mgr mon osd} distro/{centos_latest} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/libcephfs/{frag test}} 2
pass 7820968 2024-07-26 23:48:53 2024-07-27 02:36:17 2024-07-27 03:40:09 1:03:52 0:54:09 0:09:43 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-quiesce/no 6-workunit/suites/ffsb}} 3
pass 7820909 2024-07-26 23:47:49 2024-07-27 01:37:21 2024-07-27 02:36:39 0:59:18 0:45:51 0:13:27 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-quiesce/with-quiesce 6-workunit/fs/test_o_trunc}} 3
fail 7820856 2024-07-26 23:46:51 2024-07-27 00:54:16 2024-07-27 01:41:09 0:46:53 0:36:54 0:09:59 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/blogbench}} 3
Failure Reason:

"2024-07-27T01:29:55.999391+0000 mds.b (mds.0) 29 : cluster [WRN] Scrub error on inode 0x100000007e2 (/volumes/qa/sv_0/882277e1-e1d3-4802-98cd-5c05ed2c2ac3/client.0/tmp/blogbench-1.0/src/blogtest_in/blog-7) see mds.b log and `damage ls` output for details" in cluster log

pass 7820674 2024-07-26 21:35:21 2024-07-27 00:34:53 2024-07-27 00:55:21 0:20:28 0:10:14 0:10:14 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/ceph-post-file mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
fail 7820629 2024-07-26 21:34:30 2024-07-27 00:11:11 2024-07-27 00:35:05 0:23:54 0:11:49 0:12:05 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
Failure Reason:

generator didn't yield

fail 7820579 2024-07-26 21:33:32 2024-07-26 23:44:01 2024-07-27 00:11:46 0:27:45 0:16:31 0:11:14 smithi main ubuntu 22.04 rados/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
Failure Reason:

generator didn't yield

pass 7820511 2024-07-26 21:32:12 2024-07-26 23:03:48 2024-07-26 23:43:51 0:40:03 0:27:44 0:12:19 smithi main ubuntu 22.04 rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 4
pass 7820479 2024-07-26 21:31:33 2024-07-26 22:44:43 2024-07-26 23:05:57 0:21:14 0:11:48 0:09:26 smithi main centos 9.stream rados/singleton-nomsgr/{all/admin_socket_output mon_election/classic rados supported-random-distro$/{centos_latest}} 1
pass 7820411 2024-07-26 21:30:11 2024-07-26 22:09:01 2024-07-26 22:44:46 0:35:45 0:24:26 0:11:19 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/lockdep} 2
pass 7820364 2024-07-26 21:29:13 2024-07-26 21:36:54 2024-07-26 22:08:56 0:32:02 0:20:57 0:11:05 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-small-objects-overwrites} 4
running 7820192 2024-07-26 21:09:48 2024-07-27 07:35:01 2024-07-27 07:51:59 0:17:39 smithi main centos 9.stream orch/cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
pass 7819529 2024-07-26 13:28:33 2024-07-26 21:10:50 2024-07-26 21:37:00 0:26:10 0:16:40 0:09:30 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-wait} 2
pass 7819479 2024-07-26 13:27:40 2024-07-26 20:40:06 2024-07-26 21:11:49 0:31:43 0:21:48 0:09:55 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/small-objects-balanced} 2
dead 7819074 2024-07-26 13:23:41 2024-07-26 14:22:42 2024-07-26 20:42:05 6:19:23 smithi main ubuntu 22.04 rbd/encryption/{cache/none clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/ec features/defaults msgr-failures/few objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} workloads/qemu_xfstests_none_luks2} 3
Failure Reason:

hit max job timeout

pass 7819059 2024-07-26 13:23:33 2024-07-26 14:03:24 2024-07-26 14:23:38 0:20:14 0:10:35 0:09:39 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} tasks/scrub_test} 2
pass 7819041 2024-07-26 13:23:21 2024-07-26 13:37:24 2024-07-26 14:03:18 0:25:54 0:13:00 0:12:54 smithi main ubuntu 22.04 rbd/device/{base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} msgr-failures/few objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} thrashers/default thrashosds-health workloads/rbd_nbd} 3
fail 7818845 2024-07-26 04:23:25 2024-07-26 12:37:18 2024-07-26 13:39:39 1:02:21 0:48:17 0:14:04 smithi main centos 9.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{k-testing mount ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health session_timeout} ranks/1 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::create_aligned_in_mempool(unsigned int, unsigned int, int)

pass 7818827 2024-07-26 04:23:05 2024-07-26 12:14:56 2024-07-26 12:40:16 0:25:20 0:12:34 0:12:46 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/mds_creation_retry} 2