Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi096.front.sepia.ceph.com smithi True True 2022-10-12 02:00:34.292405 scheduled_teuthology@teuthology ubuntu 20.04 x86_64 /home/teuthworker/archive/teuthology-2022-10-11_04:17:02-fs-pacific-distro-default-smithi/7062792
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7063162 2022-10-11 18:08:14 2022-10-11 19:09:27 2022-10-11 19:28:56 0:19:29 0:10:03 0:09:26 smithi main ubuntu 20.04 rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_transit 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability} 1
dead 7062924 2022-10-11 05:00:38 2022-10-11 05:41:59 2022-10-11 17:50:30 12:08:31 smithi main ubuntu 20.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/kclient_workunit_suites_dbench}} 3
Failure Reason:

hit max job timeout

running 7062792 2022-10-11 04:23:36 2022-10-12 02:00:24 2022-10-12 14:04:24 48 days, 16:23:35 smithi main ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{frag prefetch_dirfrags/no races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/5 tasks/{1-thrash/mds 2-workunit/suites/ffsb}} 2
pass 7062767 2022-10-11 04:23:05 2022-10-12 01:30:18 2022-10-12 02:00:32 0:30:14 0:14:55 0:15:19 smithi main ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-2-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/mdtest} 4
pass 7062751 2022-10-11 04:22:46 2022-10-12 01:11:09 2022-10-12 01:34:56 0:23:47 0:13:26 0:10:21 smithi main ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/mds-flush} 2
fail 7062724 2022-10-11 04:22:13 2022-10-12 00:40:03 2022-10-12 01:11:31 0:31:28 0:24:48 0:06:40 smithi main rhel 8.4 fs/volumes/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/volumes/{overrides test/basic}} 2
Failure Reason:

Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)

pass 7062695 2022-10-11 04:21:37 2022-10-11 23:59:15 2022-10-12 00:40:14 0:40:59 0:13:17 0:27:42 smithi main centos 8.stream fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/no standby-replay tasks/{0-check-counter workunit/suites/fsync-tester} wsync/{no}} 3
pass 7062681 2022-10-11 04:21:20 2022-10-11 23:44:17 2022-10-12 00:06:04 0:21:47 0:14:37 0:07:10 smithi main rhel 8.4 fs/thrash/multifs/{begin clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{frag multifs session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} tasks/{1-thrash/mds 2-workunit/iozone}} 2
pass 7062655 2022-10-11 04:20:48 2022-10-11 23:20:34 2022-10-11 23:44:50 0:24:16 0:12:52 0:11:24 smithi main ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/auto-repair} 2
pass 7062623 2022-10-11 04:20:09 2022-10-11 22:48:06 2022-10-11 23:21:07 0:33:01 0:26:48 0:06:13 smithi main centos 8.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_8.stream_container_tools conf/{client mds mon osd} overrides/{pg-warn whitelist_health whitelist_wrongly_marked_down} roles tasks/{0-from/v16.2.4 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client 3-upgrade-with-workload 4-verify}} 2
fail 7062575 2022-10-11 04:19:10 2022-10-11 21:53:47 2022-10-11 22:47:00 0:53:13 0:43:51 0:09:22 smithi main rhel 8.4 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{centos_8} mount/kclient/{mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{no_client_pidfile whitelist_health whitelist_wrongly_marked_down} tasks/snap-schedule} 2
Failure Reason:

"2022-10-11T22:10:50.595349+0000 mon.a (mon.0) 467 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log

pass 7062546 2022-10-11 04:18:35 2022-10-11 21:29:41 2022-10-11 21:55:41 0:26:00 0:13:13 0:12:47 smithi main ubuntu 20.04 fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/1 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} 3
pass 7062526 2022-10-11 04:18:11 2022-10-11 21:06:39 2022-10-11 21:32:07 0:25:28 0:15:11 0:10:17 smithi main ubuntu 20.04 fs/thrash/workloads/{begin clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount overrides/{distro/testing/{flavor/ubuntu_latest k-testing} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{frag prefetch_dirfrags/yes races session_timeout thrashosds-health whitelist_health whitelist_wrongly_marked_down} ranks/1 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
pass 7062496 2022-10-11 04:17:34 2022-10-11 19:58:58 2022-10-11 21:06:38 1:07:40 0:44:56 0:22:44 smithi main ubuntu 20.04 fs/multiclient/{begin clusters/1-mds-3-client conf/{client mds mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-bitmap overrides/{whitelist_health whitelist_wrongly_marked_down} tasks/cephfs_misc_tests} 5
pass 7062482 2022-10-11 04:17:17 2022-10-11 19:28:09 2022-10-11 20:11:06 0:42:57 0:19:26 0:23:31 smithi main centos 8.stream fs/workload/{begin clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount overrides/{distro/testing/{flavor/centos_latest k-testing} ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{frag osd-asserts session_timeout whitelist_health whitelist_wrongly_marked_down} ranks/3 scrub/yes standby-replay tasks/{0-check-counter workunit/suites/iozone} wsync/{yes}} 3
fail 7062377 2022-10-11 04:01:11 2022-10-11 04:26:41 2022-10-11 05:41:52 1:15:11 1:05:00 0:10:11 smithi main centos 8.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-1c-client conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/workunit/snaps} 2
Failure Reason:

"1665464111.1379025 mon.a (mon.0) 217 : cluster [WRN] Health check failed: 1 MDSs behind on trimming (MDS_TRIM)" in cluster log

fail 7062025 2022-10-10 20:40:16 2022-10-10 20:42:48 2022-10-10 20:59:33 0:16:45 0:05:55 0:10:50 smithi main fs:upgrade/upgraded_client/from_pacific/{bluestore-bitmap clusters/{1-mds-2-client} conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-pacific 1-client-upgrade 2-workload/kernel_cfuse_workunits_dbench_iozone}} 4
Failure Reason:

Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=pacific

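The failure above means shaman had no ready "pacific" build for ubuntu/22.04/x86_64 (pacific predates Ubuntu 22.04 builds). The lookup can be reproduced by issuing the same search query; the sketch below only constructs and inspects the query URL rather than performing the HTTP request. `shaman_search_url` is an illustrative helper name, not part of teuthology.

```python
from urllib.parse import urlencode, urlsplit, parse_qs

SHAMAN_API = "https://shaman.ceph.com/api/search/"

def shaman_search_url(ref, distro_version, arch, project="ceph", flavor="default"):
    """Build a shaman build-search URL of the form used in the failing job.

    The 'distros' parameter is encoded as distro/version/arch,
    e.g. ubuntu/22.04/x86_64.
    """
    params = {
        "status": "ready",
        "project": project,
        "flavor": flavor,
        "distros": f"{distro_version}/{arch}",
        "ref": ref,
    }
    return SHAMAN_API + "?" + urlencode(params)

# The failed job queried exactly these parameters:
url = shaman_search_url("pacific", "ubuntu/22.04", "x86_64")
qs = parse_qs(urlsplit(url).query)
print(qs["distros"][0])
```

An empty result set from this query is what surfaces as "Failed to fetch package version" in the job log.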
dead 7062016 2022-10-10 20:29:40 2022-10-10 20:36:37 2022-10-10 20:44:51 0:08:14 smithi main rhel 8.6 fs:upgrade/upgraded_client/from_pacific/{bluestore-bitmap centos_latest clusters/{1-mds-2-client} conf/{client mds mon osd} kclient-overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-pacific 1-client-upgrade 2-workload/kernel_cfuse_workunits_untarbuild_blogbench}} 4
Failure Reason:

SSH connection to smithi110 was lost: 'sudo yum install -y kernel'

dead 7062015 2022-10-10 20:29:39 2022-10-10 20:34:17 2022-10-10 20:43:40 0:09:23 smithi main centos 8.stream fs:upgrade/upgraded_client/from_pacific/{bluestore-bitmap centos_latest clusters/{1-mds-2-client} conf/{client mds mon osd} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-pacific 1-client-upgrade 2-workload/kernel_cfuse_workunits_dbench_iozone}} 4
Failure Reason:

Error reimaging machines: 'ssh_keyscan smithi079.front.sepia.ceph.com' reached maximum tries (5) after waiting for 5 seconds

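The reimage failure above comes from a bounded retry around the SSH key scan: 5 attempts with a 5-second wait between them. A generic retry helper with the same shape can be sketched in shell; `retry` is an illustrative function, not a teuthology command, and the `ssh-keyscan` line is shown commented out since it needs network access to the target host.

```shell
#!/bin/sh
# Run a command up to MAX_TRIES times, sleeping WAIT_S seconds between
# attempts -- the same 5-tries / 5-second pattern seen in the error above.
retry() {
    max_tries=$1; wait_s=$2; shift 2
    i=1
    while :; do
        "$@" && return 0
        [ "$i" -ge "$max_tries" ] && break
        i=$((i + 1))
        sleep "$wait_s"
    done
    echo "reached maximum tries ($max_tries) after waiting for $wait_s seconds" >&2
    return 1
}

# Real-world use (the host from the failed job):
# retry 5 5 ssh-keyscan -T 5 smithi079.front.sepia.ceph.com

# Demo with a command that succeeds immediately:
retry 3 0 true && echo "ok"
```

When every attempt fails, the helper emits the same "reached maximum tries" message that teuthology reports as the reimage error.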
fail 7061952 2022-10-10 20:22:35 2022-10-11 17:48:23 2022-10-11 19:09:30 1:21:07 1:13:28 0:07:39 smithi main centos 8.stream rados:valgrind-leaks/{1-start 2-inject-leak/none centos_latest} 1
Failure Reason:

saw valgrind issues