Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi132.front.sepia.ceph.com smithi True True 2024-05-09 06:34:23.410728 scheduled_teuthology@teuthology x86_64 /home/teuthworker/archive/teuthology-2024-05-08_21:24:04-fs-squid-distro-default-smithi/7698556
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7699492 2024-05-09 03:10:22 2024-05-09 03:27:18 2024-05-09 04:28:40 1:01:22 0:48:04 0:13:18 smithi main centos 9.stream orch:cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

"2024-05-09T04:00:19.068007+0000 mon.a (mon.0) 928 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) ['daemon iscsi.foo.smithi078.vnplzc on smithi078 is in unknown state'] (CEPHADM_FAILED_DAEMON)" in cluster log

fail 7699086 2024-05-08 22:10:48 2024-05-09 04:38:13 2024-05-09 06:08:01 1:29:48 1:20:03 0:09:45 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/dbench}} 3
Failure Reason:

"2024-05-09T05:11:42.199199+0000 mds.b (mds.0) 39 : cluster [WRN] Scrub error on inode 0x10000000215 (/volumes/qa/sv_0/67c374de-5ca8-4911-b220-9505c406f92a/client.0/tmp/clients/client0/~dmtmp/COREL) see mds.b log and `damage ls` output for details" in cluster log

pass 7698990 2024-05-08 22:09:07 2024-05-09 01:48:12 2024-05-09 03:27:22 1:39:10 1:29:10 0:10:00 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/fs/snaps}} 2
pass 7698937 2024-05-08 22:08:12 2024-05-09 01:00:51 2024-05-09 01:48:02 0:47:11 0:38:23 0:08:48 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/suites/pjd}} 3
fail 7698872 2024-05-08 22:07:03 2024-05-08 23:58:50 2024-05-09 00:47:52 0:49:02 0:39:15 0:09:47 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/suites/blogbench}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

waiting 7698556 2024-05-08 21:25:20 2024-05-09 06:34:03 2024-05-09 06:34:24 0:04:33 0:04:33 smithi main centos 9.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
pass 7698536 2024-05-08 21:24:59 2024-05-09 06:11:21 2024-05-09 06:34:18 0:22:57 0:13:53 0:09:04 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mds 2-workunit/suites/fsstress}} 2
pass 7698179 2024-05-08 19:26:52 2024-05-08 23:15:25 2024-05-08 23:59:27 0:44:02 0:36:25 0:07:37 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/blogbench}} 3
pass 7698135 2024-05-08 19:26:00 2024-05-08 22:31:50 2024-05-08 23:15:35 0:43:45 0:37:16 0:06:29 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/fs/norstats}} 3
pass 7698072 2024-05-08 19:24:45 2024-05-08 21:29:14 2024-05-08 22:31:50 1:02:36 0:49:01 0:13:35 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/blogbench}} 3
pass 7698011 2024-05-08 19:23:36 2024-05-08 20:20:19 2024-05-08 21:29:56 1:09:37 0:55:43 0:13:54 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/postgres}} 3
pass 7697962 2024-05-08 19:22:41 2024-05-08 19:34:59 2024-05-08 20:20:10 0:45:11 0:36:26 0:08:45 smithi main rhel 8.6 fs/workload/{0-rhel_8 begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{k-stock rhel_8} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/fs/norstats}} 3
pass 7697938 2024-05-08 17:45:47 2024-05-08 18:35:07 2024-05-08 18:56:01 0:20:54 0:12:27 0:08:27 smithi main centos 9.stream rgw/singleton/{all/radosgw-admin frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated supported-random-distro$/{centos_latest}} 2
pass 7697812 2024-05-08 16:08:06 2024-05-08 16:13:02 2024-05-08 16:47:28 0:34:26 0:20:11 0:14:15 smithi main centos 9.stream rbd/librbd/{cache/writearound clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/ec extra-conf/none min-compat-client/default msgr-failures/few objectstore/bluestore-low-osd-mem-target supported-random-distro$/{centos_latest} workloads/python_api_tests} 3
fail 7697791 2024-05-08 15:06:38 2024-05-08 17:19:07 2024-05-08 18:28:30 1:09:23 0:59:41 0:09:42 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados tasks/rados_api_tests validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

pass 7697744 2024-05-08 15:05:41 2024-05-08 16:47:32 2024-05-08 17:19:12 0:31:40 0:22:09 0:09:31 smithi main ubuntu 22.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/few rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/fastread thrashosds-health workloads/ec-small-objects-overwrites} 2
pass 7697652 2024-05-08 15:03:52 2024-05-08 15:52:05 2024-05-08 16:13:53 0:21:48 0:12:03 0:09:45 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-low-osd-mem-target openstack scheduler/dmclock_1Shard_16Threads settings/optimized ubuntu_latest workloads/radosbench_4M_write} 1
pass 7697613 2024-05-08 15:03:05 2024-05-08 15:32:46 2024-05-08 15:52:15 0:19:29 0:10:01 0:09:28 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/export-after-evict mon_election/connectivity rados supported-random-distro$/{ubuntu_latest}} 1
pass 7697571 2024-05-08 15:02:11 2024-05-08 15:02:46 2024-05-08 15:32:44 0:29:58 0:15:36 0:14:22 smithi main centos 9.stream rados/cephadm/smoke/{0-distro/centos_9.stream 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
fail 7697479 2024-05-08 13:39:35 2024-05-08 14:09:29 2024-05-08 14:30:12 0:20:43 0:08:57 0:11:46 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
Failure Reason:

Command failed on smithi080 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:7c8f650b36e258f639fa4a83becade57cbfd2009-aarch64 pull'