Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi008.front.sepia.ceph.com smithi True True 2024-05-14 02:25:30.466252 scheduled_teuthology@teuthology ubuntu 20.04 x86_64 /home/teuthworker/archive/teuthology-2024-05-13_22:08:02-orch-reef-distro-default-smithi/7705248
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7705448 2024-05-14 00:34:12 2024-05-14 01:46:28 2024-05-14 02:16:42 0:30:14 0:16:18 0:13:56 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/on mon_election/connectivity task/test_set_mon_crush_locations} 3
Failure Reason:

"2024-05-14T02:09:50.659057+0000 mon.a (mon.0) 400 : cluster [WRN] Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)" in cluster log

running 7705248 2024-05-13 22:09:55 2024-05-14 02:25:30 2024-05-14 03:13:30 0:48:06 smithi main ubuntu 20.04 orch/cephadm/with-work/{0-distro/ubuntu_20.04 fixed-2 mode/packaged mon_election/classic msgr/async start tasks/rados_api_tests} 2
pass 7705148 2024-05-13 21:33:00 2024-05-13 23:54:26 2024-05-14 00:32:05 0:37:39 0:27:37 0:10:02 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-zstd powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_suites_ffsb thrashosds-health} 4
pass 7705112 2024-05-13 21:32:25 2024-05-13 23:15:35 2024-05-13 23:54:47 0:39:12 0:28:29 0:10:43 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-distros/centos_latest tasks/rados_api_tests thrashosds-health} 4
pass 7705082 2024-05-13 21:11:29 2024-05-13 22:52:33 2024-05-13 23:15:37 0:23:04 0:13:10 0:09:54 smithi main centos 9.stream orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/on fixed-2 mon_election/classic start} 3
pass 7704981 2024-05-13 21:09:48 2024-05-13 22:13:31 2024-05-13 22:52:29 0:38:58 0:28:52 0:10:06 smithi main centos 9.stream orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/classic task/test_orch_cli_mon} 5
pass 7704924 2024-05-13 21:08:53 2024-05-13 21:31:20 2024-05-13 22:06:57 0:35:37 0:24:17 0:11:20 smithi main ubuntu 22.04 orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rmdir-reactivate} 2
fail 7704651 2024-05-13 07:43:32 2024-05-13 08:43:21 2024-05-13 09:06:20 0:22:59 0:14:28 0:08:31 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} 1
Failure Reason:

"2024-05-13T09:04:49.197045+0000 mon.a (mon.0) 551 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

pass 7704598 2024-05-13 07:42:20 2024-05-13 08:05:45 2024-05-13 08:43:22 0:37:37 0:24:50 0:12:47 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} 5
pass 7704541 2024-05-13 05:54:39 2024-05-13 06:07:06 2024-05-13 06:38:05 0:30:59 0:16:43 0:14:16 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/with-quiesce 2-workunit/suites/pjd}} 2
pass 7704436 2024-05-12 22:06:11 2024-05-13 15:15:41 2024-05-13 15:44:11 0:28:30 0:18:35 0:09:55 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
pass 7704341 2024-05-12 22:04:34 2024-05-13 14:35:09 2024-05-13 15:16:10 0:41:01 0:35:28 0:05:33 smithi main rhel 8.6 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{rhel_8} thrashers/minsize_recovery thrashosds-health workloads/ec-rados-plugin=jerasure-k=2-m=1} 2
pass 7704268 2024-05-12 22:03:20 2024-05-13 13:57:15 2024-05-13 14:35:03 0:37:48 0:25:59 0:11:49 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/snaps-few-objects-localized} 2
pass 7704213 2024-05-12 22:02:24 2024-05-13 13:31:31 2024-05-13 13:57:16 0:25:45 0:14:58 0:10:47 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/mapgap thrashosds-health workloads/set-chunks-read} 2
pass 7704176 2024-05-12 22:01:47 2024-05-13 13:07:03 2024-05-13 13:31:35 0:24:32 0:14:15 0:10:17 smithi main ubuntu 20.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{default} supported-random-distro$/{ubuntu_20.04} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7704087 2024-05-12 21:27:42 2024-05-12 23:40:19 2024-05-13 01:32:40 1:52:21 1:40:19 0:12:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/dbench}} 3
pass 7703928 2024-05-12 21:24:53 2024-05-12 21:36:41 2024-05-12 23:41:54 2:05:13 1:56:24 0:08:49 smithi main centos 9.stream fs/mirror/{begin/{0-install 1-ceph 2-logrotate 3-modules} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health pg_health} supported-random-distros$/{centos_latest} tasks/mirror} 1
fail 7703868 2024-05-12 21:06:17 2024-05-13 02:22:56 2024-05-13 04:29:43 2:06:47 1:55:13 0:11:34 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/none mon_election/classic msgr-failures/few msgr/async objectstore/bluestore-comp-snappy rados tasks/rados_cls_all validater/valgrind} 2
Failure Reason:

valgrind error: Leak_StillReachable operator new[](unsigned long) UnknownInlinedFun UnknownInlinedFun

fail 7703814 2024-05-12 21:05:22 2024-05-13 01:57:09 2024-05-13 02:12:04 0:14:55 smithi main centos 9.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/enable mon_election/connectivity random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{centos_latest} tasks/crash} 2
Failure Reason:

Failed to reconnect to smithi008

fail 7703812 2024-05-12 21:05:20 2024-05-13 01:56:19 2024-05-13 02:04:04 0:07:45 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi008 with status 1: 'sudo yum install -y kernel'
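Note on the two "in cluster log" failures (jobs 7705448 and 7704651): teuthology scrapes the Ceph cluster log after each job, and any WRN/ERR line not matched by an ignorelist regex marks the job as failed even if every task passed. A minimal sketch of how a suite fragment suppresses an expected warning, assuming the standard log-ignorelist override used throughout the ceph qa suites; this particular fragment is hypothetical, not one scheduled in the runs above:

```yaml
# Hypothetical teuthology suite fragment. Each entry is a regex matched
# against cluster log lines; a matching line no longer fails the job.
overrides:
  ceph:
    log-ignorelist:
      - \(CEPHADM_FAILED_DAEMON\)   # the warning that failed job 7705448
      - \(CEPHADM_PAUSED\)          # the warning that failed job 7704651
```

Whether to ignorelist such a warning or treat it as a genuine regression depends on the test's intent; fragments like this are only appropriate when the suite deliberately induces the flagged condition.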