Name:          smithi157.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2024-04-27 09:19:00.382481
Locked By:     scheduled_teuthology@teuthology
OS Type:       centos
OS Version:    9
Arch:          x86_64
Description:   /home/teuthworker/archive/teuthology-2024-04-26_20:40:14-rgw-main-distro-default-smithi/7675394
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
fail 7675800 2024-04-27 03:09:18 2024-04-27 04:01:44 2024-04-27 07:04:29 3:02:45 2:52:34 0:10:11 smithi main centos 9.stream upgrade/reef-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} 2
Failure Reason:

"2024-04-27T05:44:18.811479+0000 osd.0 (osd.0) 16 : cluster [ERR] 57.14 soid 57:28658920:::smithi129778568-93:head : object info inconsistent , snapset inconsistent , attr value mismatch '__header'" in cluster log

pass 7675652 2024-04-26 21:40:52 2024-04-27 03:00:16 2024-04-27 04:04:18 1:04:02 0:56:28 0:07:34 smithi main centos 9.stream rgw/verify/{0-install accounts$/{main-tenant} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile s3tests-branch sharding$/{default} striping$/{stripe-equals-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
pass 7675582 2024-04-26 21:11:12 2024-04-27 02:22:14 2024-04-27 03:00:26 0:38:12 0:30:38 0:07:34 smithi main centos 9.stream orch/cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async root} 2
pass 7675560 2024-04-26 21:10:49 2024-04-27 02:05:03 2024-04-27 02:22:36 0:17:33 0:10:23 0:07:10 smithi main centos 9.stream orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_domain} 2
fail 7675493 2024-04-26 21:09:42 2024-04-27 01:19:21 2024-04-27 01:59:58 0:40:37 0:26:19 0:14:18 smithi main ubuntu 22.04 orch/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/classic start} 2
Failure Reason:

"2024-04-27T01:50:00.000246+0000 mon.a (mon.0) 817 : cluster 3 [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm" in cluster log

running 7675394 2024-04-26 20:42:51 2024-04-27 09:19:00 2024-04-27 09:49:02 0:31:49 smithi main centos 9.stream rgw/verify/{0-install accounts$/{main-tenant} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/replicated s3tests-branch sharding$/{single} striping$/{stripe-equals-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
fail 7675315 2024-04-26 19:35:42 2024-04-26 23:17:45 2024-04-27 00:16:37 0:58:52 0:45:04 0:13:48 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7675285 2024-04-26 19:35:04 2024-04-26 22:42:47 2024-04-26 23:17:04 0:34:17 0:22:26 0:11:51 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/mon 2-workunit/suites/fsstress}} 2
fail 7675189 2024-04-26 19:33:06 2024-04-26 20:38:50 2024-04-26 22:40:42 2:01:52 1:48:24 0:13:28 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason:

Command failed on smithi157 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:bc612ffb9c1d5bc27536bc917a705262a36d4387 shell --fsid cec9a414-040f-11ef-bc93-c7b262605968 -- ceph daemon mds.i perf dump'

pass 7675139 2024-04-26 19:32:03 2024-04-26 19:36:49 2024-04-26 20:42:38 1:05:49 0:51:41 0:14:08 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/direct_io}} 3
fail 7675098 2024-04-26 18:23:02 2024-04-27 00:32:04 2024-04-27 01:09:10 0:37:06 0:26:04 0:11:02 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-04-27T00:56:14.689186+0000 mon.a (mon.0) 455 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7675065 2024-04-26 18:22:28 2024-04-27 00:16:08 2024-04-27 00:27:42 0:11:34 0:05:05 0:06:29 smithi main centos 9.stream rados/multimon/{clusters/9 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest} tasks/mon_recovery} 3
Failure Reason:

Command failed on smithi136 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7675034 2024-04-26 18:21:55 2024-04-26 19:21:12 2024-04-26 19:36:10 0:14:58 0:06:27 0:08:31 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest} thrashers/none thrashosds-health workloads/cache-pool-snaps-readproxy} 4
Failure Reason:

Command failed on smithi080 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7675003 2024-04-26 18:21:21 2024-04-26 19:05:38 2024-04-26 19:20:24 0:14:46 0:06:18 0:08:28 smithi main centos 9.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi175 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674941 2024-04-26 18:20:14 2024-04-26 18:38:27 2024-04-26 19:02:48 0:24:21 0:10:11 0:14:10 smithi main ubuntu 22.04 rados/singleton-nomsgr/{all/lazy_omap_stats_output mon_election/classic rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_lazy_omap_stats'

pass 7674911 2024-04-26 17:25:27 2024-04-26 17:56:23 2024-04-26 18:28:45 0:32:22 0:21:26 0:10:56 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/with-quiesce 2-workunit/suites/iozone}} 2
pass 7674876 2024-04-26 15:09:08 2024-04-26 16:20:11 2024-04-26 16:38:03 0:17:52 0:10:55 0:06:57 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_mgr_res_dom} 2
pass 7674842 2024-04-26 15:08:16 2024-04-26 15:58:35 2024-04-26 16:20:17 0:21:42 0:14:53 0:06:49 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs-haproxy-proto 3-final} 2
pass 7674744 2024-04-26 12:11:29 2024-04-26 12:40:39 2024-04-26 13:00:27 0:19:48 0:11:45 0:08:03 smithi main centos 9.stream rgw/singleton/{all/radosgw-admin frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated supported-random-distro$/{centos_latest}} 2
fail 7674554 2024-04-26 02:08:51 2024-04-26 04:18:24 2024-04-26 04:48:46 0:30:22 0:18:42 0:11:40 smithi main centos 8.stream upgrade/cephfs/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
Failure Reason:

ceph version 17.2.7-904.gd8c6b0a3 was not installed, found 15.2.17-0.el8.