Name: smithi120.front.sepia.ceph.com
Machine Type: smithi
Up: True
Locked: False
Locked Since:
Locked By:
OS Type: ubuntu
OS Version: 22.04
Arch: x86_64
Description: /home/teuthworker/archive/teuthology-2024-04-26_20:40:14-rgw-main-distro-default-smithi/7675419
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7675769 2024-04-27 03:08:46 2024-04-27 03:29:07 2024-04-27 04:13:12 0:44:05 0:37:07 0:06:58 smithi main centos 9.stream upgrade/cephfs/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
pass 7675707 2024-04-26 22:40:59 2024-04-27 04:25:16 2024-04-27 06:05:18 1:40:02 1:30:16 0:09:46 smithi main ubuntu 20.04 rgw/verify/{0-install clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-equals-chunk} supported-random-distro$/{ubuntu_20.04} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
pass 7675627 2024-04-26 21:40:28 2024-04-27 02:48:35 2024-04-27 03:29:49 0:41:14 0:33:38 0:07:36 smithi main centos 9.stream rgw/sts/{cluster ignore-pg-availability objectstore overrides pool-type rgw_frontend/beast s3tests-branch supported-random-distro$/{centos_latest} tasks/{0-install 1-keycloak 2-s3tests}} 2
pass 7675589 2024-04-26 21:11:19 2024-04-27 02:29:28 2024-04-27 02:48:26 0:18:58 0:12:52 0:06:06 smithi main centos 9.stream orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_cephadm_timeout} 1
pass 7675555 2024-04-26 21:10:44 2024-04-27 02:01:10 2024-04-27 02:23:10 0:22:00 0:14:38 0:07:22 smithi main centos 9.stream orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/classic task/test_host_drain} 3
pass 7675524 2024-04-26 21:10:13 2024-04-27 01:42:37 2024-04-27 02:01:55 0:19:18 0:12:54 0:06:24 smithi main centos 9.stream orch/cephadm/osds/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7675456 2024-04-26 21:09:06 2024-04-27 00:47:34 2024-04-27 01:42:30 0:54:56 0:43:26 0:11:30 smithi main ubuntu 22.04 orch/cephadm/thrash/{0-distro/ubuntu_22.04 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
pass 7675419 2024-04-26 20:43:17 2024-04-27 09:30:11 2024-04-27 09:56:24 0:26:13 0:14:49 0:11:24 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile s3tests-branch tasks/rgw_ragweed ubuntu_latest} 2
pass 7675364 2024-04-26 20:42:20 2024-04-27 09:04:26 2024-04-27 09:30:27 0:26:01 0:15:09 0:10:52 smithi main ubuntu 22.04 rgw/singleton/{all/radosgw-admin frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile supported-random-distro$/{ubuntu_latest}} 2
pass 7675346 2024-04-26 19:36:20 2024-04-26 23:55:14 2024-04-27 00:27:14 0:32:00 0:21:07 0:10:53 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/test_journal_migration} 2
pass 7675317 2024-04-26 19:35:44 2024-04-26 23:18:16 2024-04-26 23:55:27 0:37:11 0:27:35 0:09:36 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/fs/norstats}} 3
pass 7675250 2024-04-26 19:34:21 2024-04-26 21:56:26 2024-04-26 23:18:09 1:21:43 1:09:23 0:12:20 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-2-client conf/{client mds mgr mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cephfs_misc_tests} 4
fail 7675200 2024-04-26 19:33:20 2024-04-26 20:54:17 2024-04-26 21:49:25 0:55:08 0:44:11 0:10:57 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason: reached maximum tries (51) after waiting for 300 seconds
pass 7675187 2024-04-26 19:33:03 2024-04-26 20:38:39 2024-04-26 20:55:36 0:16:57 0:09:39 0:07:18 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/auto-repair} 2
fail 7675077 2024-04-26 18:22:41 2024-04-27 00:23:54 2024-04-27 00:38:22 0:14:28 0:04:54 0:09:34 smithi main centos 9.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{centos_latest} tasks/prometheus} 2
Failure Reason: Command failed on smithi120 with status 1: 'sudo yum -y install ceph-mgr-dashboard'
fail 7675000 2024-04-26 18:21:18 2024-04-26 19:05:36 2024-04-26 20:25:52 1:20:16 1:08:56 0:11:20 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/peering ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/few rados thrashers/mapgap thrashosds-health workloads/radosbench} 3
Failure Reason: "2024-04-26T19:40:00.000113+0000 mon.a (mon.0) 1161 : cluster [WRN] Health detail: HEALTH_WARN noscrub flag(s) set; 1 pool(s) do not have an application enabled" in cluster log
fail 7674944 2024-04-26 18:20:18 2024-04-26 18:43:39 2024-04-26 18:56:27 0:12:48 0:03:54 0:08:54 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} tasks/scrub_test} 2
Failure Reason: Command failed on smithi120 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'
pass 7674915 2024-04-26 17:25:28 2024-04-26 18:00:35 2024-04-26 18:39:42 0:39:07 0:32:25 0:06:42 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/fsx}} 3
fail 7674814 2024-04-26 15:07:34 2024-04-26 15:40:02 2024-04-26 16:15:53 0:35:51 0:25:35 0:10:16 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason: "2024-04-26T16:00:56.759986+0000 mon.a (mon.0) 413 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
pass 7674784 2024-04-26 15:06:50 2024-04-26 15:09:07 2024-04-26 15:39:53 0:30:46 0:22:34 0:08:12 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream_runc fixed-2 mode/root mon_election/connectivity msgr/async start tasks/rados_python} 2