Name:         smithi008.front.sepia.ceph.com
Machine Type: smithi
Up:           True
Locked:       True
Locked Since: 2024-04-26 22:58:31.899595
Locked By:    scheduled_rishabh@teuthology
OS Type:      centos
OS Version:   9
Arch:         x86_64
Description:  /home/teuthworker/archive/rishabh-2024-04-26_19:30:57-fs-wip-rishabh-testing-20240426.111959-testing-default-smithi/7675291
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7675291 2024-04-26 19:35:12 2024-04-26 22:52:20 2024-04-26 23:17:16 0:25:03 smithi main centos 9.stream fs/snaps/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/workunit/snaps} 2
fail 7675258 2024-04-26 19:34:31 2024-04-26 22:04:00 2024-04-26 22:48:32 0:44:32 0:35:45 0:08:47 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed on smithi008 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

pass 7675199 2024-04-26 19:33:19 2024-04-26 20:54:16 2024-04-26 22:05:13 1:10:57 1:01:42 0:09:15 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/postgres}} 3
fail 7675156 2024-04-26 19:32:24 2024-04-26 19:57:20 2024-04-26 20:52:04 0:54:44 0:44:36 0:10:08 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

dead 7675150 2024-04-26 19:32:16 2024-04-26 19:51:57 2024-04-26 19:57:08 0:05:11 smithi main centos 9.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/ffsb}} 2
Failure Reason:

Error reimaging machines: Expected smithi151's OS to be centos 9 but found centos 8

fail 7675037 2024-04-26 18:21:58 2024-04-26 19:21:23 2024-04-26 19:37:49 0:16:26 0:05:01 0:11:25 smithi main centos 9.stream rados/singleton/{all/pg-autoscaler-progress-off mon_election/connectivity msgr-failures/many msgr/async-v1only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest}} 2
Failure Reason:

Command failed on smithi008 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

dead 7674995 2024-04-26 18:21:12 2024-04-26 19:05:35 2024-04-26 19:26:31 0:20:56 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/small-objects-localized} 4
Failure Reason:

Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

fail 7674930 2024-04-26 18:20:03 2024-04-26 18:22:00 2024-04-26 19:01:31 0:39:31 0:28:22 0:11:09 smithi main ubuntu 22.04 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi008 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5d349943c59c9485df060d6adb0594f3940ec0eb TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

fail 7674846 2024-04-26 15:08:22 2024-04-26 16:02:37 2024-04-26 16:21:19 0:18:42 0:13:04 0:05:38 smithi main centos 9.stream orch:cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/classic task/test_orch_cli} 1
Failure Reason:

"2024-04-26T16:19:57.221937+0000 mon.a (mon.0) 538 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

pass 7674778 2024-04-26 14:13:45 2024-04-26 14:25:49 2024-04-26 14:53:14 0:27:25 0:17:53 0:09:32 smithi main ubuntu 22.04 orch:cephadm/smb/{0-distro/ubuntu_22.04 tasks/deploy_smb_mgr_res_basic} 2
pass 7674747 2024-04-26 12:11:31 2024-04-26 12:41:30 2024-04-26 13:06:57 0:25:27 0:14:10 0:11:17 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated s3tests-branch tasks/rgw_multipart_upload ubuntu_latest} 2
pass 7674697 2024-04-26 12:10:49 2024-04-26 12:14:47 2024-04-26 12:41:40 0:26:53 0:13:47 0:13:06 smithi main ubuntu 22.04 rgw/service-token/{clusters/fixed-1 frontend/beast ignore-pg-availability overrides tasks/service-token ubuntu_latest} 1
pass 7674651 2024-04-26 07:23:59 2024-04-26 07:42:30 2024-04-26 08:16:44 0:34:14 0:24:07 0:10:07 smithi main ubuntu 22.04 rgw/thrash/{clusters/fixed-2 frontend/beast ignore-pg-availability install objectstore/bluestore-bitmap s3tests-branch thrasher/default thrashosds-health ubuntu_latest workload/rgw_s3tests} 2
pass 7674539 2024-04-26 01:30:19 2024-04-26 04:12:40 2024-04-26 05:00:09 0:47:29 0:37:37 0:09:52 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async-v1only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/cache-agent-big} 2
pass 7674506 2024-04-26 01:29:44 2024-04-26 03:55:26 2024-04-26 04:15:18 0:19:52 0:11:17 0:08:35 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados tasks/mon_recovery validater/lockdep} 2
pass 7674441 2024-04-26 01:28:31 2024-04-26 03:22:05 2024-04-26 03:58:00 0:35:55 0:26:01 0:09:54 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_monitoring_stack_basic} 3
pass 7674397 2024-04-26 01:27:43 2024-04-26 03:00:16 2024-04-26 03:22:02 0:21:46 0:14:57 0:06:49 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_host_drain} 3
pass 7674304 2024-04-26 01:26:04 2024-04-26 02:11:55 2024-04-26 03:01:11 0:49:16 0:41:37 0:07:39 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} tasks/rados_api_tests} 2
pass 7674272 2024-04-26 01:25:30 2024-04-26 01:42:48 2024-04-26 02:12:09 0:29:21 0:17:54 0:11:27 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/mapgap thrashosds-health workloads/dedup-io-mixed} 2
fail 7674240 2024-04-26 01:03:31 2024-04-26 01:11:57 2024-04-26 01:37:11 0:25:14 0:18:38 0:06:36 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/with-quiesce 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi008 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=afa1933b0fdbb6c99c947d1eda34d661d23cd327 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'