Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi069.front.sepia.ceph.com smithi True True 2024-04-26 20:22:59.798560 scheduled_rishabh@teuthology centos 9 x86_64 /home/teuthworker/archive/rishabh-2024-04-26_19:30:57-fs-wip-rishabh-testing-20240426.111959-testing-default-smithi/7675169
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7675169 2024-04-26 19:32:40 2024-04-26 20:22:59 2024-04-26 22:55:58 2:33:44 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/admin} 2
pass 7675110 2024-04-26 19:31:25 2024-04-26 19:36:38 2024-04-26 20:23:36 0:46:58 0:39:44 0:07:14 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/fsync-tester}} 3
fail 7675012 2024-04-26 18:21:31 2024-04-26 19:21:03 2024-04-26 19:36:11 0:15:08 0:06:23 0:08:45 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 4
Failure Reason:

Command failed on smithi184 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674982 2024-04-26 18:20:58 2024-04-26 19:05:29 2024-04-26 19:20:15 0:14:46 0:05:43 0:09:03 smithi main centos 9.stream rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Command failed on smithi069 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674959 2024-04-26 18:20:33 2024-04-26 18:49:58 2024-04-26 19:01:15 0:11:17 0:04:58 0:06:19 smithi main centos 9.stream rados/singleton/{all/ec-lost-unfound mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed on smithi069 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674935 2024-04-26 18:20:08 2024-04-26 18:28:53 2024-04-26 18:45:41 0:16:48 0:05:00 0:11:48 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream agent/off mon_election/classic task/test_rgw_multisite} 3
Failure Reason:

Command failed on smithi069 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674906 2024-04-26 17:25:26 2024-04-26 17:48:39 2024-04-26 18:34:07 0:45:28 0:38:22 0:07:06 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi069 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bc94a2a924f4a25e7c0317e85c91b85bf7cac0b1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

pass 7674888 2024-04-26 16:59:40 2024-04-26 17:01:00 2024-04-26 17:48:55 0:47:55 0:40:52 0:07:03 smithi main centos 9.stream rgw/verify/{0-install accounts$/{tenant} clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-greater-than-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} 2
pass 7674859 2024-04-26 15:08:41 2024-04-26 16:11:33 2024-04-26 17:01:45 0:50:12 0:38:02 0:12:10 smithi main ubuntu 22.04 orch:cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/connectivity task/test_orch_cli_mon} 5
pass 7674782 2024-04-26 15:06:47 2024-04-26 15:06:56 2024-04-26 16:13:36 1:06:40 0:57:09 0:09:31 smithi main ubuntu 22.04 orch:cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/repo_digest 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity} 2
pass 7674777 2024-04-26 14:13:43 2024-04-26 14:25:09 2024-04-26 14:42:50 0:17:41 0:11:12 0:06:29 smithi main centos 9.stream orch:cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_mgr_domain} 2
pass 7674717 2024-04-26 12:11:06 2024-04-26 12:25:56 2024-04-26 13:13:21 0:47:25 0:40:27 0:06:58 smithi main centos 9.stream rgw/verify/{0-install accounts$/{none} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec-profile s3tests-branch sharding$/{default} striping$/{stripe-equals-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} 2
pass 7674638 2024-04-26 07:23:48 2024-04-26 07:42:23 2024-04-26 08:09:40 0:27:17 0:16:02 0:11:15 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated s3tests-branch tasks/rgw_bucket_quota ubuntu_latest} 2
fail 7674559 2024-04-26 02:08:56 2024-04-26 04:20:16 2024-04-26 06:15:58 1:55:42 1:44:30 0:11:12 smithi main ubuntu 22.04 upgrade/quincy-x/parallel/{0-random-distro$/{ubuntu_22.04} 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api test_rbd_python}} 2
Failure Reason:

"1714106895.171712 mon.a (mon.0) 514 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 7674504 2024-04-26 01:29:42 2024-04-26 03:54:55 2024-04-26 04:21:04 0:26:09 0:15:02 0:11:07 smithi main ubuntu 22.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7674440 2024-04-26 01:28:30 2024-04-26 03:21:45 2024-04-26 03:55:14 0:33:29 0:26:19 0:07:10 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/pool-snaps-few-objects} 2
pass 7674376 2024-04-26 01:27:21 2024-04-26 02:50:05 2024-04-26 03:21:35 0:31:30 0:22:17 0:09:13 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-lz4 rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
pass 7674331 2024-04-26 01:26:33 2024-04-26 02:29:25 2024-04-26 02:50:20 0:20:55 0:13:15 0:07:40 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-5} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/set-chunks-read} 2
fail 7674230 2024-04-26 01:03:28 2024-04-26 01:05:01 2024-04-26 02:30:16 1:25:15 1:15:16 0:09:59 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/fs/misc}} 3
Failure Reason:

error during quiesce thrashing: Error releasing set 'de25e267': 1 (EPERM)

pass 7674210 2024-04-25 22:34:46 2024-04-26 12:02:22 2024-04-26 12:26:12 0:23:50 0:15:29 0:08:21 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-low-osd-mem-target powercycle/default supported-all-distro/centos_latest tasks/admin_socket_objecter_requests thrashosds-health} 4