Locked machine:

  Name:          smithi171.front.sepia.ceph.com
  Machine Type:  smithi
  Up:            True
  Locked:        True
  Locked Since:  2024-04-26 23:33:04.379106
  Locked By:     scheduled_rishabh@teuthology
  OS Type:       centos
  OS Version:    9
  Arch:          x86_64
  Description:   /home/teuthworker/archive/rishabh-2024-04-26_19:30:57-fs-wip-rishabh-testing-20240426.111959-testing-default-smithi/7675329
Status  Job ID  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
running 7675329 2024-04-26 19:35:59 2024-04-26 23:33:04 2024-04-27 07:36:53 8:04:15 smithi main centos 9.stream fs/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug pg_health} tasks/failover} 2
fail 7675277 2024-04-26 19:34:55 2024-04-26 22:31:32 2024-04-26 23:26:02 0:54:30 0:41:47 0:12:43 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/fs/test_o_trunc}} 3
Failure Reason:

Command failed on smithi151 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

pass 7675237 2024-04-26 19:34:05 2024-04-26 21:44:29 2024-04-26 22:31:24 0:46:55 0:35:03 0:11:52 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/direct_io}} 3
pass 7675202 2024-04-26 19:33:23 2024-04-26 20:54:18 2024-04-26 21:39:37 0:45:19 0:37:34 0:07:45 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/blogbench}} 3
fail 7675154 2024-04-26 19:32:21 2024-04-26 19:52:59 2024-04-26 20:45:07 0:52:08 0:39:32 0:12:36 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/ffsb}} 3
Failure Reason:

Command failed on smithi084 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

pass 7675141 2024-04-26 19:32:05 2024-04-26 19:36:50 2024-04-26 19:55:42 0:18:52 0:09:30 0:09:22 smithi main centos 9.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_trivial_sync}} 2
fail 7675006 2024-04-26 18:21:24 2024-04-26 19:05:39 2024-04-26 19:23:30 0:17:51 0:06:00 0:11:51 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_monitoring_stack_basic} 3
Failure Reason:

Command failed on smithi070 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674947 2024-04-26 18:20:21 2024-04-26 18:46:21 2024-04-26 18:57:38 0:11:17 0:05:00 0:06:17 smithi main centos 9.stream rados/multimon/{clusters/6 mon_election/connectivity msgr-failures/many msgr/async-v2only no_pools objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} tasks/mon_recovery} 2
Failure Reason:

Command failed on smithi022 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

pass 7674726 2024-04-26 12:11:14 2024-04-26 12:32:01 2024-04-26 12:55:58 0:23:57 0:14:10 0:09:47 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile s3tests-branch tasks/rgw_multipart_upload ubuntu_latest} 2
pass 7674689 2024-04-26 12:10:42 2024-04-26 12:12:04 2024-04-26 12:27:21 0:15:17 0:09:27 0:05:50 smithi main centos 9.stream rgw/d4n/{cluster ignore-pg-availability overrides supported-random-distro$/{centos_latest} tasks/rgw_d4ntests} 1
pass 7674662 2024-04-26 07:24:08 2024-04-26 07:45:31 2024-04-26 08:02:46 0:17:15 0:11:26 0:05:49 smithi main centos 9.stream rgw/singleton/{all/radosgw-admin frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated supported-random-distro$/{centos_latest}} 2
fail 7674565 2024-04-26 02:09:02 2024-04-26 04:20:18 2024-04-26 05:00:38 0:40:20 0:32:16 0:08:04 smithi main centos 9.stream upgrade/quincy-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} 2
Failure Reason:

timeout expired in wait_until_healthy

pass 7674485 2024-04-26 01:29:19 2024-04-26 03:43:06 2024-04-26 04:19:15 0:36:09 0:27:38 0:08:31 smithi main centos 9.stream rados/monthrash/{ceph clusters/9-mons mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/one workloads/rados_mon_workunits} 2
pass 7674434 2024-04-26 01:28:23 2024-04-26 03:18:22 2024-04-26 03:43:52 0:25:30 0:18:43 0:06:47 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay objectstore/bluestore-stupid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-small-objects-balanced} 2
pass 7674379 2024-04-26 01:27:24 2024-04-26 02:50:56 2024-04-26 03:18:18 0:27:22 0:21:16 0:06:06 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 2
pass 7674330 2024-04-26 01:26:32 2024-04-26 02:27:45 2024-04-26 02:50:51 0:23:06 0:14:35 0:08:31 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} tasks/rados_cls_all} 2
pass 7674280 2024-04-26 01:25:38 2024-04-26 01:53:03 2024-04-26 02:29:24 0:36:21 0:22:15 0:14:06 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
fail 7674239 2024-04-26 01:03:31 2024-04-26 01:10:26 2024-04-26 01:47:20 0:36:54 0:28:47 0:08:07 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/with-quiesce 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi152 with status 135: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=afa1933b0fdbb6c99c947d1eda34d661d23cd327 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'

pass 7674141 2024-04-25 22:33:36 2024-04-26 10:36:01 2024-04-26 11:31:22 0:55:21 0:44:45 0:10:36 smithi main rhel 8.6 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-bitmap powercycle/default supported-all-distro/rhel_8 tasks/cfuse_workunit_kernel_untar_build thrashosds-health} 4
pass 7674115 2024-04-25 22:33:10 2024-04-26 10:03:16 2024-04-26 10:39:21 0:36:05 0:25:32 0:10:33 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-stupid powercycle/default supported-all-distro/centos_latest tasks/cfuse_workunit_suites_ffsb thrashosds-health} 4