Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi040.front.sepia.ceph.com smithi True True 2024-04-23 18:53:54.699732 scheduled_rishabh@teuthology ubuntu 22.04 x86_64 /home/teuthworker/archive/rishabh-2024-04-23_18:53:20-fs:functional-rishabh-mds-health-testing-default-smithi/7670126
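The lock entry above comes from the sepia lock server and can also be queried from a teuthology checkout. A minimal sketch, assuming the standard teuthology-lock CLI and a ~/.teuthology.yaml pointing at the sepia lab:

    # Query the lock server for this node; --list prints the lock record
    # as JSON, --brief prints a one-line summary (same fields as the row above).
    teuthology-lock --list smithi040.front.sepia.ceph.com
    teuthology-lock --brief smithi040.front.sepia.ceph.com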
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
running 7670126 2024-04-23 18:53:30 2024-04-23 18:53:54 2024-04-23 21:51:38 2:58:29 smithi main ubuntu 22.04 fs:functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/admin} 2
fail 7670091 2024-04-23 17:45:29 2024-04-23 17:48:26 2024-04-23 18:39:46 0:51:20 0:41:48 0:09:32 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/postgres}} 3
Failure Reason:

error during quiesce thrashing: Error releasing set '23f38b46': 1 (EPERM)

pass 7669918 2024-04-23 14:21:05 2024-04-23 17:27:08 2024-04-23 17:49:43 0:22:35 0:12:55 0:09:40 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/deploy-raw} 2
pass 7669871 2024-04-23 14:20:15 2024-04-23 17:05:22 2024-04-23 17:30:57 0:25:35 0:15:06 0:10:29 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/upmap-read mon_election/classic msgr-failures/fastclose msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/redirect_promote_tests} 4
fail 7669853 2024-04-23 14:19:56 2024-04-23 16:50:02 2024-04-23 17:05:15 0:15:13 0:06:02 0:09:11 smithi main centos 9.stream rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_iscsi_container/{centos_9.stream test_iscsi_container}} 1
Failure Reason:

Command failed on smithi040 with status 1: 'sudo yum -y install ceph-mgr-dashboard'
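This failure (repeated for jobs 7669754 and 7669709 below) is a package-resolution error during install rather than a test assertion. A hedged triage sketch on the affected node, assuming the usual ceph repos are configured under /etc/yum.repos.d/:

    # Check which repos are enabled and whether the package is resolvable
    # from them; a missing or partial ceph repo is a common cause of
    # "status 1" from yum/dnf install.
    sudo dnf repolist --enabled
    sudo dnf repoquery --info ceph-mgr-dashboard
    # Re-run the exact command from the failure to capture the full error output:
    sudo yum -y install ceph-mgr-dashboard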

pass 7669777 2024-04-23 14:18:36 2024-04-23 16:18:50 2024-04-23 16:52:16 0:33:26 0:24:56 0:08:30 smithi main ubuntu 22.04 rados/cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
fail 7669754 2024-04-23 14:18:11 2024-04-23 16:03:28 2024-04-23 16:16:47 0:13:19 0:05:38 0:07:41 smithi main centos 9.stream rados/singleton/{all/osd-backfill mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-comp-zstd rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed on smithi040 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7669709 2024-04-23 14:17:24 2024-04-23 15:40:38 2024-04-23 15:54:40 0:14:02 0:04:59 0:09:03 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay objectstore/bluestore-comp-lz4 rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/ec-small-objects-fast-read} 4
Failure Reason:

Command failed on smithi165 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

pass 7669566 2024-04-23 14:04:43 2024-04-23 14:05:34 2024-04-23 15:41:24 1:35:50 1:24:34 0:11:16 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/kernel_untar_build}} 3
pass 7669504 2024-04-23 09:50:07 2024-04-23 09:50:58 2024-04-23 10:38:12 0:47:14 0:40:22 0:06:52 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/postgres}} 3
pass 7669473 2024-04-23 05:01:21 2024-04-23 05:01:21 2024-04-23 05:26:58 0:25:37 0:14:47 0:10:50 smithi main ubuntu 22.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/kclient_workunit_direct_io}} 3
pass 7669088 2024-04-22 22:10:13 2024-04-23 01:35:38 2024-04-23 02:28:30 0:52:52 0:41:53 0:10:59 smithi main centos 8.stream orch/cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
pass 7669046 2024-04-22 22:09:33 2024-04-23 00:59:09 2024-04-23 01:37:10 0:38:01 0:26:44 0:11:17 smithi main centos 8.stream orch/cephadm/no-agent-workunits/{0-distro/centos_8.stream_container_tools mon_election/classic task/test_orch_cli_mon} 5
pass 7668970 2024-04-22 21:33:00 2024-04-23 00:10:02 2024-04-23 00:59:51 0:49:49 0:41:59 0:07:50 smithi main centos 9.stream powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-distros/centos_latest tasks/cfuse_workunit_kernel_untar_build thrashosds-health} 4
pass 7668697 2024-04-22 20:12:45 2024-04-23 02:47:36 2024-04-23 03:28:19 0:40:43 0:33:46 0:06:57 smithi main centos 9.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{reef} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7668665 2024-04-22 20:12:15 2024-04-23 02:28:31 2024-04-23 02:47:15 0:18:44 0:12:35 0:06:09 smithi main centos 9.stream orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_orch_cli} 1
Failure Reason:

"2024-04-23T02:45:28.392400+0000 mon.a (mon.0) 537 : cluster [WRN] Health check failed: cephadm background work is paused (CEPHADM_PAUSED)" in cluster log

pass 7668168 2024-04-22 05:53:56 2024-04-22 07:07:34 2024-04-22 07:26:29 0:18:55 0:12:09 0:06:46 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/with-quiesce 2-workunit/suites/iozone}} 2
fail 7668151 2024-04-22 05:53:51 2024-04-22 06:36:20 2024-04-22 07:04:13 0:27:53 0:14:33 0:13:20 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/with-quiesce 2-workunit/suites/pjd}} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi040 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=86f7587a5a09af35f0895e1a2d08527638fae697 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'

pass 7668127 2024-04-22 05:53:44 2024-04-22 05:55:32 2024-04-22 06:39:01 0:43:29 0:32:47 0:10:42 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/fsstress}} 3
fail 7668108 2024-04-22 01:38:08 2024-04-22 01:42:01 2024-04-22 02:00:12 0:18:11 0:09:08 0:09:03 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-balanced} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --balance-reads --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 0 --op setattr 25 --op rmattr 25 --op copy_from 0 --op write_excl 50 --pool unique_pool_0'
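"Command crashed" indicates the ceph_test_rados process died on a signal rather than failing an assertion. A hedged follow-up sketch from the job's archive directory, assuming per-host core dumps are collected under remote/<host>/coredump/ (both the layout and the binary path are assumptions):

    # Look for core files gathered from the test nodes and pull a
    # backtrace from each one.
    ls remote/smithi*/coredump/
    for c in remote/smithi*/coredump/*; do
      gdb -batch -ex bt /usr/bin/ceph_test_rados "$c"
    done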