Name: smithi175.front.sepia.ceph.com
Machine Type: smithi
Up: True
Locked: True
Locked Since: 2024-05-27 15:03:56.955192
Locked By: scheduled_teuthology@teuthology
OS Type: centos
OS Version: 9
Arch: x86_64
Description: /home/teuthworker/archive/teuthology-2024-05-20_20:08:15-orch-main-distro-default-smithi/7716480
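
Lock state like the record above can be inspected and cleared with the teuthology-lock CLI. A minimal sketch, assuming access to the lab's lock server; the exact flag spelling is an assumption, not taken from this page:

    # Query the lock record for the node shown above.
    teuthology-lock --list smithi175.front.sepia.ceph.com

    # Release the lock, naming the owner recorded above (assumed syntax).
    teuthology-lock --unlock --owner scheduled_teuthology@teuthology \
        smithi175.front.sepia.ceph.com
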
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7727808 2024-05-27 07:27:40 2024-05-27 08:52:45 2024-05-27 10:36:37 1:43:52 1:32:52 0:11:00 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}
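
The 'backtrace' damage type means scrub found an object whose backtrace (the path breadcrumb stored with the data) disagrees with the metadata tree. A minimal sketch of inspecting and repairing such damage by hand, assuming a filesystem named cephfs and rank 0 (names not taken from this run):

    # List damage entries recorded by the MDS; each has an id and a damage type.
    ceph tell mds.cephfs:0 damage ls

    # Re-scrub the tree and let the MDS rewrite bad backtraces.
    ceph tell mds.cephfs:0 scrub start / recursive,repair

    # Once repaired, drop the stale damage entry by its id (placeholder).
    ceph tell mds.cephfs:0 damage rm <damage-id>
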

pass 7727771 2024-05-27 07:25:45 2024-05-27 08:00:45 2024-05-27 08:52:36 0:51:51 0:42:29 0:09:22 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/postgres}} 3
fail 7727724 2024-05-27 06:48:12 2024-05-27 10:36:41 2024-05-27 11:01:57 0:25:16 0:15:50 0:09:26 smithi main centos 9.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_pjd} 2
Failure Reason:

Command failed (workunit test suites/pjd.sh) on smithi129 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a6507388dd5057528934822c0163b0c347ef1d5d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh'
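
The logged command is teuthology's standard workunit harness: it clones the ceph tree at CEPH_REF onto the node, then runs the script from a scratch directory on the client mount under adjust-ulimits/ceph-coverage with a 6h timeout. A rough sketch of reproducing it by hand on the node, using only the paths and SHA from the log (the coverage/ulimit wrappers are dropped for brevity):

    # Check out the tested revision (CEPH_REF) where the harness expects it.
    git clone https://github.com/ceph/ceph /home/ubuntu/cephtest/clone.client.0
    cd /home/ubuntu/cephtest/clone.client.0
    git checkout a6507388dd5057528934822c0163b0c347ef1d5d

    # Run the workunit from a scratch dir on the mounted client, as the harness does.
    mkdir -p /home/ubuntu/cephtest/mnt.0/client.0/tmp
    cd /home/ubuntu/cephtest/mnt.0/client.0/tmp
    CEPH_MNT=/home/ubuntu/cephtest/mnt.0 \
        /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/pjd.sh
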

fail 7727658 2024-05-27 05:54:06 2024-05-27 07:07:15 2024-05-27 08:01:27 0:54:12 0:42:06 0:12:06 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/no 6-workunit/suites/pjd}} 3
Failure Reason:

Command failed on smithi120 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:38f9d900c83c63799fbdbe61acc9a11b0d3554a6 shell --fsid 6afde528-1bfa-11ef-bc9b-c7b262605968 -- ceph daemon mds.a perf dump'
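
Exit status 22 typically maps to EINVAL. `ceph daemon` talks to the target daemon's admin socket, which is not mounted inside a `cephadm shell` container. A hedged alternative (not what the suite ran) is to enter the MDS's own container, where the socket lives, reusing the fsid and daemon name from the log:

    # Enter the mds.a container and talk to its admin socket directly.
    sudo /home/ubuntu/cephtest/cephadm enter \
        --fsid 6afde528-1bfa-11ef-bc9b-c7b262605968 \
        --name mds.a -- ceph daemon mds.a perf dump
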

fail 7727608 2024-05-27 05:53:12 2024-05-27 06:10:48 2024-05-27 07:08:14 0:57:26 0:46:32 0:10:54 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds
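
This message comes from teuthology's polling helper (safe_while), which re-runs a check on a fixed sleep until a try budget is exhausted. A minimal shell rendering of the same pattern; the polled condition is a placeholder, since the log does not say what the upgrade task was waiting on:

    # Poll a condition every 6s, giving up after 51 tries (~300s), as in the message above.
    tries=0
    until some_check_command; do   # placeholder: actual polled condition not in the log
        tries=$((tries + 1))
        if [ "$tries" -ge 51 ]; then
            echo "reached maximum tries (51) after waiting for 300 seconds" >&2
            exit 1
        fi
        sleep 6
    done
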

pass 7727577 2024-05-27 00:32:23 2024-05-27 03:17:01 2024-05-27 05:25:23 2:08:22 1:56:41 0:11:41 smithi main centos 8.stream upgrade:pacific-x/stress-split/{0-distro/centos_8.stream_container_tools 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} 2
dead 7727551 2024-05-27 00:24:44 2024-05-27 03:00:07 2024-05-27 03:01:51 0:01:44 smithi main ubuntu 20.04 upgrade:octopus-x/stress-split-erasure-code-no-cephadm/{0-cluster/{openstack start} 1-octopus-install/octopus 1.1-pg-log-overrides/normal_pg_log 2-partial-upgrade/firsthalf 3-thrash/default 3.1-objectstore/bluestore-hybrid 4-ec-workload/{rados-ec-workload rbd-ec-workload} 5-finish-upgrade 6-quincy 7-final-workload mon_election/connectivity thrashosds-health ubuntu_20.04} 5
Failure Reason:

Error reimaging machines: Failed to power on smithi145
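
"Failed to power on" means the reimage step lost the node at the BMC level before the job ever ran; it is an infrastructure failure, not a Ceph one. A hedged sketch of checking the node's power state over IPMI; the BMC hostname pattern and the credentials are assumptions about the lab, not taken from the log:

    # Query and force power state via the node's BMC (placeholder credentials).
    ipmitool -I lanplus -H smithi145.ipmi.sepia.ceph.com -U <user> -P <pass> chassis power status
    ipmitool -I lanplus -H smithi145.ipmi.sepia.ceph.com -U <user> -P <pass> chassis power on
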

pass 7727504 2024-05-26 22:06:04 2024-05-27 12:53:06 2024-05-27 13:23:07 0:30:01 0:20:30 0:09:31 smithi main ubuntu 20.04 rados/singleton/{all/thrash_cache_writeback_proxy_none mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_20.04}} 2
pass 7727486 2024-05-26 22:05:47 2024-05-27 12:31:44 2024-05-27 12:53:16 0:21:32 0:12:15 0:09:17 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-basic-min-osd-mem-target openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/radosbench_4M_write} 1
pass 7727450 2024-05-26 22:05:11 2024-05-27 12:09:38 2024-05-27 12:31:34 0:21:56 0:12:12 0:09:44 smithi main centos 8.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{centos_8} tasks/scrub_test} 2
fail 7727406 2024-05-26 22:04:27 2024-05-27 11:49:00 2024-05-27 12:10:30 0:21:30 0:11:58 0:09:32 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools agent/off mon_election/classic task/test_cephadm} 1
Failure Reason:

Command failed (workunit test cephadm/test_cephadm.sh) on smithi175 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=19f3c03194952f81fb0d3dd9621f2ed4b14b4e1d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
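
Unlike most workunits, test_cephadm.sh bootstraps a throwaway cluster against the local container runtime, so it can also be run standalone. A rough sketch, assuming a machine with podman or docker and root access, using the SHA from the log:

    # Check out the tested revision and run the cephadm smoke test directly.
    git clone https://github.com/ceph/ceph && cd ceph
    git checkout 19f3c03194952f81fb0d3dd9621f2ed4b14b4e1d
    sudo ./qa/workunits/cephadm/test_cephadm.sh
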

pass 7727364 2024-05-26 22:03:44 2024-05-27 11:25:59 2024-05-27 11:49:30 0:23:31 0:13:22 0:10:09 smithi main ubuntu 22.04 rados/perf/{ceph mon_election/classic objectstore/bluestore-stupid openstack scheduler/dmclock_default_shards settings/optimized ubuntu_latest workloads/fio_4M_rand_rw} 1
pass 7727325 2024-05-26 22:03:05 2024-05-27 10:59:59 2024-05-27 11:25:38 0:25:39 0:13:31 0:12:08 smithi main ubuntu 22.04 rados/singleton/{all/max-pg-per-osd.from-primary mon_election/classic msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados supported-random-distro$/{ubuntu_latest}} 1
pass 7727271 2024-05-26 22:02:11 2024-05-27 05:36:11 2024-05-27 06:11:08 0:34:57 0:25:29 0:09:28 smithi main centos 9.stream rados/thrash-erasure-code-isa/{arch/x86_64 ceph clusters/{fixed-2 openstack} mon_election/connectivity msgr-failures/fastclose objectstore/bluestore-comp-zstd rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-rados-plugin=isa-k=2-m=1} 2
dead 7727255 2024-05-26 22:01:55 2024-05-27 05:25:14 2024-05-27 05:26:28 0:01:14 smithi main ubuntu 22.04 rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-2} backoff/peering ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/classic msgr-failures/fastclose msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{ubuntu_latest} thrashers/none thrashosds-health workloads/pool-snaps-few-objects} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi049

pass 7727231 2024-05-26 21:28:24 2024-05-27 00:43:39 2024-05-27 01:53:30 1:09:51 1:00:13 0:09:38 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/dbench}} 3
pass 7727203 2024-05-26 21:27:55 2024-05-27 00:17:54 2024-05-27 00:44:20 0:26:26 0:15:43 0:10:43 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-2-client conf/{client mds mgr mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/mdtest} 4
pass 7727116 2024-05-26 21:26:28 2024-05-26 23:02:22 2024-05-27 00:18:36 1:16:14 1:05:36 0:10:38 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
pass 7727083 2024-05-26 21:25:54 2024-05-26 22:37:15 2024-05-26 23:02:15 0:25:00 0:16:25 0:08:35 smithi main centos 8.stream fs/upgrade/upgraded_client/{bluestore-bitmap branch/pacific centos_8.stream clusters/{1-mds-1-client-micro} conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn pg_health} tasks/{0-install 1-mount/mount/fuse 2-clients/fuse-upgrade 3-workload/stress_tests/fsstress}} 2
dead 7727075 2024-05-26 21:25:46 2024-05-26 22:30:51 2024-05-26 22:33:25 0:02:34 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/suites/blogbench}} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi203