Name:          smithi080.front.sepia.ceph.com
Machine Type:  smithi
Up:            False
Locked:        True
Locked Since:  2024-06-04 17:46:45.853716
Locked By:     scheduled_cbodley@teuthology
OS Type:
OS Version:
Arch:          x86_64
Description:   reimage failed 10 times
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
dead 7740844 2024-06-04 17:20:53 2024-06-04 17:46:15 2024-06-04 17:47:49 0:01:34 smithi main ubuntu 22.04 rgw/thrash/{clusters/fixed-2 frontend/beast ignore-pg-availability install objectstore/bluestore-bitmap s3tests-branch thrasher/default thrashosds-health ubuntu_latest workload/rgw_s3tests} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi080

dead 7740735 2024-06-04 14:12:44 2024-06-04 16:26:01 2024-06-04 16:30:06 0:04:05 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-quiesce/with-quiesce 6-workunit/fs/norstats}} 3
Failure Reason:

Error reimaging machines: Failed to power on smithi080

dead 7740709 2024-06-04 14:12:24 2024-06-04 15:09:11 2024-06-04 16:16:16 1:07:05 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/suites/dbench}} 3
Failure Reason:

Error reimaging machines: reached maximum tries (241) after waiting for 3600 seconds

dead 7740622 2024-06-04 12:39:57 2024-06-04 13:03:35 2024-06-04 14:07:50 1:04:15 smithi main ubuntu 22.04 rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/read mon_election/connectivity msgr-failures/fastclose msgr/async objectstore/bluestore-comp-zlib rados supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/admin_socket_objecter_requests} 4
Failure Reason:

Error reimaging machines: reached maximum tries (241) after waiting for 3600 seconds

dead 7740537 2024-06-04 07:20:53 2024-06-04 08:58:04 2024-06-04 10:00:18 1:02:14 smithi main centos 9.stream orch:cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Error reimaging machines: reached maximum tries (241) after waiting for 3600 seconds

dead 7740481 2024-06-04 07:19:40 2024-06-04 07:57:24 2024-06-04 08:58:16 1:00:52 smithi main centos 9.stream orch:cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn pg_health syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Error reimaging machines: reached maximum tries (241) after waiting for 3600 seconds

dead 7740280 2024-06-03 22:10:44 2024-06-04 03:35:10 2024-06-04 04:35:39 1:00:29 smithi main rhel 8.6 orch/cephadm/workunits/{0-distro/rhel_8.6_container_tools_rhel8 agent/on mon_election/connectivity task/test_cephadm} 1
Failure Reason:

Error reimaging machines: reached maximum tries (241) after waiting for 3600 seconds

dead 7740189 2024-06-03 22:09:18 2024-06-04 02:33:29 2024-06-04 03:33:47 1:00:18 smithi main centos 8.stream orch/cephadm/upgrade/{1-start-distro/1-start-centos_8.stream_container-tools 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/off mon_election/classic} 2
Failure Reason:

Error reimaging machines: reached maximum tries (241) after waiting for 3600 seconds

dead 7740099 2024-06-03 21:32:19 2024-06-04 01:30:34 2024-06-04 02:32:00 1:01:26 smithi main ubuntu 22.04 powercycle/osd/{clusters/3osd-1per-target ignorelist_health objectstore/bluestore-comp-snappy powercycle/default supported-distros/ubuntu_latest tasks/cfuse_workunit_misc thrashosds-health} 4
Failure Reason:

Error reimaging machines: reached maximum tries (241) after waiting for 3600 seconds

dead 7740038 2024-06-03 21:10:46 2024-06-04 00:16:55 2024-06-04 01:25:11 1:08:16 smithi main centos 9.stream orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream_runc mon_election/connectivity task/test_orch_cli_mon} 5
Failure Reason:

Error reimaging machines: reached maximum tries (241) after waiting for 3600 seconds

fail 7740024 2024-06-03 21:10:33 2024-06-03 23:51:23 2024-06-04 00:24:50 0:33:27 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
Failure Reason:

Failed to reconnect to smithi080

pass 7739953 2024-06-03 21:09:27 2024-06-03 22:08:41 2024-06-03 23:51:43 1:43:02 1:16:04 0:26:58 smithi main centos 9.stream orch/cephadm/thrash/{0-distro/centos_9.stream 1-start 2-thrash 3-tasks/small-objects fixed-2 msgr/async-v1only root} 2
dead 7739952 2024-06-03 21:09:26 2024-06-03 22:06:00 2024-06-03 22:09:44 0:03:44 smithi main ubuntu 22.04 orch/cephadm/osds/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi080

fail 7739702 2024-06-03 19:45:35 2024-06-03 20:09:20 2024-06-03 20:19:30 0:10:10 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/3-size-2-min-size 1-install/quincy backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/on mon_election/connectivity msgr-failures/fastclose rados thrashers/default thrashosds-health workloads/radosbench} 3
Failure Reason:

Command failed on smithi016 with status 1: 'sudo yum install -y kernel'

pass 7739653 2024-06-03 14:57:15 2024-06-03 18:27:28 2024-06-03 20:12:24 1:44:56 1:09:33 0:35:23 smithi main centos 9.stream orch:cephadm/with-work/{0-distro/centos_9.stream fixed-2 mode/packaged mon_election/classic msgr/async-v1only start tasks/rotate-keys} 2
dead 7739650 2024-06-03 14:57:12 2024-06-03 18:18:05 2024-06-03 18:25:30 0:07:25 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nvmeof 3-final} 2
Failure Reason:

Error reimaging machines: Failed to power on smithi033

pass 7739596 2024-06-03 14:56:03 2024-06-03 16:43:28 2024-06-03 18:24:24 1:40:56 1:04:32 0:36:24 smithi main centos 9.stream orch:cephadm/workunits/{0-distro/centos_9.stream_runc agent/off mon_election/classic task/test_ca_signed_key} 2
pass 7739160 2024-06-02 22:06:01 2024-06-03 20:19:36 2024-06-03 22:08:37 1:49:01 1:16:20 0:32:41 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/classic msgr-failures/few objectstore/bluestore-hybrid rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/minsize_recovery thrashosds-health workloads/ec-rados-plugin=clay-k=4-m=2} 2
pass 7738866 2024-06-02 21:29:07 2024-06-03 02:57:08 2024-06-03 03:40:06 0:42:58 0:33:50 0:09:08 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-quiesce/no 6-workunit/suites/pjd}} 3
fail 7738825 2024-06-02 21:28:25 2024-06-03 02:12:35 2024-06-03 02:57:10 0:44:35 0:31:51 0:12:44 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-comp-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-quiesce/no 6-workunit/suites/fsx}} 3
Failure Reason:

Command failed (workunit test suites/fsx.sh) on smithi016 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=dfcc4b3c2c55532c4c04fa47551c9df1dffdf746 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/fsx.sh'