Name:          smithi125.front.sepia.ceph.com
Machine Type:  smithi
Up:            False
Locked:        False
Locked Since:  (not set)
Locked By:     (not set)
OS Type:       (not set)
OS Version:    (not set)
Arch:          x86_64
Description:   reimage failed 10 times
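
The record above shows smithi125 down and unlocked, with the reimage problem noted in its description. As a hedged sketch (assuming a teuthology checkout configured against this lab's lock server; exact flags can vary by teuthology version), the node's lock and OS state can be re-queried with:

    # Query the lock server for this node's current status (assumes
    # teuthology is installed and configured for the Sepia lab).
    teuthology-lock --list smithi125.front.sepia.ceph.com
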
Status  Job ID  Links  Posted  Started  Updated  Runtime  Duration  In Waiting  Machine  Teuthology Branch  OS Type  OS Version  Description  Nodes
dead 7570857 2024-02-22 07:20:00 2024-02-22 11:17:04 2024-02-22 11:56:32 0:39:28 smithi main centos 9.stream rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

dead 7570856 2024-02-22 07:20:00 2024-02-22 10:56:41 2024-02-22 11:37:09 0:40:28 smithi main centos 9.stream rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

dead 7570855 2024-02-22 07:20:00 2024-02-22 10:37:18 2024-02-22 11:16:46 0:39:28 smithi main centos 9.stream rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

dead 7570854 2024-02-22 07:20:00 2024-02-22 10:17:56 2024-02-22 10:56:28 0:38:32 smithi main centos 9.stream rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

dead 7570853 2024-02-22 07:20:00 2024-02-22 09:58:33 2024-02-22 10:37:04 0:38:31 smithi main centos 9.stream rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

dead 7570852 2024-02-22 07:20:00 2024-02-22 09:39:10 2024-02-22 10:17:42 0:38:32 smithi main centos 9.stream rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

dead 7570851 2024-02-22 07:20:00 2024-02-22 09:19:57 2024-02-22 09:58:18 0:38:21 smithi main centos 9.stream rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

dead 7570850 2024-02-22 07:20:00 2024-02-22 09:00:44 2024-02-22 09:38:51 0:38:07 smithi main centos 9.stream rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

dead 7570849 2024-02-22 07:19:59 2024-02-22 08:46:21 2024-02-22 09:19:38 0:33:17 smithi main centos 9.stream rados:thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-zstd rados recovery-overrides/{default} supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds

dead 7570585 2024-02-22 02:27:03 2024-02-22 08:29:59 2024-02-22 08:48:56 0:18:57 smithi main centos 9.stream upgrade:reef-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/readwrite 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} 2
Failure Reason: Error reimaging machines: reached maximum tries (101) after waiting for 600 seconds
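
All ten dead jobs above (7570849-7570857 and 7570585) failed the same way: reimaging smithi125 never completed, and teuthology gave up once its retry budget was exhausted. Purely as an illustration of that pattern (not teuthology's actual code), a bounded retry loop matching the quoted numbers, 101 tries over roughly 600 seconds of waiting, looks like:

    # Illustrative sketch only; node_is_up is a hypothetical readiness check
    # standing in for the real reimage/health probe.
    max_tries=101        # cap quoted in the failure reason
    interval=6           # assumed seconds between attempts (~600 s total)
    for ((try = 1; try <= max_tries; try++)); do
        if node_is_up smithi125; then
            echo "node ready after $try tries"
            exit 0
        fi
        sleep "$interval"
    done
    echo "reached maximum tries ($max_tries)" >&2
    exit 1
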

fail 7570584 2024-02-22 02:27:02 2024-02-22 08:29:59 2024-02-22 08:50:47 0:20:48 0:11:01 0:09:47 smithi main centos 9.stream upgrade:reef-x/stress-split/{0-distro/centos_9.stream 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} 2
Failure Reason: Command failed on smithi125 with status 5: 'sudo systemctl stop ceph-01b3e2a6-d15f-11ee-95bf-87774f69a715@mon.a'
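
Job 7570584 failed inside the upgrade workload itself: systemctl returned status 5 while stopping the mon unit, which with systemctl usually indicates the named unit was not loaded on the host (hedged reading of systemctl's exit codes). A troubleshooting sketch from smithi125, with the unit name copied verbatim from the failure reason:

    # Check whether the cephadm-managed unit exists and what it logged.
    sudo systemctl status 'ceph-01b3e2a6-d15f-11ee-95bf-87774f69a715@mon.a'
    sudo systemctl list-units --all 'ceph-*'
    sudo journalctl -u 'ceph-01b3e2a6-d15f-11ee-95bf-87774f69a715@mon.a' --no-pager | tail -n 50
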

fail 7570397 2024-02-22 01:14:50 2024-02-22 01:42:59 2024-02-22 08:36:01 6:53:02 6:41:40 0:11:22 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
Failure Reason: Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --no-omap --max-ops 400000 --objects 1024 --max-in-flight 64 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 600 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --op setattr 25 --op rmattr 25 --op copy_from 50 --op append 50 --op write_excl 50 --op append_excl 50 --pool unique_pool_0'
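
The crashed command in job 7570397 is easier to read re-wrapped. This is the same ceph_test_rados invocation, unchanged, where each trailing --op pair appears to be an operation name followed by its relative weight:

    CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage \
        /home/ubuntu/cephtest/archive/coverage \
        ceph_test_rados --no-omap \
        --max-ops 400000 --objects 1024 --max-in-flight 64 \
        --size 4000000 --min-stride-size 400000 --max-stride-size 800000 \
        --max-seconds 600 \
        --op read 100 --op write 50 --op delete 50 \
        --op snap_create 50 --op snap_remove 50 --op rollback 50 \
        --op setattr 25 --op rmattr 25 --op copy_from 50 \
        --op append 50 --op write_excl 50 --op append_excl 50 \
        --pool unique_pool_0
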

pass 7570265 2024-02-21 23:09:15 2024-02-22 00:25:35 2024-02-22 01:44:04 1:18:29 0:49:24 0:29:05 smithi main centos 8.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
pass 7570229 2024-02-21 23:08:43 2024-02-21 23:55:57 2024-02-22 00:25:25 0:29:28 0:16:01 0:13:27 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore-ec/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
fail 7570047 2024-02-21 22:32:22 2024-02-21 23:06:50 2024-02-21 23:54:27 0:47:37 0:36:33 0:11:04 smithi main centos 9.stream fs/shell/{begin/{0-install 1-ceph 2-logrotate} clusters/1-mds-1-client-coloc conf/{client mds mon osd} distro/centos_latest mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/cephfs-shell} 2
Failure Reason: Test failure: test_reading_conf (tasks.cephfs.test_cephfs_shell.TestShellOpts)
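
The cephfs-shell failure in job 7570047 is a single Python test case. As a hedged local-repro sketch (following the Ceph developer guide's vstart_runner workflow; assumes a built source tree, a vstart cluster, and the qa/teuthology modules available on PYTHONPATH):

    # Run from the build/ directory of a Ceph checkout.
    ../src/vstart.sh -d -n                 # bring up a throwaway local cluster
    python3 ../qa/tasks/vstart_runner.py \
        tasks.cephfs.test_cephfs_shell.TestShellOpts.test_reading_conf
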

pass 7569953 2024-02-21 22:25:41 2024-02-21 22:26:24 2024-02-21 23:07:23 0:40:59 0:31:09 0:09:50 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/radosbench-high-concurrency} 2
pass 7569919 2024-02-21 20:43:30 2024-02-21 20:43:30 2024-02-21 21:31:37 0:48:07 0:12:43 0:35:24 smithi main centos 8.stream fs/upgrade/upgraded_client/from_nautilus/{bluestore-bitmap centos_latest clusters/{1-mds-1-client-micro} conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn} tasks/{0-nautilus 1-client-upgrade 2-client-sanity}} 2
pass 7569866 2024-02-21 20:42:48 2024-02-21 20:42:48 2024-02-21 21:08:40 0:25:52 0:16:10 0:09:42 smithi main ubuntu 20.04 fs/functional/{begin clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/sessionmap} 2
pass 7569732 2024-02-21 19:22:28 2024-02-21 19:23:27 2024-02-21 20:19:06 0:55:39 0:46:43 0:08:56 smithi main ubuntu 22.04 rgw/verify/{0-install clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-greater-than-chunk} supported-random-distro$/{ubuntu_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/lockdep} 2
pass 7569650 2024-02-21 14:10:20 2024-02-21 14:12:44 2024-02-21 16:19:41 2:06:57 1:58:03 0:08:54 smithi main centos 9.stream fs:mirror/{begin/{0-install 1-ceph 2-logrotate} cephfs-mirror/one-per-cluster clients/{mirror} cluster/{1-node} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health} supported-random-distros$/{centos_latest} tasks/mirror} 1