Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description
smithi040.front.sepia.ceph.com | smithi | True | True | 2024-03-19 07:16:32.651946 | scheduled_pdonnell@teuthology | centos | 9 | x86_64 | /home/teuthworker/archive/pdonnell-2024-03-19_04:56:42-fs-wip-batrick-testing-20240318.181317-distro-default-smithi/7610589

Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7610648 2024-03-19 05:00:35 2024-03-19 05:30:56 2024-03-19 06:17:32 0:46:36 0:33:29 0:13:07 smithi main ubuntu 22.04 fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu/{overrides ubuntu_latest}} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
running 7610589 2024-03-19 04:58:00 2024-03-19 07:16:32 2024-03-19 08:03:07 0:47:08 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.0} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/no 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7610544 2024-03-19 04:57:09 2024-03-19 06:16:16 2024-03-19 07:02:59 0:46:43 0:35:10 0:11:33 smithi main centos 9.stream fs/cephadm/multivolume/{0-start 1-mount 2-workload/dbench distro/single-container-host} 2
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi189 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a76be9e33cc5b22f5135f29be5c73cd2ad978d87 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="1" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.1 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.1 CEPH_MNT=/home/ubuntu/cephtest/mnt.1 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.1/qa/workunits/suites/dbench.sh'

pass 7610334 2024-03-19 04:33:27 2024-03-19 04:53:56 2024-03-19 05:31:33 0:37:37 0:27:56 0:09:41 smithi main centos 9.stream fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu/{overrides ubuntu_latest}} kclient-overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped} objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
pass 7610254 2024-03-19 01:21:31 2024-03-19 04:29:43 2024-03-19 04:54:06 0:24:23 0:14:22 0:10:01 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/replicated s3tests-branch tasks/rgw_multipart_upload ubuntu_latest} 2
pass 7610151 2024-03-19 01:04:21 2024-03-19 03:22:53 2024-03-19 04:30:06 1:07:13 0:56:46 0:10:27 smithi main centos 9.stream rgw/verify/{0-install clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{off} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-equals-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
fail 7609433 2024-03-18 21:09:55 2024-03-19 02:16:27 2024-03-19 03:22:54 1:06:27 0:50:40 0:15:47 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/legacy wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/direct_io}} 3
Failure Reason:

"2024-03-19T03:06:11.311483+0000 mon.a (mon.0) 208 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 7609388 2024-03-18 21:09:33 2024-03-19 01:50:14 2024-03-19 02:17:37 0:27:23 0:16:54 0:10:29 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore-ec/bluestore-comp-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down prefetch_dirfrags/yes prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/3 tasks/{1-thrash/osd 2-workunit/suites/iozone}} 2
pass 7609314 2024-03-18 21:08:55 2024-03-19 01:19:15 2024-03-19 01:52:02 0:32:47 0:23:31 0:09:16 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
fail 7609268 2024-03-18 21:08:31 2024-03-19 00:30:09 2024-03-19 01:18:18 0:48:09 0:26:34 0:21:35 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
Failure Reason:

Command failed on smithi040 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell 4.29 deep-scrub'

pass 7609224 2024-03-18 21:07:46 2024-03-18 23:31:16 2024-03-19 00:31:05 0:59:49 0:37:21 0:22:28 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v2 tasks/damage} 2
fail 7609143 2024-03-18 21:06:23 2024-03-18 21:32:36 2024-03-18 23:30:35 1:57:59 1:25:48 0:32:11 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/ffsb}} 3
Failure Reason:

"2024-03-18T22:34:41.931759+0000 mon.a (mon.0) 207 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 7608885 2024-03-18 14:21:20 2024-03-18 15:17:50 2024-03-18 15:52:43 0:34:53 0:25:16 0:09:37 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/fsync-tester}} 3
fail 7608814 2024-03-18 14:20:09 2024-03-18 14:21:11 2024-03-18 15:08:30 0:47:19 0:32:05 0:15:14 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/fsync-tester}} 3
Failure Reason:

"2024-03-18T14:54:44.198569+0000 mon.a (mon.0) 684 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

pass 7608765 2024-03-18 08:53:11 2024-03-18 09:08:45 2024-03-18 10:18:23 1:09:38 0:59:22 0:10:16 smithi main ubuntu 22.04 rados:thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-1} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/osd-delay msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{ubuntu_latest} thrashers/pggrow thrashosds-health workloads/radosbench} 2
fail 7608551 2024-03-17 23:19:23 2024-03-18 00:37:42 2024-03-18 02:57:02 2:19:20 2:08:45 0:10:35 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/legacy wsync/no} objectstore-ec/bluestore-comp omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/fs/misc}} 3
Failure Reason:

"2024-03-18T00:56:01.040376+0000 mon.a (mon.0) 212 : cluster [WRN] mon.b (rank 2) addr [v2:172.21.15.151:3300/0,v1:172.21.15.151:6789/0] is down (out of quorum)" in cluster log

fail 7608501 2024-03-17 23:18:28 2024-03-17 23:35:49 2024-03-18 00:23:34 0:47:45 0:37:01 0:10:44 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/no 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7608492 2024-03-17 23:18:19 2024-03-17 23:24:44 2024-03-17 23:35:46 0:11:02 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/dbench}} 3
Failure Reason:

Command failed on smithi040 with status 1: 'sudo yum upgrade -y linux-firmware'

dead 7608474 2024-03-17 23:18:00 2024-03-17 23:23:57 2024-03-17 23:36:05 0:12:08 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/suites/iogen}} 3
Failure Reason:

SSH connection to smithi040 was lost: 'rm -f /tmp/kernel.x86_64.rpm && echo kernel-6.8.0_rc7_g3845b7ac715a-1.x86_64.rpm | wget -nv -O /tmp/kernel.x86_64.rpm --base=https://2.chacra.ceph.com/r/kernel/testing/3845b7ac715a3ff198582638d199a90e30e254da/centos/9/flavors/default/x86_64/ --input-file=-'

dead 7608473 2024-03-17 23:17:59 2024-03-17 23:23:56 2024-03-17 23:28:32 0:04:36 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
Failure Reason:

Error reimaging machines: Expected smithi040's OS to be centos 8 but found centos 9