Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi069.front.sepia.ceph.com smithi True True 2024-03-28 15:26:08.611138 mchangir centos 9 x86_64 /home/teuthworker/archive/mchangir-2024-03-27_03:45:23-fs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/7625318
Status Job ID Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7628000 2024-03-28 07:18:00 2024-03-28 14:58:47 2024-03-28 15:26:00 0:27:13 0:16:47 0:10:26 smithi main centos 9.stream fs/fscrypt/{begin/{0-install 1-ceph 2-logrotate 3-modules} bluestore-bitmap clusters/1-mds-1-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} overrides/{ignorelist_health ignorelist_health_more ignorelist_wrongly_marked_down osd pg-warn pg_health} tasks/{0-client 1-tests/fscrypt-pjd}} 3
pass 7627940 2024-03-28 07:17:02 2024-03-28 14:06:38 2024-03-28 14:58:52 0:52:14 0:42:01 0:10:13 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/postgres}} 3
pass 7627884 2024-03-28 07:16:08 2024-03-28 13:22:38 2024-03-28 14:07:34 0:44:56 0:33:32 0:11:24 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/metrics} 2
pass 7627808 2024-03-28 07:14:55 2024-03-28 12:34:58 2024-03-28 13:23:54 0:48:56 0:36:34 0:12:22 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/fs/test_o_trunc}} 3
pass 7627654 2024-03-27 22:56:13 2024-03-27 23:07:53 2024-03-27 23:37:37 0:29:44 0:14:16 0:15:28 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile s3tests-branch tasks/rgw_bucket_quota ubuntu_latest} 2
pass 7626791 2024-03-27 18:08:22 2024-03-27 18:25:19 2024-03-27 19:01:10 0:35:51 0:21:35 0:14:16 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/default thrashosds-health workloads/small-objects-localized} 2
pass 7626734 2024-03-27 16:43:23 2024-03-27 17:21:33 2024-03-27 18:27:25 1:05:52 0:54:42 0:11:10 smithi main centos 9.stream rgw/verify/{0-install clusters/fixed-2 datacache/rgw-datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-equals-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
pass 7626702 2024-03-27 16:42:57 2024-03-27 17:21:54 1691 smithi main ubuntu 22.04 rgw/cloud-transition/{cluster ignore-pg-availability overrides s3tests-branch supported-random-distro$/{ubuntu_latest} tasks/cloud_transition_s3tests} 1
pass 7626648 2024-03-27 15:03:31 2024-03-27 22:17:10 2024-03-27 22:46:05 0:28:55 0:23:00 0:05:55 smithi main rhel 8.6 rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-async-recovery} supported-random-distro$/{rhel_8} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=lrc-k=4-m=2-l=3} 3
fail 7626578 2024-03-27 15:02:37 2024-03-27 21:42:08 2024-03-27 22:08:05 0:25:57 0:19:30 0:06:27 smithi main rhel 8.6 rados/singleton/{all/test-noautoscale-flag mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed on smithi069 with status 128: 'rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/ceph/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 7c37d80f7a88f4eb561c4fe02558660a650b10ac'

pass 7626524 2024-03-27 15:01:54 2024-03-27 21:13:03 2024-03-27 21:42:52 0:29:49 0:19:14 0:10:35 smithi main centos 9.stream rados/singleton-nomsgr/{all/multi-backfill-reject mon_election/connectivity rados supported-random-distro$/{centos_latest}} 2
pass 7626447 2024-03-27 15:00:53 2024-03-27 19:26:41 2024-03-27 21:13:33 1:46:52 0:54:53 0:51:59 smithi main centos 8.stream rados/cephadm/smoke/{0-distro/centos_8.stream_container_tools_crun 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
pass 7626421 2024-03-27 15:00:34 2024-03-27 18:58:17 2024-03-27 19:30:11 0:31:54 0:20:40 0:11:14 smithi main rhel 8.6 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{rhel_8} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7626390 2024-03-27 15:00:11 2024-03-27 16:12:47 2024-03-27 16:44:14 0:31:27 0:18:37 0:12:50 smithi main ubuntu 20.04 rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
pass 7626364 2024-03-27 14:58:42 2024-03-27 15:38:13 2024-03-27 16:16:00 0:37:47 0:18:12 0:19:35 smithi main centos 8.stream rgw/singleton/{all/radosgw-admin frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile supported-random-distro$/{centos_8}} 2
fail 7626343 2024-03-27 14:58:27 2024-03-27 15:02:04 2024-03-27 15:37:38 0:35:34 0:22:45 0:12:49 smithi main ubuntu 22.04 rgw/thrash/{clusters/fixed-2 frontend/beast ignore-pg-availability install objectstore/bluestore-bitmap s3tests-branch thrasher/default thrashosds-health ubuntu_latest workload/rgw_s3tests} 2
Failure Reason:

Command failed (s3 tests against rgw) on smithi069 with status 1: "source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/s3-tests-client.0 && S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto-client.0.cfg REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt tox -- -v -m 'not fails_on_rgw and not lifecycle_expiration and not test_of_sts and not webidentity_test and not fails_with_subdomain and not sse_s3'"

pass 7625866 2024-03-27 07:38:32 2024-03-27 11:36:05 2024-03-27 12:12:20 0:36:15 0:25:42 0:10:33 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{default} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-2 openstack} crc-failures/default d-balancer/on mon_election/connectivity msgr-failures/osd-dispatch-delay msgr/async objectstore/bluestore-stupid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/snaps-few-objects} 2
fail 7625786 2024-03-27 05:32:29 2024-03-27 10:32:34 2024-03-27 11:24:19 0:51:45 0:40:30 0:11:15 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

"2024-03-27T11:09:21.862436+0000 mon.smithi069 (mon.0) 272 : cluster [WRN] Health check failed: Degraded data redundancy: 42/216 objects degraded (19.444%), 17 pgs degraded (PG_DEGRADED)" in cluster log

fail 7625318 2024-03-27 03:47:23 2024-03-28 15:25:58 2024-03-28 16:03:03 0:37:05 0:27:36 0:09:29 smithi main centos 9.stream fs/verify/{begin/{0-install 1-ceph 2-logrotate} clusters/1a5s-mds-1c-client conf/{client mds mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{ignorelist_health ignorelist_wrongly_marked_down mon-debug session_timeout} ranks/3 tasks/dbench validater/valgrind} 2
Failure Reason:

valgrind error: Leak_PossiblyLost calloc __trans_list_add

fail 7625233 2024-03-27 03:45:55 2024-03-28 00:08:18 2024-03-28 04:08:47 4:00:29 3:49:10 0:11:19 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/secure wsync/yes} objectstore-ec/bluestore-ec-root omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/ffsb}} 3
Failure Reason:

Command failed (workunit test suites/ffsb.sh) on smithi069 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3be9a5c9bc793e11e2800a8c0c696e8b46742033 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'