Name:          smithi042.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2024-03-19 02:42:18.376196
Locked By:     bhubbard@teuthology
OS Type:       centos
OS Version:    9
Arch:          x86_64
Description:   /home/bhubbard/working/archive/
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7609261 2024-03-18 21:08:24 2024-03-19 00:17:34 2024-03-19 02:42:17 2:24:43 2:12:17 0:12:26 smithi main ubuntu 22.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} 2
fail 7609185 2024-03-18 21:07:07 2024-03-18 22:41:33 2024-03-19 00:15:54 1:34:21 1:12:14 0:22:07 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/direct_io}} 3
Failure Reason:

"2024-03-18T23:52:10.407993+0000 mon.a (mon.0) 208 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log

pass 7609141 2024-03-18 21:06:21 2024-03-18 21:31:05 2024-03-18 22:42:14 1:11:09 0:51:11 0:19:58 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
fail 7608886 2024-03-18 14:21:21 2024-03-18 16:06:48 2306 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/fs/test_o_trunc}} 3
Failure Reason:

"2024-03-18T15:44:00.444973+0000 mon.a (mon.0) 686 : cluster [WRN] Health check failed: Reduced data availability: 1 pg inactive, 1 pg peering (PG_AVAILABILITY)" in cluster log

fail 7608819 2024-03-18 14:20:14 2024-03-18 14:21:14 2024-03-18 15:15:36 0:54:22 0:40:32 0:13:50 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

fail 7608739 2024-03-18 07:45:29 2024-03-18 07:46:39 2024-03-18 08:23:02 0:36:23 0:24:17 0:12:06 smithi main centos 9.stream rbd:nvmeof/{base/install centos_latest workloads/nvmeof_initiator} 4
Failure Reason:

Command failed (workunit test rbd/nvmeof_basic_tests.sh) on smithi042 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.2/client.2/tmp && cd -- /home/ubuntu/cephtest/mnt.2/client.2/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=5c0fd1f102e0e99a72886f77654730199fa4c704 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="2" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.2 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.2 CEPH_MNT=/home/ubuntu/cephtest/mnt.2 IOSTAT_INTERVAL=10 RUNTIME=600 timeout 30m /home/ubuntu/cephtest/clone.client.2/qa/workunits/rbd/nvmeof_basic_tests.sh'

fail 7608533 2024-03-17 23:19:03 2024-03-18 00:22:12 2024-03-18 02:21:11 1:58:59 1:46:33 0:12:26 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate} clusters/1a11s-mds-1c-client-3node conf/{client mds mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

error during scrub thrashing: rank damage found: {'backtrace'}

fail 7608476 2024-03-17 23:18:02 2024-03-17 23:23:58 2024-03-18 00:10:42 0:46:44 0:37:39 0:09:05 smithi main centos 9.stream fs/upgrade/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mon osd} fail_fs/no overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/reef/{v18.2.1} 1-volume/{0-create 1-ranks/2 2-allow_standby_replay/yes 3-inline/yes 4-verify} 2-client/fuse 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

reached maximum tries (51) after waiting for 300 seconds

pass 7606416 2024-03-16 15:08:08 2024-03-18 20:30:02 2024-03-18 21:31:08 1:01:06 0:17:37 0:43:29 smithi main centos 9.stream rbd/singleton/{all/read-flags-no-cache conf/{disable-pool-app} objectstore/bluestore-bitmap openstack supported-random-distro$/{centos_latest}} 1
pass 7606342 2024-03-16 15:06:26 2024-03-18 19:39:12 2024-03-18 20:32:10 0:52:58 0:40:53 0:12:05 smithi main centos 8.stream fs/mixed-clients/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{centos_8} kclient-overrides/{distro/testing/k-testing ms-die-on-skipped} objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down osd-asserts} tasks/kernel_cfuse_workunits_untarbuild_blogbench} 2
pass 7606287 2024-03-16 15:05:42 2024-03-18 18:59:14 2024-03-18 19:39:11 0:39:57 0:22:11 0:17:46 smithi main rhel 8.6 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{rhel_8} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mds 2-workunit/iozone}} 2
pass 7606208 2024-03-16 15:04:38 2024-03-18 17:44:45 2024-03-18 19:01:56 1:17:11 0:15:12 1:01:59 smithi main centos 8.stream fs/permission/{begin/{0-install 1-ceph 2-logrotate} clusters/fixed-2-ucephfs conf/{client mds mon osd} distro/{centos_8} mount/fuse objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down} tasks/cfuse_workunit_suites_pjd} 2
fail 7604934 2024-03-15 21:41:25 2024-03-18 12:17:02 2024-03-18 12:48:37 0:31:35 0:22:24 0:09:11 smithi main ubuntu 22.04 rgw/crypt/{0-cluster/fixed-1 1-ceph-install/install 2-kms/vault_transit 3-rgw/rgw 4-tests/{s3tests} ignore-pg-availability s3tests-branch ubuntu_latest} 1
Failure Reason:

Command failed (s3 tests against rgw) on smithi042 with status 1: "source /home/ubuntu/cephtest/tox-venv/bin/activate && cd /home/ubuntu/cephtest/s3-tests-client.0 && S3TEST_CONF=/home/ubuntu/cephtest/archive/s3-tests.client.0.conf BOTO_CONFIG=/home/ubuntu/cephtest/boto-client.0.cfg REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt tox -- -v -m 'not fails_on_rgw and not lifecycle_expiration and not lifecycle_transition and not cloud_transition and not test_of_sts and not webidentity_test and not fails_with_subdomain'"

pass 7604840 2024-03-15 21:10:48 2024-03-18 11:25:07 2024-03-18 12:16:52 0:51:45 0:37:52 0:13:53 smithi main ubuntu 22.04 orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_orch_cli_mon} 5
fail 7604788 2024-03-15 21:09:58 2024-03-18 10:48:42 2024-03-18 11:20:35 0:31:53 0:22:07 0:09:46 smithi main ubuntu 22.04 orch/cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/nfs-ingress-rgw-user 3-final} 2
Failure Reason:

"2024-03-18T11:10:02.180945+0000 mon.smithi042 (mon.0) 251 : cluster 3 [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)" in cluster log

pass 7604601 2024-03-15 20:16:56 2024-03-18 08:36:13 2024-03-18 10:48:40 2:12:27 2:01:31 0:10:56 smithi main ubuntu 22.04 rbd/encryption/{cache/writeback clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/ec features/defaults msgr-failures/few objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} workloads/qemu_xfstests_luks2_luks2} 3
pass 7604449 2024-03-15 20:14:50 2024-03-18 06:12:25 2024-03-18 07:47:25 1:35:00 1:26:03 0:08:57 smithi main centos 9.stream rbd/valgrind/{base/install centos_latest clusters/{fixed-1 openstack} conf/{disable-pool-app} objectstore/bluestore-comp-zlib validator/memcheck workloads/c_api_tests_with_defaults} 1
pass 7604286 2024-03-15 20:11:53 2024-03-18 04:06:20 2024-03-18 06:12:33 2:06:13 1:55:14 0:10:59 smithi main ubuntu 20.04 fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-2c-client conf/{client mds mon osd} distro/{ubuntu_20.04} mount/fuse msgr-failures/osd-mds-delay objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_snaptests}} 2
pass 7604252 2024-03-15 20:11:25 2024-03-18 03:41:12 2024-03-18 04:06:25 0:25:13 0:18:55 0:06:18 smithi main rhel 8.6 fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} subvol_versions/create_subvol_version_v1 tasks/mds_creation_retry} 2
pass 7604182 2024-03-15 20:09:31 2024-03-18 03:10:18 2024-03-18 03:41:23 0:31:05 0:21:15 0:09:50 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus-v1only backoff/normal ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/fastclose rados thrashers/none thrashosds-health workloads/test_rbd_api} 3