Name:          smithi086.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2024-04-27 05:11:06.573595
Locked By:     scheduled_teuthology@teuthology
OS Type:       ubuntu
OS Version:    20.04
Arch:          x86_64
Description:   /home/teuthworker/archive/teuthology-2024-04-26_22:40:03-rgw-reef-distro-default-smithi/7675745
Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
running 7675745 2024-04-26 22:41:37 2024-04-27 05:10:36 2024-04-27 06:41:21 1:31:21 smithi main ubuntu 20.04 rgw/verify/{0-install clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/replicated s3tests-branch sharding$/{default} striping$/{stripe-greater-than-chunk} supported-random-distro$/{ubuntu_20.04} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
pass 7675726 2024-04-26 22:41:18 2024-04-27 04:48:46 2024-04-27 05:11:01 0:22:15 0:12:44 0:09:31 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec-profile s3tests-branch tasks/rgw_multipart_upload ubuntu_latest} 2
pass 7675704 2024-04-26 22:40:56 2024-04-27 04:24:55 2024-04-27 04:49:09 0:24:14 0:14:09 0:10:05 smithi main ubuntu 20.04 rgw/upgrade/{1-install/pacific/{distro$/{ubuntu_20.04} install overrides} 2-setup 3-upgrade-sequence/osds-then-rgws cluster frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides} 2
pass 7675679 2024-04-26 21:41:19 2024-04-27 03:22:49 2024-04-27 04:25:51 1:03:02 0:55:22 0:07:40 smithi main centos 9.stream rgw/verify/{0-install accounts$/{none} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec s3tests-branch sharding$/{default} striping$/{stripe-equals-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
fail 7675603 2024-04-26 21:11:33 2024-04-27 02:36:24 2024-04-27 03:09:18 0:32:54 0:25:31 0:07:23 smithi main centos 9.stream orch/cephadm/mds_upgrade_sequence/{bluestore-bitmap centos_9.stream conf/{client mds mgr mon osd} fail_fs/yes overrides/{ignorelist_health ignorelist_upgrade ignorelist_wrongly_marked_down pg-warn syntax} roles tasks/{0-from/quincy 1-volume/{0-create 1-ranks/1 2-allow_standby_replay/yes 3-inline/no 4-verify} 2-client/kclient 3-upgrade-mgr-staggered 4-config-upgrade/{fail_fs} 5-upgrade-with-workload 6-verify}} 2
Failure Reason:

Command failed on smithi067 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:quincy shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 87b7627e-0440-11ef-bc93-c7b262605968 -e sha1=b22e2ebdeb24376882b7bda2a7329c8cccc2276a -- bash -c \'ceph versions | jq -e \'"\'"\'.overall | length == 1\'"\'"\'\''
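
The failing assertion is the upgrade test's version-convergence check: after the staggered mgr upgrade, every daemon should report the same release. `ceph versions` prints a JSON summary, and the layered quoting above is just the jq filter escaped for `cephadm shell`. A minimal sketch of the same check run directly on a cluster host (the JSON shape in the comment is illustrative, not taken from this job's log):

    # `ceph versions` emits JSON like:
    #   {"mon": {...}, "mgr": {...}, "osd": {...},
    #    "overall": {"ceph version 17.2.x (...) quincy (stable)": 7}}
    # After a finished upgrade, .overall should hold exactly one key.
    # `jq -e` exits non-zero when the expression is false, which is the
    # status-1 failure reported here.
    ceph versions | jq -e '.overall | length == 1'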

fail 7675545 2024-04-26 21:10:34 2024-04-27 01:54:56 2024-04-27 02:27:58 0:33:02 0:25:41 0:07:21 smithi main centos 9.stream orch/cephadm/rbd_iscsi/{0-single-container-host base/install cluster/{fixed-3 openstack} conf/{disable-pool-app} workloads/cephadm_iscsi} 3
Failure Reason:

"2024-04-27T02:07:08.704008+0000 mon.a (mon.0) 209 : cluster 3 [WRN] MON_DOWN: 1/3 mons down, quorum a,c" in cluster log

pass 7675495 2024-04-26 21:09:44 2024-04-27 01:22:13 2024-04-27 01:54:46 0:32:33 0:23:44 0:08:49 smithi main centos 9.stream orch/cephadm/no-agent-workunits/{0-distro/centos_9.stream mon_election/connectivity task/test_orch_cli_mon} 5
pass 7675467 2024-04-26 21:09:17 2024-04-27 01:03:00 2024-04-27 01:22:18 0:19:18 0:10:43 0:08:35 smithi main centos 9.stream orch/cephadm/smb/{0-distro/centos_9.stream_runc tasks/deploy_smb_domain} 2
pass 7675428 2024-04-26 21:08:39 2024-04-27 00:47:13 2024-04-27 01:05:03 0:17:50 0:10:38 0:07:12 smithi main centos 9.stream orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
fail 7675349 2024-04-26 19:36:24 2024-04-26 23:56:16 2024-04-27 00:47:01 0:50:45 0:38:58 0:11:47 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/kernel_untar_build}} 3
Failure Reason:

Command failed on smithi026 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"
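
Exit status 110 from the ceph CLI most plausibly maps to errno 110 (ETIMEDOUT), i.e. the mon command timed out rather than being rejected; that reading is an inference, not stated in the log. For context, the ephemeral-pin syntax the workload exercises (arguments taken from the failing command above):

    # ceph fs subvolumegroup pin <vol_name> <group_name> <pin_type> <pin_setting>
    # pin_type is one of: export | distributed | random
    # Here: randomly pin ~10% of the "qa" group's subtrees across MDS ranks.
    sudo ceph fs subvolumegroup pin cephfs qa random 0.10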

pass 7675293 2024-04-26 19:35:14 2024-04-26 23:03:23 2024-04-26 23:57:46 0:54:23 0:43:50 0:10:33 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-workunit/direct_io}} 3
pass 7675265 2024-04-26 19:34:40 2024-04-26 22:19:15 2024-04-26 23:04:40 0:45:25 0:34:05 0:11:20 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/1 standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/fs/norstats}} 3
pass 7675216 2024-04-26 19:33:40 2024-04-26 21:11:56 2024-04-26 22:19:28 1:07:32 0:58:14 0:09:18 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/client-recovery} 2
pass 7675194 2024-04-26 19:33:12 2024-04-26 20:50:34 2024-04-26 21:12:14 0:21:40 0:11:06 0:10:34 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/backtrace} 2
pass 7675172 2024-04-26 19:32:44 2024-04-26 20:23:01 2024-04-26 20:51:02 0:28:01 0:15:09 0:12:52 smithi main ubuntu 22.04 fs/multiclient/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1-mds-2-client conf/{client mds mgr mon osd} distros/ubuntu_latest mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/ior-shared-file} 4
pass 7675155 2024-04-26 19:32:23 2024-04-26 19:55:49 2024-04-26 20:24:59 0:29:10 0:16:21 0:12:49 smithi main ubuntu 22.04 fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse msgr-failures/none objectstore-ec/bluestore-comp overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/5 tasks/{1-thrash/mon 2-workunit/fs/trivial_sync}} 2
pass 7675124 2024-04-26 19:31:43 2024-04-26 19:36:44 2024-04-26 19:56:04 0:19:20 0:09:52 0:09:28 smithi main centos 9.stream fs/bugs/client_trim_caps/{begin/{0-install 1-ceph 2-logrotate 3-modules} centos_latest clusters/small-cluster conf/{client mds mgr mon osd} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} tasks/trim-i24137} 1
fail 7675022 2024-04-26 18:21:43 2024-04-26 19:21:07 2024-04-26 19:36:43 0:15:36 0:06:14 0:09:22 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-active-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/default thrashosds-health workloads/admin_socket_objecter_requests} 4
Failure Reason:

Command failed on smithi039 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674974 2024-04-26 18:20:50 2024-04-26 18:58:15 2024-04-26 19:11:23 0:13:08 0:04:48 0:08:20 smithi main centos 9.stream rados/mgr/{clusters/{2-node-mgr} debug/mgr mgr_ttl_cache/disable mon_election/classic random-objectstore$/{bluestore-comp-zlib} supported-random-distro$/{centos_latest} tasks/crash} 2
Failure Reason:

Command failed on smithi086 with status 1: 'sudo yum -y install ceph-mgr-dashboard'
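
This job and 7675022 above failed identically during package install, which usually means ceph-mgr-dashboard was absent from the repo built for this sha1/distro combination rather than a node problem. A triage sketch one could run on the test node while it is still locked (assumes the job's yum repos are still configured):

    sudo yum -y install ceph-mgr-dashboard; echo "exit=$?"   # reproduce the failure
    yum info ceph-mgr-dashboard    # known to any enabled repo at all?
    yum repolist enabled           # which repos the node is actually drawing from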

fail 7674814 2024-04-26 15:07:34 2024-04-26 15:40:02 2024-04-26 16:15:53 0:35:51 0:25:35 0:10:16 smithi main ubuntu 22.04 orch:cephadm/smoke/{0-distro/ubuntu_22.04 0-nvme-loop agent/on fixed-2 mon_election/connectivity start} 2
Failure Reason:

"2024-04-26T16:00:56.759986+0000 mon.a (mon.0) 413 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log