Name:          smithi060.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2024-04-27 09:19:10.810907
Locked By:     scheduled_teuthology@teuthology
OS Type:       ubuntu
OS Version:    22.04
Arch:          x86_64
Description:   /home/teuthworker/archive/teuthology-2024-04-26_20:40:14-rgw-main-distro-default-smithi/7675395
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 7675776 2024-04-27 03:08:53 2024-04-27 03:36:31 2024-04-27 05:57:25 2:20:54 2:11:28 0:09:26 smithi main centos 9.stream upgrade/quincy-x/stress-split/{0-distro/centos_9.stream_runc 0-roles 1-start 2-first-half-tasks/rbd-cls 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/connectivity} 2
Failure Reason: "1714189954.1054058 mon.a (mon.0) 172 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN)" in cluster log

pass 7675660 2024-04-26 21:41:00 2024-04-27 03:07:20 2024-04-27 03:38:42 0:31:22 0:21:54 0:09:28 smithi main ubuntu 22.04 rgw/tempest/{0-install clusters/fixed-1 frontend/beast ignore-pg-availability overrides s3tests-branch tasks/tempest ubuntu_latest} 1
fail 7675583 2024-04-26 21:11:13 2024-04-27 02:22:45 2024-04-27 03:02:15 0:39:30 0:32:09 0:07:21 smithi main centos 9.stream orch/cephadm/upgrade/{1-start-distro/1-start-centos_9.stream 2-repo_digest/defaut 3-upgrade/simple 4-wait 5-upgrade-ls agent/on mon_election/connectivity} 2
Failure Reason: "2024-04-27T02:35:55.249713+0000 mon.a (mon.0) 374 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

pass 7675555 2024-04-26 21:10:44 2024-04-27 02:01:10 2024-04-27 02:23:10 0:22:00 0:14:38 0:07:22 smithi main centos 9.stream orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/classic task/test_host_drain} 3
pass 7675520 2024-04-26 21:10:09 2024-04-27 01:40:45 2024-04-27 01:58:44 0:17:59 0:11:03 0:06:56 smithi main centos 9.stream orch/cephadm/smoke-small/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/connectivity start} 3
pass 7675446 2024-04-26 21:08:56 2024-04-27 00:47:30 2024-04-27 01:40:55 0:53:25 0:45:07 0:08:18 smithi main ubuntu 22.04 orch/cephadm/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs} 1
running 7675395 2024-04-26 20:42:52 2024-04-27 09:19:00 2024-04-27 09:32:42 0:14:27 smithi main ubuntu 22.04 rgw/multifs/{clusters/fixed-2 frontend/beast ignore-pg-availability objectstore/bluestore-bitmap overrides rgw_pool_type/ec s3tests-branch tasks/rgw_ragweed ubuntu_latest} 2
pass 7675269 2024-04-26 19:34:45 2024-04-26 22:23:57 2024-04-27 00:31:09 2:07:12 1:58:05 0:09:07 smithi main centos 9.stream fs/thrash/multifs/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-2c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse msgr-failures/none objectstore/bluestore-bitmap overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down multifs pg_health session_timeout thrashosds-health} tasks/{1-thrash/mon 2-workunit/cfuse_workunit_snaptests}} 2
pass 7675235 2024-04-26 19:34:03 2024-04-26 21:39:48 2024-04-26 22:23:56 0:44:08 0:32:27 0:11:41 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/no 5-workunit/suites/pjd}} 3
fail 7675184 2024-04-26 19:33:00 2024-04-26 20:38:38 2024-04-26 21:27:29 0:48:51 0:39:19 0:09:32 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/no 4-flush/yes 5-workunit/suites/pjd}} 3
Failure Reason: Command failed on smithi060 with status 110: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'ceph fs subvolumegroup pin cephfs qa random 0.10'"

fail 7675129 2024-04-26 19:31:50 2024-04-26 19:36:46 2024-04-26 20:30:20 0:53:34 0:42:25 0:11:09 smithi main ubuntu 22.04 fs/mirror-ha/{begin/{0-install 1-ceph 2-logrotate 3-modules} cephfs-mirror/{1-volume-create-rm 2-three-per-cluster} clients/{mirror} cluster/{1-node} objectstore/bluestore-bitmap overrides/{ignorelist_health pg_health} supported-random-distro$/{ubuntu_latest} workloads/cephfs-mirror-ha-workunit} 1
Failure Reason: reached maximum tries (51) after waiting for 300 seconds

fail 7675079 2024-04-26 18:22:43 2024-04-27 00:29:36 2024-04-27 00:44:57 0:15:21 0:05:10 0:10:11 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/redirect_set_object} 4
Failure Reason: Command failed on smithi060 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7675024 2024-04-26 18:21:45 2024-04-26 19:21:08 2024-04-26 19:31:55 0:10:47 0:05:00 0:05:47 smithi main centos 9.stream rados/singleton/{all/osd-recovery-incomplete mon_election/classic msgr-failures/many msgr/async-v1only objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest}} 1
Failure Reason: Command failed on smithi060 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674994 2024-04-26 18:21:11 2024-04-26 19:05:34 2024-04-26 19:20:16 0:14:42 0:06:01 0:08:41 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-4 openstack} crc-failures/bad_map_crc_failure d-balancer/on mon_election/classic msgr-failures/osd-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/careful thrashosds-health workloads/small-objects-balanced} 4
Failure Reason: Command failed on smithi060 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674964 2024-04-26 18:20:39 2024-04-26 18:50:00 2024-04-26 19:01:42 0:11:42 0:05:05 0:06:37 smithi main centos 9.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/classic msgr-failures/osd-dispatch-delay objectstore/bluestore-stupid rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{centos_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
Failure Reason: Command failed on smithi151 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674940 2024-04-26 18:20:13 2024-04-26 18:36:16 2024-04-26 18:49:34 0:13:18 0:04:57 0:08:21 smithi main centos 9.stream rados/verify/{centos_latest ceph clusters/{fixed-2 openstack} d-thrash/default/{default thrashosds-health} mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados tasks/rados_api_tests validater/lockdep} 2
Failure Reason: Command failed on smithi060 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674866 2024-04-26 15:08:52 2024-04-26 16:16:57 2024-04-26 16:38:05 0:21:08 0:14:52 0:06:16 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/nfs 3-final} 2
Failure Reason: "2024-04-26T16:34:01.377737+0000 mon.smithi026 (mon.0) 786 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)" in cluster log

pass 7674823 2024-04-26 15:07:48 2024-04-26 15:46:47 2024-04-26 16:17:07 0:30:20 0:19:22 0:10:58 smithi main ubuntu 22.04 orch:cephadm/smoke-roleless/{0-distro/ubuntu_22.04 0-nvme-loop 1-start 2-services/basic 3-final} 2
pass 7674746 2024-04-26 12:11:31 2024-04-26 12:41:30 2024-04-26 13:58:28 1:16:58 1:10:12 0:06:46 smithi main centos 9.stream rgw/verify/{0-install accounts$/{main} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/https rgw_pool_type/replicated s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{centos_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2
pass 7674674 2024-04-26 07:24:18 2024-04-26 07:54:57 2024-04-26 09:34:27 1:39:30 1:28:17 0:11:13 smithi main ubuntu 22.04 rgw/verify/{0-install accounts$/{tenant} clusters/fixed-2 datacache/no_datacache frontend/beast ignore-pg-availability inline-data$/{on} msgr-failures/few objectstore/bluestore-bitmap overrides proto/http rgw_pool_type/ec-profile s3tests-branch sharding$/{single} striping$/{stripe-greater-than-chunk} supported-random-distro$/{ubuntu_latest} tasks/{bucket-check cls mp_reupload ragweed reshard s3tests-java s3tests versioning} validater/valgrind} 2