Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi027.front.sepia.ceph.com smithi True True 2024-04-27 08:31:09.061965 scheduled_teuthology@teuthology ubuntu 22.04 x86_64 /home/teuthworker/archive/teuthology-2024-04-25_20:32:15-powercycle-main-distro-default-smithi/7673700
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
pass 7675630 2024-04-26 21:40:31 2024-04-27 02:49:36 2024-04-27 05:40:32 2:50:56 2:41:57 0:08:59 smithi main centos 9.stream rgw/tools/{centos_latest cluster ignore-pg-availability tasks} 1
pass 7675591 2024-04-26 21:11:21 2024-04-27 02:30:59 2024-04-27 02:49:52 0:18:53 0:10:17 0:08:36 smithi main centos 9.stream orch/cephadm/smb/{0-distro/centos_9.stream tasks/deploy_smb_domain} 2
pass 7675550 2024-04-26 21:10:39 2024-04-27 01:57:08 2024-04-27 02:31:34 0:34:26 0:19:48 0:14:38 smithi main ubuntu 22.04 orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/connectivity task/test_extra_daemon_features} 2
pass 7675514 2024-04-26 21:10:03 2024-04-27 01:38:12 2024-04-27 01:58:27 0:20:15 0:12:31 0:07:44 smithi main centos 9.stream orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-services/basic 3-final} 2
pass 7675435 2024-04-26 21:08:46 2024-04-27 00:47:26 2024-04-27 01:38:11 0:50:45 0:37:25 0:13:20 smithi main ubuntu 22.04 orch/cephadm/no-agent-workunits/{0-distro/ubuntu_22.04 mon_election/classic task/test_orch_cli_mon} 5
pass 7675335 2024-04-26 19:36:06 2024-04-26 23:40:07 2024-04-27 00:21:02 0:40:55 0:28:49 0:12:06 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/iozone}} 3
pass 7675310 2024-04-26 19:35:35 2024-04-26 23:13:53 2024-04-26 23:40:28 0:26:35 0:14:01 0:12:34 smithi main centos 9.stream fs/32bits/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore-ec/bluestore-comp overrides/{faked-ino ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_pjd} 2
pass 7675270 2024-04-26 19:34:46 2024-04-26 22:23:58 2024-04-26 23:13:53 0:49:55 0:36:27 0:13:28 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-workunit/suites/fsstress}} 3
pass 7675229 2024-04-26 19:33:56 2024-04-26 21:26:54 2024-04-26 22:25:31 0:58:37 0:46:31 0:12:06 smithi main centos 9.stream fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/data-scan} 2
pass 7675181 2024-04-26 19:32:56 2024-04-26 20:36:16 2024-04-26 21:27:00 0:50:44 0:37:13 0:13:31 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated-and-quota} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-workunit/suites/iozone}} 3
pass 7675163 2024-04-26 19:32:33 2024-04-26 20:07:34 2024-04-26 20:38:32 0:30:58 0:17:29 0:13:29 smithi main ubuntu 22.04 fs/permission/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/fixed-2-ucephfs conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/fuse objectstore-ec/bluestore-comp-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/cfuse_workunit_suites_pjd} 2
fail 7675075 2024-04-26 18:22:38 2024-04-27 00:20:53 2024-04-27 00:35:38 0:14:45 0:05:06 0:09:39 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/fast mon_election/classic msgr-failures/fastclose objectstore/bluestore-low-osd-mem-target rados recovery-overrides/{more-active-recovery} supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/ec-rados-plugin=jerasure-k=3-m=1} 4
Failure Reason:

Command failed on smithi094 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7675013 2024-04-26 18:21:32 2024-04-26 19:21:03 2024-04-26 19:55:47 0:34:44 0:20:49 0:13:55 smithi main ubuntu 22.04 rados/cephadm/workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_rgw_multisite} 3
Failure Reason:

"2024-04-26T19:49:31.371594+0000 mon.a (mon.0) 459 : cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log

fail 7674987 2024-04-26 18:21:04 2024-04-26 19:05:31 2024-04-26 19:20:58 0:15:27 0:06:53 0:08:34 smithi main centos 9.stream rados/singleton/{all/max-pg-per-osd.from-primary mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest}} 1
Failure Reason:

Command failed on smithi027 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674952 2024-04-26 18:20:26 2024-04-26 18:49:15 2024-04-26 19:03:54 0:14:39 0:05:03 0:09:36 smithi main centos 9.stream rados/thrash-erasure-code/{ceph clusters/{fixed-4 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-stupid rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-radosbench} 4
Failure Reason:

Command failed on smithi103 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7674905 2024-04-26 17:25:25 2024-04-26 17:47:49 2024-04-26 18:39:08 0:51:19 0:42:06 0:09:13 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/no} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/suites/dbench}} 3
Failure Reason:

Command failed (workunit test suites/dbench.sh) on smithi026 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=bc94a2a924f4a25e7c0317e85c91b85bf7cac0b1 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/dbench.sh'

pass 7674818 2024-04-26 15:07:40 2024-04-26 15:41:24 2024-04-26 16:37:53 0:56:29 0:44:09 0:12:20 smithi main ubuntu 22.04 orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v1only start tasks/rados_api_tests} 2
pass 7674791 2024-04-26 15:07:00 2024-04-26 15:20:11 2024-04-26 15:42:36 0:22:25 0:13:57 0:08:28 smithi main centos 9.stream orch:cephadm/smoke/{0-distro/centos_9.stream_runc 0-nvme-loop agent/off fixed-2 mon_election/classic start} 2
pass 7674748 2024-04-26 12:11:32 2024-04-26 12:43:21 2024-04-26 13:11:49 0:28:28 0:14:43 0:13:45 smithi main ubuntu 22.04 rgw/thrash/{clusters/fixed-2 frontend/beast ignore-pg-availability install objectstore/bluestore-bitmap s3tests-branch thrasher/default thrashosds-health ubuntu_latest workload/rgw_user_quota} 2
pass 7674529 2024-04-26 01:30:09 2024-04-26 04:03:25 2024-04-26 04:40:47 0:37:22 0:22:55 0:14:27 smithi main ubuntu 22.04 rados/thrash-erasure-code/{ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/few objectstore/bluestore-comp-snappy rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects} 2