Name Machine Type Up Locked Locked Since Locked By OS Type OS Version Arch Description
smithi120.front.sepia.ceph.com smithi True True 2024-05-27 15:42:06.644494 scheduled_teuthology@teuthology centos 9 x86_64 /home/teuthworker/archive/teuthology-2024-05-21_20:16:16-rbd-main-distro-default-smithi/7718820
Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 7727941 2024-05-27 14:18:47 2024-05-27 14:23:26 2024-05-27 14:42:52 0:19:26 0:10:04 0:09:22 smithi main centos 9.stream rgw:notifications/{beast bluestore-bitmap fixed-2 ignore-pg-availability overrides tasks/kafka/{0-install supported-distros/{centos_latest} test_kafka}} 2
Failure Reason:

Command failed on smithi038 with status 1: 'cd /home/ubuntu/cephtest/kafka_2.13-2.6.0/bin && ./kafka-server-stop.sh /home/ubuntu/cephtest/kafka_2.13-2.6.0/config/kafka.properties'

fail 7727807 2024-05-27 07:27:40 2024-05-27 08:51:24 2024-05-27 09:53:57 1:02:33 0:51:37 0:10:56 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-bitmap omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/with-quiesce 6-workunit/kernel_untar_build}} 3
Failure Reason:

Command failed (workunit test kernel_untar_build.sh) on smithi008 with status 2: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=a9cb3a581a309a99b72997ae5ddb88084f5484c9 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/kernel_untar_build.sh'

pass 7727771 2024-05-27 07:25:45 2024-05-27 08:00:45 2024-05-27 08:52:36 0:51:51 0:42:29 0:09:22 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} ms_mode/crc wsync/yes} objectstore-ec/bluestore-comp omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/yes 4-flush/no 5-quiesce/with-quiesce 6-workunit/postgres}} 3
pass 7727736 2024-05-27 06:48:25 2024-05-27 10:43:37 2024-05-27 11:15:32 0:31:55 0:20:03 0:11:52 smithi main ubuntu 22.04 fs/functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v1 tasks/multimds_misc} 2
pass 7727711 2024-05-27 06:47:57 2024-05-27 09:49:22 2024-05-27 10:43:40 0:54:18 0:39:47 0:14:31 smithi main ubuntu 22.04 fs/snaps/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} objectstore-ec/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down pg_health} tasks/workunit/snaps} 2
fail 7727658 2024-05-27 05:54:06 2024-05-27 07:07:15 2024-05-27 08:01:27 0:54:12 0:42:06 0:12:06 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/5 replication/always} standby-replay tasks/{0-subvolume/{with-quota} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/yes 5-quiesce/no 6-workunit/suites/pjd}} 3
Failure Reason:

Command failed on smithi120 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:38f9d900c83c63799fbdbe61acc9a11b0d3554a6 shell --fsid 6afde528-1bfa-11ef-bc9b-c7b262605968 -- ceph daemon mds.a perf dump'

fail 7727610 2024-05-27 05:53:14 2024-05-27 06:12:49 2024-05-27 07:07:06 0:54:17 0:43:12 0:11:05 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/yes 3-snaps/yes 4-flush/no 5-quiesce/no 6-workunit/suites/blogbench}} 3
Failure Reason:

Command failed on smithi120 with status 22: 'sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:38f9d900c83c63799fbdbe61acc9a11b0d3554a6 shell --fsid ea398dae-1bf2-11ef-bc9b-c7b262605968 -- ceph daemon mds.a perf dump'

pass 7727568 2024-05-27 00:25:02 2024-05-27 03:16:46 2024-05-27 04:37:00 1:20:14 1:10:19 0:09:55 smithi main ubuntu 20.04 upgrade:octopus-x/parallel/{0-distro/ubuntu_20.04 0-start 1-tasks mon_election/classic upgrade-sequence workload/{ec-rados-default rados_api rados_loadgenbig rbd_import_export test_rbd_api}} 2
pass 7727510 2024-05-26 22:06:10 2024-05-27 12:58:59 2024-05-27 13:28:30 0:29:31 0:18:18 0:11:13 smithi main centos 8.stream rados/cephadm/workunits/{0-distro/centos_8.stream_container_tools_crun agent/off mon_election/classic task/test_set_mon_crush_locations} 3
pass 7727479 2024-05-26 22:05:39 2024-05-27 12:27:51 2024-05-27 13:00:59 0:33:08 0:21:27 0:11:41 smithi main ubuntu 20.04 rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-2 openstack} fast/normal mon_election/connectivity msgr-failures/osd-dispatch-delay rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{ubuntu_20.04} thrashers/default thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 2
pass 7727439 2024-05-26 22:05:00 2024-05-27 12:05:14 2024-05-27 12:28:53 0:23:39 0:14:32 0:09:07 smithi main centos 8.stream rados/thrash/{0-size-min-size-overrides/2-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{more-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/normal ceph clusters/{fixed-2 openstack} crc-failures/bad_map_crc_failure d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async-v2only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_8} thrashers/morepggrow thrashosds-health workloads/cache-agent-small} 2
pass 7727395 2024-05-26 22:04:16 2024-05-27 11:42:22 2024-05-27 12:05:35 0:23:13 0:13:39 0:09:34 smithi main centos 8.stream rados/singleton-nomsgr/{all/recovery-unfound-found mon_election/classic rados supported-random-distro$/{centos_8}} 1
pass 7727344 2024-05-26 22:03:24 2024-05-27 11:14:08 2024-05-27 11:42:29 0:28:21 0:16:57 0:11:24 smithi main centos 9.stream rados/monthrash/{ceph clusters/3-mons mon_election/connectivity msgr-failures/mon-delay msgr/async-v1only objectstore/bluestore-low-osd-mem-target rados supported-random-distro$/{centos_latest} thrashers/force-sync-many workloads/pool-create-delete} 2
fail 7727258 2024-05-26 22:01:58 2024-05-27 05:29:56 2024-05-27 06:13:42 0:43:46 0:35:57 0:07:49 smithi main rhel 8.6 rados/singleton-bluestore/{all/cephtool mon_election/classic msgr-failures/none msgr/async-v2only objectstore/bluestore-comp-lz4 rados supported-random-distro$/{rhel_8}} 1
Failure Reason:

Command failed (workunit test cephtool/test.sh) on smithi120 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=19f3c03194952f81fb0d3dd9621f2ed4b14b4e1d TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'

pass 7727221 2024-05-26 21:28:14 2024-05-27 00:34:14 2024-05-27 01:42:45 1:08:31 0:56:41 0:11:50 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/fuse objectstore-ec/bluestore-bitmap omap_limit/10 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/random export-check n/3 replication/always} standby-replay tasks/{0-subvolume/{with-no-extra-options} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/no 5-workunit/fs/misc}} 3
pass 7727138 2024-05-26 21:26:50 2024-05-26 23:20:13 2024-05-27 00:35:35 1:15:22 1:03:13 0:12:09 smithi main centos 9.stream fs/workload/{0-centos_9.stream begin/{0-install 1-cephadm 2-logrotate 3-modules} clusters/1a11s-mds-1c-client-3node conf/{client mds mgr mon osd} mount/kclient/{base/{mount-syntax/{v1} mount overrides/{distro/testing/k-testing ms-die-on-skipped}} ms_mode/secure wsync/no} objectstore-ec/bluestore-ec-root omap_limit/10000 overrides/{cephsqlite-timeout frag ignorelist_health ignorelist_wrongly_marked_down osd-asserts pg_health session_timeout} ranks/multi/{balancer/automatic export-check n/3 replication/default} standby-replay tasks/{0-subvolume/{with-namespace-isolated} 1-check-counter 2-scrub/no 3-snaps/no 4-flush/yes 5-workunit/suites/ffsb}} 3
pass 7727099 2024-05-26 21:26:10 2024-05-26 22:49:03 2024-05-26 23:22:23 0:33:20 0:21:33 0:11:47 smithi main centos 8.stream fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no multimds/yes pg-warn pg_health} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/quincy}} 3
pass 7726962 2024-05-26 21:06:17 2024-05-27 02:38:23 2024-05-27 03:17:08 0:38:45 0:26:55 0:11:50 smithi main centos 9.stream rados/thrash-erasure-code-big/{ceph cluster/{12-osds openstack} mon_election/connectivity msgr-failures/osd-delay objectstore/bluestore-bitmap rados recovery-overrides/{more-async-partial-recovery} supported-random-distro$/{centos_latest} thrashers/fastread thrashosds-health workloads/ec-rados-plugin=jerasure-k=4-m=2} 3
pass 7726909 2024-05-26 21:05:25 2024-05-27 02:14:40 2024-05-27 02:39:31 0:24:51 0:14:03 0:10:48 smithi main centos 9.stream rados/cephadm/osds/{0-distro/centos_9.stream_runc 0-nvme-loop 1-start 2-ops/rm-zap-add} 2
dead 7726901 2024-05-26 21:05:17 2024-05-27 02:10:06 2024-05-27 02:11:31 0:01:25 smithi main ubuntu 22.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/osd-delay objectstore/bluestore-hybrid rados recovery-overrides/{more-active-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
Failure Reason:

Error reimaging machines: Failed to power on smithi120