Name:          smithi079.front.sepia.ceph.com
Machine Type:  smithi
Up:            True
Locked:        True
Locked Since:  2024-04-24 13:02:24.053811
Locked By:     scheduled_adking@teuthology
OS Type:       centos
OS Version:    9
Arch:          x86_64
Description:   /home/teuthworker/archive/adking-2024-04-24_11:41:41-orch:cephadm-wip-adk-testing-2024-04-23-1222-distro-default-smithi/7671326
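
The same lock record can be pulled programmatically. A minimal sketch, assuming the paddles REST backend behind pulpito exposes a /nodes/<name>/ endpoint; the base URL and field names here are assumptions, not a confirmed API:

```python
# Hypothetical lookup of a node's lock record via the paddles REST API
# (the backend pulpito reads from); endpoint layout is assumed.
import json
import urllib.request

PADDLES = "https://paddles.front.sepia.ceph.com"  # assumed base URL

def node_status(name: str) -> dict:
    """Fetch the lock record for one test node."""
    with urllib.request.urlopen(f"{PADDLES}/nodes/{name}/") as resp:
        return json.load(resp)

info = node_status("smithi079.front.sepia.ceph.com")
# Field names assumed to mirror the columns above.
print(info.get("locked"), info.get("locked_by"), info.get("description"))
```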
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
running 7671326 2024-04-24 11:41:57 2024-04-24 12:59:33 2024-04-24 13:39:30 0:41:47 smithi main centos 9.stream orch:cephadm/smoke-roleless/{0-distro/centos_9.stream 0-nvme-loop 1-start 2-services/nfs-ingress 3-final} 2
pass 7671266 2024-04-24 09:58:54 2024-04-24 09:59:28 2024-04-24 10:40:19 0:40:51 0:28:42 0:12:09 smithi main centos 8.stream fs:upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mgr mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/no pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} 3
fail 7671138 2024-04-24 01:33:01 2024-04-24 01:40:35 2024-04-24 02:03:29 0:22:54 0:13:47 0:09:07 smithi main centos 9.stream crimson-rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/normal_pg_log 2-recovery-overrides/{default} clusters/{fixed-2} crimson-supported-all-distro/centos_latest crimson_qa_overrides deploy/ceph objectstore/bluestore thrashers/simple thrashosds-health workloads/snaps-few-objects} 2
Failure Reason:

Command crashed: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op read 100 --op write 50 --op delete 50 --op snap_create 50 --op snap_remove 50 --op rollback 0 --op copy_from 0 --op write_excl 50 --pool unique_pool_0'

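The crashed command above is a weighted random-op stress test: each `--op <name> <weight>` pair makes that operation proportionally more likely, so reads (weight 100) are drawn twice as often as the other ops (weight 50). A toy model of that weighting, not the actual ceph_test_rados implementation:

```python
# Toy illustration of the --op weights in job 7671138's crashed command;
# ceph_test_rados itself is a C++ binary, this only models the selection.
import random

OP_WEIGHTS = {
    "read": 100, "write": 50, "delete": 50,
    "snap_create": 50, "snap_remove": 50, "write_excl": 50,
}

def next_op(rng: random.Random) -> str:
    """Pick the next operation in proportion to its weight."""
    ops, weights = zip(*OP_WEIGHTS.items())
    return rng.choices(ops, weights=weights, k=1)[0]

rng = random.Random(7671138)  # seeded with the job id, purely for repeatability
print([next_op(rng) for _ in range(5)])
```
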
pass 7671043 2024-04-24 01:17:00 2024-04-24 10:39:15 2024-04-24 13:02:21 2:23:06 2:15:17 0:07:49 smithi main rhel 8.6 upgrade:quincy-x/stress-split/{0-distro/rhel_8.6_container_tools_rhel8 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/connectivity} 2
pass 7670998 2024-04-24 01:10:54 2024-04-24 07:20:09 2024-04-24 10:00:08 2:39:59 2:29:58 0:10:01 smithi main rhel 8.6 upgrade:pacific-x/stress-split/{0-distro/rhel_8.6_container_tools_3.0 0-roles 1-start 2-first-half-tasks/rbd_api 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} 2
pass 7670961 2024-04-24 01:10:13 2024-04-24 05:00:58 2024-04-24 07:22:14 2:21:16 2:09:22 0:11:54 smithi main centos 8.stream upgrade:pacific-x/stress-split/{0-distro/centos_8.stream_container_tools 0-roles 1-start 2-first-half-tasks/rbd-import-export 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/rbd-import-export mon_election/classic} 2
pass 7670880 2024-04-24 01:08:49 2024-04-24 02:09:00 2024-04-24 05:02:27 2:53:27 2:42:02 0:11:25 smithi main centos 8.stream upgrade:pacific-x/stress-split/{0-distro/centos_8.stream_container_tools 0-roles 1-start 2-first-half-tasks/radosbench 3-stress-tasks/{radosbench rbd-cls rbd-import-export rbd_api readwrite snaps-few-objects} 4-second-half-tasks/radosbench mon_election/classic} 2
pass 7670526 2024-04-23 21:19:27 2024-04-24 00:11:36 2024-04-24 01:42:03 1:30:27 1:18:59 0:11:28 smithi main ubuntu 22.04 rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2 openstack} 3-supported-random-distro$/{ubuntu_latest} 4-cache-path 5-cache-mode/ssd 6-cache-size/1G 7-workloads/qemu_xfstests conf/{disable-pool-app}} 2
pass 7670478 2024-04-23 21:18:40 2024-04-23 23:24:41 2024-04-24 00:11:54 0:47:13 0:34:57 0:12:16 smithi main ubuntu 22.04 rbd/librbd/{cache/writethrough clusters/{fixed-3 openstack} conf/{disable-pool-app} data-pool/replicated extra-conf/permit-partial-discard min-compat-client/octopus msgr-failures/few objectstore/bluestore-comp-snappy supported-random-distro$/{ubuntu_latest} workloads/c_api_tests_with_journaling} 3
pass 7670173 2024-04-23 20:18:43 2024-04-23 20:23:12 2024-04-23 23:25:47 3:02:35 2:47:40 0:14:55 smithi main ubuntu 22.04 rbd/migration/{1-base/install 2-clusters/{fixed-3 openstack} 3-objectstore/bluestore-comp-zstd 4-supported-random-distro$/{ubuntu_latest} 5-data-pool/ec 6-prepare/qcow2-http 7-io-workloads/qemu_xfstests 8-migrate-workloads/execute 9-cleanup/cleanup conf/{disable-pool-app}} 3
fail 7669944 2024-04-23 14:58:42 2024-04-23 15:01:54 2024-04-23 15:46:56 0:45:02 0:35:21 0:09:41 smithi main centos 9.stream fs:functional/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a3s-mds-4c-client conf/{client mds mgr mon osd} distro/{centos_latest} mount/fuse objectstore/bluestore-ec-root overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile pg_health} subvol_versions/create_subvol_version_v2 tasks/admin} 2
Failure Reason:

Test failure: test_with_health_warn_with_2_active_MDSs (tasks.cephfs.test_admin.TestFSFail)

fail 7669893 2024-04-23 14:20:38 2024-04-23 17:21:57 2024-04-23 20:20:58 2:59:01 2:47:36 0:11:25 smithi main centos 8.stream rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/pacific backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps} 3
Failure Reason:

"2024-04-23T20:10:00.000149+0000 mon.a (mon.0) 1602 : cluster [WRN] Health detail: HEALTH_WARN nodeep-scrub flag(s) set" in cluster log

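This job did not crash; it failed because the post-run scrape of the cluster log found a WRN line that no ignorelist pattern covered. A rough sketch of that kind of check, with an illustrative ignorelist rather than the suite's real one:

```python
# Toy version of the cluster-log scrape that fails jobs like 7669893:
# any WRN/ERR line not matched by an ignorelist pattern marks the run failed.
# Patterns here are illustrative, not the suite's actual ignorelist.
import re

IGNORELIST = [r"MON_DOWN", r"OSDMAP_FLAGS"]

def scan_cluster_log(lines):
    """Return offending log lines not excused by the ignorelist."""
    bad = re.compile(r"\[(WRN|ERR|SEC)\]")
    allow = [re.compile(p) for p in IGNORELIST]
    return [ln for ln in lines
            if bad.search(ln) and not any(a.search(ln) for a in allow)]

hits = scan_cluster_log([
    "2024-04-23T20:10:00.000149+0000 mon.a (mon.0) 1602 : cluster [WRN] "
    "Health detail: HEALTH_WARN nodeep-scrub flag(s) set",
])
print(hits)  # non-empty -> the job is marked "fail"
```
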
fail 7669879 2024-04-23 14:20:24 2024-04-23 17:06:26 2024-04-23 17:21:51 0:15:25 0:07:07 0:08:18 smithi main centos 9.stream rados/thrash/{0-size-min-size-overrides/3-size-2-min-size 1-pg-log-overrides/short_pg_log 2-recovery-overrides/{more-async-partial-recovery} 3-scrub-overrides/{max-simultaneous-scrubs-3} backoff/peering_and_degraded ceph clusters/{fixed-4 openstack} crc-failures/default d-balancer/crush-compat mon_election/connectivity msgr-failures/few msgr/async objectstore/bluestore-hybrid rados supported-random-distro$/{centos_latest} thrashers/pggrow thrashosds-health workloads/redirect_set_object} 4
Failure Reason:

Command failed on smithi204 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

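This is the first of three failures in this listing (7669879, 7669753, 7669713) that die installing ceph-mgr-dashboard on CentOS 9 nodes. A quick diagnostic sketch for such cases, run on the affected node, asking dnf whether the package resolves from the enabled repos at all; purely illustrative, not part of the suite:

```python
# Check whether a package is resolvable on the node (CentOS 9 uses dnf).
import subprocess

def package_available(pkg: str) -> bool:
    """True if dnf can find pkg in the enabled repos."""
    result = subprocess.run(
        ["dnf", "-q", "info", pkg],
        capture_output=True, text=True,
    )
    return result.returncode == 0

print(package_available("ceph-mgr-dashboard"))
```
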
pass 7669837 2024-04-23 14:19:39 2024-04-23 16:41:04 2024-04-23 17:06:36 0:25:32 0:14:13 0:11:19 smithi main ubuntu 22.04 rados/thrash-erasure-code-shec/{ceph clusters/{fixed-4 openstack} mon_election/classic msgr-failures/fastclose objectstore/bluestore-comp-zlib rados recovery-overrides/{more-partial-recovery} supported-random-distro$/{ubuntu_latest} thrashers/careful thrashosds-health workloads/ec-rados-plugin=shec-k=4-m=3-c=2} 4
pass 7669776 2024-04-23 14:18:35 2024-04-23 16:18:49 2024-04-23 16:42:37 0:23:48 0:12:35 0:11:13 smithi main ubuntu 22.04 rados/monthrash/{ceph clusters/3-mons mon_election/classic msgr-failures/mon-delay msgr/async-v2only objectstore/bluestore-comp-zstd rados supported-random-distro$/{ubuntu_latest} thrashers/sync workloads/rados_5925} 2
fail 7669753 2024-04-23 14:18:10 2024-04-23 16:03:08 2024-04-23 16:14:55 0:11:47 0:04:00 0:07:47 smithi main centos 9.stream rados/basic/{ceph clusters/{fixed-2 openstack} mon_election/classic msgr-failures/many msgr/async-v2only objectstore/bluestore-bitmap rados supported-random-distro$/{centos_latest} tasks/rados_stress_watch} 2
Failure Reason:

Command failed on smithi079 with status 1: 'sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd sqlite-devel sqlite-devel sqlite-devel sqlite-devel'

fail 7669713 2024-04-23 14:17:28 2024-04-23 15:44:50 2024-04-23 16:01:20 0:16:30 0:05:08 0:11:22 smithi main centos 9.stream rados/thrash-erasure-code-overwrites/{bluestore-bitmap ceph clusters/{fixed-4 openstack} fast/fast mon_election/classic msgr-failures/osd-delay rados recovery-overrides/{more-async-recovery} supported-random-distro$/{centos_latest} thrashers/morepggrow thrashosds-health workloads/ec-small-objects-fast-read-overwrites} 4
Failure Reason:

Command failed on smithi114 with status 1: 'sudo yum -y install ceph-mgr-dashboard'

fail 7669563 2024-04-23 14:04:40 2024-04-23 14:05:33 2024-04-23 14:58:43 0:53:10 0:43:15 0:09:55 smithi main centos 9.stream fs/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignorelist_health pg_health} supported-random-distros$/{centos_latest} tasks/nfs} 1
Failure Reason:

"2024-04-23T14:27:46.693300+0000 mon.a (mon.0) 320 : cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log

fail 7669512 2024-04-23 09:50:09 2024-04-23 09:51:11 2024-04-23 10:15:35 0:24:24 0:15:33 0:08:51 smithi main centos 9.stream fs/thrash/workloads/{begin/{0-install 1-ceph 2-logrotate 3-modules} clusters/1a5s-mds-1c-client conf/{client mds mgr mon osd} distro/{ubuntu_latest} mount/kclient/{mount-syntax/{v2} mount overrides/{distro/stock/{centos_9.stream k-stock} ms-die-on-skipped}} msgr-failures/osd-mds-delay objectstore-ec/bluestore-ec-root overrides/{client-shutdown frag ignorelist_health ignorelist_wrongly_marked_down pg_health prefetch_dirfrags/no prefetch_dirfrags/yes prefetch_entire_dirfrags/no prefetch_entire_dirfrags/yes races session_timeout thrashosds-health} ranks/1 tasks/{1-thrash/with-quiesce 2-workunit/fs/snaps}} 2
Failure Reason:

Command failed on smithi062 with status 1: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'

pass 7669477 2024-04-23 05:01:25 2024-04-23 05:01:25 2024-04-23 05:46:53 0:45:28 0:32:26 0:13:02 smithi main ubuntu 22.04 smoke/basic/{clusters/{fixed-3-cephfs openstack} objectstore/bluestore-bitmap supported-random-distro$/{ubuntu_latest} tasks/{0-install test/libcephfs_interface_tests}} 3
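
A listing like the one above can also be rebuilt from the API. A hedged sketch, assuming paddles serves per-node job history at /nodes/<name>/jobs/ and accepts a count parameter; both the endpoint and the parameter are assumptions:

```python
# Hypothetical: pull the recent job history for a node and tally outcomes,
# assuming a paddles /nodes/<name>/jobs/ endpoint backs node pages like this.
import json
import urllib.request
from collections import Counter

PADDLES = "https://paddles.front.sepia.ceph.com"  # assumed base URL

def recent_jobs(node: str, count: int = 25) -> list:
    url = f"{PADDLES}/nodes/{node}/jobs/?count={count}"  # count param assumed
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

jobs = recent_jobs("smithi079.front.sepia.ceph.com")
print(Counter(job.get("status") for job in jobs))
# e.g. Counter({'pass': 12, 'fail': 7, 'running': 1})
```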