Description: rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps}

Log: http://qa-proxy.ceph.com/teuthology/yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi/7487682/teuthology.log

Sentry event: https://sentry.ceph.com/organizations/ceph/?query=430b231412934625b724b7151e90f08f

Failure Reason:

Error: Request to https://copr.fedorainfracloud.org/coprs/ceph/python3-asyncssh/repo/epel-8/dnf.repo?arch=x86_64 failed: 500 - HTTP Error 500: INTERNAL SERVER ERROR
Error: Request to https://copr.fedorainfracloud.org/coprs/ceph/python3-asyncssh/repo/epel-8/dnf.repo?arch=x86_64 failed: 500 - HTTP Error 500: Internal Server Error

The remaining captured stderr lines are dnf's standard notice for enabling a Copr repository ("Enabling a Copr repository. Please note that this repository is not part of the main distribution, and quality may vary. The Fedora Project does not exercise any power over the contents of this repository beyond the rules outlined in the Copr FAQ at <https://docs.pagure.org/copr.copr/user_documentation.html#what-i-can-build-in-copr>, and packages are not held to any quality or security level. Please do not file bug reports about these packages in Fedora Bugzilla. In case of problems, contact the owner of this repository.").
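
The failure came from dnf while it was enabling the ceph/python3-asyncssh Copr repository as part of package setup on the test nodes: the Copr frontend answered the repo-file request with HTTP 500, and the job went dead after about six minutes. A quick way to check whether the endpoint has recovered is to request the same URL the dnf copr plugin fetches (a diagnostic sketch with plain curl from any host with outbound HTTPS, not part of the job itself):

    curl -sS -o /dev/null -w '%{http_code}\n' \
        'https://copr.fedorainfracloud.org/coprs/ceph/python3-asyncssh/repo/epel-8/dnf.repo?arch=x86_64'

A 200 here means the outage was transient and the job is a candidate for a plain rerun; a persistent 500 points at the Copr service rather than anything in this branch.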

  • log_href: http://qa-proxy.ceph.com/teuthology/yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi/7487682/teuthology.log
  • archive_path: /home/teuthworker/archive/yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi/7487682
  • description: rados/thrash-old-clients/{0-distro$/{centos_8.stream_container_tools} 0-size-min-size-overrides/2-size-2-min-size 1-install/nautilus backoff/peering_and_degraded ceph clusters/{openstack three-plus-one} d-balancer/crush-compat mon_election/classic msgr-failures/osd-delay rados thrashers/careful thrashosds-health workloads/cache-snaps}
  • duration: 0:06:08
  • email: yweinste@redhat.com
  • failure_reason: dnf could not enable the ceph/python3-asyncssh Copr repository: Request to https://copr.fedorainfracloud.org/coprs/ceph/python3-asyncssh/repo/epel-8/dnf.repo?arch=x86_64 failed: 500 - HTTP Error 500: INTERNAL SERVER ERROR (the other captured stderr lines are dnf's standard Copr repository notice; see Failure Reason above)
  • flavor:
  • job_id: 7487682
  • kernel:
    • kdb: True
    • sha1: distro
  • last_in_suite: False
  • machine_type: smithi
  • name: yuriw-2023-12-11_23:27:14-rados-wip-yuri8-testing-2023-12-11-1101-distro-default-smithi
  • nuke_on_error: True
  • os_type: centos
  • os_version: 8.stream
  • overrides:
    • admin_socket:
      • branch: wip-yuri8-testing-2023-12-11-1101
    • ceph:
      • conf:
        • global:
          • mon client directed command retry: 5
          • mon election default strategy: 1
          • ms inject delay max: 1
          • ms inject delay probability: 0.005
          • ms inject delay type: osd
          • ms inject internal delays: 0.002
          • ms inject socket failures: 2500
          • osd_pool_default_min_size: 2
          • osd_pool_default_size: 2
        • mgr:
          • debug mgr: 20
          • debug ms: 1
        • mon:
          • debug mon: 20
          • debug ms: 1
          • debug paxos: 20
          • mon min osdmap epochs: 50
          • mon osdmap full prune interval: 2
          • mon osdmap full prune min: 15
          • mon osdmap full prune txsize: 2
          • mon scrub interval: 300
          • paxos service trim min: 10
        • osd:
          • bluestore zero block detection: True
          • debug ms: 1
          • debug osd: 20
          • osd backoff on degraded: True
          • osd backoff on peering: True
          • osd blocked scrub grace period: 3600
          • osd debug reject backfill probability: 0.3
          • osd debug verify cached snaps: True
          • osd debug verify missing on start: True
          • osd max backfills: 3
          • osd max markdown count: 1000
          • osd mclock override recovery settings: True
          • osd mclock profile: high_recovery_ops
          • osd op queue: debug_random
          • osd op queue cut off: debug_random
          • osd scrub max interval: 120
          • osd scrub min interval: 60
          • osd shutdown pgref assert: True
          • osd snap trim sleep: 2
      • flavor: default
      • log-ignorelist:
        • \(MDS_ALL_DOWN\)
        • \(MDS_UP_LESS_THAN_MAX\)
        • \(REQUEST_STUCK\)
        • \(MON_DOWN\)
        • \(OSD_SLOW_PING_TIME
        • but it is still running
        • objects unfound and apparently lost
        • \(POOL_APP_NOT_ENABLED\)
        • overall HEALTH_
        • \(OSDMAP_FLAGS\)
        • \(OSD_
        • \(PG_
        • \(POOL_
        • \(CACHE_POOL_
        • \(SMALLER_PGP_NUM\)
        • \(OBJECT_
        • \(SLOW_OPS\)
        • \(REQUEST_SLOW\)
        • \(TOO_FEW_PGS\)
        • slow request
        • timeout on replica
        • late reservation from
        • must scrub before tier agent can activate
      • sha1: 21b8309530d48c7266e1db5c3ccbe10963f99737
    • ceph-deploy:
      • conf:
        • client:
          • log file: /var/log/ceph/ceph-$name.$pid.log
        • mon:
    • install:
      • ceph:
        • flavor: default
        • sha1: 21b8309530d48c7266e1db5c3ccbe10963f99737
    • selinux:
      • whitelist:
        • scontext=system_u:system_r:logrotate_t:s0
    • workunit:
      • branch: wip-yuri8-testing-2023-12-11-1101
      • sha1: 21b8309530d48c7266e1db5c3ccbe10963f99737
  • owner: scheduled_yuriw@teuthology
  • pid:
  • roles:
    • ['mon.a', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0']
    • ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1']
    • ['mon.c', 'osd.8', 'osd.9', 'osd.10', 'osd.11', 'client.2']
  • sentry_event: https://sentry.ceph.com/organizations/ceph/?query=430b231412934625b724b7151e90f08f
  • status: dead
  • success: False
  • branch: wip-yuri8-testing-2023-12-11-1101
  • seed:
  • sha1: 21b8309530d48c7266e1db5c3ccbe10963f99737
  • subset:
  • suite:
  • suite_branch: wip-yuri8-testing-2023-12-11-1101
  • suite_path:
  • suite_relpath:
  • suite_repo:
  • suite_sha1: 21b8309530d48c7266e1db5c3ccbe10963f99737
  • targets:
    • smithi100.front.sepia.ceph.com: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEVq2NjzGXmkp7DPL6KCQEodOUt0BYVqMLmYCZjb6e6IgfDIVAh8hm4iNuD/s27e+4LQQyV2trss7Y2xwdQ5SXQ=
    • smithi107.front.sepia.ceph.com: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKSVFcPDqJ8XRrlSVZrbZ4L9mT6PaGtZIWXoh9syfiDttc9FDyL3ae9WqhdrXChGpVVKzRDQ0YbQxR2ud6RChBk=
    • smithi137.front.sepia.ceph.com: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAydphWbWOpSV35lZvkWf0ch0mK0jdjGCWtoDQF8KDj4dCCGDcuAa3aIo6bV+3d1rsFBXaybpRxep17kx6T1jEc=
  • tasks:
    • internal.save_config:
    • internal.check_lock:
    • internal.add_remotes:
    • console_log:
    • internal.connect:
    • internal.push_inventory:
    • internal.serialize_remote_roles:
    • internal.check_conflict:
    • internal.check_ceph_data:
    • internal.vm_setup:
    • kernel:
      • kdb: True
      • sha1: distro
    • internal.base:
    • internal.archive_upload:
    • internal.archive:
    • internal.coredump:
    • internal.sudo:
    • internal.syslog:
    • internal.timer:
    • pcp:
    • selinux:
    • ansible.cephlab:
    • clock:
    • pexec:
      • all:
        • sudo cp /etc/containers/registries.conf /etc/containers/registries.conf.backup
        • sudo dnf -y module reset container-tools
        • sudo dnf -y module install container-tools --allowerasing --nobest
        • sudo cp /etc/containers/registries.conf.backup /etc/containers/registries.conf
    • install:
      • branch: nautilus
      • exclude_packages:
        • cephadm
        • ceph-mgr-cephadm
        • ceph-immutable-object-cache
        • python3-rados
        • python3-rgw
        • python3-rbd
        • python3-cephfs
        • ceph-volume
      • extra_packages:
        • python-rados
        • python-rgw
        • python-rbd
        • python-cephfs
    • cephadm:
      • conf:
        • mon:
          • auth allow insecure global id reclaim: True
    • exec:
      • mon.a:
        • while ! ceph balancer status ; do sleep 1 ; done
        • ceph balancer mode crush-compat
        • ceph balancer on
    • thrashosds:
      • aggressive_pg_num_changes: False
      • chance_pgnum_grow: 1
      • chance_pgpnum_fix: 1
      • timeout: 1200
    • exec:
      • client.0:
        • sudo ceph osd pool create base 4
        • sudo ceph osd pool application enable base rados
        • sudo ceph osd pool create cache 4
        • sudo ceph osd tier add base cache
        • sudo ceph osd tier cache-mode cache writeback
        • sudo ceph osd tier set-overlay base cache
        • sudo ceph osd pool set cache hit_set_type bloom
        • sudo ceph osd pool set cache hit_set_count 8
        • sudo ceph osd pool set cache hit_set_period 3600
        • sudo ceph osd pool set cache target_max_objects 250
        • sudo ceph osd pool set cache min_read_recency_for_promote 2
    • rados:
      • clients:
        • client.2
      • objects: 500
      • op_weights:
        • cache_evict: 50
        • cache_flush: 50
        • cache_try_flush: 50
        • copy_from: 50
        • delete: 50
        • read: 100
        • rollback: 50
        • snap_create: 50
        • snap_remove: 50
        • write: 100
      • ops: 4000
      • pools:
        • base
  • teuthology_branch: main
  • verbose: True
  • pcp_grafana_url:
  • priority:
  • user:
  • queue:
  • posted: 2023-12-11 23:30:47
  • started: 2023-12-12 00:57:20
  • updated: 2023-12-12 01:14:56
  • status_class: danger
  • runtime: 0:17:36
  • wait_time: 0:11:28
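
Since the run died while dnf was talking to Copr, the failure can be checked independently of the wip-yuri8-testing-2023-12-11-1101 build. A minimal sketch for reproducing the failing repo enablement on a CentOS 8.stream host, assuming the dnf copr plugin is what fetched the failing dnf.repo URL (the URL pattern in the error matches that plugin):

    sudo dnf install -y dnf-plugins-core             # provides the 'dnf copr' subcommand
    sudo dnf -y copr enable ceph/python3-asyncssh    # requests the same epel-8 dnf.repo file that returned 500

If the enable succeeds, the Copr outage has cleared and rerunning the dead job should be enough; nothing in this failure implicates the branch under test.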