Description: fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}}

Log: http://qa-proxy.ceph.com/teuthology/pdonnell-2023-10-16_23:59:58-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7430910/teuthology.log

Sentry event: https://sentry.ceph.com/organizations/ceph/?query=8e4a2e13a12840c8ace298bf51322e34

Failure Reason:

Enabling a Copr repository. Please note that this repository is not part of the main distribution, and quality may vary. The Fedora Project does not exercise any power over the contents of this repository beyond the rules outlined in the Copr FAQ at <https://docs.pagure.org/copr.copr/user_documentation.html#what-i-can-build-in-copr>, and packages are not held to any quality or security level. Please do not file bug reports about these packages in Fedora Bugzilla. In case of problems, contact the owner of this repository.

Error: Failed to connect to https://copr.fedorainfracloud.org/coprs/ceph/python3-asyncssh/repo/epel-8/dnf.repo?arch=x86_64: Network is unreachable
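teuthology records the failure reason as an alphabetically sorted list of stderr lines, which interleaves the boilerplate Copr-repository notice with the actual dnf error. A minimal sketch (a hypothetical helper, not part of teuthology) for pulling the actionable `Error:` line back out of such a list:

```python
def extract_error(lines):
    """Return only the lines carrying the actual dnf error,
    skipping the surrounding Copr-repository boilerplate."""
    return [line for line in lines if line.startswith("Error:")]

# Abbreviated copy of the failure_reason list from this job:
failure_reason = [
    'Bugzilla. In case of problems, contact the owner of this repository.',
    'Enabling a Copr repository. Please note that this repository is not part',
    'Error: Failed to connect to https://copr.fedorainfracloud.org/coprs/ceph/'
    'python3-asyncssh/repo/epel-8/dnf.repo?arch=x86_64: Network is unreachable',
    'Please do not file bug reports about these packages in Fedora',
]

print(extract_error(failure_reason)[0])
```

Here the filter isolates the `Network is unreachable` failure, which is the only line that matters for triage; the rest is the standard notice dnf prints whenever a Copr repository is enabled.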

  • log_href: http://qa-proxy.ceph.com/teuthology/pdonnell-2023-10-16_23:59:58-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7430910/teuthology.log
  • archive_path: /home/teuthworker/archive/pdonnell-2023-10-16_23:59:58-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7430910
  • description: fs/upgrade/featureful_client/old_client/{bluestore-bitmap centos_8.stream clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{ignorelist_health ignorelist_wrongly_marked_down multimds/yes pg-warn} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}}
  • duration: 0:07:57
  • email: pdonnell@redhat.com
  • failure_reason: Enabling a Copr repository. Please note that this repository is not part of the main distribution, and quality may vary. The Fedora Project does not exercise any power over the contents of this repository beyond the rules outlined in the Copr FAQ, and packages are not held to any quality or security level. Please do not file bug reports about these packages in Fedora Bugzilla. In case of problems, contact the owner of this repository. Error: Failed to connect to https://copr.fedorainfracloud.org/coprs/ceph/python3-asyncssh/repo/epel-8/dnf.repo?arch=x86_64: Network is unreachable
  • flavor:
  • job_id: 7430910
  • kernel:
    • kdb: True
    • sha1: distro
  • last_in_suite: False
  • machine_type: smithi
  • name: pdonnell-2023-10-16_23:59:58-fs-wip-batrick-testing-20231016.203825-distro-default-smithi
  • nuke_on_error: True
  • os_type: centos
  • os_version: 8.stream
  • overrides:
    • admin_socket:
      • branch: wip-batrick-testing-20231016.203825
    • ceph:
      • cephfs:
        • max_mds: 2
      • cluster-conf:
        • client:
          • client mount timeout: 600
          • debug client: 20
          • debug ms: 1
          • rados mon op timeout: 900
          • rados osd op timeout: 900
        • mds:
          • debug mds: 20
          • debug mds balancer: 20
          • debug ms: 1
          • mds bal fragment size max: 10000
          • mds bal merge size: 5
          • mds bal split bits: 3
          • mds bal split size: 100
          • mds debug frag: True
          • mds debug scatterstat: True
          • mds op complaint time: 180
          • mds verify scatter: True
          • osd op complaint time: 180
          • rados mon op timeout: 900
          • rados osd op timeout: 900
        • mon:
          • mon op complaint time: 120
        • osd:
          • osd op complaint time: 180
      • conf:
        • global:
          • bluestore warn on legacy statfs: False
          • bluestore warn on no per pool omap: False
          • mon pg warn min per osd: 0
        • mgr:
          • debug mgr: 20
          • debug ms: 1
        • mon:
          • debug mon: 20
          • debug ms: 1
          • debug paxos: 20
          • mon warn on osd down out interval zero: False
        • osd:
          • bdev async discard: True
          • bdev enable discard: True
          • bluestore allocator: bitmap
          • bluestore block size: 96636764160
          • bluestore fsck on mount: True
          • debug bluefs: 1/20
          • debug bluestore: 1/20
          • debug ms: 1
          • debug osd: 20
          • debug rocksdb: 4/10
          • mon osd backfillfull_ratio: 0.85
          • mon osd full ratio: 0.9
          • mon osd nearfull ratio: 0.8
          • osd failsafe full ratio: 0.95
          • osd objectstore: bluestore
      • flavor: default
      • fs: xfs
      • log-ignorelist:
        • \(MDS_ALL_DOWN\)
        • \(MDS_UP_LESS_THAN_MAX\)
        • overall HEALTH_
        • \(FS_DEGRADED\)
        • \(MDS_FAILED\)
        • \(MDS_DEGRADED\)
        • \(FS_WITH_FAILED_MDS\)
        • \(MDS_DAMAGE\)
        • \(MDS_ALL_DOWN\)
        • \(MDS_UP_LESS_THAN_MAX\)
        • \(FS_INLINE_DATA_DEPRECATED\)
        • \(POOL_APP_NOT_ENABLED\)
        • overall HEALTH_
        • \(OSD_DOWN\)
        • \(OSD_
        • but it is still running
        • is not responding
        • scrub mismatch
        • ScrubResult
        • wrongly marked
        • \(POOL_APP_NOT_ENABLED\)
        • \(SLOW_OPS\)
        • overall HEALTH_
        • \(MON_MSGR2_NOT_ENABLED\)
        • slow request
      • sha1: f7420d7f8e55ceea786dcbf536c338ce920c21eb
    • ceph-deploy:
      • bluestore: True
      • conf:
        • client:
          • log file: /var/log/ceph/ceph-$name.$pid.log
        • mon:
          • osd default pool size: 2
        • osd:
          • bdev async discard: True
          • bdev enable discard: True
          • bluestore block size: 96636764160
          • bluestore fsck on mount: True
          • debug bluefs: 1/20
          • debug bluestore: 1/20
          • debug rocksdb: 4/10
          • mon osd backfillfull_ratio: 0.85
          • mon osd full ratio: 0.9
          • mon osd nearfull ratio: 0.8
          • osd failsafe full ratio: 0.95
          • osd objectstore: bluestore
      • fs: xfs
    • install:
      • ceph:
        • flavor: default
        • sha1: f7420d7f8e55ceea786dcbf536c338ce920c21eb
    • selinux:
      • whitelist:
        • scontext=system_u:system_r:logrotate_t:s0
    • thrashosds:
      • bdev_inject_crash: 2
      • bdev_inject_crash_probability: 0.5
    • workunit:
      • branch: wip-batrick-testing-20231016.203825
      • sha1: f7420d7f8e55ceea786dcbf536c338ce920c21eb
  • owner: scheduled_pdonnell@teuthology
  • pid:
  • roles:
    • ['mon.a', 'mon.b', 'mon.c', 'mgr.x', 'mgr.y', 'mds.a', 'mds.b', 'mds.c', 'osd.0', 'osd.1', 'osd.2', 'osd.3']
    • ['client.0']
    • ['client.1']
  • sentry_event: https://sentry.ceph.com/organizations/ceph/?query=8e4a2e13a12840c8ace298bf51322e34
  • status: dead
  • success: False
  • branch: wip-batrick-testing-20231016.203825
  • seed:
  • sha1: f7420d7f8e55ceea786dcbf536c338ce920c21eb
  • subset:
  • suite:
  • suite_branch: wip-batrick-testing-20231016.203825
  • suite_path:
  • suite_relpath:
  • suite_repo:
  • suite_sha1: f7420d7f8e55ceea786dcbf536c338ce920c21eb
  • targets:
    • smithi078.front.sepia.ceph.com: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK7Nx7z7bBDivepMKbAaOVYa5p8Sf8V5tBvDu0q/U1L27KGwOnmq4YEM5ACDjLuUNpGeGNpTqpNRlJOqJOBMCgU=
    • smithi133.front.sepia.ceph.com: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEZTfx0lPP2e1N15dq974CmzOEF3J5ouKLlVE/WIIiU6ZC0OfcqXT9Keq/nITGsirpSma2QGoVoBWaJcfyd9/Ag=
    • smithi187.front.sepia.ceph.com: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP8GY7oGtcmJGPyZNXquF//xl9vZRDCYrhk2eEwoCkIk/qyqZ5XqGxllDL8s+QAI6yUxEUfEx1HDkuhtQVLQZlI=
  • tasks:
    • internal.check_packages:
    • internal.buildpackages_prep:
    • internal.save_config:
    • internal.check_lock:
    • internal.add_remotes:
    • console_log:
    • internal.connect:
    • internal.push_inventory:
    • internal.serialize_remote_roles:
    • internal.check_conflict:
    • internal.check_ceph_data:
    • internal.vm_setup:
    • kernel:
      • kdb: True
      • sha1: distro
    • internal.base:
    • internal.archive_upload:
    • internal.archive:
    • internal.coredump:
    • internal.sudo:
    • internal.syslog:
    • internal.timer:
    • pcp:
    • selinux:
    • ansible.cephlab:
    • clock:
    • install:
      • branch: octopus
      • exclude_packages:
        • librados3
        • ceph-mgr-dashboard
        • ceph-mgr-diskprediction-local
        • ceph-mgr-rook
        • ceph-mgr-cephadm
        • cephadm
        • ceph-volume
      • extra_packages:
        • librados2
    • print: **** done installing octopus
    • ceph:
      • conf:
        • global:
          • mon warn on pool no app: False
          • ms bind msgr2: False
      • log-ignorelist:
        • overall HEALTH_
        • \(FS_
        • \(MDS_
        • \(OSD_
        • \(MON_DOWN\)
        • \(CACHE_POOL_
        • \(POOL_
        • \(MGR_DOWN\)
        • \(PG_
        • \(SMALLER_PGP_NUM\)
        • Monitor daemon marked osd
        • Behind on trimming
        • Manager daemon
    • exec:
      • osd.0:
        • ceph osd set-require-min-compat-client octopus
    • print: **** done ceph
    • ceph-fuse:
    • print: **** done octopus client
    • workunit:
      • clients:
        • all:
          • suites/fsstress.sh
    • print: **** done fsstress
    • mds_pre_upgrade:
    • print: **** done mds pre-upgrade sequence
    • install.upgrade:
      • mon.a:
        • branch: quincy
    • print: **** done install.upgrade the host
    • ceph.restart:
      • daemons:
        • mon.*
        • mgr.*
      • mon-health-to-clog: False
      • wait-for-healthy: False
    • ceph.healthy:
    • ceph.restart:
      • daemons:
        • osd.*
      • wait-for-healthy: False
      • wait-for-osds-up: True
    • ceph.stop:
      • mds.*
    • ceph.restart:
      • daemons:
        • mds.*
      • wait-for-healthy: False
      • wait-for-osds-up: True
    • exec:
      • mon.a:
        • ceph osd dump -f json-pretty
        • ceph versions
        • ceph osd require-osd-release quincy
        • for f in `ceph osd pool ls` ; do ceph osd pool set $f pg_autoscale_mode off ; done
    • ceph.healthy:
    • print: **** done ceph.restart
    • workunit:
      • clients:
        • all:
          • suites/fsstress.sh
    • print: **** done fsstress
  • teuthology_branch: main
  • verbose: False
  • pcp_grafana_url:
  • priority:
  • user:
  • queue:
  • posted: 2023-10-17 00:03:20
  • started: 2023-10-18 14:29:18
  • updated: 2023-10-18 14:49:14
  • status_class: danger
  • runtime: 0:19:56
  • wait_time: 0:11:59