Nodes: smithi088

Description: upgrade/cephfs/nofs/{bluestore-bitmap centos_8.stream conf/{client mds mgr mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn pg_health} tasks/{0-pacific 1-upgrade}}

Log: http://qa-proxy.ceph.com/teuthology/yuriw-2024-04-10_14:18:30-upgrade-wip-yuri6-testing-2024-04-02-1310-distro-default-smithi/7650716/teuthology.log

Sentry event: https://sentry.ceph.com/organizations/ceph/?query=51332c8be35d4accba6cad9dd0250f71

Failure Reason:

Enabling a Copr repository. Please note that this repository is not part
of the main distribution, and quality may vary. The Fedora Project does
not exercise any power over the contents of this repository beyond the
rules outlined in the Copr FAQ at
<https://docs.pagure.org/copr.copr/user_documentation.html#what-i-can-build-in-copr>,
and packages are not held to any quality or security level. Please do not
file bug reports about these packages in Fedora Bugzilla. In case of
problems, contact the owner of this repository.

Error: Failed to connect to https://copr.fedorainfracloud.org/coprs/ceph/python3-asyncssh/repo/epel-8/dnf.repo?arch=x86_64: Network is unreachable
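The recorded lines are dnf output; the actionable error is the failed fetch of the Copr repo file for python3-asyncssh ("Network is unreachable"). A minimal reachability check one might run by hand on the test node is sketched below; this is a hypothetical diagnostic, not part of the teuthology job, and the exit-code mapping is an assumption taken from curl's documented codes.

```shell
# Hypothetical diagnostic: probe the Copr repo URL that dnf could not reach
# and classify the curl exit status. Safe to run anywhere; read-only.
REPO_URL='https://copr.fedorainfracloud.org/coprs/ceph/python3-asyncssh/repo/epel-8/dnf.repo?arch=x86_64'

classify_curl_exit() {
    # Map common curl exit codes to a short diagnosis (see curl(1) EXIT CODES).
    case "$1" in
        0)  echo "reachable" ;;
        6)  echo "DNS resolution failed" ;;
        7)  echo "connection failed (network unreachable?)" ;;
        28) echo "timed out" ;;
        *)  echo "curl failed with exit $1" ;;
    esac
}

if command -v curl >/dev/null 2>&1; then
    curl -fsS --max-time 10 -o /dev/null "$REPO_URL"
    classify_curl_exit $?
else
    echo "curl not installed"
fi
```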

  • log_href: http://qa-proxy.ceph.com/teuthology/yuriw-2024-04-10_14:18:30-upgrade-wip-yuri6-testing-2024-04-02-1310-distro-default-smithi/7650716/teuthology.log
  • archive_path: /home/teuthworker/archive/yuriw-2024-04-10_14:18:30-upgrade-wip-yuri6-testing-2024-04-02-1310-distro-default-smithi/7650716
  • description: upgrade/cephfs/nofs/{bluestore-bitmap centos_8.stream conf/{client mds mgr mon osd} no-mds-cluster overrides/{ignorelist_health ignorelist_wrongly_marked_down pg-warn pg_health} tasks/{0-pacific 1-upgrade}}
  • duration: 0:11:30
  • email: yweinste@redhat.com
  • failure_reason: Error: Failed to connect to https://copr.fedorainfracloud.org/coprs/ceph/python3-asyncssh/repo/epel-8/dnf.repo?arch=x86_64: Network is unreachable (the remaining recorded lines are the standard dnf Copr repository enablement notice)
  • flavor:
  • job_id: 7650716
  • kernel:
    • kdb: 1
    • sha1: distro
  • last_in_suite: False
  • machine_type: smithi
  • name: yuriw-2024-04-10_14:18:30-upgrade-wip-yuri6-testing-2024-04-02-1310-distro-default-smithi
  • nuke_on_error: True
  • os_type: centos
  • os_version: 8.stream
  • overrides:
    • admin_socket:
      • branch: wip-yuri6-testing-2024-04-02-1310
    • ceph:
      • conf:
        • client:
          • client mount timeout: 600
          • debug client: 20
          • debug ms: 1
          • rados mon op timeout: 900
          • rados osd op timeout: 900
        • global:
          • bluestore warn on legacy statfs: False
          • bluestore warn on no per pool omap: False
          • mon pg warn min per osd: 0
        • mds:
          • debug mds: 20
          • debug mds balancer: 20
          • debug ms: 1
          • mds debug frag: True
          • mds debug scatterstat: True
          • mds op complaint time: 180
          • mds verify scatter: True
          • osd op complaint time: 180
          • rados mon op timeout: 900
          • rados osd op timeout: 900
        • mgr:
          • debug client: 20
          • debug mgr: 20
          • debug ms: 1
        • mon:
          • debug mon: 20
          • debug ms: 1
          • debug paxos: 20
          • mon down mkfs grace: 300
          • mon op complaint time: 120
          • mon warn on osd down out interval zero: False
        • osd:
          • bdev async discard: True
          • bdev enable discard: True
          • bluestore allocator: bitmap
          • bluestore block size: 96636764160
          • bluestore fsck on mount: True
          • debug bluefs: 1/20
          • debug bluestore: 1/20
          • debug ms: 1
          • debug osd: 20
          • debug rocksdb: 4/10
          • mon osd backfillfull_ratio: 0.85
          • mon osd full ratio: 0.9
          • mon osd nearfull ratio: 0.8
          • osd failsafe full ratio: 0.95
          • osd objectstore: bluestore
          • osd op complaint time: 180
      • flavor: default
      • fs: xfs
      • log-ignorelist:
        • \(MDS_ALL_DOWN\)
        • \(MDS_UP_LESS_THAN_MAX\)
        • FS_DEGRADED
        • FS_INLINE_DATA_DEPRECATED
        • FS_WITH_FAILED_MDS
        • MDS_ALL_DOWN
        • MDS_DAMAGE
        • MDS_DEGRADED
        • MDS_FAILED
        • MDS_INSUFFICIENT_STANDBY
        • MDS_UP_LESS_THAN_MAX
        • filesystem is online with fewer MDS than max_mds
        • POOL_APP_NOT_ENABLED
        • overall HEALTH_
        • Replacing daemon
        • deprecated feature inline_data
        • overall HEALTH_
        • \(OSD_DOWN\)
        • \(OSD_
        • but it is still running
        • is not responding
        • PG_AVAILABILITY
        • PG_DEGRADED
        • Reduced data availability
        • scrub mismatch
        • ScrubResult
        • wrongly marked
        • \(POOL_APP_NOT_ENABLED\)
        • \(SLOW_OPS\)
        • overall HEALTH_
        • \(MON_MSGR2_NOT_ENABLED\)
        • slow request
      • sha1: a5074d4516d566e9d8b6aec912f26afd099de101
    • ceph-deploy:
      • bluestore: True
      • conf:
        • client:
          • log file: /var/log/ceph/ceph-$name.$pid.log
        • mon:
        • osd:
          • bdev async discard: True
          • bdev enable discard: True
          • bluestore block size: 96636764160
          • bluestore fsck on mount: True
          • debug bluefs: 1/20
          • debug bluestore: 1/20
          • debug rocksdb: 4/10
          • mon osd backfillfull_ratio: 0.85
          • mon osd full ratio: 0.9
          • mon osd nearfull ratio: 0.8
          • osd failsafe full ratio: 0.95
          • osd objectstore: bluestore
      • fs: xfs
    • install:
      • ceph:
        • flavor: default
        • sha1: a5074d4516d566e9d8b6aec912f26afd099de101
    • selinux:
      • allowlist:
        • scontext=system_u:system_r:logrotate_t:s0
    • thrashosds:
      • bdev_inject_crash: 2
      • bdev_inject_crash_probability: 0.5
    • workunit:
      • branch: wip-yuri6-testing-2024-04-02-1310
      • sha1: a5074d4516d566e9d8b6aec912f26afd099de101
  • owner: scheduled_yuriw@teuthology
  • pid:
  • roles:
    • ['mon.a', 'mon.b', 'mon.c', 'mgr.x', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3']
  • sentry_event: https://sentry.ceph.com/organizations/ceph/?query=51332c8be35d4accba6cad9dd0250f71
  • status: dead
  • success: False
  • branch: wip-yuri6-testing-2024-04-02-1310
  • seed: 7488
  • sha1: a5074d4516d566e9d8b6aec912f26afd099de101
  • subset: 111/120000
  • suite: upgrade
  • suite_branch: wip-yuri6-testing-2024-04-02-1310
  • suite_path: /home/teuthworker/src/github.com_ceph_ceph-c_a5074d4516d566e9d8b6aec912f26afd099de101/qa
  • suite_relpath: qa
  • suite_repo: https://github.com/ceph/ceph-ci.git
  • suite_sha1: a5074d4516d566e9d8b6aec912f26afd099de101
  • targets:
    • smithi088.front.sepia.ceph.com: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPoJye81yeSYgMy1VlVY1RK88dEAtH2lVdznujQAP3KA7z5oJtmvj+tRRcjNNQn0KqjoLVm9K069+sENHSly9jA=
  • tasks:
    • internal.check_packages:
    • internal.buildpackages_prep:
    • internal.save_config:
    • internal.check_lock:
    • internal.add_remotes:
    • console_log:
    • internal.connect:
    • internal.push_inventory:
    • internal.serialize_remote_roles:
    • internal.check_conflict:
    • internal.check_ceph_data:
    • internal.vm_setup:
    • kernel:
      • kdb: 1
      • sha1: distro
    • internal.base:
    • internal.archive_upload:
    • internal.archive:
    • internal.coredump:
    • internal.sudo:
    • internal.syslog:
    • internal.timer:
    • pcp:
    • selinux:
    • ansible.cephlab:
    • clock:
    • install:
      • branch: pacific
      • exclude_packages:
        • librados3
        • ceph-mgr-dashboard
        • ceph-mgr-diskprediction-local
        • ceph-mgr-rook
        • ceph-mgr-cephadm
        • cephadm
        • ceph-volume
      • extra_packages:
        • librados2
    • print: **** done installing pacific
    • ceph:
      • conf:
        • global:
          • mon warn on pool no app: False
          • ms bind msgr2: False
      • log-ignorelist:
        • overall HEALTH_
        • \(FS_
        • \(MDS_
        • \(OSD_
        • \(MON_DOWN\)
        • \(CACHE_POOL_
        • \(POOL_
        • \(MGR_DOWN\)
        • \(PG_
        • \(SMALLER_PGP_NUM\)
        • Monitor daemon marked osd
        • Behind on trimming
        • Manager daemon
    • exec:
      • osd.0:
        • ceph osd set-require-min-compat-client pacific
    • print: **** done ceph
    • print: *** upgrading, no cephfs present
    • exec:
      • mon.a:
        • ceph fs dump
    • install.upgrade:
      • mon.a:
        • branch: reef
    • print: **** done install.upgrade
    • ceph.restart:
      • daemons:
        • mon.*
        • mgr.*
      • mon-health-to-clog: False
      • wait-for-healthy: False
    • ceph.healthy:
    • ceph.restart:
      • daemons:
        • osd.*
      • wait-for-healthy: False
      • wait-for-osds-up: True
    • exec:
      • mon.a:
        • ceph versions
        • ceph osd dump -f json-pretty
        • ceph fs dump
        • ceph osd require-osd-release quincy
        • for f in `ceph osd pool ls` ; do ceph osd pool set $f pg_autoscale_mode off ; done
    • ceph.healthy:
    • print: **** done ceph.restart
  • teuthology_branch: main
  • verbose: True
  • pcp_grafana_url:
  • priority: 99
  • user: yuriw
  • queue:
  • posted: 2024-04-10 14:19:46
  • started: 2024-04-10 15:50:07
  • updated: 2024-04-10 16:11:39
  • status_class: danger
  • runtime: 0:21:32
  • wait_time: 0:10:02
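The last exec step in the task list turns the PG autoscaler off on every pool so the upgraded cluster keeps its pre-upgrade PG counts. A standalone sketch of that loop is below, written as a dry run that prints the commands instead of executing them; the helper name and the pool-lister argument are hypothetical, while the printed `ceph osd pool set` command comes from the task list itself.

```shell
# Dry-run sketch of the job's autoscaler-disable loop. The real task runs
# `for f in `ceph osd pool ls` ; do ceph osd pool set $f pg_autoscale_mode off ; done`
# directly on mon.a; here we only print the commands so the sketch is safe anywhere.
set_autoscale_off() {
    # "$@": a command that prints one pool name per line,
    #       e.g. "ceph osd pool ls" on a live cluster.
    "$@" | while read -r pool; do
        echo "ceph osd pool set $pool pg_autoscale_mode off"
    done
}
```

On a live cluster one would drop the `echo` (or pipe the printed commands to `sh`) to actually apply the setting.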