Description: upgrade:octopus-x/parallel-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-workload/{rgw_ragweed_prepare} 3-upgrade-sequence/upgrade-mon-osd-mds 4-pacific 5-final-workload/{rgw rgw_ragweed_check} centos_latest}

Log: http://qa-proxy.ceph.com/teuthology/teuthology-2021-11-20_14:22:03-upgrade:octopus-x-pacific-distro-default-smithi/6516080/teuthology.log

Sentry event: https://sentry.ceph.com/organizations/ceph/?query=db084c2677d84d44abd28112ecb15d54

Failure Reason:

Command failed on smithi123 with status 1: 'sudo yum install -y kernel'
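
The job failed while the node-provisioning 'kernel' task was installing the distro kernel on smithi123 (a CentOS 8.2 target), before any Ceph packages were installed; the empty duration/flavor fields and the 0:09:25 runtime below are consistent with an early setup failure. A common cause on a superseded CentOS point release is that the mirrors no longer serve its repository metadata, though the teuthology.log is needed to confirm. A minimal diagnostic sketch to run by hand on the target node (the repo-metadata explanation is an assumption, not taken from this log):

    # Re-run the failing command verbosely to surface the underlying yum error
    sudo yum -v install -y kernel

    # Check whether the configured repositories still resolve and serve metadata;
    # superseded CentOS 8 point releases are eventually dropped from the mirrors
    sudo yum repolist
    sudo yum makecache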

  • log_href: http://qa-proxy.ceph.com/teuthology/teuthology-2021-11-20_14:22:03-upgrade:octopus-x-pacific-distro-default-smithi/6516080/teuthology.log
  • archive_path: /home/teuthworker/archive/teuthology-2021-11-20_14:22:03-upgrade:octopus-x-pacific-distro-default-smithi/6516080
  • description: upgrade:octopus-x/parallel-no-cephadm/{0-cluster/{openstack start} 1-ceph-install/octopus 1.1-pg-log-overrides/short_pg_log 2-workload/{rgw_ragweed_prepare} 3-upgrade-sequence/upgrade-mon-osd-mds 4-pacific 5-final-workload/{rgw rgw_ragweed_check} centos_latest}
  • duration:
  • email: ceph-qa@ceph.io
  • failure_reason: Command failed on smithi123 with status 1: 'sudo yum install -y kernel'
  • flavor:
  • job_id: 6516080
  • kernel:
    • kdb: True
    • sha1: distro
  • last_in_suite: False
  • machine_type: smithi
  • name: teuthology-2021-11-20_14:22:03-upgrade:octopus-x-pacific-distro-default-smithi
  • nuke_on_error: True
  • os_type: centos
  • os_version: 8.2
  • overrides:
    • admin_socket:
      • branch: pacific
    • ceph:
      • conf:
        • global:
          • enable experimental unrecoverable data corrupting features: *
        • mgr:
          • debug mgr: 20
          • debug ms: 1
        • mon:
          • debug mon: 20
          • debug ms: 1
          • debug paxos: 20
          • mon warn on osd down out interval zero: False
        • osd:
          • debug ms: 1
          • debug osd: 20
          • osd class default list: *
          • osd class load list: *
          • osd max pg log entries: 2
          • osd min pg log entries: 1
      • flavor: default
      • log-ignorelist:
        • \(MDS_ALL_DOWN\)
        • \(MDS_UP_LESS_THAN_MAX\)
      • log-whitelist:
        • \(MDS_ALL_DOWN\)
        • \(MDS_UP_LESS_THAN_MAX\)
        • scrub mismatch
        • ScrubResult
        • wrongly marked
        • \(POOL_APP_NOT_ENABLED\)
        • \(SLOW_OPS\)
        • overall HEALTH_
        • slow request
      • sha1: f0ef1fb6260705cece97b678f26d03607d2e5e64
    • ceph-deploy:
      • conf:
        • client:
          • log file: /var/log/ceph/ceph-$name.$pid.log
        • mon:
          • osd default pool size: 2
    • install:
      • ceph:
        • flavor: default
        • sha1: f0ef1fb6260705cece97b678f26d03607d2e5e64
    • rgw:
      • frontend: civetweb
    • selinux:
      • whitelist:
        • scontext=system_u:system_r:logrotate_t:s0
    • workunit:
      • branch: pacific
      • sha1: 130bcaac868d94c7f70f0a4898bfd306f31d7865
  • owner: scheduled_teuthology@teuthology
  • pid:
  • roles:
    • ['mon.a', 'mgr.x', 'osd.0', 'osd.1', 'osd.2', 'osd.3']
    • ['mon.b', 'osd.4', 'osd.5', 'osd.6', 'osd.7']
    • ['mon.c', 'osd.8', 'osd.9', 'osd.10', 'osd.11']
    • ['client.0', 'client.1', 'client.2', 'client.3']
  • sentry_event: https://sentry.ceph.com/organizations/ceph/?query=db084c2677d84d44abd28112ecb15d54
  • status: fail
  • success: False
  • branch: pacific
  • seed:
  • sha1: f0ef1fb6260705cece97b678f26d03607d2e5e64
  • subset:
  • suite:
  • suite_branch: pacific
  • suite_path:
  • suite_relpath:
  • suite_repo:
  • suite_sha1: 130bcaac868d94c7f70f0a4898bfd306f31d7865
  • targets:
    • smithi079.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtEJWoZmjxtkgnUw+k7iSSq2JNHpqvnOFaYiT8gee4dy4H+2G7uyTFh4U/yQdTpe8lMJwXEmy2mB2eQBvd14XHNSX4WO2HWDwI9W/ei9lejDv2Ka+gxhk+ccC2nDzyLhnYaexbZGX+5aHIi10LNQV/CJ3cmL81oEh5p2BLe3dFg5Xt/aQkL04V1WnAvWn9SDlIMbboo1kliZpnjSIIxIukPXFvl6Yr/CW582/3CPSo8FMYMSKE6VUoXG0LOS/kqkNaeMgIlu8GnTNPM/p8BIyGGHEscXE/kx2ItRuESLPzNp1nG4Qz8Zd7yNXXlYaQiqmKQUSSZDavFpH9rNg+L6ryxIfdCLLfEYgpdq1A6c0xkJ+VKU9frzIP/42kq71JbfiiETZNL92hwr6POyEapdEbv3UnYCZX48U+4Qz5lzbG5cFFlrvG89TOr0p4tf0HCnmm5xWTUG0tKcpwmY/bJcMZnNTvgtSdgjxn0DjNCIuyEsyvgGT6fJjwRqluudLlKr0=
    • smithi084.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjQkxeouNBJqsU92s+osh1YA/oqL0nvulbHdpBbX+t9C6DHZFAH8vmpYXAdkSHZwdTb27JwX0ASHot3ZFUJS4HradAnb3mYb9Kl7q8bLNUf8Mu9aDQzK1AqgWexRT6WFmg9ZkIklTAAva2em6PwLGcswH3WiyTlSu+lIkOkjJKLSqsJDsJpITxrPycq+3Hwv0FPv3l2/N9r7y03zN8NiQEQr1QaznuteIERn0UvF6nNen4OYYccwowjh2h08aiK2eZNvjn1ZoNcC+nouxT5PpASBQxNtc+50FmqUO1F5M/eB5352TG4RJuZ90tzUQrAKL2YZ9lMje/Vp3MbtW6MPC0tYWKT8YWhAeHcnVDRv7XBPua4p26NMt2XavFrq8Rc2yT2YVaovN/louoS5wqx1/1ZPaTbmwxqiyZUmzNrEdMSVlAEu8wcGvcqKEi3a5g1inxXlfQ0MNI9YobgCNkw2MvHzC9Qj2MnysL+jdvAYeqNqSGlYno1MDO3KcqRASMYUk=
    • smithi123.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTKcxoDK6rnxap5CCkiM+fYXT5zVssSLFdZWqhx7neK7RGH2cFb154jVnMEiPOUZSWks3xJbjxDfr1l7MIAob7YfIcgoc6OFPTL3QW+4T4KixVSRDgIpOEfEFMEGYL4fxNQ7C79g5uhUcvidxFdexWar8o0K8uvWNrOuTz8VPR3ez8a/8FfVTF3+zQYQ4bVc4Yoqlna0/ZJfZ3ywQkiC3c5qI8cM52r5bA0FMscZ/Qp7R/efEJlkfnJViGSB4Psl23yRYl1PPVEH3yI5LrANS5CEFTPT2WPJioijTJgdhU/Kqcnoo+FtWGtmU0vXa71r1lHJv/9WeInqfuK1rUXWUjzpT7pr0v3Xxgk2hV3d6trcRDjKmbKrH/l+Eensp8CboZ06cU9O3Qh9249xtbPuZXcRC5xYktlzEwRcB0taYNtbwVjKsf0VNJ3/K77WuJn6VMIGThYGYrQ1UatlBx6bP5CsWuL6Lp0JSU0CgUIB9bCAimBS6Vs8x60zP/JmZE3+E=
    • smithi140.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMtZ0hVYn3oGnFAIMOO5ogmI7O7yazw9W0b1wEI97MEXDt4R6NxgUHg2799B53/l0Sh3cpW/nGarUd/t5phZ3tAoaxZq30MJw23mVyWGjbJZ5Dr7mN2JyrSiyHWgDT42QJAW0pSR7O74ho0X1+QEVb9Ql8S6//c+ZMZr2d9SP5To7occX1VrcqLxC1jcK21XxpH6Ei1WrUYrz/ENNspHp1BeGI7EIOGnRrn3LJZY3AmjxIrg/GDPgF2I3+qbsDG+rPyjzATU7QXd1Hv6OWO9oGS92mCcm9bhpB9cT88iVlIfOCKomCPAGTbsoiIg45L7tmm4tohpqXvJNgAMXERvuR+i4zN18D+ZJWbGEAX6Z+SoBaXOpE+kVhLuSwz2VXWWuvk8uN7dGj9gtBFoIcalLqNx4hxfOQJmCSuqyq4i5m/aiGN1hSZlm3HfKPSD/mZTjBBhh/YM8MA0CjMeD1MMtrEvqp2hbBtRfXZZ9oFBb3EAar9Wp/hjC8R9NbOgv9Tms=
  • tasks:
    • internal.check_packages:
    • internal.buildpackages_prep:
    • internal.save_config:
    • internal.check_lock:
    • internal.add_remotes:
    • console_log:
    • internal.connect:
    • internal.push_inventory:
    • internal.serialize_remote_roles:
    • internal.check_conflict:
    • internal.check_ceph_data:
    • internal.vm_setup:
    • kernel:
      • kdb: True
      • sha1: distro
    • internal.base:
    • internal.archive_upload:
    • internal.archive:
    • internal.coredump:
    • internal.sudo:
    • internal.syslog:
    • internal.timer:
    • pcp:
    • selinux:
    • ansible.cephlab:
    • clock:
    • install:
      • branch: octopus
      • exclude_packages:
        • ceph-mgr-cephadm
        • cephadm
        • libcephfs-dev
    • print: **** done installing octopus
    • ceph:
      • conf:
        • global:
          • bluestore warn on no per pool omap: False
          • bluestore_warn_on_legacy_statfs: False
          • mon pg warn min per osd: 0
          • mon warn on pool no app: False
      • log-ignorelist:
        • overall HEALTH_
        • \(FS_
        • \(MDS_
        • \(OSD_
        • \(MON_DOWN\)
        • \(CACHE_POOL_
        • \(POOL_
        • \(MGR_DOWN\)
        • \(PG_
        • \(SMALLER_PGP_NUM\)
        • Monitor daemon marked osd
        • Behind on trimming
        • Manager daemon
    • exec:
      • osd.0:
        • ceph osd set-require-min-compat-client octopus
    • print: **** done ceph
    • install.upgrade:
      • mon.a:
      • mon.b:
      • mon.c:
    • print: **** done install.upgrade non-client hosts
    • rgw:
      • client.1
    • print: **** done => started rgw client.1
    • parallel:
      • workload
      • upgrade-sequence
    • print: **** done parallel
    • install.upgrade:
      • client.0:
    • print: **** done install.upgrade on client.0
    • exec:
      • osd.0:
        • ceph osd require-osd-release pacific
        • ceph osd set-require-min-compat-client pacific
        • for f in `ceph osd pool ls` ; do ceph osd pool set $f pg_autoscale_mode off ; done
    • ceph.healthy:
    • sequential:
      • rgw-final-workload
      • print: **** done rgw 4-final-workload
  • teuthology_branch: master
  • verbose: True
  • pcp_grafana_url:
  • priority:
  • user:
  • queue:
  • posted: 2021-11-20 14:22:53
  • started: 2021-11-20 15:38:44
  • updated: 2021-11-20 15:48:09
  • status_class: danger
  • runtime: 0:09:25
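
For reference, the final 'exec' step in the task list above ('ceph osd require-osd-release pacific', 'ceph osd set-require-min-compat-client pacific', and the pg_autoscale_mode loop) is what pins the cluster at Pacific; this run failed long before reaching it. On a run that does complete, the outcome could be checked with a sketch like the following (illustrative, not part of this job's tasks):

    # Confirm the upgrade was finalized at Pacific
    ceph osd dump | grep -E 'require_osd_release|require_min_compat_client'
    # expected: require_osd_release pacific
    #           require_min_compat_client pacific

    # Confirm the autoscaler was switched off on every pool
    for f in $(ceph osd pool ls); do
        ceph osd pool get "$f" pg_autoscale_mode
    done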