Log: http://qa-proxy.ceph.com/teuthology/teuthology-2018-02-13_04:15:05-multimds-master-distro-basic-smithi/2185361/teuthology.log

Sentry event: http://sentry.ceph.com/sepia/teuthology/?q=c76be4c2e4f24a909a0d9b70bb1d3e30

Failure Reason:

Command failed on smithi196 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 172.21.15.137:6789,172.21.15.137:6790,172.21.15.94:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'
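
Exit status 22 from mount.ceph generally mirrors errno 22 (EINVAL, "Invalid argument"), which the kernel returns when it rejects the mount request — often because it does not recognize one of the mount options, such as norequire_active_mds here. A minimal sketch for decoding the status (the errno mapping is standard; tying this particular failure to an unrecognized option is an assumption, not confirmed by the log):

```python
import errno
import os

# mount.ceph's exit status in this failure (22) lines up with
# errno.EINVAL; decoding it gives a first hint at why the kernel
# refused the mount.
status = 22
print(errno.errorcode[status])  # 'EINVAL'
print(os.strerror(status))      # 'Invalid argument'
```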

  • kernel:
    • sha1: distro
    • kdb: True
  • tasks:
    • internal.check_packages:
    • internal.buildpackages_prep:
    • internal.lock_machines:
      • 3
      • smithi
    • internal.save_config:
    • internal.check_lock:
    • internal.add_remotes:
    • console_log:
    • internal.connect:
    • internal.push_inventory:
    • internal.serialize_remote_roles:
    • internal.check_conflict:
    • internal.check_ceph_data:
    • internal.vm_setup:
    • kernel:
      • sha1: distro
      • kdb: True
    • internal.base:
    • internal.archive_upload:
    • internal.archive:
    • internal.coredump:
    • internal.sudo:
    • internal.syslog:
    • internal.timer:
    • pcp:
    • selinux:
    • ansible.cephlab:
    • clock:
    • install:
    • ceph:
    • exec:
      • client.0:
        • sudo ceph fs set cephfs inline_data true --yes-i-really-mean-it
    • kclient:
    • check-counter:
      • counters:
        • mds:
          • mds.exported
          • mds.imported
    • check-counter:
      • counters:
        • mds:
          • mds.dir_split
    • workunit:
      • clients:
        • all:
          • suites/blogbench.sh
  • verbose: True
  • pid:
  • duration: 0:09:24
  • owner: scheduled_teuthology@teuthology
  • flavor: basic
  • status_class: danger
  • targets:
    • smithi094.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDC4tndaZarZFUzVP4/FcbZQFehdoY7qrDbb931DelNOuYqkOTYSD9R5F1OP7/KkHgdh+/FRCAn3Xz+adNimwZ1xMek7MI+7wvpVvrdTvQ5htWBF7WZb07CApP4Tkz/wDiWcHbT/rqJkCCxBdeO59nGiIWQpLY7ZBhWZHND+lIuGFX4DVfLSbpnQqdkWlkYpA1YJYCG4tnnwMY0/jRqPgz9CwjZ+347q61173z42vEsbas5dO5D8Pc+HNHfN9VUd2thMYLBK+a2+/CpnxpHm3f1JqsRxDvrmim4rQ12zztjCzcUTowJYX8z1rWXQl3hkmHu4B2EUivQZdfIo3v8RdwT
    • smithi137.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDIoEYnPm7ss+BNgHaS5LG9Qo45SL749RkUo6d94Hz1WAYMXLe5u0QmKusG0ZWHWZN4UbJRcKGWITqNLtcTf7ou4DG4gfDIwMwDuIwCtQhWkzOb1MfRqp6fBqHQosyhxXUljRzWcYS5tu/Hbu53+0iGdA6DXESL/1y53ypNKverB0eoarUm461SfrYpru72PXwlxBRMYSGD7maqJDDJ/5Vd9dSkDRwiqbUEbxYp6cpRds1JZfWCfL9McAT65fRijN9AFHRqG4JFs4+LMfg+zECjVKChUaXAmDvVR6HIF6GbI5D5t+JwWHepzBYz/GC1ZZH/i9qnUUC8CoT8OuNjRsD1
    • smithi196.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDxOAhKPaiDlVd9rs+8Zc+DYPYNg+/3e6dF6OjvtJEn1Ss7rJFQrthf9Pjv5Ye4NbjMkkbs3/S0nd2W/q5N9/eS3HD3vWFzphC9rlZDD706lKmAMOnqAKtm4cc2wPyFIwsHWVzq3Pm1jmuflt6HXnT7gF47hsD+zeUVjJVzZDImP9M4AijgvzWQr81iQcwisbvKCfnimP8+Mv6zO90J5nwFPNh1tilqB/y4f7/uZAM6FKYkdzg+ss2a6OFRXRWJNUlpBaj+Y2x2Ez2Edug7w0LIh0xClnSK538A5VjrScoa9PCmCOTXSGWrdmRwA0urRs2jRTBWBXewK4P/NY22hKJl
  • job_id: 2185361
  • log_href: http://qa-proxy.ceph.com/teuthology/teuthology-2018-02-13_04:15:05-multimds-master-distro-basic-smithi/2185361/teuthology.log
  • suite_branch: master
  • wait_time: 0:31:11
  • os_version:
  • branch: master
  • pcp_grafana_url:
  • email: ceph-qa@ceph.com
  • archive_path: /home/teuthworker/archive/teuthology-2018-02-13_04:15:05-multimds-master-distro-basic-smithi/2185361
  • updated: 2018-02-13 06:08:27
  • description: multimds/basic/{begin.yaml clusters/3-mds.yaml inline/yes.yaml mount/kclient.yaml objectstore-ec/bluestore-ec-root.yaml overrides/{basic/{debug.yaml frag_enable.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} fuse-default-perm-no.yaml} q_check_counter/check_counter.yaml tasks/cfuse_workunit_suites_blogbench.yaml}
  • started: 2018-02-13 05:27:52
  • last_in_suite: False
  • machine_type: smithi
  • sentry_event: http://sentry.ceph.com/sepia/teuthology/?q=c76be4c2e4f24a909a0d9b70bb1d3e30
  • posted: 2018-02-13 04:17:01
  • teuthology_branch: master
  • sha1: 13738cd7522a6eea8038660ba7120ca89f34eacb
  • name: teuthology-2018-02-13_04:15:05-multimds-master-distro-basic-smithi
  • roles:
    • ['mon.a', 'mon.c', 'mgr.y', 'mds.a', 'osd.0', 'osd.1', 'osd.2', 'osd.3']
    • ['mon.b', 'mgr.x', 'mds.b', 'mds.c', 'osd.4', 'osd.5', 'osd.6', 'osd.7']
    • ['client.0']
  • overrides:
    • ceph-deploy:
      • fs: xfs
      • conf:
        • client:
          • log file: /var/log/ceph/ceph-$name.$pid.log
        • mon:
          • osd default pool size: 2
        • osd:
          • mon osd full ratio: 0.9
          • mon osd backfillfull_ratio: 0.85
          • bluestore fsck on mount: True
          • mon osd nearfull ratio: 0.8
          • debug bluestore: 20
          • debug bluefs: 20
          • osd objectstore: bluestore
          • bluestore block size: 96636764160
          • debug rocksdb: 10
          • osd failsafe full ratio: 0.95
      • bluestore: True
    • workunit:
      • sha1: 13738cd7522a6eea8038660ba7120ca89f34eacb
      • branch: master
    • ceph:
      • log-whitelist:
        • slow request
        • overall HEALTH_
        • \(FS_DEGRADED\)
        • \(MDS_FAILED\)
        • \(MDS_DEGRADED\)
        • \(FS_WITH_FAILED_MDS\)
        • \(MDS_DAMAGE\)
        • overall HEALTH_
        • \(OSD_DOWN\)
        • \(OSD_
        • but it is still running
        • is not responding
      • sha1: 13738cd7522a6eea8038660ba7120ca89f34eacb
      • fs: xfs
      • conf:
        • mds:
          • mds bal split bits: 3
          • mds bal split size: 100
          • debug mds: 20
          • mds bal merge size: 5
          • debug ms: 1
          • mds bal frag: True
          • mds bal fragment size max: 10000
        • global:
          • ms die on skipped message: False
        • osd:
          • mon osd full ratio: 0.9
          • debug ms: 1
          • debug journal: 20
          • debug osd: 25
          • debug bluestore: 20
          • debug bluefs: 20
          • osd objectstore: bluestore
          • mon osd backfillfull_ratio: 0.85
          • bluestore block size: 96636764160
          • debug filestore: 20
          • debug rocksdb: 10
          • mon osd nearfull ratio: 0.8
          • osd failsafe full ratio: 0.95
          • bluestore fsck on mount: True
        • mon:
          • debug mon: 20
          • debug paxos: 20
          • debug ms: 1
        • client:
          • debug ms: 1
          • fuse default permissions: False
          • debug client: 10
      • cephfs_ec_profile:
        • m=2
        • k=2
        • crush-failure-domain=osd
    • install:
      • ceph:
        • sha1: 13738cd7522a6eea8038660ba7120ca89f34eacb
    • admin_socket:
      • branch: master
    • thrashosds:
      • bdev_inject_crash_probability: 0.5
      • bdev_inject_crash: 2
  • success: False
  • failure_reason: Command failed on smithi196 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage /sbin/mount.ceph 172.21.15.137:6789,172.21.15.137:6790,172.21.15.94:6789:/ /home/ubuntu/cephtest/mnt.0 -v -o name=0,secretfile=/home/ubuntu/cephtest/ceph.data/client.0.secret,norequire_active_mds'
  • status: fail
  • nuke_on_error: True
  • os_type:
  • runtime: 0:40:35
  • suite_sha1: 13738cd7522a6eea8038660ba7120ca89f34eacb
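
The two check-counter tasks above assert that the MDS perf counters mds.exported, mds.imported, and mds.dir_split advanced during the blogbench workload. A minimal sketch of inspecting those counters by hand over the admin socket (assuming you are on the MDS host with a working ceph CLI and a daemon named mds.a; the exact counter layout can vary by release):

```python
import json
import subprocess

# Dump the MDS perf counters via the admin socket; this is the same
# data the check-counter tasks read ("mds.exported" means counter
# "exported" under the "mds" section of the dump).
raw = subprocess.check_output(["ceph", "daemon", "mds.a", "perf", "dump"])
mds = json.loads(raw)["mds"]

for name in ("exported", "imported", "dir_split"):
    # A counter stuck at 0 means the behavior under test
    # (subtree export/import, directory split) never happened.
    print(f"mds.{name} = {mds.get(name)}")
```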