Status Job ID Links Posted Started Updated Runtime Duration In Waiting Machine Teuthology Branch OS Type OS Version Description Nodes
fail 4120898 2019-07-15 05:55:34 2019-07-15 07:32:35 2019-07-15 10:16:37 2:44:02 2:09:56 0:34:06 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira002 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120899 2019-07-15 05:55:34 2019-07-15 07:42:17 2019-07-15 08:10:16 0:27:59 0:10:20 0:17:39 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira063 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120900 2019-07-15 05:55:35 2019-07-15 11:17:55 2019-07-15 13:41:56 2:24:01 1:58:01 0:26:00 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

The ceph-cm-ansible disk-preparation loop ran 'sgdisk --zap-all /dev/<disk> || sgdisk --zap-all /dev/<disk>' against each disk reported by Ansible facts on the target node. The runs against /dev/sde, /dev/sdf, /dev/sdg, /dev/sdb, /dev/sdc and /dev/sdh succeeded (rc 0, "Creating new GPT entries. GPT data structures destroyed! You may now partition the disk using fdisk or other utilities."); the /dev/sdd entry is truncated at the start of the captured output; /dev/sda (the system disk holding sda1) was skipped because the loop conditional evaluated to false. The run against /dev/dm-0 (the multipath map mpatha, layered over /dev/sdg) failed with rc 2:

Problem opening /dev/dm-0 for reading! Error is 2.
The specified file does not exist!
Problem opening '' for writing! Program will now terminate.
Warning! MBR not overwritten! Error is 2!

While reporting that failure, the teuthology callback plugin failure_log.py then crashed inside PyYAML's SafeRepresenter:

Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  ...
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'sdd')
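
The secondary crash in failure_log.py is a serialization problem rather than a disk problem: yaml.safe_dump only knows the plain built-in types, and the Ansible result handed to it evidently contains a value SafeDumper cannot represent (most likely a tagged string subclass such as Ansible's AnsibleUnsafeText; that is an inference from the traceback, not something the log states). A minimal sketch of the same failure, with a hypothetical UnsafeText class standing in for whatever type actually triggered it:

import yaml


class UnsafeText(str):
    """Hypothetical stand-in for a str subclass like AnsibleUnsafeText."""
    pass


# A failure dict whose values are not plain built-ins, as a stand-in for the
# Ansible task result passed to the logging callback.
failure = {'item': {'key': UnsafeText('sdd')}}

try:
    # SafeDumper looks up representers by exact type, so a str subclass falls
    # through to represent_undefined and raises RepresenterError -- the same
    # crash path seen in the failure_log.py traceback above.
    yaml.safe_dump(failure)
except yaml.representer.RepresenterError as err:
    print(err)  # e.g. ('cannot represent an object', 'sdd')

A common workaround for this class of error is to normalize the structure to plain built-ins (for example via a json.dumps/json.loads round-trip) before calling yaml.safe_dump.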

fail 4120901 2019-07-15 05:55:36 2019-07-15 11:21:26 2019-07-15 11:39:25 0:17:59 0:10:22 0:07:37 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira038 with status 1: 'sudo tar cz -f /tmp/tmpOyHy96 -C /var/lib/ceph/mon -- .'

fail 4120902 2019-07-15 05:55:37 2019-07-15 11:29:34 2019-07-15 12:07:33 0:37:59 0:27:21 0:10:38 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira071 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120903 2019-07-15 05:55:38 2019-07-15 11:39:32 2019-07-15 11:57:31 0:17:59 0:11:36 0:06:23 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira038 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120904 2019-07-15 05:55:38 2019-07-15 11:41:21 2019-07-15 12:13:20 0:31:59 0:21:32 0:10:27 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira063 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120905 2019-07-15 05:55:39 2019-07-15 11:45:07 2019-07-15 12:03:06 0:17:59 0:10:37 0:07:22 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira069 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120906 2019-07-15 05:55:40 2019-07-15 11:53:27 2019-07-15 14:39:29 2:46:02 2:15:03 0:30:59 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira032 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120907 2019-07-15 05:55:40 2019-07-15 11:57:37 2019-07-15 12:15:36 0:17:59 0:10:07 0:07:52 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira038 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120908 2019-07-15 05:55:41 2019-07-15 12:03:11 2019-07-15 14:37:12 2:34:01 2:15:38 0:18:23 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira069 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120909 2019-07-15 05:55:42 2019-07-15 12:07:46 2019-07-15 12:29:45 0:21:59 0:09:56 0:12:03 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira071 with status 1: 'sudo tar cz -f /tmp/tmpc3bejQ -C /var/lib/ceph/mon -- .'

fail 4120910 2019-07-15 05:55:43 2019-07-15 12:13:37 2019-07-15 14:47:39 2:34:02 2:14:36 0:19:26 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira063 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120911 2019-07-15 05:55:43 2019-07-15 12:15:40 2019-07-15 12:33:39 0:17:59 0:10:04 0:07:55 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira038 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120912 2019-07-15 05:55:44 2019-07-15 12:25:32 2019-07-15 14:59:33 2:34:01 2:11:11 0:22:50 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira034 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120913 2019-07-15 05:55:45 2019-07-15 12:29:50 2019-07-15 12:49:49 0:19:59 0:11:34 0:08:25 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira038 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120914 2019-07-15 05:55:46 2019-07-15 12:33:57 2019-07-15 12:55:57 0:22:00 0:10:27 0:11:33 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira061 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120915 2019-07-15 05:55:46 2019-07-15 12:40:29 2019-07-15 13:22:28 0:41:59 0:22:46 0:19:13 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira038 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120916 2019-07-15 05:55:47 2019-07-15 12:49:55 2019-07-15 13:13:54 0:23:59 0:10:19 0:13:40 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira061 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

dead 4120917 2019-07-15 05:55:48 2019-07-15 12:56:01 2019-07-15 15:44:03 2:48:02 2:08:41 0:39:21 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

{'mira061.front.sepia.ceph.com': {'msg': 'timed out waiting for ping module test success: SSH Error: data could not be sent to remote host "mira061.front.sepia.ceph.com". Make sure this host can be reached over ssh', 'changed': False, '_ansible_no_log': False, 'elapsed': 347}}

fail 4120918 2019-07-15 05:55:49 2019-07-15 13:13:56 2019-07-15 13:39:55 0:25:59 0:10:13 0:15:46 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira038 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120919 2019-07-15 05:55:49 2019-07-15 13:22:30 2019-07-15 14:20:30 0:58:00 0:20:59 0:37:01 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira035 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120920 2019-07-15 05:55:50 2019-07-15 13:35:39 2019-07-15 13:55:38 0:19:59 0:10:05 0:09:54 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira038 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120921 2019-07-15 05:55:51 2019-07-15 13:39:56 2019-07-15 16:17:58 2:38:02 2:12:07 0:25:55 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

ceph-deploy: Failed to create osds

fail 4120922 2019-07-15 05:55:52 2019-07-15 13:41:58 2019-07-15 14:37:57 0:55:59 0:10:30 0:45:29 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira035 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120923 2019-07-15 05:55:53 2019-07-15 13:51:38 2019-07-15 16:25:39 2:34:01 2:11:00 0:23:01 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira038 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120924 2019-07-15 05:55:54 2019-07-15 13:55:57 2019-07-15 14:53:57 0:58:00 0:09:55 0:48:05 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira035 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120925 2019-07-15 05:55:54 2019-07-15 14:20:32 2019-07-15 17:12:33 2:52:01 2:16:28 0:35:33 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira069 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120926 2019-07-15 05:55:55 2019-07-15 14:37:36 2019-07-15 14:55:35 0:17:59 0:10:25 0:07:34 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira032 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120927 2019-07-15 05:55:56 2019-07-15 14:37:59 2019-07-15 15:25:58 0:47:59 0:20:55 0:27:04 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira035 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'

fail 4120928 2019-07-15 05:55:57 2019-07-15 14:39:55 2019-07-15 15:01:55 0:22:00 0:09:13 0:12:47 mira master ubuntu 18.04 ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/ubuntu_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} 4
Failure Reason:

Command failed on mira077 with status 1: 'sudo tar cz -f /tmp/tmpPppVz8 -C /var/lib/ceph/mon -- .'

fail 4120929 2019-07-15 05:55:58 2019-07-15 14:48:02 2019-07-15 17:30:04 2:42:02 2:13:23 0:28:39 mira master centos 7.6 ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} 4
Failure Reason:

Command failed on mira032 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'