Nodes: gibba041

Description: rados/singleton/{all/lost-unfound msgr-failures/many msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}}

Log: http://qa-proxy.ceph.com/teuthology/teuthology-2021-06-02_03:30:02-rados-octopus-distro-basic-gibba/6145621/teuthology.log

Sentry event: https://sentry.ceph.com/organizations/ceph/?query=0f46bb804ad34370af4a467ad756d38e

Failure Reason:

In short: partway through Ansible's disk-zapping loop the node became unreachable over SSH (`ssh: connect to host gibba041.front.sepia.ceph.com port 22: No route to host`), and the failure-logging callback (`failure_log.py`) then crashed with `yaml.representer.RepresenterError: ('cannot represent an object', 'key')` while trying to `yaml.safe_dump` the failure object. The raw failure object follows:

{'Failure object was': {'gibba041.front.sepia.ceph.com': {'results': [{'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop1', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop1', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'changed': True, 'end': '2021-06-23 05:54:01.440057', 'stdout': 'Creating new GPT entries.\\nWarning: The kernel is still using the old partition table.\\nThe new table will be used at the next reboot or after you\\nrun partprobe(8) or kpartx(8)\\nGPT data structures destroyed! 
You may now partition the disk using fdisk or\\nother utilities.', 'cmd': 'sgdisk --zap-all /dev/dm-2 || sgdisk --zap-all /dev/dm-2', 'rc': 0, 'start': '2021-06-23 05:53:59.729073', 'stderr': '', 'delta': '0:00:01.710984', 'invocation': {'module_args': {'creates': 'None', 'executable': 'None', '_uses_shell': True, 'strip_empty_ends': True, '_raw_params': 'sgdisk --zap-all /dev/dm-2 || sgdisk --zap-all /dev/dm-2', 'removes': 'None', 'argv': 'None', 'warn': True, 'chdir': 'None', 'stdin_add_newline': True, 'stdin': 'None'}}, 'stdout_lines': ['Creating new GPT entries.', 'Warning: The kernel is still using the old partition table.', 'The new table will be used at the next reboot or after you', 'run partprobe(8) or kpartx(8)', 'GPT data structures destroyed! You may now partition the disk using fdisk or', 'other utilities.'], 'stderr_lines': [], '_ansible_no_log': False, 'failed': False, 'item': {'key': 'dm-2', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_3', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWfs8qbTfQgncEtfajtcEFrpa6pXPNovRCO'], 'uuids': ['5f25f350-8682-4c3c-bdbe-dbf791058915']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'dm-2', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_3', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWfs8qbTfQgncEtfajtcEFrpa6pXPNovRCO'], 'uuids': ['5f25f350-8682-4c3c-bdbe-dbf791058915']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': 
'84.00 GB'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop6', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop6', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop4', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop4', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'changed': True, 'end': '2021-06-23 05:54:02.640660', 'stdout': 'Creating new GPT entries.\\nWarning: The 
kernel is still using the old partition table.\\nThe new table will be used at the next reboot or after you\\nrun partprobe(8) or kpartx(8)\\nGPT data structures destroyed! You may now partition the disk using fdisk or\\nother utilities.', 'cmd': 'sgdisk --zap-all /dev/dm-0 || sgdisk --zap-all /dev/dm-0', 'rc': 0, 'start': '2021-06-23 05:54:01.549245', 'stderr': '', 'delta': '0:00:01.091415', 'invocation': {'module_args': {'creates': 'None', 'executable': 'None', '_uses_shell': True, 'strip_empty_ends': True, '_raw_params': 'sgdisk --zap-all /dev/dm-0 || sgdisk --zap-all /dev/dm-0', 'removes': 'None', 'argv': 'None', 'warn': True, 'chdir': 'None', 'stdin_add_newline': True, 'stdin': 'None'}}, 'stdout_lines': ['Creating new GPT entries.', 'Warning: The kernel is still using the old partition table.', 'The new table will be used at the next reboot or after you', 'run partprobe(8) or kpartx(8)', 'GPT data structures destroyed! You may now partition the disk using fdisk or', 'other utilities.'], 'stderr_lines': [], '_ansible_no_log': False, 'failed': False, 'item': {'key': 'dm-0', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_1', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWfEk67f0vpwF1ctSp6hZcuotcO92L0AG0v'], 'uuids': ['5a01596d-400e-49b5-9277-ffa58a22a33a']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'dm-0', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_1', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWfEk67f0vpwF1ctSp6hZcuotcO92L0AG0v'], 'uuids': ['5a01596d-400e-49b5-9277-ffa58a22a33a']}, 'sas_device_handle': 'None', 
'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'sda', 'value': {'scheduler_mode': 'cfq', 'rotational': '1', 'vendor': 'ATA', 'sectors': '1953525168', 'links': {'masters': [], 'labels': [], 'ids': ['ata-TOSHIBA_MG04ACA100N_Y9I6K2K6F6XF', 'wwn-0x50000399ab9813d4'], 'uuids': []}, 'partitions': {'sda1': {'sectorsize': 512, 'uuid': '544fff2a-e246-4637-96d0-94b4cd4bcfea', 'links': {'masters': [], 'labels': [], 'ids': ['ata-TOSHIBA_MG04ACA100N_Y9I6K2K6F6XF-part1', 'wwn-0x50000399ab9813d4-part1'], 'uuids': ['544fff2a-e246-4637-96d0-94b4cd4bcfea']}, 'sectors': '1953522688', 'start': '2048', 'holders': [], 'size': '931.51 GB'}}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': 'SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'holders': [], 'wwn': '0x50000399ab9813d4', 'model': 'TOSHIBA MG04ACA1', 'serial': 'Y9I6K2K6F6XF', 'size': '931.51 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'sda', 'value': {'scheduler_mode': 'cfq', 'rotational': '1', 'vendor': 'ATA', 'sectors': '1953525168', 'links': {'masters': [], 'labels': [], 'ids': ['ata-TOSHIBA_MG04ACA100N_Y9I6K2K6F6XF', 'wwn-0x50000399ab9813d4'], 'uuids': []}, 'partitions': {'sda1': {'sectorsize': 512, 'uuid': '544fff2a-e246-4637-96d0-94b4cd4bcfea', 'links': {'masters': [], 'labels': [], 'ids': ['ata-TOSHIBA_MG04ACA100N_Y9I6K2K6F6XF-part1', 'wwn-0x50000399ab9813d4-part1'], 'uuids': ['544fff2a-e246-4637-96d0-94b4cd4bcfea']}, 'sectors': '1953522688', 'start': '2048', 'holders': [], 'size': '931.51 GB'}}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': 'SATA controller: Intel Corporation Cannon Lake PCH 
SATA AHCI Controller (rev 10)', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'holders': [], 'wwn': '0x50000399ab9813d4', 'model': 'TOSHIBA MG04ACA1', 'serial': 'Y9I6K2K6F6XF', 'size': '931.51 GB'}}}, {'changed': True, 'end': '2021-06-23 05:54:03.860658', 'stdout': 'Creating new GPT entries.\\nGPT data structures destroyed! You may now partition the disk using fdisk or\\nother utilities.', 'cmd': 'sgdisk --zap-all /dev/nvme0n1 || sgdisk --zap-all /dev/nvme0n1', 'rc': 0, 'start': '2021-06-23 05:54:02.736942', 'stderr': '', 'delta': '0:00:01.123716', 'invocation': {'module_args': {'creates': 'None', 'executable': 'None', '_uses_shell': True, 'strip_empty_ends': True, '_raw_params': 'sgdisk --zap-all /dev/nvme0n1 || sgdisk --zap-all /dev/nvme0n1', 'removes': 'None', 'argv': 'None', 'warn': True, 'chdir': 'None', 'stdin_add_newline': True, 'stdin': 'None'}}, 'stdout_lines': ['Creating new GPT entries.', 'GPT data structures destroyed! You may now partition the disk using fdisk or', 'other utilities.'], 'stderr_lines': [], '_ansible_no_log': False, 'failed': False, 'item': {'key': 'nvme0n1', 'value': {'scheduler_mode': 'none', 'rotational': '0', 'vendor': 'None', 'sectors': '732585168', 'links': {'masters': ['dm-0', 'dm-1', 'dm-2', 'dm-3', 'dm-4'], 'labels': [], 'ids': ['lvm-pv-uuid-pcP2yl-fBMj-5VKu-fg5T-F7On-Wuql-RbouB2', 'nvme-INTEL_SSDPEL1K375GA_PHKM9200008T375A', 'nvme-nvme.8086-50484b4d393230303030385433373541-494e54454c2053534450454c314b3337354741-00000001'], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': 'Non-Volatile memory controller: Intel Corporation Optane DC P4800X Series SSD', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'INTEL SSDPEL1K375GA', 'partitions': {}, 'holders': ['vg_nvme-lv_2', 'vg_nvme-lv_5', 'vg_nvme-lv_3', 'vg_nvme-lv_1', 'vg_nvme-lv_4'], 'size': '349.32 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'nvme0n1', 'value': {'scheduler_mode': 
'none', 'rotational': '0', 'vendor': 'None', 'sectors': '732585168', 'links': {'masters': ['dm-0', 'dm-1', 'dm-2', 'dm-3', 'dm-4'], 'labels': [], 'ids': ['lvm-pv-uuid-pcP2yl-fBMj-5VKu-fg5T-F7On-Wuql-RbouB2', 'nvme-INTEL_SSDPEL1K375GA_PHKM9200008T375A', 'nvme-nvme.8086-50484b4d393230303030385433373541-494e54454c2053534450454c314b3337354741-00000001'], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': 'Non-Volatile memory controller: Intel Corporation Optane DC P4800X Series SSD', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'INTEL SSDPEL1K375GA', 'partitions': {}, 'holders': ['vg_nvme-lv_2', 'vg_nvme-lv_5', 'vg_nvme-lv_3', 'vg_nvme-lv_1', 'vg_nvme-lv_4'], 'size': '349.32 GB'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop3', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop3', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop2', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 
'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop2', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'unreachable': True, 'msg': 'Data could not be sent to remote host "gibba041.front.sepia.ceph.com". Make sure this host can be reached over ssh: ssh: connect to host gibba041.front.sepia.ceph.com port 22: No route to host\\r\\n', 'item': {'key': 'dm-4', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '27262976', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_5', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWffov2pkrSJ78WI7ovCuwN5UiIX6flG3n0'], 'uuids': ['52da1bd7-f4d8-44b9-9aba-a2e357d1a743']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '13.00 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'dm-4', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '27262976', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_5', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWffov2pkrSJ78WI7ovCuwN5UiIX6flG3n0'], 'uuids': ['52da1bd7-f4d8-44b9-9aba-a2e357d1a743']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '13.00 GB'}}}, 
{'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop0', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop0', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop7', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop7', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'unreachable': True, 'msg': 'Data could not be sent to remote host "gibba041.front.sepia.ceph.com". 
Make sure this host can be reached over ssh: ssh: connect to host gibba041.front.sepia.ceph.com port 22: No route to host\\r\\n', 'item': {'key': 'dm-3', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_4', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWf1RwVnc9LgbiDQqs1LKxe0oXfX4UChYaJ'], 'uuids': ['2af68214-5462-479c-89be-7b27e7267335']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'dm-3', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_4', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWf1RwVnc9LgbiDQqs1LKxe0oXfX4UChYaJ'], 'uuids': ['2af68214-5462-479c-89be-7b27e7267335']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop5', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop5', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 
'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'unreachable': True, 'msg': 'Data could not be sent to remote host "gibba041.front.sepia.ceph.com". Make sure this host can be reached over ssh: ssh: connect to host gibba041.front.sepia.ceph.com port 22: No route to host\\r\\n', 'item': {'key': 'dm-1', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_2', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWfsOdIHP2NnRQb2c2edTRKXG2NysTrX0n3'], 'uuids': ['1348cd38-507e-4184-9425-9de0afe00881']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'dm-1', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_2', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWfsOdIHP2NnRQb2c2edTRKXG2NysTrX0n3'], 'uuids': ['1348cd38-507e-4184-9425-9de0afe00881']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}}], 'changed': True, 'msg': 'All items completed'}}, 'Traceback (most recent call last)': 'File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump return dump_all([data], stream, 
Dumper=SafeDumper, **kwds) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all dumper.represent(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File 
"/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 199, in represent_list return self.represent_sequence(\'tag:yaml.org,2002:seq\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 92, in represent_sequence node_item = self.represent_data(item) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping(\'tag:yaml.org,2002:map\', data) File 
"/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping node_key = self.represent_data(item_key) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data node = self.yaml_representers[None](self, data) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined raise RepresenterError("cannot represent an object", data)', 'yaml.representer.RepresenterError': "('cannot represent an object', 'key')"}
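The trailing `RepresenterError` is consistent with a known interaction: Ansible wraps task-result strings in `AnsibleUnsafeText`, a `str` subclass, while `yaml.safe_dump` only registers representers for exact built-in types, so the callback dies before it can log the real (SSH) failure. A minimal sketch of that failure mode — `UnsafeText` here is a hypothetical stand-in for the Ansible class, not the real thing:

```python
import json

import yaml


class UnsafeText(str):
    """Hypothetical stand-in for Ansible's AnsibleUnsafeText (a str subclass)."""


failure = {UnsafeText('key'): UnsafeText('value')}

# SafeDumper looks up representers by exact type, so a str subclass falls
# through to represent_undefined and raises, just like in the traceback above.
try:
    yaml.safe_dump(failure)
except yaml.representer.RepresenterError as exc:
    print(exc)  # ('cannot represent an object', 'key')

# One common workaround: coerce the structure to plain built-ins first,
# e.g. by round-tripping through JSON, before handing it to safe_dump.
print(yaml.safe_dump(json.loads(json.dumps(failure))))
```

A logging callback that applied such a coercion (or fell back to `repr()` on dump failure) would at least preserve the underlying "No route to host" error instead of replacing it with a serialization traceback.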

  • log_href: http://qa-proxy.ceph.com/teuthology/teuthology-2021-06-02_03:30:02-rados-octopus-distro-basic-gibba/6145621/teuthology.log
  • archive_path: /home/teuthworker/archive/teuthology-2021-06-02_03:30:02-rados-octopus-distro-basic-gibba/6145621
  • description: rados/singleton/{all/lost-unfound msgr-failures/many msgr/async-v1only objectstore/filestore-xfs rados supported-random-distro$/{ubuntu_latest}}
  • duration: 0:05:17
  • email: ceph-qa@ceph.io
  • failure_reason: (verbatim repeat of the Failure Reason object above)
'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'sda', 'value': {'scheduler_mode': 'cfq', 'rotational': '1', 'vendor': 'ATA', 'sectors': '1953525168', 'links': {'masters': [], 'labels': [], 'ids': ['ata-TOSHIBA_MG04ACA100N_Y9I6K2K6F6XF', 'wwn-0x50000399ab9813d4'], 'uuids': []}, 'partitions': {'sda1': {'sectorsize': 512, 'uuid': '544fff2a-e246-4637-96d0-94b4cd4bcfea', 'links': {'masters': [], 'labels': [], 'ids': ['ata-TOSHIBA_MG04ACA100N_Y9I6K2K6F6XF-part1', 'wwn-0x50000399ab9813d4-part1'], 'uuids': ['544fff2a-e246-4637-96d0-94b4cd4bcfea']}, 'sectors': '1953522688', 'start': '2048', 'holders': [], 'size': '931.51 GB'}}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': 'SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'holders': [], 'wwn': '0x50000399ab9813d4', 'model': 'TOSHIBA MG04ACA1', 'serial': 'Y9I6K2K6F6XF', 'size': '931.51 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'sda', 'value': {'scheduler_mode': 'cfq', 'rotational': '1', 'vendor': 'ATA', 'sectors': '1953525168', 'links': {'masters': [], 'labels': [], 'ids': ['ata-TOSHIBA_MG04ACA100N_Y9I6K2K6F6XF', 'wwn-0x50000399ab9813d4'], 'uuids': []}, 'partitions': {'sda1': {'sectorsize': 512, 'uuid': '544fff2a-e246-4637-96d0-94b4cd4bcfea', 'links': {'masters': [], 'labels': [], 'ids': ['ata-TOSHIBA_MG04ACA100N_Y9I6K2K6F6XF-part1', 'wwn-0x50000399ab9813d4-part1'], 'uuids': ['544fff2a-e246-4637-96d0-94b4cd4bcfea']}, 'sectors': '1953522688', 'start': '2048', 'holders': [], 'size': '931.51 GB'}}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': 'SATA controller: Intel Corporation Cannon Lake PCH 
SATA AHCI Controller (rev 10)', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'holders': [], 'wwn': '0x50000399ab9813d4', 'model': 'TOSHIBA MG04ACA1', 'serial': 'Y9I6K2K6F6XF', 'size': '931.51 GB'}}}, {'changed': True, 'end': '2021-06-23 05:54:03.860658', 'stdout': 'Creating new GPT entries.\\nGPT data structures destroyed! You may now partition the disk using fdisk or\\nother utilities.', 'cmd': 'sgdisk --zap-all /dev/nvme0n1 || sgdisk --zap-all /dev/nvme0n1', 'rc': 0, 'start': '2021-06-23 05:54:02.736942', 'stderr': '', 'delta': '0:00:01.123716', 'invocation': {'module_args': {'creates': 'None', 'executable': 'None', '_uses_shell': True, 'strip_empty_ends': True, '_raw_params': 'sgdisk --zap-all /dev/nvme0n1 || sgdisk --zap-all /dev/nvme0n1', 'removes': 'None', 'argv': 'None', 'warn': True, 'chdir': 'None', 'stdin_add_newline': True, 'stdin': 'None'}}, 'stdout_lines': ['Creating new GPT entries.', 'GPT data structures destroyed! You may now partition the disk using fdisk or', 'other utilities.'], 'stderr_lines': [], '_ansible_no_log': False, 'failed': False, 'item': {'key': 'nvme0n1', 'value': {'scheduler_mode': 'none', 'rotational': '0', 'vendor': 'None', 'sectors': '732585168', 'links': {'masters': ['dm-0', 'dm-1', 'dm-2', 'dm-3', 'dm-4'], 'labels': [], 'ids': ['lvm-pv-uuid-pcP2yl-fBMj-5VKu-fg5T-F7On-Wuql-RbouB2', 'nvme-INTEL_SSDPEL1K375GA_PHKM9200008T375A', 'nvme-nvme.8086-50484b4d393230303030385433373541-494e54454c2053534450454c314b3337354741-00000001'], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': 'Non-Volatile memory controller: Intel Corporation Optane DC P4800X Series SSD', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'INTEL SSDPEL1K375GA', 'partitions': {}, 'holders': ['vg_nvme-lv_2', 'vg_nvme-lv_5', 'vg_nvme-lv_3', 'vg_nvme-lv_1', 'vg_nvme-lv_4'], 'size': '349.32 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'nvme0n1', 'value': {'scheduler_mode': 
'none', 'rotational': '0', 'vendor': 'None', 'sectors': '732585168', 'links': {'masters': ['dm-0', 'dm-1', 'dm-2', 'dm-3', 'dm-4'], 'labels': [], 'ids': ['lvm-pv-uuid-pcP2yl-fBMj-5VKu-fg5T-F7On-Wuql-RbouB2', 'nvme-INTEL_SSDPEL1K375GA_PHKM9200008T375A', 'nvme-nvme.8086-50484b4d393230303030385433373541-494e54454c2053534450454c314b3337354741-00000001'], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': 'Non-Volatile memory controller: Intel Corporation Optane DC P4800X Series SSD', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'INTEL SSDPEL1K375GA', 'partitions': {}, 'holders': ['vg_nvme-lv_2', 'vg_nvme-lv_5', 'vg_nvme-lv_3', 'vg_nvme-lv_1', 'vg_nvme-lv_4'], 'size': '349.32 GB'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop3', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop3', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop2', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 
'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop2', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'unreachable': True, 'msg': 'Data could not be sent to remote host "gibba041.front.sepia.ceph.com". Make sure this host can be reached over ssh: ssh: connect to host gibba041.front.sepia.ceph.com port 22: No route to host\\r\\n', 'item': {'key': 'dm-4', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '27262976', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_5', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWffov2pkrSJ78WI7ovCuwN5UiIX6flG3n0'], 'uuids': ['52da1bd7-f4d8-44b9-9aba-a2e357d1a743']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '13.00 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'dm-4', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '27262976', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_5', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWffov2pkrSJ78WI7ovCuwN5UiIX6flG3n0'], 'uuids': ['52da1bd7-f4d8-44b9-9aba-a2e357d1a743']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '13.00 GB'}}}, 
{'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop0', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop0', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop7', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop7', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'unreachable': True, 'msg': 'Data could not be sent to remote host "gibba041.front.sepia.ceph.com". 
Make sure this host can be reached over ssh: ssh: connect to host gibba041.front.sepia.ceph.com port 22: No route to host\\r\\n', 'item': {'key': 'dm-3', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_4', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWf1RwVnc9LgbiDQqs1LKxe0oXfX4UChYaJ'], 'uuids': ['2af68214-5462-479c-89be-7b27e7267335']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'dm-3', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_4', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWf1RwVnc9LgbiDQqs1LKxe0oXfX4UChYaJ'], 'uuids': ['2af68214-5462-479c-89be-7b27e7267335']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}}, {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', '_ansible_no_log': False, 'item': {'key': 'loop5', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'loop5', 'value': {'scheduler_mode': 'none', 'rotational': '1', 'vendor': 'None', 'sectors': '0', 'links': {'masters': [], 'labels': [], 'ids': [], 'uuids': []}, 'sas_device_handle': 'None', 
'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '0', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '0.00 Bytes'}}}, {'unreachable': True, 'msg': 'Data could not be sent to remote host "gibba041.front.sepia.ceph.com". Make sure this host can be reached over ssh: ssh: connect to host gibba041.front.sepia.ceph.com port 22: No route to host\\r\\n', 'item': {'key': 'dm-1', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_2', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWfsOdIHP2NnRQb2c2edTRKXG2NysTrX0n3'], 'uuids': ['1348cd38-507e-4184-9425-9de0afe00881']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}, 'ansible_loop_var': 'item', '_ansible_item_label': {'key': 'dm-1', 'value': {'scheduler_mode': '', 'rotational': '0', 'vendor': 'None', 'sectors': '176160768', 'links': {'masters': [], 'labels': [], 'ids': ['dm-name-vg_nvme-lv_2', 'dm-uuid-LVM-CRflkl1pB88B3NjfT5hvZJEQKec35UWfsOdIHP2NnRQb2c2edTRKXG2NysTrX0n3'], 'uuids': ['1348cd38-507e-4184-9425-9de0afe00881']}, 'sas_device_handle': 'None', 'sas_address': 'None', 'virtual': 1, 'host': '', 'sectorsize': '512', 'removable': '0', 'support_discard': '512', 'model': 'None', 'partitions': {}, 'holders': [], 'size': '84.00 GB'}}}], 'changed': True, 'msg': 'All items completed'}}, 'Traceback (most recent call last)': 'File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure log.error(yaml.safe_dump(failure)) File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 306, in safe_dump return dump_all([data], stream, 
Dumper=SafeDumper, **kwds)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/__init__.py", line 278, in dump_all
    dumper.represent(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 27, in represent
    node = self.represent_data(data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 199, in represent_list
    return self.represent_sequence('tag:yaml.org,2002:seq', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 92, in represent_sequence
    node_item = self.represent_data(item)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 118, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 48, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 207, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 117, in represent_mapping
    node_key = self.represent_data(item_key)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 58, in represent_data
    node = self.yaml_representers[None](self, data)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_f359b10daba6e0103d42ccfc021bc797f3cd7edc/virtualenv/lib/python3.6/site-packages/yaml/representer.py", line 231, in represent_undefined
    raise RepresenterError("cannot represent an object", data)', 'yaml.representer.RepresenterError': "('cannot represent an object', 'key')"}
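The traceback bottoms out in `represent_undefined`: `yaml.safe_dump` (via `failure_log.py`) raises `RepresenterError` when it meets an object whose exact type has no entry in `SafeDumper`'s representer table. A `str` subclass, such as Ansible's `AnsibleUnsafeText`, is the usual culprit in this callback plugin. A minimal sketch reproducing the error, using a hypothetical `UnsafeText` class as a stand-in:

```python
import yaml


class UnsafeText(str):
    """Hypothetical stand-in for Ansible's AnsibleUnsafeText (a str subclass)."""


# SafeDumper's representer table is keyed on exact types, so a str
# subclass falls through to represent_undefined, which raises.
data = {UnsafeText("key"): "value"}
try:
    yaml.safe_dump(data)
    raised = False
except yaml.representer.RepresenterError:
    raised = True

# Coercing keys and values to plain built-ins lets the dump succeed.
dumped = yaml.safe_dump({str(k): str(v) for k, v in data.items()})
```

Note this means the real failure here (the host becoming unreachable over ssh, visible in the `'unreachable': True` items above) was masked by the logging plugin crashing while trying to serialize the failure dict.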
  • flavor:
  • job_id: 6145621
  • kernel:
    • sha1: distro
    • kdb: True
  • last_in_suite: False
  • machine_type: gibba
  • name: teuthology-2021-06-02_03:30:02-rados-octopus-distro-basic-gibba
  • nuke_on_error: True
  • os_type: ubuntu
  • os_version: 18.04
  • overrides:
    • ceph:
      • log-whitelist:
        • \(MDS_ALL_DOWN\)
        • \(MDS_UP_LESS_THAN_MAX\)
      • fs: xfs
      • sha1: c44bc49e7a57a87d84dfff2a077a2058aa2172e2
      • mon_bind_msgr2: False
      • conf:
        • mgr:
          • debug monc: 10
          • debug ms: 1
          • debug mgr: 20
        • global:
          • mon client hunt interval max multiple: 2
          • ms type: async
          • ms bind msgr2: False
          • mon client directed command retry: 5
          • mon mgr beacon grace: 90
          • ms inject socket failures: 1000
        • mon:
          • debug paxos: 20
          • mon scrub interval: 300
          • debug mon: 20
          • debug ms: 1
        • osd:
          • osd op queue cut off: debug_random
          • debug ms: 1
          • debug osd: 20
          • osd objectstore: filestore
          • osd debug verify cached snaps: True
          • osd debug verify missing on start: True
          • osd op queue: debug_random
          • osd sloppy crc: True
      • log-ignorelist:
        • \(MDS_ALL_DOWN\)
        • \(MDS_UP_LESS_THAN_MAX\)
        • \(OSD_SLOW_PING_TIME
        • \(MON_DOWN\)
    • ceph-deploy:
      • fs: xfs
      • filestore: True
      • conf:
        • client:
          • log file: /var/log/ceph/ceph-$name.$pid.log
        • mon:
          • osd default pool size: 2
        • osd:
          • osd sloppy crc: True
          • osd objectstore: filestore
    • workunit:
      • sha1: aa3f3e3ca676d7ba43e27a3370e5684d3ada61a0
      • branch: octopus
    • install:
      • ceph:
        • sha1: c44bc49e7a57a87d84dfff2a077a2058aa2172e2
    • admin_socket:
      • branch: octopus
  • owner: scheduled_teuthology@teuthology
  • pid:
  • roles:
    • ['mon.a', 'mon.b', 'mon.c', 'mgr.x', 'osd.0', 'osd.1', 'osd.2']
  • sentry_event: https://sentry.ceph.com/organizations/ceph/?query=0f46bb804ad34370af4a467ad756d38e
  • status: dead
  • success: False
  • branch: octopus
  • seed:
  • sha1: c44bc49e7a57a87d84dfff2a077a2058aa2172e2
  • subset:
  • suite:
  • suite_branch: octopus
  • suite_path:
  • suite_relpath:
  • suite_repo:
  • suite_sha1: aa3f3e3ca676d7ba43e27a3370e5684d3ada61a0
  • targets:
    • gibba041.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDEEKOKyvGf4CY2RhtWxgJjA1YBpv2kgkiIpOFCTibixhEHW+GprcSlX+43QvXcpHfk5sigir04Nn6AKOr32H7Kj7ZgJIcisxCTcDxa5NL1TatjBJetdRdstIax+sianf91rIUHtiJP0dHOOMRjops2Diqdut7ibOZwkUaXH/hKY8oOFtShAMi1wSXkEL4NFeqroVLmeHLEQJFSoEBcv3PcbaZSjdj4WIqeGBv0dJ315+NVBfK1BMoU0CqhhHjf5YwEQnGfW8o65DkBXTQf2k4isAiq1kPPES5gyRKQwV5SU/rfJBRVUOb1FHP0z7h1/0olyC8xdKmFltY2LQXkOPK9
  • tasks:
    • internal.check_packages:
    • internal.buildpackages_prep:
    • internal.save_config:
    • internal.check_lock:
    • internal.add_remotes:
    • console_log:
    • internal.connect:
    • internal.push_inventory:
    • internal.serialize_remote_roles:
    • internal.check_conflict:
    • internal.check_ceph_data:
    • internal.vm_setup:
    • kernel:
      • sha1: distro
      • kdb: True
    • internal.base:
    • internal.archive_upload:
    • internal.archive:
    • internal.coredump:
    • internal.sudo:
    • internal.syslog:
    • internal.timer:
    • pcp:
    • selinux:
    • ansible.cephlab:
    • clock:
    • install:
    • ceph:
      • pre-mgr-commands:
        • sudo ceph config set mgr mgr/devicehealth/enable_monitoring false --force
      • log-ignorelist:
        • objects unfound and apparently lost
        • overall HEALTH_
        • \(OSDMAP_FLAGS\)
        • \(OSD_
        • \(PG_
        • \(OBJECT_
        • \(SLOW_OPS\)
        • slow request
    • lost_unfound:
  • teuthology_branch: master
  • verbose: True
  • pcp_grafana_url:
  • priority:
  • user:
  • queue:
  • posted: 2021-06-02 03:34:14
  • started: 2021-06-23 05:44:06
  • updated: 2021-06-23 05:59:04
  • status_class: danger
  • runtime: 0:14:58
  • wait_time: 0:09:41
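The reported `runtime` is simply `updated` minus `started`; a quick consistency check of the arithmetic:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"
started = datetime.strptime("2021-06-23 05:44:06", FMT)
updated = datetime.strptime("2021-06-23 05:59:04", FMT)
runtime = updated - started  # matches the reported runtime of 0:14:58
```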