Description: fs/upgrade/featureful_client/old_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/pacific}}

Log: http://qa-proxy.ceph.com/teuthology/teuthology-2021-05-11_03:15:03-fs-master-distro-basic-gibba/6108278/teuthology.log

Sentry event: https://sentry.ceph.com/organizations/ceph/?query=aa1e854102344702bafa438ea0b7405d

Failure Reason:

The failure was reported by the "zap disks" Ansible task on gibba043.front.sepia.ceph.com, which looped over the host's block devices running `sgdisk --zap-all <dev> || sgdisk --zap-all <dev>` (the `||` re-runs the command once if the first invocation fails):

  • loop0–loop7 and sda: skipped ('Conditional result was False').
  • dm-0, dm-2, dm-4, nvme0n1: succeeded (rc 0). GPT data structures were destroyed, with the usual warning that the kernel keeps using the old partition table until a reboot or partprobe(8)/kpartx(8).
  • dm-1, dm-3: failed (rc 127) with `/bin/sh: /usr/bin/python: No such file or directory` and the Ansible hint "The module failed to execute correctly, you probably need to set the interpreter." Pointing Ansible at an interpreter that exists on the target (e.g. ansible_python_interpreter=/usr/bin/python3) or installing Python there would address this.

While logging that failure, the failure_log.py callback plugin (log_failure, line 44) called yaml.safe_dump() on the failure object and itself crashed with:

  yaml.representer.RepresenterError: ('cannot represent an object', 'key')

SafeDumper only serializes plain built-in types; the traceback shows it reached represent_undefined on a mapping key ('key'), presumably an Ansible str subclass rather than a plain str.

  • log_href: http://qa-proxy.ceph.com/teuthology/teuthology-2021-05-11_03:15:03-fs-master-distro-basic-gibba/6108278/teuthology.log
  • archive_path: /home/teuthworker/archive/teuthology-2021-05-11_03:15:03-fs-master-distro-basic-gibba/6108278
  • description: fs/upgrade/featureful_client/old_client/{bluestore-bitmap clusters/1-mds-2-client-micro conf/{client mds mon osd} overrides/{frag_enable multimds/yes pg-warn whitelist_health whitelist_wrongly_marked_down} tasks/{0-octopus 1-client 2-upgrade 3-compat_client/pacific}}
  • duration: 0:41:06
  • email: ceph-qa@ceph.io
  • failure_reason: see "Failure Reason" above (Ansible module execution failed on gibba043 with rc 127, /bin/sh: /usr/bin/python: No such file or directory; the failure_log callback then crashed with yaml.representer.RepresenterError while dumping the failure object).
  • flavor:
  • job_id: 6108278
  • kernel:
    • sha1: distro
    • kdb: True
  • last_in_suite: False
  • machine_type: gibba
  • name: teuthology-2021-05-11_03:15:03-fs-master-distro-basic-gibba
  • nuke_on_error: True
  • os_type:
  • os_version:
  • overrides: (deep-merged into each task's config; see the merge sketch after the record)
    • ceph-deploy:
      • fs: xfs
      • conf:
        • client:
          • log file: /var/log/ceph/ceph-$name.$pid.log
        • mon:
          • osd default pool size: 2
        • osd:
          • mon osd full ratio: 0.9
          • mon osd backfillfull_ratio: 0.85
          • bluestore fsck on mount: True
          • mon osd nearfull ratio: 0.8
          • debug bluestore: 1/20
          • debug bluefs: 1/20
          • osd objectstore: bluestore
          • bluestore block size: 96636764160
          • debug rocksdb: 4/10
          • bdev enable discard: True
          • osd failsafe full ratio: 0.95
          • bdev async discard: True
      • bluestore: True
    • workunit:
      • sha1: 8594b4f9a5d9719eac330549ff18ca6f25fb0c94
      • branch: master
    • ceph:
      • log-whitelist:
        • \(MDS_ALL_DOWN\)
        • \(MDS_UP_LESS_THAN_MAX\)
      • fs: xfs
      • sha1: 907044dc5983d9f0fbcb97b692a7fbfa074013f3
      • conf:
        • global:
          • bluestore warn on no per pool omap: False
          • mon pg warn min per osd: 0
          • bluestore warn on legacy statfs: False
        • mgr:
          • debug ms: 1
          • debug mgr: 20
        • client:
          • rados osd op timeout: 15m
          • debug ms: 1
          • rados mon op timeout: 15m
          • debug client: 20
          • client mount timeout: 600
        • mon:
          • debug paxos: 20
          • debug mon: 20
          • debug ms: 1
          • mon warn on osd down out interval zero: False
          • mon op complaint time: 120
        • mds:
          • mds bal split bits: 3
          • mds bal split size: 100
          • osd op complaint time: 180
          • debug mds: 20
          • mds bal merge size: 5
          • debug ms: 1
          • mds bal frag: True
          • mds verify scatter: True
          • mds bal fragment size max: 10000
          • mds op complaint time: 180
          • rados mon op timeout: 15m
          • rados osd op timeout: 15m
          • mds debug scatterstat: True
          • mds debug frag: True
        • osd:
          • mon osd full ratio: 0.9
          • bluestore allocator: bitmap
          • bluestore fsck on mount: True
          • debug osd: 20
          • osd op complaint time: 180
          • debug bluestore: 1/20
          • debug bluefs: 1/20
          • osd objectstore: bluestore
          • debug ms: 1
          • mon osd nearfull ratio: 0.8
          • osd failsafe full ratio: 0.95
          • bluestore block size: 96636764160
          • debug rocksdb: 4/10
          • bdev enable discard: True
          • mon osd backfillfull_ratio: 0.85
          • bdev async discard: True
      • cephfs:
        • max_mds: 2
      • log-ignorelist:
        • \(MDS_ALL_DOWN\)
        • \(MDS_UP_LESS_THAN_MAX\)
        • overall HEALTH_
        • \(FS_DEGRADED\)
        • \(MDS_FAILED\)
        • \(MDS_DEGRADED\)
        • \(FS_WITH_FAILED_MDS\)
        • \(MDS_DAMAGE\)
        • \(MDS_ALL_DOWN\)
        • \(MDS_UP_LESS_THAN_MAX\)
        • \(FS_INLINE_DATA_DEPRECATED\)
        • overall HEALTH_
        • \(OSD_DOWN\)
        • \(OSD_
        • but it is still running
        • is not responding
        • scrub mismatch
        • ScrubResult
        • wrongly marked
        • \(POOL_APP_NOT_ENABLED\)
        • \(SLOW_OPS\)
        • overall HEALTH_
        • \(MON_MSGR2_NOT_ENABLED\)
        • slow request
        • missing required features
    • install:
      • ceph:
        • sha1: 907044dc5983d9f0fbcb97b692a7fbfa074013f3
    • admin_socket:
      • branch: master
    • thrashosds:
      • bdev_inject_crash_probability: 0.5
      • bdev_inject_crash: 2
  • owner: scheduled_teuthology@teuthology
  • pid:
  • roles:
    • ['mon.a', 'mon.b', 'mon.c', 'mgr.x', 'mgr.y', 'mds.a', 'mds.b', 'mds.c', 'osd.0', 'osd.1', 'osd.2', 'osd.3']
    • ['client.0']
    • ['client.1']
  • sentry_event: https://sentry.ceph.com/organizations/ceph/?query=aa1e854102344702bafa438ea0b7405d
  • status: dead
  • success: False
  • branch: master
  • seed:
  • sha1: 907044dc5983d9f0fbcb97b692a7fbfa074013f3
  • subset:
  • suite:
  • suite_branch: master
  • suite_path:
  • suite_relpath:
  • suite_repo:
  • suite_sha1: 8594b4f9a5d9719eac330549ff18ca6f25fb0c94
  • targets:
    • gibba027.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDngRRQCXe0bAewM5p7gvAdm4w0FSh0ufT9Q4pPbx29mjdwfjhP5Yg8nA4D7siv9ydR7QdM9HR9ALvF8UlUIm+Bb7RbV0qazG4URlr4ZkqB3C7sbF2dwSfx1d1CU1dtt6KKJYraxdXoV/lgBYyxkAmnzGJA45+FY/k9qOqKyQtji/4BaOd92dQPxKjgjd2aZwgt13l3ZvvnGfSxmLbmblWBr4DuwEkwHKEDlEGVUFwQDF9eYGjeps/j0gpLn2O5cr3XhtKmrURo1NieFeJrnVtPDNmJelzvfT4YM6B7ZSyq1MYxNsEWNYkebF8ZkTVNWeDYfg+Dmo4VqMpNRGUGc27l
    • gibba043.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCW+jKkCA4jeYVKZOjYoKV82tCtdvC5yCvUhEUM/p6TRlFg6TUqYjqSSBG2qi+8IK9MLs8NHAH/To+wAZG+XQMD/qdB7rD3O7YY9cnJKyzUnZ/YoO7lqYiuDNEnxIajl/0l9MrIWgYaKapRSFYzM8byIDMphh8Rmtzedl27ckVh0ToVSvpWIAvBYJjVk7ZxAtwcVqylT0aiBiG/bLnu5+vA+GzW3tAnQ8ydm+mTMTbuj9tyfzsSY+Vyf1GyJ5rp7vRWjf+D8RNNI5OtVln1tan2Oi4bU93NMna7SvIRLF0b6G7EsboErN7gsRPKTP+RbMGv0XHwnTNIPgsoPSjK0hEh
    • gibba028.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXro1HbLUJ/+CPb5FsfDio7QLZ603z7TRjZ6Ffa8ZdqxU5pcRAFva71Arg0RErqyDxegQlV3+e0nimXC9y1vbKic09PmjFhbWgAnze7IUJ9OLvJ3BVOBA/sz3INEwYbIOjZ/uUGAdRUpVY85xt8lEEo5XZY7gtoBx08RskBC32eHLjBnvVYfZmIKduVFNZRvr9Vo/cKBbk+JnNAgLseVSkFjVvpt+jSQVoy6BDdrKZj0VD/3gb+A8mIFVd++9OW2Wy0VL7QbyNIJ/s5xcSenMu7jDqUqQL3htL5vkcnacaug/m0YohlAoECletEQvo5FroIfe5pwOqH482OviEjXOH
  • tasks:
    • internal.check_packages:
    • internal.buildpackages_prep:
    • internal.save_config:
    • internal.check_lock:
    • internal.add_remotes:
    • console_log:
    • internal.connect:
    • internal.push_inventory:
    • internal.serialize_remote_roles:
    • internal.check_conflict:
    • internal.check_ceph_data:
    • internal.vm_setup:
    • kernel:
      • sha1: distro
      • kdb: True
    • internal.base:
    • internal.archive_upload:
    • internal.archive:
    • internal.coredump:
    • internal.sudo:
    • internal.syslog:
    • internal.timer:
    • pcp:
    • selinux:
    • ansible.cephlab:
    • clock:
    • install:
      • extra_packages:
        • librados2
      • exclude_packages:
        • librados3
        • ceph-mgr-dashboard
        • ceph-mgr-diskprediction-local
        • ceph-mgr-rook
        • ceph-mgr-cephadm
        • cephadm
      • branch: octopus
    • print: **** done installing octopus
    • ceph:
      • conf:
        • global:
          • mon warn on pool no app: False
          • ms bind msgr2: False
      • log-ignorelist:
        • overall HEALTH_
        • \(FS_
        • \(MDS_
        • \(OSD_
        • \(MON_DOWN\)
        • \(CACHE_POOL_
        • \(POOL_
        • \(MGR_DOWN\)
        • \(PG_
        • \(SMALLER_PGP_NUM\)
        • Monitor daemon marked osd
        • Behind on trimming
        • Manager daemon
    • exec:
      • osd.0:
        • ceph osd set-require-min-compat-client octopus
    • print: **** done ceph
    • ceph-fuse:
    • print: **** done octopus client
    • workunit:
      • clients:
        • all:
          • suites/fsstress.sh
    • print: **** done fsstress
    • mds_pre_upgrade:
    • print: **** done mds pre-upgrade sequence
    • install.upgrade:
      • mon.a:
      • mon.b:
    • print: **** done install.upgrade both hosts
    • ceph.restart:
      • daemons:
        • mon.*
        • mgr.*
      • wait-for-healthy: False
      • mon-health-to-clog: False
    • ceph.healthy:
    • ceph.restart:
      • daemons:
        • osd.*
      • wait-for-healthy: False
      • wait-for-osds-up: True
    • ceph.stop:
      • mds.*
    • ceph.restart:
      • daemons:
        • mds.*
      • wait-for-healthy: False
      • wait-for-osds-up: True
    • exec:
      • mon.a:
        • ceph osd dump -f json-pretty
        • ceph versions
        • ceph osd require-osd-release octopus
        • for f in `ceph osd pool ls` ; do ceph osd pool set $f pg_autoscale_mode off ; done
    • ceph.healthy:
    • print: **** done ceph.restart
    • exec:
      • mon.a:
        • ceph fs dump --format=json-pretty
        • ceph fs required_client_features cephfs add metric_collect
    • sleep:
      • duration: 5
    • fs.clients_evicted:
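The closing 3-compat_client steps are the point of this job: after the cluster is upgraded, adding the metric_collect feature to the filesystem's required_client_features should evict the still-mounted octopus clients (which lack that feature), and fs.clients_evicted asserts exactly that. A hypothetical manual spot-check of the same condition (the commands it builds on are real ceph CLI calls, but this verification approach is mine, not the task's implementation):

    import json
    import subprocess

    # After 'ceph fs required_client_features cephfs add metric_collect',
    # octopus clients should be evicted and drop out of the session list.
    out = subprocess.check_output(
        ["ceph", "tell", "mds.a", "session", "ls", "--format=json"])
    for session in json.loads(out):
        meta = session.get("client_metadata", {})
        print(session["id"], meta.get("ceph_version", "unknown"))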
  • teuthology_branch: master
  • verbose: True
  • pcp_grafana_url:
  • priority:
  • user:
  • queue:
  • posted: 2021-05-11 03:17:40
  • started: 2021-05-14 03:44:36
  • updated: 2021-05-14 04:41:44
  • status_class: danger
  • runtime: 0:57:08
  • wait_time: 0:16:02