Name | Machine Type | Up | Locked | Locked Since | Locked By | OS Type | OS Version | Arch | Description |
---|---|---|---|---|---|---|---|---|---|
mira058.front.sepia.ceph.com | mira | True | False | | | centos | 7 | x86_64 | None |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes | Links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 4332311 | 2019-09-25 01:34:08 | 2019-09-25 21:13:58 | 2019-09-25 22:31:59 | 1:18:01 | 0:39:38 | 0:38:23 | mira | master | centos | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/many.yaml workloads/snaps-few-objects.yaml} | 2 | |
fail | 4323990 | 2019-09-21 05:55:58 | 2019-09-24 21:59:43 | 2019-09-24 22:41:43 | 0:42:00 | 0:08:12 | 0:33:48 | mira | master | centos | 7.4 | ceph-deploy/basic/{ceph-deploy-overrides/enable_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/centos_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 | |
Failure Reason:
Command failed on mira030 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target' |
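The failing command chains three stop mechanisms with `||` so the node is stopped whichever init system it runs (upstart, sysvinit, or systemd). A minimal Python sketch of that first-success fallback pattern — the helper name `run_first_success` is illustrative, not teuthology's actual API:

```python
import subprocess

def run_first_success(commands):
    """Run each command in turn, stopping at the first that exits 0.

    Mirrors the shell idiom `a || b || c` used in the failing command:
    `sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target`.
    Returns the exit code of the first success, or of the last attempt.
    """
    rc = 1
    for cmd in commands:
        rc = subprocess.call(cmd)
        if rc == 0:
            return rc
    return rc

# Hypothetical stop chain matching the log (not executed here):
stop_chain = [
    ["sudo", "stop", "ceph-all"],                  # upstart
    ["sudo", "service", "ceph", "stop"],           # sysvinit
    ["sudo", "systemctl", "stop", "ceph.target"],  # systemd
]
```

Status 5 from `systemctl` typically means the unit does not exist, i.e. every link in the chain failed on this node.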
fail | 4323956 | 2019-09-21 05:55:34 | 2019-09-24 18:58:42 | 2019-09-24 19:26:42 | 0:28:00 | 0:04:26 | 0:23:34 | mira | master | ubuntu | 16.04 | ceph-deploy/basic/{ceph-deploy-overrides/enable_dmcrypt_diff_journal_disk.yaml config_options/cephdeploy_conf.yaml distros/ubuntu_latest.yaml objectstore/filestore-xfs.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 2 | |
Failure Reason:
Command failed on mira016 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target' |
fail | 4316337 | 2019-09-18 16:53:59 | 2019-09-19 04:12:28 | 2019-09-19 07:28:30 | 3:16:02 | 2:03:58 | 1:12:04 | mira | master | centos | 7.6 | kcephfs:cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/no.yaml kclient/{mount.yaml overrides/{distro/random/{k-testing.yaml supported$/{centos_7.yaml}} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-comp-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
During teardown, the ceph-cm-ansible "zap all disks" loop ran `sgdisk --zap-all <dev> || sgdisk --zap-all <dev>` over the node's block devices. The zaps of /dev/sdd (result truncated in the log), /dev/sde, /dev/sdf, /dev/sdg, /dev/sdb, /dev/sdc, and /dev/sdh succeeded ("Creating new GPT entries. GPT data structures destroyed!"); /dev/sda was skipped ("Conditional result was False"); /dev/dm-0 (the mpatha multipath device) failed with rc=2: "Problem opening /dev/dm-0 for reading! Error is 2. The specified file does not exist! Problem opening '' for writing! Program will now terminate. Warning! MBR not overwritten!" While logging that result, the failure-logging callback itself crashed:

Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
    log.error(yaml.safe_dump(failure))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
  [repeated represent_data / represent_mapping / represent_sequence frames omitted]
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
    raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'sdd') |
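The RepresenterError happens because `yaml.safe_dump` only knows how to represent exact builtin types; Ansible wraps strings (here the device name `u'sdd'`) in a `str` subclass, which SafeDumper refuses. A minimal reproduction, assuming PyYAML is installed — `UnsafeText` and `dump_failure` are illustrative stand-ins, not the actual Ansible/teuthology classes:

```python
import yaml

class UnsafeText(str):
    """Stand-in for a tagged string subclass coming out of Ansible."""

def dump_failure(failure):
    """Dump a failure dict to YAML, coercing non-plain types on demand."""
    try:
        return yaml.safe_dump(failure)
    except yaml.representer.RepresenterError:
        # Workaround: SafeDumper matches types exactly, so a str
        # subclass has no representer; coerce to plain builtins first.
        return yaml.safe_dump({str(k): str(v) for k, v in failure.items()})
```

The same fix (normalizing the structure to plain dicts/lists/strings before `safe_dump`) would have let the callback log the real sgdisk failure instead of crashing.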
fail | 4316324 | 2019-09-18 16:53:46 | 2019-09-19 00:54:25 | 2019-09-19 03:34:27 | 2:40:02 | 1:34:54 | 1:05:08 | mira | master | rhel | 7.6 | kcephfs:cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml kclient/{mount.yaml overrides/{distro/rhel/{k-distro.yaml rhel_7.yaml} ms-die-on-skipped.yaml}} objectstore-ec/bluestore-ec-root.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/kclient_workunit_snaps.yaml} | 3 | |
Failure Reason:
"2019-09-19T02:17:51.762850+0000 mon.b (mon.0) 191 : cluster [WRN] Health check failed: 1 slow ops, oldest one blocked for 1568859469 sec, osd.2 has slow ops (SLOW_OPS)" in cluster log |
pass | 4316310 | 2019-09-18 16:53:31 | 2019-09-18 21:54:01 | 2019-09-19 00:52:03 | 2:58:02 | 2:27:35 | 0:30:27 | mira | master | rhel | 7.6 | kcephfs:cephfs/{begin.yaml clusters/1-mds-1-client.yaml conf/{client.yaml mds.yaml mon.yaml osd.yaml} inline/yes.yaml kclient/{mount.yaml overrides/{distro/rhel/{k-distro.yaml rhel_7.yaml} ms-die-on-skipped.yaml}} objectstore-ec/filestore-xfs.yaml overrides/{frag_enable.yaml log-config.yaml osd-asserts.yaml whitelist_health.yaml whitelist_wrongly_marked_down.yaml} tasks/kclient_workunit_o_trunc.yaml} | 3 | |
fail | 4305938 | 2019-09-14 03:59:28 | 2019-09-15 00:45:05 | 2019-09-15 02:13:05 | 1:28:00 | 0:23:33 | 1:04:27 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} | 4 | |
Failure Reason:
SELinux denials found on ubuntu@mira075.front.sepia.ceph.com: ['type=AVC msg=audit(1568513081.518:6427): avc: denied { getattr } for pid=5110 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1568513065.888:6370): avc: denied { getattr } for pid=5110 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1'] |
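When triaging these denials, the useful fields are the denied permission, the process, and the source/target SELinux contexts. A small sketch for pulling those out of raw AVC lines like the ones above; the regex covers only the fields shown in these records, not the full audit grammar:

```python
import re

# Field layout of an audit AVC record as it appears in these denials:
# avc: denied { <perms> } for pid=... comm="..." ... scontext=... tcontext=... tclass=...
AVC_RE = re.compile(
    r'avc:\s+denied\s+\{\s*(?P<perms>[^}]+?)\s*\}'
    r'.*?comm="(?P<comm>[^"]+)"'
    r'.*?scontext=(?P<scontext>\S+)'
    r'\s+tcontext=(?P<tcontext>\S+)'
    r'\s+tclass=(?P<tclass>\S+)'
)

def parse_avc(line):
    """Return the interesting fields of one AVC denial, or None."""
    m = AVC_RE.search(line)
    return m.groupdict() if m else None
```

For the denial above, this yields `perms='getattr'`, `comm='ms_dispatch'`, `tclass='file'`, with `tcontext` ending in `proc_kcore_t` — i.e. a Ceph daemon probing /proc/kcore, a known-benign pattern under `permissive=1`.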
fail | 4273405 | 2019-09-02 05:55:36 | 2019-09-02 07:44:06 | 2019-09-02 09:04:06 | 1:20:00 | 0:23:11 | 0:56:49 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} | 4 | |
Failure Reason:
SELinux denials found on ubuntu@mira110.front.sepia.ceph.com: ['type=AVC msg=audit(1567414692.875:6431): avc: denied { read } for pid=8160 comm="fn_anonymous" name="b8:48" dev="tmpfs" ino=126121 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1567414680.126:6419): avc: denied { getattr } for pid=7511 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=124141 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1567414680.125:6418): avc: denied { open } for pid=7511 comm="fn_anonymous" path="/run/udev/data/b8:16" dev="tmpfs" ino=124141 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1567414646.614:6302): avc: denied { getattr } for pid=6183 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1567414680.125:6418): avc: denied { read } for pid=7511 comm="fn_anonymous" name="b8:16" dev="tmpfs" ino=124141 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1567414692.875:6431): avc: denied { open } for pid=8160 comm="fn_anonymous" path="/run/udev/data/b8:48" dev="tmpfs" ino=126121 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1567414692.876:6432): avc: denied { getattr } for pid=8160 comm="fn_anonymous" path="/run/udev/data/b8:48" dev="tmpfs" ino=126121 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:udev_var_run_t:s0 tclass=file permissive=1'] |
fail | 4266190 | 2019-08-31 03:59:34 | 2019-08-31 04:25:43 | 2019-08-31 05:17:43 | 0:52:00 | 0:22:27 | 0:29:33 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 4 | |
Failure Reason:
SELinux denials found on ubuntu@mira058.front.sepia.ceph.com: ['type=AVC msg=audit(1567228307.838:6279): avc: denied { getattr } for pid=4975 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1567228296.623:6231): avc: denied { getattr } for pid=4975 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1'] |
fail | 4228629 | 2019-08-19 04:00:08 | 2019-08-19 21:48:09 | 2019-08-19 22:10:08 | 0:21:59 | 0:04:27 | 0:17:32 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 4 | |
Failure Reason:
Command failed on mira107 with status 2: 'sudo tar cz -f /tmp/tmppyMMss -C /var/lib/ceph/mon -- .' |
fail | 4183861 | 2019-08-05 05:55:36 | 2019-08-05 07:35:51 | 2019-08-05 08:27:51 | 0:52:00 | 0:21:57 | 0:30:03 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/rbd_import_export.yaml} | 4 | |
Failure Reason:
Command failed on mira030 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health' |
fail | 4182969 | 2019-08-05 03:59:36 | 2019-08-05 05:11:43 | 2019-08-05 06:47:43 | 1:36:00 | 0:30:34 | 1:05:26 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore_dmcrypt.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 4 | |
Failure Reason:
SELinux denials found on ubuntu@mira027.front.sepia.ceph.com: ['type=AVC msg=audit(1564987036.980:6126): avc: denied { search } for pid=5362 comm="ceph-mgr" name="httpd" dev="sda1" ino=82020 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1'] |
fail | 4182956 | 2019-08-05 03:59:27 | 2019-08-05 03:59:29 | 2019-08-05 05:31:29 | 1:32:00 | 0:22:54 | 1:09:06 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_bluestore.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 4 | |
Failure Reason: SELinux denials found on ubuntu@mira030.front.sepia.ceph.com: ['type=AVC msg=audit(1564982725.548:6163): avc: denied { getattr } for pid=4441 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1', 'type=AVC msg=audit(1564982740.557:6220): avc: denied { getattr } for pid=4441 comm="ms_dispatch" path="/proc/kcore" dev="proc" ino=4026532068 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:proc_kcore_t:s0 tclass=file permissive=1']
pass | 4162044 | 2019-07-29 04:01:09 | 2019-07-29 05:31:35 | 2019-07-29 06:03:36 | 0:32:01 | 0:12:18 | 0:19:43 | mira | master | ubuntu | 18.04 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/ubuntu_latest.yaml python_versions/python_2.yaml tasks/rbd_import_export.yaml} | 4 | |
fail | 4151462 | 2019-07-26 05:55:21 | 2019-07-26 05:55:39 | 2019-07-26 06:37:38 | 0:41:59 | 0:21:13 | 0:20:46 | mira | master | centos | 7.4 | ceph-deploy/ceph-volume/{cluster/4node.yaml config/ceph_volume_dmcrypt_off.yaml distros/centos_latest.yaml tasks/rbd_import_export.yaml} | 4 | |
Failure Reason: SELinux denials found on ubuntu@mira016.front.sepia.ceph.com: ['type=AVC msg=audit(1564122790.298:5603): avc: denied { search } for pid=31409 comm="ceph-mgr" name="httpd" dev="sda1" ino=5904 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:httpd_config_t:s0 tclass=dir permissive=1']
fail | 4103004 | 2019-07-08 05:55:55 | 2019-07-09 02:42:15 | 2019-07-09 03:44:14 | 1:01:59 | 0:23:45 | 0:38:14 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/centos_latest.yaml python_versions/python_2.yaml tasks/ceph-admin-commands.yaml} | 4 | |
Failure Reason: Command failed on mira027 with status 1: 'cd /home/ubuntu/cephtest && sudo ceph health'
pass | 4087980 | 2019-07-02 21:32:56 | 2019-07-03 15:19:00 | 2019-07-03 17:01:01 | 1:42:01 | 1:02:41 | 0:39:20 | mira | master | ubuntu | 16.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.1-pg-log-overrides/short_pg_log.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-workload.yaml 6-luminous-with-mgr.yaml 6.5-crush-compat.yaml 7-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} 8-jewel-workload.yaml distros/ubuntu_latest.yaml slow_requests.yaml} | 4 | |
fail | 4087961 | 2019-07-02 21:32:36 | 2019-07-03 10:51:08 | 2019-07-03 14:15:11 | 3:24:03 | 2:46:54 | 0:37:09 | mira | master | centos | 7.4 | upgrade:jewel-x/stress-split/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.1-pg-log-overrides/short_pg_log.yaml 1.5-final-scrub.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{radosbench.yaml rbd-cls.yaml rbd-import-export.yaml rbd_api.yaml readwrite.yaml snaps-few-objects.yaml} 5-finish-upgrade.yaml 6-luminous.yaml 6.5-crush-compat.yaml 7-final-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/centos_latest.yaml slow_requests.yaml thrashosds-health.yaml} | 3 | |
Failure Reason: failed to recover before timeout expired
pass | 4087952 | 2019-07-02 21:32:26 | 2019-07-03 09:29:07 | 2019-07-03 10:51:07 | 1:22:00 | 1:07:36 | 0:14:24 | mira | master | ubuntu | 14.04 | upgrade:jewel-x/parallel/{0-cluster/{openstack.yaml start.yaml} 1-jewel-install/jewel.yaml 1.1-pg-log-overrides/short_pg_log.yaml 1.5-final-scrub.yaml 2-workload/test_rbd_python.yaml 3-upgrade-sequence/upgrade-all.yaml 4-luminous.yaml 5-workload.yaml 6-luminous-with-mgr.yaml 6.5-crush-compat.yaml 7-final-workload/{blogbench.yaml rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} 8-jewel-workload.yaml distros/ubuntu_14.04.yaml slow_requests.yaml} | 4 | |
fail | 4064480 | 2019-06-24 05:55:45 | 2019-06-25 22:54:25 | 2019-06-25 23:26:24 | 0:31:59 | 0:10:28 | 0:21:31 | mira | master | centos | 7.6 | ceph-deploy/{cluster/4node.yaml config/ceph_volume_filestore.yaml distros/centos_latest.yaml python_versions/python_3.yaml tasks/ceph-admin-commands.yaml} | 4 | |
Failure Reason: Command failed on mira027 with status 5: 'sudo stop ceph-all || sudo service ceph stop || sudo systemctl stop ceph.target'