Status | Job ID | Links | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes
fail 4504889 2019-11-13 18:12:52 2019-11-14 13:23:43 2019-11-14 14:13:43 0:50:00 0:22:35 0:27:25 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{rhel_7.yaml} tasks/crash.yaml} 2
Failure Reason:

Test failure: test_info (tasks.mgr.test_crash.TestCrash)

pass 4504890 2019-11-13 18:12:53 2019-11-14 13:23:44 2019-11-14 16:37:46 3:14:02 2:28:40 0:45:22 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/failover.yaml} 2
fail 4504891 2019-11-13 18:12:54 2019-11-14 13:23:44 2019-11-14 14:27:43 1:03:59 0:15:41 0:48:18 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/insights.yaml} 2
Failure Reason:

Test failure: test_crash_history (tasks.mgr.test_insights.TestInsights)

fail 4504892 2019-11-13 18:12:56 2019-11-14 13:23:44 2019-11-14 15:35:45 2:12:01 0:22:45 1:49:16 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/module_selftest.yaml} 2
Failure Reason:

Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)

fail 4504893 2019-11-13 18:12:57 2019-11-14 13:23:44 2019-11-14 14:17:43 0:53:59 0:17:38 0:36:21 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

Test failure: test_mds_add (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)

pass 4504894 2019-11-13 18:12:58 2019-11-14 13:23:44 2019-11-14 15:51:46 2:28:02 0:18:23 2:09:39 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_7.yaml} tasks/progress.yaml} 2
pass 4504895 2019-11-13 18:12:59 2019-11-14 13:23:44 2019-11-14 14:47:45 1:24:01 0:18:00 1:06:01 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{centos_7.yaml} tasks/prometheus.yaml} 2
pass 4504896 2019-11-13 18:13:00 2019-11-14 13:23:44 2019-11-14 15:15:45 1:52:01 0:25:28 1:26:33 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/ssh_orchestrator.yaml} 2
pass 4504897 2019-11-13 18:13:01 2019-11-14 13:23:44 2019-11-14 15:59:46 2:36:02 0:14:11 2:21:51 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/workunits.yaml} 2
fail 4504898 2019-11-13 18:13:02 2019-11-14 13:23:44 2019-11-14 19:37:50 6:14:06 2:20:28 3:53:38 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/crash.yaml} 2
Failure Reason:

Test failure: test_info (tasks.mgr.test_crash.TestCrash)

pass 4504899 2019-11-13 18:13:03 2019-11-14 13:43:20 2019-11-14 15:07:21 1:24:01 0:27:21 0:56:40 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/failover.yaml} 2
fail 4504900 2019-11-13 18:13:04 2019-11-14 13:45:21 2019-11-14 14:27:21 0:42:00 0:24:14 0:17:46 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_7.yaml} tasks/insights.yaml} 2
Failure Reason:

Test failure: test_crash_history (tasks.mgr.test_insights.TestInsights)

fail 4504901 2019-11-13 18:13:05 2019-11-14 13:47:09 2019-11-14 14:33:08 0:45:59 0:20:19 0:25:40 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{centos_7.yaml} tasks/module_selftest.yaml} 2
Failure Reason:

Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)

fail 4504902 2019-11-13 18:13:06 2019-11-14 13:47:09 2019-11-14 16:37:11 2:50:02 2:27:00 0:23:02 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

Test failure: test_mds_add (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)

pass 4504903 2019-11-13 18:13:07 2019-11-14 13:51:19 2019-11-14 17:07:21 3:16:02 0:20:16 2:55:46 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/progress.yaml} 2
pass 4504904 2019-11-13 18:13:08 2019-11-14 13:53:09 2019-11-14 14:45:09 0:52:00 0:18:17 0:33:43 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{centos_7.yaml} tasks/prometheus.yaml} 2
pass 4504905 2019-11-13 18:13:09 2019-11-14 13:53:14 2019-11-14 15:07:15 1:14:01 0:10:03 1:03:58 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/ssh_orchestrator.yaml} 2
pass 4504906 2019-11-13 18:13:10 2019-11-14 13:55:37 2019-11-14 17:33:40 3:38:03 0:14:15 3:23:48 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_7.yaml} tasks/workunits.yaml} 2
fail 4504907 2019-11-13 18:13:11 2019-11-14 14:03:36 2019-11-14 14:33:35 0:29:59 0:08:50 0:21:09 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/crash.yaml} 2
Failure Reason:

Test failure: test_info (tasks.mgr.test_crash.TestCrash)

pass 4504908 2019-11-13 18:13:12 2019-11-14 14:06:56 2019-11-14 17:12:58 3:06:02 2:28:11 0:37:51 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{rhel_7.yaml} tasks/failover.yaml} 2
fail 4504909 2019-11-13 18:13:13 2019-11-14 14:13:51 2019-11-14 15:29:51 1:16:00 0:18:10 0:57:50 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/insights.yaml} 2
Failure Reason:

Test failure: test_crash_history (tasks.mgr.test_insights.TestInsights)

fail 4504910 2019-11-13 18:13:14 2019-11-14 14:15:37 2019-11-14 17:31:40 3:16:03 2:26:15 0:49:48 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{rhel_7.yaml} tasks/module_selftest.yaml} 2
Failure Reason:

Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)

fail 4504911 2019-11-13 18:13:15 2019-11-14 14:17:43 2019-11-14 14:57:43 0:40:00 0:23:49 0:16:11 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

Test failure: test_mds_add (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)

pass 4504912 2019-11-13 18:13:16 2019-11-14 14:17:45 2019-11-14 15:07:44 0:49:59 0:21:16 0:28:43 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/progress.yaml} 2
pass 4504913 2019-11-13 18:13:17 2019-11-14 14:27:40 2019-11-14 16:49:41 2:22:01 0:16:48 2:05:13 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_7.yaml} tasks/prometheus.yaml} 2
pass 4504914 2019-11-13 18:13:18 2019-11-14 14:27:45 2019-11-14 14:57:44 0:29:59 0:15:18 0:14:41 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{centos_7.yaml} tasks/ssh_orchestrator.yaml} 2
pass 4504915 2019-11-13 18:13:19 2019-11-14 14:29:40 2019-11-14 15:03:39 0:33:59 0:22:19 0:11:40 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/workunits.yaml} 2
fail 4504916 2019-11-13 18:13:20 2019-11-14 14:33:26 2019-11-14 15:45:26 1:12:00 0:21:55 0:50:05 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/crash.yaml} 2
Failure Reason:

Test failure: test_info (tasks.mgr.test_crash.TestCrash)

pass 4504917 2019-11-13 18:13:21 2019-11-14 14:33:37 2019-11-14 17:45:39 3:12:02 2:23:52 0:48:10 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/failover.yaml} 2
fail 4504918 2019-11-13 18:13:22 2019-11-14 14:45:33 2019-11-14 15:55:33 1:10:00 0:14:29 0:55:31 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/insights.yaml} 2
Failure Reason:

Test failure: test_crash_history (tasks.mgr.test_insights.TestInsights)

fail 4504919 2019-11-13 18:13:23 2019-11-14 14:47:50 2019-11-14 15:59:50 1:12:00 0:31:52 0:40:08 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} 2
Failure Reason:

"2019-11-14T15:43:49.985017+0000 mds.c (mds.0) 1 : cluster [WRN] evicting unresponsive client mira061:x (4686), after 303.718 seconds" in cluster log

fail 4504920 2019-11-13 18:13:24 2019-11-14 14:47:50 2019-11-14 17:47:52 3:00:02 2:21:49 0:38:13 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{rhel_7.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

Test failure: test_mds_add (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)

pass 4504921 2019-11-13 18:13:25 2019-11-14 14:58:08 2019-11-14 16:12:08 1:14:00 0:19:41 0:54:19 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/progress.yaml} 2
fail 4504922 2019-11-13 18:13:26 2019-11-14 14:58:08 2019-11-14 16:12:08 1:14:00 0:15:36 0:58:24 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/prometheus.yaml} 2
Failure Reason:

Test failure: test_standby (tasks.mgr.test_prometheus.TestPrometheus)

pass 4504923 2019-11-13 18:13:27 2019-11-14 15:03:44 2019-11-14 15:31:43 0:27:59 0:10:26 0:17:33 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/ssh_orchestrator.yaml} 2
pass 4504924 2019-11-13 18:13:28 2019-11-14 15:07:36 2019-11-14 15:57:36 0:50:00 0:22:40 0:27:20 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/workunits.yaml} 2
fail 4504925 2019-11-13 18:13:29 2019-11-14 15:07:36 2019-11-14 16:59:37 1:52:01 0:14:26 1:37:35 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{centos_7.yaml} tasks/crash.yaml} 2
Failure Reason:

Test failure: test_info (tasks.mgr.test_crash.TestCrash)

pass 4504926 2019-11-13 18:13:30 2019-11-14 15:07:47 2019-11-14 16:25:48 1:18:01 0:21:59 0:56:02 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_7.yaml} tasks/failover.yaml} 2
fail 4504927 2019-11-13 18:13:31 2019-11-14 15:11:42 2019-11-14 16:23:42 1:12:00 0:18:45 0:53:15 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{rhel_7.yaml} tasks/insights.yaml} 2
Failure Reason:

Test failure: test_crash_history (tasks.mgr.test_insights.TestInsights)

fail 4504928 2019-11-13 18:13:31 2019-11-14 15:11:42 2019-11-14 16:45:43 1:34:01 0:19:16 1:14:45 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{centos_7.yaml} tasks/module_selftest.yaml} 2
Failure Reason:

Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)

fail 4504929 2019-11-13 18:13:32 2019-11-14 15:15:59 2019-11-14 17:46:01 2:30:02 0:16:28 2:13:34 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

Test failure: test_mds_add (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)

pass 4504930 2019-11-13 18:13:33 2019-11-14 15:29:55 2019-11-14 16:53:55 1:24:00 0:23:34 1:00:26 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/progress.yaml} 2
pass 4504931 2019-11-13 18:13:34 2019-11-14 15:30:23 2019-11-14 16:56:23 1:26:00 0:28:17 0:57:43 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/prometheus.yaml} 2
pass 4504932 2019-11-13 18:13:35 2019-11-14 15:31:58 2019-11-14 19:44:02 4:12:04 2:20:34 1:51:30 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{rhel_7.yaml} tasks/ssh_orchestrator.yaml} 2
fail 4504933 2019-11-13 18:13:36 2019-11-14 15:36:05 2019-11-14 18:22:07 2:46:02 2:12:20 0:33:42 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{rhel_7.yaml} tasks/workunits.yaml} 2
Failure Reason:

Ansible disk-preparation failure while zapping GPT labels on the test node. The task loop `sgdisk --zap-all <dev> || sgdisk --zap-all <dev>` succeeded on /dev/sdd, /dev/sde, /dev/sdf, /dev/sdb, and /dev/sdc (Hitachi HUA722010CLA330 and Seagate ST31000528AS disks behind an Areca ARC-1680 RAID controller; /dev/sda was skipped by a conditional), but failed on the multipath device /dev/dm-0 (mpatha) with rc 2:

Problem opening /dev/dm-0 for reading! Error is 2.
The specified file does not exist!
Problem opening '' for writing! Program will now terminate.
Warning! MBR not overwritten! Error is 2!

While logging this failure, the teuthology callback plugin itself crashed: ceph-cm-ansible's callback_plugins/failure_log.py, line 44, calls log.error(yaml.safe_dump(failure)), and yaml.safe_dump cannot serialize the unicode-subclass values inside the Ansible result dict:

RepresenterError: ('cannot represent an object', u'sdd')
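The RepresenterError at the end of that traceback is a known failure mode of yaml.safe_dump: SafeDumper only registers representers for plain built-in types, so a str/unicode subclass (such as Ansible's AnsibleUnsafeText, which wraps values like u'sdd') falls through to represent_undefined and raises. A minimal sketch reproducing it, assuming PyYAML is installed; UnsafeText here is a hypothetical stand-in for Ansible's wrapper class:

```python
import yaml

class UnsafeText(str):
    """Stand-in for Ansible's AnsibleUnsafeText: a plain str subclass."""
    pass

try:
    # safe_dump looks up the exact type in its representer table; a str
    # subclass is not there, so SafeDumper refuses to serialize it.
    yaml.safe_dump({"device": UnsafeText("sdd")})
    raised = False
except yaml.representer.RepresenterError:
    raised = True

print(raised)  # -> True
```

Dumping with the non-safe `yaml.dump`, or coercing values to `str(...)` before dumping, avoids the crash; the callback plugin does neither, so the real Ansible error gets masked by the logging failure.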

fail 4504934 2019-11-13 18:13:37 2019-11-14 15:45:49 2019-11-14 17:13:49 1:28:00 0:09:15 1:18:45 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/crash.yaml} 2
Failure Reason:

Test failure: test_info (tasks.mgr.test_crash.TestCrash)

pass 4504935 2019-11-13 18:13:38 2019-11-14 15:52:02 2019-11-14 17:14:02 1:22:00 0:13:57 1:08:03 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/failover.yaml} 2
fail 4504936 2019-11-13 18:13:39 2019-11-14 15:55:50 2019-11-14 17:27:51 1:32:01 0:19:34 1:12:27 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/insights.yaml} 2
Failure Reason:

Test failure: test_crash_history (tasks.mgr.test_insights.TestInsights)

fail 4504937 2019-11-13 18:13:40 2019-11-14 15:57:53 2019-11-14 16:29:53 0:32:00 0:22:15 0:09:45 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{rhel_7.yaml} tasks/module_selftest.yaml} 2
Failure Reason:

Test failure: test_diskprediction_local (tasks.mgr.test_module_selftest.TestModuleSelftest)

fail 4504938 2019-11-13 18:13:41 2019-11-14 16:00:06 2019-11-14 16:56:05 0:55:59 0:16:17 0:39:42 mira master centos 7.6 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{centos_7.yaml} tasks/orchestrator_cli.yaml} 2
Failure Reason:

Test failure: test_mds_add (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli)

pass 4504939 2019-11-13 18:13:42 2019-11-14 16:00:08 2019-11-14 19:30:11 3:30:03 2:26:20 1:03:43 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-avl.yaml supported-random-distro$/{rhel_7.yaml} tasks/progress.yaml} 2
pass 4504940 2019-11-13 18:13:43 2019-11-14 16:12:10 2019-11-14 17:06:10 0:54:00 0:20:18 0:33:42 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/prometheus.yaml} 2
pass 4504941 2019-11-13 18:13:44 2019-11-14 16:12:10 2019-11-14 17:48:11 1:36:01 0:27:47 1:08:14 mira master ubuntu 18.04 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/ssh_orchestrator.yaml} 2
pass 4504942 2019-11-13 18:13:45 2019-11-14 16:24:02 2019-11-14 19:18:04 2:54:02 2:21:33 0:32:29 mira master rhel 7.7 rados:mgr/{clusters/{2-node-mgr.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/workunits.yaml} 2