User | Scheduled | Started | Updated | Runtime | Suite | Branch | Machine Type | Revision | Pass | Fail | Dead |
---|---|---|---|---|---|---|---|---|---|---|---|
kchai | 2019-07-21 12:10:46 | 2019-07-21 12:11:05 | 2019-07-22 01:49:41 | 13:38:36 | rados | wip-ceph-mutex-kefu | mira | 17037e5 | 114 | 25 | 12 |
Status | Job ID | Posted | Started | Updated | Runtime | Duration | In Waiting | Machine | Teuthology Branch | OS Type | OS Version | Description | Nodes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pass | 4136152 | 2019-07-21 12:11:01 | 2019-07-21 12:11:05 | 2019-07-21 12:35:04 | 0:23:59 | 0:15:35 | 0:08:24 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_striper.yaml} | 2 | |
pass | 4136153 | 2019-07-21 12:11:02 | 2019-07-21 12:11:05 | 2019-07-21 12:41:04 | 0:29:59 | 0:20:42 | 0:09:17 | mira | master | rhel | 7.6 | rados/multimon/{clusters/9.yaml msgr-failures/few.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/mon_clock_no_skews.yaml} | 3 | |
fail | 4136154 | 2019-07-21 12:11:03 | 2019-07-21 12:11:07 | 2019-07-21 14:57:08 | 2:46:01 | 2:26:49 | 0:19:12 | mira | master | centos | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml tasks/mon_recovery.yaml validater/valgrind.yaml} | 2 |
Failure Reason: saw valgrind issues
fail | 4136155 | 2019-07-21 12:11:04 | 2019-07-21 12:11:07 | 2019-07-21 12:33:05 | 0:21:58 | 0:14:59 | 0:06:59 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/ssh_orchestrator.yaml} | 2 | |
Failure Reason: Timed out waiting for MDS daemons to become healthy
pass | 4136156 | 2019-07-21 12:11:05 | 2019-07-21 12:11:09 | 2019-07-21 12:35:06 | 0:23:57 | 0:18:23 | 0:05:34 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_omap_write.yaml} | 1 | |
pass | 4136157 | 2019-07-21 12:11:06 | 2019-07-21 12:11:09 | 2019-07-21 14:55:09 | 2:44:00 | 2:22:58 | 0:21:02 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=3-m=1.yaml} | 2 | |
pass | 4136158 | 2019-07-21 12:11:07 | 2019-07-21 12:11:10 | 2019-07-21 12:43:08 | 0:31:58 | 0:24:44 | 0:07:14 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
pass | 4136159 | 2019-07-21 12:11:08 | 2019-07-21 12:11:10 | 2019-07-21 13:27:10 | 1:16:00 | 1:09:21 | 0:06:39 | mira | master | ubuntu | 18.04 | rados/singleton/{all/osd-backfill.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4136160 | 2019-07-21 12:11:09 | 2019-07-21 12:11:11 | 2019-07-21 13:13:15 | 1:02:04 | 0:55:39 | 0:06:25 | mira | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/scrub.yaml} | 1 | |
pass | 4136161 | 2019-07-21 12:11:10 | 2019-07-21 12:11:12 | 2019-07-21 12:35:11 | 0:23:59 | 0:18:03 | 0:05:56 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4136162 | 2019-07-21 12:11:11 | 2019-07-21 12:33:21 | 2019-07-21 13:11:20 | 0:37:59 | 0:30:18 | 0:07:41 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
pass | 4136163 | 2019-07-21 12:11:12 | 2019-07-21 12:35:06 | 2019-07-21 13:47:07 | 1:12:01 | 1:03:49 | 0:08:12 | mira | master | rhel | 7.6 | rados/singleton/{all/osd-recovery-incomplete.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4136164 | 2019-07-21 12:11:13 | 2019-07-21 12:35:08 | 2019-07-21 13:01:07 | 0:25:59 | 0:16:55 | 0:09:04 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/sample_fio.yaml} | 1 | |
dead | 4136165 | 2019-07-21 12:11:14 | 2019-07-21 12:35:12 | 2019-07-22 00:37:39 | 12:02:27 | | | mira | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/recovery-unfound-found.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 |
pass | 4136166 | 2019-07-21 12:11:15 | 2019-07-21 12:41:27 | 2019-07-21 13:17:26 | 0:35:59 | 0:26:44 | 0:09:15 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/luminous-v1only.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
pass | 4136167 | 2019-07-21 12:11:16 | 2019-07-21 12:43:10 | 2019-07-21 13:13:09 | 0:29:59 | 0:20:24 | 0:09:35 | mira | master | centos | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/force-sync-many.yaml workloads/rados_api_tests.yaml} | 2 | |
pass | 4136168 | 2019-07-21 12:11:17 | 2019-07-21 13:01:10 | 2019-07-21 13:39:10 | 0:38:00 | 0:31:39 | 0:06:21 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_big.yaml} | 2 | |
pass | 4136169 | 2019-07-21 12:11:17 | 2019-07-21 13:11:23 | 2019-07-21 13:51:22 | 0:39:59 | 0:31:04 | 0:08:55 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
pass | 4136170 | 2019-07-21 12:11:18 | 2019-07-21 13:13:25 | 2019-07-21 13:33:24 | 0:19:59 | 0:13:28 | 0:06:31 | mira | master | ubuntu | 18.04 | rados/singleton/{all/osd-recovery.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
fail | 4136171 | 2019-07-21 12:11:19 | 2019-07-21 13:13:25 | 2019-07-21 14:01:25 | 0:48:00 | 0:39:04 | 0:08:56 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/workunits.yaml} | 2 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 4136172 | 2019-07-21 12:11:20 | 2019-07-21 13:17:41 | 2019-07-21 16:35:44 | 3:18:03 | 3:02:01 | 0:16:02 | mira | master | centos | 7.6 | rados/objectstore/{backends/objectstore.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4136173 | 2019-07-21 12:11:21 | 2019-07-21 13:27:12 | 2019-07-21 13:41:11 | 0:13:59 | 0:06:42 | 0:07:17 | mira | master | ubuntu | 18.04 | rados/singleton/{all/peer.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4136174 | 2019-07-21 12:11:22 | 2019-07-21 13:33:39 | 2019-07-21 16:09:45 | 2:36:06 | 2:18:42 | 0:17:24 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
pass | 4136175 | 2019-07-21 12:11:23 | 2019-07-21 13:39:12 | 2019-07-21 14:13:11 | 0:33:59 | 0:22:35 | 0:11:24 | mira | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
pass | 4136176 | 2019-07-21 12:11:24 | 2019-07-21 13:41:13 | 2019-07-21 14:11:12 | 0:29:59 | 0:22:05 | 0:07:54 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |
pass | 4136177 | 2019-07-21 12:11:25 | 2019-07-21 13:47:08 | 2019-07-21 14:03:07 | 0:15:59 | 0:08:45 | 0:07:14 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/sample_radosbench.yaml} | 1 | |
fail | 4136178 | 2019-07-21 12:11:26 | 2019-07-21 13:51:25 | 2019-07-21 14:37:24 | 0:45:59 | 0:38:38 | 0:07:21 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/version-number-sanity.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 4136179 | 2019-07-21 12:11:27 | 2019-07-21 14:01:40 | 2019-07-21 15:25:40 | 1:24:00 | 1:17:30 | 0:06:30 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-radosbench.yaml} | 2 | |
pass | 4136180 | 2019-07-21 12:11:27 | 2019-07-21 14:03:22 | 2019-07-21 14:21:21 | 0:17:59 | 0:10:59 | 0:07:00 | mira | master | ubuntu | 18.04 | rados/singleton/{all/pg-autoscaler.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
pass | 4136181 | 2019-07-21 12:11:28 | 2019-07-21 14:11:14 | 2019-07-21 14:49:14 | 0:38:00 | 0:27:45 | 0:10:15 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
fail | 4136182 | 2019-07-21 12:11:29 | 2019-07-21 14:13:13 | 2019-07-21 14:41:13 | 0:28:00 | 0:20:23 | 0:07:37 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_workunit_loadgen_mix.yaml} | 2 | |
Failure Reason: "2019-07-21T14:26:32.205685+0000 mon.b (mon.0) 249 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
pass | 4136183 | 2019-07-21 12:11:30 | 2019-07-21 14:21:23 | 2019-07-21 14:51:22 | 0:29:59 | 0:19:00 | 0:10:59 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/luminous.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
pass | 4136184 | 2019-07-21 12:11:31 | 2019-07-21 14:37:38 | 2019-07-21 14:57:37 | 0:19:59 | 0:12:42 | 0:07:17 | mira | master | centos | 7.6 | rados/singleton/{all/pg-removal-interruption.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4136185 | 2019-07-21 12:11:32 | 2019-07-21 14:41:27 | 2019-07-21 17:03:28 | 2:22:01 | 2:03:54 | 0:18:07 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/crash.yaml} | 2 | |
Failure Reason: ansible disk preparation on the target node failed (log truncated at the start). sgdisk --zap-all succeeded on the individual data disks (sda, which holds a partition, was skipped), but the multipath device failed with rc 2: `sgdisk --zap-all /dev/dm-0 || sgdisk --zap-all /dev/dm-0` reported "Problem opening /dev/dm-0 for reading! Error is 2. The specified file does not exist! ... Warning! MBR not overwritten!". While logging that failure, teuthology's failure_log.py callback (line 44, `log.error(yaml.safe_dump(failure))`) itself crashed in PyYAML's representer.py (line 251, represent_undefined) with: RepresenterError: ('cannot represent an object', u'sdd')
pass | 4136186 | 2019-07-21 12:11:33 | 2019-07-21 14:49:15 | 2019-07-21 15:11:15 | 0:22:00 | 0:15:50 | 0:06:10 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/cosbench_64K_read_write.yaml} | 1 | |
pass | 4136187 | 2019-07-21 12:11:34 | 2019-07-21 14:51:24 | 2019-07-21 17:25:26 | 2:34:02 | 2:16:12 | 0:17:50 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 4136188 | 2019-07-21 12:11:35 | 2019-07-21 14:55:11 | 2019-07-21 15:33:11 | 0:38:00 | 0:28:02 | 0:09:58 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4136189 | 2019-07-21 12:11:36 | 2019-07-21 14:57:10 | 2019-07-21 15:15:09 | 0:17:59 | 0:11:13 | 0:06:46 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/pool-snaps-few-objects.yaml} | 2 | |
pass | 4136190 | 2019-07-21 12:11:37 | 2019-07-21 14:57:39 | 2019-07-21 15:09:38 | 0:11:59 | 0:06:01 | 0:05:58 | mira | master | ubuntu | 18.04 | rados/multimon/{clusters/21.yaml msgr-failures/many.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
pass | 4136191 | 2019-07-21 12:11:37 | 2019-07-21 15:09:40 | 2019-07-21 15:29:39 | 0:19:59 | 0:13:11 | 0:06:48 | mira | master | | | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml tasks/rados_api_tests.yaml validater/lockdep.yaml} | 2 |
fail | 4136192 | 2019-07-21 12:11:38 | 2019-07-21 15:11:28 | 2019-07-21 17:33:30 | 2:22:02 | 2:04:44 | 0:17:18 | mira | master | rhel | 7.6 | rados/singleton/{all/radostool.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason: same ansible disk-preparation failure as job 4136185 (log truncated at the start). sgdisk --zap-all succeeded on the individual data disks (sda, which holds a partition, was skipped), but the multipath device failed with rc 2: `sgdisk --zap-all /dev/dm-0 || sgdisk --zap-all /dev/dm-0` reported "Problem opening /dev/dm-0 for reading! Error is 2. The specified file does not exist! ... Warning! MBR not overwritten!". Teuthology's failure_log.py callback then crashed in yaml.safe_dump while logging the result: RepresenterError: ('cannot represent an object', u'sdd')
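The traceback above comes from teuthology's Ansible failure-logging callback (failure_log.py): it tries to yaml.safe_dump the failure dict Ansible hands back, but one nested value (the device name u'sdd') is not a plain Python type, so PyYAML's SafeDumper falls through to represent_undefined and raises. A minimal sketch of that failure mode, assuming the offending value was a str/unicode subclass such as Ansible's AnsibleUnsafeText (the UnsafeText class below is a hypothetical stand-in):

```python
import yaml

# Hypothetical stand-in for a str subclass like Ansible's AnsibleUnsafeText.
# SafeDumper looks its representers up by exact type, so a subclass of str
# falls through to represent_undefined and raises, even though a plain str
# with the same value would dump fine.
class UnsafeText(str):
    pass

try:
    yaml.safe_dump({'device': UnsafeText('sdd')})
except yaml.representer.RepresenterError as exc:
    print(exc)  # e.g. ('cannot represent an object', 'sdd')
```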
pass | 4136193 | 2019-07-21 12:11:39 | 2019-07-21 15:15:23 | 2019-07-21 15:53:23 | 0:38:00 | 0:31:08 | 0:06:52 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/rados_api_tests.yaml} | 2 | |
pass | 4136194 | 2019-07-21 12:11:40 | 2019-07-21 15:25:42 | 2019-07-21 15:47:41 | 0:21:59 | 0:14:30 | 0:07:29 | mira | master | centos | 7.6 | rados/objectstore/{backends/alloc-hint.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4136195 | 2019-07-21 12:11:41 | 2019-07-21 15:29:41 | 2019-07-21 16:15:41 | 0:46:00 | 0:38:05 | 0:07:55 | mira | master | rhel | 7.6 | rados/rest/{mgr-restful.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
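This 'wait_until_healthy' message recurs throughout the run: teuthology polls the cluster's health until it reports healthy, and 150 tries at a 6-second interval is exactly the 900 seconds in the message. A minimal sketch of that kind of polling loop, not teuthology's actual code; `is_healthy` is a hypothetical stand-in for the real `ceph health` check:

```python
import time

def wait_until_healthy(is_healthy, tries=150, delay=6):
    # 150 tries x 6 s/try = 900 s, matching the log line above.
    for attempt in range(1, tries + 1):
        if is_healthy():
            return attempt
        time.sleep(delay)
    raise RuntimeError(
        "'wait_until_healthy' reached maximum tries (%d) "
        "after waiting for %d seconds" % (tries, tries * delay))
```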
fail | 4136196 | 2019-07-21 12:11:42 | 2019-07-21 15:33:25 | 2019-07-21 18:49:27 | 3:16:02 | 3:08:18 | 0:07:44 | mira | master | ubuntu | 18.04 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on mira088 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=17037e5974de4e0c87994d93135727a1072d2b9e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
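Status 124 here is not a test assertion failure: it is GNU timeout's exit code for a command that was still running when its deadline expired, so the cephtool workunit simply overran the `timeout 3h` wrapper visible in the command line. A quick way to confirm that convention (assumes GNU coreutils' timeout is on PATH):

```python
import subprocess

# GNU timeout exits with status 124 when the wrapped command is still running
# at the deadline; here a 10 s sleep is cut off after 2 s.
rc = subprocess.call(['timeout', '2s', 'sleep', '10'])
print(rc)  # prints 124
```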
fail | 4136197 | 2019-07-21 12:11:43 | 2019-07-21 15:47:43 | 2019-07-21 16:17:42 | 0:29:59 | 0:21:56 | 0:08:03 | mira | master | centos | rados/singleton-flat/valgrind-leaks.yaml | 1 | ||
Failure Reason:
Command failed on mira105 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
fail | 4136198 | 2019-07-21 12:11:43 | 2019-07-21 15:53:25 | 2019-07-21 16:33:25 | 0:40:00 | 0:31:49 | 0:08:11 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/admin_socket_output.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 4136199 | 2019-07-21 12:11:44 | 2019-07-21 16:09:48 | 2019-07-21 16:33:47 | 0:23:59 | 0:16:45 | 0:07:14 | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/crush.yaml} | 1 | |
fail | 4136200 | 2019-07-21 12:11:45 | 2019-07-21 16:15:55 | 2019-07-21 21:33:59 | 5:18:04 | 4:56:04 | 0:22:00 | mira | master | rhel | 7.6 | rados/upgrade/mimic-x-singleton/{0-cluster/{openstack.yaml start.yaml} 1-install/mimic.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 5-workload/{radosbench.yaml rbd_api.yaml} 6-finish-upgrade.yaml 7-nautilus.yaml 8-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} thrashosds-health.yaml} | 4 | |
Failure Reason:
Timed out waiting for MDS daemons to become healthy
pass | 4136201 | 2019-07-21 12:11:46 | 2019-07-21 16:17:44 | 2019-07-21 19:31:46 | 3:14:02 | 2:54:30 | 0:19:32 | mira | master | centos | 7.6 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/many.yaml workloads/rados_mon_osdmap_prune.yaml} | 2 | |
pass | 4136202 | 2019-07-21 12:11:47 | 2019-07-21 16:33:41 | 2019-07-21 17:11:41 | 0:38:00 | 0:32:17 | 0:05:43 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/cosbench_64K_write.yaml} | 1 | |
pass | 4136203 | 2019-07-21 12:11:48 | 2019-07-21 16:33:49 | 2019-07-21 17:11:48 | 0:37:59 | 0:31:01 | 0:06:58 | mira | master | rhel | 7.6 | rados/singleton/{all/random-eio.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 2 | |
pass | 4136204 | 2019-07-21 12:11:49 | 2019-07-21 16:35:46 | 2019-07-21 17:23:45 | 0:47:59 | 0:39:54 | 0:08:05 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_workunit_loadgen_mostlyread.yaml} | 2 | |
fail | 4136205 | 2019-07-21 12:11:50 | 2019-07-21 17:03:32 | 2019-07-21 18:35:33 | 1:32:01 | 1:24:06 | 0:07:55 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 2 | |
Failure Reason:
reached maximum tries (500) after waiting for 3000 seconds
pass | 4136206 | 2019-07-21 12:11:51 | 2019-07-21 17:11:42 | 2019-07-21 17:45:42 | 0:34:00 | 0:24:52 | 0:09:08 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/mimic-v1only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |
dead | 4136207 | 2019-07-21 12:11:52 | 2019-07-21 17:11:49 | 2019-07-21 17:33:49 | 0:22:00 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/dashboard.yaml} | — | |||
Failure Reason:
reached maximum tries (100) after waiting for 600 seconds
pass | 4136208 | 2019-07-21 12:11:53 | 2019-07-21 17:23:47 | 2019-07-21 17:49:46 | 0:25:59 | 0:16:51 | 0:09:08 | mira | master | centos | 7.6 | rados/singleton/{all/rebuild-mondb.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4136209 | 2019-07-21 12:11:54 | 2019-07-21 17:25:28 | 2019-07-21 19:53:29 | 2:28:01 | 2:08:56 | 0:19:05 | mira | master | centos | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
fail | 4136210 | 2019-07-21 12:11:54 | 2019-07-21 17:33:38 | 2019-07-21 18:15:37 | 0:41:59 | 0:33:00 | 0:08:59 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-overwrites.yaml} | 2 | |
Failure Reason:
not clean before minsize thrashing starts
pass | 4136211 | 2019-07-21 12:11:55 | 2019-07-21 17:33:50 | 2019-07-21 18:13:50 | 0:40:00 | 0:33:26 | 0:06:34 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/redirect.yaml} | 2 | |
pass | 4136212 | 2019-07-21 12:11:56 | 2019-07-21 17:45:44 | 2019-07-21 18:29:44 | 0:44:00 | 0:36:12 | 0:07:48 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/fastread.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read.yaml} | 2 | |
fail | 4136213 | 2019-07-21 12:11:57 | 2019-07-21 17:49:48 | 2019-07-21 18:33:48 | 0:44:00 | 0:38:24 | 0:05:36 | mira | master | rhel | 7.6 | rados/singleton-nomsgr/{all/cache-fs-trunc.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 4136214 | 2019-07-21 12:11:58 | 2019-07-21 18:14:07 | 2019-07-21 18:42:06 | 0:27:59 | 0:21:23 | 0:06:36 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4K_rand_read.yaml} | 1 | |
pass | 4136215 | 2019-07-21 12:11:59 | 2019-07-21 18:15:39 | 2019-07-21 19:11:39 | 0:56:00 | 0:50:32 | 0:05:28 | mira | master | rhel | 7.6 | rados/singleton/{all/recovery-preemption.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4136216 | 2019-07-21 12:12:00 | 2019-07-21 18:29:46 | 2019-07-21 19:01:45 | 0:31:59 | 0:23:37 | 0:08:22 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/redirect_promote_tests.yaml} | 2 | |
fail | 4136217 | 2019-07-21 12:12:01 | 2019-07-21 18:34:02 | 2019-07-21 18:46:01 | 0:11:59 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |||
Failure Reason:
Command failed on mira115 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y install linux-image-generic'
pass | 4136218 | 2019-07-21 12:12:02 | 2019-07-21 18:35:34 | 2019-07-21 19:09:34 | 0:34:00 | 0:23:25 | 0:10:35 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/readwrite.yaml} | 2 | |
pass | 4136219 | 2019-07-21 12:12:03 | 2019-07-21 18:42:21 | 2019-07-21 18:56:20 | 0:13:59 | 0:07:17 | 0:06:42 | mira | master | ubuntu | 18.04 | rados/singleton/{all/resolve_stuck_peering.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
pass | 4136220 | 2019-07-21 12:12:04 | 2019-07-21 18:46:03 | 2019-07-21 19:08:03 | 0:22:00 | 0:11:35 | 0:10:25 | mira | master | ubuntu | 18.04 | rados/objectstore/{backends/ceph_objectstore_tool.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4136221 | 2019-07-21 12:12:05 | 2019-07-21 18:49:31 | 2019-07-21 19:17:30 | 0:27:59 | 0:20:43 | 0:07:16 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/redirect_set_object.yaml} | 2 | |
pass | 4136222 | 2019-07-21 12:12:06 | 2019-07-21 18:56:35 | 2019-07-21 20:06:35 | 1:10:00 | 0:59:56 | 0:10:04 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/mimic.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/morepggrow.yaml thrashosds-health.yaml workloads/radosbench.yaml} | 4 | |
fail | 4136223 | 2019-07-21 12:12:06 | 2019-07-21 19:01:47 | 2019-07-21 19:47:46 | 0:45:59 | 0:39:11 | 0:06:48 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-low-osd-mem-target.yaml supported-random-distro$/{rhel_7.yaml} tasks/failover.yaml} | 2 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 4136224 | 2019-07-21 12:12:07 | 2019-07-21 19:08:17 | 2019-07-21 19:38:16 | 0:29:59 | 0:22:01 | 0:07:58 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4K_rand_rw.yaml} | 1 | |
pass | 4136225 | 2019-07-21 12:12:08 | 2019-07-21 19:09:48 | 2019-07-21 19:23:47 | 0:13:59 | 0:07:31 | 0:06:28 | mira | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/ceph-kvstore-tool.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4136226 | 2019-07-21 12:12:09 | 2019-07-21 19:11:41 | 2019-07-21 21:51:43 | 2:40:02 | 2:17:24 | 0:22:38 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 4136227 | 2019-07-21 12:12:10 | 2019-07-21 19:17:45 | 2019-07-21 19:31:44 | 0:13:59 | 0:08:03 | 0:05:56 | mira | master | ubuntu | 18.04 | rados/singleton/{all/test-crash.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4136228 | 2019-07-21 12:12:11 | 2019-07-21 19:23:49 | 2019-07-21 19:39:48 | 0:15:59 | 0:08:31 | 0:07:28 | mira | master | ubuntu | 18.04 | rados/multimon/{clusters/3.yaml msgr-failures/few.yaml msgr/async.yaml no_pools.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_recovery.yaml} | 2 | |
fail | 4136229 | 2019-07-21 12:12:12 | 2019-07-21 19:31:46 | 2019-07-21 20:25:46 | 0:54:00 | 0:43:28 | 0:10:32 | mira | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml tasks/rados_cls_all.yaml validater/valgrind.yaml} | 2 | ||
Failure Reason:
saw valgrind issues
pass | 4136230 | 2019-07-21 12:12:13 | 2019-07-21 19:32:00 | 2019-07-21 19:57:59 | 0:25:59 | 0:18:36 | 0:07:23 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/set-chunks-read.yaml} | 2 | |
pass | 4136231 | 2019-07-21 12:12:14 | 2019-07-21 19:38:18 | 2019-07-21 20:28:18 | 0:50:00 | 0:40:35 | 0:09:25 | mira | master | rhel | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/one.yaml workloads/rados_mon_workunits.yaml} | 2 | |
pass | 4136232 | 2019-07-21 12:12:15 | 2019-07-21 19:39:50 | 2019-07-21 20:03:49 | 0:23:59 | 0:17:44 | 0:06:15 | mira | master | ubuntu | 18.04 | rados/singleton/{all/test_envlibrados_for_rocksdb.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4136233 | 2019-07-21 12:12:16 | 2019-07-21 19:48:01 | 2019-07-21 20:38:00 | 0:49:59 | 0:43:53 | 0:06:06 | mira | master | centos | 7.6 | rados/standalone/{supported-random-distro$/{centos_7.yaml} workloads/erasure-code.yaml} | 1 | |
pass | 4136234 | 2019-07-21 12:12:18 | 2019-07-21 19:53:31 | 2019-07-21 20:31:30 | 0:37:59 | 0:31:04 | 0:06:55 | mira | master | rhel | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} tasks/repair_test.yaml} | 2 | |
pass | 4136235 | 2019-07-21 12:12:19 | 2019-07-21 19:58:01 | 2019-07-21 20:22:00 | 0:23:59 | 0:14:30 | 0:09:29 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/fio_4M_rand_read.yaml} | 1 | |
pass | 4136236 | 2019-07-21 12:12:20 | 2019-07-21 20:03:51 | 2019-07-21 20:41:50 | 0:37:59 | 0:25:52 | 0:12:07 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/small-objects.yaml} | 2 | |
fail | 4136237 | 2019-07-21 12:12:21 | 2019-07-21 20:06:37 | 2019-07-21 20:30:36 | 0:23:59 | 0:17:51 | 0:06:08 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/minsize_recovery.yaml thrashosds-health.yaml workloads/ec-small-objects-many-deletes.yaml} | 2 | |
Failure Reason:
not clean before minsize thrashing starts
pass | 4136238 | 2019-07-21 12:12:22 | 2019-07-21 20:22:02 | 2019-07-21 20:46:01 | 0:23:59 | 0:16:36 | 0:07:23 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |
pass | 4136239 | 2019-07-21 12:12:23 | 2019-07-21 20:25:47 | 2019-07-21 23:05:49 | 2:40:02 | 2:22:04 | 0:17:58 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/few.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-snaps-few-objects-overwrites.yaml} | 2 | |
pass | 4136240 | 2019-07-21 12:12:24 | 2019-07-21 20:28:33 | 2019-07-21 21:28:33 | 1:00:00 | 0:48:35 | 0:11:25 | mira | master | ubuntu | 18.04 | rados/singleton/{all/thrash-backfill-full.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
pass | 4136241 | 2019-07-21 12:12:25 | 2019-07-21 20:30:38 | 2019-07-21 20:42:37 | 0:11:59 | 0:06:13 | 0:05:46 | mira | master | ubuntu | 18.04 | rados/singleton-nomsgr/{all/ceph-post-file.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4136242 | 2019-07-21 12:12:26 | 2019-07-21 20:31:45 | 2019-07-21 21:01:44 | 0:29:59 | 0:19:36 | 0:10:23 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus-v1only.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/none.yaml thrashosds-health.yaml workloads/rbd_cls.yaml} | 4 | |
fail | 4136243 | 2019-07-21 12:12:28 | 2019-07-21 20:38:02 | 2019-07-21 21:10:02 | 0:32:00 | 0:24:33 | 0:07:27 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-stupid.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/insights.yaml} | 2 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 4136244 | 2019-07-21 12:12:28 | 2019-07-21 20:41:52 | 2019-07-21 21:21:51 | 0:39:59 | 0:31:41 | 0:08:18 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 2 | |
pass | 4136245 | 2019-07-21 12:12:30 | 2019-07-21 20:42:50 | 2019-07-21 21:12:50 | 0:30:00 | 0:23:38 | 0:06:22 | mira | master | ubuntu | 18.04 | rados/singleton/{all/thrash-eio.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 2 | |
pass | 4136246 | 2019-07-21 12:12:31 | 2019-07-21 20:46:15 | 2019-07-21 20:58:14 | 0:11:59 | 0:05:40 | 0:06:19 | mira | master | ubuntu | 18.04 | rados/objectstore/{backends/filejournal.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4136247 | 2019-07-21 12:12:32 | 2019-07-21 20:58:27 | 2019-07-21 23:32:29 | 2:34:02 | 2:15:03 | 0:18:59 | mira | master | rhel | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-bitmap.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{rhel_7.yaml} workloads/fio_4M_rand_rw.yaml} | 1 | |
pass | 4136248 | 2019-07-21 12:12:33 | 2019-07-21 21:01:46 | 2019-07-21 21:19:45 | 0:17:59 | 0:11:41 | 0:06:18 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4136249 | 2019-07-21 12:12:34 | 2019-07-21 21:10:15 | 2019-07-21 21:48:15 | 0:38:00 | 0:30:18 | 0:07:42 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/write_fadvise_dontneed.yaml} | 2 | |
pass | 4136250 | 2019-07-21 12:12:35 | 2019-07-21 21:12:51 | 2019-07-21 23:48:53 | 2:36:02 | 2:18:11 | 0:17:51 | mira | master | centos | 7.6 | rados/singleton/{all/thrash-rados/{thrash-rados.yaml thrashosds-health.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
fail | 4136251 | 2019-07-21 12:12:36 | 2019-07-21 21:19:47 | 2019-07-22 00:47:50 | 3:28:03 | 3:22:02 | 0:06:01 | mira | master | rhel | 7.6 | rados/singleton-bluestore/{all/cephtool.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
Failure Reason:
Command failed (workunit test cephtool/test.sh) on mira107 with status 124: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=17037e5974de4e0c87994d93135727a1072d2b9e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephtool/test.sh'
pass | 4136252 | 2019-07-21 12:12:37 | 2019-07-21 21:21:53 | 2019-07-21 21:55:53 | 0:34:00 | 0:22:05 | 0:11:55 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rgw_snaps.yaml} | 2 | |
pass | 4136253 | 2019-07-21 12:12:38 | 2019-07-21 21:28:36 | 2019-07-21 21:48:35 | 0:19:59 | 0:12:16 | 0:07:43 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/export-after-evict.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4136254 | 2019-07-21 12:12:39 | 2019-07-21 21:34:01 | 2019-07-21 21:54:00 | 0:19:59 | 0:13:36 | 0:06:23 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/admin_socket_objecter_requests.yaml} | 2 | |
pass | 4136255 | 2019-07-21 12:12:40 | 2019-07-21 21:48:29 | 2019-07-21 22:24:29 | 0:36:00 | 0:28:52 | 0:07:08 | mira | master | centos | 7.6 | rados/singleton/{all/thrash_cache_writeback_proxy_none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 2 | |
fail | 4136256 | 2019-07-21 12:12:41 | 2019-07-21 21:48:36 | 2019-07-21 22:20:35 | 0:31:59 | 0:24:33 | 0:07:26 | mira | master | ubuntu | 18.04 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/filestore-xfs.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/module_selftest.yaml} | 2 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 4136257 | 2019-07-21 12:12:41 | 2019-07-21 21:51:57 | 2019-07-21 22:05:56 | 0:13:59 | 0:08:04 | 0:05:55 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/bluestore-comp.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/fio_4M_rand_write.yaml} | 1 | |
pass | 4136258 | 2019-07-21 12:12:42 | 2019-07-21 21:54:02 | 2019-07-22 00:34:04 | 2:40:02 | 2:17:32 | 0:22:30 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/nautilus-v2only.yaml backoff/peering_and_degraded.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/osd-delay.yaml rados.yaml thrashers/pggrow.yaml thrashosds-health.yaml workloads/snaps-few-objects.yaml} | 4 | |
pass | 4136259 | 2019-07-21 12:12:43 | 2019-07-21 21:55:54 | 2019-07-21 22:23:54 | 0:28:00 | 0:18:00 | 0:10:00 | mira | master | centos | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |
pass | 4136260 | 2019-07-21 12:12:44 | 2019-07-21 22:05:58 | 2019-07-21 22:49:58 | 0:44:00 | 0:37:24 | 0:06:36 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/cache-agent-big.yaml} | 2 | |
pass | 4136261 | 2019-07-21 12:12:45 | 2019-07-21 22:20:50 | 2019-07-21 22:32:49 | 0:11:59 | 0:05:36 | 0:06:23 | mira | master | ubuntu | 18.04 | rados/multimon/{clusters/6.yaml msgr-failures/many.yaml msgr/async-v1only.yaml no_pools.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_no_skews.yaml} | 2 | |
pass | 4136262 | 2019-07-21 12:12:46 | 2019-07-21 22:23:55 | 2019-07-21 22:41:55 | 0:18:00 | 0:10:34 | 0:07:26 | mira | master | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/default/{default.yaml thrashosds-health.yaml} msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-stupid.yaml rados.yaml tasks/mon_recovery.yaml validater/lockdep.yaml} | 2 | |||
pass | 4136263 | 2019-07-21 12:12:47 | 2019-07-21 22:24:30 | 2019-07-21 23:00:29 | 0:35:59 | 0:29:53 | 0:06:06 | mira | master | rhel | 7.6 | rados/monthrash/{ceph.yaml clusters/9-mons.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/sync-many.yaml workloads/snaps-few-objects.yaml} | 2 | |
pass | 4136264 | 2019-07-21 12:12:48 | 2019-07-21 22:33:04 | 2019-07-21 23:03:03 | 0:29:59 | 0:20:29 | 0:09:30 | mira | master | rhel | 7.6 | rados/singleton/{all/watch-notify-same-primary.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4136265 | 2019-07-21 12:12:49 | 2019-07-21 22:42:09 | 2019-07-21 23:22:08 | 0:39:59 | 0:31:34 | 0:08:25 | mira | master | centos | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/few.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{centos_7.yaml} thrashers/morepggrow.yaml thrashosds-health.yaml workloads/ec-small-objects.yaml} | 2 | |
pass | 4136266 | 2019-07-21 12:12:50 | 2019-07-21 22:49:59 | 2019-07-21 23:17:59 | 0:28:00 | 0:17:43 | 0:10:17 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/scrub_test.yaml} | 2 | |
pass | 4136267 | 2019-07-21 12:12:51 | 2019-07-21 23:00:31 | 2019-07-21 23:20:30 | 0:19:59 | 0:08:57 | 0:11:02 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/few.yaml objectstore/bluestore-stupid.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=lrc-k=4-m=2-l=3.yaml} | 3 | |
pass | 4136268 | 2019-07-21 12:12:52 | 2019-07-21 23:03:05 | 2019-07-22 01:37:07 | 2:34:02 | 2:15:07 | 0:18:55 | mira | master | centos | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/fast.yaml msgr-failures/osd-delay.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{centos_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-pool-snaps-few-objects-overwrites.yaml} | 2 | |
pass | 4136269 | 2019-07-21 12:12:53 | 2019-07-21 23:06:04 | 2019-07-21 23:24:03 | 0:17:59 | 0:11:34 | 0:06:25 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/none.yaml thrashosds-health.yaml workloads/cache-agent-small.yaml} | 2 | |
pass | 4136270 | 2019-07-21 12:12:54 | 2019-07-21 23:18:00 | 2019-07-22 00:08:00 | 0:50:00 | 0:44:02 | 0:05:58 | mira | master | ubuntu | 18.04 | rados/objectstore/{backends/filestore-idempotent-aio-journal.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4136271 | 2019-07-21 12:12:55 | 2019-07-21 23:20:32 | 2019-07-21 23:40:31 | 0:19:59 | 0:14:09 | 0:05:50 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/full-tiering.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
fail | 4136272 | 2019-07-21 12:12:56 | 2019-07-21 23:22:10 | 2019-07-21 23:36:09 | 0:13:59 | 0:06:25 | 0:07:34 | mira | master | ubuntu | 18.04 | rados/standalone/{supported-random-distro$/{ubuntu_latest.yaml} workloads/misc.yaml} | 1 | |
Failure Reason:
Command failed (workunit test misc/ok-to-stop.sh) on mira026 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=17037e5974de4e0c87994d93135727a1072d2b9e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/misc/ok-to-stop.sh'
pass | 4136273 | 2019-07-21 12:12:57 | 2019-07-21 23:24:05 | 2019-07-21 23:52:04 | 0:27:59 | 0:20:17 | 0:07:42 | mira | master | rhel | 7.6 | rados/singleton/{all/admin-socket.yaml msgr-failures/few.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4136274 | 2019-07-21 12:12:58 | 2019-07-21 23:32:30 | 2019-07-21 23:58:30 | 0:26:00 | 0:18:27 | 0:07:33 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-low-osd-mem-target.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4K_rand_read.yaml} | 1 | |
pass | 4136275 | 2019-07-21 12:12:59 | 2019-07-21 23:36:11 | 2019-07-22 00:18:10 | 0:41:59 | 0:34:21 | 0:07:38 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/cache-pool-snaps-readproxy.yaml} | 2 | |
pass | 4136276 | 2019-07-21 12:13:00 | 2019-07-21 23:40:46 | 2019-07-22 00:00:45 | 0:19:59 | 0:09:06 | 0:10:53 | mira | master | ubuntu | 18.04 | rados/singleton/{all/deduptool.yaml msgr-failures/many.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
dead | 4136277 | 2019-07-21 12:13:01 | 2019-07-21 23:49:07 | 2019-07-22 01:49:08 | 2:00:01 | mira | master | rhel | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-bitmap.yaml supported-random-distro$/{rhel_7.yaml} tasks/orchestrator_cli.yaml} | 2 | |||
pass | 4136278 | 2019-07-21 12:13:04 | 2019-07-21 23:52:18 | 2019-07-22 00:12:18 | 0:20:00 | 0:13:31 | 0:06:29 | mira | master | ubuntu | 18.04 | rados/thrash-erasure-code-isa/{arch/x86_64.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{ubuntu_latest.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=isa-k=2-m=1.yaml} | 2 | |
pass | 4136279 | 2019-07-21 12:13:05 | 2019-07-21 23:58:44 | 2019-07-22 00:28:43 | 0:29:59 | 0:19:44 | 0:10:15 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-install/nautilus.yaml backoff/normal.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/crush-compat.yaml distro$/{centos_latest.yaml} msgr-failures/fastclose.yaml rados.yaml thrashers/careful.yaml thrashosds-health.yaml workloads/test_rbd_api.yaml} | 4 | |
pass | 4136280 | 2019-07-21 12:13:06 | 2019-07-22 00:01:00 | 2019-07-22 00:28:59 | 0:27:59 | 0:21:26 | 0:06:33 | mira | master | rhel | 7.6 | rados/singleton/{all/divergent_priors.yaml msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4136281 | 2019-07-21 12:13:07 | 2019-07-22 00:08:14 | 2019-07-22 00:36:14 | 0:28:00 | 0:20:10 | 0:07:50 | mira | master | centos | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/normal.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/off.yaml msgr-failures/fastclose.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/cache-pool-snaps.yaml} | 2 | |
fail | 4136282 | 2019-07-21 12:13:08 | 2019-07-22 00:12:32 | 2019-07-22 00:36:31 | 0:23:59 | 0:16:27 | 0:07:32 | mira | master | centos | 7.6 | rados/perf/{ceph.yaml objectstore/bluestore-stupid.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{centos_7.yaml} workloads/radosbench_4K_seq_read.yaml} | 1 | |
Failure Reason:
Found coredumps on ubuntu@mira111.front.sepia.ceph.com
pass | 4136283 | 2019-07-21 12:13:09 | 2019-07-22 00:18:12 | 2019-07-22 00:46:11 | 0:27:59 | 0:16:50 | 0:11:09 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/health-warnings.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
pass | 4136284 | 2019-07-21 12:13:10 | 2019-07-22 00:28:58 | 2019-07-22 00:54:58 | 0:26:00 | 0:15:05 | 0:10:55 | mira | master | ubuntu | 18.04 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/few.yaml msgr/async-v2only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/rados_api_tests.yaml} | 2 | |
pass | 4136285 | 2019-07-21 12:13:11 | 2019-07-22 00:29:01 | 2019-07-22 00:57:00 | 0:27:59 | 0:21:06 | 0:06:53 | mira | master | rhel | 7.6 | rados/singleton/{all/divergent_priors2.yaml msgr-failures/many.yaml msgr/async.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{rhel_7.yaml}} | 1 | |
pass | 4136286 | 2019-07-21 12:13:12 | 2019-07-22 00:34:08 | 2019-07-22 00:58:07 | 0:23:59 | 0:17:34 | 0:06:25 | mira | master | ubuntu | 18.04 | rados/thrash/{0-size-min-size-overrides/3-size-2-min-size.yaml 1-pg-log-overrides/short_pg_log.yaml 2-recovery-overrides/{default.yaml} backoff/peering.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/upmap.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 2 | |
pass | 4136287 | 2019-07-21 12:13:13 | 2019-07-22 00:36:16 | 2019-07-22 01:16:15 | 0:39:59 | 0:31:18 | 0:08:41 | mira | master | rhel | 7.6 | rados/thrash-erasure-code/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/osd-delay.yaml objectstore/bluestore-comp.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/pggrow.yaml thrashosds-health.yaml workloads/ec-rados-plugin=clay-k=4-m=2.yaml} | 2 | |
dead | 4136288 | 2019-07-21 12:13:14 | 2019-07-22 00:36:33 | 2019-07-22 01:48:33 | 1:12:00 | mira | master | centos | 7.6 | rados/objectstore/{backends/filestore-idempotent.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |||
dead | 4136289 | 2019-07-21 12:13:15 | 2019-07-22 00:37:41 | 2019-07-22 01:49:41 | 1:12:00 | mira | master | centos | 7.6 | rados/monthrash/{ceph.yaml clusters/3-mons.yaml msgr-failures/mon-delay.yaml msgr/async-v1only.yaml objectstore/bluestore-low-osd-mem-target.yaml rados.yaml supported-random-distro$/{centos_7.yaml} thrashers/sync.yaml workloads/pool-create-delete.yaml} | 2 | |||
fail | 4136290 | 2019-07-21 12:13:16 | 2019-07-22 00:46:26 | 2019-07-22 01:28:26 | 0:42:00 | 0:33:24 | 0:08:36 | mira | master | centos | 7.6 | rados/mgr/{clusters/{2-node-mgr.yaml openstack.yaml} debug/mgr.yaml objectstore/bluestore-comp.yaml supported-random-distro$/{centos_7.yaml} tasks/progress.yaml} | 2 | |
Failure Reason:
'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds
pass | 4136291 | 2019-07-21 12:13:17 | 2019-07-22 00:48:05 | 2019-07-22 01:04:04 | 0:15:59 | 0:09:07 | 0:06:52 | mira | master | ubuntu | 18.04 | rados/perf/{ceph.yaml objectstore/filestore-xfs.yaml openstack.yaml settings/optimized.yaml supported-random-distro$/{ubuntu_latest.yaml} workloads/radosbench_4M_rand_read.yaml} | 1 | |
pass | 4136292 | 2019-07-21 12:13:18 | 2019-07-22 00:55:14 | 2019-07-22 01:13:13 | 0:17:59 | 0:07:16 | 0:10:43 | mira | master | ubuntu | 18.04 | rados/singleton/{all/dump-stuck.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/bluestore-bitmap.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml}} | 1 | |
pass | 4136293 | 2019-07-21 12:13:19 | 2019-07-22 00:57:04 | 2019-07-22 01:45:04 | 0:48:00 | 0:41:13 | 0:06:47 | mira | master | rhel | 7.6 | rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml 2-recovery-overrides/{more-active-recovery.yaml} backoff/peering_and_degraded.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-balancer/crush-compat.yaml msgr-failures/osd-delay.yaml msgr/async-v2only.yaml objectstore/bluestore-stupid.yaml rados.yaml supported-random-distro$/{rhel_7.yaml} thrashers/mapgap.yaml thrashosds-health.yaml workloads/cache.yaml} | 2 | |
dead | 4136294 | 2019-07-21 12:13:21 | 2019-07-22 00:58:21 | 2019-07-22 01:48:21 | 0:50:00 | mira | master | centos | 7.6 | rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-install/hammer.yaml backoff/peering.yaml ceph.yaml clusters/{openstack.yaml three-plus-one.yaml} d-balancer/off.yaml distro$/{centos_latest.yaml} msgr-failures/few.yaml rados.yaml thrashers/default.yaml thrashosds-health.yaml workloads/cache-snaps.yaml} | 4 | |||
pass | 4136295 | 2019-07-21 12:13:22 | 2019-07-22 01:04:06 | 2019-07-22 01:28:05 | 0:23:59 | 0:14:43 | 0:09:16 | mira | master | centos | 7.6 | rados/singleton-nomsgr/{all/large-omap-object-warnings.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |
dead | 4136296 | 2019-07-21 12:13:22 | 2019-07-22 01:13:28 | 2019-07-22 01:49:27 | 0:35:59 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-big/{ceph.yaml cluster/{12-osds.yaml openstack.yaml} msgr-failures/osd-delay.yaml objectstore/filestore-xfs.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-rados-plugin=jerasure-k=4-m=2.yaml} | 3 | |||
dead | 4136297 | 2019-07-21 12:13:24 | 2019-07-22 01:16:31 | 2019-07-22 01:48:30 | 0:31:59 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-overwrites/{bluestore-bitmap.yaml ceph.yaml clusters/{fixed-2.yaml openstack.yaml} fast/normal.yaml msgr-failures/fastclose.yaml rados.yaml recovery-overrides/{default.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/careful.yaml thrashosds-health.yaml workloads/ec-small-objects-fast-read-overwrites.yaml} | 2 | |||
dead | 4136298 | 2019-07-21 12:13:25 | 2019-07-22 01:28:07 | 2019-07-22 01:48:06 | 0:19:59 | mira | master | rhel | 7.6 | rados/thrash-erasure-code-shec/{ceph.yaml clusters/{fixed-4.yaml openstack.yaml} msgr-failures/fastclose.yaml objectstore/bluestore-bitmap.yaml rados.yaml recovery-overrides/{more-active-recovery.yaml} supported-random-distro$/{rhel_7.yaml} thrashers/default.yaml thrashosds-health.yaml workloads/ec-rados-plugin=shec-k=4-m=3-c=2.yaml} | 4 | |||
pass | 4136299 | 2019-07-21 12:13:26 | 2019-07-22 01:28:38 | 2019-07-22 01:40:37 | 0:11:59 | 0:05:46 | 0:06:13 | mira | master | ubuntu | 18.04 | rados/multimon/{clusters/9.yaml msgr-failures/few.yaml msgr/async-v2only.yaml no_pools.yaml objectstore/filestore-xfs.yaml rados.yaml supported-random-distro$/{ubuntu_latest.yaml} tasks/mon_clock_with_skews.yaml} | 3 | |
dead | 4136300 | 2019-07-21 12:13:26 | 2019-07-22 01:37:17 | 2019-07-22 01:49:21 | 0:12:04 | mira | master | centos | rados/verify/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} d-thrash/none.yaml msgr-failures/few.yaml msgr/async-v1only.yaml objectstore/filestore-xfs.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml} | — | ||||
dead | 4136301 | 2019-07-21 12:13:28 | 2019-07-22 01:40:52 | 2019-07-22 01:48:50 | 0:07:58 | mira | master | centos | 7.6 | rados/singleton/{all/ec-lost-unfound.yaml msgr-failures/many.yaml msgr/async-v2only.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml}} | 1 | |||
dead | 4136302 | 2019-07-21 12:13:28 | 2019-07-22 01:45:05 | 2019-07-22 01:49:04 | 0:03:59 | mira | master | centos | 7.6 | rados/basic/{ceph.yaml clusters/{fixed-2.yaml openstack.yaml} msgr-failures/many.yaml msgr/async.yaml objectstore/bluestore-comp.yaml rados.yaml supported-random-distro$/{centos_7.yaml} tasks/rados_cls_all.yaml} | — |