Commit e44e9742 authored by Jan Caron

Merge branch 'release-0.2.0' into 'master'

Release 0.2.0

Closes #45

See merge request empyre/empyre!30
parents 63dbdeca 461a6c39
Showing with 776 additions and 35 deletions
@@ -38,6 +38,8 @@ By default, only the strictly required libraries are installed, but there are a
* ``colors`` will install the `cmocean <https://matplotlib.org/cmocean/>`_ package, whose ``balance`` color map is used as a default for the ``imshow`` command, if available.
* ``3d`` will install the `mayavi <https://docs.enthought.com/mayavi/mayavi/>`_ package for 3D plotting capabilities.
* ``tests`` will install all dependencies that are needed to test the package (usually not necessary for the average user).
* ``all`` will install all of the dependencies listed above.
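For illustration, installing one of these extras via pip would look like this (a sketch; the package name ``empyre`` follows the ``setup.cfg`` further below):

.. code-block:: bash

   $ pip install "empyre[3d]"    # just the 3D plotting dependencies
   $ pip install "empyre[all]"   # all optional dependencies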
...
@@ -3,6 +3,8 @@ The Field container class
General empyre.fields docu here!
All functions implemented in the subpackages can be accessed directly in the ``empyre.fields`` namespace.
The field module
----------------
...
The io submodule
================
General empyre.io docu here!
The io_field module
-------------------
.. automodule:: empyre.io.io_field
   :members:
   :show-inheritance:
The field_plugins module
------------------------
.. automodule:: empyre.io.field_plugins
   :members:
   :show-inheritance:
@@ -3,9 +3,32 @@ The vis visualization submodule
General empyre.vis docu here!
All functions implemented in the subpackages can be accessed directly in the ``empyre.vis`` namespace, with the
exception of the ``colors`` module, whose methods are accessible via ``empyre.vis.colors``.
Note that the y-axis of all image plots is flipped in comparison to ``matplotlib.pyplot.imshow``, i.e. that the
origin is `'lower'` in this case instead of `'upper'`.
Please note that 3D plots only work in your Jupyter notebook after first installing and then enabling the corresponding
mayavi extension:
.. code-block:: bash

   $ jupyter nbextension install --py mayavi --user
   $ jupyter nbextension enable mayavi --user --py
If you'd like to utilize 3D plots in your Jupyter Notebook or in the Python Interactive Window, initialize the
notebook backend first:
.. code-block:: Python

   >>> from mayavi import mlab
   >>> mlab.init_notebook('x3d')
The backend chosen by the `init_notebook` function can be `ipy` (the default; produces static images in Jupyter
Notebook and does not work in VSCode), `x3d` (produces interactive plots in Jupyter Notebook, but seems to not work
with VSCode), or `png` (produces png images, which work in both Jupyter Notebook and VSCode).
For more information and a quick introduction visit the `mayavi tips & tricks section <https://docs.enthought.com/mayavi/mayavi/tips.html>`_.
The plot2d module
-----------------
...
@@ -22,13 +22,15 @@ dependencies:
# File IO:
- hyperspy=1.5
- hyperspy-gui-ipywidgets=1.2
-#- h5py=2.9 # TODO: depends on hyperspy? Not needed here?
+#- h5py=2.9 # TODO: dependency of hyperspy? Not needed here?
# Plotting and colors:
- matplotlib=3.1
- Pillow=6.1
- cmocean=2.0
-#- qt=5.9 # TODO: only needed for mayavi? which version?
-- mayavi=4.6 # TODO: Get rid of!
+# 3D plotting:
+- mayavi=4.7
+- ipyevents=0.7
+- ipywidgets=7.5
# Testing:
- pytest=5.0
- pytest-cov=2.7
...
@@ -7,7 +7,7 @@
[metadata]
name = empyre
-version = 0.1.0
+version = 0.2.0
author = Jan Caron
author-email = j.caron@fz-juelich.de
description = Electron Microscopy Python Reconstruction
@@ -55,6 +55,8 @@ fftw =
pyfftw
colors =
cmocean
3d =
mayavi >= 4.7
all =
pyfftw
cmocean
@@ -68,6 +70,7 @@ tests =
# TODO: pip install .[io] is not working because hyperspy depends on traits which has no wheels on PyPI at the moment...
# TODO: See https://github.com/hyperspy/hyperspy/issues/2315 and https://github.com/enthought/traits/issues/357
# TODO: Add hyperspy back as soon as (if?) this is resolved... Until then: install hyperspy with conda!
# TODO: mayavi has the same exact problem and tvtk is apparently also a problem...
# CONFIGURATION FOR TESTING:
...
@@ -95,13 +95,13 @@ class Field(NDArrayOperatorsMixin):
    @scale.setter
    def scale(self, scale):
        if isinstance(scale, Number):  # Scale is the same for each dimension!
-            self.__scale = (scale,) * len(self.dim)
+            self.__scale = (float(scale),) * len(self.dim)
        elif isinstance(scale, tuple):
            ndim = len(self.dim)
            assert len(scale) == ndim, f'Each of the {ndim} dimensions needs a scale, but {scale} was given!'
            self.__scale = scale
        else:
-            raise AssertionError('Scaling has to be a number or a tuple of numbers!')
+            raise AssertionError(f'Scaling has to be a number or a tuple of numbers, was {scale} instead!')

    @property
    def shape(self):
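As an aside, a quick sketch of the new setter behaviour (assuming ``Field`` is re-exported in the ``empyre.fields`` namespace):

.. code-block:: python

   import numpy as np
   from empyre.fields import Field

   field = Field(np.zeros((32, 32)), scale=1)  # a Number is broadcast to every dimension...
   print(field.scale)                          # ...and is now cast to float: (1.0, 1.0)
   field.scale = (2.0, 5.0)                    # a tuple needs exactly one entry per dimension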
@@ -135,13 +135,13 @@ class Field(NDArrayOperatorsMixin):
    def mask(self):  # TODO: Function or property? Philosophical question?
        return Field(np.where(np.asarray(self.amp) > 0, True, False), self.scale, vector=False)

-    def __init__(self, data, scale=1, vector=False):
+    def __init__(self, data, scale=1.0, vector=False):
        self._log.debug('Calling __init__')
        self.__data = data
        self.vector = vector  # Set vector before scale, because scale needs to know if vector (via calling dim)!
        self.scale = scale
        if self.vector:
-            assert self.ncomp in (2, 3), 'Only 2- or 3-dimensional vector fields are supported!'
+            assert self.ncomp in (2, 3), 'Only 2- or 3-component vector fields are supported!'
        self._log.debug('Created ' + str(self))

    def __repr__(self):
@@ -292,7 +292,7 @@ class Field(NDArrayOperatorsMixin):
        return cls(np.stack(scalar_list, axis=-1), scalar_list[0].scale, vector=True)

    @classmethod
-    def from_signal(cls, signal, scale=None, vector=False):
+    def from_signal(cls, signal, scale=None, vector=None, comp_pos=-1):
        """Convert a :class:`~hyperspy.signals.Signal` object to a :class:`~.Field` object.

        Parameters
@@ -310,6 +310,11 @@ class Field(NDArrayOperatorsMixin):
            If set to True, forces the signal to be interpreted as a vector field. EMPyRe will check if the first axis
            is named 'vector components' (EMPyRe saves vector fields like this). If this is the case, vector will be
            automatically set to True and the signal will also be interpreted as a vector field.
        comp_pos: int, optional
            The index of the axis containing the vector components (if `vector=True`). EMPyRe needs this to be the last
            axis (index `-1`, which is the default). In case another position is given, the vector components axis will
            be moved to the last axis. Old Pyramid files will have this axis at index `0`, so use this for backwards
            compatibility.
        Notes
        -----
@@ -318,10 +323,12 @@ class Field(NDArrayOperatorsMixin):
""" """
cls._log.debug('Calling from_signal') cls._log.debug('Calling from_signal')
data = signal.data data = signal.data
if signal.axes_manager[0].name == 'vector components': if vector and comp_pos != -1: # component axis should be last, but is currently first -> roll to the end:
vector = True # Automatic detection! data = np.moveaxis(data, source=comp_pos, destination=-1)
if scale is None: # If not provided, try to read from axes_manager: if vector is None: # Automatic detection:
scale = [signal.axes_manager[i].scale for i in range(len(data.shape) - vector)] # One less axis if vector! vector = True if signal.axes_manager[0].name == 'vector components' else False
if scale is None: # If not provided, try to read from axes_manager, one less axis if vector!:
scale = tuple([signal.axes_manager[i].scale for i in range(len(data.shape) - vector)])
return cls(data, scale, vector) return cls(data, scale, vector)
def to_signal(self): def to_signal(self):
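A sketch of the new ``comp_pos`` handling (hypothetical file name; requires HyperSpy):

.. code-block:: python

   import hyperspy.api as hs
   from empyre.fields import Field

   signal = hs.load('old_pyramid_file.hdf5')  # hypothetical file with components on the first axis
   field = Field.from_signal(signal, vector=True, comp_pos=0)  # axis 0 is moved to the end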
...
@@ -20,7 +20,7 @@ _log = logging.getLogger(__name__)
# TODO: TEST!!!

-def create_shape_slab(dim, center=None, width=None, scale=1):
+def create_shape_slab(dim, center=None, width=None, scale=1.0):
    """Creates a Field object with the shape of a slab as a scalar field in arbitrary dimensions.

    Attributes
@@ -51,7 +51,7 @@ def create_shape_slab(dim, center=None, width=None, scale=1):
    return Field(data=data, scale=scale, vector=False)

-def create_shape_disc(dim, center=None, radius=None, height=None, axis=0, scale=1):
+def create_shape_disc(dim, center=None, radius=None, height=None, axis=0, scale=1.0):
    """Creates a Field object with the shape of a cylindrical disc in 2D or 3D.

    Attributes
@@ -102,7 +102,7 @@ def create_shape_disc(dim, center=None, radius=None, height=None, axis=0, scale=
    return Field(data=data, scale=scale, vector=False)

-def create_shape_ellipse(dim, center=None, width=None, height=None, axis=0, scale=1):
+def create_shape_ellipse(dim, center=None, width=None, height=None, axis=0, scale=1.0):
    """Creates a Field object with the shape of an ellipse in 2D or 3D.

    Attributes
@@ -154,7 +154,7 @@ def create_shape_ellipse(dim, center=None, width=None, height=None, axis=0, scal
    return Field(data=data, scale=scale, vector=False)

-def create_shape_ellipsoid(dim, center=None, width=None, scale=1):
+def create_shape_ellipsoid(dim, center=None, width=None, scale=1.0):
    """Creates a Field object with the shape of an ellipsoid in arbitrary dimensions.

    Attributes
@@ -182,7 +182,7 @@ def create_shape_ellipsoid(dim, center=None, width=None, scale=1):
    return Field(data=data, scale=scale, vector=False)

-def create_shape_sphere(dim, center=None, radius=None, scale=1):
+def create_shape_sphere(dim, center=None, radius=None, scale=1.0):
    """Creates a Field object with the shape of a sphere in arbitrary dimensions.

    Attributes
@@ -211,7 +211,7 @@ def create_shape_sphere(dim, center=None, radius=None, scale=1):
    return Field(data=data, scale=scale, vector=False)

-def create_shape_filament(dim, pos=None, axis=0, scale=1):
+def create_shape_filament(dim, pos=None, axis=0, scale=1.0):
    """Creates a Field object with the shape of a filament in arbitrary dimension.

    Parameters
@@ -239,7 +239,7 @@ def create_shape_filament(dim, pos=None, axis=0, scale=1):
    return Field(data=data, scale=scale, vector=False)

-def create_shape_voxel(dim, pos=None, scale=1):
+def create_shape_voxel(dim, pos=None, scale=1.0):
    """Creates a Field object with the shape of a single voxel in arbitrary dimension.

    Parameters
...
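For illustration, creating one of the shape masks above could look like this (a sketch, assuming the functions are exported in the ``empyre.fields`` namespace as documented above):

.. code-block:: python

   from empyre.fields import create_shape_disc

   # A 2D disc mask with unit pixels; scale now defaults to the float 1.0:
   disc = create_shape_disc(dim=(64, 64), radius=20)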
@@ -17,7 +17,7 @@ __all__ = ['create_vector_homog', 'create_vector_vortex', 'create_vector_skyrmio
_log = logging.getLogger(__name__)

-def create_vector_homog(dim, phi=0, theta=None, scale=1):
+def create_vector_homog(dim, phi=0, theta=None, scale=1.0):
    """Field subclass implementing a homogeneous vector field with 3 components in 2 or 3 dimensions.

    Attributes
@@ -37,11 +37,13 @@ def create_vector_homog(dim, phi=0, theta=None, scale=1):
    assert len(dim) in (2, 3), 'Disc can only be built in 2 or 3 dimensions!'
    assert isinstance(phi, Number), 'phi has to be an angle in radians!'
    assert isinstance(theta, Number) or theta is None, 'theta has to be an angle in radians or None!'
-    if theta is None:
+    if len(dim) == 2 and theta is None:  # 2D and 3rd component not explicitly wanted:
        y_comp = np.ones(dim) * np.sin(phi)
        x_comp = np.ones(dim) * np.cos(phi)
        data = np.stack([x_comp, y_comp], axis=-1)
-    else:
+    else:  # 3D, always have a third component:
+        if theta is None:
+            theta = np.pi/2  # xy-plane if not specified otherwise!
        z_comp = np.ones(dim) * np.cos(theta)
        y_comp = np.ones(dim) * np.sin(theta) * np.sin(phi)
        x_comp = np.ones(dim) * np.sin(theta) * np.cos(phi)
@@ -49,7 +51,7 @@ def create_vector_homog(dim, phi=0, theta=None, scale=1):
    return Field(data=data, scale=scale, vector=True)
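For illustration, a short sketch of the changed behaviour (assuming the function is exported in the ``empyre.fields`` namespace):

.. code-block:: python

   import numpy as np
   from empyre.fields import create_vector_homog

   in_plane = create_vector_homog((32, 32), phi=np.pi/4)   # 2D and no theta: only 2 components
   tilted = create_vector_homog((8, 32, 32), phi=np.pi/4)  # 3D: theta now defaults to pi/2 (xy-plane)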
-def create_vector_vortex(dim, center=None, core_r=0, oop_r=None, axis=0, scale=1):
+def create_vector_vortex(dim, center=None, core_r=0, oop_r=None, axis=0, scale=1.0):
    # TODO: oop and core_r documentation!!! General description!
    # TODO: CHIRALITY
    """Field subclass implementing a vortex vector field with 3 components in 2 or 3 dimensions.
@@ -123,7 +125,7 @@ def create_vector_vortex(dim, center=None, core_r=0, oop_r=None, axis=0, scale=1
    return Field(data=data, scale=scale, vector=True)

-def create_vector_skyrmion(dim, center=None, phi_0=0, skyrm_d=None, wall_d=None, axis=0, scale=1):
+def create_vector_skyrmion(dim, center=None, phi_0=0, skyrm_d=None, wall_d=None, axis=0, scale=1.0):
    """Create a 3-dimensional magnetic Bloch or Neel type skyrmion distribution.

    Parameters
@@ -230,7 +232,7 @@ def create_vector_skyrmion(dim, center=None, phi_0=0, skyrm_d=None, wall_d=None,
    return Field(data=data, scale=scale, vector=True)

-def create_vector_singularity(dim, center=None, scale=1):
+def create_vector_singularity(dim, center=None, scale=1.0):
    """Create a 3-dimensional magnetic distribution of a homogeneous magnetized object.

    Parameters
...
# -*- coding: utf-8 -*-
# Copyright 2020 by Forschungszentrum Juelich GmbH
# Author: J. Caron
#
"""Subpackage containing EMPyRe IO functionality for several EMPyRe classes."""
from .io_field import *
__all__ = []
__all__.extend(io_field.__all__)
del io_field
# -*- coding: utf-8 -*-
# Copyright 2020 by Forschungszentrum Juelich GmbH
# Author: J. Caron
#
"""Subpackage containing EMPyRe IO functionality for the Field class."""
from . import llg, numpy, ovf, tec, text, vtk
plugin_list = [llg, numpy, ovf, tec, text, vtk]
__all__ = ['plugin_list']
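Each plugin module implicitly follows the same protocol: it exposes a ``file_extensions`` tuple and ``reader``/``writer`` functions, and the dispatch in ``io_field.py`` (further below) works on the extension alone. A minimal sketch of a hypothetical new plugin (the ``.xyz`` format and its parsing are made up; the module would live next to the others in ``field_plugins``):

.. code-block:: python

   # field_plugins/xyz.py (hypothetical example plugin)
   import logging
   import numpy as np
   from ...fields.field import Field

   _log = logging.getLogger(__name__)
   file_extensions = ('.xyz',)  # Recognised file extensions

   def reader(filename, scale=None, vector=None, **kwargs):
       _log.debug('Call reader')
       data = np.loadtxt(filename, **kwargs)  # stand-in for the actual parsing logic
       return Field(data, scale if scale is not None else 1.0, vector=bool(vector))

   def writer(filename, field, **kwargs):
       _log.debug('Call writer')
       np.savetxt(filename, field.data, **kwargs)  # sketch; assumes 2D data

It would then be registered by importing it and adding it to ``plugin_list``.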
# -*- coding: utf-8 -*-
# Copyright 2020 by Forschungszentrum Juelich GmbH
# Author: J. Caron
#
"""IO plugin for LLG format."""
import logging
import numpy as np
from ...fields.field import Field
_log = logging.getLogger(__name__)
file_extensions = ('.llg',) # Recognised file extensions
def reader(filename, scale=None, vector=None, **kwargs):
    _log.debug('Call reader')
    if vector is None:
        vector = True
    assert vector is True, 'Only vector fields can be read from the llg file format!'
    SCALE = 1.0E-9 / 1.0E-2  # From cm to nm
    data_columns = np.genfromtxt(filename, skip_header=2)
    dim = tuple(np.genfromtxt(filename, dtype=int, skip_header=1, skip_footer=len(data_columns[:, 0])))
    if scale is None:  # Otherwise overwrite!
        stride_x = 1  # x varies fastest
        stride_y = dim[2]  # one y step for dim[2] x steps
        stride_z = dim[1] * dim[2]  # one z step for one x-y layer (dim[1]*dim[2])
        scale_x = (data_columns[stride_x, 0] - data_columns[0, 0]) / SCALE  # first column varies in x
        scale_y = (data_columns[stride_y, 1] - data_columns[0, 1]) / SCALE  # second column varies in y
        scale_z = (data_columns[stride_z, 2] - data_columns[0, 2]) / SCALE  # third column varies in z
        scale = (scale_z, scale_y, scale_x)
    x_mag, y_mag, z_mag = data_columns[:, 3:6].T
    data = np.stack((x_mag.reshape(dim), y_mag.reshape(dim), z_mag.reshape(dim)), axis=-1)
    return Field(data, scale, vector=True)
def writer(filename, field, **kwargs):
    _log.debug('Call writer')
    assert field.vector and len(field.dim) == 3, 'Only 3D vector fields can be saved to the llg file format!'
    SCALE = 1.0E-9 / 1.0E-2  # from nm to cm
    # Create 3D meshgrid and reshape it and the field into a list where x varies first:
    zzz, yyy, xxx = (np.indices(field.dim) + 0.5) * np.reshape(field.scale, (3, 1, 1, 1)) * SCALE  # broadcast shape!
    z_coord, y_coord, x_coord = np.ravel(zzz), np.ravel(yyy), np.ravel(xxx)  # Turn into vectors!
    x_comp, y_comp, z_comp = field.comp  # Extract scalar field components!
    x_vec, y_vec, z_vec = np.ravel(x_comp.data), np.ravel(y_comp.data), np.ravel(z_comp.data)  # Turn into vectors!
    data = np.array([x_coord, y_coord, z_coord, x_vec, y_vec, z_vec]).T
    # Save data to file:
    with open(filename, 'w') as mag_file:
        mag_file.write('LLGFileCreator: EMPyRe vector Field\n')
        mag_file.write(' {:d} {:d} {:d}\n'.format(*field.dim))
        mag_file.writelines('\n'.join(' '.join('{:7.6e}'.format(cell) for cell in row) for row in data))
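A hedged round-trip sketch for this plugin, via the top-level IO functions defined in ``io_field.py`` further below (hypothetical file name):

.. code-block:: python

   from empyre.fields import create_vector_homog
   from empyre.io import load_field, save_field

   field = create_vector_homog((4, 8, 8), phi=0.5, theta=1.0)  # 3D vector field, as the format requires
   save_field('example.llg', field)                            # dispatched to this writer via the extension
   loaded = load_field('example.llg', vector=True)             # positions/scale are converted cm <-> nm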
# -*- coding: utf-8 -*-
# Copyright 2020 by Forschungszentrum Juelich GmbH
# Author: J. Caron
#
"""IO plugin for the numpy format."""
import logging
import numpy as np
from ...fields.field import Field
_log = logging.getLogger(__name__)
file_extensions = ('.npy', '.npz') # Recognised file extensions
def reader(filename, scale=None, vector=None, **kwargs):
    _log.debug('Call reader')
    if vector is None:
        vector = False
    if scale is None:
        scale = 1.0
    return Field(np.load(filename, **kwargs), scale, vector)


def writer(filename, field, **kwargs):
    _log.debug('Call writer')
    np.save(filename, field.data, **kwargs)
# -*- coding: utf-8 -*-
# Copyright 2020 by Forschungszentrum Juelich GmbH
# Author: J. Caron
#
"""IO plugin for simple text format."""
import logging
import os
import numpy as np
from ...fields.field import Field
_log = logging.getLogger(__name__)
file_extensions = ('.ovf', '.omf', '.ohf', '.obf')  # Recognised file extensions
def reader(filename, scale=None, vector=None, segment=None, **kwargs):
    """More info at:
    http://math.nist.gov/oommf/doc/userguide11b2/userguide/vectorfieldformat.html
    http://math.nist.gov/oommf/doc/userguide12a5/userguide/OVF_2.0_format.html
    """
    _log.debug('Calling reader')
    if vector is None:
        vector = True
    assert vector is True, 'Only vector fields can be loaded from ovf-files!'
    with open(filename, 'rb') as load_file:
        line = load_file.readline()
        assert line.startswith(b'# OOMMF')  # File has OVF format!
        read_header, read_data = False, False
        header = {'version': line.split()[-1].decode('utf-8')}
        x_mag, y_mag, z_mag = [], [], []
        data_mode = None
        while True:
            # --- READ START OF FILE OR IN BETWEEN SEGMENTS LINE BY LINE ---------------------------
            if not read_header and not read_data:  # Start of file or between segments!
                line = load_file.readline()
                if line == b'':
                    break  # End of file is reached!
                if line.startswith(b'# Segment count'):
                    seg_count = int(line.split()[-1])  # Total number of segments (often just 1)!
                    seg_curr = 0  # Current segment (0: not in first segment, yet!)
                    if seg_count > 1:  # If multiple segments, check if "segment" was set correctly:
                        assert segment is not None, (f'Multiple ({seg_count}) segments were found! '
                                                     'Choose one via the segment parameter!')
                    elif segment is None:  # Only one segment AND parameter not set:
                        segment = 1  # Default to the first/only segment!
                    assert 0 < segment <= seg_count, (f'Parameter segment={segment} out of bounds, '
                                                      f'use a value between 1 and {seg_count}!')
                    header['segment_count'] = seg_count
                elif line.startswith(b'# Begin: Segment'):  # Segment start!
                    seg_curr += 1
                    if seg_curr > segment:
                        break  # Stop reading the file!
                elif line.startswith(b'# Begin: Header'):  # Header start!
                    read_header = True
                elif line.startswith(b'# Begin: Data'):  # Data start!
                    read_data = True
                    data_mode = ' '.join(line.decode('utf-8').split()[3:])
                    assert data_mode in ['text', 'Text', 'Binary 4', 'Binary 8'], \
                        f'Data mode {data_mode} is currently not supported by this reader!'
                    assert header.get('meshtype') == 'rectangular', \
                        'Only rectangular grids can be currently read!'
            # --- READ HEADER LINE BY LINE ---------------------------------------------------------
            elif read_header:
                line = load_file.readline()
                if line.startswith(b'# End: Header'):  # Header is done:
                    read_header = False
                    continue
                line = line.decode('utf-8')  # Decode to use strings here!
                line_list = line.split()
                if '##' in line_list:  # Strip trailing comments:
                    del line_list[line_list.index('##'):]
                if len(line_list) <= 1:  # Just '#' or empty line:
                    continue
                key, value = line_list[1].strip(':'), ' '.join(line_list[2:])
                if key not in header:  # Add new key, value pair if not existent:
                    header[key] = value
                elif key == 'Desc':  # Description can go over several lines:
                    header['Desc'] = ' '.join([header['Desc'], value])
            # --- READ DATA LINE BY LINE -----------------------------------------------------------
            elif read_data:  # Currently in a data block:
                if data_mode in ['text', 'Text']:  # Read data as text, line by line:
                    line = load_file.readline()
                    if line.startswith(b'# End: Data'):
                        read_data = False  # Stop reading data and search for new segments!
                        continue
                    elif seg_curr < segment:  # Do nothing with the line if wrong segment!
                        continue
                    else:
                        x, y, z = [float(i) for i in line.split()]
                        x_mag.append(x)
                        y_mag.append(y)
                        z_mag.append(z)
                elif 'Binary' in data_mode:  # Read data as binary, all bytes at the same time:
                    # Currently every segment is read until the wanted one is processed. Only that one is returned!
                    count = int(data_mode.split()[-1])  # Either 4 or 8!
                    if header['version'] == '1.0':  # Big endian float:
                        dtype = f'>f{count}'
                    elif header['version'] == '2.0':  # Little endian float:
                        dtype = f'<f{count}'
                    test = np.fromfile(load_file, dtype=dtype, count=1)  # Read test value!
                    if count == 4:  # Binary 4:
                        assert test == 1234567.0, 'Wrong test bytes!'
                    elif count == 8:  # Binary 8:
                        assert test == 123456789012345.0, 'Wrong test bytes!'
                    dim = (int(header['znodes']), int(header['ynodes']), int(header['xnodes']))
                    data_raw = np.fromfile(load_file, dtype=dtype, count=3*np.prod(dim))
                    x_mag, y_mag, z_mag = data_raw[0::3], data_raw[1::3], data_raw[2::3]
                    read_data = False  # Stop reading data and search for new segments (if any).
    # --- READING DONE -------------------------------------------------------------------------
    # Format after reading:
    dim = (int(header['znodes']), int(header['ynodes']), int(header['xnodes']))
    x_mag = np.asarray(x_mag).reshape(dim)
    y_mag = np.asarray(y_mag).reshape(dim)
    z_mag = np.asarray(z_mag).reshape(dim)
    data = np.stack((x_mag, y_mag, z_mag), axis=-1) * float(header.get('valuemultiplier', 1))
    if scale is None:
        unit = header.get('meshunit', 'nm')
        if unit == 'unspecified':
            unit = 'nm'
        _log.info(f'unit: {unit}')
        unit_scale = {'m': 1e9, 'mm': 1e6, 'µm': 1e3, 'nm': 1}[unit]
        xstep = float(header.get('xstepsize')) * unit_scale
        ystep = float(header.get('ystepsize')) * unit_scale
        zstep = float(header.get('zstepsize')) * unit_scale
        scale = (zstep, ystep, xstep)
    return Field(data, scale=scale, vector=True)
def writer(filename, field, **kwargs):
    _log.debug('Call writer')
    assert field.vector and len(field.dim) == 3, 'Only 3D vector fields can be saved to ovf files!'
    with open(filename, 'w') as save_file:
        save_file.write('# OOMMF OVF 2.0\n')
        save_file.write('# Segment count: 1\n')
        save_file.write('# Begin: Segment\n')
        # Write Header:
        save_file.write('# Begin: Header\n')
        name = os.path.split(filename)[1]
        save_file.write(f'# Title: PYRAMID-VECTORDATA {name}\n')
        save_file.write('# meshtype: rectangular\n')
        save_file.write('# meshunit: nm\n')
        save_file.write('# valueunit: A/m\n')
        save_file.write('# valuemultiplier: 1.\n')
        save_file.write('# xmin: 0.\n')
        save_file.write('# ymin: 0.\n')
        save_file.write('# zmin: 0.\n')
        save_file.write(f'# xmax: {field.scale[2] * field.dim[2]}\n')
        save_file.write(f'# ymax: {field.scale[1] * field.dim[1]}\n')
        save_file.write(f'# zmax: {field.scale[0] * field.dim[0]}\n')
        save_file.write('# xbase: 0.\n')
        save_file.write('# ybase: 0.\n')
        save_file.write('# zbase: 0.\n')
        save_file.write(f'# xstepsize: {field.scale[2]}\n')
        save_file.write(f'# ystepsize: {field.scale[1]}\n')
        save_file.write(f'# zstepsize: {field.scale[0]}\n')
        save_file.write(f'# xnodes: {field.dim[2]}\n')
        save_file.write(f'# ynodes: {field.dim[1]}\n')
        save_file.write(f'# znodes: {field.dim[0]}\n')
        save_file.write('# End: Header\n')
        # Write data:
        save_file.write('# Begin: Data Text\n')
        x_mag, y_mag, z_mag = field.comp
        x_mag = x_mag.data.ravel()
        y_mag = y_mag.data.ravel()
        z_mag = z_mag.data.ravel()
        for i in range(np.prod(field.dim)):
            save_file.write(f'{x_mag[i]:g} {y_mag[i]:g} {z_mag[i]:g}\n')
        save_file.write('# End: Data Text\n')
        save_file.write('# End: Segment\n')
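A usage sketch for this plugin (hypothetical file names; the writer produces a single text-mode OVF 2.0 segment):

.. code-block:: python

   import numpy as np
   from empyre.fields import Field
   from empyre.io import load_field, save_field

   field = Field(np.random.rand(4, 8, 8, 3), scale=(2.0, 1.0, 1.0), vector=True)
   save_field('magnetization.ovf', field)    # writes a single text-mode segment
   loaded = load_field('magnetization.ovf')  # vector=True is the plugin default
   # For multi-segment files, one segment has to be chosen explicitly:
   # load_field('relaxation.ovf', segment=3)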
# -*- coding: utf-8 -*-
# Copyright 2020 by Forschungszentrum Juelich GmbH
# Author: J. Caron
#
"""IO plugin for simple text format."""
import logging
import re
import numpy as np
from ...fields.field import Field
from ...utils.misc import interp_to_regular_grid
_log = logging.getLogger(__name__)
file_extensions = ('.tec',) # Recognised file extensions
def reader(filename, scale=None, vector=None, **kwargs):
    assert scale is None or isinstance(scale, tuple), \
        'The scale must be a tuple, each entry corresponding to one grid dimension!'
    _log.debug('Call reader')
    if vector is None:
        vector = True
    assert vector is True, 'Only vector fields can be loaded from tec-files!'
    with open(filename, 'r') as mag_file:
        lines = mag_file.readlines()  # Read in lines!
    match = re.search(r'N=(\d+)', lines[2])  # Extract number of points from third line!
    if match:
        n_points = int(match.group(1))
    else:
        raise IOError('File does not seem to match .tec format!')
    n_head, n_foot = 3, len(lines) - (3 + n_points)
    # Read in data:
    data_raw = np.genfromtxt(filename, skip_header=n_head, skip_footer=n_foot)
    if scale is None:
        raise ValueError('For the interpolation of unstructured grids, the `scale` parameter is required!')
    data = interp_to_regular_grid(data_raw[:, :3], data_raw[:, 3:], scale, **kwargs)
    return Field(data, scale, vector=vector)


def writer(filename, field, **kwargs):
    _log.debug('Call writer')
    raise NotImplementedError('A writer for this extension is not yet implemented!')
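Since ``.tec`` files hold an unstructured point cloud, the reader always needs an explicit ``scale`` for the interpolation; a sketch (hypothetical file name):

.. code-block:: python

   from empyre.io import load_field

   # scale is one (z, y, x) grid spacing per dimension, passed through to interp_to_regular_grid:
   field = load_field('simulation.tec', scale=(2.0, 1.0, 1.0), vector=True)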
# -*- coding: utf-8 -*-
# Copyright 2020 by Forschungszentrum Juelich GmbH
# Author: J. Caron
#
"""IO plugin for simple text format."""
import logging
import ast
import numpy as np
from ...fields.field import Field
_log = logging.getLogger(__name__)
file_extensions = ('.txt',) # Recognised file extensions
def reader(filename, scale=None, vector=None, **kwargs):
    _log.debug('Call reader')
    if vector is None:
        vector = False
    assert vector is False, 'Only scalar 2D fields can currently be read with this file reader!'
    with open(filename, 'r') as load_file:  # Read data:
        empyre_format = load_file.readline().startswith('EMPYRE-FORMAT')
        if empyre_format:  # File has EMPyRe structure:
            scale = ast.literal_eval(load_file.readline()[8:-4])  # [8:-4] takes just the scale string!
            data = np.loadtxt(filename, delimiter='\t', skiprows=2)  # skips header!
        else:  # Try default with provided kwargs:
            scale = 1.0 if scale is None else scale  # Set default if not provided!
            data = np.loadtxt(filename, **kwargs)
    return Field(data, scale=scale, vector=False)


def writer(filename, field, with_header=True, **kwargs):
    _log.debug('Call writer')
    assert not field.vector, 'Vector fields can currently not be saved to text!'
    assert len(field.dim) == 2, 'Only 2D fields can currently be saved to text!'
    if with_header:  # write header:
        with open(filename, 'w') as save_file:
            save_file.write('EMPYRE-FORMAT\n')
            save_file.write(f'scale = {field.scale} nm\n')
        save_kwargs = {'fmt': '%7.6e', 'delimiter': '\t'}
    else:
        save_kwargs = kwargs
    with open(filename, 'ba') as save_file:  # the 'a' is for append!
        np.savetxt(save_file, field.data, **save_kwargs)
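A round-trip sketch for the text plugin (hypothetical file name):

.. code-block:: python

   import numpy as np
   from empyre.fields import Field
   from empyre.io import load_field, save_field

   field = Field(np.random.rand(16, 16), scale=2.0, vector=False)
   save_field('phase.txt', field)    # with_header=True writes the EMPYRE-FORMAT header + scale
   loaded = load_field('phase.txt')  # the header is detected and the scale restored automatically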
# -*- coding: utf-8 -*-
# Copyright 2020 by Forschungszentrum Juelich GmbH
# Author: J. Caron
#
"""IO plugin for simple text format."""
import logging
from numbers import Number
import numpy as np
from ...fields.field import Field
from ...utils.misc import interp_to_regular_grid
from ...vis import colors
_log = logging.getLogger(__name__)
file_extensions = ('.vtk',) # Recognised file extensions
def reader(filename, scale=None, vector=None, **kwargs):
    """More info at:
    overview: https://docs.enthought.com/mayavi/mayavi/data.html
    writing: https://vtk.org/Wiki/VTK/Writing_VTK_files_using_python
    format: https://www.vtk.org/wp-content/uploads/2015/04/file-formats.pdf
    """
    _log.debug('Calling reader')
    try:
        from tvtk.api import tvtk
    except ImportError:
        _log.error('This extension requires the tvtk package!')
        return
    if vector is None:
        vector = True
    # Setting up reader:
    reader = tvtk.DataSetReader(file_name=filename, read_all_scalars=True, read_all_vectors=vector)
    reader.update()
    # Getting output:
    output = reader.output
    assert output is not None, 'File reader could not find data or file "{}"!'.format(filename)
    # Reading points and vectors:
    if isinstance(output, tvtk.ImageData):  # tvtk.StructuredPoints is a subclass of tvtk.ImageData!
        # Connectivity: implicit; described by: 3D data array and spacing along each axis!
        _log.info('geometry: ImageData')
        # Load relevant information from output (reverse to get typical Python order z,y,x):
        dim = output.dimensions[::-1]
        origin = output.origin[::-1]
        spacing = output.spacing[::-1]
        _log.info(f'dim: {dim}, origin: {origin}, spacing: {spacing}')
        assert len(dim) == 3, 'Data has to be three-dimensional!'
        if scale is None:
            scale = tuple(spacing)
        if vector:  # Extract vector components and create magnitude array:
            vector_array = np.asarray(output.point_data.vectors, dtype=np.float)
            x_mag, y_mag, z_mag = vector_array.T
            data = np.stack((x_mag.reshape(dim), y_mag.reshape(dim), z_mag.reshape(dim)), axis=-1)
        else:  # Extract scalar data and create magnitude array:
            scalar_array = np.asarray(output.point_data.scalars, dtype=np.float)
            data = scalar_array.reshape(dim)
    elif isinstance(output, tvtk.RectilinearGrid):
        # Connectivity: implicit; described by: 3D data array and 1D array of spacing for each axis!
        _log.info('geometry: RectilinearGrid')
        raise NotImplementedError('RectilinearGrid is currently not supported!')
    elif isinstance(output, tvtk.StructuredGrid):
        # Connectivity: implicit; described by: 3D data array and 3D position arrays for each axis!
        _log.info('geometry: StructuredGrid')
        raise NotImplementedError('StructuredGrid is currently not supported!')
    elif isinstance(output, tvtk.PolyData):
        # Connectivity: explicit; described by: x, y, z positions of vertices and arrays of surface cells!
        _log.info('geometry: PolyData')
        raise NotImplementedError('PolyData is currently not supported!')
    elif isinstance(output, tvtk.UnstructuredGrid):
        # Connectivity: explicit; described by: x, y, z positions of vertices and arrays of volume cells!
        _log.info('geometry: UnstructuredGrid')
        # Load relevant information from output:
        point_array = np.asarray(output.points, dtype=np.float)
        if vector:
            data_array = np.asarray(output.point_data.vectors, dtype=np.float)
        else:
            data_array = np.asarray(output.point_data.scalars, dtype=np.float)
        if scale is None:
            raise ValueError('For the interpolation of unstructured grids, the `scale` parameter is required!')
        elif isinstance(scale, Number):  # Scale is the same for each dimension x, y, z!
            scale = (scale,) * 3
        elif isinstance(scale, tuple):
            assert len(scale) == 3, f'Each dimension (z, y, x) needs a scale, but {scale} was given!'
        data = interp_to_regular_grid(point_array, data_array, scale, **kwargs)
    else:
        raise TypeError('Data type of {} not understood!'.format(output))
    return Field(data, scale, vector=vector)
def writer(filename, field, **kwargs):
    _log.debug('Call writer')
    assert len(field.dim) == 3, 'Currently only 3D fields can be saved to vtk!'
    try:
        from tvtk.api import tvtk, write_data
    except ImportError:
        _log.error('This extension requires the tvtk package!')
        return
    # Create dataset:
    origin = (0, 0, 0)
    spacing = (field.scale[2], field.scale[1], field.scale[0])
    dimensions = (field.dim[2], field.dim[1], field.dim[0])
    sp = tvtk.StructuredPoints(origin=origin, spacing=spacing, dimensions=dimensions)
    # Fill with data from field:
    if field.vector:  # Handle vector fields:
        # Put vector components in corresponding array:
        vectors = field.data.reshape(-1, 3)
        sp.point_data.vectors = vectors
        sp.point_data.vectors.name = 'vectors'
        # Calculate colors:
        x_mag, y_mag, z_mag = field.comp
        magvec = np.asarray((x_mag.data.ravel(), y_mag.data.ravel(), z_mag.data.ravel()))
        rgb = colors.CMAP_CIRCULAR_DEFAULT.rgb_from_vector(magvec)
        point_colors = tvtk.UnsignedIntArray()
        point_colors.number_of_components = 3
        point_colors.name = 'colors'
        point_colors.from_array(rgb)
        sp.point_data.scalars = point_colors
        sp.point_data.scalars.name = 'colors'
    else:  # Handle scalar fields:
        scalars = field.data.ravel()
        sp.point_data.scalars = scalars
        sp.point_data.scalars.name = 'scalars'
    # Write the data to file:
    write_data(sp, filename)
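A usage sketch (requires the ``tvtk`` package shipped with mayavi; file names are hypothetical):

.. code-block:: python

   from empyre.io import load_field

   # Structured data (ImageData/StructuredPoints) needs no scale, it is read from the spacing:
   field = load_field('magnetization.vtk', vector=True)
   # Unstructured grids are interpolated and therefore need an explicit scale:
   cloud = load_field('point_cloud.vtk', scale=(1.0, 1.0, 1.0), vector=True)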
# -*- coding: utf-8 -*-
# Copyright 2020 by Forschungszentrum Juelich GmbH
# Author: J. Caron
#
"""IO functionality for Field objects."""
import logging
import os
from ..fields.field import Field
from .field_plugins import plugin_list
__all__ = ['load_field', 'save_field']
_log = logging.getLogger(__name__)
def load_field(filename, scale=None, vector=None, **kwargs):
    """Load supported file into a :class:`~..fields.Field` instance.

    The function loads the file according to the extension:

    SCALAR???
    - hdf5 for HDF5.  # TODO: You can use comp_pos here!!!
    - EMD Electron Microscopy Dataset format (also HDF5).
    - npy or npz for numpy formats.

    PHASEMAP
    - hdf5 for HDF5.
    - rpl for Ripple (useful to export to Digital Micrograph).
    - dm3 and dm4 for Digital Micrograph files.
    - unf for SEMPER unf binary format.
    - txt format.
    - npy or npz for numpy formats.
    - Many image formats such as png, tiff, jpeg...

    VECTOR
    - hdf5 for HDF5.
    - EMD Electron Microscopy Dataset format (also HDF5).
    - llg format.
    - ovf format.
    - npy or npz for numpy formats.

    Any extra keyword is passed to the corresponding reader. For available options see their individual documentation.

    Parameters
    ----------
    filename: str
        The filename to be loaded.
    scale: tuple of float, optional
        Scaling along the dimensions of the underlying data. Default is 1.
    vector: bool, optional
        True if the field should be a vector field, False if it should be interpreted as a scalar field (default).

    Returns
    -------
    field: `Field`
        A `Field` object containing the loaded data.

    Notes
    -----
    Falls back to HyperSpy routines for loading data, make sure it is installed if you need the full capabilities.

    """
    _log.debug('Calling load_field')
    extension = os.path.splitext(filename)[1]
    for plugin in plugin_list:  # Iterate over all plugins:
        if extension in plugin.file_extensions:  # Check if extension is recognised:
            return plugin.reader(filename, scale=scale, vector=vector, **kwargs)
    # If nothing was found, try HyperSpy:
    _log.debug('Using HyperSpy')
    try:
        import hyperspy.api as hs
    except ImportError:
        _log.error('This extension requires the hyperspy package!')
        return
    comp_pos = kwargs.pop('comp_pos', -1)
    return Field.from_signal(hs.load(filename, **kwargs), scale=scale, vector=vector, comp_pos=comp_pos)
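For extensions that no plugin claims, the HyperSpy fallback kicks in; a sketch (hypothetical file name, requires HyperSpy):

.. code-block:: python

   from empyre.io import load_field

   # '.emd' is not in any plugin's file_extensions, so hs.load() + Field.from_signal() is used;
   # comp_pos is popped from the kwargs and forwarded (old Pyramid files: comp_pos=0):
   field = load_field('dataset.emd', vector=True, comp_pos=0)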
def save_field(filename, field, **kwargs):
    """Saves the Field in the specified format.

    The function gets the format from the extension:
    - hdf5 for HDF5.
    - EMD Electron Microscopy Dataset format (also HDF5).
    - npy or npz for numpy formats.

    If no extension is provided, 'hdf5' is used. Most formats are saved with the HyperSpy package (internally the
    field is first converted to a HyperSpy Signal).

    Each format accepts a different set of parameters. For details see the specific format documentation.

    Parameters
    ----------
    filename : str
        Name of the file which the Field is saved into. The extension determines the saving procedure.
    field : Field
        The field object which is saved into the file.

    """
    _log.debug('Calling save_field')
    extension = os.path.splitext(filename)[1]
    for plugin in plugin_list:  # Iterate over all plugins:
        if extension in plugin.file_extensions:  # Check if extension is recognised:
            plugin.writer(filename, field, **kwargs)
            return
    # If nothing was found, try HyperSpy:
    _log.debug('Using HyperSpy')
    field.to_signal().save(filename, **kwargs)
@@ -24,7 +24,7 @@ def levi_civita(i, j, k):
    return (j-i) * (k-i) * (k-j) / (np.abs(j-i) * np.abs(k-i) * np.abs(k-j))

-def interp_to_regular_grid(points, values, scale, scale_factor=1, step=1, convex=True):
+def interp_to_regular_grid(points, values, scale, scale_factor=1, step=1, convex=True, distance_upper_bound=None):
    """Interpolate values on points to regular grid.

    Parameters
@@ -45,6 +45,12 @@ def interp_to_regular_grid(points, values, scale, scale_factor=1, step=1, convex
    convex : bool, optional
        By default True. If this is set to False, additional measures are taken to find holes in the point cloud.
        WARNING: this is an experimental feature that should be used with caution!
    distance_upper_bound: float, optional
        Only used if `convex=False`. Sets the upper bound determining if a point of the new (interpolated) grid is too
        far away from any original point. Those points are assumed to be in a "hole" and their values are set to zero.
        Set this value in nm; it will be converted to the local unit of the original points internally. If not set
        (and `convex=False`), twice the mean of `scale` is used instead (which can be troublesome if the scales vary
        drastically).
    Returns
    -------
@@ -66,10 +72,11 @@ def interp_to_regular_grid(points, values, scale, scale_factor=1, step=1, convex
    _log.info(f'y-range: {y_min:.2g} <-> {y_max:.2g} ({y_diff:.2g})')
    _log.info(f'z-range: {z_min:.2g} <-> {z_max:.2g} ({z_diff:.2g})')
    # Determine dimensions from given grid spacing a:
-    dim = tuple(np.round(np.asarray((z_diff/scale, y_diff/scale, x_diff/scale), dtype=int)))
-    x = x_min + scale * (np.arange(dim[2]) + 0.5)  # +0.5: shift to pixel center!
-    y = y_min + scale * (np.arange(dim[1]) + 0.5)  # +0.5: shift to pixel center!
-    z = z_min + scale * (np.arange(dim[0]) + 0.5)  # +0.5: shift to pixel center!
+    dim = tuple(np.round(np.asarray((z_diff/scale[0], y_diff/scale[1], x_diff/scale[2]), dtype=int)))
+    assert all(dim) > 0, f'All dimensions of dim={dim} need to be > 0, please adjust the scale accordingly!'
+    z = z_min + scale[0] * (np.arange(dim[0]) + 0.5)  # +0.5: shift to pixel center!
+    y = y_min + scale[1] * (np.arange(dim[1]) + 0.5)  # +0.5: shift to pixel center!
+    x = x_min + scale[2] * (np.arange(dim[2]) + 0.5)  # +0.5: shift to pixel center!
    # Create points for new Euclidian grid; fliplr for (x, y, z) order:
    points_euc = np.fliplr(np.asarray(list(itertools.product(z, y, x))))
    # Make values 2D (if not already); double .T so that a new axis is added at the END (n, 1):
@@ -92,12 +99,25 @@
    # If NOT convex, we have to check for additional holes in the structure (EXPERIMENTAL):
    if not convex:  # Only necessary if the user expects holes in the (-> nonconvex) distribution:
        # Create k-dimensional tree for queries:
+        _log.info('Create cKDTree...')
+        tick = time()
        tree = cKDTree(points)
+        tock = time()
+        _log.info(f'cKDTree creation complete (took {tock-tick:.2f} s)!')
        # Query the tree for nearest neighbors, x: points to query, k: number of neighbors, p: norm
        # to use (here: 2 - Euclidean), distance_upper_bound: maximum distance that is searched!
-        data, leafsize = tree.query(x=points_euc, k=1, p=2, distance_upper_bound=2*np.mean(scale))
+        if distance_upper_bound is None:  # Take the mean of the scale for the upper bound:
+            distance_upper_bound = 2 * np.mean(scale)  # NOTE: could be problematic for wildly varying scale numbers.
+        else:  # Convert to local scale:
+            distance_upper_bound *= scale_factor
+        _log.info('Start cKDTree neighbour query...')
+        tick = time()
+        data, leafsize = tree.query(x=points_euc, k=1, p=2, distance_upper_bound=distance_upper_bound)
+        tock = time()
+        _log.info(f'cKDTree neighbour query complete (took {tock-tick:.2f} s)!')
        # Create boolean mask that determines which interpolation points have no neighbor near enough:
        mask = np.isinf(data).reshape(dim)  # Points further away than upper bound were marked 'inf'!
-        for i in tqdm(range(values.shape[-1])):  # Set these points to zero (NOTE: This can take a looooong time):
-            interpolation[..., i].ravel()[mask.ravel()] = 0
+        _log.info(f'{np.sum(mask)} of {points_euc.shape[0]} points were assumed to be in holes of the point cloud!')
+        # Set these points to zero (NOTE: This can take a looooong time):
+        interpolation[mask, :] = 0
    return np.squeeze(interpolation)
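A self-contained sketch of the new keyword (assuming ``interp_to_regular_grid`` stays importable from ``empyre.utils.misc``, matching the imports in the plugins above):

.. code-block:: python

   import numpy as np
   from empyre.utils.misc import interp_to_regular_grid

   rng = np.random.default_rng(42)
   points = rng.uniform(0, 10, size=(1000, 3))  # scattered (x, y, z) positions
   values = rng.standard_normal((1000, 3))      # one 3-vector per point
   # scale is now a (z, y, x) tuple; distance_upper_bound (in nm) marks "holes" when convex=False:
   data = interp_to_regular_grid(points, values, scale=(1.0, 1.0, 1.0),
                                 convex=False, distance_upper_bound=2.0)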
@@ -8,14 +8,17 @@ from . import colors
from .plot2d import *
from .decorators import *
from .tools import *
+from .plot3d import *
__all__ = ['colors']
__all__.extend(plot2d.__all__)
__all__.extend(decorators.__all__)
__all__.extend(tools.__all__)
+__all__.extend(plot3d.__all__)
-del decorators
del plot2d
+del decorators
del tools
+del plot3d