API reference

This is a complete API reference for the OpenPIV Python module.

The openpiv.preprocess module

This module contains image processing routines that improve images prior to PIV processing.

openpiv.preprocess.contrast_stretch(img, lower_limit=2, upper_limit=98)[source]

Simple percentile-based contrast stretching

Parameters:
  • img (image) – a two dimensional array of float32 or float64, but can be uint16, uint8 or similar type
  • lower_limit (int) – lower percentile limit
  • upper_limit (int) – upper percentile limit
Returns:

img – a filtered two dimensional array of the input image

Return type:

image
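
Example

A minimal usage sketch (the synthetic random frame below is only a stand-in for a real PIV image):

>>> import numpy as np
>>> from openpiv import preprocess
>>> img = np.random.rand(256, 256).astype(np.float32)   # synthetic frame
>>> stretched = preprocess.contrast_stretch(img, lower_limit=2, upper_limit=98)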

openpiv.preprocess.dynamic_masking(image, method='edges', filter_size=7, threshold=0.005)[source]

Dynamically masks out the objects in the PIV images

Parameters:
  • image (image) – a two dimensional array of uint16, uint8 or similar type
  • method (string) – ‘edges’ or ‘intensity’: the ‘edges’ method is used for relatively dark and sharp objects with visible edges on dark backgrounds, i.e. low contrast; the ‘intensity’ method is useful for smooth bright objects on dark backgrounds, or vice versa, i.e. images with high contrast between the object and the background
  • filter_size (integer) – a scalar that defines the size of the Gaussian filter
  • threshold (float) – a value of the threshold to segment the background from the object. Default value: None, replaced by the skimage.filters.threshold_otsu value
Returns:

image – array of the same datatype as the incoming image, with the object masked out as completely black region(s) of zeros (integers or floats).

Example

frame_a = openpiv.tools.imread('Camera1-001.tif')
imshow(frame_a)  # original

frame_a = dynamic_masking(frame_a, method='edges', filter_size=7, threshold=0.005)
imshow(frame_a)  # masked

openpiv.preprocess.gen_lowpass_background(img_list, sigma=3, resize=None)[source]

Generate a background by averaging a low pass of all images in an image list. Apply by subtracting generated background image.

Parameters:
  • img_list (list) – list of image file paths
  • sigma (float) – sigma of the Gaussian filter
  • resize (int or float) – disabled by default; if set, the array is normalized and scaled to the user-selected maximum pixel intensity
Returns:

img – a mean of all low-passed images

Return type:

image

openpiv.preprocess.gen_min_background(img_list, resize=255)[source]

Generate a background by averaging the minimum intensity of all images in an image list. Apply by subtracting generated background image.

Parameters:
  • img_list (list) – list of image file paths
  • resize (int or float) – if set, the array is normalized and scaled to the user-selected maximum pixel intensity [default: 255]
Returns:

img – a mean of all images

Return type:

image
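
Example

A hedged sketch of background removal; the file names are placeholders and it is assumed, as described above, that the list holds paths to image files:

>>> from openpiv import preprocess, tools
>>> img_list = ['frame_0001.tif', 'frame_0002.tif', 'frame_0003.tif']  # hypothetical paths
>>> background = preprocess.gen_min_background(img_list, resize=255)
>>> frame = tools.imread(img_list[0]).astype(float)
>>> frame_no_bg = frame - background   # apply by subtracting the background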

openpiv.preprocess.high_pass(img, sigma=5, clip=False)[source]

Simple high pass filter

Parameters:
  • img (image) – a two dimensional array of float32 or float64, but can be uint16, uint8 or similar type
  • sigma (float) – sigma value of the gaussian filter
Returns:

img – a filtered two dimensional array of the input image

Return type:

image
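
Example

A minimal sketch on a synthetic frame:

>>> import numpy as np
>>> from openpiv import preprocess
>>> img = np.random.rand(256, 256).astype(np.float32)
>>> filtered = preprocess.high_pass(img, sigma=5, clip=True)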

openpiv.preprocess.instensity_cap(img, std_mult=2)[source]

Simple intensity capping.

Parameters:
  • img (image) – a two dimensional array of float32 or float64, but can be uint16, uint8 or similar type
  • std_mult (int) – how strong the intensity capping is. Lower values yield a lower threshold
Returns:

img – a filtered two dimensional array of the input image

Return type:

image

openpiv.preprocess.intensity_clip(img, min_val=0, max_val=None, flag='clip')[source]

Simple intensity clipping

Parameters:
  • img (image) – a two dimensional array of float32 or float64, but can be uint16, uint8 or similar type
  • min_val (int or float) – min allowed pixel intensity
  • max_val (int or float) – max allowed pixel intensity
  • flag (str) – one of two methods used to set invalid pixel intensities
Returns:

img – a filtered two dimensional array of the input image

Return type:

image
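
Example

A minimal sketch, clipping a synthetic 8-bit-like frame to a chosen intensity range:

>>> import numpy as np
>>> from openpiv import preprocess
>>> img = (np.random.rand(256, 256) * 255).astype(np.float32)
>>> clipped = preprocess.intensity_clip(img, min_val=10, max_val=230, flag='clip')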

openpiv.preprocess.local_variance_normalization(img, sigma_1=2, sigma_2=1, clip=True)[source]

Local variance normalization by two Gaussian filters. This method is used by common commercial software.

Parameters:
  • img (image) – a two dimensional array of float32 or float64, but can be uint16, uint8 or similar type
  • sigma_1 (float) – sigma value of the first gaussian low pass filter
  • sigma_2 (float) – sigma value of the second gaussian low pass filter
  • clip (bool) – set negative pixels to zero
Returns:

img – a filtered two dimensional array of the input image

Return type:

image
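
Example

A minimal sketch on a synthetic frame:

>>> import numpy as np
>>> from openpiv import preprocess
>>> img = np.random.rand(256, 256).astype(np.float32)
>>> normalized = preprocess.local_variance_normalization(img, sigma_1=2, sigma_2=1, clip=True)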

openpiv.preprocess.mask_coordinates(image_mask, tolerance=1.5, min_length=10, plot=False)[source]
Creates a set of polygon coordinates from the image mask

Inputs:

image_mask : binary image of a mask

[tolerance] : float – tolerance for approximate_polygons, default = 1.5

[min_length] : int – minimum length of the polygon; filters out small polygons such as noisy regions, default = 10

Outputs:
mask_coord : list of mask coordinates in pixels

Example

# if the masks of images A and B are slightly different:
image_mask = np.logical_and(image_mask_a, image_mask_b)
mask_coords = mask_coordinates(image_mask)

openpiv.preprocess.normalize_array(array, axis=None)[source]

Min/max normalization to [0,1].

Parameters:
  • array (np.ndarray) – array to normalize
  • axis (int, tuple) – axis to find values for normalization
Returns:

array – normalized array

Return type:

np.ndarray
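
Example

A short sketch; the array values are arbitrary:

>>> import numpy as np
>>> from openpiv.preprocess import normalize_array
>>> a = np.array([[2.0, 4.0], [6.0, 10.0]])
>>> a01 = normalize_array(a)            # whole array scaled to [0, 1]
>>> a_cols = normalize_array(a, axis=0) # each column scaled independently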

openpiv.preprocess.offset_image(img, offset_x, offset_y, pad='zero')[source]

Offset an image by padding.

Parameters:
  • img (image) – a two dimensional array of float32 or float64, but can be uint16, uint8 or similar type
  • offset_x (int) – offset an image by integer values. Positive values shift the image to the right and negative values shift it to the left
  • offset_y (int) – offset an image by integer values. Positive values shift the image to the top and negative values shift it to the bottom
  • pad (str) – pad the shift with zeros or a reflection of the shift
Returns:

img – a transformed two dimensional array of the input image

Return type:

image
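
Example

A minimal sketch, shifting a synthetic frame 5 pixels to the right and 3 pixels towards the bottom (the sign convention follows the parameter descriptions above):

>>> import numpy as np
>>> from openpiv import preprocess
>>> img = np.random.rand(128, 128).astype(np.float32)
>>> shifted = preprocess.offset_image(img, offset_x=5, offset_y=-3, pad='zero')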

openpiv.preprocess.prepare_mask_from_polygon(x, y, mask_coords)[source]

Converts mask coordinates of the image mask to a grid of 1/0 on the x, y grid.

Inputs:

x, y : grid of x, y points

mask_coords : array of coordinates in pixels of the image_mask

Outputs:

grid of points of the mask, of the shape of x

openpiv.preprocess.prepare_mask_on_grid(x: numpy.ndarray, y: numpy.ndarray, image_mask: numpy.ndarray) → numpy.array[source]

_summary_

Parameters:
  • x (np.ndarray) – x coordinates of vectors in pixels
  • y (np.ndarray) – y coordinates of vectors in pixels
  • image_mask (np.ndarray) – image of the mask, 1 or True is to be masked
Returns:

boolean array of the size of x,y with 1 where the values are masked

Return type:

np.ndarray

openpiv.preprocess.standardize_array(array, axis=None)[source]

Standardize an array.

Parameters:
  • array (np.ndarray) – array to standardize
  • axis (int, tuple) – axis to find values for standardization
Returns:

array – standardized array

Return type:

np.ndarray

openpiv.preprocess.stretch_image(img, x_axis=0, y_axis=0)[source]

Stretch an image by interpolation.

Parameters:
  • img (image) – a two dimensional array of float32 or float64, but can be uint16, uint8 or similar type
  • x_axis (float) – stretch the x-axis of an image where 0 == no stretching
  • y_axis (float) – stretch the y-axis of an image where 0 == no stretching
Returns:

img – a transformed two dimensional array of the input image

Return type:

image

openpiv.preprocess.threshold_binarize(img, threshold, max_val=255)[source]

Simple binarizing threshold

Parameters:
  • img (image) – a two dimensional array of float32 or float64, but can be uint16, uint8 or similar type
  • threshold (int or float) – threshold value; pixels below the threshold are set to zero and pixels above the threshold are set to the user-selected maximum value
  • max_val (int or float) – maximum pixel value of the image
Returns:

img – a filtered two dimensional array of the input image

Return type:

image
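
Example

A minimal sketch, binarizing a synthetic frame around its mid-intensity:

>>> import numpy as np
>>> from openpiv import preprocess
>>> img = (np.random.rand(128, 128) * 255).astype(np.float32)
>>> binary = preprocess.threshold_binarize(img, threshold=128, max_val=255)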

The openpiv.tools module

The openpiv.tools module is a collection of utilities and tools.

openpiv.tools.convert_16bits_tif(filename, save_name)[source]

Convert a 16-bit TIFF to an OpenPIV-readable image

Parameters:
  • filename (_type_) – filename of a 16 bit TIFF
  • save_name (_type_) – new image filename
openpiv.tools.display(message)[source]

Display a message to standard output.

Parameters:message (string) – a message to be printed
openpiv.tools.display_vector_field(filename: Union[pathlib.Path, str], on_img: Optional[bool] = False, image_name: Union[pathlib.Path, str, None] = None, window_size: Optional[int] = 32, scaling_factor: Optional[float] = 1.0, ax: Optional[Any] = None, width: Optional[float] = 0.0025, show_invalid: Optional[bool] = True, **kw)[source]

Displays quiver plot of the data stored in the file

Parameters:
  • filename (string) – the absolute path of the text file
  • on_img (Bool, optional) – if True, display the vector field on top of the image provided by image_name
  • image_name (string, optional) – path to the image to plot the vector field onto when on_img is True
  • window_size (int, optional) – when on_img is True, provide the interrogation window size to fit the background image to the vector field
  • scaling_factor (float, optional) – when on_img is True, provide the scaling factor to scale the background image to the vector field
  • show_invalid (bool, optional) – whether or not to show the invalid vectors; default is True
Keyword arguments (additional parameters, optional):

scale : [None | float]
width : [None | float]

See also: matplotlib.pyplot.quiver

Examples

— only the vector field:

>>> openpiv.tools.display_vector_field('./exp1_0000.txt', scale=100, width=0.0025)

— vector field on top of the image:

>>> openpiv.tools.display_vector_field(Path('./exp1_0000.txt'), on_img=True,
...     image_name=Path('exp1_001_a.bmp'), window_size=32, scaling_factor=70,
...     scale=100, width=0.0025)
openpiv.tools.display_windows_sampling(x, y, window_size, skip=0, method='standard')[source]

Displays a map of the interrogation points and windows

Parameters:
  • x (2d np.ndarray) – a two dimensional array containing the x coordinates of the interrogation window centers, in pixels.
  • y (2d np.ndarray) – a two dimensional array containing the y coordinates of the interrogation window centers, in pixels.
  • window_size (the interrogation window size, in pixels) –
  • skip (int) – the number of windows to skip on a row during display. Recommended value is 0 or 1 for the standard method; it can be larger for the random method. Use -1 to not show any window
  • method (string) – ‘standard’ (uniform sampling and constant window size) or ‘random’ (pick some windows randomly)

Examples

>>> openpiv.tools.display_windows_sampling(x, y, window_size=32, skip=0, method='standard')
openpiv.tools.imread(filename, flatten=0)[source]

Read an image file into a numpy array using imageio imread

Parameters:
  • filename (string) – the absolute path of the image file
  • flatten (bool) – True if the image is RGB color (it will be flattened to grey levels), False (default) if it is already greyscale
Returns:

frame – a numpy array with grey levels

Return type:

np.ndarray

Examples

>>> image = openpiv.tools.imread( 'image.bmp' )
>>> print(image.shape)
    (1280, 1024)
openpiv.tools.imsave(filename, arr)[source]

Write an image file from a numpy array using imageio imwrite

Parameters:
  • filename (string) – the absolute path of the image file that will be created
  • arr (2d np.ndarray) – a 2d numpy array with grey levels

Example

>>> image = openpiv.tools.imread( 'image.bmp' )
>>> image2 = openpiv.tools.negative(image)
>>> imsave( 'negative-image.tif', image2)
openpiv.tools.mark_background(threshold: float, list_img: list, filename: str) → numpy.ndarray[source]

marks background

Parameters:
  • threshold (float) – threshold
  • list_img (list of images) – _description_
  • filename (str) – image filename to save the mask
Returns:

_description_

Return type:

_type_

openpiv.tools.natural_sort(file_list: List[pathlib.Path]) → List[pathlib.Path][source]

Creates naturally sorted list

openpiv.tools.negative(image)[source]

Return the negative of an image

Parameters: image (2d np.ndarray) – a 2d array of grey levels

Returns: (255 - image)
Return type: 2d np.ndarray of grey levels
openpiv.tools.rgb2gray(rgb: numpy.ndarray) → numpy.ndarray[source]

Converts an RGB image to greyscale

Parameters:rgb (_type_) – numpy.ndarray, image size, three channels
Returns:numpy.ndarray of the same shape, one channel
Return type:gray
openpiv.tools.save(filename: Union[pathlib.Path, str], x: numpy.ndarray, y: numpy.ndarray, u: numpy.ndarray, v: numpy.ndarray, flags: Optional[numpy.ndarray] = None, mask: Optional[numpy.ndarray] = None, fmt: str = '%.4e', delimiter: str = '\t') → None[source]

Save flow field to an ascii file.

Parameters:
  • filename (string) – the path of the file where to save the flow field
  • x (2d np.ndarray) – a two dimensional array containing the x coordinates of the interrogation window centers, in pixels.
  • y (2d np.ndarray) – a two dimensional array containing the y coordinates of the interrogation window centers, in pixels.
  • u (2d np.ndarray) – a two dimensional array containing the u velocity components, in pixels/seconds.
  • v (2d np.ndarray) – a two dimensional array containing the v velocity components, in pixels/seconds.
  • flags (2d np.ndarray) – a two dimensional integers array where elements corresponding to vectors: 0 - valid, 1 - invalid (, 2 - interpolated) default: None, will create all valid 0
  • mask (2d np.ndarray boolean, marks the image masked regions (dynamic and/or static)) – default: None - will be all False
  • fmt (string) – a format string. See documentation of numpy.savetxt for more details.
  • delimiter (string) – character separating columns

Examples

openpiv.tools.save('field_001.txt', x, y, u, v, flags, mask, fmt='%6.3f', delimiter=' ')
openpiv.tools.sorted_unique(array: numpy.ndarray) → numpy.ndarray[source]

Creates sorted unique array

openpiv.tools.transform_coordinates(x, y, u, v)[source]

Converts the coordinate system from/to the image-based / physics-based convention.

Input/Output: x, y, u, v

Image-based: the origin (0,0) is at the top left, x increases along the columns to the right, y increases along the rows downwards, and u, v follow the same convention.

Physical (right-handed): the origin (0,0) is at the bottom left, so that positive vorticity corresponds to counterclockwise rotation.
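
Example

A minimal sketch converting a uniform field from the image convention to the physical one before plotting or saving (the grid parameters are arbitrary):

>>> import numpy as np
>>> from openpiv import tools, pyprocess
>>> x, y = pyprocess.get_coordinates(image_size=(512, 512), search_area_size=32, overlap=16)
>>> u = np.ones_like(x, dtype=float)
>>> v = np.ones_like(x, dtype=float)
>>> x, y, u, v = tools.transform_coordinates(x, y, u, v)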

The openpiv.pyprocess module

This module contains a pure python implementation of the basic cross-correlation algorithm for PIV image processing.

openpiv.pyprocess.correlate_windows(window_a, window_b, correlation_method='fft', convolve2d=<function convolve2d>, rfft2=<function rfft2>, irfft2=<function irfft2>)[source]

Compute the correlation function between two interrogation windows. The correlation function can be computed by using the correlation theorem to speed up the computation.

Parameters:
  • window_a (2d np.ndarray) – a two dimensional array for the first interrogation window
  • window_b (2d np.ndarray) – a two dimensional array for the second interrogation window
  • correlation_method (string) – ‘circular’ - FFT based without zero-padding; ‘linear’ - FFT based with zero-padding; ‘direct’ - linear convolution based. Default is ‘fft’, which is much faster.
  • convolve2d (function) – function used for 2d convolutions
  • rfft2 (function) – function used for rfft2
  • irfft2 (function) – function used for irfft2
Returns:

corr (2d np.ndarray) – a two dimensional array for the correlation function.

Note that, due to the wish to use 2^N windows for faster FFT, we use a slightly different convention for the size of the correlation map. The theory says it is M+N-1; the ‘direct’ method returns this size, while the FFT-based methods return M+N, where M is the window_size and N is the search_area_size. This leads to an inconsistency in the output size.

openpiv.pyprocess.correlation_to_displacement(corr, n_rows, n_cols, subpixel_method='gaussian')[source]

Correlation maps are converted to a displacement for each interrogation window, using the convention that the size of the correlation map is 2N-1, where N is the size of the largest interrogation window (in frame B), called search_area_size.

Inputs:

corr : 3D nd.array – contains the output of fft_correlate_images

n_rows, n_cols : int – number of interrogation windows, output of get_field_shape
openpiv.pyprocess.extended_search_area_piv(frame_a: numpy.ndarray, frame_b: numpy.ndarray, window_size: Union[int, Tuple[int, int]], overlap: Union[int, Tuple[int, int]] = (0, 0), dt: float = 1.0, search_area_size: Union[int, Tuple[int, int], None] = None, correlation_method: str = 'circular', subpixel_method: str = 'gaussian', sig2noise_method: Optional[str] = 'peak2mean', width: int = 2, normalized_correlation: bool = False, use_vectorized: bool = False)[source]

Standard PIV cross-correlation algorithm, with an option for an extended area search that increases the dynamic range. The search region in the second frame is larger than the interrogation window size in the first frame. For the Cython implementation see openpiv.process.extended_search_area_piv

This is a pure python implementation of the standard PIV cross-correlation algorithm. It is a zero order displacement predictor, and no iterative process is performed.

Parameters:
  • frame_a (2d np.ndarray) – a two dimensional array of integers containing grey levels of the first frame.
  • frame_b (2d np.ndarray) – a two dimensional array of integers containing grey levels of the second frame.
  • window_size (int) – the size of the (square) interrogation window, [default: 32 pix].
  • overlap (int) – the number of pixels by which two adjacent windows overlap [default: 16 pix].
  • dt (float) – the time delay separating the two frames [default: 1.0].
  • correlation_method (string) – one of the two methods implemented: ‘circular’ or ‘linear’; default: ‘circular’, which is faster and uses no zero-padding. ‘linear’ also requires normalized_correlation = True (see below)
  • subpixel_method (string) – one of the following methods to estimate subpixel location of the peak: ‘centroid’ [replaces default if correlation map is negative], ‘gaussian’ [default if correlation map is positive], ‘parabolic’.
  • sig2noise_method (string) – defines the method of signal-to-noise-ratio measure, (‘peak2peak’ or ‘peak2mean’. If None, no measure is performed.)
  • width (int) – the half size of the region around the first correlation peak to ignore for finding the second peak. [default: 2]. Only used if sig2noise_method==peak2peak.
  • search_area_size (int) – the size of the interrogation window in the second frame, default is the same interrogation window size and it is a fallback to the simplest FFT based PIV
  • normalized_correlation (bool) – if True, then the image intensity will be modified by removing the mean, dividing by the standard deviation and the correlation map will be normalized. It’s slower but could be more robust
Returns:

  • u (2d np.ndarray) – a two dimensional array containing the u velocity component, in pixels/seconds.
  • v (2d np.ndarray) – a two dimensional array containing the v velocity component, in pixels/seconds.
  • sig2noise (2d np.ndarray, ( optional: only if sig2noise_method != None )) – a two dimensional array of the signal to noise ratio for each window pair.

This is an implementation of the one-step direct correlation with different sizes of the interrogation window and the search area. The increased size of the search area copes with the problem of loss of pairs due to in-plane motion, allowing for a smaller interrogation window size without increasing the number of outlier vectors.

See:

R. J. Adrian, Particle-Imaging Techniques for Experimental Fluid Mechanics, Annual Review of Fluid Mechanics, Vol. 23: 261-304 (1991). DOI: 10.1146/annurev.fl.23.010191.001401

originally implemented in process.pyx in Cython and converted to a NumPy vectorized solution in pyprocess.py
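
Example

A minimal end-to-end sketch; the frame file names, the time step and the scaling factor are placeholders:

>>> import numpy as np
>>> from openpiv import tools, pyprocess, validation, filters, scaling
>>> frame_a = tools.imread('exp1_001_a.bmp')   # hypothetical file names
>>> frame_b = tools.imread('exp1_001_b.bmp')
>>> u, v, sig2noise = pyprocess.extended_search_area_piv(
...     frame_a.astype(np.int32), frame_b.astype(np.int32),
...     window_size=32, overlap=16, dt=0.02,
...     search_area_size=64, sig2noise_method='peak2peak')
>>> x, y = pyprocess.get_coordinates(image_size=frame_a.shape,
...                                  search_area_size=64, overlap=16)
>>> flags = validation.sig2noise_val(sig2noise, threshold=1.05)
>>> u, v = filters.replace_outliers(u, v, flags, method='localmean',
...                                 max_iter=3, kernel_size=2)
>>> x, y, u, v = scaling.uniform(x, y, u, v, scaling_factor=96.52)  # pixels per metre, placeholder
>>> tools.save('exp1_001.txt', x, y, u, v, flags)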

openpiv.pyprocess.fft_correlate_images(image_a: numpy.ndarray, image_b: numpy.ndarray, correlation_method: str = 'circular', normalized_correlation: bool = True, conj: Callable = <ufunc 'conjugate'>, rfft2=<function rfft2>, irfft2=<function irfft2>, fftshift=<function fftshift>) → numpy.ndarray[source]

FFT based cross correlation of two images with multiple views created by np.stride_tricks(). The 2D FFT should be applied to the last two axes (-2, -1); the zero axis is the number of the interrogation window. This should also work out of the box for rectangular windows.

Parameters:
  • image_a (3d np.ndarray) – first dimension is the number of windows, the two last dimensions are the interrogation windows of the first image
  • image_b (3d np.ndarray) – similar
  • correlation_method (string) – one of the methods implemented: ‘circular’ or ‘linear’ [default: ‘circular’]
  • normalized_correlation (bool) – decides whether normalized correlation is done or not: True or False [default: True]
  • conj (function) – function used for complex conjugate
  • rfft2 (function) – function used for rfft2
  • irfft2 (function) – function used for irfft2
  • fftshift (function) – function used for fftshift
openpiv.pyprocess.fft_correlate_windows(window_a, window_b, rfft2=<function rfft2>, irfft2=<function irfft2>)[source]

FFT based cross correlation. It is so-called linear correlation (convolution based), since we increase the size of the FFT to reduce the edge effects. This should also work out of the box for rectangular windows.

Parameters:
  • window_a (2d np.ndarray) – a two dimensions array for the first interrogation window
  • window_b (2d np.ndarray) – a two dimensions array for the second interrogation window
  • rfft2 (function) – function used for rfft2
  • irfft2 (function) – function used for irfft2

Example (adapted from Stackoverflow; works for rectangular windows as well):

from scipy import signal
import numpy as np

x = np.array([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 3, 0],
              [0, 0, 0, 1], [0, 0, 0, 1]], dtype=float)
y = np.array([[4, 5], [3, 4]], dtype=float)
print("conv:", signal.convolve2d(x, y, 'full'))

s1 = np.array(x.shape)
s2 = np.array(y.shape)
size = s1 + s2 - 1
fsize = 2 ** np.ceil(np.log2(size)).astype(int)
fslice = tuple([slice(0, int(sz)) for sz in size])
new_x = np.fft.fft2(x, fsize)
new_y = np.fft.fft2(y, fsize)
result = np.fft.ifft2(new_x * new_y)[fslice].copy()
print("fft for my method:", result.real)

openpiv.pyprocess.find_all_first_peaks(corr)[source]

Find row and column indices of the first correlation peak.

Parameters: corr (np.ndarray) – the correlation map of the strided images (N,K,M), where N is the number of windows and KxM is the interrogation window size
Returns:
  • index_list (integers, index of the peak position in (N,i,j))
  • peaks_max (amplitude of the peak)
openpiv.pyprocess.find_all_second_peaks(corr, width=2)[source]

Find row and column indices of the second correlation peak.

Parameters:
  • corr (np.ndarray) – the correlation map of the strided images (N,K,M), where N is the number of windows and KxM is the interrogation window size
  • width (int) – the half size of the region around the first correlation peak to ignore for finding the second peak
Returns:

  • index_list (integers, index of the peak position in (N,i,j))
  • peaks_max (amplitude of the peak)

openpiv.pyprocess.find_first_peak(corr)[source]

Find row and column indices of the first correlation peak.

Parameters: corr (np.ndarray) – the correlation map of the strided images (N,K,M), where N is the number of windows and KxM is the interrogation window size
Returns:
  • (i,j) (integers, index of the peak position)
  • peak (amplitude of the peak)
openpiv.pyprocess.find_second_peak(corr, i=None, j=None, width=2)[source]

Find the value of the second largest peak.

The second largest peak is the height of the peak in the region outside a 3x3 submatrix around the first correlation peak.

Parameters:
  • corr (np.ndarray) – the correlation map.
  • i,j (ints) – row and column location of the first peak.
  • width (int) – the half size of the region around the first correlation peak to ignore for finding the second peak.
Returns:

  • i (int) – the row index of the second correlation peak.
  • j (int) – the column index of the second correlation peak.
  • corr_max2 (int) – the value of the second correlation peak.

openpiv.pyprocess.find_subpixel_peak_position(corr, subpixel_method='gaussian')[source]

Find subpixel approximation of the correlation peak.

This function returns a subpixel approximation of the correlation peak by using one of the several methods available. If requested, the function also returns the signal to noise ratio level evaluated from the correlation map.

Parameters:
  • corr (np.ndarray) – the correlation map.
  • subpixel_method (string) – one of the following methods to estimate subpixel location of the peak: ‘centroid’ [replaces default if correlation map is negative], ‘gaussian’ [default if correlation map is positive], ‘parabolic’.
Returns:

subp_peak_position – the fractional row and column indices for the sub-pixel approximation of the correlation peak. If the first peak is on the border of the correlation map or any other problem, the returned result is a tuple of NaNs.

Return type:

two elements tuple
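
Example

An illustrative sketch on a synthetic correlation map with a single broad peak (the peak location is arbitrary):

>>> import numpy as np
>>> from openpiv import pyprocess
>>> yy, xx = np.mgrid[0:32, 0:32]
>>> corr = np.exp(-((xx - 17.3)**2 + (yy - 14.6)**2) / 8.0)
>>> peak = pyprocess.find_subpixel_peak_position(corr, subpixel_method='gaussian')  # fractional (row, col)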

openpiv.pyprocess.get_coordinates(image_size: Tuple[int, int], search_area_size: int, overlap: int, center_on_field: bool = True) → Tuple[numpy.ndarray, numpy.ndarray][source]

Compute the x, y coordinates of the centers of the interrogation windows. For SQUARE windows only; see also get_rect_coordinates.

The origin (0,0) is like in the image, at the top left corner: positive x is an increasing column index from left to right, and positive y is an increasing row index from top to bottom.

Parameters:
  • image_size (two elements tuple) – a two dimensional tuple for the pixel size of the image first element is number of rows, second element is the number of columns.
  • search_area_size (int) – the size of the search area windows, sometimes it’s equal to the interrogation window size in both frames A and B
  • overlap (int = 0 (default is no overlap)) – the number of pixels by which two adjacent interrogation windows overlap.
Returns:

  • x (2d np.ndarray) – a two dimensional array containing the x coordinates of the interrogation window centers, in pixels.

  • y (2d np.ndarray) – a two dimensional array containing the y coordinates of the interrogation window centers, in pixels.

    Coordinate system 0,0 is at the top left corner, positive x to the right, positive y from top downwards, i.e. image coordinate system

openpiv.pyprocess.get_field_shape(image_size: Tuple[int, int], search_area_size: Tuple[int, int], overlap: Tuple[int, int]) → Tuple[int, int][source]

Compute the shape of the resulting flow field.

Given the image size, the interrogation window size and the overlap size, it is possible to calculate the number of rows and columns of the resulting flow field.

Parameters:
  • image_size (two elements tuple) – a two dimensional tuple for the pixel size of the image first element is number of rows, second element is the number of columns, easy to obtain using .shape
  • search_area_size (tuple) – the size of the interrogation windows (if equal in frames A,B) or the search area (in frame B), the largest of the two
  • overlap (tuple) – the number of pixels by which two adjacent interrogation windows overlap.
Returns:

field_shape – the shape of the resulting flow field

Return type:

2-element tuple
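
Example

A quick sketch of the field-shape arithmetic for a square image and square windows:

>>> from openpiv import pyprocess
>>> shape = pyprocess.get_field_shape(image_size=(512, 512),
...                                   search_area_size=(32, 32),
...                                   overlap=(16, 16))   # (rows, columns) of the vector field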

openpiv.pyprocess.get_rect_coordinates(image_size: Tuple[int, int], window_size: Union[int, Tuple[int, int]], overlap: Union[int, Tuple[int, int]], center_on_field: bool = False)[source]

Rectangular grid version of get_coordinates.

openpiv.pyprocess.moving_window_array(array, window_size, overlap)[source]

This is a nice numpy trick. The concept of numpy strides should be clear to understand this code.

Basically, we have a 2d array and we want to perform cross-correlation over the interrogation windows. An approach could be to loop over the array, but loops are expensive in Python. So we create from the array a new array with three dimensions, of size (n_windows, window_size, window_size), in which each slice (along the first axis) is an interrogation window.

openpiv.pyprocess.nextpower2(i)[source]

Find 2^n that is equal to or greater than i.

openpiv.pyprocess.normalize_intensity(window)[source]
Normalize interrogation window or strided image of many windows,
by removing the mean intensity value per window and clipping the negative values to zero
Parameters:window (2d np.ndarray) – the interrogation window array
Returns:window – the interrogation window array, with mean value equal to zero and intensity normalized to -1 +1 and clipped if some pixels are extra low/high
Return type:2d np.ndarray
openpiv.pyprocess.sig2noise_ratio(correlation: numpy.ndarray, sig2noise_method: str = 'peak2peak', width: int = 2) → numpy.ndarray[source]

Computes the signal to noise ratio from the correlation map.

The signal to noise ratio is computed from the correlation map with one of two available methods. It is a measure of the quality of the matching between two interrogation windows.

Parameters:
  • corr (3d np.ndarray) – the correlation maps of the image pair, concatenated along 0th axis
  • sig2noise_method (string) – the method for evaluating the signal to noise ratio value from the correlation map. Can be peak2peak, peak2mean or None if no evaluation should be made.
  • width (int, optional) – the half size of the region around the first correlation peak to ignore for finding the second peak. [default: 2]. Only used if sig2noise_method==peak2peak.
Returns:

sig2noise – the signal to noise ratios from the correlation maps.

Return type:

np.array

openpiv.pyprocess.sliding_window_array(image: numpy.ndarray, window_size: Tuple[int, int] = (64, 64), overlap: Tuple[int, int] = (32, 32)) → numpy.ndarray[source]

This version does not use numpy as_strided and is much more memory efficient. Basically, we have a 2d array and we want to perform cross-correlation over the interrogation windows. An approach could be to loop over the array, but loops are expensive in Python. So we create from the array a new array with three dimensions, of size (n_windows, window_size, window_size), in which each slice (along the first axis) is an interrogation window.
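
Example

A short sketch showing the resulting stack of windows for a synthetic image:

>>> import numpy as np
>>> from openpiv import pyprocess
>>> image = np.random.rand(256, 256)
>>> windows = pyprocess.sliding_window_array(image, window_size=(32, 32),
...                                          overlap=(16, 16))   # shape (n_windows, 32, 32)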

openpiv.pyprocess.vectorized_correlation_to_displacements(corr: numpy.ndarray, n_rows: Optional[int] = None, n_cols: Optional[int] = None, subpixel_method: str = 'gaussian', eps: float = 1e-07)[source]

Correlation maps are converted to a displacement for each interrogation window, using the convention that the size of the correlation map is 2N-1, where N is the size of the largest interrogation window (in frame B), called search_area_size.

Parameters:
  • corr (3D nd.array) – contains output of the fft_correlate_images
  • n_rows, n_cols (int) – number of interrogation windows, output of get_field_shape
  • mask_width (int) – distance, in pixels, from the interrogation window in which correlation peaks would be flagged as invalid
Returns:

u, v – 2d array of displacements in pixels/dt

Return type:

2D nd.array

openpiv.pyprocess.vectorized_sig2noise_ratio(correlation, sig2noise_method='peak2peak', width=2)[source]

Computes the signal to noise ratio from the correlation map in a mostly vectorized approach, thus much faster.

The signal to noise ratio is computed from the correlation map with one of two available methods. It is a measure of the quality of the matching between two interrogation windows.

Parameters:
  • corr (3d np.ndarray) – the correlation maps of the image pair, concatenated along 0th axis
  • sig2noise_method (string) – the method for evaluating the signal to noise ratio value from the correlation map. Can be peak2peak, peak2mean or None if no evaluation should be made.
  • width (int, optional) – the half size of the region around the first correlation peak to ignore for finding the second peak. [default: 2]. Only used if sig2noise_method==peak2peak.
Returns:

sig2noise – the signal to noise ratios from the correlation maps.

Return type:

np.array

The openpiv.process module

The openpiv.lib module

openpiv.lib.replace_nans(array, max_iter, tol, kernel_size=2, method='disk')[source]
Replace NaN elements in an array using an iterative image inpainting
algorithm.

The algorithm is the following:

  1. For each element in the input array, replace it by a weighted average of the neighbouring elements which are not NaN themselves. The weights depend on the method type. See Methods below.
  2. Several iterations are needed if there are adjacent NaN elements. If this is the case, information is “spread” from the edges of the missing regions iteratively, until the variation is below a certain threshold.

Methods:

localmean - A square kernel where all elements have the same value; weights are equal to n/((2*kernel_size+1)**2 - 1), where n is the number of non-NaN elements.

disk - A circular kernel where all elements have the same value; the kernel is calculated by::

    if ((S-i)**2 + (S-j)**2)**0.5 <= S:
        kernel[i,j] = 1.0
    else:
        kernel[i,j] = 0.0

where S is the kernel radius.

distance - A circular inverse-distance kernel where elements are weighted proportionally to their distance from the center of the kernel; elements farther away have less weight. Elements outside the specified radius are set to 0.0 as in ‘disk’; the remaining weights are calculated as::

    maxDist = ((S)**2 + (S)**2)**0.5
    kernel[i,j] = -1*(((S-i)**2 + (S-j)**2)**0.5 - maxDist)

where S is the kernel radius.

Parameters:
  • array (2d or 3d np.ndarray) – an array containing NaN elements that have to be replaced if array is a masked array (numpy.ma.MaskedArray), then the mask is reapplied after the replacement
  • max_iter (int) – the number of iterations
  • tol (float) – On each iteration check if the mean square difference between values of replaced elements is below a certain tolerance tol
  • kernel_size (int) – the size of the kernel, default is 2
  • method (str) – the method used to replace invalid values. Valid options are localmean, disk, and distance.
Returns:

filled – a copy of the input array, where NaN elements have been replaced.

Return type:

2d or 3d np.ndarray
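
Example

A minimal sketch that fills a small patch of NaN values in a synthetic array:

>>> import numpy as np
>>> from openpiv.lib import replace_nans
>>> a = np.random.rand(64, 64)
>>> a[20:23, 30:33] = np.nan     # a small patch of missing values
>>> filled = replace_nans(a, max_iter=10, tol=1e-3, kernel_size=2, method='disk')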

The openpiv.filters module

The openpiv.filters module contains some filtering/smoothing routines.

openpiv.filters.gaussian(u: numpy.ndarray, v: numpy.ndarray, half_width: int = 1) → Tuple[numpy.ndarray, numpy.ndarray][source]

Smooths the velocity field with a Gaussian kernel.

Parameters:
  • u (2d np.ndarray) – the u velocity component field
  • v (2d np.ndarray) – the v velocity component field
  • half_width (int) – the half width of the kernel. Kernel has shape 2*half_width+1, default = 1
Returns:

  • uf (2d np.ndarray) – the smoothed u velocity component field
  • vf (2d np.ndarray) – the smoothed v velocity component field

openpiv.filters.gaussian_kernel(sigma: float, truncate: float = 4.0) → numpy.ndarray[source]

Return Gaussian that truncates at the given number of standard deviations.

openpiv.filters.replace_outliers(u: numpy.ndarray, v: numpy.ndarray, flags: numpy.ndarray, w: Optional[numpy.ndarray] = None, method: str = 'localmean', max_iter: int = 5, tol: float = 0.001, kernel_size: int = 1) → Tuple[numpy.ndarray, ...][source]
Replace invalid vectors in a velocity field using an iterative image
inpainting algorithm.

The algorithm is the following:

  1. For each element in the arrays of the u and v components, replace it by a weighted average of the neighbouring elements which are not invalid themselves. The weights depend on the method type. If method=localmean, the weights are equal to 1/((2*kernel_size+1)**2 - 1).
  2. Several iterations are needed if there are adjacent invalid elements. If this is the case, information is “spread” from the edges of the missing regions iteratively, until the variation is below a certain threshold.
Parameters:
  • u (2d or 3d np.ndarray) – the u velocity component field
  • v (2d or 3d np.ndarray) – the v velocity component field
  • w (2d or 3d np.ndarray) – the w velocity component field
  • flags (2d array of positions with invalid vectors) –
  • grid_mask (2d array of positions masked by the user) –
  • max_iter (int) – the number of iterations
  • kernel_size (int) – the size of the kernel, default is 1
  • method (str) – the type of kernel used for repairing missing vectors
Returns:

  • uf (2d or 3d np.ndarray) – the smoothed u velocity component field, where invalid vectors have been replaced
  • vf (2d or 3d np.ndarray) – the smoothed v velocity component field, where invalid vectors have been replaced
  • wf (2d or 3d np.ndarray) – the smoothed w velocity component field, where invalid vectors have been replaced
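
Example

A minimal sketch replacing a single invalid vector in a synthetic field:

>>> import numpy as np
>>> from openpiv import filters
>>> u = np.random.rand(30, 30)
>>> v = np.random.rand(30, 30)
>>> flags = np.zeros_like(u, dtype=bool)
>>> flags[10, 10] = True          # mark one vector as invalid
>>> u, v = filters.replace_outliers(u, v, flags, method='localmean',
...                                 max_iter=3, kernel_size=2)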

The openpiv.validation module

A module for spurious vector detection.

openpiv.validation.global_std(u: numpy.ndarray, v: numpy.ndarray, std_threshold: int = 5) → numpy.ndarray[source]

Eliminate spurious vectors with a global threshold defined by the standard deviation

This validation method tests for the spatial consistency of the data, and outlier vectors are replaced with NaN (Not a Number) if at least one of the two velocity components is out of a specified global range.

Parameters:
  • u (2d masked np.ndarray) – a two dimensional array containing the u velocity component.
  • v (2d masked np.ndarray) – a two dimensional array containing the v velocity component.
  • std_threshold (float) – if the length of the vector (actually the sum of squared components) is larger than std_threshold times the standard deviation of the flow field, then the vector is treated as an outlier. [default = 5]
Returns:

flag – a boolean array. True elements correspond to outliers.

Return type:

boolean 2d np.ndarray

openpiv.validation.global_val(u: numpy.ndarray, v: numpy.ndarray, u_thresholds: Tuple[int, int], v_thresholds: Tuple[int, int]) → numpy.ndarray[source]

Eliminate spurious vectors with a global threshold.

This validation method tests for the spatial consistency of the data, and outlier vectors are replaced with NaN (Not a Number) if at least one of the two velocity components is out of a specified global range.

Parameters:
  • u (2d np.ndarray) – a two dimensional array containing the u velocity component.
  • v (2d np.ndarray) – a two dimensional array containing the v velocity component.
  • u_thresholds (two elements tuple) – u_thresholds = (u_min, u_max). If u<u_min or u>u_max the vector is treated as an outlier.
  • v_thresholds (two elements tuple) – v_thresholds = (v_min, v_max). If v<v_min or v>v_max the vector is treated as an outlier.
Returns:

flag – a boolean array. True elements correspond to outliers.

Return type:

boolean 2d np.ndarray
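
Example

A minimal sketch flagging vectors outside a fixed velocity range (the thresholds are arbitrary):

>>> import numpy as np
>>> from openpiv import validation
>>> u = np.random.normal(0.0, 1.0, (30, 30))
>>> v = np.random.normal(0.0, 1.0, (30, 30))
>>> flags = validation.global_val(u, v, u_thresholds=(-3.0, 3.0), v_thresholds=(-3.0, 3.0))
>>> n_outliers = flags.sum()   # number of vectors flagged as outliers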

openpiv.validation.local_median_val(u, v, u_threshold, v_threshold, size=1)[source]

Eliminate spurious vectors with a local median threshold.

This validation method tests for the spatial consistency of the data. Vectors are classified as outliers and replaced with NaN (Not a Number) if the absolute difference with the local median is greater than a user-specified threshold. The median is computed for both velocity components.

The image masked areas (obstacles, reflections) are marked with a masked array:

u = np.ma.masked_array(u, mask=image_mask)

and should not be replaced by the local median, but remain masked.

Parameters:
  • u (2d np.ndarray) – a two dimensional array containing the u velocity component.
  • v (2d np.ndarray) – a two dimensional array containing the v velocity component.
  • u_threshold (float) – the threshold value for component u
  • v_threshold (float) – the threshold value for component v
Returns:

flag – a boolean array. True elements correspond to outliers.

Return type:

boolean 2d np.ndarray

openpiv.validation.sig2noise_val(s2n: numpy.ndarray, threshold: float = 1.0) → numpy.ndarray[source]

Marks spurious vectors if signal to noise ratio is below a specified threshold.

Parameters:
  • u (2d or 3d np.ndarray) – a two or three dimensional array containing the u velocity component.
  • v (2d or 3d np.ndarray) – a two or three dimensional array containing the v velocity component.
  • s2n (2d np.ndarray) – a two or three dimensional array containing the value of the signal to noise ratio from cross-correlation function.
  • w (2d or 3d np.ndarray) – a two or three dimensional array containing the w (in z-direction) velocity component.
  • threshold (float) – the signal to noise ratio threshold value.
Returns:

flag – a boolean array. True elements correspond to outliers.

Return type:

boolean 2d np.ndarray

References

    R. D. Keane and R. J. Adrian, Measurement Science & Technology, 1990, 1, 1202-1215.

openpiv.validation.typical_validation(u: numpy.ndarray, v: numpy.ndarray, s2n: numpy.ndarray, settings: PIVSettings) → numpy.ndarray[source]

Validation using global limits, standard deviation and local median,

with a special option of ‘no_std’ for the case of completely uniform shift, e.g. in tests.

see windef.PIVSettings() for the parameters:

MinMaxU : two elements tuple
sets the limits of the u displacement component, used for validation.
MinMaxV : two elements tuple
sets the limits of the v displacement component, used for validation.
std_threshold : float
sets the threshold for the std validation
median_threshold : float
sets the threshold for the median validation

The openpiv.scaling module

Scaling utilities

openpiv.scaling.uniform(x, y, u, v, scaling_factor)[source]

Apply a uniform scaling

Parameters:
  • x (2d np.ndarray) –
  • y (2d np.ndarray) –
  • u (2d np.ndarray) –
  • v (2d np.ndarray) –
  • scaling_factor (float) – the image scaling factor in pixels per meter
Returns:

  • x (2d np.ndarray)
  • y (2d np.ndarray)
  • u (2d np.ndarray)
  • v (2d np.ndarray)
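
Example

A minimal sketch converting a field from pixel units to physical units; the grid parameters and the scaling factor are placeholders:

>>> import numpy as np
>>> from openpiv import pyprocess, scaling
>>> x, y = pyprocess.get_coordinates(image_size=(512, 512), search_area_size=32, overlap=16)
>>> u = np.ones_like(x, dtype=float)
>>> v = np.ones_like(x, dtype=float)
>>> x, y, u, v = scaling.uniform(x, y, u, v, scaling_factor=96.52)  # pixels per metre, placeholder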