pylablib.core.dataproc package¶
Submodules¶
pylablib.core.dataproc.callable module¶
-
class
pylablib.core.dataproc.callable.
ICallable
[source]¶ Bases:
object
Fit function generalization.
Has a set of mandatory arguments with no default values and a set of parameters with default values (there may or may not be an explicit list of them).
All the arguments are passed explicitly by name. Passed values supersede default values. Extra arguments (not used in the calculations) are ignored.
Assumed (but not enforced) to be immutable: changes after creation can break the behavior.
Implements (possibly; depends on the subclass) call name-list binding for speed: if the function is to be called many times with the same parameter name list, one can first bind the parameter list, and then call the bound function with the corresponding arguments. This way, callable(**p) should be equivalent to callable.bind(p.keys())(*p.values()).
-
has_arg
(arg_name)[source]¶ Determine if the function has an argument arg_name (of all 3 categories).
-
filter_args_dict
(args)[source]¶ Filter argument names dictionary to leave only the arguments that are used.
-
get_mandatory_args
()[source]¶ Return list of mandatory arguments (these are the ones without default values).
-
get_arg_default
(arg_name)[source]¶ Return default value of the argument arg_name.
Raise
KeyError
if the argument is not defined or ValueError
if it has no default value.
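The name-list binding described for ICallable can be illustrated with a minimal stand-in class (a simplified sketch for illustration only, not the actual pylablib implementation):

```python
# Minimal illustration of the name-list binding idea: a toy fit-function
# wrapper with defaults, where all arguments are passed by name.
# This is a hypothetical stand-in, not pylablib code.

class SimpleCallable:
    def __init__(self, func, defaults=None):
        self.func = func
        self.defaults = defaults or {}

    def __call__(self, **kwargs):
        params = dict(self.defaults)
        params.update(kwargs)  # passed values supersede defaults
        return self.func(**params)

    def bind(self, names):
        """Pre-record the argument name list; the bound function takes positional values."""
        names = list(names)
        def bound(*values):
            return self(**dict(zip(names, values)))
        return bound

def line(x, a, b=0.0):
    return a * x + b

f = SimpleCallable(line, defaults={"b": 1.0})
p = {"x": 2.0, "a": 3.0}
# the two call styles are equivalent:
assert f(**p) == f.bind(p.keys())(*p.values())  # both give 7.0
```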
-
-
class
pylablib.core.dataproc.callable.
MultiplexedCallable
(func, multiplex_by, join_method='stack')[source]¶ Bases:
pylablib.core.dataproc.callable.ICallable
Multiplex a single callable based on a single parameter.
If the function is called with this parameter as an iterable, then the underlying callable will be called for each value of the parameter separately, and the results will be joined into a single array (if the return values are scalars, they're joined into a 1D array; otherwise, they're joined using join_method).
Parameters: - func (callable) – Function to be parallelized.
- multiplex_by (str) – Name of the argument to be multiplexed by.
- join_method (str) – Method for combining individual results together if they’re non-scalars.
Can be either 'list' (combine the results in a single list), 'stack' (combine using numpy.column_stack(), i.e., add a dimension to the result), or 'concatenate' (concatenate the return values; the dimension of the result stays the same).
Multiplexing also makes use of call signatures for the underlying function even if __call__ is used. Note that this operation is slow, and should only be used for high-dimensional multiplexing; in the 1D case it's much better to simply use numpy arrays as arguments and rely on numpy's vectorization.
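The multiplexing idea can be sketched in a few lines (a hypothetical helper for illustration, not the pylablib API; only the 'list' and 'concatenate' join methods are shown):

```python
# Sketch of multiplexing: call the function once per value of the multiplexed
# parameter and join the individual results. Hypothetical helper, not pylablib.

def multiplex_call(func, multiplex_by, join_method="list", **kwargs):
    values = kwargs.pop(multiplex_by)
    results = [func(**kwargs, **{multiplex_by: v}) for v in values]
    if join_method == "list":
        return results
    if join_method == "concatenate":  # flatten one level
        return [x for r in results for x in r]
    raise ValueError(join_method)

def scale(data, factor):
    return [x * factor for x in data]

# one underlying call per factor value, results joined in a list:
multiplex_call(scale, "factor", data=[1, 2], factor=[10, 100])
# → [[10, 20], [100, 200]]
```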
-
has_arg
(arg_name)[source]¶ Determine if the function has an argument arg_name (of all 3 categories).
-
get_mandatory_args
()[source]¶ Return list of mandatory arguments (these are the ones without default values).
-
get_arg_default
(arg_name)[source]¶ Return default value of the argument arg_name.
Raise
KeyError
if the argument is not defined or ValueError
if it has no default value.
-
class
pylablib.core.dataproc.callable.
JoinedCallable
(funcs, join_method='stack')[source]¶ Bases:
pylablib.core.dataproc.callable.ICallable
Join several callables sharing the same arguments list.
The results will be joined into a single array (if the return values are scalars, they're joined into a 1D array; otherwise, they're joined using join_method).
Parameters: - funcs ([callable]) – List of functions to be joined together.
- join_method (str) – Method for combining individual results together if they’re non-scalars.
Can be either 'list' (combine the results in a single list), 'stack' (combine using numpy.column_stack(), i.e., add a dimension to the result), or 'concatenate' (concatenate the return values; the dimension of the result stays the same).
-
has_arg
(arg_name)[source]¶ Determine if the function has an argument arg_name (of all 3 categories).
-
get_mandatory_args
()[source]¶ Return list of mandatory arguments (these are the ones without default values).
-
get_arg_default
(arg_name)[source]¶ Return default value of the argument arg_name.
Raise
KeyError
if the argument is not defined or ValueError
if it has no default value.
-
class
pylablib.core.dataproc.callable.
FunctionCallable
(func, function_signature=None, defaults=None, alias=None)[source]¶ Bases:
pylablib.core.dataproc.callable.ICallable
Callable based on a function or a method.
Parameters: - func – Function to be wrapped.
- function_signature – A
FunctionSignature
object supplying information about function’s argument names and default values, if they’re different from what’s extracted from its signature. - defaults (dict) – A dictionary
{name: value}
of additional default parameter values. These override the defaults from the signature. All default values must be passable to the function as parameters. - alias (dict) – A dictionary
{alias: original}
for renaming some of the original arguments. Original argument names can’t be used if aliased (though, multi-aliasing can be used explicitly, e.g.,alias={'alias':'arg','arg':'arg'}
). A name can be blocked (its usage causes error) if it’s aliased to None (alias={'blocked_name':None}
).
Optional non-named arguments in the form
*args
are not supported, since all the arguments are passed to the function by keyword. Optional named arguments in the form
**kwargs
are supported only if their default values are explicitly provided in defaults (otherwise it would be unclear whether argument should be added into**kwargs
or ignored altogether).-
has_arg
(arg_name)[source]¶ Determine if the function has an argument arg_name (of all 3 categories).
-
get_mandatory_args
()[source]¶ Return list of mandatory arguments (these are the ones without default values).
-
get_arg_default
(arg_name)[source]¶ Return default value of the argument arg_name.
Raise
KeyError
if the argument is not defined or ValueError
if it has no default value.
-
class
pylablib.core.dataproc.callable.
MethodCallable
(method, function_signature=None, defaults=None, alias=None)[source]¶ Bases:
pylablib.core.dataproc.callable.FunctionCallable
Similar to
FunctionCallable
, but accepts a class method instead of a function. The only addition is that the object's attributes can now also be parameters to the function: all the parameters which are not explicitly mentioned in the method signature are assumed to be object's attributes.
The parameters are affected by alias, but NOT affected by defaults (since it’s impossible to ensure that all object’s attributes are kept constant, and it’s impractical to reset them all to default values at every function call).
Parameters: - method – Method to be wrapped.
- function_signature – A
FunctionSignature
object supplying information about function's argument names and default values, if they're different from what's extracted from its signature. It is assumed that the first self argument is already excluded. - defaults (dict) – A dictionary
{name: value}
of additional default parameter values. These override the defaults from the signature. All default values must be passable to the function as parameters. - alias (dict) – A dictionary
{alias: original}
for renaming some of the original arguments. Original argument names can’t be used if aliased (though, multi-aliasing can be used explicitly, e.g.,alias={'alias':'arg','arg':'arg'}
). A name can be blocked (its usage causes error) if it’s aliased to None (alias={'blocked_name':None}
).
This callable is implemented largely to be used with
TheoryCalculator
class (currently deprecated).-
has_arg
(arg_name)[source]¶ Determine if the function has an argument arg_name (of all 3 categories).
-
get_arg_default
(arg_name)[source]¶ Return default value of the argument arg_name.
Raise
KeyError
if the argument is not defined or ValueError
if it has no default value.
-
pylablib.core.dataproc.callable.
to_callable
(func)[source]¶ Convert a function to an
ICallable
instance. If it's already
ICallable
, return unchanged. Otherwise, returnFunctionCallable
orMethodCallable
depending on whether it’s a function or a bound method.
pylablib.core.dataproc.feature module¶
Traces feature detection: peaks, baseline, local extrema.
-
class
pylablib.core.dataproc.feature.
Baseline
[source]¶ Bases:
pylablib.core.dataproc.feature.Baseline
Baseline (background) for a trace.
position is the background level, and width is its noise width.
-
pylablib.core.dataproc.feature.
get_baseline_simple
(trace, find_width=True)[source]¶ Get the baseline of the 1D trace.
If
find_width==True
, calculate its width as well.
-
pylablib.core.dataproc.feature.
subtract_baseline
(trace)[source]¶ Subtract baseline from the trace (make its background zero).
-
class
pylablib.core.dataproc.feature.
Peak
[source]¶ Bases:
pylablib.core.dataproc.feature.Peak
A trace peak.
kernel defines its shape (for, e.g., generation purposes).
-
pylablib.core.dataproc.feature.
find_peaks_cutoff
(trace, cutoff, min_width=0, kind='peak', subtract_bl=True)[source]¶ Find peaks in the data using cutoff.
Parameters: - trace – 1D data array.
- cutoff (float) – Cutoff value for the peak finding.
- min_width (int) – Minimal uninterrupted width (in datapoints) of a peak. Any peaks narrower than this are ignored.
- kind (str) – Peak kind. Can be
'peak'
(positive direction),'dip'
(negative direction) or'both'
(both directions). - subtract_bl (bool) – If
True
, subtract baseline of the trace before checking cutoff.
Returns: List of
Peak
objects.
-
pylablib.core.dataproc.feature.
rescale_peak
(peak, xoff=0.0, xscale=1.0, yoff=0, yscale=1.0)[source]¶ Rescale peak’s position, width and height.
xscale rescales position and width, xoff shifts position, yscale and yoff affect peak height.
-
pylablib.core.dataproc.feature.
peaks_sum_func
(peaks, peak_func='lorentzian')[source]¶ Create a function representing sum of peaks.
peak_func determines default peak kernel (used if
peak.kernel=="generic"
). Kernel is either a name string or a function taking 3 arguments(x, width, height)
.
-
pylablib.core.dataproc.feature.
get_kernel
(width, kernel_width=None, kernel='lorentzian')[source]¶ Get a finite-sized kernel.
Return 1D array of length
2*kernel_width+1
containing the given kernel. By default,kernel_width=int(width*3)
.
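The kernel-length convention above can be sketched directly (a simplified illustration using an unnormalized Lorentzian shape; the exact pylablib kernel functions and normalization may differ):

```python
# Sketch of building a finite Lorentzian kernel of length 2*kernel_width+1,
# following the convention above (kernel_width defaults to int(width*3)).
# Simplified illustration; not the actual pylablib kernel function.

def lorentzian_kernel(width, kernel_width=None):
    if kernel_width is None:
        kernel_width = int(width * 3)
    xs = range(-kernel_width, kernel_width + 1)
    return [1.0 / (1.0 + (x / width) ** 2) for x in xs]

k = lorentzian_kernel(2.0)
assert len(k) == 2 * int(2.0 * 3) + 1  # 13 points
assert max(k) == k[len(k) // 2]        # peak sits at the center
```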
-
pylablib.core.dataproc.feature.
get_peakdet_kernel
(peak_width, background_width, norm_width=None, kernel_width=None, kernel='lorentzian')[source]¶ Get a peak detection kernel.
Return 1D array of length
2*kernel_width+1
containing the kernel. The kernel is a sum of narrow positive peak (with the width peak_width) and a broad negative peak (with the width background_width); both widths are specified in datapoints (index). Each peak is normalized to have unit sum, i.e., the kernel has zero total sum. By default,kernel_width=int(background_width*3)
.
-
pylablib.core.dataproc.feature.
multi_scale_peakdet
(trace, widths, background_ratio, kind='peak', norm_ratio=None, kernel='lorentzian')[source]¶ Detect multiple peak widths using
get_peakdet_kernel()
kernel.Parameters: - trace – 1D data array.
- widths ([float]) – Array of possible peak widths.
- background_ratio (float) – ratio of the background_width to the peak_width in
get_peakdet_kernel()
. - kind (str) – Peak kind. Can be
'peak'
(positive direction) or'dip'
(negative direction). - norm_ratio (float) – if not
None
, defines the width of the “normalization region” (in units of the kernel width, same as for the background kernel); it is then used to calculate a local trace variance to normalize the peaks magnitude. - kernel – Peak matching kernel.
Returns: Filtered trace which shows peak ‘affinity’ at each point.
-
pylablib.core.dataproc.feature.
find_local_extrema
(wf, region_width=3, kind='max', min_distance=None)[source]¶ Find local extrema (minima or maxima) of 1D waveform.
kind can be
"min"
or "max"
and determines the kind of the extrema. Local minima (maxima) are defined as points which are smaller (greater) than all other points in the region of width region_width around them. region_width is always rounded up to an odd integer. min_distance defines the minimal distance between the extrema (region_width//2
by default). If there are several extrema within min_distance, their positions are averaged together.
-
pylablib.core.dataproc.feature.
find_state_hysteretic
(wf, threshold_off, threshold_on, normalize=True)[source]¶ Determine on/off state in 1D array with hysteretic threshold algorithm.
Return a state array containing
+1
for ‘on’ states and-1
for ‘off’ states. The state switches from ‘off’ to ‘on’ when the value goes above threshold_on, and from ‘on’ to ‘off’ when the value goes below threshold_off. The intermediate states are determined by the nearest neighbor.
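The hysteretic state logic can be sketched in pure Python (a simplified illustration of the algorithm described above, not the pylablib implementation; undefined leading samples are backfilled rather than resolved by nearest neighbor):

```python
# Minimal sketch of hysteretic state detection: +1 ('on') once the value goes
# above threshold_on, -1 ('off') once it goes below threshold_off, previous
# state otherwise. Simplified; the real function resolves intermediate states
# by nearest neighbor.

def state_hysteretic(wf, threshold_off, threshold_on):
    states = []
    state = 0  # undefined until the first threshold crossing
    for v in wf:
        if v >= threshold_on:
            state = +1
        elif v <= threshold_off:
            state = -1
        states.append(state)
    # backfill any leading undefined samples with the first defined state
    first = next((s for s in states if s != 0), 0)
    return [s if s != 0 else first for s in states]

state_hysteretic([0.1, 0.5, 0.9, 0.6, 0.2, 0.8], threshold_off=0.3, threshold_on=0.7)
# → [-1, -1, 1, 1, -1, 1]
```

Note how the value 0.5 keeps the previous 'off' state and 0.6 keeps the previous 'on' state: that intermediate band between the two thresholds is exactly what makes the detection hysteretic.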
-
pylablib.core.dataproc.feature.
trigger_hysteretic
(wf, threshold_on, threshold_off, init_state='undef', result_kind='separate')[source]¶ Determine indices of rise and fall trigger events with hysteresis thresholds.
Return either two arrays
(rise_trig, fall_trig)
containing trigger indices (ifresult_kind=="separate"
), or a single array of tuples[(dir,pos)]
, where dir is the trigger direction (+1
or-1
) and pos is its index (ifresult_kind=="joined"
). Triggers happen when the state switches from ‘low’ to ‘high’ (rising) or vice versa (falling). The state switches from ‘low’ to ‘high’ when the trace value goes above threshold_on, and from ‘high’ to ‘low’ when the trace value goes below threshold_off. init_state specifies the initial state: "low"
,"high"
, or"undef"
(undefined state).
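The trigger logic can be sketched as follows (a simplified illustration of the behavior described above in the result_kind=="separate" form; not the pylablib implementation, and the 'undef' initial-state handling is omitted):

```python
# Sketch of hysteretic triggering: record a rise trigger index when the state
# goes 'low' -> 'high' (value above threshold_on) and a fall trigger index when
# it goes 'high' -> 'low' (value below threshold_off).

def trigger_hysteretic(wf, threshold_on, threshold_off, init_state="low"):
    rise, fall = [], []
    state = init_state
    for i, v in enumerate(wf):
        if state != "high" and v > threshold_on:
            state = "high"
            rise.append(i)
        elif state != "low" and v < threshold_off:
            state = "low"
            fall.append(i)
    return rise, fall

trigger_hysteretic([0.0, 1.0, 0.6, 0.1, 1.2], threshold_on=0.8, threshold_off=0.3)
# → ([1, 4], [3])
```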
pylablib.core.dataproc.filters module¶
Routines for filtering arrays (mostly 1D data).
-
pylablib.core.dataproc.filters.
convolve1d
(wf, kernel, mode='reflect', cval=0.0)[source]¶ Convolution filter.
Convolves wf with the given kernel (1D array). mode and cval determine how the endpoints are handled. Simply a wrapper around the standard
scipy.ndimage.convolve()
that handles complex arguments.
-
pylablib.core.dataproc.filters.
convolution_filter
(wf, width=1.0, kernel='gaussian', kernel_span='auto', mode='reflect', cval=0.0, kernel_height=None)[source]¶ Convolution filter.
Parameters: - wf – Waveform for filtering.
- width (float) – Kernel width (second parameter to the kernel function).
- kernel – Either a string defining the kernel function (see
specfunc.get_kernel_func()
for possible kernels), or a function taking 3 arguments(pos, width, height)
, where height can beNone
(assumes normalization by area). - kernel_span – The cutoff for the kernel function. Either an integer (number of points) or
'auto'
. - mode (str) – Convolution mode (see
scipy.ndimage.convolve()
). - cval (float) – Convolution fill value (see
scipy.ndimage.convolve()
). - kernel_height – Height parameter to be passed to the kernel function.
None
means normalization by area.
-
pylablib.core.dataproc.filters.
gaussian_filter
(wf, width=1.0, mode='reflect', cval=0.0)[source]¶ Simple gaussian filter. Can handle complex data.
Equivalent to a convolution with a gaussian. Equivalent to
scipy.ndimage.gaussian_filter1d()
, usesconvolution_filter()
.
-
pylablib.core.dataproc.filters.
gaussian_filter_nd
(wf, width=1.0, mode='reflect', cval=0.0)[source]¶ Simple gaussian filter. Can’t handle complex data.
Equivalent to a convolution with a gaussian. Wrapper around
scipy.ndimage.gaussian_filter()
.
-
pylablib.core.dataproc.filters.
low_pass_filter
(wf, t=1.0, mode='reflect', cval=0.0)[source]¶ Simple single-pole low-pass filter.
t is the filter time constant, and mode and cval are the waveform expansion parameters (only from the left). Implemented as a recursive digital filter, so its performance doesn't depend strongly on t. Works only for 1D arrays.
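The recursive single-pole structure can be sketched like this (a minimal illustration assuming t is measured in samples and using a crude left-edge treatment; the real function's edge expansion via mode and cval differs):

```python
# Sketch of a single-pole recursive low-pass filter with time constant t
# (in samples): y[i] = y[i-1] + a*(x[i] - y[i-1]) with a = 1 - exp(-1/t).
# One multiply-add per sample, so the cost is independent of t.
import math

def low_pass(wf, t=1.0):
    a = 1.0 - math.exp(-1.0 / t)
    out, y = [], wf[0]  # start from the first sample (crude edge handling)
    for x in wf:
        y += a * (x - y)
        out.append(y)
    return out

step = [0.0] * 3 + [1.0] * 5
smoothed = low_pass(step, t=2.0)
assert smoothed[0] == 0.0
assert all(b >= a for a, b in zip(smoothed, smoothed[1:]))  # monotonic step response
assert smoothed[-1] < 1.0  # approaches, but has not reached, the step value
```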
-
pylablib.core.dataproc.filters.
high_pass_filter
(wf, t=1.0, mode='reflect', cval=0.0)[source]¶ Simple single-pole high-pass filter (equivalent to subtracting a low-pass filter).
t is the filter time constant, mode and cval are the waveform expansion parameters (only from the left). Implemented as a recursive digital filter, so its performance doesn’t depend strongly on t. Works only for 1D arrays.
-
pylablib.core.dataproc.filters.
integrate
(wf)[source]¶ Calculate the integral of the waveform.
Works only for 1D arrays.
-
pylablib.core.dataproc.filters.
differentiate
(wf)[source]¶ Calculate the differential of the waveform.
Works only for 1D arrays.
-
pylablib.core.dataproc.filters.
sliding_average
(wf, width=1.0, mode='reflect', cval=0.0)[source]¶ Simple sliding average filter.
Equivalent to convolution with a rectangle peak function.
-
pylablib.core.dataproc.filters.
median_filter
(wf, width=1, mode='reflect', cval=0.0)[source]¶ Median filter.
Wrapper around
scipy.ndimage.median_filter()
.
-
pylablib.core.dataproc.filters.
sliding_filter
(wf, n=1, dec_mode='bin', mode='reflect', cval=0.0)¶ Perform sliding filtering on the data.
Parameters: - wf – 1D array-like object.
- n (int) – bin width.
- dec_mode (str) –
Decimation mode. Can be 'bin' or 'mean' (do a binning average), 'sum' (sum the points), 'min' (leave the min point), 'max' (leave the max point), or 'median' (leave the median point; works as a median filter).
- mode (str) – Expansion mode. Can be
'constant'
(added values are determined by cval),'nearest'
(added values are the end values of the waveform), 'reflect'
(reflect the waveform with respect to its endpoints) or 'wrap'
(wrap the values from the other side). - cval (float) – If
mode=='constant'
, determines the expanded values.
-
pylablib.core.dataproc.filters.
decimate
(wf, n=1, dec_mode='skip', axis=0, mode='drop')¶ Decimate the data.
Parameters: - wf – Data.
- n (int) – Decimation factor.
- dec_mode (str) – Decimation mode. Can be 'skip' (leave every n'th point while completely omitting everything else), 'bin' or 'mean' (do a binning average), 'sum' (sum the points), 'min' (leave the min point), 'max' (leave the max point), or 'median' (leave the median point; works as a median filter).
- leave median point (works as a median filter). - axis (int) – Axis along which to perform the decimation.
- mode (str) – Determines what to do with the last bin if it’s incomplete. Can be either
'drop'
(omit the last bin) or'leave'
(keep it).
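The 'bin' decimation mode with the two last-bin policies can be sketched directly (a pure-Python illustration of the behavior described above, not the pylablib implementation, which works on numpy arrays along an arbitrary axis):

```python
# Sketch of 'bin' decimation: average each group of n consecutive points;
# mode='drop' omits an incomplete trailing bin, mode='leave' keeps it.

def decimate_bin(wf, n, mode="drop"):
    full = len(wf) // n
    bins = [wf[i * n:(i + 1) * n] for i in range(full)]
    if mode == "leave" and len(wf) % n:
        bins.append(wf[full * n:])  # keep the incomplete last bin
    return [sum(b) / len(b) for b in bins]

decimate_bin([1, 3, 5, 7, 9], n=2)                # → [2.0, 6.0]  (the 9 is dropped)
decimate_bin([1, 3, 5, 7, 9], n=2, mode="leave")  # → [2.0, 6.0, 9.0]
```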
-
pylablib.core.dataproc.filters.
binning_average
(wf, width=1, axis=0, mode='drop')[source]¶ Binning average filter.
Equivalent to
decimate()
withdec_mode=='bin'
.
-
pylablib.core.dataproc.filters.
decimate_full
(wf, dec_mode='skip', axis=0)[source]¶ Completely decimate the data along a given axis.
Parameters: - wf – Data.
- dec_mode (str) –
Decimation mode. Can be 'skip' (leave every n'th point while completely omitting everything else), 'bin' or 'mean' (do a binning average), 'sum' (sum the points), 'min' (leave the min point), 'max' (leave the max point), or 'median' (leave the median point; works as a median filter).
- axis (int) – Axis along which to perform the decimation.
-
pylablib.core.dataproc.filters.
decimate_datasets
(wfs, dec_mode='mean')[source]¶ Decimate datasets with the same shape element-wise (works only for 1D or 2D arrays).
dec_mode has the same values and meaning as in
decimate()
.
-
pylablib.core.dataproc.filters.
collect_into_bins
(values, distance, preserve_order=False, to_return='value')[source]¶ Collect all values into bins separated at least by distance.
Return the extent of each bin. If
preserve_order==False
, values are sorted before splitting. Ifto_return="value"
, the extent is given in values; ifto_return="index"
, it is given in indices (only useful ifpreserve_order=True
, as otherwise the indices correspond to a sorted array). If distance is a tuple, then it denotes the minimal and the maximal separation between consecutive elements; otherwise, it is a single number denoting maximal absolute distance (i.e., it corresponds to a tuple(-distance,distance)
).
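The gap-based binning can be sketched for the simple scalar-distance case (an illustration of the sorted, to_return="value" behavior described above; the tuple-distance and index-based variants are omitted):

```python
# Sketch of collect_into_bins for a scalar distance: sort the values, then
# start a new bin whenever the gap to the previous value exceeds `distance`.
# Returns the extent (min, max) of each bin.

def collect_into_bins(values, distance):
    values = sorted(values)
    bins, start = [], values[0]
    for prev, cur in zip(values, values[1:]):
        if cur - prev > distance:
            bins.append((start, prev))
            start = cur
    bins.append((start, values[-1]))
    return bins

collect_into_bins([1.0, 1.2, 5.0, 5.1, 9.0], distance=1.0)
# → [(1.0, 1.2), (5.0, 5.1), (9.0, 9.0)]
```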
-
pylablib.core.dataproc.filters.
split_into_bins
(values, max_span, max_size=None)[source]¶ Split values into bins of the span at most max_span and number of elements at most max_size.
If max_size is
None
, it’s assumed to be infinite. Return array of indices for each bin. Values are sorted before splitting.
-
pylablib.core.dataproc.filters.
fourier_filter
(trace, response, preserve_real=True)[source]¶ Apply filter to a trace in frequency domain.
response is a (possibly) complex function with single 1D real numpy array as a frequency argument.
If
preserve_real==True
, then the response for negative frequencies is automatically taken to be complex conjugate of the response for positive frequencies (so that the real trace stays real).
-
pylablib.core.dataproc.filters.
fourier_make_response_real
(response)[source]¶ Turn a frequency filter function into a real one (in the time domain).
Done by reflecting and complex-conjugating the positive-frequency part onto negative frequencies. response is a function of a single argument (frequency); the return value is a modified function.
pylablib.core.dataproc.fitting module¶
Universal function fitting interface.
-
class
pylablib.core.dataproc.fitting.
Fitter
(func, xarg_name=None, fit_parameters=None, fixed_parameters=None, scale=None, limits=None)[source]¶ Bases:
object
Fitter object.
Can handle variety of different functions, complex arguments or return values, array arguments.
Parameters: - func (callable) – Fit function. Can be anything callable (function, method, object with
__call__
method, etc.). - xarg_name (str or list) – Name (or multiple names) for x arguments. These arguments are passed to func (as named arguments) when calling for fitting. Can be a string (single argument) or a list (arbitrary number of arguments, including zero).
- fit_parameters (dict) – Dictionary
{name: value}
of parameters to be fitted (value is the starting value for the fitting procedure). If value isNone
, try and get the default value from the func. - fixed_parameters (dict) – Dictionary
{name: value}
of parameters to be fixed during the fitting procedure. If value isNone
, try and get the default value from the func. - scale (dict) – Defines typical scale of fit parameters (used to normalize fit parameters supplied to
scipy.optimize.least_squares()
). Note: for complex parameters scale must also be a complex number, with re and im parts of the scale variable corresponding to the scale of the re and im part. - limits (dict) – Boundaries for the fit parameters (missing entries are assumed to be unbound). Each boundary parameter is a tuple
(lower, upper)
.lower
orupper
can beNone
,numpy.nan
ornumpy.inf
(with the appropriate sign), which implies no bounds in the given direction. Note: for compound data types (such as lists) the entries are still tuples of 2 elements, each of which is eitherNone
(no bound for any sub-element) or has the same structure as the full parameter. Note: for complex parameters limits must also be complex numbers (orNone
), with re and im parts of the limits variable corresponding to the limits of the re and im part.
-
set_xarg_name
(xarg_name)[source]¶ Set names of x arguments.
Can be a string (single argument) or a list (arbitrary number of arguments, including zero).
-
fit
(x=None, y=0, fit_parameters=None, fixed_parameters=None, scale='default', limits='default', weight=1.0, parscore=None, return_stderr=False, return_residual=False, **kwargs)[source]¶ Fit the data.
Parameters: - x – x arguments. If the function has single x argument, x is an array-like object;
otherwise, x is a list of array-like objects (can be
None
if there are no x parameters). - y – Target function values.
- fit_parameters (dict) – Adds to the default fit_parameters of the fitter (has priority on duplicate entries).
- fixed_parameters (dict) – Adds to the default fixed_parameters of the fitter (has priority on duplicate entries).
- scale (dict) – Defines typical scale of fit parameters (used to normalize fit parameters supplied to
scipy.optimize.least_squares()
). Note: for complex parameters scale must also be a complex number, with re and im parts of the scale variable corresponding to the scale of the re and im part. If value is"default"
, use the value supplied on the fitter creation. - limits (dict) – Boundaries for the fit parameters (missing entries are assumed to be unbound). Each boundary parameter is a tuple
(lower, upper)
.lower
orupper
can beNone
,numpy.nan
ornumpy.inf
(with the appropriate sign), which implies no bounds in the given direction. Note: for compound data types (such as lists) the entries are still tuples of 2 elements, each of which is eitherNone
(no bound for any sub-element) or has the same structure as the full parameter. Note: for complex parameters limits must also be complex numbers (orNone
), with re and im parts of the limits variable corresponding to the limits of the re and im part. If value is"default"
, use the value supplied on the fitter creation. - weight (list or numpy.ndarray) – Determines the weights of y-points. Can be either an array broadcastable to y (e.g., a scalar or an array with the same shape as y), in which case it’s interpreted as list of individual point weights (which multiply residuals before they are squared). Or it can be an array with number of elements which is square of the number of elements in y, in which case it’s interpreted as a weight matrix (which matrix-multiplies residuals before they are squared).
- parscore (callable) – Parameter score function, whose value is added to the mean-square error (sum of all squared residuals) after applying weights. Takes the same parameters as the fit function, only without the x-arguments, and returns an array-like value. Can be used for, e.g., ‘soft’ fit parameter constraining.
- return_stderr (bool) – If
True
, append stderr to the output. - return_residual – If not
False
, append residual to the output. - **kwargs – arguments passed to
scipy.optimize.least_squares()
function.
Returns: (params, bound_func[, stderr][, residual])
- params: a dictionary
{name: value}
of the parameters supplied to the function (both fit and fixed). - bound_func: the fit function with all the parameters bound (i.e., it only requires x parameters).
- stderr: a dictionary
{name: error}
of standard deviations for the fit parameters. If the fitting routine returns no residuals (usually for a bad or an underconstrained fit), all errors are set to NaN.
- stderr: a dictionary
- residual: either a full array of residuals
func(x,**params)-y
(ifreturn_residual=='full'
), - a mean magnitude of the residuals
mean(abs(func(x,**params)-y)**2)
(ifreturn_residual==True
orreturn_residual=='mean'
), or the total residuals including weightsmean(abs((func(x,**params)-y)*weight)**2)
(ifreturn_residual=='weighted'
).
- residual: either a full array of residuals
- params: a dictionary
-
initial_guess
(fit_parameters=None, fixed_parameters=None, return_stderr=False, return_residual=False)[source]¶ Return the initial guess for the fitting.
Returns: (params, bound_func).
- params: a dictionary
{name: value}
of the parameters supplied to the function (both fit and fixed). - bound_func: the fit function with all the parameters bound (i.e., it only requires x parameters).
- stderr: a dictionary
{name: error}
of standard deviation for fit parameters to the return parameters. - Always zero, added for better compatibility with
fit()
.
- stderr: a dictionary
- residual: either a full array of residuals
func(x,**params)-y
(ifreturn_residual=='full'
) or - a mean magnitude of the residuals
mean(abs(func(x,**params)-y)**2)
(ifreturn_residual==True
orreturn_residual=='mean'
). Always zero, added for better compatibility withfit()
.
- residual: either a full array of residuals
-
pylablib.core.dataproc.fitting.
get_best_fit
(x, y, fits)[source]¶ Select the best (lowest residual) fit result.
x and y are the argument and the value of the bound fit function. fits is the list of fit results (tuples returned by
Fitter.fit()
).
pylablib.core.dataproc.fourier module¶
Routines for Fourier transform.
-
pylablib.core.dataproc.fourier.
truncate_len_pow2
(trace, truncate_power=None)[source]¶ Truncate the trace length to the nearest power of 2.
If truncate_power is not
None
, it determines the minimal power of 2 that has to divide the length (if it is None
, then it is the maximal possible power).
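The truncation rule can be sketched as follows (an illustrative interpretation of the convention described above: with truncate_power=None keep the largest power-of-2 prefix, otherwise keep the largest length divisible by 2**truncate_power; not the pylablib implementation):

```python
# Sketch of power-of-2 truncation of a trace, per the rule described above.

def truncate_len_pow2(trace, truncate_power=None):
    n = len(trace)
    if truncate_power is None:
        keep = 1
        while keep * 2 <= n:   # largest power of 2 not exceeding the length
            keep *= 2
    else:
        block = 2 ** truncate_power
        keep = (n // block) * block  # largest length divisible by 2**truncate_power
    return trace[:keep]

data = list(range(11))
assert len(truncate_len_pow2(data)) == 8      # nearest power of 2 below 11
assert len(truncate_len_pow2(data, 1)) == 10  # largest length divisible by 2
```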
-
pylablib.core.dataproc.fourier.
normalize_fourier_transform
(ft, normalization='none')[source]¶ Normalize the Fourier transform data.
ft is a 2D data with 2 columns: frequency and complex amplitude. normalization can be
'none'
(no normalization done), 'sum'
(the power sum is preserved: sum(abs(ft)**2)==sum(abs(trace)**2)
) or 'density'
(power spectral density normalization).
-
pylablib.core.dataproc.fourier.
apply_window
(trace_values, window='rectangle', window_power_compensate=True)[source]¶ Apply FT window to the trace.
If
window_power_compensate==True
, the data is multiplied by a compensating factor to preserve power in the spectrum.
-
pylablib.core.dataproc.fourier.
fourier_transform
(trace, truncate=False, truncate_power=None, normalization='none', no_time=False, single_sided=False, window='rectangle', window_power_compensate=True)[source]¶ Calculate a fourier transform of the trace.
Parameters: - trace – Time trace to be transformed. Either an
Nx2
array, wheretrace[:,0]
is time andtrace[:,1]
is data (real or complex), or anNx3
array, wheretrace[:,0]
is time,trace[:,1]
is the real part of the signal andtrace[:,2]
is the imaginary part. - truncate (bool) – If
True
, cut the data to the power of 2. - truncate_power – If
None
, cut to the nearest power of 2; otherwise, cut to the largest possible length divisible by 2**truncate_power
. Only relevant iftruncate==True
. - normalization (str) –
Fourier transform normalization:
'none'
: no normalization;'sum'
: the norm of the data is conserved (sum(abs(ft[:,1])**2)==sum(abs(trace[:,1])**2)
);'density'
: power spectral density normalization, inx/rtHz
(sum(abs(ft[:,1])**2)*df==mean(abs(trace[:,1])**2)
);'dBc'
: like'density'
, but normalized to the mean trace value.
- no_time (bool) – If
True
, assume that the time axis is missing and use the standard index instead (if trace is 1D data, no_time is alwaysTrue
). - single_sided (bool) – If
True
, only leave positive frequency side of the transform. - window (str) – FT window. Can be
'rectangle'
(essentially, no window),'hann'
or'hamming'
. - window_power_compensate (bool) – If
True
, the data is multiplied by a compensating factor to preserve power in the spectrum.
Returns: a two-column array, where the first column is frequency, and the second is complex FT data.
- trace – Time trace to be transformed. Either an
-
pylablib.core.dataproc.fourier.
flip_fourier_transform
(ft)[source]¶ Flip the fourier transform (analogous to making frequencies negative and flipping the order).
-
pylablib.core.dataproc.fourier.
inverse_fourier_transform
(ft, truncate=False, truncate_power=None, no_freq=False, zero_loc=None, symmetric_time=False)[source]¶ Calculate an inverse fourier transform of the trace.
Parameters: - ft – Fourier transform data to be inverted. Is an
Nx2
array, whereft[:,0]
is frequency andft[:,1]
is fourier transform (real or complex). - truncate (bool) – If
True
, cut the data to the power of 2. - truncate_power – If
None
, cut to the nearest power of 2; otherwise, cut to the largest possible length divisible by 2**truncate_power
. Only relevant iftruncate==True
. - no_freq (bool) – If
True
, assume that the frequency axis is missing and use the standard index instead (if trace is 1D data, no_freq is alwaysTrue
). - zero_loc (bool) – Location of the zero frequency point. Can be
None
(the one with the value of f-axis closest to zero),'center'
(mid-point) or an integer index. - symmetric_time (bool) – If
True
, make time axis go from(-0.5/df, 0.5/df)
rather than(0, 1./df)
.
Returns: a two-column array, where the first column is frequency, and the second is the complex-valued trace data.
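The `zero_loc=None` behavior (zero frequency taken at the f-axis value closest to zero) can be sketched with plain numpy; this is an illustrative reimplementation with a hypothetical name, covering only the default parameters:

```python
import numpy as np

def inverse_ft_sketch(ft):
    """Illustrative inverse transform: no truncation, zero frequency at the
    f-axis value closest to zero (zero_loc=None), time axis (0, 1/df)."""
    f, y = ft[:, 0].real, ft[:, 1]
    n = len(f)
    df = f[1] - f[0]
    y = np.roll(y, -np.argmin(np.abs(f)))  # move zero frequency to index 0
    t = np.arange(n) / (n * df)            # time axis from 0 to 1/df
    return np.column_stack((t, np.fft.ifft(y)))
```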
-
pylablib.core.dataproc.fourier.
power_spectral_density
(trace, truncate=False, truncate_power=None, normalization='density', no_time=False, single_sided=False, window='rectangle', window_power_compensate=True)[source]¶ Calculate a power spectral density of the trace.
Parameters: - trace – Time trace to be transformed. Either an
Nx2
array, wheretrace[:,0]
is time andtrace[:,1]
is data (real or complex), or anNx3
array, wheretrace[:,0]
is time,trace[:,1]
is the real part of the signal andtrace[:,2]
is the imaginary part. - truncate (bool) – If
True
, cut the data to the power of 2. - truncate_power – If
None
, cut to the nearest power of 2; otherwise, cut to the largest possible length that divides2**truncate_power
. Only relevant iftruncate==True
. - normalization (str) –
Fourier transform normalization:
'none'
: no normalization;'sum'
: the norm of the data is conserved (sum(PSD[:,1])==sum(abs(trace[:,1])**2)
);'density'
: power spectral density normalization, inx/rtHz
(sum(PSD[:,1])*df==mean(abs(trace[:,1])**2)
);'dBc'
: like'density'
, but normalized to the mean trace value.
- no_time (bool) – If
True
, assume that the time axis is missing and use the standard index instead (if trace is 1D data, no_time is alwaysTrue
). - single_sided (bool) – If
True
, only leave positive frequency side of the PSD. - window (str) – FT window. Can be
'rectangle'
(essentially, no window),'hann'
or'hamming'
. - window_power_compensate (bool) – If
True
, the data is multiplied by a compensating factor to preserve power in the spectrum.
Returns: a two-column array, where the first column is frequency, and the second is positive PSD.
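The `'density'` PSD normalization can likewise be sketched in numpy (hypothetical helper name, rectangle window, double-sided spectrum):

```python
import numpy as np

def psd_density_sketch(trace):
    """Sketch of the 'density' PSD normalization, so that
    sum(PSD[:,1])*df == mean(abs(trace[:,1])**2)."""
    t, y = trace[:, 0], trace[:, 1]
    n, dt = len(t), t[1] - t[0]
    f = np.fft.fftshift(np.fft.fftfreq(n, d=dt))
    psd = np.abs(np.fft.fftshift(np.fft.fft(y))) ** 2 * dt / n  # x/rtHz units squared
    return np.column_stack((f, psd))
```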
-
pylablib.core.dataproc.fourier.
get_real_part
(ft)[source]¶ Get the fourier transform of the real part only from the fourier transform of a complex variable.
-
pylablib.core.dataproc.fourier.
get_imag_part
(ft)[source]¶ Get the fourier transform of the imaginary part only from the fourier transform of a complex variable.
-
pylablib.core.dataproc.fourier.
get_correlations
(ft_a, ft_b, zero_mean=True, normalization='none')[source]¶ Calculate the correlation function of the two variables given their fourier transforms.
Parameters: - ft_a – first variable fourier transform
- ft_b – second variable fourier transform
- zero_mean (bool) – If
True
, the value corresponding to the zero frequency is set to zero (only fluctuations around means of a and b are calculated). - normalization (str) – Can be
'whole'
(correlations are normalized by product of PSDs derived from ft_a and ft_b) or'individual'
(normalization is done for each frequency individually, so that the absolute value is always 1).
pylablib.core.dataproc.iir_transform module¶
Digital recursive filter.
Implemented using the Numba library (high-performance JIT compilation); it used to be a precompiled C package.
pylablib.core.dataproc.image module¶
-
pylablib.core.dataproc.image.
convert_shape_indexing
(shape, src, dst)[source]¶ Convert image indexing style.
shape is the source image shape (2-tuple), src and dst are current format and desired format. Formats can be
"rcb"
(first index is row, second is column, rows count from the bottom),"rct"
(same, but rows count from the top),"xyb"
(first index is column, second is row, rows count from the bottom), or"xyt"
(same, but rows count from the top)."rc"
is interpreted as"rct"
,"xy"
as"xyt".
-
pylablib.core.dataproc.image.
convert_image_indexing
(img, src, dst)[source]¶ Convert image indexing style.
img is the source image (2D numpy array), src and dst are current format and desired format. Formats can be
"rcb"
(first index is row, second is column, rows count from the bottom),"rct"
(same, but rows count from the top),"xyb"
(first index is column, second is row, rows count from the bottom), or"xyt"
(same, but rows count from the top)."rc"
is interpreted as"rct"
,"xy"
as"xyt".
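The four indexing styles can be related by a transpose (row/column vs x/y) and a row flip (top vs bottom origin). The following is an illustrative sketch with a hypothetical name, not the library implementation:

```python
import numpy as np

def convert_image_indexing_sketch(img, src, dst):
    """Convert a 2D array between "rct"/"rcb"/"xyt"/"xyb" indexing styles."""
    src = {"rc": "rct", "xy": "xyt"}.get(src, src)
    dst = {"rc": "rct", "xy": "xyt"}.get(dst, dst)
    def to_rct(a, fmt):
        if fmt.startswith("xy"):
            a = a.T              # (x, y) -> (row, col)
        if fmt.endswith("b"):
            a = a[::-1, :]       # rows counted from the bottom -> from the top
        return a
    def from_rct(a, fmt):
        if fmt.endswith("b"):
            a = a[::-1, :]
        if fmt.startswith("xy"):
            a = a.T
        return a
    return from_rct(to_rct(img, src), dst)
```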
-
pylablib.core.dataproc.image.
get_region
(image, center, size, axis=(-2, -1))[source]¶ Get part of the image with the given center and size (both are tuples
(i, j)
).The region is automatically reduced if a part of it is outside of the image.
-
pylablib.core.dataproc.image.
get_region_sum
(image, center, size, axis=(-2, -1))[source]¶ Sum part of the image with the given center and size (both are tuples
(i, j)
).The region is automatically reduced if a part of it is outside of the image. Return tuple
(sum, area)
, where area is the actual summed region area (in pixels).
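The automatic clipping of an out-of-bounds region can be sketched as follows (the exact center/size convention here is an assumption, and the helper name is hypothetical):

```python
import numpy as np

def get_region_sketch(image, center, size):
    """Extract a region around `center` with the given `size`, clipped so
    that it stays inside the image."""
    i0 = max(center[0] - size[0] // 2, 0)
    i1 = min(center[0] + (size[0] + 1) // 2, image.shape[0])
    j0 = max(center[1] - size[1] // 2, 0)
    j1 = min(center[1] + (size[1] + 1) // 2, image.shape[1])
    return image[i0:i1, j0:j1]
```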
pylablib.core.dataproc.interpolate module¶
-
pylablib.core.dataproc.interpolate.
interpolate1D_func
(x, y, kind='linear', axis=-1, copy=True, bounds_error=True, fill_values=nan, assume_sorted=False)[source]¶ 1D interpolation.
Simply a wrapper around
scipy.interpolate.interp1d
.Parameters: - x – 1D arrays of x coordinates for the points at which to find the values.
- y – array of values corresponding to x points (can have more than 1 dimension, in which case the output values are (N-1)-dimensional)
- kind – Interpolation method.
- axis – axis in y-data over which to interpolate.
- copy – if
True
, make internal copies of x and y. - bounds_error – if
True
, raise error if interpolation function arguments are outside of x bounds. - fill_values – values to fill the outside-bounds regions if
bounds_error==False
. - assume_sorted – if
True
, assume that data is sorted.
Returns: A 1D array with interpolated data.
-
pylablib.core.dataproc.interpolate.
interpolate1D
(data, x, kind='linear', bounds_error=True, fill_values=nan, assume_sorted=False)[source]¶ 1D interpolation.
Parameters: - data – 2-column array [(x,y)], where
y
is a function ofx
. - x – Arrays of x coordinates for the points at which to find the values.
- kind – Interpolation method.
- bounds_error – if
True
, raise error if x values are outside of data bounds. - fill_values – values to fill the outside-bounds regions if
bounds_error==False
- assume_sorted – if
True
, assume that data is sorted.
Returns: A 1D array with interpolated data.
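For `kind='linear'` on sorted 2-column data, the behavior can be sketched with `np.interp` (illustrative only; the library wraps `scipy.interpolate.interp1d`):

```python
import numpy as np

# 2-column [(x, y)] data with y = x**2 sampled on a regular grid
data = np.column_stack((np.linspace(0, 1, 11), np.linspace(0, 1, 11) ** 2))
x_new = np.array([0.05, 0.5, 0.95])
y_new = np.interp(x_new, data[:, 0], data[:, 1])  # linear interpolation
```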
-
pylablib.core.dataproc.interpolate.
interpolate2D
(data, x, y, method='linear', fill_value=nan)[source]¶ Interpolate data in 2D.
Simply a wrapper around
scipy.interpolate.griddata()
.Parameters: - data – 3-column array [(x,y,z)], where
z
is a function ofx
andy
. - x/y – Arrays of x and y coordinates for the points at which to find the values.
- method – Interpolation method.
Returns: A 2D array with interpolated data.
-
pylablib.core.dataproc.interpolate.
interpolateND
(data, xs, method='linear')[source]¶ Interpolate data in N dimensions.
Simply a wrapper around
scipy.interpolate.griddata()
.Parameters: - data –
(N+1)
-column array[(x_1,..,x_N,y)]
, wherey
is a function ofx_1, ... ,x_N
. - xs –
N
-tuple of arrays of coordinates for the points at which to find the values. - method – Interpolation method.
Returns: An ND array with interpolated data.
-
pylablib.core.dataproc.interpolate.
regular_grid_from_scatter
(data, x_points, y_points, x_range=None, y_range=None, method='nearest')[source]¶ Turn irregular scatter-points data into a regular 2D grid function.
Parameters: - data – 3-column array
[(x,y,z)]
, wherez
is a function ofx
andy
. - x_points/y_points – Number of points along x/y axes.
- x_range/y_range – If not
None
, a tuple specifying the desired range of the data (all points in data outside the range are excluded). - method – Interpolation method (see
scipy.interpolate.griddata()
for options).
Returns: A nested tuple
(data, (x_grid, y_grid))
, where all entries are 2D arrays (either with data or with gridpoint locations).
-
pylablib.core.dataproc.interpolate.
interpolate_trace
(trace, step, rng=None, x_column=0, select_columns=None, kind='linear', assume_sorted=False)[source]¶ Interpolate trace data over a regular grid with the given step.
rng specifies interpolation range (by default, whole data range). x_column specifies column index for x-data. select_columns specifies which columns to interpolate and keep at the output (by default, all data). If
assume_sorted==True
, assume that x-data is sorted. kind specifies interpolation method.
-
pylablib.core.dataproc.interpolate.
average_interpolate_1D
(data, step, rng=None, avg_kernel=1, min_weight=0, kind='linear')[source]¶ 1D interpolation combined with pre-averaging.
Parameters: - data – 2-column array [(x,y)], where
y
is a function ofx
. - step – distance between the points in the interpolated data (all resulting x-coordinates are multiples of step).
- rng – if not
None
, specifies interpolation range (by default, whole data range). - avg_kernel – kernel used for initial averaging. Can be either a 1D array, where each point corresponds to the relative bin weight, or an integer, which specifies simple rectangular kernel of the given width.
- min_weight – minimal accumulated weight in the bin to consider it ‘valid’
(if the bin is invalid, its accumulated value is ignored, and its value is obtained by the interpolation step).
min_weight of 0 implies any non-zero weight; otherwise, weight
>=min_weight
. - kind – Interpolation method.
Returns: A 2-column array with the interpolated data.
pylablib.core.dataproc.specfunc module¶
Specific useful functions.
-
pylablib.core.dataproc.specfunc.
gaussian_k
(x, sigma=1.0, height=None)[source]¶ Gaussian kernel function.
Normalized by the area if height is
None
, otherwise height is the value at 0.
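The normalization convention shared by these kernels can be sketched for the Gaussian case (hypothetical name, not the library code):

```python
import numpy as np

def gaussian_k_sketch(x, sigma=1.0, height=None):
    """Area-normalized if height is None; otherwise `height` is the value at x=0."""
    h = 1.0 / (sigma * np.sqrt(2 * np.pi)) if height is None else height
    return h * np.exp(-x ** 2 / (2 * sigma ** 2))
```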
-
pylablib.core.dataproc.specfunc.
rectangle_k
(x, width=1.0, height=None)[source]¶ Symmetric rectangle kernel function.
Normalized by the area if height is
None
, otherwise height is the value at 0.
-
pylablib.core.dataproc.specfunc.
lorentzian_k
(x, gamma=1.0, height=None)[source]¶ Lorentzian kernel function.
Normalized by the area if height is
None
, otherwise height is the value at 0.
-
pylablib.core.dataproc.specfunc.
complex_lorentzian_k
(x, gamma=1.0, amplitude=1j)[source]¶ Complex Lorentzian kernel function.
-
pylablib.core.dataproc.specfunc.
exp_decay_k
(x, width=1.0, height=None, mode='causal')[source]¶ Exponential decay kernel function.
Normalized by area if
height=None
(if possible), otherwise height is the value at 0.- Mode determines value for
x<0
: 'causal'
- it’s 0 forx<0
;'step'
- it’s constant forx<=0
;'continue'
- it’s a continuous decaying exponent;'mirror'
- function is symmetric:exp(-|x|/width)
.
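The four modes can be sketched as follows (area-normalized variant only, with the height handling omitted; the function name is hypothetical):

```python
import numpy as np

def exp_decay_k_sketch(x, width=1.0, mode="causal"):
    """Illustrative exponential decay kernel with the documented x<0 modes."""
    x = np.asarray(x, dtype=float)
    if mode == "mirror":
        return np.exp(-np.abs(x) / width) / (2 * width)  # symmetric exp(-|x|/width)
    y = np.exp(-x / width) / width            # decaying exponent, continued for x<0
    if mode == "causal":
        y = np.where(x < 0, 0.0, y)           # zero for x<0
    elif mode == "step":
        y = np.where(x <= 0, 1.0 / width, y)  # constant for x<=0
    return y
```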
-
pylablib.core.dataproc.specfunc.
get_kernel_func
(kernel)[source]¶ Get a kernel function by its name.
Available functions are:
'gaussian'
,'rectangle'
,'lorentzian'
,'exp_decay'
,'complex_lorentzian'
.
-
pylablib.core.dataproc.specfunc.
rectangle_w
(x, N, ft_compensated=False)[source]¶ Rectangle FT window function.
-
pylablib.core.dataproc.specfunc.
gen_hamming_w
(x, N, alpha, beta, ft_compensated=False)[source]¶ Generalized Hamming FT window function.
If
ft_compensated==True
, multiply the window function by a compensating factor to preserve power in the spectrum.
-
pylablib.core.dataproc.specfunc.
hann_w
(x, N, ft_compensated=False)[source]¶ Hann FT window function.
If
ft_compensated==True
, multiply the window function by a compensating factor to preserve power in the spectrum.
-
pylablib.core.dataproc.specfunc.
hamming_w
(x, N, ft_compensated=False)[source]¶ Specific Hamming FT window function.
If
ft_compensated==True
, multiply the window function by a compensating factor to preserve power in the spectrum.
-
pylablib.core.dataproc.specfunc.
get_window_func
(window)[source]¶ Get a window function by its name.
Available functions are:
'hamming'
,'rectangle'
,'hann'
.
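The generalized Hamming family has the form `alpha - beta*cos(2*pi*x/N)`, with Hann as the `alpha=beta=0.5` special case; the exact library convention (including the `N` vs `N-1` denominator and the `ft_compensated` factor) is an assumption here, and the names are hypothetical:

```python
import numpy as np

def gen_hamming_w_sketch(x, N, alpha, beta):
    # generalized Hamming family; the N denominator is an assumption
    return alpha - beta * np.cos(2 * np.pi * np.asarray(x, dtype=float) / N)

def hann_w_sketch(x, N):
    # Hann window: alpha = beta = 0.5
    return gen_hamming_w_sketch(x, N, 0.5, 0.5)
```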
-
pylablib.core.dataproc.specfunc.
gen_hamming_w_ft
(f, t, alpha, beta)[source]¶ Get Fourier Transform of a generalized Hamming FT window function.
f is the argument, t is the total window size.
-
pylablib.core.dataproc.specfunc.
rectangle_w_ft
(f, t)[source]¶ Get Fourier Transform of the rectangle FT window function.
f is the argument, t is the total window size.
-
pylablib.core.dataproc.specfunc.
hann_w_ft
(f, t)[source]¶ Get Fourier Transform of the Hann FT window function.
f is the argument, t is the total window size.
pylablib.core.dataproc.waveforms module¶
Generic utilities for dealing with numerical arrays.
-
pylablib.core.dataproc.waveforms.
is_ascending
(wf)¶ Check if the waveform is ascending.
If it has more than 1 dimension, check all lines along 0’th axis.
-
pylablib.core.dataproc.waveforms.
is_descending
(wf)¶ Check if the waveform is descending.
If it has more than 1 dimension, check all lines along 0’th axis.
-
pylablib.core.dataproc.waveforms.
is_ordered
(wf)[source]¶ Check if the waveform is ordered (ascending or descending).
If it has more than 1 dimension, check all lines along 0’th axis.
-
pylablib.core.dataproc.waveforms.
is_linear
(wf)¶ Check if the waveform is linear (values go with a constant step).
If it has more than 1 dimension, check all lines along 0’th axis (with the same step for all).
-
pylablib.core.dataproc.waveforms.
get_x_column
(wf, x_column=None, idx_default=False)¶ Get x column of the waveform.
- x_column can be
- an array: return as is;
'#'
: return index array;None
: equivalent to ‘#’ for 1D data ifidx_default==False
, or to0
otherwise;- integer: return the column with this index.
-
pylablib.core.dataproc.waveforms.
get_y_column
(wf, y_column=None)[source]¶ Get y column of the waveform.
- y_column can be
- an array: return as is;
'#'
: return index array;None
: return wf for 1D data, or the column1
otherwise;- integer: return the column with this index.
-
pylablib.core.dataproc.waveforms.
sort_by
(wf, x_column=None, reverse=False, stable=False)[source]¶ Sort 2D array using selected column as a key and preserving rows.
If
reverse==True
, sort in descending order. x_column values are described inwaveforms.get_x_column()
. Ifstable==True
, use stable sort (could be slower and uses more memory).
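Row-preserving sorting by a key column can be sketched with `np.argsort` (illustrative reimplementation, hypothetical name):

```python
import numpy as np

def sort_by_sketch(wf, x_column=0, reverse=False, stable=False):
    """Sort rows of a 2D array by one column, keeping rows intact."""
    order = np.argsort(wf[:, x_column], kind="stable" if stable else "quicksort")
    if reverse:
        order = order[::-1]
    return wf[order]
```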
-
pylablib.core.dataproc.waveforms.
filter_by
(wf, columns=None, pred=None, exclude=False)[source]¶ Filter 1D or 2D array using a predicate.
If the data is 2D, columns contains indices of columns to be passed to the pred function. If
exclude==True
, drop all of the rows satisfying pred rather than keeping them.
-
pylablib.core.dataproc.waveforms.
unique_slices
(wf, u_column)[source]¶ Split a table into subtables with different values in a given column.
Return a list of wf subtables, each of which has a different (and equal among all rows in the subtable) value in u_column.
-
pylablib.core.dataproc.waveforms.
merge
(wfs, idx=None)[source]¶ Merge several tables column-wise.
If idx is not
None
, then it is a list of index columns (one column per table) used for merging. The rows that have the same value in the index columns are merged; if some values aren’t contained in all the wfs, the corresponding rows are omitted.If idx is
None
, just join the tables together (they must have the same number of rows).
-
class
pylablib.core.dataproc.waveforms.
Range
(start=None, stop=None)[source]¶ Bases:
object
Single data range.
If start or stop are
None
, it’s implied that they’re at infinity (i.e., Range(None,None) is infinite). If the range object isNone
, it’s implied that the range is empty.-
start
¶
-
stop
¶
-
-
pylablib.core.dataproc.waveforms.
find_closest_arg
(xs, x, approach='both', ordered=False)[source]¶ Find the index of a value in xs that is closest to x.
approach can take values
'top'
,'bottom'
or'both'
and denotes from which side the array elements should approach x (meaning that the found array element should be>x
,<x
or just the closest one). If there are no elements lying on the desired side of x (e.g.approach=='top'
and all elements of xs are less than x), the function returnsNone
. If ordered==True
, then xs is assumed to be in ascending or descending order, and binary search is implemented (works only for 1D arrays). If there are recurring elements, return any of them.
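The documented behavior can be sketched with a linear scan (binary search for ordered data is omitted; the function name is hypothetical):

```python
import numpy as np

def find_closest_arg_sketch(xs, x, approach="both"):
    """Return the index of the element of xs closest to x from the requested
    side, or None if no element lies on that side."""
    xs = np.asarray(xs)
    if approach == "top":
        cand = np.nonzero(xs > x)[0]       # elements strictly above x
    elif approach == "bottom":
        cand = np.nonzero(xs < x)[0]       # elements strictly below x
    else:
        cand = np.arange(len(xs))          # any element
    if len(cand) == 0:
        return None
    return int(cand[np.argmin(np.abs(xs[cand] - x))])
```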
-
pylablib.core.dataproc.waveforms.
find_closest_arg_linear
(params, x, approach='both')[source]¶ Same as
find_closest_arg()
, but works for linear column data.
-
pylablib.core.dataproc.waveforms.
get_range_indices
(xs, xs_range, ordered=False)[source]¶ Find waveform indices corresponding to the given range.
The range is defined as
xs_range[0]:xs_range[1]
, or infinite ifxs_range=None
(so the data is returned unchanged in that case). If ordered==True
, then the function assumes that xs is in ascending order.
-
pylablib.core.dataproc.waveforms.
cut_to_range
(wf, xs_range, x_column=None, ordered=False)¶ Cut the waveform to the given range based on x_column.
The range is defined as
xs_range[0]:xs_range[1]
, or infinite ifxs_range=None
. x_column is used to determine which column’s values to use to check if the point is in range (seewaveforms.get_x_column()
). If ordered==True
, then the function assumes that x_column is in ascending order.
-
pylablib.core.dataproc.waveforms.
cut_out_regions
(wf, regions, x_column=None, ordered=False, multi_pass=True)¶ Cut the regions out of the wf based on x_column.
x_column is used to determine which column’s values to use to check if the point is in range (see
waveforms.get_x_column()
). If ordered==True
, then the function assumes that x_column is in ascending order. If multi_pass==False
, combine all indices before deleting the data in a single operation (works faster, but only for non-intersecting regions).
-
pylablib.core.dataproc.waveforms.
find_discrete_step
(wf, min_fraction=1e-08, tolerance=1e-05)[source]¶ Try to find a minimal divisor of all steps in a 1D waveform.
min_fraction is the minimal possible size of the divisor (relative to the minimal non-zero step size). tolerance is the tolerance of the division. Raise an
ArithmeticError
if no such value was found.
-
pylablib.core.dataproc.waveforms.
unwrap_mod_data
(wf, wrap_range)[source]¶ Unwrap data given wrap_range.
Assume that every jump greater than
0.5*wrap_range
is not real and is due to value being restricted. Can be used to, e.g., unwrap the phase data.
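The unwrapping rule (every jump greater than `0.5*wrap_range` is attributed to wrapping) can be sketched as follows (hypothetical name, not the library code):

```python
import numpy as np

def unwrap_mod_data_sketch(wf, wrap_range):
    """Remove jumps larger than half the wrap range by adding integer
    multiples of wrap_range to the subsequent points."""
    wf = np.asarray(wf, dtype=float)
    d = np.diff(wf)
    # number of wraps at each step, accumulated into a running correction
    corr = -np.cumsum(np.round(d / wrap_range)) * wrap_range
    return np.concatenate(([wf[0]], wf[1:] + corr))
```

This is the same idea as phase unwrapping with an arbitrary period.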
-
pylablib.core.dataproc.waveforms.
expand_waveform
(wf, size=0, mode='constant', cval=0.0, side='both')[source]¶ Expand 1D waveform for different convolution techniques.
Parameters: - wf – 1D array-like object.
- size (int) – Expansion size. Can’t be greater than
len(wf)
(truncated automatically). - mode (str) – Expansion mode. Can be
'constant'
(added values are determined by cval),'nearest'
(added values are the end values of the waveform),'reflect'
(reflect waveform wrt its endpoint) or'wrap'
(wrap the values from the other side). - cval (float) – If
mode=='constant'
, determines the expanded values. - side (str) – Expansion side. Can be
'left'
,'right'
or'both'
.
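The documented expansion modes map naturally onto `np.pad` (where `'nearest'` corresponds to numpy's `'edge'` mode); this is an illustrative sketch with a hypothetical name:

```python
import numpy as np

def expand_waveform_sketch(wf, size=0, mode="constant", cval=0.0, side="both"):
    """Expand a 1D waveform by `size` points on the requested side(s)."""
    pad = {"both": (size, size), "left": (size, 0), "right": (0, size)}[side]
    np_mode = {"constant": "constant", "nearest": "edge",
               "reflect": "reflect", "wrap": "wrap"}[mode]
    kwargs = {"constant_values": cval} if mode == "constant" else {}
    return np.pad(np.asarray(wf), pad, mode=np_mode, **kwargs)
```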
Module contents¶
-
pylablib.core.dataproc.
cut_out_regions
(wf, regions, x_column=None, ordered=False, multi_pass=True)¶ Cut the regions out of the wf based on x_column.
x_column is used to determine which column’s values to use to check if the point is in range (see
waveforms.get_x_column()
). If ordered==True
, then the function assumes that x_column is in ascending order. If multi_pass==False
, combine all indices before deleting the data in a single operation (works faster, but only for non-intersecting regions).
-
pylablib.core.dataproc.
cut_to_range
(wf, xs_range, x_column=None, ordered=False)¶ Cut the waveform to the given range based on x_column.
The range is defined as
xs_range[0]:xs_range[1]
, or infinite ifxs_range=None
. x_column is used to determine which column’s values to use to check if the point is in range (seewaveforms.get_x_column()
). If ordered==True
, then the function assumes that x_column is in ascending order.
-
pylablib.core.dataproc.
decimate
(wf, n=1, dec_mode='skip', axis=0, mode='drop')¶ Decimate the data.
Parameters: - wf – Data.
- n (int) – Decimation factor.
- dec_mode (str) – Decimation mode. Can be
-
'skip'
- just leave every n’th point while completely omitting everything else; -'bin'
or'mean'
- do a binning average; -'sum'
- sum points; -'min'
- leave min point; -'max'
- leave max point; -'median'
- leave median point (works as a median filter). - axis (int) – Axis along which to perform the decimation.
- mode (str) – Determines what to do with the last bin if it’s incomplete. Can be either
'drop'
(omit the last bin) or'leave'
(keep it).
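The binning decimation modes can be sketched for the 1D case (the incomplete last bin is dropped, i.e. `mode='drop'`; the index kept by `'skip'` is an assumption, and the name is hypothetical):

```python
import numpy as np

def decimate_sketch(wf, n=1, dec_mode="mean"):
    """Decimate a 1D array by factor n using the documented modes."""
    wf = np.asarray(wf)
    if dec_mode == "skip":
        return wf[n - 1::n]                      # keep every n'th point (convention assumed)
    bins = wf[:(len(wf) // n) * n].reshape(-1, n)  # drop the incomplete last bin
    funcs = {"bin": np.mean, "mean": np.mean, "sum": np.sum,
             "min": np.min, "max": np.max, "median": np.median}
    return funcs[dec_mode](bins, axis=1)
```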
-
pylablib.core.dataproc.
get_x_column
(wf, x_column=None, idx_default=False)¶ Get x column of the waveform.
- x_column can be
- an array: return as is;
'#'
: return index array;None
: equivalent to ‘#’ for 1D data ifidx_default==False
, or to0
otherwise;- integer: return the column with this index.
-
pylablib.core.dataproc.
is_ascending
(wf)¶ Check if the waveform is ascending.
If it has more than 1 dimension, check all lines along 0’th axis.
-
pylablib.core.dataproc.
is_descending
(wf)¶ Check if the waveform is descending.
If it has more than 1 dimension, check all lines along 0’th axis.
-
pylablib.core.dataproc.
is_linear
(wf)¶ Check if the waveform is linear (values go with a constant step).
If it has more than 1 dimension, check all lines along 0’th axis (with the same step for all).
-
pylablib.core.dataproc.
sliding_filter
(wf, n=1, dec_mode='bin', mode='reflect', cval=0.0)¶ Perform sliding filtering on the data.
Parameters: - wf – 1D array-like object.
- n (int) – bin width.
- dec_mode (str) –
- Decimation mode. Can be
'bin'
or'mean'
- do a binning average;'sum'
- sum points;'min'
- leave min point;'max'
- leave max point;'median'
- leave median point (works as a median filter).
- mode (str) – Expansion mode. Can be
'constant'
(added values are determined by cval),'nearest'
(added values are the end values of the waveform),'reflect'
(reflect waveform wrt its endpoint) or'wrap'
(wrap the values from the other side). - cval (float) – If
mode=='constant'
, determines the expanded values.