diff --git a/README.md b/README.md
index 6a5ff7f..2439459 100644
--- a/README.md
+++ b/README.md
@@ -19,35 +19,70 @@ Python methods for numerical differentiation of noisy data, including multi-obje
-## Table of contents
-- [PyNumDiff](#pynumdiff)
- - [Table of contents](#table-of-contents)
- - [Introduction](#introduction)
- - [Structure](#structure)
- - [Citation](#citation)
- - [PyNumDiff python package:](#pynumdiff-python-package)
- - [Optimization algorithm:](#optimization-algorithm)
- - [Getting Started](#getting-started)
- - [Prerequisite](#prerequisite)
- - [Installing](#installing)
- - [Usage](#usage)
- - [Code snippets](#code-snippets)
- - [Notebook examples](#notebook-examples)
- - [Important notes](#important-notes)
- - [Running the tests](#running-the-tests)
- - [License](#license)
-
## Introduction
-PyNumDiff is a Python package that implements various methods for computing numerical derivatives of noisy data, which
-can be a critical step in developing dynamic models or designing control. There are four different families of methods
-implemented in this repository: smoothing followed by finite difference calculation, local approximation with linear
-models, Kalman filtering based methods and total variation regularization methods. Most of these methods have multiple
-parameters involved to tune. We take a principled approach and propose a multi-objective optimization framework for
-choosing parameters that minimize a loss function to balance the faithfulness and smoothness of the derivative estimate.
-For more details, refer to [this paper](https://doi.org/10.1109/ACCESS.2020.3034077).
+PyNumDiff is a Python package that implements various methods for computing numerical derivatives of noisy data, which can be a critical step in developing dynamic models or designing controllers. There are seven different families of methods implemented in this repository:
+
+1. convolutional smoothing followed by finite difference calculation
+2. polynomial-fit-based methods
+3. iterated finite differencing
+4. total variation regularization of a finite difference derivative
+5. Kalman (RTS) smoothing
+6. Fourier spectral differentiation
+7. local approximation with linear models
+
+Most of these methods have multiple parameters, so we take a principled approach and propose a multi-objective optimization framework for choosing parameters that minimize a loss function to balance the faithfulness and smoothness of the derivative estimate. For more details, refer to [this paper](https://doi.org/10.1109/ACCESS.2020.3034077).
+
+## Installing
+
+Dependencies are listed in [pyproject.toml](https://github.com/florisvb/PyNumDiff/blob/master/pyproject.toml). They include the usual suspects like `numpy` and `scipy`, but also optionally `cvxpy`.
+
+The code is compatible with Python >= 3.10. Install from PyPI with `pip install pynumdiff`, from source with `pip install git+https://github.com/florisvb/PyNumDiff`, or from a local download with `pip install .`. Call `pip install pynumdiff[advanced]` to automatically install optional dependencies from the `advanced` list, like [CVXPY](https://www.cvxpy.org).
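+
+For example, from a shell:
+```bash
+pip install pynumdiff              # core package from PyPI
+pip install "pynumdiff[advanced]"  # plus optional dependencies like cvxpy
+```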
+
+## Usage
+
+For more details, read our [Sphinx documentation](https://pynumdiff.readthedocs.io/master/). The basic pattern of all differentiation methods is:
-## Structure
+```python
+somethingdiff(x, dt, **kwargs)
+```
+
+where `x` is the data, `dt` is the step size, and various keyword arguments control the behavior. Each method returns a smoothed estimate of the data and of its derivative as a tuple `(x_hat, dxdt_hat)`. Some methods support variable step size, in which case `dt` can also receive an array of values denoting the sample locations.
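+
+For instance, here is a minimal sketch (assuming these submodule import paths, with illustrative parameter values borrowed from the test suite):
+```python
+import numpy as np
+from pynumdiff.smooth_finite_difference import butterdiff
+from pynumdiff.polynomial_fit import splinediff
+
+t = np.linspace(0, 3, 31)                     # sample locations
+dt = t[1] - t[0]                              # constant step size
+x = np.sin(t) + 0.05*np.random.randn(len(t))  # noisy signal
+
+x_hat, dxdt_hat = butterdiff(x, dt, filter_order=3, cutoff_freq=0.074)
+
+# splinediff supports variable step size: pass the sample locations as `dt`
+x_hat2, dxdt_hat2 = splinediff(x, t, degree=3)
+```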
+
+You can provide the parameters yourself:
+```python
+from pynumdiff.submodule import method
+
+x_hat, dxdt_hat = method(x, dt, param1=val1, param2=val2, ...)
+```
+
+Or you can find parameters by calling the multi-objective optimization algorithm from the `optimize` module:
+```python
+import numpy as np
+from pynumdiff.optimize import optimize
+
+# estimate cutoff_frequency by (a) counting the number of true peaks per second in the data,
+# or (b) looking at the power spectrum and choosing a cutoff
+tvgamma = np.exp(-1.6*np.log(cutoff_frequency) - 0.71*np.log(dt) - 5.1) # see https://ieeexplore.ieee.org/abstract/document/9241009
+
+params, val = optimize(somethingdiff, x, dt, tvgamma=tvgamma, # smoothness hyperparameter; ignored if dxdt_truth is given
+                dxdt_truth=None, # give ground-truth derivative data here if available, in which case tvgamma goes unused
+                search_space_updates={'param1':[vals], 'param2':[vals], ...})
+
+print('Optimal parameters: ', params)
+x_hat, dxdt_hat = somethingdiff(x, dt, **params)
+```
+If no `search_space_updates` is given, a default search space is used; see the top of `_optimize.py` for the defaults.
+
+The heuristic in the snippet above works well for choosing `tvgamma`: `tvgamma=np.exp(-1.6*np.log(cutoff_frequency)-0.71*np.log(dt)-5.1)`, where `cutoff_frequency` is the highest frequency content of the signal in your data and `dt` is the timestep. Larger values of `tvgamma` produce smoother derivatives. The value of `tvgamma` is largely universal across methods, making it easy to compare results between methods. Be aware that the optimization is fairly compute-heavy: if you have a lot of data, run it on a subset, and note that it finishes much faster with quick differentiation methods like `savgoldiff` and `butterdiff`.
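+
+For instance, with hypothetical values (signal content up to about 1 Hz, sampled at 100 Hz):
+```python
+import numpy as np
+
+cutoff_frequency, dt = 1.0, 0.01  # hypothetical: ~1 Hz content, 100 Hz sampling
+tvgamma = np.exp(-1.6*np.log(cutoff_frequency) - 0.71*np.log(dt) - 5.1)
+print(tvgamma)  # ~0.16
+```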
+
+### Notebook examples
+
+Much more extensive usage is demonstrated in Jupyter notebooks:
+* Differentiation with different methods: [1_basic_tutorial.ipynb](https://github.com/florisvb/PyNumDiff/blob/master/examples/1_basic_tutorial.ipynb)
+* Parameter optimization with known ground truth (for demonstration purposes only): [2a_optimizing_parameters_with_dxdt_known.ipynb](https://github.com/florisvb/PyNumDiff/blob/master/examples/2a_optimizing_parameters_with_dxdt_known.ipynb)
+* Parameter optimization with unknown ground truth: [2b_optimizing_parameters_with_dxdt_unknown.ipynb](https://github.com/florisvb/PyNumDiff/blob/master/examples/2b_optimizing_parameters_with_dxdt_unknown.ipynb)
+* Automatic method suggestion: [3_automatic_method_suggestion.ipynb](https://github.com/florisvb/PyNumDiff/blob/master/examples/3_automatic_method_suggestion.ipynb)
+
+## Repo structure
- `.github/workflows` contains `.yaml` that configures our GitHub Actions continuous integration (CI) runs.
- `docs/` contains `make` files and `.rst` files to govern the way `sphinx` builds documentation, either locally by navigating to this folder and calling `make html` or in the cloud by `readthedocs.io`.
@@ -82,7 +117,6 @@ See CITATION.cff file as well as the following references.
journal = {Journal of Open Source Software}
}
-
### Optimization algorithm:
@article{ParamOptimizationDerivatives2020,
@@ -93,86 +127,16 @@ See CITATION.cff file as well as the following references.
year={2020}
}
-## Getting Started
-
-### Prerequisite
-
-PyNumDiff requires common packages like `numpy`, `scipy`, and `matplotlib`. For a full list, you can check the file [pyproject.toml](https://github.com/florisvb/PyNumDiff/blob/master/pyproject.toml)
-
-In addition, it also requires certain additional packages for select functions, though these are not required for a successful install of PyNumDiff:
-- Total Variation Regularization methods: [`cvxpy`](http://www.cvxpy.org/install/index.html)
-- `pytest` for unittests
-
-### Installing
-
-The code is compatible with >=Python 3.5. It can be installed using pip or directly from the source code. Basic installation options include:
-
-* From PyPI using pip: `pip install pynumdiff`.
-* From source using pip git+: `pip install git+https://github.com/florisvb/PyNumDiff`
-* From local source code using setup.py: Run `pip install .` from inside this directory. See below for example.
-
-Call `pip install pynumdiff[advanced]` to automatically install [CVXPY](https://www.cvxpy.org) along with PyNumDiff. Note: Some CVXPY solvers require a license, like ECOS and MOSEK. The latter offers a [free academic license](https://www.mosek.com/products/academic-licenses/).
-
-## Usage
-
-**PyNumDiff** uses [Sphinx](http://www.sphinx-doc.org/en/master/) for code documentation, so read more details about the API usage [there](https://pynumdiff.readthedocs.io/master/).
-
-### Code snippets
-
-* Basic Usage: you provide the parameters
-```python
-from pynumdiff.submodule import method
-
-x_hat, dxdt_hat = method(x, dt, param1=val1, param2=val2, ...)
-```
-* Intermediate usage: automated parameter selection through multi-objective optimization
-```python
-from pynumdiff.optimize import optimize
-
-params, val = optimize(method, x, dt, search_space={'param1':[vals], 'param2':[vals], ...},
- tvgamma=tvgamma, # hyperparameter, defaults to None if dxdt_truth given
- dxdt_truth=None) # or give ground truth data, in which case tvgamma unused
-print('Optimal parameters: ', params)
-x_hat, dxdt_hat = method(x, dt, **params)
-```
-If no `search_space` is given, a default one is used.
-
-* Advanced usage: automated parameter selection through multi-objective optimization using a user-defined cutoff frequency
-```python
-# cutoff_freq: estimate by (a) counting the number of true peaks per second in the data or (b) look at power spectra and choose cutoff
-log_gamma = -1.6*np.log(cutoff_frequency) -0.71*np.log(dt) - 5.1 # see: https://ieeexplore.ieee.org/abstract/document/9241009
-tvgamma = np.exp(log_gamma)
-
-params, val = optimize(method, x, dt, search_space={'param1':[options], 'param2':[options], ...},
- tvgamma=tvgamma)
-print('Optimal parameters: ', params)
-x_hat, dxdt_hat = method(x, dt, **params)
-```
-
-### Notebook examples
-
-We will frequently update simple examples for demo purposes, and here are currently exisiting ones:
-* Differentiation with different methods: [1_basic_tutorial.ipynb](https://github.com/florisvb/PyNumDiff/blob/master/examples/1_basic_tutorial.ipynb)
-* Parameter Optimization with known ground truth (only for demonstration purpose): [2a_optimizing_parameters_with_dxdt_known.ipynb](https://github.com/florisvb/PyNumDiff/blob/master/examples/2a_optimizing_parameters_with_dxdt_known.ipynb)
-* Parameter Optimization with unknown ground truth: [2b_optimizing_parameters_with_dxdt_unknown.ipynb](https://github.com/florisvb/PyNumDiff/blob/master/examples/2b_optimizing_parameters_with_dxdt_unknown.ipynb)
-
-### Important notes
-
-* Larger values of `tvgamma` produce smoother derivatives
-* The value of `tvgamma` is largely universal across methods, making it easy to compare method results
-* The optimization is not fast. Run it on subsets of your data if you have a lot of data. It will also be much faster with faster differentiation methods, like `savgoldiff` and `butterdiff`.
-* The following heuristic works well for choosing `tvgamma`, where `cutoff_frequency` is the highest frequency content of the signal in your data, and `dt` is the timestep: `tvgamma=np.exp(-1.6*np.log(cutoff_frequency)-0.71*np.log(dt)-5.1)`
-
-### Running the tests
+## Running the tests
We are using GitHub Actions for continuous integration testing.
-To run tests locally, type:
+Run tests locally by navigating to the repo in a terminal and calling
```bash
-> pytest pynumdiff
+> pytest -s
```
-Add the flag `--plot` to see plots of the methods against test functions. Add the flag `--bounds` to print log error bounds (useful when changing method behavior).
+Add the flag `--plot` to see plots of the methods against test functions. Add the flag `--bounds` to print $\log_{10}$ error bounds (useful when changing method behavior).
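+
+For example:
+```bash
+pytest -s --plot    # also show plots of each method against the test functions
+pytest -s --bounds  # print log error bounds instead of asserting against them
+```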
## License
diff --git a/pynumdiff/polynomial_fit/_polynomial_fit.py b/pynumdiff/polynomial_fit/_polynomial_fit.py
index 02a2f92..9e319a8 100644
--- a/pynumdiff/polynomial_fit/_polynomial_fit.py
+++ b/pynumdiff/polynomial_fit/_polynomial_fit.py
@@ -10,7 +10,8 @@ def splinediff(x, dt, params=None, options={}, degree=3, s=None, num_iterations=
scipy.interpolate.UnivariateSpline.
:param np.array[float] x: data to differentiate
- :param float dt: step size
+    :param float or array[float] dt: This function supports variable step size. This parameter is either the constant
+        step size if given as a single float, or the sample locations if given as an array of the same length as :code:`x`.
:param list params: (**deprecated**, prefer :code:`degree`, :code:`cutoff_freq`, and :code:`num_iterations`)
:param dict options: (**deprecated**, prefer :code:`num_iterations`) an empty dictionary or {'iterate': (bool)}
:param int degree: polynomial degree of the spline. A kth degree spline can be differentiated k times.
@@ -29,7 +30,11 @@ def splinediff(x, dt, params=None, options={}, degree=3, s=None, num_iterations=
if 'iterate' in options and options['iterate']:
num_iterations = params[2]
- t = np.arange(len(x))*dt
+ if isinstance(dt, (np.ndarray, list)): # support variable step size for this function
+        if len(x) != len(dt): raise ValueError("If `dt` is given as array-like, it must have the same length as `x`.")
+ t = dt
+ else:
+ t = np.arange(len(x))*dt
x_hat = x
for _ in range(num_iterations):
diff --git a/pynumdiff/tests/test_diff_methods.py b/pynumdiff/tests/test_diff_methods.py
index 65de063..f6a001a 100644
--- a/pynumdiff/tests/test_diff_methods.py
+++ b/pynumdiff/tests/test_diff_methods.py
@@ -8,15 +8,17 @@
from ..total_variation_regularization import velocity, acceleration, jerk, iterative_velocity, smooth_acceleration, jerk_sliding
from ..kalman_smooth import constant_velocity, constant_acceleration, constant_jerk
from ..smooth_finite_difference import mediandiff, meandiff, gaussiandiff, friedrichsdiff, butterdiff
-# Function alias for testing a case where parameters change the behavior in a big way
+# Function aliases for testing cases where parameters change the behavior in a big way, so error limits can be indexed in the dict below
def iterated_second_order(*args, **kwargs): return second_order(*args, **kwargs)
def iterated_fourth_order(*args, **kwargs): return fourth_order(*args, **kwargs)
+def spline_irreg_step(*args, **kwargs): return splinediff(*args, **kwargs)
dt = 0.1
t = np.linspace(0, 3, 31) # sample locations, including the endpoint
tt = np.linspace(0, 3) # full domain, for visualizing denser plots
np.random.seed(7) # for repeatability of the test, so we don't get random failures
noise = 0.05*np.random.randn(*t.shape)
+t_irreg = t + np.random.uniform(-dt/3, dt/3, t.shape) # jitter the sample locations; ±dt/3 keeps them monotonic
# Analytic (function, derivative) pairs on which to test differentiation methods.
test_funcs_and_derivs = [
@@ -33,32 +35,32 @@ def iterated_fourth_order(*args, **kwargs): return fourth_order(*args, **kwargs)
diff_methods_and_params = [
(first_order, {}), (second_order, {}), (fourth_order, {}), # empty dictionary for the case of no parameters
(iterated_second_order, {'num_iterations':5}), (iterated_fourth_order, {'num_iterations':10}),
- (lineardiff, {'order':3, 'gamma':5, 'window_size':11, 'solver':'CLARABEL'}), (lineardiff, [3, 5, 11], {'solver':'CLARABEL'}),
- (polydiff, {'degree':2, 'window_size':3}), (polydiff, [2, 3]),
- (savgoldiff, {'degree':2, 'window_size':5, 'smoothing_win':5}), (savgoldiff, [2, 5, 5]),
- (spectraldiff, {'high_freq_cutoff':0.2}), (spectraldiff, [0.2]),
- (mediandiff, {'window_size':3, 'num_iterations':2}), (mediandiff, [3, 2], {'iterate':True}),
(meandiff, {'window_size':3, 'num_iterations':2}), (meandiff, [3, 2], {'iterate':True}),
+ (mediandiff, {'window_size':3, 'num_iterations':2}), (mediandiff, [3, 2], {'iterate':True}),
(gaussiandiff, {'window_size':5}), (gaussiandiff, [5]),
(friedrichsdiff, {'window_size':5}), (friedrichsdiff, [5]),
(butterdiff, {'filter_order':3, 'cutoff_freq':0.074}), (butterdiff, [3, 0.074]),
(splinediff, {'degree':5, 's':2}), (splinediff, [5, 2]),
+ (spline_irreg_step, {'degree':5, 's':2}),
+ (polydiff, {'degree':2, 'window_size':3}), (polydiff, [2, 3]),
+ (savgoldiff, {'degree':2, 'window_size':5, 'smoothing_win':5}), (savgoldiff, [2, 5, 5]),
(constant_velocity, {'r':1e-4, 'q':1e-2}), (constant_velocity, [1e-4, 1e-2]),
(constant_acceleration, {'r':1e-4, 'q':1e-1}), (constant_acceleration, [1e-4, 1e-1]),
(constant_jerk, {'r':1e-4, 'q':10}), (constant_jerk, [1e-4, 10]),
- # TODO (known_dynamics), but presently it doesn't calculate a derivative
(velocity, {'gamma':0.5}), (velocity, [0.5]),
(acceleration, {'gamma':1}), (acceleration, [1]),
(jerk, {'gamma':10}), (jerk, [10]),
(iterative_velocity, {'num_iterations':5, 'gamma':0.05}), (iterative_velocity, [5, 0.05]),
(smooth_acceleration, {'gamma':2, 'window_size':5}), (smooth_acceleration, [2, 5]),
- (jerk_sliding, {'gamma':1, 'window_size':15}), (jerk_sliding, [1], {'window_size':15})
+ (jerk_sliding, {'gamma':1, 'window_size':15}), (jerk_sliding, [1], {'window_size':15}),
+ (spectraldiff, {'high_freq_cutoff':0.2}), (spectraldiff, [0.2]),
+ (lineardiff, {'order':3, 'gamma':5, 'window_size':11, 'solver':'CLARABEL'}), (lineardiff, [3, 5, 11], {'solver':'CLARABEL'})
]
# All the testing methodology follows the exact same pattern; the only thing that changes is the
-# closeness to the right answer various methods achieve with the given parameterizations. So index a
-# big ol' table by the method, then the test function, then the pair of quantities we're comparing.
-# The tuples are order of magnitude of (L2,Linf) distances for pairs
+# closeness to the right answer various methods achieve with the given parameterizations and random seed.
+# So index a big ol' table by method, then by test function number, and finally by the truth-result pair
+# being compared. The tuples are order of magnitude of (L2,Linf) distances for pairs
# (x,x_hat), (dxdt,dxdt_hat), (x,x_hat_noisy), (dxdt,dxdt_hat_noisy).
error_bounds = {
first_order: [[(-25, -25), (-25, -25), (0, 0), (1, 1)],
@@ -91,42 +93,18 @@ def iterated_fourth_order(*args, **kwargs): return fourth_order(*args, **kwargs)
[(0, -1), (1, 1), (0, 0), (1, 1)],
[(1, 1), (2, 2), (1, 1), (2, 2)],
[(1, 1), (3, 3), (1, 1), (3, 3)]],
- lineardiff: [[(-6, -6), (-5, -6), (0, -1), (0, 0)],
- [(0, 0), (2, 1), (0, 0), (2, 1)],
- [(1, 0), (2, 2), (1, 0), (2, 2)],
- [(1, 0), (2, 1), (1, 0), (2, 1)],
- [(1, 1), (2, 2), (1, 1), (2, 2)],
- [(1, 1), (3, 3), (1, 1), (3, 3)]],
- polydiff: [[(-14, -15), (-14, -14), (0, -1), (1, 1)],
- [(-14, -14), (-13, -13), (0, -1), (1, 1)],
- [(-14, -14), (-13, -13), (0, -1), (1, 1)],
- [(-2, -2), (0, 0), (0, -1), (1, 1)],
- [(0, 0), (1, 1), (0, -1), (1, 1)],
- [(0, 0), (3, 3), (0, 0), (3, 3)]],
- savgoldiff: [[(-9, -10), (-13, -14), (0, -1), (0, 0)],
- [(-9, -10), (-13, -13), (0, -1), (0, 0)],
- [(-2, -2), (-1, -1), (0, -1), (0, 0)],
- [(0, -1), (0, 0), (0, 0), (1, 0)],
- [(1, 1), (2, 2), (1, 1), (2, 2)],
- [(1, 1), (3, 3), (1, 1), (3, 3)]],
- spectraldiff: [[(-9, -10), (-14, -15), (-1, -1), (0, 0)],
- [(0, 0), (1, 1), (0, 0), (1, 1)],
- [(1, 1), (1, 1), (1, 1), (1, 1)],
- [(0, 0), (1, 1), (0, 0), (1, 1)],
- [(1, 1), (2, 2), (1, 1), (2, 2)],
- [(1, 1), (3, 3), (1, 1), (3, 3)]],
- mediandiff: [[(-25, -25), (-25, -25), (-1, -1), (0, 0)],
- [(0, 0), (1, 1), (0, 0), (1, 1)],
- [(0, 0), (1, 1), (0, 0), (1, 1)],
- [(-1, -1), (0, 0), (0, 0), (1, 1)],
- [(0, 0), (2, 2), (0, 0), (2, 2)],
- [(1, 1), (3, 3), (1, 1), (3, 3)]],
meandiff: [[(-25, -25), (-25, -25), (0, -1), (0, 0)],
[(0, 0), (1, 1), (0, 0), (1, 1)],
[(0, 0), (1, 1), (0, 0), (1, 1)],
[(0, 0), (1, 1), (0, 0), (1, 1)],
[(1, 1), (2, 2), (1, 1), (2, 2)],
[(1, 1), (3, 3), (1, 1), (3, 3)]],
+ mediandiff: [[(-25, -25), (-25, -25), (-1, -1), (0, 0)],
+ [(0, 0), (1, 1), (0, 0), (1, 1)],
+ [(0, 0), (1, 1), (0, 0), (1, 1)],
+ [(-1, -1), (0, 0), (0, 0), (1, 1)],
+ [(0, 0), (2, 2), (0, 0), (2, 2)],
+ [(1, 1), (3, 3), (1, 1), (3, 3)]],
gaussiandiff: [[(-14, -15), (-14, -14), (0, -1), (1, 0)],
[(-1, -1), (1, 0), (0, 0), (1, 1)],
[(0, 0), (1, 1), (0, 0), (1, 1)],
@@ -151,24 +129,24 @@ def iterated_fourth_order(*args, **kwargs): return fourth_order(*args, **kwargs)
[(0, 0), (1, 1), (0, 0), (1, 1)],
[(1, 0), (2, 2), (1, 0), (2, 2)],
[(1, 0), (3, 3), (1, 0), (3, 3)]],
- constant_velocity: [[(-25, -25), (-25, -25), (0, -1), (0, 0)],
- [(-3, -3), (-2, -2), (0, -1), (0, 0)],
- [(-1, -2), (0, 0), (0, -1), (0, 0)],
- [(-1, -1), (1, 0), (0, -1), (1, 0)],
- [(1, 1), (2, 2), (1, 1), (2, 2)],
- [(1, 1), (3, 3), (1, 1), (3, 3)]],
- constant_acceleration: [[(-25, -25), (-25, -25), (0, -1), (0, 0)],
- [(-3, -4), (-3, -3), (0, -1), (0, 0)],
- [(-3, -3), (-2, -2), (0, -1), (0, 0)],
- [(-1, -1), (0, 0), (0, -1), (0, 0)],
- [(1, 1), (2, 2), (1, 1), (2, 2)],
- [(1, 1), (3, 3), (1, 1), (3, 3)]],
- constant_jerk: [[(-25, -25), (-25, -25), (0, -1), (0, 0)],
- [(-4, -5), (-3, -4), (0, -1), (0, 0)],
- [(-3, -4), (-2, -3), (0, -1), (0, 0)],
- [(-1, -2), (0, 0), (0, -1), (0, 0)],
- [(1, 0), (2, 1), (1, 0), (2, 1)],
- [(1, 1), (3, 3), (1, 1), (3, 3)]],
+ spline_irreg_step: [[(-14, -14), (-14, -14), (-1, -1), (0, 0)],
+ [(-14, -14), (-13, -13), (-1, -1), (0, 0)],
+ [(-14, -14), (-13, -13), (-1, -1), (0, 0)],
+ [(0, 0), (1, 1), (0, 0), (1, 1)],
+ [(1, 0), (2, 2), (1, 0), (2, 2)],
+ [(1, 0), (3, 3), (1, 0), (3, 3)]],
+ polydiff: [[(-14, -15), (-14, -14), (0, -1), (1, 1)],
+ [(-14, -14), (-13, -13), (0, -1), (1, 1)],
+ [(-14, -14), (-13, -13), (0, -1), (1, 1)],
+ [(-2, -2), (0, 0), (0, -1), (1, 1)],
+ [(0, 0), (1, 1), (0, -1), (1, 1)],
+ [(0, 0), (3, 3), (0, 0), (3, 3)]],
+ savgoldiff: [[(-9, -10), (-13, -14), (0, -1), (0, 0)],
+ [(-9, -10), (-13, -13), (0, -1), (0, 0)],
+ [(-2, -2), (-1, -1), (0, -1), (0, 0)],
+ [(0, -1), (0, 0), (0, 0), (1, 0)],
+ [(1, 1), (2, 2), (1, 1), (2, 2)],
+ [(1, 1), (3, 3), (1, 1), (3, 3)]],
velocity: [[(-25, -25), (-18, -19), (0, -1), (1, 0)],
[(-12, -12), (-11, -12), (-1, -1), (-1, -2)],
[(0, 0), (1, 0), (0, 0), (1, 0)],
@@ -204,13 +182,43 @@ def iterated_fourth_order(*args, **kwargs): return fourth_order(*args, **kwargs)
[(-14, -14), (-14, -14), (0, -1), (0, 0)],
[(-1, -1), (0, 0), (0, -1), (1, 0)],
[(0, 0), (2, 2), (0, 0), (2, 2)],
- [(1, 1), (3, 3), (1, 1), (3, 3)]]
+ [(1, 1), (3, 3), (1, 1), (3, 3)]],
+ constant_velocity: [[(-25, -25), (-25, -25), (0, -1), (0, 0)],
+ [(-3, -3), (-2, -2), (0, -1), (0, 0)],
+ [(-1, -2), (0, 0), (0, -1), (0, 0)],
+ [(-1, -1), (1, 0), (0, -1), (1, 0)],
+ [(1, 1), (2, 2), (1, 1), (2, 2)],
+ [(1, 1), (3, 3), (1, 1), (3, 3)]],
+ constant_acceleration: [[(-25, -25), (-25, -25), (0, -1), (0, 0)],
+ [(-3, -4), (-3, -3), (0, -1), (0, 0)],
+ [(-3, -3), (-2, -2), (0, -1), (0, 0)],
+ [(-1, -1), (0, 0), (0, -1), (0, 0)],
+ [(1, 1), (2, 2), (1, 1), (2, 2)],
+ [(1, 1), (3, 3), (1, 1), (3, 3)]],
+ constant_jerk: [[(-25, -25), (-25, -25), (0, -1), (0, 0)],
+ [(-4, -5), (-3, -4), (0, -1), (0, 0)],
+ [(-3, -4), (-2, -3), (0, -1), (0, 0)],
+ [(-1, -2), (0, 0), (0, -1), (0, 0)],
+ [(1, 0), (2, 1), (1, 0), (2, 1)],
+ [(1, 1), (3, 3), (1, 1), (3, 3)]],
+ spectraldiff: [[(-9, -10), (-14, -15), (-1, -1), (0, 0)],
+ [(0, 0), (1, 1), (0, 0), (1, 1)],
+ [(1, 1), (1, 1), (1, 1), (1, 1)],
+ [(0, 0), (1, 1), (0, 0), (1, 1)],
+ [(1, 1), (2, 2), (1, 1), (2, 2)],
+ [(1, 1), (3, 3), (1, 1), (3, 3)]],
+ lineardiff: [[(-6, -6), (-5, -6), (0, -1), (0, 0)],
+ [(0, 0), (2, 1), (0, 0), (2, 1)],
+ [(1, 0), (2, 2), (1, 0), (2, 2)],
+ [(1, 0), (2, 1), (1, 0), (2, 1)],
+ [(1, 1), (2, 2), (1, 1), (2, 2)],
+ [(1, 1), (3, 3), (1, 1), (3, 3)]]
}
# Essentially run the cartesian product of [diff methods] x [test functions] through this one test
@mark.filterwarnings("ignore::DeprecationWarning") # I want to test the old and new functionality intentionally
-@mark.parametrize("diff_method_and_params", diff_methods_and_params)
-@mark.parametrize("test_func_and_deriv", test_funcs_and_derivs)
+@mark.parametrize("diff_method_and_params", diff_methods_and_params) # things like splinediff, with their parameters
+@mark.parametrize("test_func_and_deriv", test_funcs_and_derivs) # analytic functions, with their true derivatives
def test_diff_method(diff_method_and_params, test_func_and_deriv, request): # request gives access to context
# unpack
diff_method, params = diff_method_and_params[:2]
@@ -222,50 +230,57 @@ def test_diff_method(diff_method_and_params, test_func_and_deriv, request): # re
try: import cvxpy
except: warn(f"Cannot import cvxpy, skipping {diff_method} test."); return
- # sample the true function and make noisy samples, and sample true derivative
- x = f(t)
+ # sample the true function and true derivative, and make noisy samples
+    if diff_method in [spline_irreg_step]: # list of methods that can handle variable dt
+ x = f(t_irreg)
+ dxdt = df(t_irreg)
+ _t = t_irreg
+ else:
+ x = f(t)
+ dxdt = df(t)
+ _t = dt
x_noisy = x + noise
- dxdt = df(t)
# differentiate without and with noise, accounting for new and old styles of calling functions
- x_hat, dxdt_hat = diff_method(x, dt, **params) if isinstance(params, dict) \
- else diff_method(x, dt, params) if (isinstance(params, list) and len(diff_method_and_params) < 3) \
- else diff_method(x, dt, params, options)
- x_hat_noisy, dxdt_hat_noisy = diff_method(x_noisy, dt, **params) if isinstance(params, dict) \
- else diff_method(x_noisy, dt, params) if (isinstance(params, list) and len(diff_method_and_params) < 3) \
- else diff_method(x_noisy, dt, params, options)
+ x_hat, dxdt_hat = diff_method(x, _t, **params) if isinstance(params, dict) \
+ else diff_method(x, _t, params) if (isinstance(params, list) and len(diff_method_and_params) < 3) \
+ else diff_method(x, _t, params, options)
+ x_hat_noisy, dxdt_hat_noisy = diff_method(x_noisy, _t, **params) if isinstance(params, dict) \
+ else diff_method(x_noisy, _t, params) if (isinstance(params, list) and len(diff_method_and_params) < 3) \
+ else diff_method(x_noisy, _t, params, options)
# plotting code
if request.config.getoption("--plot") and not isinstance(params, list): # Get the plot flag from pytest configuration
fig, axes = request.config.plots[diff_method] # get the appropriate plot, set up by the store_plots fixture in conftest.py
- axes[i, 0].plot(t, f(t))
- axes[i, 0].plot(t, x, 'C0+')
- axes[i, 0].plot(t, x_hat, 'C2.', ms=4)
+ t_ = t_irreg if diff_method in [spline_irreg_step] else t
+ axes[i, 0].plot(t_, f(t_))
+ axes[i, 0].plot(t_, x, 'C0+')
+ axes[i, 0].plot(t_, x_hat, 'C2.', ms=4)
axes[i, 0].plot(tt, df(tt))
- axes[i, 0].plot(t, dxdt_hat, 'C1+')
+ axes[i, 0].plot(t_, dxdt_hat, 'C1+')
axes[i, 0].set_ylabel(latex_name, rotation=0, labelpad=50)
if i < len(test_funcs_and_derivs)-1: axes[i, 0].set_xticklabels([])
else: axes[i, 0].set_xlabel('t')
if i == 0: axes[i, 0].set_title('noiseless')
- axes[i, 1].plot(t, f(t), label=r"$x(t)$")
- axes[i, 1].plot(t, x_noisy, 'C0+', label=r"$x_n$")
- axes[i, 1].plot(t, x_hat_noisy, 'C2.', ms=4, label=r"$\hat{x}_n$")
+ axes[i, 1].plot(t_, f(t_), label=r"$x(t)$")
+ axes[i, 1].plot(t_, x_noisy, 'C0+', label=r"$x_n$")
+ axes[i, 1].plot(t_, x_hat_noisy, 'C2.', ms=4, label=r"$\hat{x}_n$")
axes[i, 1].plot(tt, df(tt), label=r"$\frac{dx(t)}{dt}$")
- axes[i, 1].plot(t, dxdt_hat_noisy, 'C1+', label=r"$\hat{\frac{dx}{dt}}_n$")
+ axes[i, 1].plot(t_, dxdt_hat_noisy, 'C1+', label=r"$\hat{\frac{dx}{dt}}_n$")
if i < len(test_funcs_and_derivs)-1: axes[i, 1].set_xticklabels([])
else: axes[i, 1].set_xlabel('t')
axes[i, 1].set_yticklabels([])
if i == 0: axes[i, 1].set_title('with noise')
# check x_hat and x_hat_noisy are close to x and that dxdt_hat and dxdt_hat_noisy are close to dxdt
- if request.config.getoption("--bounds"): print("\n[", end="")
+ if request.config.getoption("--bounds"): print("\n[", end="") # print stuff if the user gave the --bounds flag
for j,(a,b) in enumerate([(x,x_hat), (dxdt,dxdt_hat), (x,x_hat_noisy), (dxdt,dxdt_hat_noisy)]):
l2_error = np.linalg.norm(a - b)
linf_error = np.max(np.abs(a - b))
# bounds-printing for establishing bounds
if request.config.getoption("--bounds"):
- #print(f"({l2_error},{linf_error})", end=", ")
+ #print(f"({l2_error},{linf_error})", end=", ") # <- in case you want to print actual errors rather than powers
print(f"({int(np.ceil(np.log10(l2_error))) if l2_error > 0 else -25}, {int(np.ceil(np.log10(linf_error))) if linf_error > 0 else -25})", end=", ")
# bounds checking
else: