Make backers public #143
Conversation
| """This module implements some common finite difference schemes | ||
| """ | ||
| from ._finite_difference import first_order, second_order, fourth_order | ||
| from ._finite_difference import finite_difference, first_order, second_order, fourth_order |
There are simply more public methods now. The docstring now notes that the other methods in the module really just call this backing one.
```diff
-def _finite_difference(x, dt, num_iterations, order):
-    """Helper for all finite difference methods, since their iteration structure is all the same.
+def finite_difference(x, dt, num_iterations, order):
+    """Perform iterated finite difference of a given order. This serves as the common backing function for
```
Had to add more proper docstrings.
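As a sketch of what the common backing function might look like: the signature follows the diff (minus the `order` dispatch), but the iteration body below is an assumption for illustration, not the project's actual implementation.

```python
import numpy as np

def finite_difference(x, dt, num_iterations=1):
    """Sketch only: iterated finite differencing. The real function also
    takes an `order` argument and dispatches on it; this body is an
    assumption, not the project's implementation."""
    x = np.asarray(x, dtype=float)
    for _ in range(num_iterations):
        # central differences in the interior, one-sided at the edges
        dxdt = np.gradient(x, dt)
        # re-integrate the estimate; repeating this acts as a mild smoother
        x = x[0] + np.concatenate(([0.0], np.cumsum(dxdt[:-1]) * dt))
    return x, dxdt
```

With `num_iterations=1` this reduces to a single second-order central difference.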
…rward and backward difference, it doesn't do as well as just plain first order at the edges
| """ | ||
| xhat0 = np.zeros(A.shape[0]); xhat0[0] = x[0] # See #110 for why this choice of xhat0 | ||
| q = 10**int(np.log10(qr_ratio)/2) # even-ish split of the powers across 0 | ||
| r = q/qr_ratio |
Addressing #139.
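To see what the q/r split in the diff does: given a target ratio of process noise to measurement noise, it assigns roughly half the powers of ten to `q` and derives `r` so the ratio is preserved. A small sketch of just that arithmetic (the function name here is illustrative, not from the PR):

```python
import numpy as np

def split_qr(qr_ratio):
    """Split a desired Kalman q/r ratio into process noise q and measurement
    noise r whose magnitudes straddle 1 roughly evenly, mirroring the
    arithmetic in the diff."""
    q = 10**int(np.log10(qr_ratio) / 2)  # about half the powers of ten go to q
    r = q / qr_ratio                     # r picks up the rest, so q/r == qr_ratio
    return q, r
```

For example, a ratio of 1e6 yields q = 1e3 and r = 1e-3 rather than, say, q = 1e6 and r = 1.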
…referred order rather than alphabetically
| """ | ||
| from ._kalman_smooth import constant_velocity, constant_acceleration, constant_jerk, known_dynamics | ||
|
|
||
| __all__ = ['constant_velocity', 'constant_acceleration', 'constant_jerk', 'known_dynamics'] # So these get treated as direct members of the module by sphinx |
For modules where I manually reference methods in the .rst, there is no need to list out __all__ anymore.
```diff
 method_params_and_bounds = {
-    spectraldiff: ({'even_extension': [True, False],
-                    'pad_to_zero_dxdt': [True, False],
+    spectraldiff: ({'even_extension': (True, False),  # give boolean or numerical params in a list to scipy.optimize over them
```
Per #144
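One way to read the tuple-vs-list convention in the diff: tuple-valued entries mark categorical choices to be enumerated exhaustively, while list-valued numeric ranges are handed to the continuous optimizer. A minimal sketch of the categorical half, assuming that convention (this helper is illustrative, not the PR's actual code):

```python
from itertools import product

def categorical_grid(params):
    """Illustrative: enumerate every combination of the tuple-valued
    (categorical) parameters; list-valued numeric ranges would instead be
    left to a continuous optimizer such as scipy.optimize."""
    cats = {k: v for k, v in params.items() if isinstance(v, tuple)}
    keys = list(cats)
    return [dict(zip(keys, combo)) for combo in product(*cats.values())]
```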
```diff
     :param np.array[float] dxdt_truth: actual time series of the derivative of x, if known
     :param float tvgamma: Only used if :code:`dxdt_truth` is given. Regularization value used to select for parameters
         that yield a smooth derivative. Larger value results in a smoother derivative.
+    :param dict search_space_updates: At the top of :code:`_optimize.py`, each method has a search space of parameters
```
I renamed this param, because the old name was a little too grandiose for what it is.
```diff
-methods = [second_order, fourth_order, mediandiff, meandiff, gaussiandiff, friedrichsdiff, butterdiff,
-           splinediff, spectraldiff, polydiff, savgoldiff, constant_velocity, constant_acceleration, constant_jerk]
+methods = [finite_difference, mediandiff, meandiff, gaussiandiff, friedrichsdiff, butterdiff,
```
shorter list now
```diff
-def metrics(x, dt, x_hat, dxdt_hat, x_truth=None, dxdt_truth=None, padding=0):
-    """Evaluate x_hat based on various metrics, depending on whether dxdt_truth and x_truth are known or not.
+def rmse(x, dt, x_hat, dxdt_hat, x_truth=None, dxdt_truth=None, padding=0):
```
I decided to rename this one, because all the metrics it's calculating are RMSE-related.
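A sketch of what an evaluator with the renamed signature could look like; the body below is an assumption for illustration, not the PR's implementation. `padding` trims samples at each edge, where finite-difference estimates are least reliable, before scoring.

```python
import numpy as np

def rmse(x, dt, x_hat, dxdt_hat, x_truth=None, dxdt_truth=None, padding=0):
    """Sketch matching the renamed signature (body is an assumption):
    RMSE of the smoothed signal and, when the truth is known, of the
    derivative estimate, ignoring `padding` samples at each edge."""
    s = slice(padding, -padding if padding else None)
    def rms(err):
        return float(np.sqrt(np.mean(np.asarray(err)[s] ** 2)))
    # score x_hat against the truth if available, else against the noisy data
    rmse_x = rms(x_hat - (x_truth if x_truth is not None else x))
    rmse_dxdt = rms(dxdt_hat - dxdt_truth) if dxdt_truth is not None else None
    return rmse_x, rmse_dxdt
```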
I decided to take on a bit of #138 with a "yes and". The old methods all live where they did, but I've also exposed the common backing methods of TVR, Kalman, and Iterated FD. While working on the RTS smoothing/Kalman one, I took on #139 too, because I couldn't help myself.
That then entailed some changes to the tests and to the optimization code. Most significantly, I realized Iterated FD has a numerical order, but only some orders are implemented, so the search space for that parameter needs to be categorical. Handling categoricals is a challenge I chose to leave alone the last time I worked on the optimizer, but I'm happy to say I've now successfully wrestled that bear.
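The usual pattern for mixing categorical and continuous parameters is an outer enumeration over categorical combinations with a continuous search inside each one. A minimal illustration of that structure, with a coarse grid standing in for the continuous optimizer (none of these names are from the PR):

```python
import numpy as np
from itertools import product

def optimize_with_categoricals(loss, categorical, numeric_grid):
    """Illustrative pattern only: enumerate categorical combinations in an
    outer loop, search the numeric parameters in the inner loop (a coarse
    grid here; the real optimizer would use scipy.optimize), keep the best."""
    best_val, best_params = np.inf, None
    for cats in product(*categorical.values()):
        for nums in product(*numeric_grid.values()):
            params = {**dict(zip(categorical, cats)), **dict(zip(numeric_grid, nums))}
            val = loss(params)
            if val < best_val:
                best_val, best_params = val, params
    return best_val, best_params
```

This keeps the continuous optimizer blind to the categorical axis, which is exactly why orders that are only partially implemented can't simply be handed to it as a numeric bound.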