Add interface for smooth problems #305

MaxenceGollier wants to merge 4 commits into JuliaSmoothOptimizers:master
Conversation
Pull request overview
Adds a “smooth problem” interface so the library’s solvers can be called directly on unregularized `AbstractNLPModel` / `AbstractNLSModel` inputs by automatically wrapping them in a `Regularized*Model` with a null regularizer (supporting issue #302’s goal of running solvers on unregularized problems).
Changes:
- Added new tests exercising “smooth” (unregularized) NLP/NLS calls for multiple solvers.
- Extended the test harness to use `SolverTest` and to include the new smooth test file.
- Added convenience overloads (via `@eval`) so `R2`/`R2N`/`R2DH`/`TR`/`TRDH` accept `AbstractNLPModel` and `LM`/`LMTR` accept `AbstractNLSModel`.
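As a minimal sketch of how such a convenience overload works (all types here are hypothetical stand-ins; the real `AbstractNLPModel`, `RegularizedNLPModel`, and `NullRegularizer` live in NLPModels, RegularizedProblems, and ShiftedProximalOperators, and the real solvers actually solve the problem):

```julia
# Stand-ins for the real package types, for illustration only.
abstract type AbstractNLPModel{T, V} end
struct NullRegularizer{T} end                      # h(x) = 0 for all x
struct RegularizedNLPModel{M, H}                   # pairs a smooth model with a regularizer
    model::M
    h::H
end

struct ToyNLP{T, V} <: AbstractNLPModel{T, V} end  # a dummy smooth model

# The PR's pattern: generate one overload per solver name via @eval.
solver_calls = String[]
for solver in [:R2, :R2N]
    @eval function $solver(nlp::AbstractNLPModel{T, V}; kwargs...) where {T, V}
        h = NullRegularizer{T}()                   # null regularizer: the problem stays smooth
        reg_nlp = RegularizedNLPModel(nlp, h)
        push!(solver_calls, string($(QuoteNode(solver))))  # stand-in for the real solver call
        return reg_nlp
    end
end

reg = R2(ToyNLP{Float64, Vector{Float64}}())
reg.h isa NullRegularizer{Float64}                 # true: the wrapper carries a null regularizer
```

The element type `T` of the model parametrizes the null regularizer, so no precision information is lost in the wrapping.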
Reviewed changes
Copilot reviewed 3 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/utils.jl | Adds smooth-problem overloads that wrap inputs with a null regularizer before calling existing regularized entrypoints. |
| test/test-smooth.jl | New testsets validating smooth (unregularized) NLP and NLS behavior for multiple solvers. |
| test/runtests.jl | Imports `SolverTest` and includes the new smooth test file in the test suite. |
| Project.toml | Adds `SolverTest` as a test dependency and constrains its compatible version. |
src/utils.jl (Outdated)
```julia
reg_nls = RegularizedNLSModel(nlp, h)
$solver(reg_nls; kwargs...)
```
The NLS smooth-problem overload uses `RegularizedNLSModel`, but the rest of this package’s NLS path (e.g. `LM(nls, h, options, ...)`) wraps NLS models with `RegularizedNLPModel`. To avoid relying on a type that isn’t otherwise used here (and may not exist in the current dependency set), consider switching this wrapper to `RegularizedNLPModel(nlp, h)` for consistency with the existing `LM`/`LMTR` constructors.
```diff
- reg_nls = RegularizedNLSModel(nlp, h)
- $solver(reg_nls; kwargs...)
+ reg_nlp = RegularizedNLPModel(nlp, h)
+ $solver(reg_nlp; kwargs...)
```
`RegularizedNLSModel <: AbstractRegularizedNLPModel`, so this is fine with the API from RegularizedProblems, though I agree that the subtyping is odd.
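A tiny illustration of why the original code still type-checks under that hierarchy (types mocked here; in the real RegularizedProblems package both concrete wrappers subtype `AbstractRegularizedNLPModel`):

```julia
# Mocked hierarchy mirroring the subtyping described above.
abstract type AbstractRegularizedNLPModel end
struct RegularizedNLPModel <: AbstractRegularizedNLPModel end
struct RegularizedNLSModel <: AbstractRegularizedNLPModel end  # the "odd" subtype

# A solver entrypoint constrained to the abstract type accepts both wrappers,
# which is why passing a RegularizedNLSModel dispatches without error.
solve(reg::AbstractRegularizedNLPModel) = typeof(reg)

solve(RegularizedNLSModel())   # dispatch succeeds for the NLS wrapper too
```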
```julia
for solver in [
  :R2,
  :R2N,
  :R2DH,
  :TR,
  :TRDH,
]
  @eval $solver(nlp::AbstractNLPModel{T, V}; kwargs...) where {T, V} = begin
    h = NullRegularizer(T)
```
The issue/PR description example for #302 expects users to be able to work with plain `AbstractNLPModel`s via the solver objects (`solver = R2NSolver(nlp)` and `solve!(solver, nlp, stats)`), but this PR only adds function-style convenience constructors (`R2(nlp)`, `TR(nlp)`, etc.). As-is, `R2NSolver` (and similar solver structs) still only accept `AbstractRegularizedNLPModel`, so the example workflow won’t work without additional overloads.
Hmm actually, I think it would be risky to add an initialization of a `reg_nlp` directly in those.
Now that I think of it, we could have a constructor `RegularizedNLPModel(nlp)` that returns a `RegularizedNLPModel(nlp, NullRegularizer)`.
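A minimal sketch of that proposed one-argument constructor (types mocked here; the real `RegularizedNLPModel` lives in RegularizedProblems, and a real version would presumably read the element type from the model rather than hard-coding `Float64`):

```julia
# Hypothetical stand-ins, for illustration only.
struct NullRegularizer{T} end      # h(x) = 0
struct RegularizedNLPModel{M, H}
    model::M
    h::H
end

# Proposed convenience constructor: wrap a smooth model with a null regularizer,
# so solver structs that expect a regularized model can accept plain models.
RegularizedNLPModel(nlp) = RegularizedNLPModel(nlp, NullRegularizer{Float64}())

reg = RegularizedNLPModel(:toy_model)   # :toy_model stands in for a real NLP model
reg.h isa NullRegularizer               # true: the regularizer defaults to null
```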
Closing following the discussion with @dpo.
Closes #302.
Requires JuliaSmoothOptimizers/ShiftedProximalOperators.jl#156 and JuliaSmoothOptimizers/SolverTest.jl#80.