Provides proximal operator evaluation routines and proximal optimization algorithms, such as (accelerated) proximal gradient methods and alternating direction method of multipliers (ADMM), for non-smooth/non-differentiable objective functions.
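As an illustration of what a proximal operator evaluation routine computes (a sketch in Python/NumPy, not the package's own API), here is the proximal operator of the scaled $\ell_1$ norm, which has the closed-form soft-thresholding solution:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of lam*||.||_1, i.e. the minimizer of
        0.5*||z - x||^2 + lam*||z||_1  over z,
    given in closed form by componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Each entry is pulled toward zero by lam; entries within lam of zero vanish.
x = np.array([3.0, -0.5, 1.2])
print(prox_l1(x, 1.0))  # -> [2.0, 0.0, 0.2] (up to signed zeros)
```

Non-smooth terms like the $\ell_1$ norm have no gradient at zero, but their prox is cheap to evaluate, which is what makes proximal gradient methods and ADMM applicable.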
The stopping criterion in the proximal gradient algorithms is based on convergence of the derivative, which is incorrect; it should instead be based on the change in the parameter values.
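A minimal sketch of the suggested criterion (in Python/NumPy, not the package's code): a proximal gradient loop for the lasso objective $\tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$ that stops when the iterates themselves stop moving, rather than testing the gradient (which is not even defined everywhere for a non-smooth objective):

```python
import numpy as np

def prox_l1(x, lam):
    # Soft-thresholding: prox of lam*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proximal_gradient(A, b, lam, step, tol=1e-8, max_iter=10000):
    """Proximal gradient (ISTA) for 0.5*||Ax - b||^2 + lam*||x||_1.
    Stops on the relative change in the parameter vector x,
    not on the gradient of the smooth part."""
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth term
        x_new = prox_l1(x - step * grad, step * lam)
        # Parameter-change stopping test:
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x
```

Here `step` should satisfy `step <= 1/||A||_2**2` for convergence; the `max(1.0, ...)` guard makes the test behave sensibly near `x = 0`.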
Linearized ADMM can solve problems of the form $\min_x f(x) + g(Ax)$ by linearizing the augmented Lagrangian. This type of problem arises e.g. in generalized lasso problems.
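A minimal sketch of linearized ADMM on a generalized lasso instance (Python/NumPy, not the package's implementation, and with hypothetical parameter names): the 1-D fused lasso $\min_x \tfrac12\|x-b\|^2 + \gamma\|Dx\|_1$, where $D$ is the first-difference matrix playing the role of $A$, $f(x)=\tfrac12\|x-b\|^2$, and $g(z)=\gamma\|z\|_1$:

```python
import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: prox of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_admm(b, D, gamma, mu, lam, n_iter=2000):
    """Linearized ADMM for min_x 0.5*||x - b||^2 + gamma*||D x||_1,
    written as f(x) + g(z) subject to D x = z.
    The x-step linearizes the quadratic penalty around the current
    iterate, so only a prox of f is needed (no solve with D^T D).
    Convergence requires mu <= lam / ||D||_2**2."""
    x = np.zeros(b.size)
    z = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])          # scaled dual variable
    for _ in range(n_iter):
        # x-step: prox of mu*f at the linearized point;
        # for f = 0.5*||x - b||^2 the prox is (v + mu*b)/(1 + mu).
        v = x - (mu / lam) * (D.T @ (D @ x - z + u))
        x = (v + mu * b) / (1.0 + mu)
        # z-step: prox of lam*g, i.e. soft-thresholding.
        z = prox_l1(D @ x + u, lam * gamma)
        # Dual update on the constraint residual D x - z.
        u = u + D @ x - z
    return x

# Usage: denoise a signal toward a piecewise-constant fit.
n = 4
D = np.diff(np.eye(n), axis=0)        # first-difference matrix, ||D||_2**2 <= 4
b = np.array([1.0, 2.0, 3.0, 4.0])
x = linearized_admm(b, D, gamma=10.0, mu=0.2, lam=1.0)
```

With a large `gamma` all differences are penalized away and the solution is (approximately) the constant signal at the mean of `b`; the key design point is that the linearized x-step avoids inverting $D^TD$, which plain ADMM on this splitting would require.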