Keras-Optimizer -- Collection of Optimizers for Keras and TensorFlow Keras
arXiv | Optimizer | Paper Title
---|---|---
https://arxiv.org/abs/2302.06675 | Lion | Symbolic Discovery of Optimization Algorithms
https://arxiv.org/abs/2208.06677 | Adan | Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models |
https://arxiv.org/abs/2106.11514 | AdaMomentum | Rethinking Adam: A Twofold Exponential Moving Average Approach |
https://arxiv.org/abs/2102.07227 | Nero | Learning by Turning: Neural Architecture Aware Optimisation
https://arxiv.org/abs/2101.11075 | MadGrad | Adaptivity without Compromise: A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization |
https://arxiv.org/abs/2011.06220 | VAdam | Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting |
https://arxiv.org/abs/2010.07468 | AdaBelief | AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients |
https://arxiv.org/abs/2009.13586 | Apollo | Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization |
https://arxiv.org/abs/2006.13484 | Lans | Accelerated Large Batch Optimization of BERT Pretraining in 54 minutes |
https://arxiv.org/abs/2003.07422 | Rm3 | Weak and Strong Gradient Directions: Explaining Memorization, Generalization, and Hardness of Examples at Scale |
https://arxiv.org/abs/2002.03432 | Fromage | On the distance between two neural networks and the stability of learning |
https://arxiv.org/abs/1909.11015 | DiffGrad | diffGrad: An Optimization Method for Convolutional Neural Networks |
https://arxiv.org/abs/1908.03265 | RectifiedAdam | On The Variance Of The Adaptive Learning Rate And Beyond |
https://arxiv.org/abs/1904.00962 | LAMB | Large Batch Optimization for Deep Learning: Training BERT in 76 minutes |
https://arxiv.org/abs/1804.04235 | AdaFactor | Adafactor: Adaptive Learning Rates with Sublinear Memory Cost |
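
To illustrate what one of the listed algorithms does, below is a minimal NumPy sketch of the Lion update rule from the paper cited in the first row (https://arxiv.org/abs/2302.06675). This is illustrative only and is not the Keras-Optimizer implementation; the function name `lion_step` and the default hyperparameters are assumptions made for the example.

```python
import numpy as np

def lion_step(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One Lion update (sketch); returns (new_param, new_momentum)."""
    # Update direction: sign of an interpolation between the momentum and the gradient.
    update = np.sign(beta1 * momentum + (1.0 - beta1) * grad)
    # Parameter step with decoupled (AdamW-style) weight decay.
    new_param = param - lr * (update + weight_decay * param)
    # Momentum is an exponential moving average of the gradient with coefficient beta2.
    new_momentum = beta2 * momentum + (1.0 - beta2) * grad
    return new_param, new_momentum

# Toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([1.0, -2.0, 3.0])
m = np.zeros_like(w)
for _ in range(100):
    w, m = lion_step(w, grad=w, momentum=m, lr=0.1)
print(w)  # each entry ends up within about one step size (0.1) of zero
```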