Implementing different machine learning optimizers from scratch
-
Linear regression is a widely used machine learning algorithm that models a linear relationship between a dependent variable and one or more independent variables.
-
Numerical optimization methods can be used to fit the model and find the best-fitting parameters. In this project, we'll walk through my line-by-line implementations of several optimization algorithms, see how each one can be applied to a linear regression problem, and visualize how the loss and parameter values evolve over the iterations. Here are the optimization methods you'll encounter throughout the project:
1- Batch Gradient Descent
2- Stochastic Gradient Descent
3- Mini-Batch Gradient Descent
4- Momentum-based Gradient Descent
5- Nesterov Accelerated Gradient (NAG)
6- Adaptive Gradient Algorithm (Adagrad)
7- Root Mean Squared Propagation (RMSProp)
8- Adam
9- Broyden–Fletcher–Goldfarb–Shanno (BFGS)
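To make the setup concrete before diving into the individual optimizers, here is a minimal sketch of the first method on the list, batch gradient descent, fitted to a toy linear regression problem while recording the loss at every iteration. The data, function names, and hyperparameters are illustrative choices, not the project's actual code:

```python
import numpy as np

# Synthetic data for y = 3x + 4 with a little noise (illustrative values)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 4.0 + rng.normal(0, 0.1, size=200)

def predict(X, w, b):
    return X @ w + b

def mse_loss(X, y, w, b):
    err = predict(X, w, b) - y
    return np.mean(err ** 2)

def batch_gradient_descent(X, y, lr=0.1, n_iters=500):
    """Plain batch gradient descent on the MSE loss: every step uses the full dataset."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    losses = []
    for _ in range(n_iters):
        err = predict(X, w, b) - y       # residuals, shape (n,)
        grad_w = 2.0 / n * (X.T @ err)   # dL/dw averaged over all samples
        grad_b = 2.0 / n * err.sum()     # dL/db
        w -= lr * grad_w                 # step opposite the gradient
        b -= lr * grad_b
        losses.append(mse_loss(X, y, w, b))
    return w, b, losses

w, b, losses = batch_gradient_descent(X, y)
print(f"w ≈ {w[0]:.3f}, b ≈ {b:.3f}, final loss = {losses[-1]:.4f}")
```

The `losses` list is what gets plotted to visualize convergence; the other optimizers in the list above only change how the gradient is computed or how the update step is taken, so they slot into the same loop structure.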