My happy place to play around reimplementing the basic building blocks of deep learning in Numpy.
- All main activations (ReLU, Leaky ReLU, Linear, Sigmoid, Softmax)
- A bunch of initializers (normal, uniform, ones, zeros, Glorot Normal/Uniform, He Normal/Uniform)
- A bunch of optimizers (SGD, Momentum, Nesterov Momentum, Adagrad, RMSProp, Adam)
- MSE, Binary Cross Entropy and Categorical Cross Entropy losses
- Batch Normalization
- Dropout
- Gradient checking (see the sketch right after this list)
- L1 Regularization
- MNIST prediction
- Weight plotting
- Conv2D, DeConv and Pooling layers
- RNN, LSTM and everything recurrent
- Self-attention layer
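To give a flavor of the internals, here's a minimal standalone sketch of the gradient-checking idea in plain Numpy. This is illustrative code written for this README, not the library's actual API: it compares a hand-derived backward pass for a tiny sigmoid + MSE model against central-difference numerical gradients.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def loss(w, x, y):
    # Forward pass: linear layer -> sigmoid activation -> MSE loss.
    return np.mean((sigmoid(x @ w) - y) ** 2)

def numerical_grad(w, x, y, eps=1e-6):
    # Central differences: perturb one weight at a time by +/- eps.
    grad = np.zeros_like(w)
    for i in range(w.size):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus.flat[i] += eps
        w_minus.flat[i] -= eps
        grad.flat[i] = (loss(w_plus, x, y) - loss(w_minus, x, y)) / (2 * eps)
    return grad

def analytic_grad(w, x, y):
    # Backprop by hand: dL/da = 2(a - y)/N, da/dz = a(1 - a), dz/dw = x.
    a = sigmoid(x @ w)
    delta = 2.0 * (a - y) / y.size * a * (1.0 - a)
    return x.T @ delta

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
y = rng.uniform(size=(8, 1))
w = rng.normal(size=(3, 1))

num = numerical_grad(w, x, y)
ana = analytic_grad(w, x, y)
# The relative error should be tiny (around 1e-8) if the backward pass is right.
print(np.linalg.norm(num - ana) / (np.linalg.norm(num) + np.linalg.norm(ana)))
```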
What's this? A deep learning library written from scratch in pure Python and Numpy.
Is this the best possible way of implementing a neural network library? Definitely not.
Couldn't you think of a better way of doing this? Yes, absolutely.
Is this thing bug-free? Probably not; it's still very much a WIP.
Why didn't you do X instead? Either I didn't have the time, or I didn't think about it, or I didn't feel like it, or a mix of the three :)
Why didn't you implement a full autodiff library with computational graphs instead? Because I started implementing things and kept going for as long as I felt like it, and by then I didn't feel like refactoring the whole project. But I'd love to do it at some point in the future, if I ever have the time.
So, what's the point of this project? Well, I learned a lot reimplementing all these things from scratch, and I had a lot of fun.
Does it really work? Yes, it does! See the feature list above for some of the things it can do.
How do I run it?
Install the dependencies with:

```bash
pip install -r requirements.txt
```
There's a bunch of test functions in `main.py` - just comment out the ones you don't want to run, then run it:

```bash
cd src && python main.py
```