This repository contains my sources for PyTorch Tutorials - Complete Beginner Course by Patrick Loeber (GitHub, YouTube).
There is an overlap with Patrick's repo, but I want to have my own version.
Create your virtual environment.
Local user
python -m venv .venv
If the virtual environment already exists, activate it.
Linux
. .venv/bin/activate
Win
.venv/Scripts/activate.bat // CMD
.venv/Scripts/Activate.ps1 // Powershell
Install PyTorch, NumPy, and Matplotlib without CUDA support. This works on all computers.
Admin
pip3 install numpy matplotlib pytorch-lightning scikit-learn tensorboard torch torchvision torchaudio
Local user
python -m pip install numpy matplotlib pytorch-lightning scikit-learn tensorboard torch torchvision torchaudio
Install PyTorch, NumPy, and Matplotlib with CUDA support. This works only if you have a graphics card with an Nvidia chip. Then you can use the GPU for tensor calculations, which is much faster than the CPU.
Admin
pip3 install numpy matplotlib pytorch-lightning scikit-learn torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
Local user
python -m pip install numpy matplotlib pytorch-lightning scikit-learn torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
Create requirements.txt
Admin
pip freeze > requirements.txt
Local user
python -m pip freeze > requirements.txt
Content
- Create tensors
- Calculations with tensors
- Transform tensors
- Convert tensors to NumPy arrays and back
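A minimal sketch of these basics; the concrete shapes and operations are my own illustration, not taken from the lesson code.

```python
import numpy as np
import torch

# Create tensors
x = torch.ones(2, 2)
y = torch.rand(2, 2)

# Calculations with tensors
z = x + y              # element-wise addition
z = torch.mul(x, y)    # element-wise multiplication

# Transform: reshape a 4x4 tensor into a flat vector of 16 elements
a = torch.rand(4, 4)
b = a.view(16)

# Tensor -> NumPy array (shares the same memory on CPU)
n = b.numpy()

# NumPy array -> tensor (keeps the NumPy dtype, here float64)
c = torch.from_numpy(np.ones(3))
```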
Content
- Create tensors with gradient
- Calculate the gradient
- Remove gradient from tensor
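The three points above can be sketched like this; the example function is my own choice.

```python
import torch

# Create a tensor that tracks gradients
w = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# Build a computation and backpropagate
y = (w * w).sum()   # y = sum(w_i^2), so dy/dw_i = 2 * w_i
y.backward()
print(w.grad)       # tensor([2., 4., 6.])

# Remove gradient tracking from a tensor
v = w.detach()            # new tensor without grad tracking
with torch.no_grad():     # or compute inside a no_grad block
    u = w * 2
```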
Content
- Theory
- Calculate a backpropagation
Content
- Train a manually implemented neuron
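A sketch of a manually implemented neuron in NumPy, with the gradient of the MSE loss derived by hand; the toy target f(x) = 2x is my own example.

```python
import numpy as np

# Training data for f(x) = 2 * x
X = np.array([1, 2, 3, 4], dtype=np.float32)
Y = np.array([2, 4, 6, 8], dtype=np.float32)
w = 0.0

def forward(x):
    return w * x

# MSE loss is mean((w*x - y)^2); its gradient w.r.t. w is mean(2*x*(w*x - y))
def gradient(x, y, y_pred):
    return np.mean(2 * x * (y_pred - y))

learning_rate = 0.01
for epoch in range(50):
    y_pred = forward(X)
    w -= learning_rate * gradient(X, Y, y_pred)

print(f"w = {w:.3f}")  # converges towards 2.0
```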
Content
- Train a PyTorch neural network (either an instance of nn.Linear or a subclass of nn.Module)
- Minimize the error between one tensor and another
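A sketch of the nn.Linear variant; the toy target f(x) = 2x, the learning rate, and the epoch count are my own choices.

```python
import torch
import torch.nn as nn

# Target: learn f(x) = 2x from four samples
X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
Y = torch.tensor([[2.0], [4.0], [6.0], [8.0]])

model = nn.Linear(in_features=1, out_features=1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

for epoch in range(500):
    y_pred = model(X)            # forward pass
    loss = loss_fn(y_pred, Y)    # error between predicted and target tensor
    loss.backward()              # backward pass
    optimizer.step()             # update the weights
    optimizer.zero_grad()        # reset the gradients
```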
Content
- Implement a linear regression by training a neural network
- Minimize the error against a Scikit-learn data set
- (Lesson 6 and 7 are almost the same)
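A sketch of the linear regression on a synthetic Scikit-learn data set; the `make_regression` parameters and the learning rate are my own choices, and the final loss stays around the irreducible noise level.

```python
import torch
import torch.nn as nn
from sklearn import datasets

# Generate a synthetic regression data set with Scikit-learn
X_np, y_np = datasets.make_regression(
    n_samples=100, n_features=1, noise=5, random_state=1)
X = torch.from_numpy(X_np.astype("float32"))
y = torch.from_numpy(y_np.astype("float32")).view(-1, 1)

model = nn.Linear(1, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):
    loss = loss_fn(model(X), y)  # forward pass
    loss.backward()              # backward pass
    optimizer.step()
    optimizer.zero_grad()
```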
Content
- Implement a logistic regression by training a neural network
- Minimize the error against a Scikit-learn data set
- (again very similar to lesson 7, but with a different activation function and loss)
Content
- Implement a dataset (subclass of Dataset)
- Use a DataLoader to iterate over the data in batches
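A minimal Dataset subclass and DataLoader; the toy data (x and x²) is my own illustration.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SquaresDataset(Dataset):
    """Toy data set mapping x to x**2, as a minimal Dataset subclass."""
    def __init__(self, n=20):
        self.x = torch.arange(n, dtype=torch.float32).view(-1, 1)
        self.y = self.x ** 2

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

dataset = SquaresDataset()
loader = DataLoader(dataset, batch_size=4, shuffle=True)

for features, labels in loader:
    pass  # each iteration yields one batch of 4 samples
print(features.shape)  # torch.Size([4, 1])
```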
Content
- Quick Introduction to existing Transforms
- Implement different Transforms
- Combine those Transforms and execute them on a Dataset
Content
- Softmax
- The softmax function takes a vector of input values and transforms them into values between 0 and 1 that form a probability distribution, i.e. the sum of all values is 1.
- It is used for multi-class classification. (For binary problems, sigmoid is more common.)
- Softmax implementation in NumPy and PyTorch.
- Cross-entropy
- Cross-entropy is a loss function for classification.
- Cross-entropy is a metric to quantify the difference between two probability distributions (e.g. predicted and true distribution).
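A sketch of both ideas; the example logits are my own. Note that `nn.CrossEntropyLoss` expects raw logits, because it applies log-softmax internally.

```python
import numpy as np
import torch

# Softmax in NumPy
def softmax(x):
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
# probs sums to 1, and the largest input gets the largest probability

# Cross-entropy in PyTorch: pass raw logits and the correct class index
logits = torch.tensor([[2.0, 1.0, 0.1]])
target = torch.tensor([0])
loss = torch.nn.CrossEntropyLoss()(logits, target)
```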
Content
- Activation Functions
- Step function
- Sigmoid
- TanH
- ReLU
- Leaky ReLU
- Softmax
- ReLU is the most commonly used activation function between hidden layers.
- Softmax is mostly the last activation function in classification output.
- Sigmoid is mostly the last activation function in binary output.
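The listed activation functions, applied to one sample vector of my own choosing:

```python
import torch

x = torch.tensor([-2.0, 0.0, 2.0])

print(torch.sigmoid(x))         # squashes into (0, 1)
print(torch.tanh(x))            # squashes into (-1, 1)
print(torch.relu(x))            # zeroes out negative values
print(torch.nn.functional.leaky_relu(x, negative_slope=0.01))
print(torch.softmax(x, dim=0))  # probability distribution over the vector
```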
Content
- Implementation of a Neural Network with one hidden layer using the ReLU activation function
- Uses a DataLoader to load data from MNIST using Scikit-learn
- Uses a cross entropy loss and an Adam optimizer
- Defines a training loop with a forward and a backward pass
- Usage of GPU if available
Content
- Implementation of a Convolutional Neural Network with multiple convolutional, maxpool, and linear layers
- The convolutional layer uses a 5x5 filter with zero padding and a stride of one.
- The formula to calculate the new image size is
(original-size - filter + 2 * padding) / stride + 1
- Examples
- 1st convolutional layer: (32 - 5 + 2 * 0) / 1 + 1 = 28
- 1st maxpool layer: (28 - 2 + 2 * 0) / 2 + 1 = 14
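The size formula above can be wrapped in a small helper to reproduce both examples:

```python
def conv_output_size(original_size, filter_size, padding, stride):
    """Output edge length of a convolutional or pooling layer."""
    return (original_size - filter_size + 2 * padding) // stride + 1

# 1st convolutional layer: 5x5 filter, no padding, stride 1
print(conv_output_size(32, 5, 0, 1))  # 28
# 1st maxpool layer: 2x2 window, stride 2
print(conv_output_size(28, 2, 0, 2))  # 14
```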
- Uses a DataLoader to load data from MNIST using Scikit-learn
- Uses a cross entropy loss and a stochastic gradient descent (SGD) optimizer
- Defines a training loop with a forward and a backward pass
- Usage of GPU if available
Open points
- What kind of filter also known as kernel is used by the convolutional layer?
Content
- Train a pre-trained CNN for your specific purpose (to save time)
- Two cases
- Continue to train the complete CNN
- Train the last layer only
Content
- Generate statistics to analyze the performance of the neural net
Run Tensorboard
.venv/Scripts/tensorboard.exe --logdir=runs
Content
- Save and load the model only.
- Save and load the model together with the optimizer state, called a checkpoint during training.
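A sketch of the checkpoint case; the tiny model, the epoch number, and the temp-file path are my own illustration.

```python
import os
import tempfile

import torch
import torch.nn as nn

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Save a checkpoint: model state, optimizer state, and the current epoch
checkpoint = {
    "epoch": 5,
    "model_state": model.state_dict(),
    "optimizer_state": optimizer.state_dict(),
}
path = os.path.join(tempfile.mkdtemp(), "checkpoint.pth")
torch.save(checkpoint, path)

# Load it back into fresh instances to resume training
loaded = torch.load(path)
model2 = nn.Linear(1, 1)
model2.load_state_dict(loaded["model_state"])
optimizer2 = torch.optim.SGD(model2.parameters(), lr=0.01)
optimizer2.load_state_dict(loaded["optimizer_state"])
```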
Content
- Lightning is a "hyper-minimalistic framework, to build machine learning components that can plug into existing ML workflows".
- The tutorial works with an outdated version of Lightning, formerly known as PyTorch Lightning.
- The big benefits of Lightning are
- Better and clearer code structure
- Integration of Tensorboard
- Provides functionality for scalable & distributed training
Comments
- To my knowledge, the examples use the validation function instead of the test function.
- The code doesn't use type hints, although they are recommended by many Python experts.