Day 2 provides an overview of Artificial Intelligence with a focus on Deep Learning (DL) and Deep Neural Networks (DNNs). We will run hands-on tutorials on several popular DL problems, such as linear regression and classification. The mathematical background for supervised learning in DL will be discussed, and a foundation for advanced techniques will be built step by step. Hands-on exercises with PyTorch will be practiced on the Google Colab platform and extended to running DL code on GPU nodes of the Graham system of Compute Canada.
Contents
- Session 1 : Introduction to Deep Learning
- Session 2 : Linear regression problem
- Session 3 : Multi-Layer Perceptron
- Session 4 : Convolutional Neural Network
Session 1 (9:30 AM - 10:30 AM) : [Lecture slide]
💻 Lab1A: Calculating gradient (NumPy) [Demo code]
💻 Lab1B: Calculating gradient (PyTorch) [Demo code]
💻 Lab1C: Calculating gradient (TensorFlow) [Demo code]
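The three labs compute the same derivative three ways. As a minimal sketch of the idea behind Lab1A and Lab1B (illustrative values, not the demo code itself), here is the gradient of y = x² at x = 3, first numerically with NumPy and then with PyTorch autograd:

```python
import numpy as np
import torch

f = lambda x: x ** 2

# Lab1A idea: numerical gradient via central differences (NumPy)
x, h = np.array(3.0), 1e-5
grad_numeric = (f(x + h) - f(x - h)) / (2 * h)

# Lab1B idea: automatic differentiation (PyTorch autograd)
xt = torch.tensor(3.0, requires_grad=True)
f(xt).backward()                 # fills xt.grad with dy/dx = 2x
grad_autograd = xt.grad.item()

print(grad_numeric, grad_autograd)  # both ≈ 6.0
```

Both approaches agree to several decimal places; the labs build on this by differentiating loss functions instead of a single scalar.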
Session 2 (10:45 AM - 12:30 PM) : [Lecture slide]
💻 Lab2A: Linear regression (vanilla) [Demo code]
💻 Lab2B: Linear regression (PyTorch) [Demo code]
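As a rough sketch of what the PyTorch linear-regression lab covers (the data and hyperparameters here are assumptions, not the actual demo code), a line y = wx + b can be fit with `torch.nn.Linear` and SGD:

```python
import torch

# Synthetic data: y = 2x + 1 plus a little noise
torch.manual_seed(0)
x = torch.linspace(0, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.01 * torch.randn_like(x)

model = torch.nn.Linear(1, 1)                     # one weight, one bias
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for _ in range(500):
    opt.zero_grad()                # clear gradients from the previous step
    loss = loss_fn(model(x), y)    # mean squared error on the whole batch
    loss.backward()                # backpropagate
    opt.step()                     # gradient descent update

w, b = model.weight.item(), model.bias.item()
print(w, b)  # approximately 2 and 1
```

The "vanilla" version in Lab2A implements the same update rule by hand with explicit gradient formulas instead of `backward()`/`step()`.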
Session 3 (2:00 PM - 3:30 PM) : [Lecture slide]
- Log into graham.computecanada.ca with your guest account and password: please see [this page] for further details. (Use MobaXterm or PuTTY on Windows / open a terminal on Linux or Mac)
- Load modules and make a virtual environment: please see [this page] for further details.

  ```
  module load python
  module load scipy-stack
  virtualenv --no-download ~/ENV
  ```
- Activate the virtual environment and upgrade/install pip and PyTorch: please see [this page] for further details.

  ```
  source ~/ENV/bin/activate
  pip install --no-index --upgrade pip
  pip install --no-index torch
  pip install --no-index torchvision torchtext torchaudio
  pip install scikit-learn
  ```
- (Optional) Deactivate the virtual environment

  ```
  deactivate
  ```
- Download Lab2A_Linear_Reg_Vanilla.ipynb as a .py file from Google Colab
- Transfer Lab2A_Linear_Reg_Vanilla.py to Graham using WinSCP or MobaXterm (Windows) / sftp (Linux, Mac): please see [this page] for further details.
- Activate the virtual environment (make sure the python and scipy-stack modules are loaded)

  ```
  source ~/ENV/bin/activate
  ```
- Run it

  ```
  python Lab2A_Linear_Reg_Vanilla.py
  ```
- Note: you need to collect all import statements at the beginning of the code using a text editor (nano/Emacs/vi): please see [Nano basic] for further details.
- Note: you need to save and close your plots with a proper filename for each plotting command. (You may see what I did [here.])
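Since the Graham compute nodes have no display, calls to `plt.show()` from the Colab version must be replaced by writing each figure to a file. A minimal pattern (the filename here is illustrative):

```python
import matplotlib
matplotlib.use("Agg")          # headless backend: render to files, no display needed
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 4])
plt.savefig("lab2a_loss.png")  # save to a file instead of plt.show()
plt.close()                    # close the figure before the next plotting command
```

Calling `plt.close()` after each `savefig` keeps later plots from being drawn on top of earlier ones.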
- Transfer the plot files to your local computer using WinSCP or MobaXterm (Windows) / sftp (Linux, Mac) and check them
💻 Lab3B: Linear regression (MLP) [Demo code]
- Download Lab3B_Linear_Reg_MLP.ipynb as a .py file from Colab
- Transfer Lab3B_Linear_Reg_MLP.py to Graham using WinSCP or MobaXterm (Windows) / sftp (Linux, Mac)
- Start an interactive job

  ```
  salloc --time=0:30:0 --ntasks=1 --cpus-per-task=3 --nodes=1 --mem=1000M --account=def-training-wa --reservation=snss20_wr_cpu
  ```
- Activate the virtual environment (make sure the python and scipy-stack modules are loaded)

  ```
  module load python
  module load scipy-stack
  source ~/ENV/bin/activate
  ```
- Run it

  ```
  python Lab3B_Linear_Reg_MLP.py
  ```
- Note: you need to collect all import statements at the beginning of the code using a text editor (nano/Emacs/vi)
- Note: you need to save and close your plots with a proper filename for each plotting command, as in Lab2A above
- Transfer the plot files to your local computer using WinSCP or MobaXterm (Windows) / sftp (Linux, Mac) and check them
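Lab3B replaces the single linear layer with a small multi-layer perceptron, which can fit targets a straight line cannot. A hedged sketch of the idea (the layer sizes and target function are assumptions, not the lab's actual architecture):

```python
import torch

torch.manual_seed(0)
x = torch.linspace(-1, 1, 200).unsqueeze(1)
y = x ** 2 + 0.01 * torch.randn_like(x)   # nonlinear target: a plain line cannot fit this

# Two-layer MLP: Linear -> ReLU -> Linear
model = torch.nn.Sequential(
    torch.nn.Linear(1, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

final_loss = loss.item()
print(final_loss)  # small: the hidden ReLU layer captures the curvature
```

The training loop is identical to the linear case; only the model changed, which is the point the lab makes.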
💻 Lab3C: Linear regression (MLP) with GPU [Demo code]
- Copy Lab3C_Linear_Reg_MLP_GPU.py from /home/isaac/SS20_ML_Day2

  ```
  cp /home/isaac/SS20_ML_Day2/Lab3C_Linear_Reg_MLP_GPU.py /home/$USER
  ```
- Start an interactive job with a T4 GPU on Graham

  ```
  salloc --time=0:30:0 --ntasks=1 --cpus-per-task=3 --reservation=snss20_wr_gpu --gres=gpu:t4:1 --nodes=1 --mem=1000M --account=def-training-wa_gpu
  ```
- Activate the virtual environment (make sure the python and scipy-stack modules are loaded)

  ```
  module load python
  module load scipy-stack
  source ~/ENV/bin/activate
  ```
- Run it

  ```
  python Lab3C_Linear_Reg_MLP_GPU.py
  ```
- Transfer the plot files to your local computer using WinSCP or MobaXterm (Windows) / sftp (Linux, Mac) and check them
- Write a submission script 'job_s.sh' like below using a text editor

  ```
  #!/bin/bash
  #
  #SBATCH --nodes=1
  #SBATCH --gres=gpu:t4:1
  #SBATCH --mem=20000M
  #SBATCH --time=0-30:00
  #SBATCH --account=def-training-wa_gpu
  #SBATCH --output=slurm.%x.%j.out

  module load python scipy-stack
  source ~/ENV/bin/activate
  python Lab3C_Linear_Reg_MLP_GPU.py
  ```
- Submit it

  ```
  sbatch job_s.sh
  ```
- Check the submitted job

  ```
  squeue -u $USER
  ```
- Note: you may need to collect all import statements at the beginning of the code using a text editor (nano/Emacs/vi)
- Note: you may need to save and close your plots with a proper filename for each plotting command, as in Lab2A above
- Transfer the plot files to your local computer using WinSCP or MobaXterm (Windows) / sftp (Linux, Mac) and check them
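The only code change the GPU lab needs over the CPU version is moving the model and tensors to the GPU. The usual device-agnostic pattern looks like this (a sketch under assumed setup, not the lab's actual file):

```python
import torch

# Use the GPU if one is visible (e.g. the T4 allocated above), else fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1, 1).to(device)   # move the model's parameters to the device
x = torch.randn(8, 1).to(device)           # move the data to the same device
y = model(x)                               # the forward pass now runs on `device`
print(y.device)
```

Keeping the model and its inputs on the same device is required; mixing them raises a runtime error, which is the most common bug when porting a Colab notebook to a GPU node.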
Session 4 (3:45 PM - 5:00 PM) : [Lecture slide]
💻 Lab4A: MNIST Image classification [Demo code]
💻 Lab4B: CIFAR-10 Classification (PyTorch) [Demo code]
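Both Session 4 labs build convolutional networks for image classification. As a hedged sketch of the kind of model Lab4B constructs for CIFAR-10 (32×32 RGB images, 10 classes; the layer sizes here are illustrative, not the lab's actual network):

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 -> 16 channels, still 32x32
    torch.nn.ReLU(),
    torch.nn.MaxPool2d(2),                             # 32x32 -> 16x16
    torch.nn.Conv2d(16, 32, kernel_size=3, padding=1), # 16 -> 32 channels
    torch.nn.ReLU(),
    torch.nn.MaxPool2d(2),                             # 16x16 -> 8x8
    torch.nn.Flatten(),
    torch.nn.Linear(32 * 8 * 8, 10),                   # one score per class
)

batch = torch.randn(4, 3, 32, 32)   # fake batch of 4 CIFAR-sized images
logits = model(batch)
print(logits.shape)  # torch.Size([4, 10])
```

Training then pairs these logits with `torch.nn.CrossEntropyLoss` in the same loop structure used for the regression labs.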
KAIST Idea Factory - [Deep learning alone]
Stanford [CS231n]
PyTorch documentation [1.5.0]