Final project as part of Technion's IEM 097215 "Deep Learning for NLP" & EE 046211 "Deep Learning" courses. Implemented in PyTorch.

Animation by @rizkiarm.
In this project we combine the BlazeFace face-detection algorithm with the transformer architecture, achieving near-SOTA performance on the GRID dataset with very fast training and inference.
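To make the idea concrete, here is a minimal sketch of a landmark-to-transformer pipeline: per-frame lip landmarks are embedded and fed to a sequence-to-sequence transformer that emits token logits. All dimensions and layer choices below are illustrative assumptions and do not reproduce the actual `model.py` architecture; the BlazeFace landmark-extraction step is omitted.

```python
# Illustrative sketch only -- not the architecture in model.py.
import torch
import torch.nn as nn

class LandmarkTransformer(nn.Module):
    def __init__(self, n_landmarks=40, d_model=128, vocab_size=54):
        super().__init__()
        self.embed = nn.Linear(2 * n_landmarks, d_model)  # (x, y) per landmark
        self.transformer = nn.Transformer(d_model=d_model, nhead=8, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, landmarks, tgt_emb):
        # landmarks: (batch, frames, 2 * n_landmarks) lip keypoints per frame
        # tgt_emb:   (batch, tokens, d_model) embedded target tokens
        src = self.embed(landmarks)
        out = self.transformer(src, tgt_emb)
        return self.head(out)  # (batch, tokens, vocab_size) token logits

model = LandmarkTransformer()
logits = model(torch.randn(2, 75, 80), torch.randn(2, 10, 128))
print(logits.shape)  # torch.Size([2, 10, 54])
```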
We provide here a short explanation of the structure of this repository:

- `videos/[speaker_id]` and `alignments/[speaker_id]` contain the raw data from the GRID dataset: videos and word alignments, respectively.
- `npy_landmarks` and `npy_alignments` contain the processed videos and alignments. The pre-processing is done automatically by running `preprocess.py`. The pre-processing mechanism itself is split between `Video.py`, which pre-processes the videos, and `Annotation.py`, which pre-processes the alignments.
- `dataloader.py` contains data loaders for both training and testing, as well as a tokenizer which prepares the data for the transformer. Tokenization is done using `vocab.txt`, which contains all the possible tokens, as well as the `<pad>`, `<sos>` and `<eos>` special tokens (see the tokenization sketch after this list).
- `model.py` contains our architecture, divided into the Transformer and an additional Landmarks Neural Net module.
- `run.py` is the main file of our project. It trains the architecture and then generates predictions on unseen test samples.
- `config.py` contains all the constants and hyper-parameters used in the project.
- Finally, `inference.py` is used to make predictions using the pre-trained models.
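To make the tokenization step concrete, here is a minimal sketch; it assumes `vocab.txt` holds one token per line and does not reproduce the exact API of `dataloader.py`:

```python
# Illustrative tokenization sketch (not the exact API of dataloader.py).
# Assumes vocab.txt lists one token per line, including <pad>, <sos>, <eos>.
with open("vocab.txt") as f:
    token_to_id = {tok: i for i, tok in enumerate(f.read().split())}

def encode(words, max_len=16):
    """Wrap a word sequence in <sos>/<eos> and right-pad to a fixed length."""
    ids = [token_to_id["<sos>"]]
    ids += [token_to_id[w] for w in words]
    ids.append(token_to_id["<eos>"])
    ids += [token_to_id["<pad>"]] * (max_len - len(ids))
    return ids
```

For example, `encode(["place", "blue", "at", "f", "two", "now"])` would encode one GRID-style sentence as a fixed-length sequence of token ids.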
To predict the transcript of some given GRID corpus videos, put them under the `examples/videos` path, then simply run `inference.py`. It is possible to change the path, or to run inference on a single video, by editing the last line of `inference.py` (see the sketch below).

**Important:** remember to download our pretrained models here, or create them by running `run.py`.
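For instance, pointing inference at a different location could look like the following; the entry-point name `predict` is an illustrative assumption, not the exact API of `inference.py`:

```python
# Hypothetical last line of inference.py -- the function name is an
# assumption; replace the path with your own directory or single .mpg file.
predict("examples/videos")
```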
To train the model with the preprocessed videos:

1. Unzip the preprocessed GRID dataset. On Linux, use the command `unzip npy_folders.zip` (a cross-platform alternative is sketched after this list). Make sure to have both the `npy_landmarks` and `npy_alignments` directories located in your project root.
2. Run `run.py`.
3. Training and validation metrics will be saved under the `metrics` directory.
4. To check the test-set word accuracy, run `test-evaluation.py`.
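If `unzip` is not available (e.g. on Windows), the archive can be extracted with Python's standard library; this assumes `npy_folders.zip` sits in the project root:

```python
# Cross-platform alternative to `unzip npy_folders.zip`: extracts the
# npy_landmarks/ and npy_alignments/ directories into the project root.
import zipfile

with zipfile.ZipFile("npy_folders.zip") as zf:
    zf.extractall(".")
```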
To train the models from scratch:

1. Download the desired videos to train on from the GRID corpus, which can be found here. Make sure that you download the high-quality videos and the corresponding word alignments.
2. Put the videos in the project directory according to the following path format: `videos/[speaker_id]/[video.mpg]`. Put the alignments according to the following path format: `alignments/[speaker_id]/[alignment.align]`.
3. Change the `SPEAKERS` attribute in the `config.py` file to a list containing all the speaker ids to train on (see the sketch after this list).
4. Run `preprocess.py`. This might take a while.
5. Run `run.py`.
6. To check the test-set word accuracy, run `test-evaluation.py`.
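For example, the `SPEAKERS` entry in `config.py` might be edited as follows; the speaker ids shown are hypothetical placeholders:

```python
# config.py (excerpt) -- hypothetical values: list the ids of the GRID
# speakers whose videos and alignments you downloaded.
SPEAKERS = ["s1", "s2", "s4"]
```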
Before trying to run anything, please make sure to install all the packages below.

| Library | Command to Run | Minimal Version |
|---|---|---|
| NumPy | `pip install numpy` | 1.19.5 |
| matplotlib | `pip install matplotlib` | 3.3.4 |
| PyTorch | `pip install torch` | 1.10.0 |
| OpenCV | `pip install opencv-python` | 4.5.4 |
| dlib | `pip install dlib` | 19.22.1 |
| scikit-learn | `pip install scikit-learn` | 0.24.2 |
| tqdm | `pip install tqdm` | 4.62.3 |