This repo offers sample implementations of fully connected and convolutional autoencoder models, trained and evaluated on reconstruction of MNIST digits.
The fully connected autoencoder has an encoder and a decoder of 2 fully connected layers each, with a latent vector of size 128, whereas the convolutional autoencoder has 3 layers in the decoder, including an upsampling layer, and a latent feature map of size (28, 56, 40).
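As a rough illustration of the two architectures described above, here is a minimal PyTorch sketch. The 128-d latent and the 2-FC-layer encoder/decoder follow the description; the hidden width of 256 and the convolutional channel counts and kernel sizes are assumptions, not read from the repo's code.

```python
import torch
import torch.nn as nn

class LinearAutoencoder(nn.Module):
    """Fully connected autoencoder: 2 FC layers per side, 128-d latent."""
    def __init__(self):
        super().__init__()
        # hidden width 256 is an assumption; latent size 128 is from the README
        self.encoder = nn.Sequential(
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x.flatten(1))            # (N, 128) latent vector
        return self.decoder(z).view(-1, 1, 28, 28)

class ConvAutoencoder(nn.Module):
    """Convolutional autoencoder; the decoder uses upsampling layers.
    Channel counts (16, 32) are assumptions for illustration."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),                           # 7 -> 14
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2),                           # 14 -> 28
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

Both models map a `(N, 1, 28, 28)` MNIST batch back to a reconstruction of the same shape.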
conda create -n autoencoder python=3.8  # optional
pip install -r requirements.txt
python3 train_autoenc.py --mode Conv --num_epochs 10 --batch_size 10 --learning_rate 0.1  # train
python3 test_autoenc.py  # inference
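Internally, a training script like `train_autoenc.py` typically runs a loop along these lines. This is a sketch, not the repo's actual code: the Adam optimizer, MSE reconstruction loss, and the random stand-in data (used so the snippet runs offline instead of downloading MNIST via torchvision) are all assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Random tensors stand in for MNIST so this sketch runs offline;
# the real script would load torchvision.datasets.MNIST instead.
images = torch.rand(100, 1, 28, 28)
loader = DataLoader(TensorDataset(images), batch_size=10, shuffle=True)

# Trivial stand-in model; the repo's --mode flag would select the
# linear or convolutional architecture here.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 784), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimizer/lr assumed
criterion = nn.MSELoss()                                   # reconstruction loss

for epoch in range(2):                       # --num_epochs in the script
    for (x,) in loader:
        recon = model(x).view_as(x)          # reconstruct the input
        loss = criterion(recon, x)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "model.pt")   # matches the checkpoint name below
```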
- `--mode` is Conv or Lin, depending on whether you need a linear (fully connected) encoder/decoder architecture or a convolutional one.
- Creating a conda environment is optional but recommended.
- The trained model is saved as model.pt in the working directory.
- Hyperparameters for training can be changed, and the dataset can be swapped out as well.
- You can run inference to get reconstruction results using test_autoenc.py.
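Inference against the saved checkpoint can be sketched as follows. Whether the repo saves a full module or a `state_dict`, and the stand-in architecture used here, are assumptions; the snippet saves a tiny model first so it is self-contained.

```python
import torch
import torch.nn as nn

# Build and save a tiny stand-in model so the sketch is self-contained;
# in the repo, model.pt is produced by train_autoenc.py.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 784), nn.Sigmoid())
torch.save(model.state_dict(), "model.pt")

# Inference: rebuild the architecture, load the weights, reconstruct.
model.load_state_dict(torch.load("model.pt"))
model.eval()
with torch.no_grad():
    batch = torch.rand(10, 1, 28, 28)        # stand-in for MNIST test images
    recon = model(batch).view(-1, 1, 28, 28) # reconstructed digits
```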