This repository contains a script for training a Variational Autoencoder (VAE) using the NASNetLarge neural network architecture. The script is designed for flexibility and customization, making it suitable for various image generation tasks.
This project will remain under development for an indefinite period.
- Clone this repository to Google Colab or your preferred environment.
- Ensure you have the required dependencies installed by following the installation instructions.
- Customize the script to meet your specific project requirements, such as input image dimensions, dataset paths, and model hyperparameters.
- Run the script to train your VAE model.
You can install the required dependencies using the following commands:
```bash
pip install safetensors
pip install matplotlib
pip install tensorflow
pip install numpy
```
This script uses the NASNetLarge model for feature extraction. You can choose to load weights from either ImageNet or provide custom weights.
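A minimal sketch of loading NASNetLarge as a frozen feature extractor. Per the Keras Applications API, `weights` accepts `"imagenet"`, `None`, or a path to a custom weights file; the function name and default input shape here are illustrative, not part of the script.

```python
import tensorflow as tf

def build_feature_extractor(weights="imagenet", input_shape=(331, 331, 3)):
    """NASNetLarge without its classification head, frozen for feature extraction.

    weights: "imagenet", None (random init), or a path to a custom weights file.
    """
    base = tf.keras.applications.NASNetLarge(
        weights=weights,
        include_top=False,
        input_shape=input_shape,
    )
    base.trainable = False  # freeze so only the VAE layers train
    return base
```

Note that NASNetLarge's native input size is 331x331; smaller inputs work with `include_top=False` but ImageNet weights were trained at the native resolution.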
Configure the VAE model with various hyperparameters, including latent space size, filter sizes, and regularization parameters. The model is compiled using the Adam optimizer.
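The VAE configuration described above can be sketched roughly as follows. The latent size, filter counts, and layer layout are placeholder hyperparameters, not the script's actual values; the reparameterization layer is the standard VAE sampling trick.

```python
import tensorflow as tf
from tensorflow.keras import layers

class Sampling(layers.Layer):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

def build_vae(input_shape=(64, 64, 3), latent_dim=128):
    # Encoder: two strided conv blocks down to the latent distribution.
    enc_in = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(enc_in)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z_mean = layers.Dense(latent_dim)(x)
    z_log_var = layers.Dense(latent_dim)(x)
    z = Sampling()([z_mean, z_log_var])
    encoder = tf.keras.Model(enc_in, [z_mean, z_log_var, z], name="encoder")

    # Decoder: mirror the encoder back up to the image resolution.
    h, w = input_shape[0] // 4, input_shape[1] // 4
    dec_in = layers.Input(shape=(latent_dim,))
    x = layers.Dense(h * w * 64, activation="relu")(dec_in)
    x = layers.Reshape((h, w, 64))(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    dec_out = layers.Conv2DTranspose(input_shape[-1], 3, padding="same",
                                     activation="sigmoid")(x)
    decoder = tf.keras.Model(dec_in, dec_out, name="decoder")
    return encoder, decoder
```

Compilation would then use `tf.keras.optimizers.Adam` as the README states, with the usual reconstruction-plus-KL loss.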
Specify the paths to your training and validation data directories. Data augmentation options such as rotation, shift, shear, zoom, and flip are available for preprocessing.
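The augmentation options listed above map onto Keras's `ImageDataGenerator`; the specific ranges below are illustrative defaults, and `data/train` is a placeholder path you would replace with your own.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings are examples, not the script's tuned values.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # scale pixels into [0, 1]
    rotation_range=20,        # random rotation in degrees
    width_shift_range=0.1,    # horizontal shift as a fraction of width
    height_shift_range=0.1,   # vertical shift as a fraction of height
    shear_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
)

# Hypothetical usage with a directory of images:
# train_gen = train_datagen.flow_from_directory(
#     "data/train", target_size=(64, 64), class_mode=None, batch_size=32)
```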
The script supports training for a specified number of epochs with a learning rate schedule. You can choose to save model checkpoints at the end of each epoch. Training progress is recorded, including loss values and learning rate adjustments.
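The training loop described above can be wired up with standard Keras callbacks. The decay schedule, checkpoint path, and log filename here are assumptions for illustration, not the script's actual settings.

```python
import tensorflow as tf

def lr_schedule(epoch, lr):
    # Hypothetical schedule: hold the rate for 5 epochs, then decay 5% per epoch.
    return lr if epoch < 5 else lr * 0.95

callbacks = [
    tf.keras.callbacks.LearningRateScheduler(lr_schedule, verbose=1),
    # Save weights at the end of each epoch.
    tf.keras.callbacks.ModelCheckpoint(
        "checkpoints/vae_epoch_{epoch:02d}.weights.h5",
        save_weights_only=True,
    ),
    # Record loss values and learning rate adjustments per epoch.
    tf.keras.callbacks.CSVLogger("training_log.csv"),
]

# Hypothetical usage:
# model.fit(train_gen, validation_data=val_gen, epochs=50, callbacks=callbacks)
```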
After training, you can use the trained model to generate images by providing an input image path.
The generated image will be displayed alongside the original image for visual comparison.
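The side-by-side comparison can be rendered with matplotlib along these lines; this helper takes the original and reconstructed images as arrays in `[0, 1]`, and its name and figure layout are illustrative rather than taken from the script.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line in a notebook
import matplotlib.pyplot as plt

def show_comparison(original, reconstruction, out_path="comparison.png"):
    """Save the original and generated images side by side for visual comparison."""
    fig, axes = plt.subplots(1, 2, figsize=(6, 3))
    for ax, img, title in zip(axes, (original, reconstruction),
                              ("Original", "Generated")):
        ax.imshow(np.clip(img, 0.0, 1.0))
        ax.set_title(title)
        ax.axis("off")
    fig.savefig(out_path, bbox_inches="tight")
    plt.close(fig)
    return out_path
```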
Feel free to customize and extend this script to suit your specific VAE training needs.