
txt2img-mhn's Introduction

Txt2Img-MHN: Remote Sensing Image Generation from Text Using Modern Hopfield Networks


This is the official PyTorch implementation of the paper Txt2Img-MHN: Remote Sensing Image Generation from Text Using Modern Hopfield Networks.

Table of Contents

  1. Preparation
  2. Training VQVAE and VQGAN
  3. Training Txt2Img-MHN
  4. Image Generation
  5. Inception Score and FID Score
  6. CLIP Score
  7. Zero-Shot Classification
  8. Paper
  9. Acknowledgement
  10. License

Preparation

  • Install required packages: pip install -r requirements.txt
  • Install Taming Transformers:
    • Download the repo
    • Run pip install -e .
  • Copy the files from this repo's taming-transformers-master folder into the downloaded taming-transformers-master folder
  • Download the remote sensing text-image dataset RSICD used in this repo
  • Extract separate .txt files for the text descriptions of each image in RSICD: python data_preparation.py
  • Data folder structure:
├── RSICD/
│   ├── airport_1.jpg   
│   ├── airport_2.jpg  
│   ├── ...  
│   ├── viaduct_420.jpg  
│   ├── airport_1.txt   
│   ├── airport_2.txt   
│   ├── ...  
│   ├── viaduct_420.txt   

Training VQVAE and VQGAN

  • Train VQVAE:
$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python train_vqvae.py --data_dir /Path/To/RSICD/
  • Train VQGAN:
$ cd taming-transformers-master
$ CUDA_VISIBLE_DEVICES=0 python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,
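Both VQVAE and VQGAN compress each image into a grid of discrete codebook indices, which Txt2Img-MHN later predicts from text. The core quantization step is a nearest-neighbor lookup into a learned codebook; a simplified NumPy illustration (not the repos' actual implementation):

```python
import numpy as np

def quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry.

    latents:  (N, D) encoder outputs
    codebook: (K, D) learned embedding table
    Returns (indices, quantized) with quantized[i] = codebook[indices[i]].
    """
    # Squared Euclidean distance between every latent and every code.
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = d2.argmin(axis=1)
    return indices, codebook[indices]
```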

Training Txt2Img-MHN

  • Train Txt2Img-MHN with the pretrained VQVAE:
$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python train_txt2img_mhn.py --vae_type 0 --data_dir /Path/To/RSICD/ --vqvae_path /Path/To/vae.pth --batch_size 8
  • Train Txt2Img-MHN with the pretrained VQGAN:
$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python train_txt2img_mhn.py --vae_type 1 --data_dir /Path/To/RSICD/ --vqgan_model_path /Path/To/last.ckpt --vqgan_config_path /Path/To/project.yaml --batch_size 8

Note: Training with multiple GPUs is supported. Simply specify the GPU IDs with CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7,...

  • Use tensorboard to monitor the training process:
$ cd Txt2Img-MHN-main
$ tensorboard --logdir ./ --samples_per_plugin images=100
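The Hopfield layer that gives Txt2Img-MHN its name retrieves stored prototypes through a softmax-attention update (the continuous modern Hopfield network of Ramsauer et al.). A minimal sketch of one retrieval step, independent of this repo's actual layer implementation:

```python
import numpy as np

def hopfield_retrieve(query, patterns, beta=8.0):
    """One update step of a continuous modern Hopfield network.

    query:    (D,) state/query vector
    patterns: (N, D) stored patterns, one per row
    Returns the new state: patterns^T softmax(beta * patterns @ query).
    """
    scores = beta * patterns @ query
    scores -= scores.max()          # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum()
    return weights @ patterns       # convex combination of stored patterns
```

With a large beta, the update effectively snaps the query to its most similar stored pattern, which is the prototype-retrieval behavior the model exploits.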

Image Generation

  • Txt2Img-MHN (VQVAE):
$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python gen_im.py --vae_type 0 --data_dir /Path/To/RSICD/ --vqvae_path /Path/To/vae.pth --mhn_vqvae_path /Path/To/mhn_vqvae.pth --num_gen_per_image 10
  • Txt2Img-MHN (VQGAN):
$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python gen_im.py --vae_type 1 --data_dir /Path/To/RSICD/ --vqgan_model_path /Path/To/last.ckpt --vqgan_config_path /Path/To/project.yaml  --mhn_vqgan_path /Path/To/mhn_vqgan.pth --num_gen_per_image 10

Alternatively, you can download our pretrained models for a quick look.

Inception Score and FID Score

  • Data preparation: Before training the Inception model, prepare a new data folder with the structure below:
├── RSICD_cls/
│   ├── airport/
│   │   ├── airport_1.jpg
│   │   ├── airport_2.jpg
│   │   ├── ...
│   ├── bareland/
│   │   ├── bareland_1.jpg
│   │   ├── bareland_2.jpg
│   │   ├── ...
│   ├── ...
│   ├── viaduct/
│   │   ├── viaduct_1.jpg
│   │   ├── viaduct_2.jpg
│   │   ├── ...
  • Pretrain the Inception model:
$ cd Txt2Img-MHN-main/is_fid_score
$ CUDA_VISIBLE_DEVICES=0 python pretrain_inception.py --root_dir /Path/To/RSICD_cls/
  • Calculate the Inception score and FID score:
$ cd Txt2Img-MHN-main/is_fid_score
$ CUDA_VISIBLE_DEVICES=0 python is_fid_score.py --gen_dir /Path/To/GenImgFolder/ --data_dir /Path/To/RSICD/
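For reference, the two metrics follow their standard definitions: IS = exp(E_x KL(p(y|x) || p(y))) over classifier predictions, and FID = ||μ1−μ2||² + Tr(Σ1+Σ2−2(Σ1Σ2)^(1/2)) over Inception features. A minimal NumPy/SciPy sketch; the repo's is_fid_score.py may differ in details such as splits and feature layers:

```python
import numpy as np
from scipy import linalg

def inception_score(probs, eps=1e-12):
    """probs: (N, C) softmax outputs of a classifier on generated images."""
    marginal = probs.mean(axis=0)
    kl = (probs * (np.log(probs + eps) - np.log(marginal + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

def fid(feats_real, feats_gen):
    """Frechet distance between Gaussians fit to two (N, D) feature sets."""
    mu1, mu2 = feats_real.mean(0), feats_gen.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):    # discard tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2 * covmean))
```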

CLIP Score

$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python clip_score.py --gen_dir /Path/To/GenImgFolder/ --data_dir /Path/To/RSICD/
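CLIP score is typically the cosine similarity between a CLIP image embedding and the embedding of its text prompt, averaged over generated images. A sketch of the similarity computation on precomputed embeddings (clip_score.py's exact CLIP model and any scaling factor are assumptions here):

```python
import numpy as np

def clip_score(img_embs, txt_embs):
    """Mean cosine similarity between paired (N, D) image/text embeddings."""
    img = img_embs / np.linalg.norm(img_embs, axis=1, keepdims=True)
    txt = txt_embs / np.linalg.norm(txt_embs, axis=1, keepdims=True)
    return float((img * txt).sum(axis=1).mean())
```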

Zero-Shot Classification

$ cd Txt2Img-MHN-main/zero_shot_classification
$ CUDA_VISIBLE_DEVICES=0 python zero_shot_evaluation.py --gen_dir /Path/To/GenImgFolder/ --root_dir /Path/To/RSICD/
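The zero-shot protocol here trains a classifier only on generated images and evaluates it on real ones. A minimal sketch of that train-on-generated / test-on-real idea using a nearest-centroid classifier as a stand-in (not the script's actual model):

```python
import numpy as np

def train_on_generated_test_on_real(gen_feats, gen_labels, real_feats, real_labels):
    """Fit class centroids on generated-image features, score on real images.

    gen_feats/real_feats: (N, D) feature arrays; labels: (N,) integer class ids.
    Returns classification accuracy on the real images.
    """
    classes = np.unique(gen_labels)
    centroids = np.stack([gen_feats[gen_labels == c].mean(0) for c in classes])
    d2 = ((real_feats[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    preds = classes[d2.argmin(axis=1)]
    return float((preds == real_labels).mean())
```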

Paper

Txt2Img-MHN: Remote Sensing Image Generation from Text Using Modern Hopfield Networks

Please cite the following paper if you find it useful for your research:

@article{txt2img_mhn,
  title={Txt2Img-MHN: Remote Sensing Image Generation from Text Using Modern Hopfield Networks},
  author={Xu, Yonghao and Yu, Weikang and Ghamisi, Pedram and Kopp, Michael and Hochreiter, Sepp},
  journal={IEEE Trans. Image Process.}, 
  doi={10.1109/TIP.2023.3323799},
  year={2023}
}

Acknowledgement

DALLE-pytorch

taming-transformers

metrics

CLIP-rsicd

This research has been conducted at the Institute of Advanced Research in Artificial Intelligence (IARAI).

License

This repo is distributed under the MIT License. The code can be used for academic purposes only.

txt2img-mhn's People

Contributors

yonghaoxu


txt2img-mhn's Issues

Are all the annotations in the RSICD dataset used?

Great work and thanks to the authors for their hard work!

But I have a small question:

Each image in the RSICD dataset corresponds to five annotations. During training or inference, is one annotation randomly selected for each image, or are all five used to generate the image? If all five are used, will that result in semantic duplication?

Looking forward to your reply!

FID evaluation

Hi, could you provide the pre-trained Inception model used for the Inception score and FID score evaluation? I look forward to your reply, thanks!

RuntimeError: CUDA error: no kernel image is available for execution on the device

image_file /media/max/a/Txt2Img/Txt2Img-MHN-main/dataset/RSICD/RSICD_images/bridge_273.jpg
image_file /media/max/a/Txt2Img/Txt2Img-MHN-main/dataset/RSICD/RSICD_images/00386.jpg
image_file /media/max/a/Txt2Img/Txt2Img-MHN-main/dataset/RSICD/RSICD_images/meadow_184.jpg
0%| | 0/35 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train_vqvae.py", line 93, in <module>
main(parser.parse_args())
File "train_vqvae.py", line 47, in main
loss, recons = vae(images,temp=temp)
File "/home/max/anaconda3/envs/taming/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/max/a/Txt2Img/Txt2Img-MHN-main/tools/model.py", line 159, in forward
img = self.norm(img)
File "/media/max/a/Txt2Img/Txt2Img-MHN-main/tools/model.py", line 140, in norm
images.sub_(means).div_(stds)
RuntimeError: CUDA error: no kernel image is available for execution on the device

about the dataset

Hello. How do you use the RSICD dataset? Can it be used directly, or does it need to be arranged into a new structure? I'm looking forward to your code, and I'm confused about this.

Good job!

This is a very interesting work. Thanks for sharing it, and my respects for your effort!

How to apply methods to RSICD dataset

I couldn't find the code for this article, which made it difficult to reproduce for a while. Also, the AttnGAN, DAE-GAN, and DF-GAN baselines compared in the paper are hard to apply directly to the remote sensing dataset RSICD, especially given the lack of metadata, which makes it difficult to run experiments. I don't quite understand what preprocessing the dataset requires.

I hope to receive your help.

VQVAE is significantly slower than VQGAN

Hi, I'm using the pre-trained weights you provided for inference on RSICD. VQGAN runs fine, but VQVAE is so slow that it's almost unusable: measuring the time before and after generation, it takes about 10 minutes to produce a single image.

The inference is done on a 4090. Does VQVAE require some additional configuration?
