
deblurganv2's Introduction

DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better

Code for the paper DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better

Orest Kupyn, Tetiana Martyniuk, Junru Wu, Zhangyang Wang

In ICCV 2019

Overview

We present a new end-to-end generative adversarial network (GAN) for single-image motion deblurring, named DeblurGAN-v2, which considerably boosts state-of-the-art deblurring efficiency, quality, and flexibility. DeblurGAN-v2 is based on a relativistic conditional GAN with a double-scale discriminator. For the first time, we introduce the Feature Pyramid Network into deblurring, as a core building block in the generator of DeblurGAN-v2. It can flexibly work with a wide range of backbones to navigate the balance between performance and efficiency. Plugging in sophisticated backbones (e.g., Inception-ResNet-v2) leads to solid state-of-the-art deblurring. Meanwhile, with lightweight backbones (e.g., MobileNet and its variants), DeblurGAN-v2 runs 10-100 times faster than the nearest competitors while maintaining close to state-of-the-art results, implying the option of real-time video deblurring. We demonstrate that DeblurGAN-v2 achieves very competitive performance on several popular benchmarks, in terms of deblurring quality (both objective and subjective) as well as efficiency. In addition, we show that the architecture is also effective for general image restoration tasks.

DeblurGAN-v2 Architecture

Datasets

The datasets for training can be downloaded via the links below:

Training

Command

python train.py

The training script loads its configuration from config/config.yaml.
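
For orientation, here is a minimal, hypothetical sketch of inspecting that config from Python before training. The key names ('train', 'files_a', 'files_b') are assumptions based on the dataset path fragments that appear elsewhere on this page; verify them against your local config/config.yaml.

```python
import yaml

# Load the training configuration; SafeLoader avoids the PyYAML deprecation warning.
with open('config/config.yaml') as f:
    config = yaml.load(f, Loader=yaml.SafeLoader)

# Assumed keys (check against your config): glob patterns for blurred/sharp pairs.
print(config['train']['files_a'])  # blurred training images
print(config['train']['files_b'])  # sharp training images
```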

Tensorboard visualization

Testing

To test on a single image,

python predict.py IMAGE_NAME.jpg

By default, the Predictor uses the pretrained model 'best_fpn.h5'. You can change this in the code (the 'weights_path' argument). It assumes that the fpn_inception backbone is used. If you want to try a different pretrained backbone, also specify it under ['model']['g_name'] in config/config.yaml.
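
For reference, a minimal sketch of single-image inference from Python. The Predictor class and its 'weights_path' argument are mentioned above, but the exact call convention below (passing the image and a mask) is an assumption and may differ in your checkout of predict.py.

```python
import cv2
from predict import Predictor  # assumed import path

# Load the generator weights (fpn_inception backbone by default).
predictor = Predictor(weights_path='best_fpn.h5')

img = cv2.imread('IMAGE_NAME.jpg')
deblurred = predictor(img, None)  # assumed (image, mask) call signature
cv2.imwrite('IMAGE_NAME_deblurred.jpg', deblurred)
```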

Pre-trained models

| Dataset | G Model | D Model | Loss Type | PSNR / SSIM | Link |
|---|---|---|---|---|---|
| GoPro Test Dataset | InceptionResNet-v2 | double_gan | ragan-ls | 29.55 / 0.934 | fpn_inception.h5 |
| GoPro Test Dataset | MobileNet | double_gan | ragan-ls | 28.17 / 0.925 | fpn_mobilenet.h5 |
| GoPro Test Dataset | MobileNet-DSC | double_gan | ragan-ls | 28.03 / 0.922 | |

Parent Repository

The code was taken from https://github.com/KupynOrest/RestoreGAN, a repository that contains flexible pipelines for different image restoration tasks.

Citation

If you use this code for your research, please cite our paper.

```
@InProceedings{Kupyn_2019_ICCV,
author = {Orest Kupyn and Tetiana Martyniuk and Junru Wu and Zhangyang Wang},
title = {DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2019}
}
```

deblurganv2's People

Contributors

arsenyinfo, choprahetarth, kupynorest, polytechwangchao, sandbox3aster, t-martyniuk, vlee-harmonicinc


deblurganv2's Issues

the net input size

The size of the original images is 1280×720. When we test the net, should we feed 256×256 crops or the full 1280×720 images?
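
Not an official answer, but assuming the FPN generator is fully convolutional it can in principle take the full 1280×720 frame, provided the spatial dimensions are divisible by the network stride. A generic sketch, assuming a stride of 32 (that value is an assumption, not taken from the repository):

```python
import numpy as np

def pad_to_multiple(img: np.ndarray, multiple: int = 32):
    """Reflect-pad an HxWxC image so that H and W are divisible by `multiple`."""
    h, w = img.shape[:2]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    padded = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)), mode='reflect')
    return padded, (h, w)

# usage: pad the 1280x720 frame, run the network, then crop the result back
# padded, (h, w) = pad_to_multiple(frame)
# out = predictor(padded, None)[:h, :w]
```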

SSIM reproduction in either DeblurGANV2, V1

Hi,

Could you clarify which SSIM implementation was used? Did you use CW-SSIM from the pyssim package?
People in this repo and in your previous one are looking for an answer in order to reproduce the SSIM reported in your papers.
It seems that nobody has been able to get higher than 0.9 with the trained weights.

Thanks for any comments

Question about your code

Hi,
Thank you for your excellent work. When I was running your code, I noticed that you set the parameter 'retain_graph=True' when the discriminator D is updated. Could you explain why?
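
Not an answer from the authors, but the usual reason for retain_graph=True is that the same generator forward pass feeds more than one backward() call (for example a discriminator step followed by a generator step, or several discriminator updates per iteration); without retaining the graph, PyTorch frees its buffers after the first backward. A toy sketch of the mechanics, unrelated to the repository's actual modules:

```python
import torch
import torch.nn as nn

G, D = nn.Linear(8, 8), nn.Linear(8, 1)  # toy stand-ins for generator/discriminator
z = torch.randn(4, 8)
fake = G(z)                              # one generator forward pass

d_loss = D(fake).mean()
d_loss.backward(retain_graph=True)       # keep G's graph alive for the next backward
g_loss = (1 - D(fake)).mean()
g_loss.backward()                        # would raise "backward through the graph a second time" without retain_graph above
```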

Pretrained Model

Hi~

Thanks for your work. I read the DeblurGAN-v1 paper several months ago and the DeblurGAN-v2 paper as soon as it came out.
I am currently working on my own deblurring paper, and I would really like to cite and compare against your v2 results. May I ask when you will upload your pre-trained models?
It would be very helpful, thank you!

test.py

Could you provide test.py?

Has anyone met this problem when running "python train.py"?

Traceback (most recent call last):
File "train.py", line 192, in
trainer.train()
File "train.py", line 41, in train
self.run_epoch(epoch)
File "train.py", line 75, in run_epoch
loss_content = self.criterionG(outputs, targets)
File "/home/a/DeblurGANv2/models/losses.py", line 63, in call
return self.get_loss(fakeIm, realIm)
File "/home/a/DeblurGANv2/models/losses.py", line 54, in get_loss
fakeIm[0, :, :, :] = self.transform(fakeIm[0, :, :, :])
File "/home/a/DeblurGANv2/venv/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 163, in call
return F.normalize(tensor, self.mean, self.std, self.inplace)
File "/home/a/DeblurGANv2/venv/lib/python3.6/site-packages/torchvision/transforms/functional.py", line 208, in normalize
tensor.sub
(mean[:, None, None]).div
(std[:, None, None])
RuntimeError: expected type torch.cuda.FloatTensor but got torch.FloatTensor

can not import name inceptionresnetv2

python predict.py 1.jpg

Traceback (most recent call last):
File "predict.py", line 13, in
from models.networks import get_generator
File "/data/cdp_algo_ceph_ssd/users/yuyangyin/SR/GAN_deblur/GAN_deblur/DeblurGANv2/models/networks.py", line 8, in
from models.fpn_inception import FPNInception
File "/data/cdp_algo_ceph_ssd/users/yuyangyin/SR/GAN_deblur/GAN_deblur/DeblurGANv2/models/fpn_inception.py", line 3, in
from pretrainedmodels import inceptionresnetv2
ModuleNotFoundError: No module named 'pretrainedmodels'
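
The missing dependency is the `pretrainedmodels` package (Cadene's pretrained-models.pytorch), which provides inceptionresnetv2. Installing it resolves the error; a quick check:

```python
# First: pip install pretrainedmodels
from pretrainedmodels import inceptionresnetv2

# Downloads the ImageNet weights on first use.
model = inceptionresnetv2(num_classes=1000, pretrained='imagenet')
print(model.__class__.__name__)
```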

problem in dataset.py

msg = f'Subsampling buckets from {lower_bound} to {upper_bound}, total buckets number is {n_buckets}'
                                                                                                    ^

SyntaxError: invalid syntax

What is the problem?
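
That SyntaxError is what Python interpreters older than 3.6 report for f-strings, which dataset.py uses, so the code needs Python 3.6 or newer. If upgrading is not an option, an equivalent str.format() formulation would look like this (a sketch, not a change shipped in the repository):

```python
lower_bound, upper_bound, n_buckets = 0, 90.0, 100  # example values

# f-string (requires Python >= 3.6), as in dataset.py:
msg = f'Subsampling buckets from {lower_bound} to {upper_bound}, total buckets number is {n_buckets}'

# equivalent for older interpreters:
msg = 'Subsampling buckets from {} to {}, total buckets number is {}'.format(
    lower_bound, upper_bound, n_buckets)
```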

test

How do I get best_fpn.h5? Do I have to train it myself? I just want to test the deblurring results.

Can't pickle local object 'get_corrupt_function.<locals>.process'

When I run train.py, this error occurs. Does anyone have the same problem, and how can it be solved?
The pretrained models and datasets have been downloaded.
I have read some blog posts suggesting it may be a multiprocessing problem, but I don't know where it goes wrong. Seeking help.
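
A common workaround (not confirmed by the authors): the error means a DataLoader worker process cannot pickle the function defined inside get_corrupt_function. Either run the loader without worker processes, or move the inner function to module level so it is picklable. A generic sketch:

```python
from torch.utils.data import DataLoader

# Option 1: wherever the DataLoader is built, disable worker processes
# so nothing needs to be pickled (hypothetical variable names):
# loader = DataLoader(dataset, batch_size=batch_size, num_workers=0)

# Option 2: define the corruption function at module level instead of as a
# local closure inside get_corrupt_function, so multiprocessing can pickle it.
def process(img):
    # ... the corruption logic that used to live in the closure ...
    return img
```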

Which variable cause the torch.FloatTensor/torch.cuda.FloatTensor error?

Has anyone met and solved this error?

Traceback (most recent call last):
File "DeblurGANv2/train.py", line 189, in
trainer.train()
File "DeblurGANv2/train.py", line 45, in train
self.run_epoch(epoch)
File "DeblurGANv2/train.py", line 75, in run_epoch
loss_content = self.criterionG(outputs, targets)
File "/home/work/user-job-dir/DeblurGANv2/models/losses.py", line 64, in __call__
return self.get_loss(fakeIm, realIm)
File "/home/work/user-job-dir/DeblurGANv2/models/losses.py", line 55, in get_loss
fakeIm[0, :, :, :] = self.transform(fakeIm[0, :, :, :])
File "/home/work/anaconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 163, in __call__
return F.normalize(tensor, self.mean, self.std, self.inplace)
File "/home/work/anaconda3/lib/python3.6/site-packages/torchvision/transforms/functional.py", line 208, in normalize
tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
RuntimeError: expected type torch.cuda.FloatTensor but got torch.FloatTensor
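
Not a confirmed fix, but this message comes from torchvision's Normalize being handed tensors on mismatched devices: the image slice fakeIm[0] lives on the GPU while the mean/std tensors created inside the transform are on the CPU (older torchvision versions do not move them automatically). One generic way around it is to build the normalization constants on the image's own device; a sketch with hypothetical mean/std values:

```python
import torch

def normalize_on_device(img: torch.Tensor, mean, std):
    """Normalize a CxHxW tensor with mean/std created on the tensor's own device,
    avoiding the torch.FloatTensor vs torch.cuda.FloatTensor mismatch."""
    mean = torch.as_tensor(mean, dtype=img.dtype, device=img.device)
    std = torch.as_tensor(std, dtype=img.dtype, device=img.device)
    return (img - mean[:, None, None]) / std[:, None, None]

# hypothetical usage inside get_loss in models/losses.py:
# fakeIm[0] = normalize_on_device(fakeIm[0], mean=[0.485, 0.456, 0.406],
#                                 std=[0.229, 0.224, 0.225])
```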

RAGAN-LS loss differs between code and paper

Hi~
Thanks for your work.
I have a question about the RAGAN-LS loss. The implementation in the code is

self.loss_D = (torch.mean((self.pred_real - torch.mean(self.fake_pool.query()) - 1) ** 2) +
               torch.mean((self.pred_fake - torch.mean(self.real_pool.query()) + 1) ** 2)) / 2

but the formula in the paper is

$$L_D^{\mathrm{RaLSGAN}} = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[(D(x) - \mathbb{E}_{z \sim p_z(z)} D(G(z)) - 1)^2\big] + \mathbb{E}_{z \sim p_z(z)}\big[(D(G(z)) - \mathbb{E}_{x \sim p_{\mathrm{data}}(x)} D(x) + 1)^2\big]$$

I think the difference is that torch.mean(self.fake_pool.query()) averages over the image pool, whereas the paper's term $\mathbb{E}_{z \sim p_z(z)} D(G(z))$ is an expectation over the current generator outputs.

test.sh

how to test "test.sh"?What parameters need to be modified?

Impossible to convert the pre-trained H5 model to TFLITE

Hello,

Just for your information, I was unable to convert your pre-trained H5 model to a TFLite one using this Colab cell:

import tensorflow as tf
model = tf.keras.models.load_model('/content/drive/.../your_model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("/content/drive/.../your_model.tflite", "wb").write(tflite_model)

The error was:

OSError: SavedModel file does not exist at: /content/drive/.../your_model.h5/{saved_model.pbtxt|saved_model.pb}
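
That outcome is expected: despite the .h5 extension, this is a PyTorch repository, so the released checkpoints are PyTorch files saved with torch.save, not Keras models, and tf.keras.models.load_model (and hence the Keras-to-TFLite converter) cannot read them. Converting to TFLite would require exporting the PyTorch model first (for example via ONNX), which the repository does not provide. A minimal sketch of inspecting the checkpoint with PyTorch:

```python
import torch

# The .h5 suffix is only a file name; the contents are a PyTorch checkpoint.
state = torch.load('fpn_inception.h5', map_location='cpu')
print(type(state))  # typically a dict containing a state_dict, not a Keras model
if isinstance(state, dict):
    print(list(state.keys())[:5])
```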

No module named 'pretrainedmodels'

When I run the code, I get this error.
Does anyone know where I can find the pretrained inceptionresnetv2 model?
Thank you!

Traceback (most recent call last):
File "D:/PyProject/DeblurGANv2/train.py", line 17, in
from models.networks import get_nets
File "D:\PyProject\DeblurGANv2\models\networks.py", line 8, in
from models.fpn_inception import FPNInception
File "D:\PyProject\DeblurGANv2\models\fpn_inception.py", line 3, in
from pretrainedmodels import inceptionresnetv2
ModuleNotFoundError: No module named 'pretrainedmodels'

train problem

Hello, when I run train.py there is an error. How can I solve this problem?
I0318 16:56:19.757143 8820 dataset.py:28] Subsampling buckets from 0 to 90.0, total buckets number is 100
I0318 16:56:19.757143 8820 dataset.py:71] Dataset has been created with 18 samples
I0318 16:56:19.762583 8820 dataset.py:28] Subsampling buckets from 90.0 to 100, total buckets number is 100
Traceback (most recent call last):
File "train.py", line 179, in
train, val = map(get_dataloader, datasets)
File "E:\DeblurGANv2\dataset.py", line 133, in from_config
files_a, files_b = map(list, zip(*data))
ValueError: not enough values to unpack (expected 2, got 0)

and
train:
  files_a: ./datasets/train/blur/*.png
  files_b: ./datasets/train/sharp/*.png
val:
  files_a: ./datasets/test/blur/*.png
  files_b: ./datasets/test/sharp/*.png

I don't know how to address this problem, or whether my setup is wrong. Thank you.

(wzs) student@rtxserver:~/DeblurGANv2-master$ python train.py
W0916 21:22:00.635229 29048 warnings.py:99] train.py:172: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
config = yaml.load(f)

I0916 21:22:00.642313 29048 dataset.py:28] Subsampling buckets from 0 to 90.0, total buckets number is 100
Traceback (most recent call last):
File "train.py", line 179, in
train, val = map(get_dataloader, datasets)
File "/home/student/DeblurGANv2-master/dataset.py", line 133, in from_config
files_a, files_b = map(list, zip(*data))
ValueError: not enough values to unpack (expected 2, got 0)
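
This ValueError means the glob patterns from config/config.yaml matched zero blur/sharp pairs, so zip(*data) receives an empty list (note the patterns quoted above are missing the * in *.png). A quick sanity check before training:

```python
from glob import glob

# Verify that the patterns from config/config.yaml actually match files;
# both lists must be non-empty and of equal length.
blur = sorted(glob('./datasets/train/blur/*.png'))
sharp = sorted(glob('./datasets/train/sharp/*.png'))
print(len(blur), len(sharp))
assert blur and len(blur) == len(sharp), 'empty or mismatched blur/sharp pairs'
```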

Problems in running under Windows

First of all, thank you very much for your contribution. When I retrain with your data, the following problem arises (see the attached screenshot). How can I solve it?

cannot import name 'inceptionresnetv2'

I'm sorry to bother you. I want to retrain the model myself, but the following error occurs:
from pretrainedmodels import inceptionresnetv2
ImportError: cannot import name 'inceptionresnetv2'
May I ask how to solve this problem? Thank you very much

pre-trained model

Hello, thanks for your great work, but it seems that we can't download the pre-trained models. Please upload them again.

Question about the pretrained model

Hi,
The FPN-Inception model performs very well in my tests. I have two questions:

  1. The color in the top-left corner seems to be strange. Is there something wrong?
  2. I want to repeat your training process. What is your training config and dataset?

Thanks.

Run pretty slowly on test_metrics.py

Hello,
I tested the pre-trained model using python test_metrics.py,
but it seems to run pretty slowly. Is there anything I need to note when running test_metrics.py?

Implementation Details for datasets

I saw that you used datasets GoPro, DVD, and NFS for training in your paper.
Does that mean I need to download the three datasets mentioned above and arrange all of them in the following way?
train:
  files_a: ./datasets/my_dataset/train/**/blur/*.png
  files_b: ./datasets/my_dataset/train/**/sharp/*.png
  ...
val:
  files_a: ./datasets/my_dataset/test/**/blur/*.png
  files_b: ./datasets/my_dataset/test/**/sharp/*.png
  ...
After that, can I just run python train.py and the dataset setup will be the same as in the paper?
Thank You!

Port to tensorflow.js

Hi,
Thanks for your work.
Taking into account that your solution should be fast, will you be porting/converting trained models to tensorflow.js?

How to test a 256×256 image?

Hello, good work, but I have a question.
I want to test your model in my own data with the size of 256×256, and when I run the command with

python test_metrics.py --img_folder=./datasets/my_dataset/ --weights_path=./best_fpn.h5

the program will result in an error,

ValueError: operands could not be broadcast together with shapes (720,1280,3) (256,256,3)

I would like to know why, with an input image of 256×256×3, the output ends up with a size of 720×1280×3.
Looking forward to your reply.

Batch images predict

Is there any command to predict all images in a folder?
I do not want to process all images one by one.

I tried: python predict.py /home/specyfick/Downloads/Wladek/raw/blurred5/*.JPG

I also looked into the code, but unfortunately I'm not very good at Python.
Thanks
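
There may be a built-in way to pass a pattern to predict.py, but a simple workaround is to loop over the folder in Python and call the predictor on each file. A sketch that reuses the Predictor class mentioned in the README above; the call convention is an assumption, not a verified API:

```python
import glob
import os
import cv2
from predict import Predictor  # assumed import path

predictor = Predictor(weights_path='best_fpn.h5')
out_dir = 'deblurred'
os.makedirs(out_dir, exist_ok=True)

for path in sorted(glob.glob('/home/specyfick/Downloads/Wladek/raw/blurred5/*.JPG')):
    img = cv2.imread(path)
    result = predictor(img, None)  # assumed (image, mask) call signature
    cv2.imwrite(os.path.join(out_dir, os.path.basename(path)), result)
```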

Adversarial loss gets a negative value

Hi~
Thanks for your work.
I printed the losses. It confused me that the adversarial loss becomes negative and the D loss is equal to 0 all the time.

Here is my log:
epoch: 554/5000
G_loss=-0.3759; G_loss_content=0.0664; G_loss_adv=-0.4422; D_loss=0.0000; PSNR=23.0817; SSIM=0.7117
time spent: 17:59:02.860182 time remain: 5 days, 12:53:57.710205 107.61s/epoch
epoch: 555/5000
G_loss=0.0658; G_loss_content=0.0636; G_loss_adv=0.0022; D_loss=0.0000; PSNR=22.2133; SSIM=0.6799
time spent: 18:01:14.923619 time remain: 6 days, 19:03:34.153112 132.06s/epoch
epoch: 556/5000
G_loss=-0.2186; G_loss_content=0.0646; G_loss_adv=-0.2832; D_loss=0.0000; PSNR=21.4166; SSIM=0.6062
time spent: 18:03:03.482869 time remain: 5 days, 14:00:28.552603 108.56s/epoch
