srn-deblur's Introduction

Hi there 👋



srn-deblur's People

Contributors

jiangsutx, rimchang

srn-deblur's Issues

Could not get obvious effect from the color model

Dear Author,
Thanks a lot for such a great model; the results are really good in most cases.
But in some cases I could not find the sharpening of the images very distinguishable.
I just wanted to confirm: can the pretrained model shared on GitHub handle focal-blurred images, or does it only handle motion blur?

Eagerly waiting for your response, TIA!

crop_size

Hi, could anyone tell me what the scripts do with crop_size (in model.py)? When I run training it prints <img_in, img_gt (4, 128, 128, 1) (4, 128, 128, 1)>; 4 is my batch size, and I set crop_size = 128, but I don't know what it means. Does it refer to the input placeholders? Thank you. (A sketch follows below.)
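
Based on the input_producer code in the model.py listing further down this page (not an official answer): the blurry/ground-truth pair is stacked and a random crop_size x crop_size patch is cut from both, which is why the training tensors print as (batch_size, crop_size, crop_size, channels). A minimal sketch:

import tensorflow as tf

# blur/gt pair as in preprocessing(); 256x256 single-channel dummies stand in
# for decoded training images
crop_size, chns = 128, 1
imgs = [tf.zeros([256, 256, chns]), tf.zeros([256, 256, chns])]
stacked = tf.stack(imgs, axis=0)                                  # (2, H, W, C)
patch = tf.random_crop(stacked, [2, crop_size, crop_size, chns])  # same crop for both
img_in, img_gt = tf.unstack(patch, axis=0)                        # each (128, 128, 1)
# batching these patches yields the printed shape (4, 128, 128, 1) for batch_size=4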

puzzle on training speed

I train the model on a Tesla M40 GPU, but I can only train 40 steps per minute. I admit that a Titan Xp is faster than an M40, but is multi-GPU training feasible?

model.generator

In model.generator, I want to ask: why are inp_blur and inp_pred put together? (See the sketch below.)
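
Not an official answer, but the generator code later on this page shows the mechanism: at each level the blurry input resized to that scale is concatenated channel-wise with the previous (coarser) level's prediction, so finer levels refine the coarser estimate instead of deblurring from scratch. A condensed sketch of those lines:

import tensorflow as tf

def multi_scale_input(inputs, n_levels=3, scale=0.5):
    # condensed from the generator in the model.py listing further down this page
    h, w = inputs.get_shape().as_list()[1:3]   # assumes an NHWC 4-D tensor
    inp_pred = inputs                          # coarsest level starts from the blur itself
    for i in range(n_levels):
        s = scale ** (n_levels - i - 1)
        hi, wi = int(round(h * s)), int(round(w * s))
        inp_blur = tf.image.resize_images(inputs, [hi, wi], method=0)
        # stop_gradient: the coarser prediction is used as a fixed prior here
        inp_pred = tf.stop_gradient(tf.image.resize_images(inp_pred, [hi, wi], method=0))
        inp_all = tf.concat([inp_blur, inp_pred], axis=3)  # blur + prior estimate
        # ... the encoder/decoder runs on inp_all and produces the next inp_pred ...
    return inp_all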

Some questions on BasicConvLSTMCell

Hi, I have some questions:

  1. There is no definition of "self._num_units" in class BasicConvLSTMCell?
  2. What is the role of "forget_bias=1.0"? Some ConvLSTM implementations don't use this forget bias. (See the sketch after this list.)

Looking forward to your reply!
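
Not an official answer, but for question 2: forget_bias is a standard LSTM trick (also used by TensorFlow's own BasicLSTMCell) where a constant is added to the forget gate's pre-activation so the cell starts out retaining its state, which helps gradients flow early in training. A minimal sketch of where it enters the update:

import tensorflow as tf

def lstm_update(i, j, f, o, c, forget_bias=1.0):
    # i, j, f, o: input / new-input / forget / output gate pre-activations
    # forget_bias shifts the forget gate toward "remember" at initialization
    c_new = c * tf.sigmoid(f + forget_bias) + tf.sigmoid(i) * tf.tanh(j)
    h_new = tf.tanh(c_new) * tf.sigmoid(o)
    return c_new, h_new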

About the kernel size of the deconvolution layer

Hi, in your released paper you said "all kernel sizes are set to 5", but I found that the kernel size of the deconvolution layers in the decoder part was set to 4, as written in your code:
deconv2_4 = slim.conv2d_transpose(deconv3_1, 64, [4, 4], stride=2, scope='dec2_4')
deconv1_4 = slim.conv2d_transpose(deconv2_1, 32, [4, 4], stride=2, scope='dec1_4')
Why did you make this change? Does this make performance better?

Can you please provide more information on how I can train my own model?

This is a newbie question. I know some Python and have never worked with AI before - I work as a visual effects artist and your results are VERY promising. The thing is, I would like to train my own model from images that I feed it. I have access to very high quality images and the resources to train a model, given enough time - and while I cannot share the model itself, I can of course share the results!

I have a few questions, if you could help me please, because I am a newbie... (some hedged command examples follow after the list)

  1. How do I actually train my own model (the deblur.model-523000 .index, .meta and .data files)? These are provided by you - but how do I generate those files myself?

  2. When training the model to get good results - do I have to feed it pairs of identical images, one blurred and the other without blur? (My question, I suppose, is: can I use very similar but different frames to train my model, such as the next frame of a movie sequence?)

  3. Due to the OS we are using at work, for the time being I cannot use the GPU - so can I use the CPU only to train the model with 2K-resolution images? What command would I use for that?

  4. How do I specify which model I want to use?
    python run_model.py --input_path=./testing_set --output_path=./testing_res
    In this command (and on the git page) I can't find any parameter that would specify which of the models provided in the checkpoints folder (color, gray, lstm) should be used.

Thank you in advance!
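
Not an authoritative answer, but judging from the flags visible elsewhere on this page (--model and --gpu appear in a command in a later issue, and model.py checks args.phase), the invocations look roughly like the following. The --phase=train form and --gpu=-1 for CPU-only runs are assumptions based on the repository's README conventions, so please verify against run_model.py:

python run_model.py --phase=train                                                          # writes deblur.model-<step> .index/.meta/.data under ./checkpoints/<model>/
python run_model.py --input_path=./testing_set --output_path=./testing_res --model=color   # or --model=gray / --model=lstm
python run_model.py --input_path=./testing_set --output_path=./testing_res --gpu=-1        # CPU-only run (assumed convention)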

Effect of image size

Does the image size affect the deblurring result? How should height and width be set for better results?

Using own image post-training

Quick question: after one has trained the model on the GoPro dataset, how can I then input a random blurry picture into the model and have it generate a deblurred image?

Questions about test pictures

I tested my own picture with the pretrained model provided by the author, but the resulting image could not be opened for viewing. I wonder what the reason is.

why it shows ''loss_total_val=NaN''

I want to know why "AssertionError: Model diverged with loss = NaN" shows up when I run the code. I have checked the code and tried to revise it, but it still shows "loss_total_val=NaN". Could you tell me how to handle it? Please.

cannot reproduce PSNR=30.19, SSIM=0.9334

Thank you for the great work.

I used the command line below:
python run_model.py --input_path=./testing_set --output_path=./testing_res --gpu=0 --model=lstm

I tried to test on the whole GOPRO test set, but I cannot reproduce PSNR=30.19, SSIM=0.9334.
Could you tell me how to reproduce it? Please.

Below are my MATLAB code and evaluation results.

source='/home/ubuntu/Desktop/workspace/result/result_GOPRO/SRN_deblur/deblur/';
target='/home/ubuntu/Desktop/workspace/result/result_GOPRO/sharp/';
output='/home/ubuntu/Desktop/workspace/result/result_GOPRO/SRN_deblur/matlab_metric.txt';


dir_list = dir(source);
bool_list = not([dir_list.isdir]);
file_list = dir_list(bool_list);


M = containers.Map();
for i = 1:length(file_list)
    
    file_name = file_list(i).name;
    deblur = imread(strcat(source, file_name));
    sharp = imread(strcat(target, file_name));
    %sharp = imread(strcat(target, strcat(file_name(1:length(file_name)-4), '.jpg')));
    
    split_name = strsplit(file_name, '_');
    split_name = split_name(1,1:length(split_name)-1);
    name = join(split_name, '_');
    name = char(name);
    
    result_psnr = psnr(deblur, sharp);
    result_ssim = 0;  % SSIM computation is disabled, hence SSIM reads 0.00 below
    %result_ssim = ssim(deblur, sharp);
    
    if ~isKey(M, name)
        M(name)=[1, result_psnr, result_ssim];
    else
        metric_list = M(name);
        metric_list(1) = metric_list(1)+1;
        metric_list(2) = metric_list(2)+result_psnr;
        metric_list(3) = metric_list(3)+result_ssim;
        
        M(name) = metric_list;
    end
    
    disp(M(name));    
    
end

total = [0,0,0];
key_set = M.keys;
fid = fopen(output, 'wt');
for i = 1:length(key_set)
    name = key_set{i};
    
    metric_list = M(name);
    avg_psnr = metric_list(2)/metric_list(1);
    avg_ssim = metric_list(3)/metric_list(1);
    
    total(1) = total(1) + metric_list(1);
    total(2) = total(2) + metric_list(2);
    total(3) = total(3) + metric_list(3);
   
    fprintf(fid, '%s Video PSNR : %4.2f, SSIM : %4.2f, Count : %i \n', name, avg_psnr, avg_ssim, metric_list(1));
end

fprintf(fid, 'Total Video PSNR : %4.2f, SSIM : %4.2f, Count : %i \n', total(2)/total(1), total(3)/total(1), total(1));

GOPR0384_11_00 Video PSNR : 30.13, SSIM : 0.00, Count : 100
GOPR0384_11_05 Video PSNR : 28.53, SSIM : 0.00, Count : 100
GOPR0385_11_01 Video PSNR : 28.62, SSIM : 0.00, Count : 100
GOPR0396_11_00 Video PSNR : 30.48, SSIM : 0.00, Count : 100
GOPR0410_11_00 Video PSNR : 28.67, SSIM : 0.00, Count : 134
GOPR0854_11_00 Video PSNR : 26.38, SSIM : 0.00, Count : 100
GOPR0862_11_00 Video PSNR : 24.99, SSIM : 0.00, Count : 77
GOPR0868_11_00 Video PSNR : 26.04, SSIM : 0.00, Count : 100
GOPR0869_11_00 Video PSNR : 27.82, SSIM : 0.00, Count : 100
GOPR0871_11_00 Video PSNR : 26.25, SSIM : 0.00, Count : 100
GOPR0881_11_01 Video PSNR : 27.69, SSIM : 0.00, Count : 100
Total Video PSNR : 27.87, SSIM : 0.00, Count : 1111

Blind or NonBlind?

Is the SRN-DeblurNet method blind or non-blind?
How does one distinguish blind from non-blind deblurring?

test on kohler dataset

I downloaded the Köhler dataset from here, saving the images as JPG. Then I used the following command:

python run_model.py --input_path= xxx --output_path=xxxx

where I use the pretrained model parameters stored at the 523000th step.

After obtaining the deblurred images, I used the ssim and psnr functions provided by MATLAB to compute the quantitative results. The average PSNR and SSIM are 19.37 and 0.74, respectively.

I am confused by these experimental results. Could you provide some information on how to set up the evaluation on the Köhler dataset?

Face aging simulation

Hello, I saw your reply on the "Generative Face Completion" GitHub repository and learned that you are working on face aging simulation. I happen to be working on the same topic recently and have a few questions I would like to discuss with you. I could not find your contact information on your GitHub homepage, so I can only reach you this way; please forgive the intrusion. Thank you!

Questions about the training

Greetings,

so these are some beginner questions about training.

  1. If I train it, will it update the checkpoint database and overwrite the old training data, or will the new training be appended?
  2. If I train with some pictures, the checkpoint folder does not change in size after training finishes - what could this mean? Aren't the training files saved?
  3. How could I get better results for, e.g., "fog"? Here is an example image:
    (example screenshot attached)

Is it possible to reach you via an e-mail address to get in contact for learning questions?

not enough values to unpack (expected 3, got 2)

Traceback (most recent call last):
File "run_model.py", line 50, in
tf.app.run()
File "D:\Anaconda3\envs\tf18\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "run_model.py", line 42, in main
deblur.test(args.height, args.width, args.input_path, args.output_path)
File "F:\python\SRN-Deblur-master\models\model.py", line 282, in test
h, w, c = blur.shape
ValueError: not enough values to unpack (expected 3, got 2)

If the bit depth of a picture is 8 (a single-channel image), this error occurs.
I then changed "blur = scipy.misc.imread(os.path.join(input_path, imgName))" to blur = scipy.misc.imread(os.path.join(input_path, imgName), mode='RGB'), which solved it.
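
In code form, the fix described above (taken directly from this report, as a patch to the load step in model.py's test()): forcing three-channel decoding makes blur.shape always unpack into (h, w, c), even for 8-bit grayscale files:

blur = scipy.misc.imread(os.path.join(input_path, imgName), mode='RGB')  # always (h, w, 3)
h, w, c = blur.shape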

OutOfRangeError: FIFOQueue '_1_input/batch/fifo_queue' is closed and has insufficient elements

Caused by op u'input/batch', defined at:
File "/mnt/hgfs/G/00Deblur/SRN-Deblur-master/run_model.py", line 50, in
main()
File "/mnt/hgfs/G/00Deblur/SRN-Deblur-master/run_model.py", line 44, in main
deblur.train()
File "/mnt/hgfs/G/00Deblur/SRN-Deblur-master/models/model.py", line 183, in train
self.build_model()
File "/mnt/hgfs/G/00Deblur/SRN-Deblur-master/models/model.py", line 134, in build_model
img_in, img_gt = self.input_producer(self.batch_size)
File "/mnt/hgfs/G/00Deblur/SRN-Deblur-master/models/model.py", line 61, in input_producer
batch_in, batch_gt = tf.train.batch([image_in, image_gt], batch_size=batch_size, num_threads=8, capacity=20)
File "/home/zhang/anaconda2/envs/srn/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 927, in batch
name=name)
File "/home/zhang/anaconda2/envs/srn/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 722, in _batch
dequeued = queue.dequeue_many(batch_size, name=name)
File "/home/zhang/anaconda2/envs/srn/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.py", line 464, in dequeue_many
self._queue_ref, n=n, component_types=self._dtypes, name=name)
File "/home/zhang/anaconda2/envs/srn/lib/python2.7/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 2418, in _queue_dequeue_many_v2
component_types=component_types, timeout_ms=timeout_ms, name=name)
File "/home/zhang/anaconda2/envs/srn/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/zhang/anaconda2/envs/srn/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/home/zhang/anaconda2/envs/srn/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1470, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

OutOfRangeError (see above for traceback): FIFOQueue '_1_input/batch/fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: input/batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](input/batch/fifo_queue, input/batch/n)]]

Enlarge go-pro dataset and update datalist.txt file

Hello, I enlarged the GoPro dataset with another dataset and updated the datalist.txt file with the new images, but an error always comes up. This is the error report: (ValueError: got shape [2104], but wanted [2104, 2])
It works fine when I give the new images the same name format as the GoPro images and put them in the same pre-existing folders, but if I change the directory or the image name format, this error comes up.
How can this be explained? (A quick check is sketched below.)
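
A possible explanation, grounded in the model.py listing further down this page: input_producer splits every datalist line on a single space and expects exactly two paths, so tf.convert_to_tensor needs a uniform [N, 2] list; any line with a different number of space-separated fields (for example, a path containing spaces) collapses the shape to [N]. A quick check, assuming the standard datalist.txt format:

# flag datalist.txt lines that do not split into exactly two paths
with open('datalist.txt') as f:
    for ln, line in enumerate(f.read().splitlines(), 1):
        fields = line.split(' ')
        if len(fields) != 2:
            print('line %d has %d fields (expected 2): %r' % (ln, len(fields), line))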

Can the model be tested on the cifar10 dataset?

When I test on the cifar10 dataset, I get the error: ValueError: Cannot feed value of shape (1, 720, 1280, 4) for Tensor u'Placeholder:0', which has shape '(1, 720, 1280, 3)'

Missing file

Why is this file missing? ./checkpoints\color\deblur.model-523000
Thanks.

Hello

I would like to ask what version the code you are going to release is.

The details of "--model=gray"

Hi, in the README text, you said "--model=gray: According to our further experiments after paper acceptance, we are able to get a slightly better model by tuning parameters, even without LSTM. This model should produce visually sharper and quantitatively better results.". Could you please tell me the details of this model? What's the difference from the accepted paper?

Model link is not found.

Hello, I found that the model link is broken; I am not sure if it is a problem with my network environment. Could you upload it again?

testing

How do I calculate the PSNR of the testing data? (A sketch follows below.)
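
Not an official script, but a minimal Python sketch using scikit-image (an assumption: deblurred and sharp images share filenames across two directories; the MATLAB script in the earlier "cannot reproduce" issue is another option):

import os
import numpy as np
import scipy.misc
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(deblur_dir, sharp_dir):
    psnrs, ssims = [], []
    for name in sorted(os.listdir(deblur_dir)):
        deblur = scipy.misc.imread(os.path.join(deblur_dir, name))
        sharp = scipy.misc.imread(os.path.join(sharp_dir, name))
        psnrs.append(peak_signal_noise_ratio(sharp, deblur, data_range=255))
        ssims.append(structural_similarity(sharp, deblur, multichannel=True))
    print('PSNR: %.2f, SSIM: %.4f over %d images' % (np.mean(psnrs), np.mean(ssims), len(psnrs)))

Calling evaluate('./testing_res', './sharp') would then print the averages (both paths are placeholders).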

BasicConvLSTMCell without sequence marker!

Hi, @jiangsutx.
Other ConvLSTM implementations, like here, include a sequence-marker input that is used to decide whether or not to discard the state (both c and h). However, I do not find this in BasicConvLSTMCell.
I also noticed that you may have implemented the ConvLSTM using this line. So I wonder: does this line run only once in the whole training phase, or does it execute synchronously with this loop?
Thanks a lot!

Question about testing grayscale images! Could you please help me check whether my modified code is correct?

Thank you for your code!
My question is: could you please help me check whether my modified code is correct?
I modified the "test" function in model.py to test grayscale images. The code runs after the modification, but I am not sure whether the details are correct... The modified parts of the code are marked with "Here!".
Looking forward to your reply!

    def test(self, height, width, input_path, output_path):
        if not os.path.exists(output_path):
            os.makedirs(output_path)
        imgsName = sorted(os.listdir(input_path))

        H, W = height, width
        inp_chns = 3 if self.args.model == 'color' else 1
        self.batch_size = 1                                                    #Here!
        inputs = tf.placeholder(shape=[self.batch_size, H, W, inp_chns], dtype=tf.float32)
        outputs = self.generator(inputs, reuse=False)

        sess = tf.Session(config=tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True)))

        self.saver = tf.train.Saver()
        self.load(sess, self.train_dir, step=3000)

        for imgName in imgsName:
            blur = scipy.misc.imread(os.path.join(input_path, imgName))
            blur = np.expand_dims(blur, 2)                           #Here!
            h, w, c = blur.shape
            # make sure the width is larger than the height
            rot = False
            if h > w:
                blur = np.transpose(blur, [1, 0, 2])
                rot = True
            h = int(blur.shape[0])
            w = int(blur.shape[1])
            resize = False
            if h > H or w > W:
                scale = min(1.0 * H / h, 1.0 * W / w)
                new_h = int(h * scale)
                new_w = int(w * scale)
                blur = scipy.misc.imresize(blur, [new_h, new_w], 'bicubic')
                resize = True
                blurPad = np.pad(blur, ((0, H - new_h), (0, W - new_w), (0, 0)), 'edge')
            else:
                blurPad = np.pad(blur, ((0, H - h), (0, W - w), (0, 0)), 'edge')
            blurPad = np.expand_dims(blurPad, 0)
            if self.args.model != 'color':
                blurPad = np.transpose(blurPad, (3, 1, 2, 0))

            start = time.time()
            deblur = sess.run(outputs, feed_dict={inputs: blurPad / 255.0})
            duration = time.time() - start
            print('Saving results: %s ... %4.3fs' % (os.path.join(output_path, imgName), duration))
            res = deblur[-1]
            if self.args.model != 'color':
                res = np.transpose(res, (3, 1, 2, 0))
            res = im2uint8(res[0, :, :,0])                            #Here!
            # crop the image into original size
            if resize:
                res = res[:new_h, :new_w]                               #Here!
                res = scipy.misc.imresize(res, [h, w], 'bicubic')
            else:
                res = res[:h, :w]                                                 #Here!
            if rot:
                res = np.transpose(res, [1, 0])                           #Here!
            scipy.misc.imsave(os.path.join(output_path, imgName), res)
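
Not a verdict on the code above, but for comparison: the repository's own model.py listing later on this page reshapes only when --model=gray. A channel-safe variant of the load step (a sketch, untested) that avoids expand_dims failing on images that already have a channel axis:

blur = scipy.misc.imread(os.path.join(input_path, imgName))
if blur.ndim == 2:
    blur = blur[..., np.newaxis]          # true grayscale file: add the channel axis
elif self.args.model != 'color' and blur.shape[-1] == 3:
    # RGB file fed to a gray model: collapse to one channel first
    blur = blur.mean(axis=-1, keepdims=True).astype(blur.dtype)
h, w, c = blur.shape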

Can I use the previous checkpoint to train the modified model ?

I saw your paper at CVPR ~ the results are really good!!
Thanks for sharing your model :)

I want to modify something in your model;
can I use the previous checkpoint to retrain it?

Will anything go wrong, or is there something I need to watch out for?
I ask because I read this issue about retraining using a previous checkpoint.
I got "KeyError: "The name 'input/decode_image/cond_jpeg/is_png' refers to an Operation not in the graph."", but I don't know why :(

Can the model apply to the focal-blurred image?

Hi, jiangsutx:
I know the model is trained on a motion-blurred image database, and it repairs such images well.
I input some focal-blurred images, and the output images only improved a little in SSIM/GMG/PSNR (some even got worse).
Maybe I need to retrain the model using focal-blurred/clear photograph pairs?
Looking forward to your suggestions. Thanks.

About the model size and being unable to continue training from the provided model

Hello, thank you very much for open-sourcing this project. After training the model the way you describe, the model I obtained is 82.6 MB, but the pretrained model you provide is 27.5 MB, and I cannot load your pretrained model to continue training. Could you please help me find where the problem lies? Many thanks!

Does it work for Motion Deblur?

I have been training this model on my own dataset of blurred images and have obtained some results, but I am not able to deblur images with motion blur. Also, is there any way to generate a dataset of motion-blurred images? (One common approach is sketched below.)
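
Not specific to this repository, but two common ways to synthesize motion blur are averaging consecutive sharp video frames (how the GoPro dataset was built) and convolving a sharp image with a linear motion kernel. A minimal NumPy/SciPy sketch of the latter (all names here are hypothetical):

import numpy as np
from scipy import ndimage

def linear_motion_kernel(length=15, angle_deg=30.0):
    # a horizontal line of ones, rotated to the desired blur direction
    k = np.zeros((length, length), dtype=np.float32)
    k[length // 2, :] = 1.0
    k = ndimage.rotate(k, angle_deg, reshape=False, order=1)
    return k / k.sum()

def motion_blur(img, length=15, angle_deg=30.0):
    # convolve each channel with the same kernel
    k = linear_motion_kernel(length, angle_deg)
    chans = [ndimage.convolve(img[..., c].astype(np.float32), k, mode='reflect')
             for c in range(img.shape[-1])]
    return np.clip(np.stack(chans, axis=-1), 0, 255).astype(np.uint8)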

util package problem

from util.util import *
How do I install this module? I could not find a solution online. Thanks.

NameError: global name 'ResnetBlock' is not defined

Hello, when I run it I get the error "NameError: global name 'ResnetBlock' is not defined". Where could the problem be? Thanks.
Traceback (most recent call last):
File "run_model.py", line 53, in
tf.app.run()
File "/Users/jockey/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "run_model.py", line 45, in main
deblur.test(args.height, args.width, args.input_path, args.output_path)
File "/Users/jockey/tensorflow/SRN-Deblur-master/models/model.py", line 271, in test
outputs = self.generator(inputs, reuse=False)
File "/Users/jockey/tensorflow/SRN-Deblur-master/models/model.py", line 93, in generator
conv1_2 = ResnetBlock(conv1_1, 32, 5, scope='enc1_2')
NameError: global name 'ResnetBlock' is not defined
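
ResnetBlock comes from util/util.py (imported via from util.util import * in the model.py listing below), so this error usually means that import failed. For reference, a sketch consistent with the variable names printed elsewhere on this page (conv1/conv2 under each block scope) — not necessarily the repository's exact code:

import tensorflow as tf
import tensorflow.contrib.slim as slim

def ResnetBlock(x, dim, ksize, scope='rb'):
    # two same-width convolutions with an identity skip connection
    with tf.variable_scope(scope):
        net = slim.conv2d(x, dim, [ksize, ksize], scope='conv1')
        net = slim.conv2d(net, dim, [ksize, ksize], activation_fn=None, scope='conv2')
        return net + x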

How to train gray images by one channel?

I'm sorry for asking this stupid question; I have tried for many days but cannot figure it out. There is no one in my laboratory I can ask, so I hope someone here can help me. Thank you!

model.py

from __future__ import print_function
import os
import time
import random
import datetime
import scipy.misc
import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim
from datetime import datetime
from util.util import *
from util.BasicConvLSTMCell import *
import sys
reload(sys)  # Python 2 only: reload restores sys.setdefaultencoding
sys.setdefaultencoding('utf8')


class DEBLUR(object):
    def __init__(self, args):
        self.args = args
        self.n_levels = 3 
        self.scale = 0.5 
        self.chns = 1  # single channel: train on grayscale images

        # if args.phase == 'train':
        self.crop_size = 128 
        self.data_list = open(args.datalist, 'rt').read().splitlines()
        self.data_list = list(map(lambda x: x.split(' '), self.data_list))
        random.shuffle(self.data_list)
        self.train_dir = os.path.join('./checkpoints', args.model)
        if not os.path.exists(self.train_dir):
            os.makedirs(self.train_dir)

        self.batch_size = args.batch_size
        self.epoch = args.epoch
        self.data_size = (len(self.data_list)) // self.batch_size
        self.max_steps = int(self.epoch * self.data_size)
        self.learning_rate = args.learning_rate

    def input_producer(self, batch_size=10):
        def read_data():
            img_a = tf.image.decode_image(tf.read_file(tf.string_join(['./training_set/', self.data_queue[0]])),
                                          channels=0)  # channels=0 infers the count; 1-channel files trip the decoder (see the error log below)
            img_b = tf.image.decode_image(tf.read_file(tf.string_join(['./training_set/', self.data_queue[1]])),
                                          channels=0)
            img_a, img_b = preprocessing([img_a, img_b])
            return img_a, img_b

        def preprocessing(imgs):
            imgs = [tf.cast(img, tf.float32) / 255.0 for img in imgs]
            if self.args.model != 'color':
                imgs = [tf.image.rgb_to_grayscale(img) for img in imgs]
            img_crop = tf.unstack(tf.random_crop(tf.stack(imgs, axis=0), [2, self.crop_size, self.crop_size, self.chns]),
                                  axis=0)
            return img_crop

        with tf.variable_scope('input'):
            List_all = tf.convert_to_tensor(self.data_list, dtype=tf.string)
            gt_list = List_all[:, 0]
            in_list = List_all[:, 1]

            self.data_queue = tf.train.slice_input_producer([in_list, gt_list], capacity=20)
            image_in, image_gt = read_data()
            batch_in, batch_gt = tf.train.batch([image_in, image_gt], batch_size=batch_size, num_threads=8, capacity=20)

        return batch_in, batch_gt

    def generator(self, inputs, reuse=False, scope='g_net'):
        n, h, w, c = inputs.get_shape().as_list()

        if self.args.model == 'lstm':
            with tf.variable_scope('LSTM'):
                cell = BasicConvLSTMCell([h / 4, w / 4], [3, 3], 128)
                rnn_state = cell.zero_state(batch_size=self.batch_size, dtype=tf.float32)

        x_unwrap = []
        with tf.variable_scope(scope, reuse=reuse):
            with slim.arg_scope([slim.conv2d, slim.conv2d_transpose],
                                activation_fn=tf.nn.relu, padding='SAME', normalizer_fn=None,
                                weights_initializer=tf.contrib.layers.xavier_initializer(uniform=True),
                                biases_initializer=tf.constant_initializer(0.0)):

                inp_pred = inputs
                for i in xrange(self.n_levels):
                    scale = self.scale ** (self.n_levels - i - 1)
                    hi = int(round(h * scale))
                    wi = int(round(w * scale))
                    inp_blur = tf.image.resize_images(inputs, [hi, wi], method=0)
                    inp_pred = tf.stop_gradient(tf.image.resize_images(inp_pred, [hi, wi], method=0))
                    inp_all = tf.concat([inp_blur, inp_pred], axis=3, name='inp')
                    if self.args.model == 'lstm':
                        rnn_state = tf.image.resize_images(rnn_state, [hi // 4, wi // 4], method=0)

                    # encoder
                    conv1_1 = slim.conv2d(inp_all, 32, [5, 5], scope='enc1_1')
                    conv1_2 = ResnetBlock(conv1_1, 32, 5, scope='enc1_2')
                    conv1_3 = ResnetBlock(conv1_2, 32, 5, scope='enc1_3')
                    conv1_4 = ResnetBlock(conv1_3, 32, 5, scope='enc1_4')
                    conv2_1 = slim.conv2d(conv1_4, 64, [5, 5], stride=2, scope='enc2_1')
                    conv2_2 = ResnetBlock(conv2_1, 64, 5, scope='enc2_2')
                    conv2_3 = ResnetBlock(conv2_2, 64, 5, scope='enc2_3')
                    conv2_4 = ResnetBlock(conv2_3, 64, 5, scope='enc2_4')
                    conv3_1 = slim.conv2d(conv2_4, 128, [5, 5], stride=2, scope='enc3_1')
                    conv3_2 = ResnetBlock(conv3_1, 128, 5, scope='enc3_2')
                    conv3_3 = ResnetBlock(conv3_2, 128, 5, scope='enc3_3')
                    conv3_4 = ResnetBlock(conv3_3, 128, 5, scope='enc3_4')

                    if self.args.model == 'lstm':
                        deconv3_4, rnn_state = cell(conv3_4, rnn_state)
                    else:
                        deconv3_4 = conv3_4

                    # decoder
                    deconv3_3 = ResnetBlock(deconv3_4, 128, 5, scope='dec3_3')
                    deconv3_2 = ResnetBlock(deconv3_3, 128, 5, scope='dec3_2')
                    deconv3_1 = ResnetBlock(deconv3_2, 128, 5, scope='dec3_1')
                    deconv2_4 = slim.conv2d_transpose(deconv3_1, 64, [4, 4], stride=2, scope='dec2_4')
                    cat2 = deconv2_4 + conv2_4
                    deconv2_3 = ResnetBlock(cat2, 64, 5, scope='dec2_3')
                    deconv2_2 = ResnetBlock(deconv2_3, 64, 5, scope='dec2_2')
                    deconv2_1 = ResnetBlock(deconv2_2, 64, 5, scope='dec2_1')
                    deconv1_4 = slim.conv2d_transpose(deconv2_1, 32, [4, 4], stride=2, scope='dec1_4')
                    cat1 = deconv1_4 + conv1_4
                    deconv1_3 = ResnetBlock(cat1, 32, 5, scope='dec1_3')
                    deconv1_2 = ResnetBlock(deconv1_3, 32, 5, scope='dec1_2')
                    deconv1_1 = ResnetBlock(deconv1_2, 32, 5, scope='dec1_1')
                    inp_pred = slim.conv2d(deconv1_1, self.chns, [5, 5], activation_fn=None, scope='dec1_0')

                    if i >= 0:  # always true: keep every scale's prediction for the multi-scale loss
                        x_unwrap.append(inp_pred)
                    if i == 0:
                        tf.get_variable_scope().reuse_variables()  # share weights across scales

            return x_unwrap

    def build_model(self):
        img_in, img_gt = self.input_producer(self.batch_size)

        tf.summary.image('img_in', im2uint8(img_in))
        tf.summary.image('img_gt', im2uint8(img_gt))
        print('img_in, img_gt', img_in.get_shape(), img_gt.get_shape())

        # generator
        x_unwrap = self.generator(img_in, reuse=False, scope='g_net')
        # calculate multi-scale loss
        self.loss_total = 0
        for i in xrange(self.n_levels):
            _, hi, wi, _ = x_unwrap[i].get_shape().as_list()
            gt_i = tf.image.resize_images(img_gt, [hi, wi], method=0)
            loss = tf.reduce_mean((gt_i - x_unwrap[i]) ** 2)
            self.loss_total += loss

            tf.summary.image('out_' + str(i), im2uint8(x_unwrap[i]))
            tf.summary.scalar('loss_' + str(i), loss)

        # losses
        tf.summary.scalar('loss_total', self.loss_total)

        # training vars
        all_vars = tf.trainable_variables()
        self.all_vars = all_vars
        self.g_vars = [var for var in all_vars if 'g_net' in var.name]
        self.lstm_vars = [var for var in all_vars if 'LSTM' in var.name]
        for var in all_vars:
            print(var.name)

    def train(self):
        def get_optimizer(loss, global_step=None, var_list=None, is_gradient_clip=False):
            train_op = tf.train.AdamOptimizer(self.lr)
            if is_gradient_clip:
                grads_and_vars = train_op.compute_gradients(loss, var_list=var_list)
                unchanged_gvs = [(grad, var) for grad, var in grads_and_vars if not 'LSTM' in var.name]
                rnn_grad = [grad for grad, var in grads_and_vars if 'LSTM' in var.name]
                rnn_var = [var for grad, var in grads_and_vars if 'LSTM' in var.name]
                capped_grad, _ = tf.clip_by_global_norm(rnn_grad, clip_norm=3)
                capped_gvs = list(zip(capped_grad, rnn_var))
                train_op = train_op.apply_gradients(grads_and_vars=capped_gvs + unchanged_gvs, global_step=global_step)
            else:
                train_op = train_op.minimize(loss, global_step, var_list)
            return train_op

        global_step = tf.Variable(initial_value=0, dtype=tf.int32, trainable=False)
        self.global_step = global_step

        # build model
        self.build_model() # TODO 

        # learning rate decay
        self.lr = tf.train.polynomial_decay(self.learning_rate, global_step, self.max_steps, end_learning_rate=0.0,
                                            power=0.3)
        tf.summary.scalar('learning_rate', self.lr)

        # training operators
        train_gnet = get_optimizer(self.loss_total, global_step, self.all_vars)

        # session and thread
        gpu_options = tf.GPUOptions(allow_growth=True)
        sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
        self.sess = sess
        sess.run(tf.global_variables_initializer())
        self.saver = tf.train.Saver(max_to_keep=50, keep_checkpoint_every_n_hours=1)
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)

        # training summary
        summary_op = tf.summary.merge_all()
        summary_writer = tf.summary.FileWriter(self.train_dir, sess.graph, flush_secs=30)

        for step in xrange(sess.run(global_step), self.max_steps + 1):

            start_time = time.time()

            # update G network
            _, loss_total_val = sess.run([train_gnet, self.loss_total])

            duration = time.time() - start_time
            # print loss_value
            assert not np.isnan(loss_total_val), 'Model diverged with loss = NaN'

            if step % 5 == 0:
                num_examples_per_step = self.batch_size
                examples_per_sec = num_examples_per_step / duration
                sec_per_batch = float(duration)

                format_str = ('%s: step %d, loss = (%.5f; %.5f, %.5f)(%.1f data/s; %.3f s/bch)')
                print(format_str % (datetime.now().strftime('%Y-%m-%d %H:%M:%S'), step, loss_total_val, 0.0,
                                    0.0, examples_per_sec, sec_per_batch))

            if step % 20 == 0:
                # summary_str = sess.run(summary_op, feed_dict={inputs:batch_input, gt:batch_gt})
                summary_str = sess.run(summary_op)
                summary_writer.add_summary(summary_str, global_step=step)

            # Save the model checkpoint periodically.
            if step % 1000 == 0 or step == self.max_steps:
                checkpoint_path = os.path.join(self.train_dir, 'checkpoints')
                self.save(sess, checkpoint_path, step)

    def save(self, sess, checkpoint_dir, step):
        model_name = "deblur.model"
        if not os.path.exists(checkpoint_dir):
            os.makedirs(checkpoint_dir)
        self.saver.save(sess, os.path.join(checkpoint_dir, model_name), global_step=step)

    def load(self, sess, checkpoint_dir, step=None):
        print(" [*] Reading checkpoints...")
        model_name = "deblur.model"
        ckpt = tf.train.get_checkpoint_state(checkpoint_dir)

        if step is not None:
            ckpt_name = model_name + '-' + str(step)
            self.saver.restore(sess, os.path.join(checkpoint_dir, ckpt_name))
            print(" [*] Reading intermediate checkpoints... Success")
            return str(step)
        elif ckpt and ckpt.model_checkpoint_path:
            ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
            ckpt_iter = ckpt_name.split('-')[1]
            self.saver.restore(sess, os.path.join(checkpoint_dir, ckpt_name))
            print(" [*] Reading updated checkpoints... Success")
            return ckpt_iter
        else:
            print(" [*] Reading checkpoints... ERROR")
            return False

    def test(self, height, width, input_path, output_path):
        if not os.path.exists(output_path):
            os.makedirs(output_path)
        imgsName = sorted(os.listdir(input_path))

        H, W = height, width
        inp_chns = 3 if self.args.model == 'color' else 1
        #self.batch_size = 1 if self.args.model == 'color' else 3
        self.batch_size = 1
        inputs = tf.placeholder(shape=[self.batch_size, H, W, inp_chns], dtype=tf.float32)
        outputs = self.generator(inputs, reuse=False)

        sess = tf.Session(config=tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True)))

        self.saver = tf.train.Saver()
        self.load(sess, self.train_dir, step=523000)

        for imgName in imgsName:
            blur = scipy.misc.imread(os.path.join(input_path, imgName))
            if self.args.model == 'gray': # gai
                blur = blur.reshape(blur.shape[0], blur.shape[1], 1)
            h, w, c = blur.shape
            # make sure the width is larger than the height
            rot = False
            if h > w:
                blur = np.transpose(blur, [1, 0, 2])
                rot = True
            h = int(blur.shape[0])
            w = int(blur.shape[1])
            resize = False
            if h > H or w > W:
                scale = min(1.0 * H / h, 1.0 * W / w)
                new_h = int(h * scale)
                new_w = int(w * scale)
                blur = scipy.misc.imresize(blur, [new_h, new_w], 'bicubic')
                resize = True
                blurPad = np.pad(blur, ((0, H - new_h), (0, W - new_w), (0, 0)), 'edge')
            else:
                blurPad = np.pad(blur, ((0, H - h), (0, W - w), (0, 0)), 'edge')
            blurPad = np.expand_dims(blurPad, 0)
            if self.args.model != 'color':
                blurPad = np.transpose(blurPad, (3, 1, 2, 0))

            start = time.time()
            deblur = sess.run(outputs, feed_dict={inputs: blurPad / 255.0})
            duration = time.time() - start
            print('Saving results: %s ... %4.3fs' % (os.path.join(output_path, imgName), duration))
            res = deblur[-1]
            if self.args.model != 'color':
                res = np.transpose(res, (3, 1, 2, 0))
            res = im2uint8(res[0, :, :, :])
            # crop the image into original size
            if resize:
                res = res[:new_h, :new_w, :]
                res = scipy.misc.imresize(res, [h, w], 'bicubic')
            else:
                res = res[:h, :w, :]

            if rot:
                res = np.transpose(res, [1, 0, 2])
                if self.args.model == 'gray': # gai
                    res = res.reshape(res.shape[0], res.shape[1])
            imgName = imgName.replace('blur', 'deblur')
            scipy.misc.imsave(os.path.join(output_path, imgName), res)

error information

/home/faelan/anaconda3/envs/py27/bin/python /media/faelan/WestData/code/Deblur/SRN-Deblur/run_model.py
img_in, img_gt (1, 128, 128, 1) (1, 128, 128, 1)
g_net/enc1_1/weights:0
g_net/enc1_1/biases:0
g_net/enc1_2/conv1/weights:0
g_net/enc1_2/conv1/biases:0
g_net/enc1_2/conv2/weights:0
g_net/enc1_2/conv2/biases:0
g_net/enc1_3/conv1/weights:0
g_net/enc1_3/conv1/biases:0
g_net/enc1_3/conv2/weights:0
g_net/enc1_3/conv2/biases:0
g_net/enc1_4/conv1/weights:0
g_net/enc1_4/conv1/biases:0
g_net/enc1_4/conv2/weights:0
g_net/enc1_4/conv2/biases:0
g_net/enc2_1/weights:0
g_net/enc2_1/biases:0
g_net/enc2_2/conv1/weights:0
g_net/enc2_2/conv1/biases:0
g_net/enc2_2/conv2/weights:0
g_net/enc2_2/conv2/biases:0
g_net/enc2_3/conv1/weights:0
g_net/enc2_3/conv1/biases:0
g_net/enc2_3/conv2/weights:0
g_net/enc2_3/conv2/biases:0
g_net/enc2_4/conv1/weights:0
g_net/enc2_4/conv1/biases:0
g_net/enc2_4/conv2/weights:0
g_net/enc2_4/conv2/biases:0
g_net/enc3_1/weights:0
g_net/enc3_1/biases:0
g_net/enc3_2/conv1/weights:0
g_net/enc3_2/conv1/biases:0
g_net/enc3_2/conv2/weights:0
g_net/enc3_2/conv2/biases:0
g_net/enc3_3/conv1/weights:0
g_net/enc3_3/conv1/biases:0
g_net/enc3_3/conv2/weights:0
g_net/enc3_3/conv2/biases:0
g_net/enc3_4/conv1/weights:0
g_net/enc3_4/conv1/biases:0
g_net/enc3_4/conv2/weights:0
g_net/enc3_4/conv2/biases:0
g_net/convLSTM/LSTM_conv/weights:0
g_net/convLSTM/LSTM_conv/biases:0
g_net/dec3_3/conv1/weights:0
g_net/dec3_3/conv1/biases:0
g_net/dec3_3/conv2/weights:0
g_net/dec3_3/conv2/biases:0
g_net/dec3_2/conv1/weights:0
g_net/dec3_2/conv1/biases:0
g_net/dec3_2/conv2/weights:0
g_net/dec3_2/conv2/biases:0
g_net/dec3_1/conv1/weights:0
g_net/dec3_1/conv1/biases:0
g_net/dec3_1/conv2/weights:0
g_net/dec3_1/conv2/biases:0
g_net/dec2_4/weights:0
g_net/dec2_4/biases:0
g_net/dec2_3/conv1/weights:0
g_net/dec2_3/conv1/biases:0
g_net/dec2_3/conv2/weights:0
g_net/dec2_3/conv2/biases:0
g_net/dec2_2/conv1/weights:0
g_net/dec2_2/conv1/biases:0
g_net/dec2_2/conv2/weights:0
g_net/dec2_2/conv2/biases:0
g_net/dec2_1/conv1/weights:0
g_net/dec2_1/conv1/biases:0
g_net/dec2_1/conv2/weights:0
g_net/dec2_1/conv2/biases:0
g_net/dec1_4/weights:0
g_net/dec1_4/biases:0
g_net/dec1_3/conv1/weights:0
g_net/dec1_3/conv1/biases:0
g_net/dec1_3/conv2/weights:0
g_net/dec1_3/conv2/biases:0
g_net/dec1_2/conv1/weights:0
g_net/dec1_2/conv1/biases:0
g_net/dec1_2/conv2/weights:0
g_net/dec1_2/conv2/biases:0
g_net/dec1_1/conv1/weights:0
g_net/dec1_1/conv1/biases:0
g_net/dec1_1/conv2/weights:0
g_net/dec1_1/conv2/biases:0
g_net/dec1_0/weights:0
g_net/dec1_0/biases:0
2019-11-15 18:50:58.905733: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-11-15 18:50:58.985701: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-15 18:50:58.986010: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.8095
pciBusID: 0000:0c:00.0
totalMemory: 5.93GiB freeMemory: 5.49GiB
2019-11-15 18:50:58.986029: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:0c:00.0, compute capability: 6.1)
2019-11-15 18:51:00.790961: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: Number of channels must be 3 or 4, was 1
         [[Node: input/decode_image/cond_jpeg/cond_png/cond_gif/DecodeBmp = DecodeBmp[channels=0, _device="/job:localhost/replica:0/task:0/device:CPU:0"](input/decode_image/cond_jpeg/cond_png/cond_gif/Substr/Switch, ^input/decode_image/cond_jpeg/cond_png/cond_gif/Assert_1/Assert, ^input/decode_image/cond_jpeg/cond_png/cond_gif/Assert_2/Assert)]]
(the same "Invalid argument: Number of channels must be 3 or 4, was 1" warning repeats many more times for input/decode_image and input/decode_image_1 across the input threads)
Traceback (most recent call last):
  File "/media/faelan/WestData/code/Deblur/SRN-Deblur/run_model.py", line 54, in <module>
    tf.app.run()
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/media/faelan/WestData/code/Deblur/SRN-Deblur/run_model.py", line 48, in main
    deblur.train()
  File "/media/faelan/WestData/code/Deblur/SRN-Deblur/models/model.py", line 214, in train
    _, loss_total_val = sess.run([train_gnet, self.loss_total])
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
    options, run_metadata)
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: FIFOQueue '_1_input/batch/fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
         [[Node: input/batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](input/batch/fifo_queue, input/batch/n)]]

Caused by op u'input/batch', defined at:
  File "/media/faelan/WestData/code/Deblur/SRN-Deblur/run_model.py", line 54, in <module>
    tf.app.run()
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/media/faelan/WestData/code/Deblur/SRN-Deblur/run_model.py", line 48, in main
    deblur.train()
  File "/media/faelan/WestData/code/Deblur/SRN-Deblur/models/model.py", line 186, in train
    self.build_model() # TODO
  File "/media/faelan/WestData/code/Deblur/SRN-Deblur/models/model.py", line 137, in build_model
    img_in, img_gt = self.input_producer(self.batch_size)
  File "/media/faelan/WestData/code/Deblur/SRN-Deblur/models/model.py", line 64, in input_producer
    batch_in, batch_gt = tf.train.batch([image_in, image_gt], batch_size=batch_size, num_threads=8, capacity=20)
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 927, in batch
    name=name)
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/training/input.py", line 722, in _batch
    dequeued = queue.dequeue_many(batch_size, name=name)
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/data_flow_ops.py", line 464, in dequeue_many
    self._queue_ref, n=n, component_types=self._dtypes, name=name)
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 2418, in _queue_dequeue_many_v2
    component_types=component_types, timeout_ms=timeout_ms, name=name)
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
    op_def=op_def)
  File "/home/faelan/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

OutOfRangeError (see above for traceback): FIFOQueue '_1_input/batch/fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
         [[Node: input/batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](input/batch/fifo_queue, input/batch/n)]]
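
The repeated "Number of channels must be 3 or 4" warnings point at tf.image.decode_image(..., channels=0) in read_data() (see the model.py listing above): with channels=0 the decoder infers the channel count, single-channel files trip the check, the input threads die, and the starved queue then raises the OutOfRangeError. One possible workaround, an untested sketch assuming the training images are PNGs: decode grayscale explicitly and skip the rgb_to_grayscale call in preprocessing():

# hypothetical variant of read_data() for 1-channel PNG training data
img_a = tf.image.decode_png(
    tf.read_file(tf.string_join(['./training_set/', self.data_queue[0]])), channels=1)
img_b = tf.image.decode_png(
    tf.read_file(tf.string_join(['./training_set/', self.data_queue[1]])), channels=1)
# note: preprocessing() must then NOT apply tf.image.rgb_to_grayscale,
# since the decoded tensors are already single-channel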

question about testing

I used my own test picture and the pretrained model provided by the author for testing, but it failed; could you help me look into this question?
(screenshot attached)

Weight Decay

Do you apply weight decay when training SRN?
I do not find weight decay in the code.
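
For reference (not an official answer): the train() code in the listing above builds a plain Adam optimizer on the multi-scale MSE loss, with gradient clipping only for the LSTM variables, and no L2 term appears, so weight decay seems not to be used. If one wanted to add it, a sketch (the coefficient is hypothetical):

# add an L2 penalty on conv weights to the existing loss before building the optimizer
l2 = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables() if 'weights' in v.name])
self.loss_total += 1e-5 * l2  # hypothetical weight-decay coefficient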
