deeply-recursive-cnn-tf's Issues

How many features did you use?

As you said, 'I use half num of features (128) to make training faster for those results below.' I am a little confused: how many features were used for the result images shown on the page, 64, 128, or 96? Thanks.

KeyError: verbosity

Duplicate flags error and KeyError: verbosity on Windows 10, running under Spyder.
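For what it's worth, this usually happens when the script is re-run in the same interpreter session, so every tf.app.flags.DEFINE_* call fires a second time. A hedged workaround sketch (not from this repo; restarting the Spyder console also works) that assumes an absl-backed FLAGS object (TensorFlow 1.5+), where "name in FLAGS" checks registration:

# Hedged workaround sketch: guard each DEFINE so re-running the script in the
# same Spyder console does not re-register flags.
import tensorflow as tf

flags = tf.app.flags
FLAGS = flags.FLAGS

def define_string_once(name, default, doc):
    # Only define the flag if it is not already registered.
    if name not in FLAGS:
        flags.DEFINE_string(name, default, doc)

define_string_once("dataset", "set5", "Test dataset name (illustrative flag).")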

Files not found during testing

When I test with a trained model, using the command python test.py --dataSet14 --inference_depth 9 --feature_num 96, I get the following error:

Features:96 Inference Depth:9 Initial LR:0.00100 [model_F96_D9_LR0.001000]
(3, 3, 1, 96)-864, (96,)-96, (3, 3, 96, 96)-82944, (96,)-96, (3, 3, 96, 96)-82944, (96,)-96, (3, 3, 96, 96)-82944, (96,)-96, (3, 3, 97, 1)-873, (1,)-1, (9,)-9,
Total 11 variables, 250,963 params
Model restored.
Traceback (most recent call last):
File "test.py", line 82, in
if name == 'main':
File "C:\Users\lenovo\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\platform\app.py", line 125, in run
_sys.exit(main(argv))
File "test.py", line 78, in main
model.init_all_variables(load_initial_data=FLAGS.load_model)
File "E:\程序\deeply-recursive-cnn-tf-master\super_resolution.py", line 446, in do_super_resolution
org_image = util.load_image(file_path)
File "E:\程序\deeply-recursive-cnn-tf-master\super_resolution_utilty.py", line 205, in load_image
raise LoadError("File not found [%s]" % filename)
super_resolution_utilty.LoadError: File not found []
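As a hedged debugging note (not a confirmed fix): the empty brackets in "File not found []" suggest the resolved file path was an empty string, i.e. the dataset argument was not parsed as intended; "--dataSet14" is not a standard "--flag value" pair, so it may have been silently ignored. A minimal sketch for checking what the flags actually parsed to:

# Hedged debugging sketch (not from the repo): print the parsed flag values
# before loading files, to confirm the dataset argument was recognized.
# flag_values_dict() exists on absl-backed FLAGS (TensorFlow 1.5+).
import tensorflow as tf

FLAGS = tf.app.flags.FLAGS

def main(argv):
    print(FLAGS.flag_values_dict())
    # ... rest of test.py ...

if __name__ == '__main__':
    tf.app.run()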

x2 scale my own photo

Hi @jiny2001, thank you for sharing your great work. I have some (1000 x 660) photos that I want to upscale by 2x (to 2000 x 1320). Could you please explain how to do that? When I try, it gives me an output photo with the same size as the input photo.

Where to download BSD 100 4x and Urban 100 4x

Thank you for your project. It seems that the BSD 100 4x link you provided is broken, and I cannot find a link for the Urban 100 4x dataset. Could you please upload them to Dropbox? Thanks.

About the GPU

Hi, thank you for your code! I want to run your code on a GPU, but I don't know how to add the code. Could you tell me where to add tf.device("/gpu:0")?
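For reference, a minimal sketch (assumptions: TensorFlow 1.x; the tensor names are illustrative, not from this repo) of pinning graph construction to the GPU. Ops created inside the device scope run on /gpu:0, and allow_soft_placement lets ops without a GPU kernel fall back to the CPU:

# Minimal sketch: wrap graph construction in a device scope (TF 1.x).
import tensorflow as tf

with tf.device("/gpu:0"):
    x = tf.placeholder(tf.float32, shape=[None, None, None, 1], name="x")
    w = tf.Variable(tf.truncated_normal([3, 3, 1, 96], stddev=0.01), name="w")
    h = tf.nn.relu(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME"))

# allow_soft_placement falls back to CPU for ops with no GPU kernel;
# log_device_placement prints where each op actually runs.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())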

Progress log of training with scale 4x?

Could you please show your progress log of training at scale 4x? I used my own custom-domain images for training and want to compare with your results. Thanks.

X4 model

Hello there! Which is your X4 model? I changed the scale to 4 and the result was very poor.

The value of PSNR

I ran the code you provided on my own computer, and the PSNR comes out at about 5 to 7, not the 30+ dB you showed. What could cause this?
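For reference, a minimal PSNR sketch (assuming 8-bit images; not from this repo). Benchmark PSNR for x2 super-resolution is typically in the 30s of dB, so values of 5-7 dB usually indicate a pipeline problem such as mismatched value ranges (0-1 vs 0-255):

# Minimal PSNR sketch: peak signal-to-noise ratio in dB for 8-bit images.
import numpy as np

def psnr(a, b, max_val=255.0):
    # Mean squared error over all pixels, computed in float64.
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)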

Why is there no ReLU in the inference network?

In the paper, the authors use conv+ReLU in the inference network:

The recurrence relation is H_d = g(H_{d-1}) = max(0, W * H_{d-1} + b). The inference net f_2 is equivalent to the composition of the same elementary function g: f_2(H) = (g ∘ g ∘ ··· ∘ g)(H) = g^D(H).

But in your code, you do not use ReLU in the inference network.
Your code (lines 200-201 in super_resolution.py):
for i in range(0, self.inference_depth):
      self.H_conv[i+1] = util.conv2d_with_bias(self.H_conv[i], self.W_conv, 1, self.B_conv, name="H%d"%(i+1))

I think it needs to be changed like this:

for i in range(0, self.inference_depth):
      self.H_conv[i+1] = util.conv2d_with_bias(self.H_conv[i], self.W_conv, 1, self.B_conv, add_relu=True, name="H%d"%(i+1))

Is this a mistake? Is this the reason you can't achieve the performance the authors report?
Looking forward to your reply.

Is it recursive?

In the paper, the inference network contains only a single conv layer, applied in a loop D times. This implementation seems to construct D conv layers with the same W and B. Thus this version cannot achieve the reduction in parameters and memory, since there is no recursion, right?
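For reference, a minimal sketch (plain TF 1.x; names are illustrative, not the repo's) showing that reusing one tf.Variable across a Python loop unrolls the graph D times while sharing a single weight tensor, so the parameter count does not grow with D (only activation memory does):

# Minimal sketch: D conv ops in the unrolled graph, one shared set of weights.
import tensorflow as tf

D = 9
x = tf.placeholder(tf.float32, [None, 41, 41, 96])
W = tf.Variable(tf.truncated_normal([3, 3, 96, 96], stddev=0.01), name="W_shared")
b = tf.Variable(tf.zeros([96]), name="b_shared")

h = x
for i in range(D):
    # The same W and b are reused at every step, mirroring the recursion.
    h = tf.nn.relu(tf.nn.bias_add(
        tf.nn.conv2d(h, W, strides=[1, 1, 1, 1], padding="SAME"), b))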

About Skip connection?

Hi, Jin:
I am sorry to bother you again, but I have run into a question recently.

In your super_resolution.py file, lines 227-230:
Your code:

for i in range(0, self.inference_depth + 1):
	self.Y1_conv[i] = util.conv2d_with_bias(self.H_conv[i], self.WD1_conv, self.cnn_stride, self.BD1_conv,
											add_relu=not self.residual, name="Y%d_1" % i)
	self.Y2_conv[i] = util.conv2d_with_bias(self.Y1_conv[i], self.WD2_conv, self.cnn_stride, self.BD2_conv,
											add_relu=not self.residual, name="Y%d_2" % i)

However, self.H_conv[0] is in the embedding network, and from the authors' paper, Figure 3(c), the skip connections should start from self.H_conv[1].

And in lines 263-271:
Your code:

for i in range(0, self.inference_depth):
	if self.residual:
		self.Y2_conv[i] = self.Y2_conv[i] + self.x
	inference_sub = tf.subtract(self.y, self.Y2_conv[i], name="Loss1_%d_sub" % i)
	inference_square = tf.square(inference_sub, name="Loss1_%d_squ" % i)
	loss1_mse[i] = tf.reduce_mean(inference_square, name="Loss1_%d" % i)

loss1 = loss1_mse[0]
for i in range(1, self.inference_depth):
	if i == self.inference_depth:
		loss1 = tf.add(loss1, loss1_mse[i], name="loss1")
	else:
		loss1 = tf.add(loss1, loss1_mse[i], name="loss1_%d_add" % i)

This only calculates the loss from H[0] to H[self.inference_depth-1].
In fact, the loss should be calculated from H[1] to H[self.inference_depth].

I think it should be changed like this, for lines 220-230:

self.Y1_conv = self.inference_depth * [None]
self.Y2_conv = self.inference_depth * [None]
self.W = tf.Variable(
	np.full(fill_value=1.0 / (self.inference_depth), shape=[self.inference_depth], dtype=np.float32),name="layer_weight")
W_sum = tf.reduce_sum(self.W)

for i in range(0, self.inference_depth):
	self.Y1_conv[i] = util.conv2d_with_bias(self.H_conv[i+1], self.WD1_conv, self.cnn_stride, self.BD1_conv,
											add_relu=True, name="Y%d_1" % i)
	self.Y2_conv[i] = util.conv2d_with_bias(self.Y1_conv[i], self.WD2_conv, self.cnn_stride, self.BD2_conv,
											add_relu=not self.residual, name="Y%d_2" % i)

And lines 263-271 do not need to change.

I got these results (Set91 aug, x4, with residual learning):
Set5: 37.04
Set14: 32.57
Urban100: 29.58
BSD: 31.41

Convergence is also faster; it took only 2.5 hours to get these results on an NVIDIA 1080 GPU.

I cannot see where you apply transposed convolution

Hi,

If I understand the paper correctly, we input a small image and output a large image with good resolution.

I read through your code, and I don't know where you apply the transposed convolution (upscaling) to get larger outputs.

From the paper it should be in the reconstruction layer, but I just cannot find it in your code.

Can you explain a bit about this point?
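As a hedged note (an assumption about the pipeline, consistent with DRCN-style methods, not a confirmed description of this repo): the low-resolution input is typically bicubic-upscaled to the target size before entering the network, so the CNN itself is size-preserving and no transposed convolution is needed. A minimal sketch:

# Minimal sketch: bicubic pre-upscaling, so the network sees a full-size
# (but blurry) image and only needs size-preserving convolutions.
from PIL import Image

def bicubic_upscale(path, scale=2):
    img = Image.open(path)
    w, h = img.size
    return img.resize((w * scale, h * scale), Image.BICUBIC)

# The upscaled image is then refined by the embedding / inference /
# reconstruction networks, all of which keep the spatial size unchanged.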
