
srntt's Issues

map_123

Hello, thank you for open-sourcing the code. I am very interested in your research, but I hit an error during training and traced it to a missing map_123 file. How can I obtain the "map_123" file? Could you share a download link with me?
Looking forward to your reply!

About SRNTT-l2 with ref image

Hi, I know from the paper that SRNTT-l2 (SISR) is a simplified version of SRNTT trained by minimizing only the MSE, but I am confused about SRNTT-l2 with a reference image. Is it also trained with MSE only, or does it just remove the adversarial loss?
Hoping for your reply! Thank you!

help with testing

I want to test some LR images using the pre-trained model, with --ref_dir set to None. In that case the program downsamples the input image to 1/4 of its size and uses the input image itself as the reference image. When I compare the HR output and the input image, they look very similar.
My question is: how can I run testing without a reference image, instead of using the downsampled input as the reference?

Help with training

I am trying to train a model using around 12,000 128x128 ref/input image pairs (extracted from the 8 border 128x128 regions of 512x512 images):

ref example: [image]
corresponding input example: [image]

My aim is to deblur images like this: [image]

So they are closer to this: [image]

I have run offline_patchMatch_textureSwap.py on my data, which generated 329 GB of feature maps, then used this command to start training:

python main.py --is_train True --save_dir SRNTT --input_dir data/train/CUFED/input --ref_dir data/train/CUFED/ref --map_dir data/train/CUFED/map_321 --batch_size 9 --num_epochs 100 --input_size 32

This results in the following:

X:\python\DF\srntt\SRNTT-master>python main.py --is_train True --save_dir SRNTT --input_dir data/train/CUFED/input --ref_dir data/train/CUFED/ref --map_dir data/train/CUFED/map_321 --batch_size 9 --num_epochs 100 --input_size 32
12443 12443 12443
2019-03-20 16:01:54,106 root  INFO     Building graph ...
Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1628, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 1 in both shapes must be equal, but are 32 and 40. Shapes are [9,32,32] and [9,40,40]. for 'texture_transfer/concatenation1' (op: 'ConcatV2') with input shapes: [9,32,32,64], [9,40,40,256], [] and with computed input tensors: input[2] = <-1>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "X:\python\DF\srntt\SRNTT-master\SRNTT\tensorlayer\layers.py", line 5172, in __init__
    self.outputs = tf.concat(self.inputs, concat_dim, name=name)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1124, in concat
    return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 1202, in concat_v2
    "ConcatV2", values=values, axis=axis, name=name)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
    op_def=op_def)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1792, in __init__
    control_input_ops)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1631, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimension 1 in both shapes must be equal, but are 32 and 40. Shapes are [9,32,32] and [9,40,40]. for 'texture_transfer/concatenation1' (op: 'ConcatV2') with input shapes: [9,32,32,64], [9,40,40,256], [] and with computed input tensors: input[2] = <-1>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1628, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 1 in both shapes must be equal, but are 32 and 40. Shapes are [9,32,32,64] and [9,40,40,256].
        From merging shape 0 with other shapes. for 'texture_transfer/concatenation1_1/concat_dim' (op: 'Pack') with input shapes: [9,32,32,64], [9,40,40,256].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 95, in <module>
    use_lower_layers_in_per_loss=args.use_lower_layers_in_per_loss
  File "X:\python\DF\srntt\SRNTT-master\SRNTT\model.py", line 359, in train
    self.net_upscale, self.net_srntt = self.model(self.input, self.maps)
  File "X:\python\DF\srntt\SRNTT-master\SRNTT\model.py", line 131, in model
    net = ConcatLayer(layer=[map_in, map_ref], concat_dim=-1, name='concatenation1')
  File "X:\python\DF\srntt\SRNTT-master\SRNTT\tensorlayer\layers.py", line 5174, in __init__
    self.outputs = tf.concat(concat_dim, self.inputs, name=name)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1121, in concat
    dtype=dtypes.int32).get_shape().assert_is_compatible_with(
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1050, in convert_to_tensor
    as_ref=False)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1146, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 971, in _autopacking_conversion_function
    return _autopacking_helper(v, dtype, name or "packed")
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 923, in _autopacking_helper
    return gen_array_ops.pack(elems_as_tensors, name=scope)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 5857, in pack
    "Pack", values=values, axis=axis, name=name)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
    op_def=op_def)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1792, in __init__
    control_input_ops)
  File "C:\Users\User\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1631, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimension 1 in both shapes must be equal, but are 32 and 40. Shapes are [9,32,32,64] and [9,40,40,256].
        From merging shape 0 with other shapes. for 'texture_transfer/concatenation1_1/concat_dim' (op: 'Pack') with input shapes: [9,32,32,64], [9,40,40,256].

How should I run training in this case, where I'm using 128x128 images? Or should I regenerate the training data as 160x160 images, the same size as the CUFED dataset?
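
For reference, the concat error above says the network's input-branch features are 32x32 (matching --input_size 32) while the loaded swapped map is 40x40, the size a 160x160 CUFED patch produces at relu3_1, so the maps and --input_size disagree. A quick sanity check of the generated maps, assuming they are the .npz files written by offline_patchMatch_textureSwap.py into map_321 (array key names may differ, so every array's shape is printed):

# Sanity-check the saved swapped maps against the --input_size passed to main.py.
# Assumption: maps are .npz files written by offline_patchMatch_textureSwap.py.
import glob
import numpy as np

input_size = 32  # the value given to --input_size

for path in sorted(glob.glob('data/train/CUFED/map_321/*.npz'))[:3]:
    data = np.load(path, allow_pickle=True)
    for key in data.files:
        print(path, key, np.asarray(data[key]).shape)
    # The relu3_1-level map should be input_size x input_size (32x32 here);
    # a 40x40 map indicates it was generated for 160x160 HR patches.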

ValueError: Object arrays cannot be loaded when allow_pickle=False

Traceback (most recent call last):
  File "main.py", line 130, in <module>
    is_original_image=args.is_original_image
  File "/root/SRNTT/SRNTT/model.py", line 925, in test
    network=self.net_upscale) is False:
  File "/root/SRNTT/SRNTT/tensorlayer/files.py", line 722, in load_and_assign_npz
    params = load_npz(name=name)
  File "/root/SRNTT/SRNTT/tensorlayer/files.py", line 634, in load_npz
    return d['params']
  File "/root/anaconda3/lib/python3.7/site-packages/numpy/lib/npyio.py", line 262, in __getitem__
    pickle_kwargs=self.pickle_kwargs)
  File "/root/anaconda3/lib/python3.7/site-packages/numpy/lib/format.py", line 692, in read_array
    raise ValueError("Object arrays cannot be loaded when "
ValueError: Object arrays cannot be loaded when allow_pickle=False
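
A note for anyone hitting this: the error comes from NumPy 1.16.3 and later, where np.load defaults to allow_pickle=False, while the bundled .npz models contain object arrays. One workaround (a sketch, not an official fix from the authors) is to pass allow_pickle=True where the repo's tensorlayer copy loads the archive:

# Sketch of a workaround in SRNTT/tensorlayer/files.py (load_npz):
# pass allow_pickle=True so NumPy >= 1.16.3 can read the object-array .npz models.
import numpy as np

def load_npz(path='', name='model.npz'):
    d = np.load(path + name, allow_pickle=True)  # added allow_pickle=True
    return d['params']

Alternatively, pinning NumPy below 1.16.3 (for example, pip install "numpy==1.16.2") avoids editing the code.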

When I run offline_patchMatch_textureSwap.py, I get this error. Please help.

Traceback (most recent call last):
  File "/workspace/share/user/SRNTT-master/SRNTT/vgg19.py", line 36, in __init__
    self.layers = self.vgg19()
  File "/workspace/share/user/SRNTT-master/SRNTT/vgg19.py", line 41, in vgg19
    params = loadmat(self.model_path)
  File "/root/anaconda2/envs/py36/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 142, in loadmat
    matfile_dict = MR.get_variables(variable_names)
  File "/root/anaconda2/envs/py36/lib/python3.6/site-packages/scipy/io/matlab/mio5.py", line 292, in get_variables
    res = self.read_var_array(hdr, process)
  File "/root/anaconda2/envs/py36/lib/python3.6/site-packages/scipy/io/matlab/mio5.py", line 252, in read_var_array
    return self._matrix_reader.array_from_header(header, process)
  File "mio5_utils.pyx", line 675, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header
  File "mio5_utils.pyx", line 721, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header
  File "mio5_utils.pyx", line 894, in scipy.io.matlab.mio5_utils.VarReader5.read_cells
  File "mio5_utils.pyx", line 673, in scipy.io.matlab.mio5_utils.VarReader5.read_mi_matrix
  File "mio5_utils.pyx", line 723, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header
  File "mio5_utils.pyx", line 969, in scipy.io.matlab.mio5_utils.VarReader5.read_struct
  File "mio5_utils.pyx", line 673, in scipy.io.matlab.mio5_utils.VarReader5.read_mi_matrix
  File "mio5_utils.pyx", line 721, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header
  File "mio5_utils.pyx", line 894, in scipy.io.matlab.mio5_utils.VarReader5.read_cells
  File "mio5_utils.pyx", line 673, in scipy.io.matlab.mio5_utils.VarReader5.read_mi_matrix
  File "mio5_utils.pyx", line 705, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header
  File "mio5_utils.pyx", line 778, in scipy.io.matlab.mio5_utils.VarReader5.read_real_complex
  File "mio5_utils.pyx", line 450, in scipy.io.matlab.mio5_utils.VarReader5.read_numeric
  File "mio5_utils.pyx", line 355, in scipy.io.matlab.mio5_utils.VarReader5.read_element
  File "streams.pyx", line 195, in scipy.io.matlab.streams.ZlibInputStream.read_string
  File "streams.pyx", line 188, in scipy.io.matlab.streams.ZlibInputStream.read_into
OSError: could not read bytes

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "offline_patchMatch_textureSwap.py", line 44, in <module>
    net_vgg19 = VGG19(model_path=vgg19_model_path)
  File "/workspace/share/user/maojiaxing/SRNTT-master/SRNTT/vgg19.py", line 38, in __init__
    self.layers = self.vgg19(reuse=True)
  File "/workspace/share/user/maojiaxing/SRNTT-master/SRNTT/vgg19.py", line 41, in vgg19
    params = loadmat(self.model_path)
  File "/root/anaconda2/envs/py36/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 142, in loadmat
    matfile_dict = MR.get_variables(variable_names)
  File "/root/anaconda2/envs/py36/lib/python3.6/site-packages/scipy/io/matlab/mio5.py", line 292, in get_variables
    res = self.read_var_array(hdr, process)
  File "/root/anaconda2/envs/py36/lib/python3.6/site-packages/scipy/io/matlab/mio5.py", line 252, in read_var_array
    return self._matrix_reader.array_from_header(header, process)
  File "mio5_utils.pyx", line 675, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header
  File "mio5_utils.pyx", line 721, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header
  File "mio5_utils.pyx", line 894, in scipy.io.matlab.mio5_utils.VarReader5.read_cells
  File "mio5_utils.pyx", line 673, in scipy.io.matlab.mio5_utils.VarReader5.read_mi_matrix
  File "mio5_utils.pyx", line 723, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header
  File "mio5_utils.pyx", line 969, in scipy.io.matlab.mio5_utils.VarReader5.read_struct
  File "mio5_utils.pyx", line 673, in scipy.io.matlab.mio5_utils.VarReader5.read_mi_matrix
  File "mio5_utils.pyx", line 721, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header
  File "mio5_utils.pyx", line 894, in scipy.io.matlab.mio5_utils.VarReader5.read_cells
  File "mio5_utils.pyx", line 673, in scipy.io.matlab.mio5_utils.VarReader5.read_mi_matrix
  File "mio5_utils.pyx", line 705, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header
  File "mio5_utils.pyx", line 778, in scipy.io.matlab.mio5_utils.VarReader5.read_real_complex
  File "mio5_utils.pyx", line 450, in scipy.io.matlab.mio5_utils.VarReader5.read_numeric
  File "mio5_utils.pyx", line 355, in scipy.io.matlab.mio5_utils.VarReader5.read_element
  File "streams.pyx", line 195, in scipy.io.matlab.streams.ZlibInputStream.read_string
  File "streams.pyx", line 188, in scipy.io.matlab.streams.ZlibInputStream.read_into
OSError: could not read bytes

Does loadmat cause the error, or is it SciPy? If you have any ideas or advice, please help me.
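
This OSError from SciPy's ZlibInputStream usually indicates that the VGG19 .mat file is truncated or corrupted, rather than a bug in loadmat itself. A quick check, assuming the standard imagenet-vgg-verydeep-19.mat weights (the path below is an example; adjust it to where your copy lives):

# Check whether the VGG19 weights file is complete and readable.
# Assumption: the file is the usual imagenet-vgg-verydeep-19.mat (roughly 500+ MB).
import os
from scipy.io import loadmat

vgg19_model_path = 'SRNTT/models/VGG19/imagenet-vgg-verydeep-19.mat'  # example path
print('size (MB):', os.path.getsize(vgg19_model_path) / 1e6)

try:
    params = loadmat(vgg19_model_path)
    print('loaded keys:', list(params.keys()))
except Exception as e:
    # A failure here typically means the download was interrupted; re-download the file.
    print('file appears truncated or corrupted:', e)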

How to validate if the feature swapping is going well?

I'm trying to reproduce your great work in another framework.

I think generating the feature-swapped maps is more critical for reproducing the results than the network architecture, loss functions, etc.
From that perspective, I want to validate my feature-swapping results.

Do you have any suggestions?
Note: I think a direct comparison between your results and mine is not reasonable because of differences in VGG weights, value ranges, etc.
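
One rough way to validate a re-implemented feature swap, independent of the exact VGG weights: the swapped map should be clearly more similar to the upsampled-LR features than the unswapped reference features are. A minimal sketch (variable names and shapes are illustrative, not from the repo):

# Rough validation sketch: per-pixel cosine similarity between feature maps.
import numpy as np

def cosine_map(feat_a, feat_b, eps=1e-8):
    """Per-pixel cosine similarity between two HxWxC feature maps."""
    num = (feat_a * feat_b).sum(axis=-1)
    den = np.linalg.norm(feat_a, axis=-1) * np.linalg.norm(feat_b, axis=-1) + eps
    return num / den

# feat_lr   : relu3_1 features of the bicubic-upsampled LR image, HxWxC
# feat_swap : the swapped feature map at the same layer, HxWxC
# feat_ref  : relu3_1 features of the (unswapped) reference, HxWxC
# sim_swap = cosine_map(feat_lr, feat_swap).mean()
# sim_ref  = cosine_map(feat_lr, feat_ref).mean()
# print(sim_swap, sim_ref)  # expect sim_swap > sim_ref if the swapping works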

how to make the code run faster

It takes more than 60 seconds to process an image with resolution 480x270, which is so slow that it is not usable in real applications.

How can I make the test process run faster?

Thanks !

PSNR measurement

Hello.

First, Thank you for your great work!

I want to know the details of how you measured PSNR and SSIM for Tables 1 and 2 in your paper. On which channel (RGB or the Y (luminance) channel) did you measure those metrics?

Also, could you tell me which reference image you used when measuring PSNR/SSIM for Table 1's RefSR methods? In Table 2 there are five PSNR measurements, one for each reference level (L1~L5), but I see a different number (26.24 for SRNTT-l2) for the CUFED dataset.
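
For context, a common SR evaluation convention (whether SRNTT's tables follow it is exactly what is being asked here) is to compute PSNR/SSIM on the Y channel of YCbCr. A minimal Y-channel PSNR sketch:

# Minimal Y-channel PSNR sketch; inputs are HxWx3 RGB arrays in [0, 255].
import numpy as np

def rgb_to_y(img):
    """ITU-R BT.601 luma from an RGB image in [0, 255]."""
    img = img.astype(np.float64)
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

def psnr_y(hr, sr):
    """PSNR computed on the luminance channel only."""
    mse = np.mean((rgb_to_y(hr) - rgb_to_y(sr)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)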

Looking forward to your reply, Thank you!

Multi-Scale in Feature Swapping

How do you get multi-scale feature maps? Is it due to the different pooling layers ('pool1', 'pool2') in VGG? Also, I noticed that offline_patchMatch_textureSwap.py only operates on the 'relu3_1' feature maps; is that true? Looking forward to your reply.
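
For reference, the paper describes matching patches once at relu3_1 and reusing the same correspondences at relu2_1 and relu1_1 with scaled patch positions and sizes. A rough sketch of that coordinate propagation (the repo's exact indexing may differ):

# Illustrative only: map a relu3_1 patch match to the finer VGG layers.
# A 3x3 patch at (i, j) on relu3_1 corresponds to a 6x6 patch at (2i, 2j)
# on relu2_1 and a 12x12 patch at (4i, 4j) on relu1_1.
def scale_match(i, j, patch_size=3, ratio=2):
    """Scale a patch location and size from one VGG level to the next finer one."""
    return i * ratio, j * ratio, patch_size * ratio

print(scale_match(5, 7))           # relu3_1 -> relu2_1: (10, 14, 6)
print(scale_match(5, 7, ratio=4))  # relu3_1 -> relu1_1: (20, 28, 12)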

help with training on my own dataset

2019-05-15 12:21:03,876 root WARNING The existing model dir demo_training_srntt/model is removed!
Traceback (most recent call last):
  File "main.py", line 95, in <module>
    use_lower_layers_in_per_loss=args.use_lower_layers_in_per_loss
  File "/home/n/nxy/SRNTT/SRNTT/model.py", line 329, in train
    assert num_files == len(files_ref) == len(files_map)
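
That assertion fires when the input, ref, and map directories do not contain the same number of (correspondingly named) files. A quick count to locate the mismatch (the paths below are the CUFED defaults; substitute your own):

# Count the files that main.py pairs up for training; all three numbers must match.
import glob

inputs = sorted(glob.glob('data/train/CUFED/input/*'))
refs = sorted(glob.glob('data/train/CUFED/ref/*'))
maps = sorted(glob.glob('data/train/CUFED/map_321/*'))
print(len(inputs), len(refs), len(maps))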

about PSNR and SSIM

Hello, I have completed training and testing with your method, but I would like to ask how you calculate PSNR and SSIM.
If I want to use your method to compare PSNR and SSIM at x2, x3, x4, and x8 scales, what should I do?

What is the upscale.npz model in SRNTT/models/SRNTT?

Hi, I want to ask what is the upscale.npz model in the folder SRNTT/models/SRNTT?

I found that it is loaded in offline_patchMatch_textureSwap.py to upsample the LR image and the downsampled Ref. But in your paper, you mention applying bicubic upsampling before conducting the feature swapping. Could you give an explanation? Thanks in advance.

input size

Hello, I want to ask whether I can set the image size myself during training. It seems that only square images are accepted. Can I use rectangular images?

about training epoch and loss

Hi, how can I know when training is done? The losses, especially l_dis, never seem to converge.
Thank you very much!

[bug report] SRNTT/SRNTT/bicubic_kernel.py: a bug in def kernel(in_length, out_length)

SRNTT/SRNTT/bicubic_kernel.py: a bug in the method kernel(in_length, out_length)

Since the required Python version is 3.6, and in Python 3 the '/' operator performs floating-point division regardless of whether the operands are integers or floats, '//' must be used when integer division is intended.

So in SRNTT/SRNTT/bicubic_kernel.py, the assert statement in the kernel(in_length, out_length) method never actually fails. I think it should be written as

assert in_length >= out_length and in_length // out_length == in_length / out_length

but not

assert in_length >= out_length and in_length / out_length == 1.0 * in_length / out_length

the original code is shown below:

def kernel(in_length, out_length):
    # assume in_length is larger scale
    # decide whether a convolution kernel can be constructed
    assert in_length >= out_length and in_length / out_length == 1.0 * in_length / out_length

    # decide kernel width
    scale = 1.0 * out_length / in_length
    kernel_length = 4.0 / scale
    ...
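
A quick demonstration of why the original check is a no-op in Python 3 and why the '//' form (or an equivalent modulo check) does catch a non-integer ratio:

# In Python 3, '/' always returns a float, so the two sides of the original
# assert are identical expressions and the check can never fail.
in_length, out_length = 100, 30  # deliberately not an integer ratio

print(in_length / out_length == 1.0 * in_length / out_length)  # True  (original check, no-op)
print(in_length // out_length == in_length / out_length)       # False (proposed check, fails as intended)
print(in_length % out_length == 0)                              # False (equivalent modulo check)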

About the patch similarity.

Thanks for your gorgeous work.
Have you considered other similarity measures instead of the cosine one, which seems to naively treat the 3x3 patch as a 9x1 vector and drop the inner spatial information?
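
For clarity, a minimal sketch of the patch-as-vector cosine similarity being discussed (a 3x3xC neural patch is flattened and L2-normalized, then scored against every reference patch by inner product; names and shapes are illustrative):

# Illustrative cosine scoring of one query patch against N reference patches.
import numpy as np

def cosine_scores(query_patch, ref_patches, eps=1e-8):
    """query_patch: (3, 3, C); ref_patches: (N, 3, 3, C). Returns (N,) similarities."""
    q = query_patch.reshape(-1)
    q = q / (np.linalg.norm(q) + eps)
    r = ref_patches.reshape(ref_patches.shape[0], -1)
    r = r / (np.linalg.norm(r, axis=1, keepdims=True) + eps)
    return r @ q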

about SR result

I also tried perceptual loss and GAN loss, and I found they do not work well for compressed images.
Have you tried your algorithm on compressed images?
