
edsr-pytorch's Introduction

About PyTorch 1.2.0

  • Now the master branch supports PyTorch 1.2.0 by default.
  • Due to serious version compatibility problems (especially with torch.utils.data.dataloader), MDSR functions are temporarily disabled. If you have to train/evaluate the MDSR model, please use the legacy branches.

EDSR-PyTorch

About PyTorch 1.1.0

  • There have been minor changes with the 1.1.0 update. We now support PyTorch 1.1.0 by default; please use the legacy branch if you prefer an older version.

This repository is an official PyTorch implementation of the paper "Enhanced Deep Residual Networks for Single Image Super-Resolution" from the 2nd NTIRE workshop at CVPR 2017. You can find the original code and more information here.

If you find our work useful in your research or publication, please cite our work:

[1] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee, "Enhanced Deep Residual Networks for Single Image Super-Resolution," 2nd NTIRE: New Trends in Image Restoration and Enhancement workshop and challenge on image super-resolution in conjunction with CVPR 2017. [PDF] [arXiv] [Slide]

@InProceedings{Lim_2017_CVPR_Workshops,
  author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
  title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month = {July},
  year = {2017}
}

We provide scripts for reproducing all the results from our paper. You can train your model from scratch, or use a pre-trained model to enlarge your images.

Differences from the Torch version

  • The code is much more compact (all unnecessary parts removed).
  • Models are about half the size.
  • Slightly better performance.
  • Training and evaluation require less memory.
  • Python-based.

Dependencies

  • Python 3.6
  • PyTorch >= 1.0.0
  • numpy
  • skimage
  • imageio
  • matplotlib
  • tqdm
  • cv2 >= 3.xx (Only if you want to use video input/output)

Code

Clone this repository into any place you want.

git clone https://github.com/thstkdgus35/EDSR-PyTorch
cd EDSR-PyTorch

Quickstart (Demo)

You can test our super-resolution algorithm with your own images. Place your images in the test folder (e.g., test/<your_image>). We support PNG and JPEG files.

Run the script from the src folder. Before running the demo, uncomment the line in demo.sh that you want to execute.

cd src       # You are now in */EDSR-PyTorch/src
sh demo.sh
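A typical line to uncomment looks like the following (one of the test commands quoted in the issues further below; the exact lines in your copy of demo.sh may differ):

python main.py --data_test Demo --scale 4 --pre_train ../experiment/model/EDSR_baseline_x4.pt --test_only --save_results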

You can find the result images in the experiment/test/results folder.

Model  Scale  File name (.pt)    Parameters  **PSNR
EDSR   2      EDSR_baseline_x2   1.37 M      34.61 dB
              *EDSR_x2           40.7 M      35.03 dB
       3      EDSR_baseline_x3   1.55 M      30.92 dB
              *EDSR_x3           43.7 M      31.26 dB
       4      EDSR_baseline_x4   1.52 M      28.95 dB
              *EDSR_x4           43.1 M      29.25 dB
MDSR   2      MDSR_baseline      3.23 M      34.63 dB
              *MDSR              7.95 M      34.92 dB
       3      MDSR_baseline                  30.94 dB
              *MDSR                          31.22 dB
       4      MDSR_baseline                  28.97 dB
              *MDSR                          29.24 dB

*Baseline models are in experiment/model. Please download our final models from here (542MB).

**We measured PSNR using DIV2K 0801 ~ 0900, RGB channels, without self-ensemble. (scale + 2) pixels from the image boundary are ignored.

You can evaluate your models with widely-used benchmark datasets:

Set5 - Bevilacqua et al. BMVC 2012,

Set14 - Zeyde et al. LNCS 2010,

B100 - Martin et al. ICCV 2001,

Urban100 - Huang et al. CVPR 2015.

For these datasets, we first convert the result images to YCbCr color space and evaluate PSNR on the Y channel only. You can download the benchmark datasets (250MB). Set --dir_data <where_benchmark_folder_located> to evaluate EDSR and MDSR on the benchmarks.
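For reference, here is a minimal sketch of that evaluation protocol (convert to YCbCr, keep only the Y channel, shave the border, compute PSNR). This is an illustrative re-implementation for clarity, not the repository's own metric code, and the exact shave width may differ from ours:

import numpy as np
import skimage.color as sc

def y_channel_psnr(sr, hr, scale, rgb_range=255):
    # Convert RGB (H, W, 3) arrays to the Y channel of YCbCr.
    sr_y = sc.rgb2ycbcr(sr.astype(np.float64) / rgb_range)[..., 0]
    hr_y = sc.rgb2ycbcr(hr.astype(np.float64) / rgb_range)[..., 0]
    # Shave `scale` pixels from the boundary (we use scale + 2 for DIV2K).
    sr_y = sr_y[scale:-scale, scale:-scale]
    hr_y = hr_y[scale:-scale, scale:-scale]
    rmse = np.sqrt(np.mean((sr_y - hr_y) ** 2))
    return 20 * np.log10(255 / rmse)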

You can download some results from here. The link contains EDSR+_baseline_x4 and EDSR+_x4 results. Otherwise, you can easily generate result images with the demo.sh scripts.

How to train EDSR and MDSR

We used the DIV2K dataset to train our models. Please download it from here (7.1GB).

Unpack the tar file to any place you want. Then, change the dir_data argument in src/option.py to the place where the DIV2K images are located.

We recommend pre-processing the images before training. This step decodes all PNG files and saves them as binaries. Use the --ext sep_reset argument on your first run. On later runs, you can skip the decoding step and use the saved binaries with the --ext sep argument.

If you have enough RAM (>= 32GB), you can use the --ext bin argument to pack all DIV2K images into one binary file.
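For instance, the first run and subsequent runs might look like the following (a hedged example: the --ext flags come from the paragraphs above, and the remaining model arguments are up to you):

python main.py --model EDSR --scale 2 --ext sep_reset   # first run: decode PNGs into binaries
python main.py --model EDSR --scale 2 --ext sep         # later runs: reuse the saved binaries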

You can train EDSR and MDSR yourself. All scripts are provided in src/demo.sh. Note that EDSR (x3, x4) requires a pre-trained EDSR (x2). You can ignore this constraint by removing the --pre_train <x2 model> argument.

cd src       # You are now in */EDSR-PyTorch/src
sh demo.sh
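As a concrete illustration, a baseline x2 run followed by an x4 run initialized from it could look roughly like this (flags taken from commands quoted elsewhere in this document; treat it as a sketch of demo.sh's contents, not a verbatim copy):

python main.py --model EDSR --scale 2 --save EDSR_baseline_x2 --reset
python main.py --model EDSR --scale 4 --save EDSR_baseline_x4 --reset --pre_train ../experiment/EDSR_baseline_x2/model/model_best.pt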

Update log

  • Jan 04, 2018

    • Many parts are re-written. You cannot use previous scripts and models directly.
    • Pre-trained MDSR is temporarily disabled.
    • Training details are included.
  • Jan 09, 2018

    • Missing files are included (src/data/MyImage.py).
    • Some links are fixed.
  • Jan 16, 2018

    • A memory-efficient forward function is implemented.
    • Add the --chop_forward argument to your script to enable it.
    • Basically, this function first splits a large image into small patches, super-resolves each patch, and merges the results. I verified it with 12GB of GPU memory on a 4000 x 2000 input image at scale 4 (so the output is 16000 x 8000). A sketch of the idea appears after this update log.
  • Feb 21, 2018

    • Fixed a problem when loading a pre-trained multi-GPU model.
    • Added a pre-trained scale 2 baseline model.
    • This code now saves only the best-performing model by default. For MDSR, 'the best' can be ambiguous. Use the --save_models argument to keep all intermediate models.
    • PyTorch 0.3.1 changed its implementation of the DataLoader function, so I also changed my implementation of MSDataLoader. You can find it on the feature/dataloader branch.
  • Feb 23, 2018

    • PyTorch 0.3.1 is now the default. Use the legacy/0.3.0 branch if you use the old version.

    • With the new src/data/DIV2K.py code, one can easily create a new data class for super-resolution.

    • New binary data pack. (Please remove the DIV2K_decoded folder from your dataset if you have one.)

    • With --ext bin, this code automatically generates and saves the binary data pack that corresponds to the previous DIV2K_decoded. (This requires a huge amount of RAM (~45GB; swap can be used), so please be careful.)

    • If you cannot make the binary pack, use the default setting (--ext img).

    • Fixed a bug where the PSNR in the log did not match the PSNR calculated from the saved images.

    • Saved images now have better quality! (PSNR is ~0.1dB higher than with the original code.)

    • Added performance comparison between Torch7 model and PyTorch models.

  • Mar 5, 2018

    • All baseline models are uploaded.
    • Now supports half-precision at test time. Use --precision half to enable it. This does not degrade the output images.
  • Mar 11, 2018

    • Fixed some typos in the code and script.
    • --ext img is now the default setting. Although we recommend --ext bin for training, please use --ext img when you use --test_only.
    • A skip_batch operation is implemented. Use the --skip_threshold argument to skip batches you want to ignore. Although this function is not exactly the same as the Torch7 version's, it should work as expected.
  • Mar 20, 2018

    • Use --ext sep-reset to pre-decode large PNG files. The decoded files are saved in the same directory as the DIV2K PNG files. After the first run, you can use --ext sep to save time.
    • Now supports various benchmark datasets. For example, try --data_test Set5 to test your model on the Set5 images.
    • Changed the behavior of skip_batch.
  • Mar 29, 2018

    • We now provide all models from our paper.
    • We also provide the MDSR_baseline_jpeg model, which suppresses JPEG artifacts in the original low-resolution image. Please use it if JPEG artifacts give you trouble.
    • The MyImage dataset is changed to the Demo dataset. It also works more efficiently than before.
    • Some code and scripts are re-written.
  • Apr 9, 2018

    • VGG and adversarial losses are implemented based on SRGAN. WGAN and gradient penalty are also implemented, but they are not tested yet.
    • Much of the code is refactored. If you find a bug, please report it.
    • D-DBPN is implemented. The default setting is D-DBPN-L.
  • Apr 26, 2018

    • Compatible with PyTorch 0.4.0
    • Please use the legacy/0.3.1 branch if you are using the old version of PyTorch.
    • Minor bug fixes
  • July 22, 2018

    • Thanks to recent commits, RDN and RCAN are now included. Please see code/demo.sh to train/test those models.
    • The dataloader is now much more stable than the previous version. Please erase the DIV2K/bin folder created before this commit. Also, please avoid the --ext bin argument; our code automatically pre-decodes PNG images before training. If you do not have enough disk space (~10GB), we recommend --ext img (but it is SLOW!).
  • Oct 18, 2018

    • With --pre_train download, pre-trained models are downloaded automatically from the server.
    • Supports video input/output (inference only). Try with --data_test video --dir_demo [video file directory].
  • About PyTorch 1.0.0

    • We support PyTorch 1.0.0. If you prefer a previous version of PyTorch, use the legacy branches.
    • --ext bin is no longer supported. Please rebuild your bin files with --ext sep-reset; once they are built successfully, you can remove -reset from the argument.
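As mentioned in the Jan 16, 2018 entry above, here is a minimal sketch of the chop-forward idea: split the input into four overlapping quadrants, super-resolve each one, and stitch the outputs back together. This is an illustrative simplification, not the repository's actual implementation (which also batches patches and recurses when quadrants are still too large):

import torch

def chop_forward(model, x, scale, shave=10):
    # x: (1, C, H, W). Split into four overlapping quadrants so that
    # each forward pass fits in GPU memory.
    _, _, h, w = x.size()
    h_half, w_half = h // 2, w // 2
    h_size, w_size = h_half + shave, w_half + shave
    patches = [
        x[..., 0:h_size, 0:w_size],
        x[..., 0:h_size, (w - w_size):w],
        x[..., (h - h_size):h, 0:w_size],
        x[..., (h - h_size):h, (w - w_size):w],
    ]
    outputs = [model(p) for p in patches]
    # Stitch the super-resolved quadrants back together.
    h, w = scale * h, scale * w
    h_half, w_half = scale * h_half, scale * w_half
    h_size, w_size = scale * h_size, scale * w_size
    out = x.new_zeros(1, outputs[0].size(1), h, w)
    out[..., 0:h_half, 0:w_half] = outputs[0][..., 0:h_half, 0:w_half]
    out[..., 0:h_half, w_half:w] = outputs[1][..., 0:h_half, (w_size - w + w_half):w_size]
    out[..., h_half:h, 0:w_half] = outputs[2][..., (h_size - h + h_half):h_size, 0:w_half]
    out[..., h_half:h, w_half:w] = outputs[3][..., (h_size - h + h_half):h_size, (w_size - w + w_half):w_size]
    return out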

edsr-pytorch's People

Contributors

in3omnia, mradassaad, sanghyun-son, sxd0071, tabetomo, trantoan89, yangcha, yulunzhang


edsr-pytorch's Issues

test EDSR_x2.pt error

I downloaded the pre-trained model and tested it like this:

python main.py --data_test Demo --scale 2 --pre_train ../experiment/model/EDSR_x2.pt --test_only --save_results

but I get an error:

Making model...
Loading model from ../experiment/model/EDSR_x2.pt
Traceback (most recent call last):
  File "/home/x00346096/code/pytorch/EDSR-PyTorch/code/model/edsr.py", line 64, in load_state_dict
    own_state[name].copy_(param)
RuntimeError: invalid argument 2: sizes do not match at /pytorch/torch/lib/THC/THCTensorCopy.cu:31

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 15, in <module>
    model = model.Model(args, checkpoint)
  File "/home/x00346096/code/pytorch/EDSR-PyTorch/code/model/__init__.py", line 40, in __init__
    cpu=args.cpu
  File "/home/x00346096/code/pytorch/EDSR-PyTorch/code/model/__init__.py", line 110, in load
    strict=False
  File "/home/x00346096/code/pytorch/EDSR-PyTorch/code/model/edsr.py", line 70, in load_state_dict
    .format(name, own_state[name].size(), param.size()))
RuntimeError: While copying the parameter named head.0.weight, whose dimensions in the model are torch.Size([64, 3, 3, 3]) and whose dimensions in the checkpoint are torch.Size([256, 3, 3, 3]).

Patch Modification

Hi,

I am interested in modifying the patch implementation here. I have a csv file that records all box locations for my training images, and I only want these boxes to be used as training patches. I have found that I need to add an argument in option.py for my csv file and modify get_patch in common.py to return the box coordinates. In addition, in srdata.py, I need to add another attribute for my csv file and modify get_patch to pass one more parameter to the get_patch call in common.py. What did I miss here? I wonder how you implemented the number of patches per image, since the number of boxes per image may differ.

Thanks for your help !!

questions about weights Initialization and rgb_range

Hello, I cannot find a specific weight initialization method in this code. I remember a Gaussian filler was used in the previous version. If you don't mind, please explain. In addition, rgb_range in the code is set to 255, which makes the input data range [0, 255] instead of the usual [0, 1]. I want to know whether this makes training faster and more stable. @thstkdgus35

The question about test EDSR_x2 model

Excuse me! I recently trained the EDSR_x2 model using your PyTorch code. During training, the validation PSNR is normal, around 38.5dB, but after training, when I test with DIV2K (n_val 100), the reconstructed images are very poor. I would appreciate your help, thank you!

PyTorch training slower than Torch?

Hi, I have noticed that PyTorch models are significantly slower than Torch models. Did you experience this too, or is it just me?

I am training exactly the same model with the same dataset (in the same binary format), yet the Torch version is significantly faster.

How to train EDSR_baseline_x4 with WGAN-GP?

Hi, I'm trying to train EDSR_baseline_x4 with WGAN-GP, but I don't know how to do it. I want to ask the following questions:

  1. In the discriminator, should batch normalization be removed? (I see that batch normalization has not been removed in your code )

  2. How to set (beta1, beta2, learning rate) of Adam for optimizing discriminator and generator?

  3. How to set the k value for adversarial loss? (I see that the default value of gan_k is 1 in your code )

  4. How to set the weights of VGG54 and generator loss?

Can you give me some advice?

Thank you!

EDSR out of memory at test time

I get out of memory error (in 12GB GPU RAM) when running final model of EDSR.

THCudaCheck FAIL file=/home/sibt/pytorch/aten/src/THC/generic/THCStorage.cu line=58 error=2 : out of memory
Traceback (most recent call last):
  File "main.py", line 14, in <module>
    while not t.terminate():
  File "/home/sibt/muneeb/_superResolution/code/trainer.py", line 164, in terminate
    self.test()
  File "/home/sibt/muneeb/_superResolution/code/trainer.py", line 98, in test
    output = _test_forward(input, scale)
  File "/home/sibt/muneeb/_superResolution/code/trainer.py", line 87, in _test_forward
    self.args.chop_shave, self.args.chop_size)
  File "/home/sibt/muneeb/_superResolution/code/utils.py", line 240, in chop_forward
    output_batch = model(input_batch)
  File "/datadrive/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/sibt/muneeb/_superResolution/code/model/EDSR.py", line 49, in forward
    x = self.tail(res)
  File "/datadrive/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/datadrive/anaconda2/lib/python2.7/site-packages/torch/nn/modules/container.py", line 89, in forward
    input = module(input)
  File "/datadrive/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/datadrive/anaconda2/lib/python2.7/site-packages/torch/nn/modules/container.py", line 89, in forward
    input = module(input)
  File "/datadrive/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/datadrive/anaconda2/lib/python2.7/site-packages/torch/nn/modules/pixelshuffle.py", line 40, in forward
    return F.pixel_shuffle(input, self.upscale_factor)
  File "/datadrive/anaconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 1688, in pixel_shuffle
    shuffle_out = input_view.permute(0, 1, 4, 2, 5, 3).contiguous()
RuntimeError: cuda runtime error (2) : out of memory at /home/sibt/pytorch/aten/src/THC/generic/THCStorage.cu:58

The Python command is attached below for your reference:

python main.py --dir_data /datadrive --scale 4 --n_train 790 --n_val 10 --offset_val 790 --print_model --model EDSR --n_feats 256 --n_resblocks 32 --patch_size 96 --chop_forward --test_only

EDSR_baseline_x4 model

Thank you for sharing.
I have a question: is the provided EDSR_baseline_x4.pt trained from scratch, or is it fine-tuned from the x2 model?

Question about using a patch

Hi, I'm a MS student from KAIST.

I have a simple question about the patches you are using.
In the code, the dataloader gets one random patch per image via
common.getPatch(...)

1. I have implemented super-resolution code that iterates over all patches in one epoch, but the random-choice scheme seems a lot easier to implement.
However, randomly choosing 'one' patch per 'one' image seems to require more epochs than the above approach (iterating over all patches) to achieve the same performance; is that right?
Since the default number of epochs in option.py is only 60, I'm really curious about it.
2. Also, does the random-choice approach achieve the same performance as iterating over all patches?
3. I have been a TensorFlow user, so I'm just a beginner in PyTorch. Could I run your code on PyTorch 0.3, which is the current common version? Are there any API changes I should take care of?

Thanks for the nice code.
Hyunho Yeo.

question about your model

I read your paper and was deeply impressed. I wanted to know how you would compare your work to GAN-oriented approaches such as SRGAN (advantages/drawbacks).

Thanks,

Got errors when running demo.sh

I followed the steps from the README to run the demo on Windows 10 x64 + Python 3.6.6 + PyTorch 0.4.1 (CUDA 9.2).
First I got a division-by-zero exception. Then I placed DIV2K in the dataset folder, which caused another error. It prints these error messages:

PS D:\Document\Python\EDSR-PyTorch\code> sh demo.sh
Making model...
Preparing loss function:
1.000 * L1
[Epoch 1] Learning rate: 1.00e-4
rm: cannot remove '../experiment/RCAN_BIX2_G10R20P48/log.txt': Device or resource busy
Making model...
Preparing loss function:
1.000 * L1
[Epoch 1] Learning rate: 1.00e-4
Traceback (most recent call last):
File "", line 1, in
Traceback (most recent call last):
File "main.py", line 19, in
t.train()
File "D:\Document\Python\EDSR-PyTorch\code\trainer.py", line 45, in train
for batch, (lr, hr, _, idx_scale) in enumerate(self.loader_train):
File "D:\Document\Python\EDSR-PyTorch\code\dataloader.py", line 144, in iter
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
return _MSDataLoaderIter(self)
File "D:\Document\Python\EDSR-PyTorch\code\dataloader.py", line 117, in init
exitcode = _main(fd)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
w.start()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 105, in start
prepare(preparation_data)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
self._popen = self._Popen(self)
_fixup_main_from_path(data['init_main_from_path'])
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
return _default_context.get_context().Process._Popen(process_obj)
run_name="mp_main") File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen

File "C:\ProgramData\Anaconda3\lib\runpy.py", line 263, in run_path
return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in init
pkg_name=pkg_name, script_name=fname)
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
reduction.dump(process_obj, to_child)
exec(code, run_globals) File "C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump

File "D:\Document\Python\EDSR-PyTorch\code\main.py", line 19, in
ForkingPickler(file, protocol).dump(obj)
BrokenPipeErrort.train():
[Errno 32] Broken pipe File "D:\Document\Python\EDSR-PyTorch\code\trainer.py", line 45, in train

for batch, (lr, hr, _, idx_scale) in enumerate(self.loader_train):

File "D:\Document\Python\EDSR-PyTorch\code\dataloader.py", line 144, in iter
return _MSDataLoaderIter(self)
File "D:\Document\Python\EDSR-PyTorch\code\dataloader.py", line 117, in init
w.start()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 33, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

I do not know the code well, so I cannot modify it by myself. I would be thankful for any advice. Thanks.
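For what it's worth, the RuntimeError above spells out the standard Windows fix itself: the script's entry point must be guarded so that 'spawn'-started dataloader workers can re-import the module without re-running the training loop. A minimal, self-contained illustration of the idiom (apply the same structure to main.py; the names here are generic, not the repository's):

import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # Without the __main__ guard below, each spawned worker would
    # re-execute this code on import and try to start workers itself.
    dataset = TensorDataset(torch.randn(8, 3, 48, 48))
    loader = DataLoader(dataset, num_workers=2)
    for (batch,) in loader:
        print(batch.shape)

if __name__ == '__main__':
    main()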

memory problem

Hi, I found that the current program uses too much memory when testing images.
Did you try cudnn.benchmark = True?
It seems that testing a 1920x1080 image reaches the limit of my GPU memory (11GB).

Something wrong about multiprocessing

I got a problem while training.
It seems that something went wrong when using multiple processes to produce the training data.

Preparing loss function...
[{'type': 'L1', 'weight': 1.0, 'function': L1Loss(
)}]
[Epoch 1] Learning rate: 1.00e-3
Traceback (most recent call last):
Traceback (most recent call last):
File "", line 1, in
File "main.py", line 17, in
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\spawn.py", line 105, in spawn_main
t.train()
exitcode = _main(fd)
File "F:\YiPeng\py\EDSR-PyTorch-master\code\trainer.py", line 48, in train
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\spawn.py", line 114, in _main
for batch, (input, target, idx_scale) in enumerate(self.loader_train):
prepare(preparation_data)
File "F:\YiPeng\py\EDSR-PyTorch-master\code\dataloader.py", line 133, in iter
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\spawn.py", line 225, in prepare
return MSDataLoaderIter(self)
_fixup_main_from_path(data['init_main_from_path'])
File "F:\YiPeng\py\EDSR-PyTorch-master\code\dataloader.py", line 106, in init
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
w.start()
run_name="mp_main")
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\process.py", line 105, in start
File "d:\Anaconda3\envs\pyth\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "d:\Anaconda3\envs\pyth\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "d:\Anaconda3\envs\pyth\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "F:\YiPeng\py\EDSR-PyTorch-master\code\main.py", line 17, in
t.train()
File "F:\YiPeng\py\EDSR-PyTorch-master\code\trainer.py", line 48, in train
for batch, (input, target, idx_scale) in enumerate(self.loader_train):
File "F:\YiPeng\py\EDSR-PyTorch-master\code\dataloader.py", line 133, in iter
return MSDataLoaderIter(self)
File "F:\YiPeng\py\EDSR-PyTorch-master\code\dataloader.py", line 106, in init
w.start()
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
self._popen = self._Popen(self)
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\context.py", line 223, in _Popen
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
return _default_context.get_context().Process._Popen(process_obj)
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\context.py", line 322, in _Popen
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\popen_spawn_win32.py", line 65, in init
return Popen(process_obj)
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\popen_spawn_win32.py", line 33, in init
reduction.dump(process_obj, to_child)
prep_data = spawn.get_preparation_data(process_obj._name)
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\reduction.py", line 60, in dump
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "d:\Anaconda3\envs\pyth\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
ForkingPickler(file, protocol).dump(obj)

BrokenPipeError: [Errno 32] Broken pipe

about MDSR (JPEG)*

Hi, I want to ask about MDSR (JPEG): does training it involve any special operations? I found that the results from MDSR (JPEG) differ substantially from the others.
Please share the details. Thank you!

low psnr performance when using the pre-trained model 'EDSR_baseline_x4.pt'

Hi,
I used the pre-trained model and tested it on Set5 and BSD100, but the performance is not good.
The psnr result:
Set5 : 26.86; BSD100:25.15.

The test code (saving result images):

for iteration, batch in enumerate(data_loader,1):
    img = batch[0]
    img = Variable(img, volatile=True)
    if cuda:
        img = img.cuda()
    out = my_model(img)
    out_img = out.cpu().data[0].numpy()
    out_img *= 255.0
    out_img = out_img.clip(0,255)
    out_img = out_img.transpose(1,2,0)
    out_img = Image.fromarray(np.uint8(out_img))
    
    save_path = save_dir+'/'+set_name+'_'+str(iteration)+'.png'
    out_img.save(save_path)
    print('output image saved to', save_path)

and the evaluation code (MATLAB):

function psnr=compute_psnr(im1,im2)
if size(im1, 3) > 1
    im1 = rgb2ycbcr(im1);
    im1 = im1(:, :, 1);
end

if size(im2, 3) == 3
    im2 = rgb2ycbcr(im2);
    im2 = im2(:, :, 1);
end

cropPix     = 4;
im1 = shave(im1, [cropPix, cropPix]);
im2  = shave(im2, [cropPix, cropPix]); 

imdff = double(im1) - double(im2);
imdff = imdff(:);

rmse = sqrt(mean(imdff.^2));
psnr = 20*log10(255/rmse);
end

function I = shave(I, border)
I = I(1+border(1):end-border(1), ...
      1+border(2):end-border(2), :, :);
end

Besides, the test LR and HR images were provided by SelfExSR.

Any advice?

Half precision error

When I ran EDSR baseline model (x2) training with the half-precision option, I encountered the following errors.
Could you give me any suggestions for avoiding them?

python main.py --model EDSR --scale 2 --save EDSR_baseline_x2_half --reset --precision half
Making model...
Preparing loss function:
1.000 * L1
[Epoch 1] Learning rate: 1.00e-4
Skip this batch 2! (Loss: inf)
Skip this batch 3! (Loss: inf)
Skip this batch 4! (Loss: inf)
Skip this batch 5! (Loss: inf)
Skip this batch 6! (Loss: inf)

Reported an error when running the test model

Hi, thank you very much for sharing your code.
I ran your test model (EDSR_x2) with Python 3.6.4, PyTorch 0.3.1-post (Windows, CPU only), and
torchvision 0.2.0, but it reported the following error:

Making model...
Loading model from ../experiment/model/EDSR_baseline_x2.pt...
Traceback (most recent call last):
File "D:\02.Work\Project\x-prj\03.code\01.EDSR\EDSR-PyTorch-master\EDSR-PyTorch-master\code\main.py", line 13, in
t = Trainer(my_loader, checkpoint, args)
File "D:\02.Work\Project\x-prj\03.code\01.EDSR\EDSR-PyTorch-master\EDSR-PyTorch-master\code\trainer.py", line 15, in init
self.model, self.loss, self.optimizer, self.scheduler = ckp.load()
File "D:\02.Work\Project\x-prj\03.code\01.EDSR\EDSR-PyTorch-master\EDSR-PyTorch-master\code\utility.py", line 80, in load
my_model = model().get_model(self.args)
File "D:\02.Work\Project\x-prj\03.code\01.EDSR\EDSR-PyTorch-master\EDSR-PyTorch-master\code\model_init_.py", line 17, in get_model
my_model.load_state_dict(torch.load(args.pre_train))
File "D:\06.ProgramData\Anaconda3\lib\site-packages\torch\serialization.py", line 267, in load
return _load(f, map_location, pickle_module)
File "D:\06.ProgramData\Anaconda3\lib\site-packages\torch\serialization.py", line 420, in _load
result = unpickler.load()
File "D:\06.ProgramData\Anaconda3\lib\site-packages\torch\serialization.py", line 389, in persistent_load
data_type(size), location)
File "D:\06.ProgramData\Anaconda3\lib\site-packages\torch\serialization.py", line 87, in default_restore_location
result = fn(storage, location)
File "D:\06.ProgramData\Anaconda3\lib\site-packages\torch\serialization.py", line 69, in _cuda_deserialize
return obj.cuda(device)
File "D:\06.ProgramData\Anaconda3\lib\site-packages\torch_utils.py", line 61, in cuda
with torch.cuda.device(device):
File "D:\06.ProgramData\Anaconda3\lib\site-packages\torch\cuda_init
.py", line 226, in enter
self.prev_idx = torch._C._cuda_getDevice()
AttributeError: module 'torch._C' has no attribute '_cuda_getDevice'

Thanks.

dataloader does not work for pytorch == 0.3.0

Thank you very much for sharing the code!

There are two errors when the PyTorch version is 0.3.0:
AttributeError: 'MSDataLoaderIter' object has no attribute 'timeout'
AttributeError: 'MSDataLoaderIter' object has no attribute 'worker_pids_set'.
Thanks!

Missing sigmoid as last discriminator layer

Is the discriminator missing a sigmoid as the last layer? (see these lines)
If I'm not mistaken, the current last layer is a linear layer, which can output any value including negative ones. Isn't this problematic in the standard GAN (Goodfellow et al. 2014) formulation?
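One detail worth checking before concluding it is a bug (I have not verified this repository's loss code): if the adversarial loss is computed with torch.nn.BCEWithLogitsLoss, the sigmoid is folded into the loss for numerical stability, so the discriminator itself can correctly end with a linear layer:

import torch
import torch.nn as nn

logits = torch.randn(4, 1)   # raw discriminator outputs, any real value
labels = torch.ones(4, 1)    # "real" targets
loss_a = nn.BCEWithLogitsLoss()(logits, labels)        # sigmoid applied inside the loss
loss_b = nn.BCELoss()(torch.sigmoid(logits), labels)   # mathematically equivalent, less stable
print(loss_a.item(), loss_b.item())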

Unexpected key when resuming the model.

Hi, thank you for sharing your code.
I can run your EDSR_x2 training successfully, but when resuming the model the following error occurs:

File "user path/EDSR-PyTorch-master/code/utility.py", line 127, in load
torch.load(self.dir + '/model/model_latest.pt'))
File "/home/cggi/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 522, in load_state_dict
.format(name))
KeyError: 'unexpected key "sub_mean.weight" in state_dict'

I just use this command:
python main.py --model EDSR --scale 2 --save EDSR_x2 --n_resblocks 32 --n_feats 256 --res_scale 0.1 --resume 46 --load EDSR_x2

Thanks!

Torch not compiled with CUDA enabled

Hi @thstkdgus35, thank you for your great work on this project.

Unfortunately, when I try to run
python main.py --data_test Demo --scale 2 --pre_train ../experiment/model/EDSR_baseline_x2.pt --test_only --save_results

on Python 3.5, PyTorch 0.4, Windows 10,
it shows this error:

Making model...
Traceback (most recent call last):
File "main.py", line 15, in
model = model.Model(args, checkpoint)
File "G:\Sr\New folder\EDSR-PyTorch-master\code\model_init_.py", line 24, in init
self.model = module.make_model(args).to(self.device)
File "C:\Users\ABDO\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\nn\modules\module.py", line 393, in to
return self._apply(lambda t: t.to(device))
File "C:\Users\ABDO\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\nn\modules\module.py", line 176, in _apply
module._apply(fn)
File "C:\Users\ABDO\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\nn\modules\module.py", line 182, in _apply
param.data = fn(param.data)
File "C:\Users\ABDO\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\nn\modules\module.py", line 393, in
return self._apply(lambda t: t.to(device))
RuntimeError: Error attempting to use dtype torch.float32 with layout torch.strided and device type CUDA. Torch not compiled with CUDA enabled.

Could you please show me how to test it on CPU only, without a GPU?

Thanks for the advice.
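For reference, other commands quoted in these issues pass a --cpu flag to main.py (see the MDSR-GAN and ValueError reports below). Assuming that option exists in your version, a CPU-only test might look like:

python main.py --data_test Demo --scale 2 --pre_train ../experiment/model/EDSR_baseline_x2.pt --test_only --save_results --cpu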

Superresolution at x1

Hello, I am new to SR and playing with your creation. I decided to use it on old scanned prints and got very nice results. The problem is that the scans are already large files, so I am splitting them into many pieces due to the GPU memory limit.
If we could use SR just to enhance a photo while preserving its resolution, it would (probably) consume less memory, meaning fewer parts to split. Might it be possible to support a command like the one below some day? :)

python main.py --data_test Demo --scale 1 --pre_train ../experiment/model/EDSR_baseline_x1.pt --test_only --save_results

I would appreciate it if you considered this.

ValueError: not enough values to unpack (expected 4, got 3)

Hi, thank you for providing this excellent super-resolution technology.
I wanted to test the behavior; I ran the following command and got an error.

C:\Users\yusuke\EDSR-PyTorch\code>python main.py --data_test Demo --cpu --scale 4 --n_threads 0 --pre_train ../experiment/model/EDSR_baseline_x4.pt --test_only --save_results
Making model...
Loading model from ../experiment/model/EDSR_baseline_x4.pt

Evaluation:
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "main.py", line 18, in
while not t.terminate():
File "C:\Users\yusuke\EDSR-PyTorch\code\trainer.py", line 136, in terminate
self.test()
File "C:\Users\yusuke\EDSR-PyTorch\code\trainer.py", line 88, in test
for idx_img, (lr, hr, filename, _) in enumerate(tqdm_test):
ValueError: not enough values to unpack (expected 4, got 3)

Windows 10 64bit

Anaconda 4.5.0
Python 3.6.4
pytorch 0.3.1.post2
torchvision 0.2.0

Add license

I am trying to evaluate for testing. Can you please add the license to this project?

Training a x4 model using a pretrained x2 model

First of all, thank you for sharing this great work.

I have a problem when training an x4 model using a pre-trained x2 model (../experiment/EDSR_baseline_x2/model/model_best.pt), as follows.

$ python main.py --model EDSR --scale 4 --save EDSR_baseline_x4 --reset  --dir_data /data --pre_train ../experiment/EDSR_baseline_x2/model/model_best.pt
...
Loading model from ../experiment/EDSR_baseline_x2/model/model_best.pt...
Traceback (most recent call last):
  File "main.py", line 13, in <module>
    t = Trainer(my_loader, checkpoint, args)
  File "/home/yschoi/work/SR/EDSR-PyTorch_custom/code/trainer.py", line 21, in __init__
    self.model, self.loss, self.optimizer, self.scheduler = ckp.load()
  File "/home/yschoi/work/SR/EDSR-PyTorch_custom/code/utils.py", line 80, in load
    my_model = model(self.args).get_model()
  File "/home/yschoi/work/SR/EDSR-PyTorch_custom/code/model/__init__.py", line 18, in get_model
    my_model.load_state_dict(torch.load(self.args.pre_train))
  File "/home/yschoi/work/SR/EDSR-PyTorch_custom/code/model/EDSR.py", line 78, in load_state_dict
    raise KeyError('missing keys in state_dict: "{}"'.format(missing))
KeyError: 'missing keys in state_dict: "{\'tail.0.2.bias\', \'tail.0.2.weight\'}"'

Although all the above errors could be removed by adding strict=False to the load_state_dict call (line 15) in ./code/model/__init__.py as follows, I'm not sure this is the right way to handle this situation.

- my_model.load_state_dict(torch.load(self.args.pre_train))
+ my_model.load_state_dict(torch.load(self.args.pre_train), strict=False)

Please let me know if I'm missing something important.

How to test the model with multi-GPU?

Thank you for your excellent code.

I encountered a problem when training with this code. As written in main.py:

while not t.terminate():
    t.train()
    t.test()

where we can see that the test phase begins immediately after the train phase. However, the GPU memory is not released, and the test model runs on only a single GPU even though the model runs on 4 GPUs in the train phase. An out-of-memory problem then occurs.

Actually, I could run this code successfully on 4 GTX 1080Ti GPUs, even though the test model runs on only a single GPU. Recently my work environment changed and I now train these networks on 4 Titan Xp GPUs. Although the GPU memory increased, the out-of-memory problem still occurs.

I wonder if we can test the model with multi-GPU just like the train phase. By the way, setting --chop_forward doesn't work for me.

Thank you!

Library Dependencies

A skimage dependency problem occurs. It seems that scikit-image's dependencies need to be added.

nan of loss during training

Hello, when I use my own training data with 11500 pictures, batch size = 12 and scale = 2, a problem occurred:
[Epoch 1] Learning rate: 1.00e-4
[1200/10500] [L1: 11.2929] 21.0+60.3s
[2400/10500] [L1: 8.8598] 20.0+63.7s
[3600/10500] [L1: 7.7807] 20.0+69.5s
[4800/10500] [L1: 7.1179] 20.0+74.0s
[6000/10500] [L1: 6.6776] 20.1+69.4s
[7200/10500] [L1: 6.3562] 20.1+75.5s
[8400/10500] [L1: 6.0822] 20.0+75.2s
[9600/10500] [L1: 5.8959] 19.9+75.0s

Evaluation:
[Sequence x2] PSNR: 37.031 (Best: 37.031 from epoch 1)
Time: 415.90s

[Epoch 2] Learning rate: 1.00e-4
[1200/10500] [L1: nan] 20.2+46.7s
[2400/10500] [L1: nan] 20.1+48.3s
[3600/10500] [L1: nan] 20.1+54.2s
[4800/10500] [L1: nan] 20.1+60.7s
[6000/10500] [L1: nan] 20.2+62.0s
[7200/10500] [L1: nan] 20.1+63.0s
[8400/10500] [L1: nan] 20.1+68.9s
[9600/10500] [L1: nan] 20.1+68.6s

Evaluation:
[Sequence x2] PSNR: 37.779 (Best: 37.779 from epoch 2)
Time: 403.14s
...
...
...
[Epoch 43] Learning rate: 1.00e-4
[1200/10500] [L1: nan] 19.2+43.1s
[2400/10500] [L1: nan] 19.1+42.1s
[3600/10500] [L1: nan] 19.1+41.9s
[4800/10500] [L1: nan] 19.1+41.7s
[6000/10500] [L1: nan] 19.0+45.8s
[7200/10500] [L1: nan] 19.1+48.2s
[8400/10500] [L1: nan] 19.3+73.3s
[9600/10500] [L1: nan] 19.1+62.8s

Evaluation:
[Sequence x2] PSNR: 39.445 (Best: 39.445 from epoch 43)
Time: 396.57s

Using Multiple GPUs

I successfully trained the x2 model from the paper using multiple GPUs thanks to the following code in code/models/__init__.py:

if self.args.n_GPUs > 1:
    my_model = nn.DataParallel(my_model, range(0, self.args.n_GPUs))
    print('\tMultiple GPUs Ready!')

However, I ran into problems when trying to use this model to train the x4 model from the paper. This is because in the same file (__init__.py), when loading a model instead of creating one, the program attempts to load the model before enabling multiple GPUs. This is a problem since the keys of the saved x2 model all start with "module." while the keys of the freshly created model do not. It therefore throws a KeyError.

I tried to fix this by switching the order of the GPU initialization and the dictionary loading, however it then complains about missing keys. Specifically, the error message is

KeyError: 'missing keys in state_dict: "{'module.tail.0.2.bias', 'module.tail.0.2.weight'}"'

Is there a better way to try and fix the problem? Thank you very much.
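A widely used workaround for this kind of "module." prefix mismatch, offered as a sketch rather than a fix verified against this repository: strip the prefix from the checkpoint keys before loading them into the bare model, then wrap the model in DataParallel afterwards:

import torch

def load_without_module_prefix(model, path):
    state = torch.load(path)
    # Checkpoints saved from a DataParallel-wrapped model prefix every
    # key with 'module.'; remove it so the unwrapped model can load them.
    state = {k[len('module.'):] if k.startswith('module.') else k: v
             for k, v in state.items()}
    model.load_state_dict(state)
    return model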

test error on EDSR_x4 model

I am running the test code (PyTorch version 0.4.0) with this line:
python3 main.py --data_test Demo --scale 4 --n_resblocks 32 --n_feats 256 --res_scale 0.1 --pre_train ../experiment/model/EDSR_x4.pt --test_only --save_results

I get the following error:

Traceback (most recent call last):
File "main.py", line 18, in
while not t.terminate():
File "/home/yochai/models/EDSR/EDSR-PyTorch-master/code/trainer.py", line 136, in terminate
self.test()
File "/home/yochai/models/EDSR/EDSR-PyTorch-master/code/trainer.py", line 88, in test
for idx_img, (lr, hr, filename, _) in enumerate(tqdm_test):
File "/usr/local/lib/python3.5/dist-packages/tqdm/_tqdm.py", line 941, in iter
for obj in iterable:
File "/home/yochai/models/EDSR/EDSR-PyTorch-master/code/dataloader.py", line 133, in iter
return MSDataLoaderIter(self)
File "/home/yochai/models/EDSR/EDSR-PyTorch-master/code/dataloader.py", line 114, in init
self._put_indices()
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 298, in _put_indices
self.index_queues[self.worker_queue_idx].put((self.send_idx, indices))
AttributeError: 'MSDataLoaderIter' object has no attribute 'index_queues'
Exception ignored in: <bound method _DataLoaderIter.del of <dataloader.MSDataLoaderIter object at 0x7f74abd3c278>>
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 349, in del
self._shutdown_workers()
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 323, in _shutdown_workers
for q in self.index_queues:
AttributeError: 'MSDataLoaderIter' object has no attribute 'index_queues'

I would appreciate help with this. Thanks!

Question about EDSR architecture

I have a question about EDSR network architecture.

I applied the EDSR resblock to the VDSR architecture.

Simply put, the network architecture is almost the same except for the 4 points below.

  1. The upsampling layer (e.g., sub-pixel) is removed.
  2. The input is first upscaled to the output resolution using MATLAB bicubic.
  3. The VDSR residual input is changed to the EDSR residual input (EDSR uses the output of the first convolution layer as the residual input).
  4. The number of residual blocks is 10, giving 20 + 2 convolution layers; each convolution has 64 features, the same as VDSR or the EDSR baseline.

However, it does not converge well, showing worse results than bicubic.
I'm curious whether the upsampling layer is important in the EDSR-style ResNet architecture.

I'm not sure whether there is a flaw in my code, but I would like your opinion.

Summary: do you think the EDSR ResNet architecture also applies well to the VDSR style, which doesn't use an upsampling layer?

Thank you!

ImportError: cannot import name 'DataLoaderIter'

Hi @thstkdgus35,
thank you for your great work on this project.
I tried to test on Windows with
python main.py --data_test Demo --scale 4 --pre_train ../experiment/model/EDSR_baseline_x4.pt --test_only --save_results
but I get this error:

Traceback (most recent call last):
File "main.py", line 4, in
import data
File "D:\EDSR PyTorch\code\data_init_.py", line 3, in
from dataloader import MSDataLoader
File "D:\EDSR PyTorch\code\dataloader.py", line 13, in
from torch.utils.data.dataloader import DataLoaderIter
ImportError: cannot import name 'DataLoaderIter'

How can I solve this, please?

The count of output images is limited?

Hi, I am testing your super-resolution code. However, I find that the process gets stuck when the number of output images exceeds 800. I am new to this area and don't know why this happens. Please help me.

evaluate trained model on other datasets

Hi, thanks for sharing the code. I have learned a lot from it. Here I have a question: how do I evaluate a trained model on other datasets such as BSDS100 and Manga109?

Many thanks!

Question about getPatch method

Hi, may I ask how the getPatch method in the data directory's common.py gets called? I've gone through the implementation and only found that it's called in SRData.py's getItem function, but I couldn't find where the getItem function itself gets called.

And another question: is there a way to save the patch information used for testing without re-training the model?

Thanks in advance for any responses!

Training error

Hello, I was trying to train an EDSR model with the DIV2K dataset.
I downloaded DIV2K, put it in ~/data/DIV2K, and set that directory as the default dataset path in option.py. Then I trained the model with the command

python main.py --model EDSR --scale 2 --save EDSR_x2 --n_resblocks 32 --n_feats 256 --res_scale 0.1

However, I got the following error:

Traceback (most recent call last):
File "main.py", line 14, in
loader = data.Data(args)
File "/home/caffe/EDSR-PyTorch/code/data/init.py", line 11, in init
trainset = getattr(module_train, args.data_train)(args)
File "/home/caffe/EDSR-PyTorch/code/data/div2k.py", line 7, in init
args, name=name, train=train, benchmark=benchmark
File "/home/caffe/EDSR-PyTorch/code/data/srdata.py", line 86, in init
= args.test_every // (len(self.images_hr) // args.batch_size)
ZeroDivisionError: integer division or modulo by zero

I want to train a model with DIV2K, so can anyone help me figure out where the problem is? The README does not explain things clearly enough for me to find a solution.

Thanks

Data loading for separate binary files

Hi, I recently migrated from the Lua version to the PyTorch version. In Lua, there was an option to load a binary file for each image; I don't see that option in PyTorch.

Since I unfortunately do not have 16GB+ of RAM, I cannot load a binary file for the whole dataset. So I was wondering: is an option to load a binary file per image implemented? If not, is it going to be?

MDSR-GAN model

Hi @thstkdgus35
I downloaded your trained MDSR-GAN model from #27, but I did not succeed in using it.
I use this:
python main.py --n_threads 0 --data_test Demo --pre_train ../experiment/model/model_best.pt --test_only --scale 4 --save_results --chop --cpu

but the result is a gray photo. Could you please explain the script needed to make it work?

You say that the x4 output is not that satisfying with the default hyper-parameters, but when I compared your GAN result with the two most famous SRGAN projects on GitHub, I found your result better than all of them; even compared to Let's Enhance, your GAN surprisingly does better. So I wonder why you did not publish your amazing MDSR-GAN model with the other models. Thanks for this great work and effort.

why the reconstruction image is too poor?

Excuse me! I recently trained the EDSR_baseline_x2.pt model using your PyTorch code, but the reconstructed image is very poor. I would appreciate your help, thank you!
The command I used is: python3 main.py --data_test Demo --scale 2 --pre_train ../experiment/model/EDSR_baseline_x2.pt --test_only --save_results

The reconstructed image (img034_x2_sr) is very poor; the input image is img034. (Screenshots omitted.)

Question about training on new dataset

How can I train EDSR on my own (non-standard) dataset? Is there a straightforward way to do this with the current code? I'd also like to explore transfer learning with one of the pre-trained models if possible.

how many epochs do you train to get your final model?

Hello,
I have another question.
How many epochs do you train to get your final models (EDSR_x2.pt / EDSR_x3.pt / EDSR_x4.pt)?

Which indicator determines that training is complete: loss or PSNR?
And what are the values of the loss and PSNR when the model is fully trained?

Thank you.

How to super-resolve our own images?

Hi, apologies for a basic question, but I wasn't able to figure out how to test our own images WITHOUT validating/evaluating the results. I ran the following command:

python main.py --dir_data ~/dataset --n_val 10 --offset_val 790 --scale 2 --chop_forward --model EDSR --pre_train ../experiment/model/EDSR_baseline_x4.pt --test_only --save_results

However, it gave the following error.

Loading model from ../experiment/model/EDSR_baseline_x4.pt...
	CUDA is ready!
Preparing loss function...
[{'function': L1Loss(
), 'type': 'L1', 'weight': 1.0}]

Evaluation:
Traceback (most recent call last):
  File "main.py", line 14, in <module>
    while not t.terminate():
  File "/home/muneeb/ntire2018/code/trainer.py", line 180, in terminate
    self.test()
  File "/home/muneeb/ntire2018/code/trainer.py", line 113, in test
    self.ckp.log_test[-1, idx_scale] = eval_acc / len(self.loader_test)
ZeroDivisionError: integer division or modulo by zero

Could you kindly tell me how to super-resolve our own set of images? Thank you.
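For context, other commands in this document test user-supplied images through the Demo test set rather than the DIV2K validation split, which sidesteps the evaluation loop entirely. Assuming your version supports it, something like:

python main.py --data_test Demo --scale 4 --pre_train ../experiment/model/EDSR_baseline_x4.pt --test_only --save_results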

Quick start error

When I run the demo I get an error:
python main.py --model EDSR --scale 3 --save EDSR_baseline_x3 --reset --pre_train ../experiment/model/EDSR_baseline_x4.pt --test_only

Preparing binary packages...
Loading ../../../dataset/DIV2K_decoded\DIV2K_train_HR\packv.pt
Traceback (most recent call last):
File "main.py", line 12, in
my_loader = data(args).get_loader()
File "D:\Projects\SR\EDSR-PyTorch-master\code\data_init_.py", line 25, in get_loader
self.args, train=False)
File "D:\Projects\SR\EDSR-PyTorch-master\code\data\DIV2K.py", line 32, in init
self.pack_tar = torch.load(name_tar)
File "C:\Program Files\Anaconda3\lib\site-packages\torch\serialization.py", line 259, in load
f = open(f, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '../../../dataset/DIV2K_decoded\DIV2K_train_HR\packv.pt'

Do I really need this file for just running a demo?

OS: Win10
Also, your link to the DIV2K dataset is unavailable. Can you update it, please?

Request for skip_batch functionality

Hi, there was a skip_batch functionality in Lua that skipped a training batch if the error was above a certain threshold. Although it wasn't used frequently with smaller models, it was extremely useful when training bigger models (such as the final version of EDSR); it led to more stable performance.

Any chance you'd be adding that?
