
DAN's Introduction

Our new work on blind image super-resolution has been accepted to IJCV. The paper is available at End-to-end Alternating Optimization for Real-World Blind Super Resolution, and the code is released at RealDAN.

This is an official implementation of Unfolding the Alternating Optimization for Blind Super Resolution and End-to-end Alternating Optimization for Blind Super Resolution.

If this repo works for you, please cite our papers:

@article{luo2020unfolding,
  title={Unfolding the Alternating Optimization for Blind Super Resolution},
  author={Luo, Zhengxiong and Huang, Yan and Li, Shang and Wang, Liang and Tan, Tieniu},
  journal={Advances in Neural Information Processing Systems (NeurIPS)},
  volume={33},
  year={2020}
}
@misc{luo2021endtoend,
      title={End-to-end Alternating Optimization for Blind Super Resolution}, 
      author={Zhengxiong Luo and Yan Huang and Shang Li and Liang Wang and Tieniu Tan},
      year={2021},
      eprint={2105.06878},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

This repo is built on the basis of [MMSR] and [IKC].

News

  • Add more pretrained weights and update the results of DANv1!

  • Add pretrained weights and update the results of IKC!

  • Add DANv2!!!

Main Results

Results for Setting 1

| Method | Scale | Set5 (PSNR/SSIM) | Set14 (PSNR/SSIM) | B100 (PSNR/SSIM) | Urban100 (PSNR/SSIM) | Manga109 (PSNR/SSIM) |
|--------|-------|------------------|-------------------|------------------|----------------------|-----------------------|
| IKC    | x2    | 37.19/0.9526     | 32.94/0.9024      | 31.51/0.8790     | 29.85/0.8928         | 36.93/0.9667          |
| DANv1  | x2    | 37.34/0.9526     | 33.08/0.9041      | 31.76/0.8858     | 30.60/0.9060         | 37.23/0.9710          |
| DANv2  | x2    | 37.60/0.9544     | 33.44/0.9094      | 32.00/0.8904     | 31.43/0.9174         | 38.07/0.9734          |
| IKC    | x3    | 33.06/0.9146     | 29.38/0.8233      | 28.53/0.7899     | 27.43/0.8302         | 32.43/0.9316          |
| DANv1  | x3    | 34.04/0.9199     | 30.09/0.8287      | 28.94/0.7919     | 27.65/0.8352         | 33.16/0.9382          |
| DANv2  | x3    | 34.19/0.9209     | 30.20/0.8309      | 29.03/0.7948     | 27.83/0.8395         | 33.28/0.9400          |
| IKC    | x4    | 31.67/0.8829     | 28.31/0.7643      | 27.37/0.7192     | 25.33/0.7504         | 28.91/0.8782          |
| DANv1  | x4    | 31.89/0.8864     | 28.42/0.7687      | 27.51/0.7248     | 25.86/0.7721         | 30.50/0.9037          |
| DANv2  | x4    | 32.00/0.8885     | 28.50/0.7715      | 27.56/0.7277     | 25.94/0.7748         | 30.45/0.9037          |

Results for Setting 2 (DIV2KRK)

| Method           | x2 (PSNR/SSIM) | x4 (PSNR/SSIM) |
|------------------|----------------|----------------|
| KernelGAN + ZSSR | 30.36/0.8669   | 26.81/0.7316   |
| DANv1            | 32.56/0.8997   | 27.55/0.7582   |
| DANv2            | 32.58/0.9048   | 28.74/0.7893   |

Dependencies

  • python3
  • pytorch >= 1.5
  • NVIDIA GPU + CUDA
  • Python packages: pip3 install numpy opencv-python lmdb pyyaml

Pretrained Weights

Pretrained weights of DANv1 and IKC are available at BaiduYun (password: cbjv) or GoogleDrive. Download the weights into checkpoints:

.
`-- checkpoints
    |-- DANv1
    |   `-- ...
    |-- DANv2
    |   `-- ...
    `-- IKC
        `-- ...

Dataset Preparation

We use DIV2K and Flickr2K as our training datasets.

For evaluation of Setting 1, we use five datasets, i.e., Set5, Set14, Urban100, BSD100 and Manga109.

We use DIV2KRK for evaluation of Setting 2.

To train a model on the full dataset (DIV2K + Flickr2K, 3450 images in total), download the datasets from their official websites. After downloading, run codes/scripts/generate_mod_blur_LR_bic.py to generate the LRblur/LR/HR/Bicubic dataset paths. (You need to modify the file paths yourself.)

python3 codes/scripts/generate_mod_blur_LR_bic.py
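The paths to edit sit near the top of the script. A hedged illustration of the kind of values involved (the variable names follow the issue reports further down this page; verify them against the script itself):

```python
# Illustrative values only -- adjust to your local layout.
up_scale = 4                          # SR scale factor
mod_scale = 4                         # crop HR images to a multiple of this
sourcedir = "/data/datasets/DIV2K"    # folder of HR source images
savedir = "/data/datasets/DAN/DIV2K"  # output root for HR/LR/LRblur/Bicubic
```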

For efficient IO, run codes/scripts/create_lmdb.py to transform the datasets into binary files. (You need to modify the file paths yourself.)

python3 codes/scripts/create_lmdb.py
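Again, the paths live at the top of the script. A hedged sketch (names as they appear in the issue reports below; check the script before relying on them):

```python
# Illustrative values only -- adjust to your local layout.
img_folder = "/data/datasets/DAN/DIV2K/HR/x4/*"           # glob of images to pack
lmdb_save_path = "/data/datasets/DAN_lmdb/DIV2K/HR.lmdb"  # output lmdb
meta_info = {"name": "DIV2K"}
# Run once more with the LRblur folder to build the matching LRblur.lmdb.
```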

Train

For single GPU:

cd codes/config/DANv1
python3 train.py -opt=options/setting1/train_setting1_x4.yml

For distributed training:

cd codes/config/DANv1
python3 -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 train.py -opt=options/setting1/train_setting1_x4.yml --launcher pytorch

Test on Synthetic Images

cd codes/config/DANv1
python3 test.py -opt=options/setting1/test_setting1_x4.yml

Test on Real Images

cd codes/config/DANv1
python3 inference.py -input_dir=/path/to/real/images/ -output_dir=/path/to/save/sr/results/


DAN's Issues

Error of testing code

Many thanks for your work.
When I tried to test your code using the default settings, there were some errors. It seems there is a mismatch between the output size and the GT size.

On x4:

Traceback (most recent call last):
  File "test.py", line 119, in <module>
    psnr = util.calculate_psnr(cropped_sr_img * 255, cropped_gt_img * 255)
  File "../../utils/util.py", line 698, in calculate_psnr
    mse = np.mean((img1 - img2) ** 2)
ValueError: operands could not be broadcast together with shapes (1348,2032,3) (1144,2032,3)

On x2:

Traceback (most recent call last):
  File "test.py", line 119, in <module>
    psnr = util.calculate_psnr(cropped_sr_img * 255, cropped_gt_img * 255)
  File "../../utils/util.py", line 698, in calculate_psnr
    mse = np.mean((img1 - img2) ** 2)
ValueError: operands could not be broadcast together with shapes (1352,2036,3) (1148,2036,3)
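A common cause of this mismatch is a GT image that was not modcropped to a multiple of the scale factor. A minimal numpy sketch of a defensive fix (not the repo's own code): crop both images to their shared region before computing the metric.

```python
import numpy as np

def crop_to_match(img1, img2):
    """Crop two HWC images to their shared top-left region."""
    h = min(img1.shape[0], img2.shape[0])
    w = min(img1.shape[1], img2.shape[1])
    return img1[:h, :w], img2[:h, :w]

# Shapes taken from the x4 traceback above:
sr = np.zeros((1348, 2032, 3))
gt = np.zeros((1144, 2032, 3))
sr, gt = crop_to_match(sr, gt)
mse = np.mean((sr - gt) ** 2)  # now broadcasts cleanly
```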

Question about fine-tuning from the provided model

Hello, testing with the pretrained model you provided works fine, but when I load it for fine-tuning, the following error occurs. Why is that?
Traceback (most recent call last):
  File "train.py", line 337, in <module>
    main()
  File "train.py", line 202, in main
    model = create_model(opt)  # load pretrained model of SFTMD
  File "/media/xbm/data/xbm/Self-ImageSR-Extend/DAN-master/codes/config/DANv1/models/__init__.py", line 17, in create_model
    m = M(opt)
  File "/media/xbm/data/xbm/Self-ImageSR-Extend/DAN-master/codes/config/DANv1/models/blind_model.py", line 37, in __init__
    self.load()
  File "/media/xbm/data/xbm/Self-ImageSR-Extend/DAN-master/codes/config/DANv1/models/blind_model.py", line 234, in load
    self.load_network(load_path_G, self.netG, self.opt["path"]["strict_load"])
  File "/media/xbm/data/xbm/Self-ImageSR-Extend/DAN-master/codes/config/DANv1/models/base_model.py", line 102, in load_network
    network.load_state_dict(load_net_clean, strict=strict)
  File "/home/xbm/anaconda3/envs/pytorch1.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1223, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for DAN:
        Missing key(s) in state_dict: "init_kernel", "init_ker_map".
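The traceback shows the load going through self.opt["path"]["strict_load"], so one hedged way out is to set strict_load: false in the yml (or pass strict=False at the call site), letting missing buffers such as init_kernel keep their freshly initialized values. A minimal, self-contained illustration of non-strict loading:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, 3)
        # Stands in for DAN's "init_kernel" buffer.
        self.register_buffer("init_kernel", torch.zeros(1, 21, 21))

net = Net()
# A checkpoint that lacks the buffer, like the fine-tuning case above.
state = {"conv.weight": net.conv.weight.data, "conv.bias": net.conv.bias.data}
missing, unexpected = net.load_state_dict(state, strict=False)
print(missing)  # ['init_kernel'] -- tolerated instead of raising
```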

About testing on real images

Hi, it's definitely fantastic work.
I am confused about testing on real images.
According to your paper, the SR model used for inference on real images is trained with AWGN. However, in codes/config/DANv1/inference.py, the SR model is the same as that of Setting 1.
I also tested it (with inference.py) on images captured by my own device. The result is quite blurry.
Is there any misunderstanding?
Thanks a lot.

Wrong pth file?

It seems pca_aniso_matrix_x4.pth and pca_matrix.pth are identical, although one should be anisotropic and the other isotropic. Please check.

Results when not using blur kernel

Hello, and thanks for making the code available!
I am curious what the SR results are when the blur kernel is not used during the SR process (for example, only inputting a delta kernel as the conditional input to the Restorer during training). Of course, in this case it won't be possible to estimate the blur kernel.
If you have checked the SR results when not using the blur kernel, it would be great if you could share them!

Training dataset

Under Setting 1, are the x2HR.lmdb, x3HR.lmdb, and x4HR.lmdb used to train the models at different scales identical? If not, how is each of them generated? Thanks for your answer!

Local test of the pretrained model differs from the reported values

Hi, thanks for providing detailed codes and it's a great job!

I'm testing the pretrained model locally on a self-generated Gaussian8 testset with kernel size 21x21, and I got the performance below.

| Method           | Scale | Set5 (PSNR/SSIM) | Set14 (PSNR/SSIM) | B100 (PSNR/SSIM) | Urban100 (PSNR/SSIM) | Manga109 (PSNR/SSIM) |
|------------------|-------|------------------|-------------------|------------------|----------------------|-----------------------|
| DANv1 local test | x4    | 31.75/0.8833     | 28.19/0.7767      | 27.42/0.7214     | 25.77/0.7683         | 30.39/0.9021          |
| DANv1 reported   | x4    | 31.89/0.8864     | 28.42/0.7687      | 27.51/0.7248     | 25.86/0.7721         | 30.50/0.9037          |

Though the local test performance is also excellent, it differs from the reported one (quoted from the paper).
I'm a little confused... not sure whether a mistake was made during testing...
Looking forward to your reply!

Kernel size of Setting 2

I'm interested in this great work, but I found that the blur kernel size is not consistent between DANv1 and DANv2 for Setting 2 at scale 4.
In DANv1 and KernelGAN, the kernel size is 11; in DANv2, it is 31.
The quantitative results in the DANv2 paper are the same as those in the DANv1 paper except for the added DANv2 row, which implies that DANv2 was tested the same way as DANv1. So DANv2 was trained on data blurred by 31x31 kernels and tested on data blurred by 11x11 kernels. Is there something wrong with my understanding?

'OrderedYaml' is not defined

I am having the following error:

~/DAN/codes/config/DANv1$ python3 test.py -opt=options/setting1/test_setting1_x4.yml
Traceback (most recent call last):
  File "test.py", line 12, in <module>
    import options as option
  File "/home/alper/DAN/codes/config/DANv1/options.py", line 14, in <module>
    Loader, Dumper = OrderedYaml()
NameError: name 'OrderedYaml' is not defined
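The NameError usually means the import of codes/utils failed earlier (see the SyntaxError issue further down this page). For reference, a hedged sketch of the OrderedYaml helper as MMSR-style repos define it (verify against codes/utils/__init__.py):

```python
import yaml
from collections import OrderedDict

def OrderedYaml():
    """Return a (Loader, Dumper) pair that preserves dict key order."""
    _mapping_tag = yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG

    def dict_representer(dumper, data):
        return dumper.represent_dict(data.items())

    def dict_constructor(loader, node):
        return OrderedDict(loader.construct_pairs(node))

    yaml.Dumper.add_representer(OrderedDict, dict_representer)
    yaml.Loader.add_constructor(_mapping_tag, dict_constructor)
    return yaml.Loader, yaml.Dumper
```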

Hello, I ran into the following problem while training DANv2; how can I solve it?

File "/home/lct/L/DAN/codes/config/DANv2/train.py", line 203, in main
model = create_model(opt) # load pretrained model of SFTMD
File "/home/lct/L/DAN/codes/config/DANv2/models/init.py", line 17, in create_model
m = M(opt)
File "/home/lct/L/DAN/codes/config/DANv2/models/blind_model.py", line 90, in init
lr_scheduler.MultiStepLR_Restart(
File "/home/lct/L/DAN/codes/config/DANv2/models/lr_scheduler.py", line 27, in init
super(MultiStepLR_Restart, self).init(optimizer, last_epoch)
File "/home/lct/anaconda3/envs/DAN/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 78, in init
self.step()
File "/home/lct/anaconda3/envs/DAN/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 141, in step
values = self.get_lr()
File "/home/lct/L/DAN/codes/config/DANv2/models/lr_scheduler.py", line 34, in get_lr
return [
File "/home/lct/L/DAN/codes/config/DANv2/models/lr_scheduler.py", line 35, in
group["initial_lr"] * weight for group in self.optimizer.param_groups
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
(DAN) lct@251:~/L/DAN/codes/config/DANv2$
Training DANv1 raises no error and trains normally, although the loss tends to diverge; training DANv2 produces the error above. How should I modify the code so that it trains normally? Many thanks!

How can I get the same SSIM results?

Hi, I noticed that due to some bugs in an older version of IKC, I can't reproduce the SSIM results in Table 1. Could you please provide the code used to calculate the SSIM results in Table 1? I can't find a version of IKC that reproduces the SSIM results in their paper. Thank you!
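Not the code behind Table 1 (which is what this issue asks for), but a hedged sanity check following the common SR convention: crop scale border pixels and evaluate SSIM on the Y channel.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_y(sr_rgb, gt_rgb, scale):
    """SSIM on the luma channel with a scale-pixel border crop."""
    def to_y(img):  # ITU-R BT.601 luma, inputs in [0, 255]
        return np.dot(img[..., :3], [0.299, 0.587, 0.114])
    sr_y = to_y(sr_rgb)[scale:-scale, scale:-scale]
    gt_y = to_y(gt_rgb)[scale:-scale, scale:-scale]
    return structural_similarity(sr_y, gt_y, data_range=255.0)
```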

Error of testing

Hi, thank you for this excellent work and thank you for sharing the code.

I have the following problem when running the testing code as:
python test_single_img.py -opt=test_option.yml -input_dir=D:/2020/ReferenceCode/DAN-master/our_testimages/ -output_dir=D:/2020/ReferenceCode/DAN-master/our_results/

The environment is: python 3.7 / torch 1.6

Could you please give me some suggestions to solve this problem? Thanks.


G:\Anaconda\envs\py3.7to1.6\python.exe D:/2020/ReferenceCode/DAN-master/codes/config/DAN/test_single_img.py
export CUDA_VISIBLE_DEVICES=0
OrderedDict([('name', 'DIV2KRK'), ('mode', 'LQGTker'), ('dataroot_GT', '/data/DIV2KRK_public/HRblur.lmdb'), ('dataroot_LQ', '/data/DIV2KRK_public/x2LRblur.lmdb')])
Traceback (most recent call last):
  File "D:/2020/ReferenceCode/DAN-master/codes/config/DAN/test_single_img.py", line 32, in <module>
    opt = option.parse(args.opt, is_train=False)
  File "D:\2020\ReferenceCode\DAN-master\codes\config\DAN\options.py", line 56, in parse
    config_dir = path.split("/")[-3]
IndexError: list index out of range

Process finished with exit code 1


About the setting of the kernel width

Thank you for sharing. I learned a lot from your paper, but I still have a question about the kernel settings. Do the kernel width and its standard deviation mean the same thing? In your paper you mention that "During training, the kernel width is uniformly sampled in [0.2, 4.0], [0.2, 3.0] and [0.2, 2.0] for scale factors 4, 3 and 2 respectively", but I found in your code that the kernel sigmas are all set to [0.2, 4.0] for the different upsampling rates. Do I need to change this part when retraining if I want to keep the same settings as in your original paper? Thank you very much!
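A hedged sketch of the per-scale sampling the paper describes (per this issue, the released code uses [0.2, 4.0] for all scales, so this reflects the paper's setting rather than the repo's):

```python
import random

# Kernel-width ranges as stated in the paper, keyed by scale factor.
SIGMA_RANGE = {4: (0.2, 4.0), 3: (0.2, 3.0), 2: (0.2, 2.0)}

def sample_kernel_width(scale):
    lo, hi = SIGMA_RANGE[scale]
    return random.uniform(lo, hi)

print(sample_kernel_width(2))  # a sigma in [0.2, 2.0]
```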

About test results

Hi, DAN is very cool, and thank you for releasing the code!
When I ran test.py in DANv1, I didn't run generate_mod_blur_LR_bic.py or create_lmdb.py; I just set the LR and HR image paths in options/setting1/xx.yml, e.g. in setting1_x2.yml:

 test1:
    name: Set5
    mode: LQGT
    dataroot_GT: /datasets/Set5/LR/x2
    dataroot_LQ: /datasets/Set5/HR/x2

But the resulting PSNR/SSIM is 35.59 dB/0.938, which is lower than the 37.34/0.9526 reported in your paper.
Could the different settings in the .yml file be the cause of these non-ideal results?

RuntimeError

  File "/DAN-master/codes/config/DAN/models/base_model.py", line 96, in load_network
    load_net = torch.load(load_path)
RuntimeError: ../../../checkpoints/DANx4.pth is a zip archive (did you mean to use torch.jit.load()?)
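A hedged explanation: checkpoints saved by PyTorch >= 1.6 use a zip-based format that PyTorch <= 1.5 cannot read. Either upgrade torch, or re-save the file once from a newer environment in the legacy format:

```python
import torch

# Run in an environment with torch >= 1.6; the path comes from the error above.
state = torch.load("../../../checkpoints/DANx4.pth", map_location="cpu")
torch.save(state, "../../../checkpoints/DANx4_legacy.pth",
           _use_new_zipfile_serialization=False)
```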

Image in greyscale: 1-channel training and testing

Hello,

Thank you for this implementation. I would like to train DAN on my own dataset, which is in greyscale, so I created a new yml file in options and set the color option to grey. However, the weights are still of size [nb_kernel, 3, kernel_size, kernel_size] instead of [nb_kernel, 1, kernel_size, kernel_size].
Can't we change the number of channels of the weights? And how?

Thank you a lot.

About test results

Sorry, my English may not be accurate. When I test the Set5 dataset at x2, PSNR = 28.85, and I still haven't found the problem. May I describe my procedure?

Step 1: configure generate_mod_blur_LR_bic.py

up_scale = 2
mod_scale = 2
sourcedir = '/datasets/Set5'
savedir = '/datasets/DAN/Set5'
'rate_iso' : 1.0

Nothing else was changed. After running, four folders are generated; HR and LRblur each contain 40 images with different sigmas.

Step 2: feed these two datasets into create_lmdb.py

img_folder = '/datasets/DAN/Set5/HR/x2/*'  # ../LRblur/x2/*
lmdb_save_path = '/datasets/DAN_lmdb/Set5/HR.lmdb'  # ../LRblur.lmdb
meta_info = {"name": "Set5"}

Nothing else was changed. After running, HR.lmdb and LRblur.lmdb each contain three files.

Step 3: set them in option/test/test_setting1_x2.yml

suffix: x2
scale: 2
pca_matrix_path: '/DAN/pca_matrix/DANv2/pca_aniso_matrix_x2.pth'
datasets:
    test1:
    dataroot_GT: /datasets/DAN_lmdb/Set5/HR.lmdb
    dataroot_LQ: /datasets/DAN_lmdb/Set5/LRblur.lmdb
pretrain_model_G: ../../danv2_x2_setting1.pth

This raises an error:

RuntimeError: Expected tensor to have size 144 at dimension 1, but got size 121 for argument #2 'batch2' (while checking argument for bmm)

If pca_matrix_path is changed to pca_matrix.pth, the program runs normally, but the result is PSNR = 28.648437 dB, SSIM = 0.817695.
I haven't been able to solve this and don't know which step is wrong, making my results differ from those in the paper. Please correct me!

DANv2 inference produces all-black images, with PSNR of only 4-5 dB

24-01-20 17:47:31.710 - INFO: img:baby - PSNR: 4.304825 dB; SSIM: 0.010210; PSNR_Y: 5.539371 dB; SSIM_Y: 0.173895.
24-01-20 17:47:32.200 - INFO: img:bird - PSNR: 9.119347 dB; SSIM: 0.052580; PSNR_Y: 9.932940 dB; SSIM_Y: 0.291013.
24-01-20 17:47:32.561 - INFO: img:butterfly - PSNR: 5.385315 dB; SSIM: 0.000441; PSNR_Y: 6.387867 dB; SSIM_Y: 0.097870.
24-01-20 17:47:33.031 - INFO: img:head - PSNR: 8.389810 dB; SSIM: 0.031692; PSNR_Y: 10.075234 dB; SSIM_Y: 0.338463.
24-01-20 17:47:33.388 - INFO: img:woman - PSNR: 4.811639 dB; SSIM: 0.010355; PSNR_Y: 6.219289 dB; SSIM_Y: 0.218729.

Loss becomes NaN during training

Hello, thank you for your excellent work.
During training, at around 60k iterations all the losses become NaN. Following the workaround in issue #8, I resumed from an earlier checkpoint, but after a while the losses become NaN again (screenshot omitted); it seems unsolvable. Could you share how you trained the model so that it converges to the final result?

group["initial_lr"]: unsupported operand type(s) for *: 'NoneType' and 'int'

Traceback (most recent call last):
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\train.py", line 349, in <module>
    main()
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\train.py", line 207, in main
    model = create_model(opt)  # load pretrained model of SFTMD
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\__init__.py", line 17, in create_model
    m = M(opt)
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\blind_model.py", line 90, in __init__
    lr_scheduler.MultiStepLR_Restart(
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\lr_scheduler.py", line 27, in __init__
    super(MultiStepLR_Restart, self).__init__(optimizer, last_epoch)
  File "C:\dev\PycharmInterpreters\PyTorchStar\lib\site-packages\torch\optim\lr_scheduler.py", line 77, in __init__
    self.step()
  File "C:\dev\PycharmInterpreters\PyTorchStar\lib\site-packages\torch\optim\lr_scheduler.py", line 154, in step
    values = self.get_lr()
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\lr_scheduler.py", line 34, in get_lr
    return [
  File "C:\dev\PycharmProjects\DAN\codes\config\DANv2\models\lr_scheduler.py", line 35, in <listcomp>
    group["initial_lr"] * weight for group in self.optimizer.param_groups
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'

Tried both PyTorch 1.11 and 1.5.

DANv2 group["initial_lr"] throwing error

Hi all, has anyone solved the following error message when trying to train DANv2?

group["initial_lr"] * weight for group in self.optimizer.param_groups TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'

It does not appear with DANv1, and I've seen others report the same issue, but there are no solutions to be found.
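One hedged workaround (untested against this repo): newer torch.optim schedulers read group["initial_lr"] when resuming, and the repo's param groups apparently never set it. Filling it in before constructing MultiStepLR_Restart should avoid the 'NoneType' * 'int' error; a minimal reproduction of the idea:

```python
import torch

# Give every param group an "initial_lr" before any scheduler touches it.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=1e-4)
for group in optimizer.param_groups:
    group.setdefault("initial_lr", group["lr"])
```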

runtime error

Traceback (most recent call last):
  File "test_single_img.py", line 36, in <module>
    model = create_model(opt)
  File "/workspace/Visual_Media_Restore/DAN/codes/config/DAN/models/__init__.py", line 17, in create_model
    m = M(opt)
  File "/workspace/Visual_Media_Restore/DAN/codes/config/DAN/models/blind_model.py", line 27, in __init__
    self.netG = networks.define_G(opt).to(self.device)
  File "/workspace/Visual_Media_Restore/DAN/codes/config/DAN/models/networks.py", line 43, in define_G
    pca_matrix_path=opt["pca_matrix_path"],
  File "/workspace/Visual_Media_Restore/DAN/codes/config/DAN/models/modules/dan_arch.py", line 173, in __init__
    self.init_kernel.view(1, 1, self.ksize ** 2).matmul(self.encoder)[:, 0],
RuntimeError: Expected tensor to have size 441 at dimension 1, but got size 121 for argument #2 'batch2' (while checking arguments for bmm)

The SSIM is much lower

I generated the test dataset according to test Setting 1 and tested the provided pretrained model on it. The PSNR results are roughly the same as those you report in Table 1; however, the SSIM is much lower. Could you tell me how to solve this issue?

About LQGT

Hello, thank you for your meaningful work and code. I have a question: in the provided code, the yaml training-data mode is LQGTker, yet during training the LR_img and ker_map are obtained by blurring and downsampling the GT, and those are what the model is trained on. The LQ data in the dataloader is never used, and there is no lmdb for ker_map. Is this done to increase the randomness of the blur kernels? Would building lmdbs for LQ, GT, and the blur kernels, instead of generating them during training, reduce performance but save training time?

About the dataset preparation

Hi, I would like to ask a question about the dataset preparation. I find that in train.py the blurred LR images are generated online, that is, during the training process; those generated offline are only used for validation. Does that mean I should run python3 codes/scripts/generate_mod_blur_LR_bic.py only for the validation dataset, not for the training dataset? I don't think this is clarified very well in this repo.

How to set test_setting.yml when testing the DANx2 (scale=2) model?

I suppose the params of the network structures need to be modified? Could you please provide an x2 version of the yml file? Thanks!
What I have changed is:

name: DANx2
suffix: x2
scale: 2
pca_matrix_path: ../../../pca_aniso_matrix_x2.pth
upscale: 2
kernel_size: 11
pca_matrix_path: ../../../pca_aniso_matrix_x2.pth
path:
pretrain_model_G: ../../../checkpoints/DANx2.pth


The full yml is:

name: DANx2
suffix: _x2  # add suffix to saved images
model: blind
distortion: sr
scale: 2
crop_border: ~  # crop border when evaluation. If None(~), crop the scale pixels
gpu_ids: [0]
pca_matrix_path: ../../../pca_aniso_matrix_x2.pth # ../../../pca_aniso_matrix_2.pth when scale=2

datasets:
  test0:
    name: DIV2KRK
    mode: LQGT
    dataroot_GT: /data/DIV2KRK_public/HRblur.lmdb
    dataroot_LQ: /data/DIV2KRK_public/x4LRblur.lmdb


#### network structures
network_G:
  which_model_G: DAN
  setting:
    nf: 64
    nb: 40
    input_para: 10
    loop: 4
    upscale: 2
    kernel_size: 11
    pca_matrix_path: ../../../pca_aniso_matrix_x2.pth 

#### path
path:
  pretrain_model_G: ../../../checkpoints/DANx2.pth

can't find results

Hello, DAN is an amazing framework, but I couldn't save the test results when testing on real images. I don't know what went wrong; I hope to get a reply. Thank you.


Syntax error in test_single_img.py

Traceback (most recent call last):
  File "test_single_img.py", line 15, in <module>
    import options as option
  File "/home/SERILOCAL/mario.a/DAN/codes/config/DAN/options.py", line 10, in <module>
    from utils import OrderedYaml
  File "../../utils/__init__.py", line 1, in <module>
    from .deg_utils import *
  File "../../utils/deg_utils.py", line 88
    mask_iso = np.random.uniform(0, 1, (batch)) < rate_iso]
                                                          ^
SyntaxError: invalid syntax

Error when running generate_mod_blur_LR_bic.py

Hello author, have you encountered this situation when running generate_mod_blur_LR_bic.py? At first it reported that the pca_matrix.pth path could not be found; after I modified the path in generate_mod_blur_LR_bic.py (../../pca_matrix/DANv1/pca_matrix.pth), the error shown in the (omitted) screenshot appeared.

Run without external GPU?

Hi.

Is there any way to run this if I don't have an external GPU?

(I just have an integrated graphics card.)

Thanks.

About pretrained weights

Hi, it's a wonderful job.
Why are the two pretrained weights ("danv1_x4_setting1.pth" and "danv1_x4_setting2.pth") the same?

About methods comparison

Thanks for your nice work!
I am a beginner in blind SR, so I do not understand why methods like EDSR are trained under the bicubic downsampling setting but tested under the multiple-degradation setting.
Looking forward to your reply!

About the 'kernel_code'?

Hello,
I am confused about the generation process of the 'kernel_code'.
When calculating the PCA matrix, the code does X = X - X_mean.expand_as(X) before computing the PCA matrix.
Then, in PCAEncoder, it seems to project directly via torch.bmm(batch_kernel.view((B, 1, H * W)), self.weight.expand((B,) + self.size)).view((B, -1)) without subtracting X_mean.
Don't we need the - X_mean here?

Thank you!
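To make the question concrete, a small sketch contrasting the two projections (shapes and names are illustrative, not the repo's exact ones):

```python
import torch

B, H, W, dim = 4, 21, 21, 10
kernels = torch.rand(B, H, W)
pca_matrix = torch.rand(H * W, dim)  # columns = principal directions
X_mean = torch.rand(H * W)           # mean removed when fitting the PCA

flat = kernels.view(B, H * W)
code_direct = flat @ pca_matrix               # what PCAEncoder appears to do
code_centered = (flat - X_mean) @ pca_matrix  # what the fitting step implies
```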

Information about the loss

Thanks a lot for the code.

I have some questions about this part of the code, which appears in both v1 and v2:

total_loss = 0
for ind in range(len(srs)):
    d_kr = self.cri_pix(
        kernels[ind], self.real_kernel.view(*kernels[ind].shape)
    )
    # d_kr = self.cri_pix(ker_maps[ind], self.real_ker_map)
    d_sr = self.cri_pix(srs[ind], self.real_H)
    self.log_dict["l_pix%d" % ind] = d_sr.item()
    self.log_dict["l_ker%d" % ind] = d_kr.item()
total_loss += d_sr
total_loss += d_kr

It seems that your loss combines the SR loss (d_sr) and the kernel loss (d_kr), but only for the last step of the Restorer and Estimator, since the lines total_loss += d_sr and total_loss += d_kr are outside the loop.
This leads me to two questions:

  • Why do you keep track of d_sr and d_kr for the other steps?
  • Did you try using d_sr and d_kr for every step, and did it give bad results?

Thanks a lot,
Charles
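For concreteness, the variant the second question asks about would move the accumulation inside the loop, roughly as follows (a sketch of the question, not the authors' code; names come from the snippet above):

```python
# Accumulate SR and kernel losses at every alternation step
# instead of only the last one.
total_loss = 0
for ind in range(len(srs)):
    d_kr = self.cri_pix(kernels[ind], self.real_kernel.view(*kernels[ind].shape))
    d_sr = self.cri_pix(srs[ind], self.real_H)
    total_loss += d_sr + d_kr
```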
