
dasr's People

Contributors

longguangwang


dasr's Issues

Training error: ImportError: cannot import name '_DataLoaderIter'

Thank you for your work. I have a question: when I train with DF2K, I get the following error:

Traceback (most recent call last):
  File "main.py", line 4, in <module>
    import data
  File "/home/XXXX/F/XXX/DASR/data/__init__.py", line 2, in <module>
    from dataloader import MSDataLoader
  File "/home/XXXX/F/XXX/DASR/dataloader.py", line 12, in <module>
    from torch.utils.data.dataloader import _DataLoaderIter
ImportError: cannot import name '_DataLoaderIter'

My environment is:

torch 1.1.0

Could you help resolve this?
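
A possible workaround, offered only as a sketch and not as the authors' fix: in newer PyTorch releases the private _DataLoaderIter class was split into _SingleProcessDataLoaderIter and _MultiProcessingDataLoaderIter, so the old import fails. A compatibility shim like the one below keeps the old name importable; if MSDataLoader subclasses _DataLoaderIter rather than just calling it, more rework would be needed.

try:
    from torch.utils.data.dataloader import _DataLoaderIter
except ImportError:
    from torch.utils.data.dataloader import (
        _SingleProcessDataLoaderIter,
        _MultiProcessingDataLoaderIter,
    )

    def _DataLoaderIter(loader):
        # Mirror DataLoader's own dispatch: use the multiprocessing iterator
        # when workers are requested, the single-process one otherwise.
        if loader.num_workers > 0:
            return _MultiProcessingDataLoaderIter(loader)
        return _SingleProcessDataLoaderIter(loader)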

blur kernels in Tab.3

Hi, sorry to bother you.
Does the paper provide the eigenvalues and rotation angles of the blur kernels in Table 3?
I can only see their shapes.

Generalization ability

Thank you for sharing this exciting work. I have three questions:

  1. In the traditional blind-SR setting, only LR images are provided (e.g., KernelGAN/ZSSR), while in your work HR images are used and the network is trained on synthesized LR images in a supervised setting. I am wondering whether this is actually blind SR.
  2. Although multiple Gaussian blurs and noise levels are applied when generating LR images, this is still far from the real-world degradation process. I am wondering whether the model generalizes well to real-world scenarios, where the degradation may not be Gaussian blur or noise.
  3. Can your model work in a real-world unpaired LR/HR setting, where no synthesized images are used but unpaired LR and HR images are provided?

About plotting Figure 6 in the paper

Thank you for your contribution to image super-resolution; your work has helped me a lot. How was the MoCo model used to extract the degradation representations in Figure 6 trained? Was it trained normally, or only on data generated with the 4 blur kernels? Could you provide the code for this part?

Why crop the HR images too?

Hi, thank you for your work and for sharing the code.

I would like to ask about the multiscalesrdata.py code.
I don't understand why you crop the HR images in line 157.
Can't we just crop the low-resolution images for degradation learning?

Thank you!

images without labels

Hello, thank you very much for your excellent work. I would like to know how to train the model on real-world images that have no labels.

Environment

Could you provide the environment used for the code, as well as the package versions?

How should "unsupervised" be understood?

Hello, thank you very much for this thorough and well-reasoned work. However, there is one point I still don't quite understand: how is the unsupervised aspect you mention reflected? In other words, how would its potential unsupervised value be realized? Perhaps I did not read the paper carefully enough to grasp your intent. Thank you very much for your answer.

AssertionError during training: Invalid device id

Traceback (most recent call last):
  File "main.py", line 15, in <module>
    model = model.Model(args, checkpoint)
  File "C:\Users\Luffy\Desktop\DASR-main\model\__init__.py", line 29, in __init__
    self.model = nn.DataParallel(self.model, range(args.n_GPUs))
  File "C:\Users\Luffy\Anaconda3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 142, in __init__
    _check_balance(self.device_ids)
  File "C:\Users\Luffy\Anaconda3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 23, in _check_balance
    dev_props = _get_devices_properties(device_ids)
  File "C:\Users\Luffy\Anaconda3\lib\site-packages\torch\_utils.py", line 455, in _get_devices_properties
    return [_get_device_attr(lambda m: m.get_device_properties(i)) for i in device_ids]
  File "C:\Users\Luffy\Anaconda3\lib\site-packages\torch\_utils.py", line 455, in <listcomp>
    return [_get_device_attr(lambda m: m.get_device_properties(i)) for i in device_ids]
  File "C:\Users\Luffy\Anaconda3\lib\site-packages\torch\_utils.py", line 438, in _get_device_attr
    return get_member(torch.cuda)
  File "C:\Users\Luffy\Anaconda3\lib\site-packages\torch\_utils.py", line 455, in <lambda>
    return [_get_device_attr(lambda m: m.get_device_properties(i)) for i in device_ids]
  File "C:\Users\Luffy\Anaconda3\lib\site-packages\torch\cuda\__init__.py", line 312, in get_device_properties
    raise AssertionError("Invalid device id")
AssertionError: Invalid device id

Is this a GPU configuration problem?
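
One common cause, sketched below as an assumption rather than a confirmed diagnosis: args.n_GPUs is larger than the number of GPUs CUDA can actually see, so range(args.n_GPUs) contains an invalid device id. Clamping the requested count before wrapping the model avoids the assertion.

import torch
import torch.nn as nn

def wrap_data_parallel(model, n_gpus_requested):
    # Clamp the requested GPU count to what is actually available so that
    # nn.DataParallel never receives an out-of-range device id.
    n_available = torch.cuda.device_count()
    if n_available == 0:
        return model  # CPU only, nothing to parallelize
    n_gpus = min(n_gpus_requested, n_available)
    return nn.DataParallel(model.cuda(), device_ids=list(range(n_gpus)))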

Experimental result images

Thank you for your work. Could you provide the original images of the test sets used in the result figures (Figure 5, Figure 7, Figure 8), the two kernel-width parameters compared in Figure 5, and the real-image dataset used in Section 4.4? Thank you very much; this would be a great help to my research.

About training the degradation representation model

Hello, I would like to train a model myself. For the network that learns the degradation representation, how should I modify the code? Why is k set to 32×256=8192, and what does the 256 stand for? Is it 256 patches cropped from each of the 32 degraded images? Thank you for your explanation.

Environment settings

Thanks for the great work. I had some trouble with my environment settings, but I could not find the environment requirements in your article or code. Could you please give a brief description of your setup (e.g., GPU memory)?
Looking forward to your reply. Thanks again for your work.

A question about the gradient and PSNR discussed in the supplementary material

Based on the relationship between gradient and PSNR described in your supplementary material, I selected several images and ran the experiment myself, using the model provided in your code that was trained only with isotropic Gaussian blur kernels. However, in my results the relationship between gradient and PSNR is not very clear. How is the average gradient computed? Is the result obtained from a single image or averaged over multiple images? Was a specific model chosen? I would like to know exactly how you measured it.
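
For reference, one common definition of an image's average gradient is sketched below; this is my assumption about the metric, not necessarily the definition used in the supplementary material.

import numpy as np

def average_gradient(img_gray):
    # Mean magnitude of the horizontal/vertical finite differences of a
    # grayscale image; larger values indicate more high-frequency detail.
    img = img_gray.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))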

Training only needs HR images?

Hello, thank you very much for your excellent work. Does the training data only need HR images? Can I delete self.dir_lr in df2k.py?
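
For context, a rough sketch of how an LR image can be synthesized on the fly from an HR patch (isotropic Gaussian blur, bicubic downsampling, optional additive noise). The parameter names and kernel size here are illustrative assumptions, not the repository's exact pipeline.

import numpy as np
import cv2

def synthesize_lr(hr, scale=2, sig=1.2, noise_std=0.0):
    # Blur with a 21x21 isotropic Gaussian of width sig, then downsample
    # bicubically by the scale factor and optionally add Gaussian noise.
    blurred = cv2.GaussianBlur(hr.astype(np.float32), (21, 21), sigmaX=sig, sigmaY=sig)
    h, w = blurred.shape[:2]
    lr = cv2.resize(blurred, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    if noise_std > 0:
        lr = lr + np.random.randn(*lr.shape).astype(np.float32) * noise_std
    return np.clip(lr, 0, 255)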

A question about training

The resolution of the images in DF2K is relatively high. Do they need to be cropped to a certain resolution before training?

supplemental tests available?

As mentioned in your paper, you "re-trained our DASR using their degradation model and provide the results in the supplemental material" for the comparisons with DAN/USRNet. Could you share this supplemental material? Thanks!

The large difference on the 4x isotropic model compared with the paper?

Thanks to the authors for this interesting work!
Can you provide your checkpoint for the 4x isotropic model? When we train DASR with the settings from the paper, its performance is even 0.8 dB lower than predictor+SRMDNF on Set5 under 4x iso! (We trained 5 DASR models in parallel and selected the best one.) So far, my experiments show no superiority for the unsupervised representation. Besides, the retrained predictor+SRMDNF is 1.7 dB higher than your paper reports for isotropic sigma=2.6 on Set14.
We hope the authors can provide the 4x isotropic checkpoint to help us find the issue.

About testing during training

Hello, thank you very much for your helpful work. Where is the test_every parameter in option.py used? When training the whole network, how can I run a test after every training epoch?

A question about GPU memory usage during training

Hello! I really like your work. When I train from scratch on two 1080 Ti cards, I find the GPU memory usage is only 759 MB and 691 MB, yet the program keeps printing the training log normally. Is this expected?

issue about resume

Hi.

There's a problem when resuming training.

I tried to restart training DASR using this:

python main.py --dir_data='my/path' \
               --model='blindsr' \
               --scale='4' \
               --blur_type='aniso_gaussian' \
               --noise=25.0 \
               --lambda_min=0.2 \
               --lambda_max=4.0 \
               --start_epoch=157 \
               --resume=157

The problem is that the contrastive loss gets larger.
I think the parameters of the encoder for the degradation representation are not being loaded.

[Epoch 158]	Learning rate: 1.00e-4
Epoch: [0158][6400/31050]	Loss [SR loss: 9.753 | contrastive loss: 0.892 ]	Time [ 145.0 s]
Epoch: [0158][12800/31050]	Loss [SR loss: 9.747 | contrastive loss: 0.920 ]	Time [ 143.7 s]
Epoch: [0158][19200/31050]	Loss [SR loss: 9.722 | contrastive loss: 0.918 ]	Time [ 144.1 s]
[Epoch 158]	Learning rate: 1.00e-4
Epoch: [0158][6400/31050]	Loss [SR loss: 9.598 | contrastive loss: 7.457 ]	Time [ 145.2 s]
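
For what it's worth, a hedged sketch of a resume helper that loads the full checkpoint (SR branch and degradation encoder together) plus the optimizer state; the file layout and dictionary keys ('model', 'optimizer', 'epoch') are assumptions, not the repository's actual checkpoint format.

import torch

def resume_checkpoint(model, optimizer, path):
    ckpt = torch.load(path, map_location='cpu')
    # strict=True makes missing degradation-encoder keys fail loudly instead
    # of silently leaving them randomly initialized.
    model.load_state_dict(ckpt['model'], strict=True)
    optimizer.load_state_dict(ckpt['optimizer'])
    return ckpt.get('epoch', 0) + 1  # epoch to resume from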

psnr results

Hi, sorry to bother you.

Are the PSNR results calculated with MATLAB in the YCbCr space, or with your calc_psnr in utility.py?

Thanks!

Pretrained x3 and x4 isotropic model?

Thanks for this interesting work!
Since only the x2 isotropic model was released in the "./experiment" folder, could you release the pretrained x3 and x4 isotropic models for testing?

Models

Will you release pretrained models?

PSNR and SSIM

Hi, author. Thank you for your excellent work. I'm a little confused about the code that calculates the PSNR and SSIM metrics. For PSNR, why is the diff multiplied by a conversion coefficient and the border shaved when benchmark is set to True? For SSIM, why not directly use the API from skimage? Looking forward to your response.
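
For reference, a sketch of the EDSR-style convention I believe the conversion coefficient and border shave implement (offered as an assumption): on benchmark sets, PSNR is computed on the Y channel of YCbCr and a border of scale pixels is cropped, matching the MATLAB evaluation used by most SR papers.

import numpy as np

def calc_psnr_y(sr, hr, scale, rgb_range=255):
    # Difference normalized to [0, 1], projected onto the Y channel with
    # ITU-R BT.601 weights, border of `scale` pixels shaved, then PSNR.
    diff = (sr.astype(np.float64) - hr.astype(np.float64)) / rgb_range
    coeffs = np.array([65.738, 129.057, 25.064]) / 256.0
    diff_y = (diff * coeffs.reshape(1, 1, 3)).sum(axis=2)
    valid = diff_y[scale:-scale, scale:-scale]
    return -10.0 * np.log10(np.mean(valid ** 2))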

About the training steps

Thank you for sharing your work! I have a question about the training steps:
When training the whole network, are the parameters of the degradation encoder fixed or not?
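
If one wanted to fix (freeze) the encoder while training the rest of the network, a generic sketch would look like the following; model.E is a hypothetical attribute name for the degradation encoder, and this illustrates the concept rather than stating what DASR actually does.

import torch

def freeze_encoder(model):
    # model.E: hypothetical handle to the degradation encoder submodule.
    for p in model.E.parameters():
        p.requires_grad = False
    # Rebuild the optimizer over the remaining trainable parameters only.
    return torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )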

Issue about resume

Hello! I tried to resume training today. The bash script is shown below:

python main.py --dir_data='./dataset' \
               --model='blindsr' \
               --scale='2' \
               --blur_type='iso_gaussian' \
               --noise=0.0 \
               --sig_min=0.2 \
               --sig_max=2.0 \
               --save='bldsr_repro2' \
               --resume=319

However, the output log shows that training restarted from epoch 1. I can't find where the bug is. Why does this happen?

Test configuration question

Hello, what is the relationship between the kernel width in Table 2 of the paper and lambda_1/lambda_2 in test.sh?
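
As an illustration of the usual parameterization (an assumption on my part, not a statement of the repository's exact convention), an anisotropic Gaussian kernel can be built from two eigenvalues of the covariance matrix and a rotation angle; in that convention an isotropic kernel simply has lambda_1 = lambda_2.

import numpy as np

def aniso_gaussian_kernel(lambda_1, lambda_2, theta, ksize=21):
    # lambda_1 / lambda_2 are treated here as the eigenvalues (variances) of
    # the kernel's covariance matrix along its principal axes, and theta is
    # the rotation of those axes.
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    cov = R @ np.diag([lambda_1, lambda_2]) @ R.T
    inv_cov = np.linalg.inv(cov)
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    xy = np.stack([xx, yy], axis=-1)[..., None]                 # (k, k, 2, 1)
    expo = -0.5 * (xy.transpose(0, 1, 3, 2) @ inv_cov @ xy)[..., 0, 0]
    kernel = np.exp(expo)
    return kernel / kernel.sum()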

About the training data

I ran the code you provided directly. Do the HR and LR images both need to be supplied by me? I only put the HR images in place, and the *.pt files were generated directly.

the latest torch version

Hi, sorry to bother you.

How can I adapt the code to the latest torch version?

For torch==1.7.1, I think _DataLoaderIter is the problem.

Question about the preparation for the training set

Thanks for the impressive work!
To prepare the training set, I downloaded the 2650 Flickr2K HR images and 900 DIV2K_train_HR images and copied them together into the /traing_path/HR folder. However, when I change the path and run main.sh, an error occurs at line 65 of multiscalesrdata.py: "self.repeat = args.test_every // (len(self.images_hr) // args.batch_size)" raises "ZeroDivisionError: integer division or modulo by zero".
Is there anything wrong with the training set? Could you please offer some advice? Thanks a lot!
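
A guess at the cause, with a sketch of a guard (an assumption, not a confirmed fix): the failing line divides by len(self.images_hr) // args.batch_size, which is zero when no HR images are found or fewer than one batch is found, typically because --dir_data does not point at the expected HR folder layout.

def compute_repeat(n_images, batch_size, test_every):
    # Guard against the integer division by zero seen above: it fires when
    # fewer images than one batch are found (usually a wrong HR path).
    if n_images == 0:
        raise RuntimeError('No HR images found -- check --dir_data and the HR folder layout')
    return max(test_every // max(n_images // batch_size, 1), 1)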

Difference in test PSNR between my own trained model and the released model (x2 iso)

(two screenshots of test results)
The first screenshot shows my own test results for the x2 isotropic Gaussian model; the configuration is listed below. Why is there a gap from the x2 iso results reported in your paper?

python main.py --dir_data='/home/bobo/F/SuperResolution/DASRend' \
               --model='blindsr' \
               --scale='2' \
               --blur_type='iso_gaussian' \
               --noise=0.0 \
               --sig_min=0.2 \
               --sig_max=2.0

How to test on a real-world image?

Thank you for sharing your excellent work!
In your test code, it seems we only need to provide benchmark HR test images, but how can I test on a real-world image that doesn't have a label?
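
A minimal sketch of running a loaded network on a single unlabeled image (everything repository-specific is a placeholder here; model stands for the already-constructed DASR network and is passed in rather than loaded):

import numpy as np
import cv2
import torch

def sr_single_image(model, lr_path, out_path):
    # Read the real-world LR image, run it through the network without any
    # ground-truth HR label, and write the super-resolved result.
    lr = cv2.cvtColor(cv2.imread(lr_path), cv2.COLOR_BGR2RGB)
    lr_t = torch.from_numpy(lr.astype(np.float32)).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        sr_t = model(lr_t)
    sr = sr_t.squeeze(0).permute(1, 2, 0).clamp(0, 255).byte().cpu().numpy()
    cv2.imwrite(out_path, cv2.cvtColor(sr, cv2.COLOR_RGB2BGR))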

Some questions about training

Thank you for your work. I ran into a few problems during training and would like to ask:
1. When using different scale factors, should sig_min/sig_max be set to different ranges, e.g. [0.2, 2.0], [0.2, 3.0] and [0.2, 4.0] for ×2/×3/×4 SR?
2. When training with the noise degradation, should --noise be set to 25 or to some other value?
3. For the blur kernels shown in Table 3 (screenshot of the table), what values of lambda_1, lambda_2 and theta (which control the kernel's width, height and rotation angle) did you use for each kernel?
Thank you very much!
