glassywu / aecr-net
Contrastive Learning for Compact Single Image Dehazing, CVPR2021
Can you provide the supplementary materials you mentioned in the paper? I'd like to know more details about this impressive work!
Is there any way I can get a Google Drive link for the pretrained weights?
Hello, could you please provide the complete code needed for training? Thanks.
Hello, when reproducing your work I trained with the loss weights given in the paper and found that the contrastive loss is far larger than the L1 loss, and the images show severe distortion and artifacts. Regarding the contrastive loss weight, have you tried other values, or does this improve later in training?
Hello, thank you for your great work!
I'd like to know when the PyTorch version of your code will be released.
What is the code for the deconv part?
Thank you for your dedication. Will you publish train.py?
I really enjoyed this article. When will the PyTorch code be released?
When will you release the pre-trained models?
What is the fastdeconv package in the model? Is it part of torch?
In your CR.py file, you select positive and negative samples equal in number to the batch size. However, how do you select one positive sample and multiple negative samples and compute the CR loss as described in your paper? Can you explain or update the code?
Thanks for your reply.
Could the author upload the training files?
Nice work and congrats. When will you release the training code?
Could you please upload your test.py file for qualitative comparison?
When I resume the released model, there is an error: "Unexpected key(s) in state_dict: "mix4.w", "mix5.w"". mix4 and mix5 don't appear in the network structure, so the released model does not match the released code. Am I right? Thanks.
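As a generic PyTorch workaround (not the authors' fix), a checkpoint with extra keys like these can still be loaded by either passing `strict=False` or filtering the state dict down to the model's own keys. The network below is a trivial stand-in for AECRNet, and the `mix4.w`/`mix5.w` keys are reproduced from the error message:

```python
import torch
import torch.nn as nn

# Trivial stand-in for the released network; the real model is AECRNet.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1))

# Mimic a checkpoint carrying the extra keys from the error message.
state = dict(model.state_dict())
state["mix4.w"] = torch.zeros(1)
state["mix5.w"] = torch.zeros(1)

# Option 1: ignore keys the model does not define.
missing, unexpected = model.load_state_dict(state, strict=False)
print(unexpected)  # ['mix4.w', 'mix5.w']

# Option 2: filter the checkpoint down to the model's own keys,
# then a strict load succeeds.
filtered = {k: v for k, v in state.items() if k in model.state_dict()}
model.load_state_dict(filtered)
```

Both options silently discard the `mix4.w`/`mix5.w` weights, so if those modules matter for quality, only a matching model definition will truly reproduce the results.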
How can I get the h5 file?
Your paper says the AECR-Net model has 2.61M parameters, but the model I trained with the network you released has 10.5M parameters, far more than the paper describes. Could you provide your training files and the pretrained model?
Hey, could you please give the specific training parameters for the different datasets? For example, you mention 100 epochs in the paper, while the released code defaults to 500000 iterations. Which is the final choice? Is there any difference between NH-HAZE and SOTS, etc.? Thank you very much!
Great work!
I have some questions about your paper:
1. How do you pair one positive sample with multiple negative samples to compute the contrastive loss?
2. How are the multiple negative samples selected?
Looking forward to your reply.
Hi! Using a contrastive loss is indeed an impressive idea! However, I am confused about how this loss is calculated and updated. Do you flatten the model output and compare it (with the contrastive loss) against the flattened target output? And if we also use an L1 loss, how do we update the model weights with both the L1 and contrastive loss functions? I hope the PyTorch code will be released soon; it might give some more clarity.
Thanks!
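One plausible reading of the paper (a sketch, not the authors' implementation): the restored image is the anchor, the clear ground truth is the positive, hazy images are the negatives, and distances are taken between feature maps rather than flattened pixels. The paper uses fixed VGG-19 layers with per-layer weights; a tiny frozen conv stands in for that here, and the 0.1 weight on the contrastive term is an assumed value:

```python
import torch
import torch.nn as nn

# Frozen feature extractor; the paper uses fixed VGG-19 layers instead.
feat = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
for p in feat.parameters():
    p.requires_grad_(False)

l1 = nn.L1Loss()

def cr_loss(anchor, positive, negatives):
    """Contrastive regularization: pull the restored image (anchor) toward
    the clear image (positive) and push it away from hazy images (negatives)
    in feature space, via a ratio of L1 feature distances."""
    fa, fp = feat(anchor), feat(positive)
    d_pos = l1(fa, fp)
    d_neg = sum(l1(fa, feat(n)) for n in negatives)
    return d_pos / (d_neg + 1e-7)

out = torch.rand(1, 3, 16, 16, requires_grad=True)   # network output (anchor)
gt = torch.rand(1, 3, 16, 16)                        # clear image (positive)
hazy = [torch.rand(1, 3, 16, 16) for _ in range(4)]  # negatives

# Total objective: reconstruction L1 plus a weighted contrastive term;
# one backward pass sends gradients from both terms to the network.
loss = l1(out, gt) + 0.1 * cr_loss(out, gt, hazy)
loss.backward()
```

Because both terms sit in a single scalar loss, autograd handles the combined update automatically; there is no need to update the weights for each loss separately.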
The open-source code seems to be problematic, e.g.:
x1 = self.block(x_down3)
x2 = self.block(x1)
x3 = self.block(x2)
x4 = self.block(x3)
x5 = self.block(x4)
x6 = self.block(x5)
Does this mean that the network's FABlock shares weights? If so, should there be 1 block rather than the 6 stated in the paper? After all, the FFA-Net used for comparison does not share weights.
We only changed the organisation of the dataset (from h5 format to feeding jpg images directly) and used the DCNs from MMCV-Full instead of those provided by the authors, yet our results reach only 60% of the paper's.
With shared weights, the parameter count matches the paper but the results are extremely poor; without shared weights, the parameter count increases by a factor of 4.
What is the problem, please? Looking forward to your answer.
In line 98 of AECRNet.py, the author defines the DehazeBlock only once, so all DehazeBlocks share parameters. This differs from the description in the paper.
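If six independent blocks are intended, one common fix is to instantiate the block six times and hold the copies in an `nn.ModuleList` instead of reusing a single instance. A sketch under that assumption (the `DehazeBlock` below is a trivial placeholder, not the paper's block):

```python
import torch
import torch.nn as nn

class DehazeBlock(nn.Module):
    """Trivial placeholder for the paper's FA/Dehaze block."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x)) + x

# Shared weights: one instance applied six times (what the released code does).
shared = DehazeBlock(8)

# Independent weights: six separate instances, as the paper's "6 blocks" suggests.
blocks = nn.ModuleList(DehazeBlock(8) for _ in range(6))

def forward_independent(x):
    for block in blocks:
        x = block(x)
    return x

x = torch.rand(1, 8, 16, 16)
y = forward_independent(x)

# The two designs differ by a factor of 6 in parameters for this stage.
n_shared = sum(p.numel() for p in shared.parameters())
n_indep = sum(p.numel() for p in blocks.parameters())
```

This would also explain the parameter-count discrepancy reported above: replacing one shared block with six independent ones multiplies that stage's parameters sixfold.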
Hello author! The deconv module and the FastDeconv function in the code cannot be found in any standard library. What are they?
Thanks for open-sourcing this work. I learned from the authors of the MindSpore version that the reported experimental results were obtained by training the PyTorch version. I would like to use your code to run a comparison experiment. Sorry for the abrupt request; I hope you can agree.
Could you provide other links to pre-trained models apart from Baidu cloud? I am not able to download from Baidu cloud. Or could you share the models in GitHub directly?
Could you please upload your test.py file for qualitative comparison?
Please provide the source code. Thank you.
Thank you for your dedication. When will the code be released?
Thank you for your dedication. Will you publish train.py?
Hello author, could you provide the h5 files of the training dataset?