
cbdnet's People

Contributors

guoshi28


cbdnet's Issues

Noise level map

Hello,
To test the effectiveness of your method, I applied it to noise level estimation on CT images. Can your noise level map be used to assess the noise in CT images? With your noise level estimation method I currently seem unable to generate a valid noise level map. Could you share a visualization of your noise level maps? That would help me a lot. Thank you!

train

How did you get your trained model? I used the training code of DnCNN on my own dataset and ran into some trouble during training. How can I solve it?

(attached screenshot)

UNet Architecture

Hi Guo Shi,

Can you please explain the UNet used in CBDNet in a bit more detail? A few things are unclear from the description in the paper.

For example:

  1. The original UNet paper uses max-pooling to reduce the spatial size, but in CBDNet this appears to be done with strided convolutions.
  2. It is also unclear at which levels the features are concatenated -- shouldn't that be shown by increasing the width of the green and yellow rectangles in Figure 2 of the paper, as in Fig. 1 of the original UNet paper cited in CBDNet?
  3. Are the conv layers in the UNet padded? Otherwise the resolutions would not match.

Since the model is in MatConvNet and not everyone has access to MATLAB, it would help to have these details documented.
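Not the author, but the size arithmetic behind points 1 and 3 can be checked with the standard convolution output formula; this is my own sketch, not the official architecture:

```python
def conv_out(n, k=3, s=1, p=1):
    # standard conv output-size formula: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

print(conv_out(128, s=1))  # 128: a padded 3x3 conv preserves resolution
print(conv_out(128, s=2))  # 64: a stride-2 conv halves it, replacing max-pooling
```

If the answers are "strided convs" and "padded", the resolutions on both sides of each skip connection match and concatenation is well-defined.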

Thank you for your time!
Touqeer

question about the parameter γ

Hey, I noticed that you mention in your paper "allowing the user to adjust γ".
But when I went through the code, I couldn't find the parameter γ and don't know how to use it. Is it part of the training network, and is it set when training the network?
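For context, Section 4.4 of the paper describes γ as an interactive, test-time adjustment: the estimated noise level map is multiplied by γ before being fed to the denoiser, so it is not a trained parameter. A minimal numpy sketch of that interpretation (sigma_hat is random stand-in data, not a real CNN_E output):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for the noise level map estimated by CNN_E
sigma_hat = rng.random((128, 128), dtype=np.float32)

gamma = 1.5                    # >1 denoises more aggressively, <1 less
adjusted = gamma * sigma_hat   # this scaled map is passed to CNN_D
```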

Question about CRF_Map and ICRF_Map

I suspect CRF_Map and ICRF_Map are somehow reversed ("写反了", i.e. swapped), either in the comments or in the caller AddNoiseMosai.

  • Detail:
    The CRF_Map comment says it takes an image (sRGB?) and returns L, while the ICRF_Map comment says it takes L and returns an image.
    But AddNoiseMosai, as I understand it, takes an image and returns a noised image, yet it calls ICRF_Map first and CRF_Map last.

So I suppose the comments are wrong.
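To illustrate the naming question, here is a sketch of the ordering described above (my own reading, with the mosaic/demosaic and JPEG steps omitted): the inverse CRF maps sRGB to linear irradiance L, noise is injected in linear space, and the CRF maps back to sRGB.

```python
def add_noise_pipeline(srgb, icrf, crf, inject_noise):
    # hypothetical stand-in for AddNoiseMosai's ordering, not the real code
    L = icrf(srgb)             # sRGB -> linear (inverse CRF)
    L_noisy = inject_noise(L)  # noise is signal-dependent in linear space
    return crf(L_noisy)        # linear -> sRGB (CRF)

# stand-in gamma curves and a no-op noise injector, for illustration only
out = add_noise_pipeline(0.5, lambda x: x ** 2.2,
                         lambda x: x ** (1 / 2.2), lambda x: x)
```

Under this reading the call order in AddNoiseMosai is consistent, and only the docstrings of the two functions are swapped.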

About low speed of (inverse) gamma mapping (CRF_Map and ICRF_Map)

The functions CRF_Map and ICRF_Map are very slow in Python (each takes about 8 seconds on a 512x512 image) and are the efficiency bottleneck.

I wonder where the two .mat files used in CRF_Map and ICRF_Map come from. Are they generated by a formula or from collected data?

In my experience, a single lookup table is enough for gamma mapping and should be very fast. Could we combine the two lookup tables in the .mat files to speed up the algorithm?
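For what it's worth, a precomputed LUT applied by integer indexing is indeed very fast; here is a sketch with stand-in gamma curves (the real tables come from the .mat files, so treat these curves as placeholders):

```python
import numpy as np

n = 1024
crf = np.linspace(0.0, 1.0, n) ** (1 / 2.2)   # stand-in forward curve
icrf = np.linspace(0.0, 1.0, n) ** 2.2        # stand-in inverse curve

def apply_lut(img, lut):
    # quantize intensities to table indices; O(1) per pixel, no per-pixel search
    idx = np.clip((img * (n - 1)).round().astype(np.int64), 0, n - 1)
    return lut[idx]

img = np.random.default_rng(0).random((512, 512), dtype=np.float32)
linear = apply_lut(img, icrf)      # sRGB-like -> linear
restored = apply_lut(linear, crf)  # linear -> sRGB-like
```

The two tables can also be composed offline into one table (index one through the other), which is one answer to the "combine" question, at the cost of some quantization error near zero.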

Python ISP result differs from the MATLAB code

Hey, thanks for your Python implementation of the ISP code.

I tried the Python ISP code and found the result is really different from the MATLAB result.

I set the icrf_index and the pattern_index to the same values.

Is the Python implementation the same as the MATLAB one?

Dataset

Could you share the dataset used to train your model? I am having some trouble reading the MATLAB code. Has anyone converted it to TensorFlow?

Question about CRF

Hi, it's amazing work!
I wonder, in your noise model (Eq. 3), did you try training the model with and without the CRF? If we train CBDNet without knowledge of the CRF, how large a gap appears in real-photo denoising?
I don't see any comparison among Eqs. 1, 2 and 3.

Receptive field

Hello. What is the receptive field of the denoising network (CNN_D)? My calculation gives 83. I used the information in the network illustration and the fact that all filters are 3x3 (including the strided and transpose convs). Is this correct? I had assumed the receptive field would be as large as the training input (128x128).
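Not the author, but the receptive field of a conv stack can be computed with the standard recurrence r ← r + (k-1)·j, j ← j·s; a sketch for the downsampling path (transpose convolutions are not handled by this simple formula, so take any exact figure with that caveat):

```python
def receptive_field(layers):
    # layers: list of (kernel, stride) pairs along the downsampling path
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump   # growth scaled by the current input-pixel jump
        jump *= s
    return rf

print(receptive_field([(3, 1), (3, 1)]))          # 5: two plain 3x3 convs
print(receptive_field([(3, 2), (3, 1), (3, 1)]))  # 11: stride-2 doubles later growth
```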

What is the meaning of "x" in loss function L_rec?

Hi Guo Shi,
In your paper I read:
"For a batch of real images, due to the unavailability of ground-truth noise level map, only L_rec and L_TV are considered in training."
For real images we have no clean image, so what is the meaning of "x" in the loss function L_rec?
Thank you for your answer!

About Training data

Hi, thanks for sharing the details, great work.

Will you release the training data you used, to make reproduction easier?

Thanks a lot!

Training on other datasets

Hello,
I am currently looking for a network model that handles real noise. Your pretrained model works very well on my dataset (medical imaging), with excellent denoising results, but there is some loss of low-level detail, so I would like to use the MATLAB training code you shared to train on my own dataset.
After reading your paper, I have a few questions:
1) The paper says the noise estimation network takes a noisy observation y and produces the noise level map σ̂(y). Can I understand this as: the noisy observation y is the original image containing real noise, and the noise level map σ̂(y) is the distribution of that noise?
2) Following question 1), CNN_D takes the output of CNN_E as input and finally outputs the denoised image, so does the whole CNN_E + CNN_D pipeline need no clean images in the training set?

The results in Table 2 are strange

In Table 2, the PSNR and SSIM results on the 15 cropped images provided by Nam et al. (CVPR 2016) are not consistent with those reported in the papers of Nam et al. (CVPR 2016), MCWNNM, TWSC, and the NI method (Neat Image software). How did you compute the PSNR and SSIM for Table 2?

Here are my PSNR results:

NI & CC & MCWNNM & TWSC & DnCNN+ & FFDNet+ & CBDNet
35.68 & 38.37 & 41.13 & 40.76 & 38.02 & 39.35 & 36.68
34.03 & 35.37 & 37.28 & 36.02 & 35.87 & 36.99 & 35.58
32.63 & 34.91 & 36.52 & 34.99 & 35.51 & 36.50 & 35.27
31.78 & 34.98 & 35.53 & 35.32 & 34.75 & 34.96 & 34.01
35.16 & 35.95 & 37.02 & 37.10 & 35.28 & 36.70 & 35.19
39.98 & 41.15 & 39.56 & 40.90 & 37.43 & 40.94 & 39.80
34.84 & 37.99 & 39.26 & 39.23 & 37.63 & 38.62 & 38.03
38.42 & 40.36 & 41.43 & 41.90 & 38.79 & 41.45 & 40.40
35.79 & 38.30 & 39.55 & 39.06 & 37.07 & 38.76 & 36.86
38.36 & 39.01 & 38.91 & 40.03 & 35.45 & 40.09 & 38.75
35.53 & 36.75 & 37.41 & 36.89 & 35.43 & 37.57 & 36.52
40.05 & 39.06 & 39.39 & 41.49 & 34.98 & 41.10 & 38.42
34.08 & 34.61 & 34.80 & 35.47 & 31.12 & 34.11 & 34.13
32.13 & 33.21 & 33.95 & 34.05 & 31.93 & 33.64 & 33.45
31.52 & 33.22 & 33.94 & 33.88 & 31.79 & 33.68 & 33.45
Average
35.33 & 36.88 & 37.71 & 37.81 & 35.40 & 37.63 & 36.44

A question about the loss

Hello, I noticed you used the L2 norm instead of the Frobenius norm in your paper.
(screenshots of the loss equations from the paper)
In the loss function, the variables x and y appear to be matrices, since I don't see any sum operation.
Notably, for vectors the L2 norm equals the Frobenius norm, but for matrices the L2 norm is totally different from the Frobenius norm.
I want to confirm whether you used the matrix L2 norm, because papers in image denoising usually use the Frobenius norm rather than the L2 norm.
Can you help me? Thanks a lot!

About Perceptual loss

May I ask about the influence of the perceptual loss used in your model? Specifically, how does it change PSNR and SSIM?

Poisson-Gaussian noise implementation

Hi, I notice that the Poisson-Gaussian noise parameters are randomly sampled. How did you determine the 0.16 and 0.06 upper bounds for the sigmas? I also read the implementation in the Unprocessing paper; they use a much smaller range (lower noise level).
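For reference, with those bounds the heteroscedastic Gaussian form of the noise model can be sketched as below; here sigma_s and sigma_c are fixed at the stated values rather than sampled, and L is random stand-in irradiance:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.random((64, 64), dtype=np.float32)  # stand-in linear irradiance in [0, 1]

sigma_s, sigma_c = 0.16, 0.06               # the bounds asked about
var = L * sigma_s ** 2 + sigma_c ** 2       # signal-dependent + stationary variance
noisy = L + rng.normal(0.0, np.sqrt(var))   # n(L) ~ N(0, L*sigma_s^2 + sigma_c^2)
```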

Training image patches

In the paper you use several datasets (BSD500, Waterloo, etc.) with various image sizes, and 128x128 patches are used for training. Could you give information about the number of patches extracted (as in the DnCNN paper)? Basically I want to know the dimensions of the training set: ?x128x128x3. Also, did you use any data augmentation methods?
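Not the author, but a common DnCNN-style scheme, which I am assuming here without confirmation, is random crops plus flip/rotation augmentation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patch(img, size=128):
    # random crop, random horizontal flip, random 90-degree rotation
    h, w = img.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    patch = img[y:y + size, x:x + size]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]
    return np.rot90(patch, k=int(rng.integers(0, 4)))

img = rng.random((321, 481, 3), dtype=np.float32)  # a BSD500-sized image
patch = sample_patch(img)                          # (128, 128, 3)
```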

There is an error when I run Test_Patchs.m. How do I solve this problem?

Test_Patches
Warning: Name is nonexistent or not a directory: utilities
In path (line 109)
In addpath (line 86)
In Test_Patches (line 2)
Error using dagnn.Layer/load (line 200)
No property ignoreAverage for a layer of type dagnn.Loss.
Error in dagnn.DagNN.loadobj (line 28)
block.load(struct(s.layers(l).block)) ;
Error in Test_Patches (line 22)
net = dagnn.DagNN.loadobj(net) ;

Question about blur effect and noisy level map

Hi, I tried to reproduce the whole training procedure according to your paper.
I set the level map to \sigma_c + \sigma_s * L, converted it to the 0~1 range, and simply concatenated the map with the RGB image as the input of the blind denoising net.

Training works, and I can get good results on SIDD etc. when the noise is not very high.
However, the denoised image is a little blurred; did you see this in your original work?
Also, as the noise gets higher the result gets worse, so I tried giving the noise level map a coefficient (x2) as mentioned in Section 4.4 of the paper, but it changed nothing. Even x2 gives barely any change, which is stranger still. If the noise level map matters, shouldn't a zero map lead to a very bad denoising result (denoised ≈ input)?

In fact, I've read the paper many times but couldn't find how you feed the noise map as input. Am I right that it is concatenated with the RGB image, or did I miss something? Also, did you run any experiments on this, e.g. training a net without using the noise level map as supervision (setting λ_asymm = 0 and λ_TV = 0) and comparing the PSNR?
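For comparison, here is the input layout I assume when reproducing this, matching the concatenation the question describes (stand-in data, not confirmed against the official code):

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.random((128, 128, 3), dtype=np.float32)        # noisy image
sigma_map = rng.random((128, 128, 1), dtype=np.float32)  # estimated level map

gamma = 2.0  # the x2 coefficient from Sec. 4.4, applied at test time
x = np.concatenate([rgb, gamma * sigma_map], axis=-1)    # 4-channel input
```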

Noise estimation map

Hi, is it possible to show one or two output images of the noise estimation network, for both synthetic and real-world images? Thanks.

can not run the test demo on centos7

Hi,
I ran the test_patches and test_fullimage demos on CentOS 7 with MatConvNet and cuDNN. The error is "No property ignoreAverage for a layer of type dagnn.Loss."
I have modified the model paths. Is there anything I need to do before running the test files?

test dataset

Hello, thank you for sharing. Could I get all 1000 patches of the DND dataset from you?

Execution time

hi @GuoShi28
Your paper says CBDNet takes about 0.4 s to process a 512 × 512 image.
On what platform was this measured?
Have you compared the execution time of CBDNet and BM3D?
Thanks a lot.

question about JPG image

I've read the Q&A about the JPG image issue, but I don't quite understand the reason: after loading the images into memory, they are just arrays of numbers, so what is the difference between a JPG's array (matrix) and a PNG's array?

I did a test converting the JPG back to PNG and then feeding it to CBDNet, and the result is much better. I wonder if this is a matplotlib.pyplot/Pillow issue; maybe it cannot read the JPG data correctly?

BTW, I am using this code since it provides an easy-to-use interface: https://gitlab.com/Yggdrasyll/cbdnet-denoiser

Here are the test results.
Original:
crop1
Result processed directly from the original JPG image:
FromJPG
JPG converted to PNG with Mac's Preview, then fed to CBDNet:
fromJPGsPNG

Log Files from Training

Thank you for your awesome code!

I am hoping you might open-source the log files from training: perhaps the training and validation loss as a function of epoch (and/or batch), with an estimate of the runtime?

Questions regarding estimator training

Hi Guo Shi,

I am trying to train the estimator network alone in TensorFlow and have a few questions about training settings that I could not find answers to in the paper or the GitHub page; would you please answer them?

  1. The paper mentions a patch size of 128x128 -- are these patches cropped from the images at random, or on a grid with a fixed stride from each training image?
  2. How many batches are there in one epoch?
  3. Are the 1600 images from each of the Waterloo and FiveK datasets chosen at random before training, or are they different in each epoch?
  4. The paper says it takes around 3 days to train the full network; do you have an estimate for the estimator network, since it seems to be a small fraction of the full CBDNet?

Hoping for a prompt response.
Thank you!

load model failure

I have downloaded your 'CBDNet.mat' file. However, I get an error mentioning "load -ASCII", because the model file is not recognized as a binary MAT-file.

some artifacts in generate ground truth data

I tried using ./SomeISP_operator_python/ISP_implement.py to generate synthetic noisy images and ground-truth (gt) images, but in many images I see artifacts like this:

cbd_synthetic_issue_gt

The raw sRGB noise-free image looks like this:

cbd_synthetic_issue_clear

I wonder whether these artifacts are expected in the synthetic ISP process, and why they happen.

Ground Truth for Noise level

Hi,

Thanks a lot for sharing your test code!

I am trying to re-train using the approach you describe in the paper, but I cannot work out from the paper or the code how the ground truth for the noise level map was generated for synthetic data.

Can you please explain how you create the ground truth for the noise level map? For example, does each pixel in the noise level map contain the standard deviation in the RGB domain, or something else?

If you can share the code that would be great as well.

Thanks,
Tejas
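Not the author, but one self-consistent construction (an assumption based on the paper's noise model, not confirmed code): when injecting heteroscedastic Gaussian noise, the per-pixel standard deviation is known analytically and can be stored directly as the ground-truth map.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.random((64, 64, 3), dtype=np.float32)         # stand-in clean image

sigma_s, sigma_c = 0.12, 0.03                             # example sampled params
sigma_map = np.sqrt(clean * sigma_s ** 2 + sigma_c ** 2)  # per-pixel std = GT map
noisy = clean + rng.normal(size=clean.shape) * sigma_map
```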

Update about license?

Sir, your model is amazing and the results are crisp. Can we use it for commercial purposes?

Why UNet and not continuing Strategies of FFDNet?

Hi,
Great work, and a very good effort towards actual denoising instead of celebrating AWGN denoising as many others do :)
I was curious why you did not continue with the strategy used in FFDNet, where the image is downscaled into four quarter-size sub-images and concatenated with the noise map.

Why not just replace the noise map (based on the AWGN sigma) with the noise map generated by the noise estimator network? Did you give the FFDNet architecture a try, and if so, what was the reason to deviate from it towards the UNet?
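For reference, the FFDNet input packing mentioned above looks roughly like this (my own sketch of the idea, not FFDNet's actual code): the four 2x2 sampling phases of the image become four half-resolution channels, plus a downsampled noise map.

```python
import numpy as np

def ffdnet_pack(img, sigma_map):
    # the four 2x2 sampling phases give four quarter-size sub-images
    subs = [img[i::2, j::2] for i in (0, 1) for j in (0, 1)]
    subs.append(sigma_map[::2, ::2])   # matching-size noise map channel
    return np.stack(subs, axis=-1)

img = np.arange(16.0).reshape(4, 4)
sigma = np.full((4, 4), 0.1)
packed = ffdnet_pack(img, sigma)       # shape (2, 2, 5)
```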

Thank you!
