csjcai / SICE
Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images (TIP 2018)
In Section V, Part B, the last line (corresponding to the Table V results) reads:
Note that for all the methods, we set different model parameters for under-exposed and over-exposed images.
I downloaded your provided dataset, but there is only a brief .txt file describing the test phase.
Could you give a clearer description of how to split the under-exposed and over-exposed images?
Or, more directly, could you provide the two filename lists as .txt files?
Hi,
I noticed that in your paper, you split all 589 sequences randomly into training, validation, and test sets with a ratio of 7:1:2. However, I can only find a .txt file containing 58 image numbers for testing. Could you please provide the test-set and validation image numbers used in your experiments?
Many thanks for your work!
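As a rough illustration of the 7:1:2 split mentioned above, the sequences could be partitioned like this (a minimal sketch; the seed and the resulting partition are hypothetical, since only the ratio and the count of 589 sequences come from the paper):

```python
import random

def split_sequences(n_sequences=589, ratios=(0.7, 0.1, 0.2), seed=0):
    """Randomly partition sequence indices into train/val/test.

    NOTE: the seed and resulting partition are hypothetical; only
    the 7:1:2 ratio is taken from the paper.
    """
    ids = list(range(1, n_sequences + 1))
    random.Random(seed).shuffle(ids)
    n_train = round(n_sequences * ratios[0])  # 412
    n_val = round(n_sequences * ratios[1])    # 59
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])            # remaining 118 sequences
```

With 589 sequences this yields 412/59/118, whereas the released index file lists only 58 test image numbers, which is the discrepancy the question above points out.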
Hello, I'd like to ask about the dataset-processing part. I have downloaded the 589 image sets, but within each subfolder it is not specified which images are under-exposed and which are over-exposed. Could you briefly explain this? Thanks!
Hi,
In your paper, 720,128 patches of size 129×129 are cropped.
Do you crop these patches from all the differently exposed images?
I tried cropping patches from only the EV=-1 image of each sequence and got about 350,000 patches.
Can you give more detail about this part?
Thanks a lot!
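For what it's worth, dense patch cropping can be sketched as below. The stride is a guess (the paper does not state it), and the total patch count depends on the stride, on which exposures are used, and on any augmentation:

```python
import numpy as np

def crop_patches(img, patch=129, stride=64):
    """Densely crop patch×patch windows with the given stride.

    NOTE: stride=64 is an assumption for illustration; the paper
    only states the patch size (129×129) and total count.
    """
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch])
    return patches

# e.g. a 600×900 image with stride 64 yields 8 × 13 = 104 patches
demo = np.zeros((600, 900, 3), dtype=np.uint8)
print(len(crop_patches(demo)))
```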
Hi, this is very good work.
But I have a question: are some of the scenes generated by gamma transformations instead of actual photography?
For example, Part1: 1-10.
I don't understand why that would be the case.
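For reference, a gamma transform of the kind asked about can be sketched as follows. This is illustrative only; whether any SICE scenes were actually synthesized this way, rather than captured, is exactly the open question:

```python
import numpy as np

def gamma_adjust(img, gamma):
    """Apply a simple gamma curve to a uint8 image in [0, 255].

    Illustrative only: not a confirmed part of the SICE pipeline.
    """
    x = img.astype(np.float32) / 255.0
    return np.clip(np.round((x ** gamma) * 255.0), 0, 255).astype(np.uint8)

# gamma < 1 brightens (mimics over-exposure); gamma > 1 darkens
```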
Hi, I would like to ask which fully aligned low-light input image is paired with the label during training. Looking at several sequences, I found that the low-light image number corresponding to the label differs from sequence to sequence. Since it is difficult to align them one by one, could you clarify this? Thank you!
You really did a great job; it is amazing. I want to run the demo but ran into a problem.
When I use pycaffe to load the model, I get: Message type "caffe.LayerParameter" has no field named "dtow_param".
Can you help me?
I noticed that, in your paper, the CNN models were trained with TensorFlow.
So, would you be willing to share the TensorFlow implementation of your project?
: )
Thanks for your kindness in making such a dataset. Could you please release the remaining parts of the dataset? Thanks.
Thanks for your fancy work.
I want to use your method to enhance my night photos.
Because you don't provide a pretrained model, I have been trying to reimplement the paper myself. I found the WLS-filter code that splits a grayscale image into low-frequency and high-frequency components, but what WLS-filter parameters did you use? I tried several combinations and did not get good results.
Q1: Can you provide the WLS-filter parameters, or the processed images?
Q2: Also, the original WLS filter operates on grayscale images; how do you process RGB images?
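On Q2, one common (assumed, not confirmed) recipe is to filter only the luminance channel and treat the residual as detail, carrying chroma through unchanged. The sketch below uses a plain box blur as a stand-in smoother; the paper uses the WLS filter (Farbman et al. 2008), whose commonly cited defaults are lambda≈1.0 and alpha≈1.2, but those values are an assumption here, not the authors' confirmed settings:

```python
import numpy as np

def base_detail_split(rgb, smooth):
    """Split an RGB image into a base (low-frequency) luminance layer
    and a detail (high-frequency) residual.

    `smooth` is any smoothing operator; a real WLS solver would be
    dropped in here in place of the box blur below.
    """
    x = rgb.astype(np.float32) / 255.0
    lum = 0.299 * x[..., 0] + 0.587 * x[..., 1] + 0.114 * x[..., 2]
    base = smooth(lum)
    detail = lum - base
    return base, detail

def box_blur(img, k=5):
    """Stand-in mean filter (NOT edge-preserving, unlike WLS)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```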
Hi,
I notice this repository hasn't been updated in a long time. It would be nice to have access to a trained model and some basic code to reproduce the results in the paper. Do you intend to provide these in the foreseeable future?
Thanks for your fancy work. Can you explain how to run the Caffe code?
Thank you for your work, and I'm going to cite this. (I used the dataset actually.)
But when I wanted to calculate the number of parameters in the model, I checked the .prototxt file, as the README says:
Network structure: *.prototxt (to view the network structure, use this link)
However, this structure is quite different from the one in the paper (e.g., kernel size, stride, and number of convolutional layers; see below).
By the way, since the model was originally implemented in TensorFlow, is there any plan to release the TensorFlow version? The Caffe model requires a lot of memory (~64 GB).
Stochastic gradient descent (SGD) with a batch size of 80 patches is used in training. We implement our model using the TensorFlow package. The momentum parameter and weight decay parameter are set to 0.9 and 0.0001, respectively.
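The quoted hyper-parameters map onto a single SGD-with-momentum update like this (a plain-Python sketch, not the authors' code; the learning rate of 0.01 is a placeholder, since the quote does not give it):

```python
def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9, weight_decay=1e-4):
    """One SGD update with momentum and L2 weight decay.

    momentum=0.9 and weight_decay=1e-4 match the quote above;
    lr=0.01 is a placeholder value.
    """
    g = grad + weight_decay * w        # L2 weight decay folded into the gradient
    v = momentum * velocity - lr * g   # velocity update
    return w + v, v                    # new weight, new velocity
```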
Thank you for your work.
I have a question about the exposure values of the indoor dataset.
(1) According to the paper, the indoor dataset's EVs are manually set based on the lighting ratio of each scene,
and for the outdoor dataset, the EVs used are ±{0.5, 0.7, 1.0, 2.0, 3.0}.
So, can I take the indoor dataset's EV setting to be {-1.0, -0.7, -0.5, 0, 0.5, 0.7, 1.0} for 7 images?
(2) Or, according to the testing index.txt, it seems you used {-3.0, -2.0, -1.0, 0, 1.0, 2.0, 3.0}.
Could you tell me which setting is right?