akshaydudhane16 / burstormer
SOTA for Burst Super-resolution, Low-light Burst Image Enhancement, Burst Image De-noising
Hi,
I am very thankful for what you have done! I would like to run some experiments with the enhancement function. Could you release the enhancement code? Thank you!
Hello, thank you for open-sourcing the code!
I am having trouble using the pre-trained weights from the Burst Super-resolution page.
There are no weights for back_projection.
I checked a similar question posted earlier.
Could you check once more whether these are the correct pretrained weights for Burstormer?
Thanks.
I saw the Cyclic Burst Sampling module in your article, and I think it is very cleverly designed, but I cannot find its definition anywhere in your code. Could you point me to where it is implemented? One more question: the weights you provided for the Track 1 part do not seem to match the model. Was the wrong weight file uploaded?
Thank you.
Thanks for releasing the code.
I would like to train your Burst SR model using the released code, but I want to ask something before training.
In the paper, it seems the model was trained with 4 RTX 6000 GPUs.
Could you share how long the training took?
Thanks.
Luis Kang.
Hi,
The syn_burst_val and burstsr_dataset datasets cannot be downloaded at the moment: the server is not found.
Thanks for sharing this great network.
I'm waiting for your training code for both denoising and low-light enhancement!
Do you plan to release those training scripts?
Hello, thank you very much for open-sourcing the code. What is the function of the BFF module designed in this paper? I read the description of the BFF module in the paper but, unfortunately, I didn't get it. Could the authors explain the role of the BFF module in more detail? Thank you very much; I look forward to your reply.
Hello @akshaydudhane16!
I want to test burst denoising on a real dataset. I've seen that the network takes as input both the noisy image and its noise_estimate, calculated from the read and shot noise.
Could you please clarify how this should be done for real data?
Thank you
Gopi
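For reference, a minimal sketch of how a per-pixel noise estimate is often derived from shot and read noise in KPN-style burst-denoising pipelines. The exact form this repository expects (variance vs. standard deviation, parameter units) is an assumption here, not something confirmed by the authors:

```python
import numpy as np

def noise_estimate(noisy, shot_noise, read_noise):
    """Per-pixel noise std under a heteroscedastic Gaussian model.

    Assumes the common approximation variance = shot * signal + read^2,
    with read_noise given as a standard deviation. The repo may instead
    expect the variance, or different parameter conventions.
    """
    variance = shot_noise * np.clip(noisy, 0.0, None) + read_noise ** 2
    return np.sqrt(variance)

# 8 noisy raw frames, 1 channel, 64x64 (shapes are illustrative only)
burst = np.random.rand(8, 1, 64, 64).astype(np.float32)
est = noise_estimate(burst, shot_noise=0.01, read_noise=0.05)
print(est.shape)  # same shape as the burst
```

The estimate is typically concatenated with the noisy frames along the channel axis before being fed to the network.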
Hi, thanks for your wonderful work. I am also working on BurstSR and your codes have helped me a lot.
However, I run into problems when I try to fine-tune an SR model on the BurstSR dataset. I first train the SR model on the synthetic dataset and everything is OK. Then I fine-tune this model on the BurstSR dataset. The training loss keeps decreasing, while the validation PSNR only grows at the beginning and then drops gradually after reaching its best value.
The image below shows the PSNR per epoch (x-axis: epoch, y-axis: PSNR).
The changes I made were as follows.
Have you met the same problem? I would appreciate it a lot if you could help me figure this out.
Thanks.
Dear authors!
Thank you for sharing your work with the community. Recently, I tried to run inference with Burstormer on the Synthetic Burst SR validation set and found that the provided checkpoint does not match the model:
Missing key(s) in state_dict: "back_projection1.feat_fusion.0.weight", "back_projection1.feat_fusion.0.bias", "back_projection1.feat_expand.0.weight", "back_projection1.feat_expand.0.bias", "back_projection2.feat_fusion.0.weight", "back_projection2.feat_fusion.0.bias", "back_projection2.feat_expand.0.weight", "back_projection2.feat_expand.0.bias".
Unexpected key(s) in state_dict: "back_projection1.diff_fusion.weight", "back_projection1.feat_fusion.weight", "back_projection1.feat_expand.weight", "back_projection2.diff_fusion.weight", "back_projection2.feat_fusion.weight", "back_projection2.feat_expand.weight".
size mismatch for align.alignment0.back_projection.encoder1.0.norm1.body.weight: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for align.alignment0.back_projection.encoder1.0.norm1.body.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for align.alignment0.back_projection.encoder1.0.attn.qk.weight: copying a param with shape torch.Size([192, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 48, 1, 1]).
I doubled the number of channels in ref_back_projection.encoder1, but there is still a mismatch in no_ref_back_projection. The checkpoint contains diff_fusion block weights, but the Burstormer model does not have them.
Can you please help to resolve this issue?
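As a general debugging aid (not the authors' fix), the mismatched keys can be enumerated by comparing the checkpoint's key set against the model's. A minimal sketch using plain dicts of key names; with PyTorch, the real sets come from torch.load(ckpt_path) and model.state_dict():

```python
def diff_state_dicts(ckpt_keys, model_keys):
    """Return (missing_in_ckpt, unexpected_in_ckpt) as sorted lists."""
    ckpt, model = set(ckpt_keys), set(model_keys)
    return sorted(model - ckpt), sorted(ckpt - model)

# Key names taken from the error message above; in practice use
# torch.load(...)["state_dict"].keys() and model.state_dict().keys().
ckpt_keys = ["back_projection1.diff_fusion.weight",
             "back_projection1.feat_fusion.weight"]
model_keys = ["back_projection1.feat_fusion.0.weight",
              "back_projection1.feat_fusion.0.bias"]
missing, unexpected = diff_state_dicts(ckpt_keys, model_keys)
print("missing:", missing)
print("unexpected:", unexpected)
```

Loading with model.load_state_dict(ckpt, strict=False) produces the same two lists, but silently skips the mismatches, so it only helps for diagnosis, not as a real fix when shapes differ.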
Dear authors,
thanks for open-sourcing the code!
I noticed that the first line in the network's forward method is always burst = burst[0].
So do training and inference only support batch size 1? Or is there any trick to allow a larger batch size?
Thanks,
Ke
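One possible workaround, sketched under the assumption that the forward pass consumes a single burst of shape (num_frames, C, H, W), is to loop over the batch dimension externally and stack the results. single_burst_model below is a hypothetical stand-in, not the real Burstormer forward:

```python
import numpy as np

def single_burst_model(burst):
    """Stand-in for a forward() that handles one burst at a time.

    Expects (num_frames, C, H, W); here we just average the frames as a
    placeholder for the real network's output.
    """
    return burst.mean(axis=0)

def batched_forward(model, bursts):
    """bursts: (B, num_frames, C, H, W) -> one output per burst, stacked."""
    return np.stack([model(b) for b in bursts])

bursts = np.random.rand(4, 8, 4, 48, 48).astype(np.float32)
out = batched_forward(single_burst_model, bursts)
print(out.shape)  # (4, 4, 48, 48)
```

Note this trades memory for serial compute; true batching would require reshaping inside the network so that frame-wise ops see (B * num_frames, C, H, W).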
Hello,
We noticed that you mentioned in the article that all training used the L1 loss. However, in the Burst Denoising section you referred to the experimental setups of KPN and BPN, where KPN employed the basic loss (L2 loss + gradient loss). We are curious which loss function was employed in your burst denoising.
Furthermore, for burst denoising, when converting the Open Images data, are there any additional steps or specific considerations beyond what is mentioned in KPN? Were any other training techniques applied during the burst-denoising training phase? Currently, we are unable to reproduce the reported accuracy.
We eagerly await your response!
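For context, an "L2 + gradient" loss of the kind the KPN setup describes can be sketched as follows. The finite-difference gradient operator and the weight lambda_grad are illustrative assumptions, not the Burstormer authors' settings:

```python
import numpy as np

def gradient_xy(img):
    """Finite-difference image gradients along the last two axes."""
    gx = img[..., :, 1:] - img[..., :, :-1]
    gy = img[..., 1:, :] - img[..., :-1, :]
    return gx, gy

def basic_loss(pred, target, lambda_grad=1.0):
    """L2 on intensities plus L2 on image gradients (weight is assumed)."""
    l2 = np.mean((pred - target) ** 2)
    pgx, pgy = gradient_xy(pred)
    tgx, tgy = gradient_xy(target)
    grad = np.mean((pgx - tgx) ** 2) + np.mean((pgy - tgy) ** 2)
    return l2 + lambda_grad * grad

pred = np.random.rand(1, 64, 64)
print(basic_loss(pred, pred))  # identical inputs give zero loss
```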
Hi, thanks for your excellent work!
Could you please upload the visualization results of the compared models?
Hello. Thank you for releasing your impressive CVPR 2023 work, Burstormer!
I have one question about the download link for the pre-trained model.
When I go to the trained-model page for Burst Super-Resolution, the pre-trained weights are for BIPNet, not Burstormer.
I think the link is mistaken.
Could you update the link so it points to the Burstormer pre-trained weights?
Thank you.
I see that in zurich_raw2rgb_dataset.py you use cv2.imread(path), so the image would be BGR? But then when that dataset is put into synthetic_burst_train_set.py, it's treated as RGB. Should there be a conversion, or is there actually a conversion somewhere along the pipeline already? Thanks!
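For anyone hitting the same question: cv2.imread does return BGR, and the usual fix is cv2.cvtColor(img, cv2.COLOR_BGR2RGB). A tiny sketch using the equivalent channel reversal (so it runs without OpenCV installed) shows the effect; whether this repo's pipeline already does the conversion elsewhere is exactly the open question above:

```python
import numpy as np

# Simulate a cv2.imread result: pure blue in BGR order (B=255, G=0, R=0).
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255

# Equivalent to cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB) for 3-channel images.
rgb = bgr[..., ::-1]
print(rgb[0, 0])  # [0, 0, 255] -> the blue value now sits in the last (B) slot
```

If a model trained on RGB is fed BGR (or vice versa), it still runs, but colors are swapped, which typically costs accuracy rather than crashing, so the bug is easy to miss.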