Comments (15)
It depends on what you want. If you only care about PSNR, training with a pixel loss is enough (the first stage). PSNR is a good quantitative metric for comparing methods, but models trained with a pixel loss alone often lack good visual quality.
If you want better visual quality, you should fine-tune the model from the first stage with a combination of pixel loss, perceptual loss, and GAN loss, but this will decrease the PSNR.
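As a sketch, the fine-tuning objective above is just a weighted sum of the three terms. The weights below are illustrative (in the spirit of ESRGAN-style fine-tuning), not the exact values used for SwinIR:

```python
def generator_loss(l_pix, l_percep, l_gan,
                   w_pix=0.01, w_percep=1.0, w_gan=0.005):
    """Weighted sum of pixel, perceptual, and GAN losses.

    The weights are illustrative defaults (ESRGAN-style); tune them
    for your own setup. Each argument is the already-computed scalar
    value of the corresponding loss term.
    """
    return w_pix * l_pix + w_percep * l_percep + w_gan * l_gan
```

In practice each term is a tensor produced by its own loss module, and the perceptual term usually dominates the weighting so that the GAN term only nudges textures.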
By the way, I might know why you suffer from a sudden large PSNR drop. GAN training is very unstable. Generally, you should fine-tune the model from the first stage instead of training it from scratch. An EMA (exponential moving average) strategy can also help stabilize convergence. Note that PSNR is not a good metric when you are training for visual quality.
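The EMA strategy mentioned above can be sketched as follows. This is a minimal illustration with plain-float parameters; in practice you would track the model's tensors, and a decay around 0.999 is a common choice:

```python
class EMA:
    """Exponential moving average of model parameters (minimal sketch).

    `params` is a dict of parameter name -> value. After each optimizer
    step, call update() with the current parameters; `shadow` then holds
    the smoothed weights used for evaluation.
    """

    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.shadow = dict(params)  # copy of the initial parameter values

    def update(self, params):
        # shadow <- decay * shadow + (1 - decay) * current
        for name, value in params.items():
            self.shadow[name] = (self.decay * self.shadow[name]
                                 + (1.0 - self.decay) * value)
```

Evaluating with the shadow weights instead of the raw weights averages out the step-to-step noise that GAN training introduces.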
from swinir.
Do you mean that training SwinIR (middle size, dim=180) sometimes has a doubled loss? I don't think it is a problem. When some images in a training batch are hard to reconstruct, the loss of that batch will be large. If you look at the PSNR on the validation set, the model converges smoothly.
In our implementation, all settings are similar to CNN-based SR models, and we did not use any special tricks. Detailed settings can be found in the supplementary material. The training code will be released in KAIR in a few days.
I don't mean that the loss of a single batch suddenly doubled, but that the average loss over 100 batches doubled; the PSNR also drops by close to 1 dB and recovers after a few epochs.
We trained SwinIR (middle size, dim=180) for 500K iterations with batch_size=32. The learning rate is initialized as 2e-4 and halved at [250K, 400K, 450K, 475K]. We use Adam optimizer (betas=[0.9, 0.99]) without weight decay. The loss is the mean L1 pixel loss. The training loss and PSNR on validation set (Set 5) are attached as follows.
We did not notice any sudden large PSNR drop on the validation set.
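The halving schedule described above can be written as a small step function. This is a sketch for illustration; it is equivalent to PyTorch's `MultiStepLR` with `gamma=0.5` at the listed milestones:

```python
def learning_rate(step, base_lr=2e-4,
                  milestones=(250_000, 400_000, 450_000, 475_000)):
    """Learning rate after step-wise halving at the given milestones.

    Matches the schedule described above: start at 2e-4 and halve the
    rate at 250K, 400K, 450K, and 475K iterations.
    """
    lr = base_lr
    for m in milestones:
        if step >= m:
            lr *= 0.5
    return lr
```

With `torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.99))`, the same schedule is obtained with `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[250_000, 400_000, 450_000, 475_000], gamma=0.5)`.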
Thank you, I will check my code and explore whether the slight parameter differences have a big impact.
Hi thank you for your work.
May I know whether SwinIR (for image SR) needs to be trained as a GAN with a discriminator?
If I just train for image SR, can I train it without a GAN?
Thanks.
Hi, @JingyunLiang
Thanks for the loss plot. Based on your experience, is it normal for a transformer framework that the training loss oscillates severely? I am currently training a transformer, and the loss just seems to fluctuate repeatedly with no trend of convergence. Do you think this is a normal phenomenon for most transformers?
Thanks very much.
I don't think so. There are no such problems, as you can see in Fig. 3(f) of the paper. By the way, our training code will be released in 1-2 days. Please use that for training. Thank you.
Yes, I see. Fig. 3(f) is the PSNR plot, and as shown in your earlier reply, the PSNR is stable. But the L1 training loss oscillates, and I am confused about that fluctuation. Thanks a lot. @JingyunLiang
PSNR and the L1 loss on the validation set are highly correlated because PSNR contains an MSE(pred, gt) term. If the validation PSNR is stable, the validation loss should also be stable. The training loss is shown in the top figure of my previous answer; it may fluctuate a bit because each batch contains different images (some of them are hard to super-resolve).
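Concretely, PSNR is a direct function of the MSE, so a stable validation MSE implies a stable validation PSNR. A minimal sketch, assuming 8-bit images with peak value 255:

```python
import math

def psnr(mse, max_val=255.0):
    """Peak signal-to-noise ratio in dB from the mean squared error.

    PSNR = 10 * log10(MAX^2 / MSE); lower MSE means higher PSNR,
    so the two curves mirror each other on a validation set.
    """
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Note the log: large swings in per-batch MSE translate into much smaller swings in PSNR, which is one reason the PSNR curve looks smoother than the raw loss curve.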
Thanks so much for your explanation! In this case, have you tried increasing the batch size to reduce the training-loss fluctuation? Does a larger batch size alleviate this instability?
No, I always use batch_size=32 (so we only need 500K iterations). You can try it later with our training code.
I see. Thank you very much. I think 32 is large enough for an image-to-image task, given the huge memory cost of transformers.
@z625715875 @shengkelong @hcleung3325 We have released the SwinIR training code in KAIR. We have also added an interactive online Colab demo for real-world image SR.
Feel free to reopen this issue if you have more questions.