idiap / model-uncertainty-for-adaptation
Code for the CVPR 2021 paper "Uncertainty Reduction for Model Adaptation in Semantic Segmentation".
License: Other
Thank you for your excellent work. I want to obtain the NTHU dataset mentioned in the article. Is it the one at 'https://yihsinchen.github.io/segmentation_adaptation_dataset/#Unlabeled'? I have not been granted download permission so far.
Is there another way to get the dataset?
Thank you for sharing your work.
I'm interested in the GTA5-to-Cityscapes training, but your README only links to the Synthia pretrained checkpoint. I found a GTA5 pretrained checkpoint by following the link in the repository you cite (namely GTA5_source.pth at this link), but testing on Cityscapes after loading those parameters, without any additional training iterations, yields only 0.5% mIoU, far from the 33.8% reported in the paper. Moreover, I get only 2.5% mIoU even when loading the Synthia_source checkpoint provided in your repo, much less than the 40.3% in the paper. Here I see that you load a different model, probably because the released code targets the Cityscapes-to-Cross-City experiments, but that checkpoint is not present in your repository either. I'm using your exact network for my experiments.
Are these the correct checkpoints? Could you please share the checkpoint from the pretraining phase on GTA5 that should achieve 33.8% mIoU, and the final checkpoint after applying your method that should achieve 45.1%?
I'd be grateful if you could provide at least these two checkpoints! Thank you.
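While waiting for a reply, one common cause of near-random mIoU after loading a checkpoint is a silent key mismatch between the saved state dict and the model. A minimal sketch for diagnosing this, assuming a PyTorch model and a checkpoint that may or may not be wrapped under a "state_dict" key (the helper name is hypothetical):

```python
import torch

def load_and_check(model, ckpt_path):
    """Load a checkpoint and report any missing/unexpected parameter keys."""
    state = torch.load(ckpt_path, map_location="cpu")
    # Some releases wrap the weights under a key such as "state_dict".
    if isinstance(state, dict) and "state_dict" in state:
        state = state["state_dict"]
    result = model.load_state_dict(state, strict=False)
    # Any missing or unexpected keys mean those layers were left randomly
    # initialized, which typically explains mIoU near chance level.
    print("missing keys:", result.missing_keys)
    print("unexpected keys:", result.unexpected_keys)
    return model
```

If both key lists are empty and accuracy is still near-random, the mismatch is more likely in preprocessing (e.g. mean subtraction or label mapping) than in the weights.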
Thank you for sharing your work. I want to run some experiments, but I cannot find the code for two of the tasks reported in the paper, GTAtoCS and SYNtoCS; I can only find the code for CStoCC. Is there any plan to share this code?
Thank you very much for your work.
I have some questions:
When unc_noise is False, i.e. the model is deepLab(13, False) in your code, the pretrained model includes the encoder and the main decoder, right?
Are the parameters of the auxiliary decoder copied from the main decoder or initialized randomly?
During training, are the parameters of the encoder and the auxiliary decoder updated while only the parameters of the main decoder are kept unchanged?
During training, the pseudo-labels are generated by the pretrained model and are not updated later in the training process, right?
I'd be grateful if you could answer it!
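For concreteness, the training setup the questions above describe can be sketched as follows. This is only an illustration under stated assumptions, not the authors' code: the module names (encoder, main_decoder, aux_decoder) are hypothetical stand-ins, the auxiliary decoder is assumed to be a copy of the main decoder, and pseudo-labels are assumed to be produced once by the frozen pretrained model.

```python
import copy
import torch
import torch.nn as nn

# Hypothetical stand-ins for the pretrained encoder and main decoder.
encoder = nn.Conv2d(3, 8, 3, padding=1)
main_decoder = nn.Conv2d(8, 13, 1)          # 13 classes, as in deepLab(13, False)
aux_decoder = copy.deepcopy(main_decoder)   # assumed: weights copied from main decoder

# Freeze only the main decoder; encoder and auxiliary decoder stay trainable.
for p in main_decoder.parameters():
    p.requires_grad = False

# Pseudo-labels generated once by the pretrained model and kept fixed.
x = torch.randn(2, 3, 16, 16)
with torch.no_grad():
    pseudo = main_decoder(encoder(x)).argmax(dim=1)

# Only encoder + auxiliary decoder parameters are passed to the optimizer.
opt = torch.optim.SGD(
    list(encoder.parameters()) + list(aux_decoder.parameters()), lr=1e-3
)
loss = nn.functional.cross_entropy(aux_decoder(encoder(x)), pseudo)
opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch the main decoder's weights never change, so it remains a fixed reference for the pseudo-labels while the encoder and auxiliary decoder adapt.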
Dear author,
Thanks for your good work. I wonder if you could provide your environment dependencies as a plain list, since your spec-file.txt can cause trouble: some of the small tools it references cannot be found. Thanks a lot.