Comments (2)
Thanks for your interest in this project! I just pushed a commit to address this problem. Here is how you can reproduce results now:
- In the benchmark spreadsheet, click on the Google Drive link under the "config files" column.
- Download the folders you want (for example, cub200_old_approach_triplet_batch_all) into a folder on your computer. For example, I downloaded into /home/tkm45/experiments_to_reproduce
- Then run:
python run.py --reproduce_results /home/tkm45/experiments_to_reproduce/cub200_old_approach_triplet_batch_all --experiment_name cub200_old_approach_triplet_batch_all_reproduced
You cannot override complex (nested) config options at the command line in this case, but you can still override non-nested options. For example, you might want to use a different number of workers for your dataloaders:
python run.py --reproduce_results /home/tkm45/experiments_to_reproduce/cub200_old_approach_triplet_batch_all --experiment_name cub200_old_approach_triplet_batch_all_reproduced --dataloader_num_workers 6 --eval_dataloader_num_workers 6
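To make the distinction concrete, here is a minimal sketch of how such flat command-line overrides work with argparse. The flag names mirror the ones used in this thread, but the parsing logic is an assumption for illustration, not the project's actual implementation:

```python
import argparse

def parse_args(argv):
    # Hypothetical parser mirroring the flags used above. Flat options like
    # --dataloader_num_workers can be overridden directly; nested config
    # trees cannot be expressed this way.
    parser = argparse.ArgumentParser()
    parser.add_argument("--reproduce_results", type=str, default=None)
    parser.add_argument("--experiment_name", type=str, required=True)
    parser.add_argument("--dataloader_num_workers", type=int, default=2)
    parser.add_argument("--eval_dataloader_num_workers", type=int, default=2)
    return parser.parse_args(argv)

args = parse_args([
    "--reproduce_results", "/tmp/experiments_to_reproduce/cub200_old_approach_triplet_batch_all",
    "--experiment_name", "cub200_reproduced",
    "--dataloader_num_workers", "6",
])
print(args.dataloader_num_workers)  # 6
print(args.eval_dataloader_num_workers)  # 2 (default, not overridden)
```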
What still needs to be fixed:
In the configs folder of cub200_old_approach_triplet_batch_all, you'll see a subfolder called resume_training_config_diffs_1. This indicates that I resumed training and changed num_epochs_train from 50 to 60. At the moment, the "reproduce_results" option does not look for these diff folders, so you may need to copy any changes from the diff folder into the config file. From what I remember, num_epochs_train was the only parameter I ever changed when resuming training.
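The manual step above amounts to merging the diff folder's values into the base config. As a hedged sketch (the dict-merge logic and file layout here are assumptions based on the description, not the project's actual code), the operation looks like this:

```python
def apply_diff(base, diff):
    """Recursively merge diff values into a base config dict,
    leaving keys absent from the diff untouched."""
    merged = dict(base)
    for key, value in diff.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = apply_diff(merged[key], value)
        else:
            merged[key] = value
    return merged

# Illustrative base config; only num_epochs_train matches the thread.
base_config = {"num_epochs_train": 50, "optimizers": {"trunk": {"lr": 1e-5}}}
# Diff recorded when training was resumed with a higher epoch count.
resume_diff_1 = {"num_epochs_train": 60}

final_config = apply_diff(base_config, resume_diff_1)
print(final_config["num_epochs_train"])  # 60
```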
Let me know if it works for you!
from powerful-benchmarker.
My latest commits fix the config_diff issue. The reproduce_results flag now parses the config_diff folders, so training proceeds exactly as it did in the original experiment.