Comments (3)
Please comment on speed and resources with/without Dask. Did you need 200+ GB of memory / 32 cores? Would you have achieved similar results using a machine with lower specs?
No, we needed far less than 200 GB. At first we thought we would, but it then became clear that we need a separate ML model for each gridpoint. Fitting thousands of ML models requires running many fitting processes in parallel to minimize training time (20 s per gridpoint on 1000 gridpoints would otherwise take more than 5.5 h), but it does not require much RAM. Even though every training process may copy its X and y arrays (in training, X = predictors, y = predictand array) in RAM, this only takes a few hundred MB, because the models are fitted 'locally' with information from only the upstream areas as features. If we had run a single model to be used 'globally' for all gridpoints, then we would probably need to load the whole dataset into memory (though not necessarily: one could train on one catchment with the upstream information, then move on to the next catchment, and so on, i.e. "incremental learning").
So I see no reason why the code wouldn't work on a personal notebook too. The only bottleneck could be computation speed (we use up to 30 cores, but only for a very short amount of time). I will remove the Dask definitions where they are not necessary or documented (I think Dask is only necessary where the models are trained in parallel).
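The per-gridpoint training described above can be sketched as follows. This is a minimal illustration, not the ml_flood implementation: the data are random placeholders, and `fit_one_gridpoint`, the model choice, and the worker count are assumptions.

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.linear_model import LinearRegression

def fit_one_gridpoint(X, y):
    """Fit one small 'local' model for a single gridpoint."""
    model = LinearRegression()
    model.fit(X, y)
    return model

rng = np.random.default_rng(0)
n_gridpoints, n_samples, n_features = 100, 365, 5

# one (X, y) pair per gridpoint; in ml_flood these would be the
# upstream predictors and the local predictand, respectively
data = [(rng.normal(size=(n_samples, n_features)),
         rng.normal(size=n_samples))
        for _ in range(n_gridpoints)]

# fit all local models in parallel; RAM stays low because each worker
# only holds one gridpoint's small arrays at a time
models = Parallel(n_jobs=4)(
    delayed(fit_one_gridpoint)(X, y) for X, y in data)
```

Because each worker only ever touches one gridpoint's arrays, memory usage scales with the number of parallel workers, not with the total number of gridpoints.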
The methodology used for feature selection is unclear as there is very little comment to the code. Please expand.
We will come back to this; an alternative, self-contained ML workflow in a single notebook, containing all the parts of a simple example for discharge forecasts, will also be finished soon.
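As a generic illustration only (not necessarily the method used in ml_flood), a simple filter-based feature selection, keeping the predictors that correlate most strongly with the target, could look like this; the data and the choice of `SelectKBest`/`f_regression` are assumptions for the sketch:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))             # 10 candidate predictors
y = X[:, 0] * 2.0 + rng.normal(size=200)   # target mainly driven by predictor 0

# keep the 3 predictors with the highest univariate F-score vs. the target
selector = SelectKBest(score_func=f_regression, k=3)
X_selected = selector.fit_transform(X, y)  # shape (200, 3)
```

A filter like this is cheap enough to run per gridpoint, which fits the local-model setup described above.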
from ml_flood.
I now had an issue with applying a rolling sum operation over time, which resulted in a MemoryError if the input field was the result of an interpolation. It still does not make complete sense why this shouldn't work, but it is in line with our experience in other instances: if you get a MemoryError, it is most probably not caused by a lack of RAM; rather, something is wrong with the xarray dataset (coordinates or names), or xarray functions were called incorrectly. The solution for now is to apply the interpolation after the rolling sum.
Although the project is already successfully finished, for completeness' sake here is additional information:
- The methodology used for feature selection is unclear as there is very little comment to the code. Please expand.
- Please document these notebooks and interpret the results. A short description about the functionality of the model would be helpful.
We expanded the documentation in the notebooks; they can be found in the /notebooks/2_preprocessing/ folder.
- For completeness, please add results of out-of-sample test for all the models.
- For every model you test, please provide a summary of used hyperparameters (e.g. activation function, loss function, learning rate, neurons in each layer, any hidden layers, number of epochs, etc.)
These can be found in the model notebooks in the /notebooks/3_model_tests/ folder.
- Please comment on how you identify where the ‘upstream’ river gridpoints are.
- In 012_explaining_training_the_localmodel the validation loss fluctuates more than the training loss. What could the problem be? Maybe the learning rate is too large?
These can be found in /notebooks/4_coupled_model/, although we should emphasize that this was merely some testing of the concept model and does not serve any purpose in the comparison study, which was the target of the ESoWC project.
Related Issues (11)
- test
- Review - Reproducibility/Best practice
- Review - Feature selection
- Review - Feature importance
- Review - Spatial correlations
- Review - ML-techniques
- Review - Feature engineering
- Review - CDO-based methods
- Add a binder link so folks can run the notebooks
- Is forecast_range the same as lead time?