Comments (9)
We should explore alternatives like https://optuna.org/ here.
from whitebox.
Hyper-parameter tuning is fairly easy to perform using grid search.
There are some questions, though:
- Do you believe that there is a time threshold (e.g. take no more than 20 seconds)?
- Do you believe that we have to set an evaluation metric threshold (e.g. if the model achieves 90% accuracy, pick that model)?
- The model will be trained once per training set, i.e. we will retrain it only if the training set changes. Do we need to keep track of models (e.g. by using MLflow)?
- Regardless of whether we use MLflow or not, do we need to save the hyper-parameters of the optimal model somewhere?
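For context, a minimal grid-search sketch of what is being proposed, using scikit-learn's `GridSearchCV` on a toy dataset. The parameter grid and the `LogisticRegression` surrogate are illustrative assumptions, not Whitebox's actual pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Toy stand-in for a Whitebox training set.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Hypothetical search space; the real grid would depend on the chosen model.
param_grid = {"C": [0.1, 1.0, 10.0]}

search = GridSearchCV(
    LogisticRegression(max_iter=500),
    param_grid,
    cv=3,
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

`best_params_` and `best_score_` are exactly the pieces the questions above ask about persisting.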
cc: @momegas , @gcharis , @stavrostheocharis
Do you believe that there is a time threshold (e.g. take no more than 20 seconds)?
- Depends on when this pipeline runs. If it runs in near real-time, I think we should have a threshold. If it runs on a scheduler, it is not a problem.
Do you believe that we have to set an evaluation metric threshold (e.g. if the model achieves 90% accuracy, pick that model)?
- Maybe just pick the one with the highest accuracy. But then what happens if the best model is still a poor one?
The model will be trained once per training set, i.e. we will retrain it only if the training set changes. Do we need to keep track of models (e.g. by using MLflow)?
- If we are going to keep the model, a solution like this could be implemented, but MLflow would take considerable effort to integrate (database, paths, deployment, etc.).
Regardless of whether we use MLflow or not, do we need to save the hyper-parameters of the optimal model somewhere?
- I think it would be good to save them, and maybe also keep the evaluation metrics to show to the user, so they know exactly how precise our explanation is.
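A lightweight alternative to a full MLflow integration, as suggested here, is to persist the chosen hyper-parameters and evaluation metrics as a small JSON "model card". This is a stdlib-only sketch; the file name and the dictionary layout are assumptions:

```python
import json
from pathlib import Path


def save_model_card(path: str, params: dict, metrics: dict) -> None:
    """Persist the chosen hyper-parameters and evaluation metrics as JSON."""
    card = {"hyper_parameters": params, "eval_metrics": metrics}
    Path(path).write_text(json.dumps(card, indent=2))


# Hypothetical values for illustration only.
save_model_card("best_model.json", {"C": 1.0}, {"accuracy": 0.91})
card = json.loads(Path("best_model.json").read_text())
```

The same metrics dictionary could later be surfaced in the UI so the user sees how reliable the explanation's underlying model is.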
I think it's important to keep the goal of Whitebox in mind. The goal is monitoring, not creating models (at least not now).
With this in mind, I think we should either do a quick tuning or none at all. As I understood this issue, it was just some adjustments to the training, not a whole new feature.
Think about this; if it fits in the timebox we have, good. Otherwise, I would look at something else.
After some discussions with @stavrostheocharis , we concluded that the requirements of this task are still pretty blurry. I will try to simplify them with a few simple questions below, so please @momegas , let us know when you have the time.
- Do we want the possibility of a better model, i.e. one that predicts more accurate results? That also means more accuracy during the explainability step.
- If no, we can close the ticket. If yes, how much time are we willing to spend searching for the best model? A metric threshold could also help here: for instance, if we tell the search to iterate through 20 different combinations of hyper-parameters and it achieves acceptable performance even on the 1st iteration, stop there and consider that the best model.
- Do we want to keep track of the best hyper-parameters in some way?
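The stop-early behaviour described above (a metric threshold plus a time budget) can be sketched in a few lines of plain Python. The function name, the threshold values, and the fake evaluator are all hypothetical, meant only to make the proposed semantics concrete:

```python
import itertools
import time


def tuned_search(train_eval, grid, accuracy_threshold=0.90, time_budget_s=20.0):
    """Try hyper-parameter combinations in order; stop as soon as one reaches
    the accuracy threshold or the time budget is exhausted."""
    best_params, best_score = None, float("-inf")
    start = time.monotonic()
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_eval(params)  # caller trains a model, returns accuracy
        if score > best_score:
            best_params, best_score = params, score
        if best_score >= accuracy_threshold:
            break  # good enough: stop searching
        if time.monotonic() - start > time_budget_s:
            break  # out of time: keep the best seen so far
    return best_params, best_score


# Illustration with a fake evaluator instead of real training.
fake_scores = {0.1: 0.70, 1.0: 0.92, 10.0: 0.95}
best_params, best_score = tuned_search(
    lambda p: fake_scores[p["C"]], {"C": [0.1, 1.0, 10.0]}
)
# Stops at C=1.0 (0.92 >= 0.90) without ever trying C=10.0.
```

This directly answers both earlier questions: the time threshold bounds the worst case, and the metric threshold lets a run finish early when a combination is already good enough.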
I think we should not spend more time on this, as a better model will not give much value to WB at the moment since we are missing more core features.
Feel free to close this if needed @NickNtamp
Sure, I can close the ticket @momegas .
Before doing so, I want to remind both you and @stavrostheocharis that by not exploring combinations to increase the chance of building a better model on an unknown dataset, we accept the high risk of explaining a trash model. Just imagine that we build a model with 20% accuracy and then use it for our explainability feature.
I would keep this as an issue in the backlog, so we can investigate it further and implement an enhancement in the future.
It was actually requested! You are right. I will re-open this.