
Comments (9)

NickNtamp commented on May 19, 2024

We should explore alternatives like https://optuna.org/ here.

from whitebox.

NickNtamp commented on May 19, 2024

Hyper-parameter tuning is pretty easy to perform with GridSearch.

There are some questions, though:

  • Do you think there should be a time threshold (e.g. the search should not take more than 20 seconds)?
  • Do you think we should set an evaluation metric threshold (e.g. if a model achieves 90% accuracy, pick that model)?
  • Model training will be performed once per training set, meaning we retrain the model only when the training set changes. Do we need to keep track of models (e.g. by using MLflow)?
  • Regardless of whether we use MLflow, do we need to save the hyper-parameters of the optimal model somewhere?

cc: @momegas , @gcharis , @stavrostheocharis

from whitebox.
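The GridSearch approach mentioned above can be sketched minimally. The estimator, dataset, and parameter grid below are illustrative assumptions, not the actual Whitebox code:

```python
# Minimal sketch of hyper-parameter tuning with scikit-learn's GridSearchCV.
# The estimator, synthetic dataset, and grid are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in for the training set Whitebox would receive.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [3, 5, None],
}

# Exhaustively evaluate every combination with 3-fold cross-validation.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="accuracy",
    cv=3,
)
search.fit(X, y)

print(search.best_params_)            # the winning combination
print(round(search.best_score_, 3))   # its cross-validated accuracy
```

A time budget could then be approximated by shrinking the grid or by switching to a randomized or Bayesian search (e.g. Optuna, as suggested above) with a fixed trial count.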

stavrostheocharis commented on May 19, 2024

Do you think there should be a time threshold (e.g. the search should not take more than 20 seconds)?

  • It depends on when this pipeline runs. If it runs in near real time, I think we should have a threshold; if it runs on a scheduler, it is not a problem.

Do you think we should set an evaluation metric threshold (e.g. if a model achieves 90% accuracy, pick that model)?

  • Maybe just pick the one with the highest accuracy. But then what happens if the best model is still a poor one?

Model training will be performed once per training set, meaning we retrain the model only when the training set changes. Do we need to keep track of models (e.g. by using MLflow)?

  • If we are going to keep the model, a solution like this could be implemented, but MLflow would take considerable effort to integrate (database, paths, deployment, etc.).

Regardless of whether we use MLflow, do we need to save the hyper-parameters of the optimal model somewhere?

  • I think it would be good to save them, and maybe also keep the evaluation metrics to show to the user, so that they know exactly how precise our explanation is.

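For the last question, one lightweight option that avoids an MLflow integration is persisting the optimal hyper-parameters and evaluation metrics as a plain JSON file. The file name, schema, and example values below are assumptions for illustration:

```python
# Sketch: persist the optimal model's hyper-parameters and evaluation metrics
# so the user can see how precise the explanation model is. File name, schema,
# and the example values are assumptions, not Whitebox conventions.
import json
from pathlib import Path

best_params = {"n_estimators": 100, "max_depth": 5}  # from the tuning step
eval_metrics = {"accuracy": 0.91, "f1": 0.89}        # from a held-out evaluation

record = {"hyper_parameters": best_params, "metrics": eval_metrics}

path = Path("surrogate_model_meta.json")
path.write_text(json.dumps(record, indent=2))

# On the next run, load the record instead of re-tuning.
loaded = json.loads(path.read_text())
print(loaded["metrics"]["accuracy"])
```

Keeping the metrics alongside the hyper-parameters directly addresses the point above about showing the user how reliable the explanation is.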

momegas commented on May 19, 2024

I think it's important to keep the goal of Whitebox in mind: the goal is monitoring, not creating models (at least not for now).
With that in mind, I think we should either have a quick tuning step or none at all. As I understood this issue, it would be just some adjustments to the training, not a whole new feature.

Think about this, and if we can fit just that in the timebox we have, good. Otherwise, I would look at something else.


NickNtamp commented on May 19, 2024

After some discussions with @stavrostheocharis, we concluded that the requirements of this task are still pretty blurry. I will try to simplify them with some simple questions below, so please @momegas, let us know when you have the time.

  1. Do we want to improve the chances of getting a better model, i.e. one predicting more accurate results? That would also mean more accuracy during explainability.
  2. If not, we can close the ticket. If yes, how much time do we want to spend on fine tuning while searching for the best model? A metric threshold could help here: for instance, if we tell the search to iterate through 20 different combinations of hyper-parameters and an acceptable performance is achieved even on the 1st iteration, stop there and consider that the best model.
  3. Do we want to keep track of the best hyper-parameters in some way?

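The early-stopping idea in point 2 can be sketched as a manual loop over sampled hyper-parameter combinations that stops as soon as one meets the threshold. The estimator, grid, threshold, and iteration budget below are illustrative assumptions:

```python
# Sketch: iterate over up to 10 sampled hyper-parameter combinations and stop
# early once one reaches an "acceptable" score. Estimator, grid, threshold,
# and budget are assumptions for illustration, not the Whitebox defaults.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ParameterSampler, cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

grid = {
    "n_estimators": [10, 50, 100],
    "max_depth": [2, 4, 8],
    "min_samples_leaf": [1, 2],
}
THRESHOLD = 0.85  # assumed "acceptable" accuracy
best_params, best_score = None, -1.0

for params in ParameterSampler(grid, n_iter=10, random_state=0):
    score = cross_val_score(
        RandomForestClassifier(random_state=0, **params), X, y, cv=3
    ).mean()
    if score > best_score:
        best_params, best_score = params, score
    if score >= THRESHOLD:  # good enough: stop searching early
        break

print(best_params, round(best_score, 3))
```

A wall-clock budget could be added with the same structure by breaking out of the loop once `time.monotonic()` exceeds a deadline.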

momegas commented on May 19, 2024

I think we should not spend more time on this, as a better model will not add much value to WB at the moment, since we are missing more core features.
Feel free to close this if needed @NickNtamp


NickNtamp commented on May 19, 2024

Sure, I can close the ticket @momegas.
Before doing so, I want to remind both you and @stavrostheocharis that by not exploring hyper-parameter combinations to increase the chance of building a better model on an unknown dataset, we accept a high risk of explaining a garbage model. Just imagine that we build a model with an accuracy of 20% and then use it for our explainability feature.


stavrostheocharis commented on May 19, 2024

I would keep this as an issue in the backlog so we can investigate it further and implement an enhancement in the future.


momegas commented on May 19, 2024

It was actually requested! You are right. I will re-open this.

