
Comments (16)

mnwright commented on August 23, 2024

Yes, that's another interpretation. ;) The current approach selects each partition size with equal probability; Pierre's approach selects each partition with equal probability.
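
To make the difference concrete with some numbers of my own (not from the thread): a factor with 4 levels present in a node has 2^4 - 2 = 14 non-trivial subsets that could be sent to one child, namely 4 of size 1, 6 of size 2 and 4 of size 3.

    sizes  <- 1:3
    counts <- choose(4, sizes)         # 4 6 4 subsets per size
    # Current approach: pick a size uniformly, then a subset of that size:
    (1 / length(sizes)) / counts       # 1/12, 1/18, 1/12 per subset
    # Pierre's approach: every subset equally likely:
    1 / sum(counts)                    # 1/14 per subset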

Following the argument of Ishwaran (2015), extreme cutpoints are actually a good thing, particularly for noisy data. One could therefore prefer the current method...

I agree we shouldn't have another tuning parameter. I will add Pierre's approach to the development version, but we should select one before merging into master.

In summary, I think we should use Pierre's method, so that we have the original "extremely randomised trees" method as proposed in his paper, unless there is strong evidence that the current approach is better.


mnwright commented on August 23, 2024

I have added Pierre's approach. Use splitrule = "extratrees" for the old method and splitrule = "extratrees2" to select Pierre's approach.

I couldn't resist doing a simple comparison, and Pierre's approach was better, at least for a regression outcome.


mnwright commented on August 23, 2024

Option 1 would be Extremely randomised trees, right?
Another alternative would be to randomly draw from all values of a feature X in a node, instead of drawing between a and b.

What is done in extraTrees?


jakob-r commented on August 23, 2024

No. According to my understanding, in extremely randomised trees there is no such thing as a and b. According to the linked paper, it just looks at K completely randomly drawn split points for the feature X.
I proposed to find the optimal split point as is the norm. But since we don't know (for regression) whether the true split is closer to a or b, we just draw it randomly between a and b.
But thinking further, I can see that this might not minimize the RMSE as much as s = (a+b)/2.
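
For illustration, here is a toy sketch of that extra-trees idea for a single numeric feature in a regression node (my own code, not ranger's implementation): draw K uniform cutpoints between a and b and keep the one with the lowest node impurity.

    best_random_cut <- function(x, y, K = 5) {
      cuts <- runif(K, min(x), max(x))          # K random cutpoints in [a, b]
      sse  <- sapply(cuts, function(s) {
        left <- y[x <= s]; right <- y[x > s]
        sum((left - mean(left))^2) + sum((right - mean(right))^2)  # regression SSE
      })
      cuts[which.min(sse)]                      # best of the K random cutpoints
    }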


rfcv commented on August 23, 2024

This comment was moved here at the request of @mnwright:

I think that the greatest improvement to ranger would be an implementation of extra trees, also known as the extremely randomized trees algorithm. Adding the options method = "extratrees" and numRandomCuts = x to ranger() would make ranger the only random forest package anyone needs, thanks to its speed, memory efficiency, and the flexibility of the additional splitting method. It would take some quick changes to the tree.cpp files, but I don't think the extra trees method would be very difficult to add.


jakob-r commented on August 23, 2024

I have to add that extremely randomised trees will obviously still lead to a smoother regression and hence might make my proposal redundant.


jakob-r commented on August 23, 2024

I have to post here again to emphasize how great it would be to have extremely randomised trees in ranger. I used extraTrees for a small study on model-based optimization and it appears to be really promising. Unfortunately, the development of extraTrees seems to have been discontinued.
So it is unlikely that they will add support for categorical features, although the algorithm as proposed in the paper you linked has ways to deal with them.
It would be awesome if we can work something out for ranger!


mnwright commented on August 23, 2024

Just to make sure I got extraTrees right (I've not read the paper in detail):

  • The random split points are drawn from a continuous uniform distribution between a and b, where a and b are the minimal and maximal covariate values in the current node.
  • For categorical covariates, a random subset of all covariate values in the current node is drawn, and a second subset of all covariate values not in the current node is drawn. The union of these two subsets is assigned to one child node.

Is this right?

More questions:

  • Could we use the ordering approach (see #36) for categorical predictors instead of the approach proposed in extraTrees?
  • Do we also need the other approach (draw split values randomly out of covariate values) or is the a/b approach just better?


jakob-r commented on August 23, 2024

The random split points are drawn from a continuous uniform distribution between a and b, where a and b are the minimal and maximal covariate values in the current node.

Yes. K split points are randomly drawn from [a_min, b_max]

For categorical covariates, a random subset of all covariate values in the current node is drawn, and a second subset of all covariate values not in the current node is drawn. The union of these two subsets is assigned to one child node.

Yes, and this is done K times; the union that led to the best score is selected as the split.

Could we use the ordering approach (see #36) for categorical predictors instead of the approach proposed in extraTrees?

You mean:

In "The elements of Statistical Learning" in chapter 9.2.4 (http://statweb.stanford.edu/~tibs/ElemStatLearn/) they suggest to order unordered variables in case of binary outcomes, by there appearances in outcome class 1. Similarly in the regression case (and possibly this could be also applied in the multiclass case by some similarity approach).

Well, this is a very special case and would break multiclass support, wouldn't it? I would prefer the extremely randomized trees version.

Do we also need the other approach (draw split values randomly out of covariate values) or is the a/b approach just better?

With "a/b approach" you mean to randomly select between the a and b as I described in my first post? I am actually curoious to benchmark that against the runif(a_min, b_max) approach from the paper.


PhilippPro commented on August 23, 2024

Well, this is a very special case and would break multiclass support, wouldn't it? I would prefer the extremely randomized trees version.

It is not a very special case; in the binary case it is equivalent to the normal approach in randomForest, but much faster. It has nothing to do with extraTrees, I think, because the splits in extraTrees are generated by drawing two random subsets of the available categories of a variable.
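
To make that concrete, here is a rough sketch of the ordering trick for a binary outcome (my own code, not the implementation from #36 or randomForest; the function name is made up): order the factor levels by the rate of outcome class 1 within each level, then treat the factor as ordered and split it like a numeric covariate.

    order_levels <- function(x, y) {     # x: factor, y: 0/1 outcome
      rate <- tapply(y, x, mean)         # class-1 rate per level
      factor(x, levels = names(sort(rate)), ordered = TRUE)
    }

In the regression case, the mean of the outcome per level plays the same role.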


mnwright commented on August 23, 2024

I have implemented the extraTrees approach, see #137.
Use splitrule = "extratrees" and num.random.splits = X.
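
For reference, a minimal call with this interface would look roughly as follows (this requires the development branch from #137 at the moment; the data set, num.trees and the number of random splits are only placeholders):

    library(ranger)
    fit <- ranger(Sepal.Length ~ ., data = iris,
                  num.trees = 500,
                  splitrule = "extratrees",
                  num.random.splits = 5)
    fit$prediction.error    # out-of-bag prediction error of the fitted forest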

In the description for categorical features I couldn't find the size of the subsets to draw. Currently, this size is chosen randomly, too.
It seems in the extraTrees R package the method for categorical features is not implemented.

If I read the doc of extraTrees right, they are not using bagging by default, but one can choose subagging with the option subsetSizes. Right?
Are there any other differences to standard RF in their approach?

@jakob-r: Could you please test my implementation before I merge it in the master?


jakob-r commented on August 23, 2024

Wow. Really cool that you implemented it! I will test it out as soon as I can.

Exactly. I also noted that extraTrees did not support categorical features. And as far as I understand the doc, you are right. The subsetGroups and subsetSizes options seem to be motivated by balancing unevenly distributed classes. The original paper says the following:

It is used several times with the (full) original learning sample to generate an ensemble model (we denote by M the number of trees of this ensemble).

Regarding the size of the subset for categorical splits I am clueless as well. Where is the original implementation of the paper? Can we just ask the authors?


mnwright commented on August 23, 2024

I've asked Pierre Geurts (first author of the paper). His answer (with kind permission):

Dear Marvin,

Thank you for your interest in Extremely randomized trees. In my code, the sets A1 and A2 are determined as follows:

  • For A1, I simply pick a random integer uniformly between 1 and 2^n-2, where n is the size of the As set, and use the bits of the resulting integer to decide whether a value in As is included in A1 or not. This ensures that I will not pick an empty or a full subset.
  • For A2, I flip a (fair) coin for each value to decide whether or not to include it in A2 (since A2 can be empty or full).

If the number of values in As is very large, I sample A1 similarly as A2 (and I retry in case I obtain by chance an empty or full subset).

This is not equivalent to your sampling mechanism. I give equal chance to all subsets while you give equal chance to all subset sizes. Unfortunately, I have no idea which sampling mechanism should work best in practice. I must confess that I have not done many experiments with categorical features.

I hope this helps.

Best regards,

Pierre
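
In code, my reading of the mechanism Pierre describes would be roughly the following (a rough paraphrase for small sets, not his actual implementation), where values holds the factor levels in question:

    draw_A1 <- function(values) {               # levels appearing in the node
      n <- length(values)
      k <- sample(seq_len(2^n - 2), 1)          # random integer in 1 .. 2^n - 2
      bits <- as.integer(intToBits(k))[1:n]     # its bits decide membership
      values[bits == 1]                         # never empty, never full
    }
    draw_A2 <- function(values) {               # levels not appearing in the node
      values[runif(length(values)) < 0.5]       # fair coin per value; may be empty or full
    }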

Following Pierre's argument, in the current approach the extreme partitions are preferred over balanced partitions, because there are fewer subsets of the extreme sizes and each of them therefore gets a higher probability.

I think we should implement both methods, do a simple comparison, and decide on one method.


jakob-r commented on August 23, 2024

Nice to get a quick and helpful answer!

So, from my understanding, Pierre's A2 approach is not as extremely random as yours, because with A2 there is a tendency for both subsets to contain an equal number of classes, so we should favour your method?

I'm a bit afraid that the question of which method is better cannot be answered by "a simple comparison", because we don't know how the data influence the result and which data are really representative. So the usual problems... But maybe we can find a theoretical motivation? 🤔

In the end we don't want the user to have yet another parameter to think about. So we can either just mimic the extraTrees behaviour and be happy, or we put in a bit more effort and do a bigger study that might lead to a helpful paper?

Personally I think it's interesting enough to have both options available (in a developer version of ranger?).


mnwright commented on August 23, 2024

Changed to Pierre's original method, removed old method and merged.


jakob-r commented on August 23, 2024

Perfect! So from my side this can be closed. Will come back at you as soon as we have some interesting results ;)

