
Comments (28)

mattansb commented on May 18, 2024

I was teaching yesterday and I was called out by students "You don't have delta-R2 available??"
This is embarrassing guys....


DominiqueMakowski commented on May 18, 2024

I guess that would make sense, as models' performance indices are very often used to compare models... I wonder about the syntax tho: do we need something implicit like r2(..., named_args), with which we could do r2(model1, model2, model3, named_arg=FALSE), or is it better to leave the current behaviour and accept lists of models as the first argument, r2(c(model1, model2, model3), named_arg=FALSE)?

This could later be extended to model_performance() (or a new compare_performance()? That would open a new type of function, compare_*), which would compute and compare all possible indices (i.e., all indices that are compatible with all the models).


mattansb commented on May 18, 2024

I suggest implementing this in compare_performance() as R2_delta, for linear models only (the only ones for which this really makes sense, as for GLMs the total "variance" on the latent scale increases with model complexity... which is weird...).

We might then also add Cohens_f2:

[image: Cohen's f² formula, f² = (R²_AB − R²_A) / (1 − R²_AB)]

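A minimal sketch of the two indices discussed here, for two nested linear models; r2_delta() is a hypothetical helper for illustration, not the package's API:

```r
# Hypothetical sketch: delta-R2 and Cohen's f2 for two nested lm models.
r2_delta <- function(model_small, model_large) {
  r2_s <- summary(model_small)$r.squared
  r2_l <- summary(model_large)$r.squared
  delta <- r2_l - r2_s
  # Cohen's f2 for the increment: (R2_AB - R2_A) / (1 - R2_AB)
  c(R2_delta = delta, Cohens_f2 = delta / (1 - r2_l))
}

m1 <- lm(mpg ~ wt, data = mtcars)
m2 <- lm(mpg ~ wt + hp, data = mtcars)
r2_delta(m1, m2)
```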

strengejacke commented on May 18, 2024

Maybe compare_models() is better located in report, and would include both parameters and performance indices.


strengejacke commented on May 18, 2024

Currently, r2() is defined as r2 <- function(model, ...) {. I would say we just make it r2(...) (or probably r2(x, ...)) and capture the models with list(...) or so inside the function.

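A sketch of that suggested signature, simplified to plain lm models for illustration:

```r
# Collect any number of models from the ellipsis; the R2 extraction here is
# simplified (summary()$r.squared only works for lm-like models).
r2 <- function(x, ...) {
  models <- c(list(x), list(...))
  vapply(models, function(m) summary(m)$r.squared, numeric(1))
}

r2(lm(mpg ~ wt, data = mtcars), lm(mpg ~ wt + hp, data = mtcars))
```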

DominiqueMakowski commented on May 18, 2024

Agreed.

For the sake of flexibility, we might want to check the provided arguments to see if they are (compatible) models (i.e. statistical models). Maybe we could add a small is_model() in insight, and run this check on the provided arguments in r2(...) (e.g. models_to_compare <- allargsinellipsis[is_model(allargsinellipsis)]) to increase stability?

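A toy version of that idea; the class list in is_model() below is purely illustrative, and the R2 extraction again assumes lm-like models:

```r
# Hypothetical is_model() check, filtering the ellipsis down to model objects.
is_model <- function(x) {
  inherits(x, c("lm", "glm", "coxph", "merMod", "stanreg", "brmsfit"))
}

r2 <- function(...) {
  args <- list(...)
  # keep only the arguments that actually are models
  models_to_compare <- args[vapply(args, is_model, logical(1))]
  vapply(models_to_compare, function(m) summary(m)$r.squared, numeric(1))
}
```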

strengejacke commented on May 18, 2024

Sounds good! is_model() would be a long list of inherits() commands, I guess ;-)
For comparison, should we also check if all models are of the same type? (i.e. no mixing of lm, glm, coxph, etc.)


strengejacke commented on May 18, 2024

I think we should check the input with all_models_equal() because it makes no sense to compare r-squared values from different distributional families.

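A toy version of such a check, comparing only the primary class of each model; a real all_models_equal() would need to be more thorough (families, link functions, etc.):

```r
# Compare the primary S3 class of every model passed in.
all_models_equal <- function(...) {
  classes <- vapply(list(...), function(m) class(m)[1], character(1))
  length(unique(classes)) == 1L
}

all_models_equal(
  lm(mpg ~ wt, data = mtcars),
  glm(am ~ wt, data = mtcars, family = binomial)
)
#> FALSE
```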

strengejacke commented on May 18, 2024

Might be something for later than the initial release... It requires some work, esp. for more complex R2 measures like the Bayesian or Nakagawa R2.


DominiqueMakowski commented on May 18, 2024

Agree, this can be improved later on


DominiqueMakowski commented on May 18, 2024

The R2 diff could nicely fit in test_performance(), especially if there are any CIs/significance tests we could derive from it 😏

I wonder if, in general, we should have a difference_performance() utility function, or a difference=TRUE arg in compare_performance() that just displays the differences instead of the raw indices? (which would basically be sapply(compare_performance(...), diff))

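A rough sketch of what such a difference=TRUE option might show, assuming compare_performance() returns its indices as a data frame (as it does):

```r
library(performance)

cp <- compare_performance(
  lm(mpg ~ wt, data = mtcars),
  lm(mpg ~ wt + hp, data = mtcars)
)
# successive differences of each numeric index, per consecutive model pair
num <- vapply(cp, is.numeric, logical(1))
sapply(cp[num], diff)
```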

strengejacke commented on May 18, 2024

"we should have a difference_performance()"

No.

"or a difference=TRUE arg in compare_performance() that just displays the difference instead of the raw indices?"

Difference to what? Models as entered in their order? I'm not sure this is informative for most indices, is it?


bwiernik commented on May 18, 2024

Olkin, Alf, and Graf have each developed CIs of various flavors for R2 differences.


bwiernik commented on May 18, 2024

Haha. Tell your students that it's honestly not a clear problem to solve when you aren't incorporating the incremental validity into your model (à la some specific SEM models) or via bootstrapping 😜


DominiqueMakowski commented on May 18, 2024

We should definitely have some difference-related capabilities, and R2 seems like the best place to start


bwiernik commented on May 18, 2024

ΔR², ΔR, and √ΔR are all good statistics to that end. As a start, bootstrapping would be a great method for intervals/p-values. (Honestly, bootstrap estimators are often the best compared to the delta method; proper analytic intervals are a pain.)

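A minimal percentile-bootstrap sketch for a ΔR² interval between two nested lm models; the function name and defaults are illustrative only:

```r
# Resample rows, refit both models, and collect the R2 difference.
boot_delta_r2 <- function(data, f_small, f_large, n_boot = 2000) {
  deltas <- replicate(n_boot, {
    d <- data[sample(nrow(data), replace = TRUE), ]
    summary(lm(f_large, data = d))$r.squared -
      summary(lm(f_small, data = d))$r.squared
  })
  quantile(deltas, c(0.025, 0.975))  # percentile 95% CI
}

set.seed(123)
boot_delta_r2(mtcars, mpg ~ wt, mpg ~ wt + hp)
```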

strengejacke commented on May 18, 2024

Is this a valid or useful measure at all?
https://twitter.com/brodriguesco/status/1461604815759417344?s=20


bwiernik commented on May 18, 2024

That tweet is just referencing the distinction between R2 and adjusted/cross-validated R2


strengejacke commented on May 18, 2024

cross-validated R2?


bwiernik commented on May 18, 2024

Out-of-sample R2 (either actually computed in a hold-out sample, or via leave-one-out, or via an analytic approximation).

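For linear models, a leave-one-out R² can be sketched with the classic PRESS shortcut via hat values; loo_r2() here is illustrative, not the performance package's API:

```r
# LOO residual for lm: e_i / (1 - h_ii), so no refitting is needed.
loo_r2 <- function(model) {
  y <- model.response(model.frame(model))
  press <- sum((residuals(model) / (1 - hatvalues(model)))^2)
  1 - press / sum((y - mean(y))^2)
}

loo_r2(lm(mpg ~ wt + hp, data = mtcars))
```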

bwiernik commented on May 18, 2024

https://journals.sagepub.com/doi/abs/10.1177/1094428106292901


strengejacke commented on May 18, 2024

Not sure why this is out-of-sample/cross-validated, since predictors are added, but no additional/different data sets are used?


bwiernik commented on May 18, 2024

I mean that his tweet is lamenting that in-sample R2 is positively biased. It is absolutely meaningful to compare models on R2; the solution to his concern is that an unbiased R2 estimator should be used.


strengejacke commented on May 18, 2024

Ah, ok. Was a bit confused, because we were not discussing cross-validated R2 here.


DominiqueMakowski commented on May 18, 2024

Saw a recent tweet where @bwiernik mentioned R2 differences. I'd suggest implementing it first in a function called test_r2(), and then perhaps incorporating it into test_performance().


bwiernik commented on May 18, 2024

And then compare_models()


strengejacke commented on May 18, 2024

compare_performance() ;-)
compare_models() is an alias for compare_parameters() (and hence located in parameters).


bwiernik commented on May 18, 2024

That would be less confusing

