Comments (8)
library(bayestestR)
library(report)
# Nested models of increasing complexity, compared against the intercept-only null
mo0 <- lm(Sepal.Length ~ 1, data = iris)
mo1 <- lm(Sepal.Length ~ Species, data = iris)
mo2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
mo3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris)

BFmodels <- bayesfactor_models(mo1, mo2, mo3, denominator = mo0)
inc_bf <- bayesfactor_inclusion(BFmodels, prior_odds = c(1, 2, 3), match_models = TRUE)
bf_report <- report(inc_bf)
to_table(bf_report) # same output for both full = FALSE and full = TRUE
#> Terms                | Pr(prior) | Pr(posterior) | Inclusion BF
#> ----------------------------------------------------------------
#> Species              |     0.429 |         0.946 |    3.895e+55
#> Petal.Length         |     0.286 |         0.946 |    6.891e+26
#> Species:Petal.Length |     0.429 |         0.054 |        0.038
#>
#> Across matched models only, with custom prior odds (1, 2, 3).
to_text(bf_report, full = FALSE)
We found extreme evidence (BF > 999) in favour of including Species; extreme evidence (BF > 999) in favour of including Petal.Length; strong evidence (BF = 0.04) against including Species:Petal.Length.
to_text(bf_report, full = TRUE)
Bayesian model averaging (BMA) was used to obtain the average evidence for each predictor. Since each model has a prior probability (here we used subjective prior odds of 1, 2, 3), it is possible to sum the prior probability of all models that include a predictor of interest (the prior inclusion probability), and of all models that do not include that predictor (the prior exclusion probability). After the data are observed, we can similarly consider the sums of the posterior models' probabilities to obtain the posterior inclusion probability and the posterior exclusion probability. The change from prior to posterior inclusion odds is the Inclusion Bayes factor. For each predictor, averaging was done only across models that did not include any interactions with that predictor; additionally, for each interaction predictor, averaging was done only across models that contained the main effect from which the interaction predictor was comprised. This was done to prevent Inclusion Bayes factors from being contaminated with non-relevant evidence (see Mathot, 2017). We found extreme evidence (BF > 999) in favour of including Species; extreme evidence (BF > 999) in favour of including Petal.Length; strong evidence (BF = 0.04) against including Species:Petal.Length.
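The prior-to-posterior inclusion odds described above can be sketched in a few lines of base R. Everything here is illustrative: the model-wise Bayes factors, prior odds, and inclusion indicator are made-up numbers rather than the values from the iris example, and the matched-models restriction is ignored.

```r
# Hypothetical per-model Bayes factors (vs. the null) and prior odds
bf_models  <- c(null = 1, species = 1e10, petal = 1e9, both = 1e12)
prior_odds <- c(null = 1, species = 1,    petal = 1,   both = 1)
includes   <- c(FALSE, TRUE, FALSE, TRUE)  # does each model contain the term?

# Normalize prior odds to prior model probabilities
prior_p <- prior_odds / sum(prior_odds)

# Posterior model probabilities: BF * prior odds, renormalized
post_p <- bf_models * prior_odds
post_p <- post_p / sum(post_p)

# Sum probabilities over models that include the term
prior_incl <- sum(prior_p[includes])
post_incl  <- sum(post_p[includes])

# Inclusion BF = change from prior to posterior inclusion odds
incl_bf <- (post_incl / (1 - post_incl)) / (prior_incl / (1 - prior_incl))
incl_bf
```

With these toy numbers the inclusion BF comes out at roughly 1010, i.e. strong evidence for including the term under these assumed model BFs.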
from report.
@mattansb what's the status of this issue?
Nothing has changed... If you're looking for an initial submission, this is probably good enough for now (with inclusion + model).
Parameter-wise reporting has been implemented elsewhere, I think? Maybe in describe_posterior's reporting? At the very least, I remember working on something like that with Dom.
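For context, parameter-wise description is already available in bayestestR itself. A minimal sketch, assuming bayestestR is installed; the posterior here is a simulated stand-in, not draws from a fitted model:

```r
library(bayestestR)

set.seed(123)
# Stand-in for MCMC draws of a single parameter
posterior <- rnorm(1000, mean = 0.4, sd = 0.2)

# Parameter-wise summary: centrality, CI, probability of direction, ROPE
describe_posterior(posterior, centrality = "median", ci = 0.95)
```

The same call works on full model objects (e.g. rstanarm or brms fits), returning one row per parameter.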
That is in, I think.
Only bf_models and bf_inclusion have their own methods, I think.
Are bf_parameters included in Bayesian reporting?
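For reference, bayestestR does expose parameter-level (Savage-Dickey) Bayes factors directly, independent of report's methods. A minimal sketch with simulated prior and posterior draws; the distributions are made up for illustration:

```r
library(bayestestR)

set.seed(444)
prior     <- rnorm(10000, mean = 0, sd = 1)    # draws from the prior
posterior <- rnorm(10000, mean = 1, sd = 0.7)  # stand-in posterior draws

# Savage-Dickey density ratio at the null value
bayesfactor_parameters(posterior, prior = prior, null = 0)
```

Since the posterior has shifted away from 0 relative to the prior, this yields a Bayes factor above 1 in favour of the alternative.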
No, Bayesian reporting is in its absolute minimal state and will need a lot of improvement.
But at least the API is now simpler, so it should be easier to fix and improve.
🙂
Related Issues (20)
- Unclear reporting
- The model's explanatory power is "substantial"
- What's the best way to provide appropriate attribution/citation?
- Support models of class `gamm`
- emmeans and beta regression support
- `report_participants()` should set age as numeric, accept more choices for gender
- report fails when model formula built with stats::reformulate
- oneway.test: `Error in paste0(out$interpretation, " (", out$statistics, ")"): object 'out' not found`
- Add support for `kruskal.test()`
- Error: bad 'data': object 'data_std' not found
- What is the expected behaviour for report(estimate_contrasts(model))?
- To-do: Clean-up names in outputs (`airquality$Month` instead of `as.factor(airquality$Month)`)
- Why do the standardized beta values and CIs of a glm poisson regression model not differ from the unstandardized ones?
- New CRAN release?
- When using stats::t.test, the report() and report_table() function output displays "95 % CI" even if, say, conf.level = 0.975
- CRAN submission revdep check failed (*** Strong rev. depends ***: easystats SqueakR)
- report does not work with BayesFactor models
- report_sample(): add indices names in caption instead of table
- support for quantile regression
- Report Summary for Time Series Model Stats