This is what I get right now:
tot <- rep(10, 100)
suc <- rbinom(100, prob = 0.9, size = tot)
df <- data.frame(tot, suc)
df$prop <- suc / tot

# Two-column response: successes and failures
mod1 <- glm(cbind(suc, tot - suc) ~ 1,
  family = binomial,
  data = df
)
performance::check_posterior_predictions(mod1)

# Proportion response with the totals as weights
mod2 <- glm(prop ~ 1,
  family = binomial,
  data = df,
  weights = tot
)
performance::check_posterior_predictions(mod2)

# Two-column response with the totals in the second column
# (glm() reads the second column as the failure count)
mod3 <- glm(cbind(suc, tot) ~ 1,
  family = binomial,
  data = df
)
performance::check_posterior_predictions(mod3)
And this is what I get after changing the code to your suggestion:
tot <- rep(10, 100)
suc <- rbinom(100, prob = 0.9, size = tot)
df <- data.frame(tot, suc)
df$prop <- suc / tot

mod1 <- glm(cbind(suc, tot - suc) ~ 1,
  family = binomial,
  data = df
)
performance::check_posterior_predictions(mod1)
#> Warning: Maximum value of original data is not included in the
#> replicated data.
#> Model may not capture the variation of the data.

mod2 <- glm(prop ~ 1,
  family = binomial,
  data = df,
  weights = tot
)
performance::check_posterior_predictions(mod2)

mod3 <- glm(cbind(suc, tot) ~ 1,
  family = binomial,
  data = df
)
performance::check_posterior_predictions(mod3)
#> Warning: Maximum value of original data is not included in the
#> replicated data.
#> Model may not capture the variation of the data.
I'll look into this, but right now I'm not sure which approach is more appropriate.
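As a side note (not part of the original report), the successes/failures form and the proportion-plus-weights form are two parameterizations of the same binomial likelihood, so they produce identical fits; only `cbind(suc, tot)` is genuinely different, because `glm()` reads the second column as the failure count. A quick base-R check:

```r
set.seed(123)  # arbitrary seed
tot <- rep(10, 100)
suc <- rbinom(100, prob = 0.9, size = tot)
df2 <- data.frame(tot, suc, prop = suc / tot)

m1 <- glm(cbind(suc, tot - suc) ~ 1, family = binomial, data = df2)
m2 <- glm(prop ~ 1, family = binomial, weights = tot, data = df2)
m3 <- glm(cbind(suc, tot) ~ 1, family = binomial, data = df2)

all.equal(coef(m1), coef(m2))  # TRUE: identical fits
coef(m3)  # differs: implied probability is suc / (suc + tot), not suc / tot
```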
from performance.
Thanks for having a look at this. It appears I only gave a partial fix. Looking again, line 288,
out$y <- response[, 1] / response[, 2]
would also need to become
out$y <- response[, 1] / rowSums(response)
so that it is calculated in the same way.
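A small base-R illustration of why the denominator matters (toy numbers, not from the issue): with a `cbind(successes, failures)` response, column 2 holds the failures, so dividing by it does not yield a proportion.

```r
suc <- c(9, 8, 10)
tot <- c(10, 10, 10)
response <- cbind(suc, tot - suc)  # column 1 = successes, column 2 = failures

response[, 1] / response[, 2]      # 9 4 Inf -- successes per failure, not proportions
response[, 1] / rowSums(response)  # 0.9 0.8 1.0 -- the observed proportions
```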
Ok, this would be the result:
set.seed(123)
tot <- rep(10, 100)
suc <- rbinom(100, prob = 0.9, size = tot)
df <- data.frame(tot, suc)
df$prop <- suc / tot

mod1 <- glm(cbind(suc, tot - suc) ~ 1,
  family = binomial,
  data = df
)
mod2 <- glm(prop ~ 1,
  family = binomial,
  data = df,
  weights = tot
)
mod3 <- glm(cbind(suc, tot) ~ 1,
  family = binomial,
  data = df
)
mod4 <- glm(am ~ 1,
  family = binomial,
  data = mtcars
)

performance::check_predictions(mod1)
performance::check_predictions(mod2)
performance::check_predictions(mod3)
performance::check_predictions(mod4)
Thank you. That looks like what I would expect.
It would be nice to improve the x-axis label, but I'm not sure what would work and still be easy to implement.
After fixing a bug in insight, this is how it looks with the current implementation and with your suggested fix.
set.seed(1)
tot <- rep(10, 100)
suc <- rbinom(100, prob = 0.9, size = tot)
df <- data.frame(tot, suc)
df$prop <- suc / tot

mod1 <- glm(cbind(suc, tot - suc) ~ 1,
  family = binomial,
  data = df
)
mod2 <- glm(prop ~ 1,
  family = binomial,
  data = df,
  weights = tot
)
mod3 <- glm(cbind(suc, tot) ~ 1,
  family = binomial,
  data = df
)
mod4 <- glm(am ~ 1,
  family = binomial,
  data = mtcars
)
[Plots for mod1 through mod4, each shown twice: "Current (modX)" with the current implementation and "New (modX)" with the suggested fix]