Think about whether to apply it to the completed data matrix or to the imputed values only. Because there is no variance-covariance matrix of the imputations, just correlations, the estimates will be more realistic (compared to the population) for the completed data. If there are too few missing cases, using only the imputations yields unreliable estimates, but using the completed data may mask non-convergence.
The magnitude of the AC can be interpreted qualitatively or quantitatively. Quantitative evaluation of the AC entails comparing the observed ACs to the critical bound of a two-tailed 95% interval under white noise, ±1.96 divided by the square root of the number of iterations.
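A minimal sketch of that check, assuming a single scalar chain and the white-noise null (the function names are mine, not from mice):

```python
import numpy as np

def lag1_autocorrelation(chain):
    """Sample lag-1 autocorrelation of a single chain."""
    chain = np.asarray(chain, dtype=float)
    centered = chain - chain.mean()
    return (centered[:-1] * centered[1:]).sum() / (centered ** 2).sum()

def ac_is_significant(chain, z=1.96):
    """Compare the observed AC to the two-tailed 95% white-noise
    bound +/- z / sqrt(T), with T the number of iterations."""
    bound = z / np.sqrt(len(chain))
    return abs(lag1_autocorrelation(chain)) > bound

trending = np.arange(200, dtype=float)  # a clearly non-converged chain
print(ac_is_significant(trending))      # -> True: AC far above 1.96/sqrt(200)
```

An AC outside the bound flags a chain that is still drifting; an AC inside it is consistent with stationary noise.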
I want to get a univariate measure of bias to compare to the univariate convergence diagnostics.
If I understand correctly, the variable means are easy: stack the imputed datasets and take the average (cf. FIMD par. 5.1.3: "While the estimated regression coefficients are unbiased, we cannot trust the standard errors, ..."). So I thought I needed a different strategy for the standard deviations/variances of the completed variables, but this thread suggests otherwise: "If you just want to report the descriptive SD of a variable, you can just take the average of the within imputation SDs." (https://lists.gking.harvard.edu/pipermail/amelia/2016-July/001249.html).
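Both pooling rules can be sketched on toy data (the arrays are hypothetical, just m = 3 imputed versions of one variable):

```python
import numpy as np

# Toy example: m = 3 completed versions of one variable (hypothetical data).
imputed = [
    np.array([1.0, 2.0, 3.0, 4.0]),
    np.array([1.5, 2.5, 2.5, 4.5]),
    np.array([0.5, 2.0, 3.5, 4.0]),
]

# Pooled mean: stacking the m completed datasets and averaging is the
# same as averaging the m within-imputation means (equal-sized datasets).
pooled_mean = np.mean([d.mean() for d in imputed])

# Descriptive pooled SD: average the within-imputation SDs,
# as suggested in the Amelia mailing-list thread.
pooled_sd = np.mean([d.std(ddof=1) for d in imputed])

print(pooled_mean, pooled_sd)
```

Note that this averaged SD is a descriptive summary only; it ignores between-imputation variance and is not a substitute for Rubin's rules when standard errors are needed.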
I struggle with the last three sentences of this paragraph. All three contain a ':'. Help?
The goal of this research project is to develop novel methodology and guidelines for evaluating MI methods. These evaluation measures and guidelines will subsequently be implemented in an interactive evaluation framework for multiple imputation to aid applied researchers in drawing valid inference from incomplete datasets. This note provides the theoretical foundation for one of these evaluation measures that is vital in evaluating MI procedures: a diagnostic to assess convergence of the MI algorithm. More specifically, this note focuses on the MI algorithm that is implemented in the R package mice: ‘Multiple Imputation using Chained Equations’ (MICE). The research question that will be addressed is: ‘How to diagnose convergence of the multiple imputation algorithm MICE?’.
Hi! In one of the MICE meetings we discussed running my simulation on convergence with very low or even no correlations between the predictor variables in the analysis. The problem is that I do not remember whether this was related to the actual performance of $\widehat{R}$, or only for me to find out what would happen with respect to the theoretically implausible values below one that I kept getting. I have now done so, but nothing interesting happened. @gerkovink @stefvanbuuren can you enlighten me? (I'll ask you on Monday anyway.)
Q: The package by Su et al. (2011), 'mi', does include a convergence statistic: "If the $\hat{R}$ statistic is smaller than 1.1, (i.e., the difference of the within and between variance is trivial), the imputation is considered converged (Gelman, Carlin, Stern, and Rubin 2004)". As far as I know this is just the Gelman-Rubin (G-R) statistic. Does this mean we could just implement it as well? I thought it did not apply directly to MI data?
A: I don't know how to start from an overdispersed dataset, so the between-chain variance is very high and then you try to ... If there is convergence you'll find G-R near 1, but that's no guarantee (see the new papers on R-hat for MCMC models). If you use G-R to assess convergence you'll end up with 30-50 iterations, but that might be superfluous. --> too little im
Q: Should I aim for publication in The R Journal, or Journal of Statistical Software? (I checked, shinystan does not have an official, published, peer-reviewed publication at all).
A: The latter, because it's more scientific; The R Journal is more news-like.