pic {missingHE} R Documentation
Predictive information criteria for Bayesian models fitted in JAGS using the function selection, pattern or hurdle
Description
Efficient approximate leave-one-out cross validation (LOO), deviance information criterion (DIC) and widely applicable information criterion (WAIC) for Bayesian models, calculated on the observed data.
Usage
pic(x, criterion = "dic", module = "total")
Arguments
x
An object created by the function selection, pattern or hurdle, containing the posterior results of a Bayesian model fitted in JAGS.

criterion
Type of information criterion to be produced. Available choices are 'dic' for the Deviance Information Criterion, 'waic' for the Widely Applicable Information Criterion and 'loo' for the Leave-One-Out Information Criterion.

module
The modules with respect to which the information criteria should be computed. The default is 'total', which computes the measures with respect to the whole model.
Details
The Deviance Information Criterion (DIC), Leave-One-Out Information Criterion (LOOIC) and the Widely Applicable Information Criterion (WAIC) are methods for estimating out-of-sample predictive accuracy from a Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameters. DIC is computationally simple to calculate but it is known to have some problems, arising in part from it not being fully Bayesian in that it is based on a point estimate. LOOIC can be computationally expensive but can be easily approximated using importance weights that are smoothed by fitting a generalised Pareto distribution to the upper tail of the distribution of the importance weights. WAIC is fully Bayesian and closely approximates Bayesian cross-validation. Unlike DIC, WAIC is invariant to parameterisation and also works for singular models. In finite cases, WAIC and LOO give similar estimates, but for influential observations WAIC underestimates the effect of leaving out one observation.
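As a minimal sketch of the general idea (not the internal implementation of pic), the measures can be computed from an S x N matrix of pointwise log-likelihood values (S posterior draws, N observations) using the loo package. The toy data, the crude normal posterior approximation and the object names below are assumptions made purely for illustration, and the DIC shown uses the classical pD = d_bar - d_hat formula, which may differ from the exact formula used by JAGS.

# Minimal sketch, assuming a simple normal model with toy data
library(loo)

set.seed(1)
y <- rnorm(30, mean = 1, sd = 2)          # toy observed data
S <- 1000                                 # number of posterior draws
n <- length(y)
mu    <- rnorm(S, mean(y), sd(y) / sqrt(n))               # crude posterior draws for the mean
sigma <- sd(y) * sqrt((n - 1) / rchisq(S, n - 1))         # crude posterior draws for the sd

# S x N matrix of pointwise log-likelihood values at the posterior draws
log_lik <- sapply(y, function(yi) dnorm(yi, mu, sigma, log = TRUE))

waic_res <- waic(log_lik)              # elpd_waic, p_waic, waic (+ standard errors)
loo_res  <- loo(log_lik)               # PSIS-LOO: elpd_loo, p_loo, looic (+ standard errors)
pareto_k <- pareto_k_values(loo_res)   # shape parameters k of the generalised Pareto fits

# Classical DIC (pD = d_bar - d_hat); JAGS' exact formula may differ
deviance_draws <- -2 * rowSums(log_lik)                          # deviance at each draw
d_bar <- mean(deviance_draws)                                    # posterior mean deviance
d_hat <- -2 * sum(dnorm(y, mean(mu), mean(sigma), log = TRUE))   # deviance at posterior means
pD    <- d_bar - d_hat                                           # effective number of parameters
dic   <- d_bar + pD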
Value
A named list containing different predictive information criteria results and quantities, according to the value of criterion. In all cases, the measures are computed on the observed data for the specific modules of the model selected in module.
- d_bar
Posterior mean deviance (only if criterion is 'dic').

- pD
Effective number of parameters, calculated with the formula used by JAGS (only if criterion is 'dic').

- dic
Deviance Information Criterion, calculated with the formula used by JAGS (only if criterion is 'dic').

- d_hat
Deviance evaluated at the posterior mean of the parameters, calculated with the formula used by JAGS (only if criterion is 'dic').

- elpd, elpd_se
Expected log pointwise predictive density and its standard error, calculated on the observed data for the model nodes indicated in module (only if criterion is 'waic' or 'loo').

- p, p_se
Effective number of parameters and its standard error, calculated on the observed data for the model nodes indicated in module (only if criterion is 'waic' or 'loo').

- looic, looic_se
The leave-one-out information criterion and its standard error, calculated on the observed data for the model nodes indicated in module (only if criterion is 'loo').

- waic, waic_se
The widely applicable information criterion and its standard error, calculated on the observed data for the model nodes indicated in module (only if criterion is 'waic').

- pointwise
A matrix containing the pointwise contributions of each of the above measures, calculated on the observed data for the model nodes indicated in module (only if criterion is 'waic' or 'loo').

- pareto_k
A vector containing the estimates of the shape parameter k of the generalised Pareto distribution fitted to the importance ratios of each leave-one-out distribution, calculated on the observed data for the model nodes indicated in module (only if criterion is 'loo'). See loo for details about interpreting k.
Author(s)
Andrea Gabrio
References
Plummer, M. (2003). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling.
Vehtari, A., Gelman, A. and Gabry, J. (2016a). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing. Advance online publication.
Vehtari, A., Gelman, A. and Gabry, J. (2016b). Pareto smoothed importance sampling. arXiv preprint.
Gelman, A., Hwang, J. and Vehtari, A. (2014). Understanding predictive information criteria for Bayesian models. Statistics and Computing 24, 997-1016.
Watanabe, S. (2010). Asymptotic equivalence of Bayes cross-validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research 11, 3571-3594.
See Also
selection, pattern, hurdle
Examples
# For examples see the function selection, pattern or hurdle