SSGL_gibbs {SSGL}    R Documentation

Gibbs sampling for Spike-and-Slab Group Lasso in Group-Regularized Generalized Linear Models (GLMs)

Description

The SSGL_gibbs function implements Gibbs sampling for group-regularized GLMs with the spike-and-slab group lasso (SSGL) prior of Bai et al. (2022) and Bai (2023). The identity link function is used for Gaussian regression, the logit link is used for binomial regression, and the log link is used for Poisson regression.

For binomial and Poisson regression, Polya-gamma data augmentation (Polson et al., 2013) is used to draw MCMC samples. The details are described in Bai (2023).

Note that the SSGL_gibbs function returns only the posterior mean, the 95 percent posterior credible intervals, and (if save_samples=TRUE) the posterior samples for the elements of the model parameter \beta and the predicted mean response \mu_{test} = E(Y_{test}). This function does not perform variable selection.

It is recommended that you use the SSGL function for variable selection and MAP estimation. If uncertainty quantification is also desired, then SSGL_gibbs can be used.

Usage

SSGL_gibbs(Y, X, groups, family=c("gaussian","binomial","poisson"), 
           X_test, group_weights, lambda0=5, lambda1=1, 
           a=1, b=length(unique(groups)),
           burn=1000, n_mcmc=2000, save_samples=TRUE) 

Arguments

Y

n \times 1 vector of responses for training data.

X

n \times p design matrix for training data, where the jth column corresponds to the jth overall feature.

groups

p-dimensional vector of group labels. The jth entry in groups should contain either the group number or the factor level name of the group that the feature in the jth column of X belongs to. groups must be either an integer vector or a factor.

family

exponential dispersion family of the response variables. Allows for "gaussian", "binomial", and "poisson".

X_test

n_{test} \times p design matrix for test data to calculate predictions. X_test must have the same number of columns as X, but not necessarily the same number of rows. If no test data is provided or if in-sample predictions are desired, then the function automatically sets X_test=X in order to calculate in-sample predictions.

group_weights

group-specific, nonnegative weights for the penalty. Default is to use the square roots of the group sizes.

lambda0

spike hyperparameter \lambda_0 in the SSGL prior. Default is lambda0=5.

lambda1

slab hyperparameter \lambda_1 in the SSGL prior. Default is lambda1=1.

a

shape hyperparameter for the Beta(a,b) prior on the mixing proportion in the SSGL prior. Default is a=1.

b

shape hyperparameter for the Beta(a,b) prior on the mixing proportion in the SSGL prior. Default is b=length(unique(groups)), i.e. the number of groups.

burn

Number of warm-up MCMC samples to discard as burn-in. Default is burn=1000.

n_mcmc

Number of MCMC samples to save for posterior inference. Default is n_mcmc=2000.

save_samples

Boolean variable for whether or not to save the MCMC samples for \beta and the predicted mean response \mu_{test} = E(Y_{test}). Default is save_samples=TRUE.

Value

The function returns a list containing the following components:

beta_hat

estimated posterior mean of p \times 1 regression coefficient vector \beta.

Y_pred_hat

estimated posterior mean of n_{test} \times 1 vector of predicted mean response values \mu_{test} = E(Y_{test}) based on the test data in X_test (or training data X if no argument was specified for X_test).

beta_lower

p \times 1 vector of lower endpoints of the 95 percent posterior credible intervals for \beta.

beta_upper

p \times 1 vector of upper endpoints of the 95 percent posterior credible intervals for \beta.

Y_pred_lower

n_{test} \times 1 vector of lower endpoints of the 95 percent posterior credible intervals for \mu_{test} = E(Y_{test}).

Y_pred_upper

n_{test} \times 1 vector of upper endpoints of the 95 percent posterior credible intervals for \mu_{test} = E(Y_{test}).

beta_samples

p \times n_mcmc matrix of saved posterior samples for \beta. The jth row of beta_samples consists of the posterior samples for the jth regression coefficient in \beta. This is not returned if save_samples=FALSE.

Y_pred_samples

n_{test} \times n_mcmc matrix of saved posterior samples for \mu_{test} = E(Y_{test}). The ith row of Y_pred_samples consists of the posterior samples of the predicted mean response \mu_{i,test} = E(Y_{i,test}) for the ith test point. This is not returned if save_samples=FALSE.

References

Bai, R. (2023). "Bayesian group regularization in generalized linear models with a continuous spike-and-slab prior." arXiv pre-print arXiv:2007.07021.

Bai, R., Moran, G. E., Antonelli, J. L., Chen, Y., and Boland, M. R. (2022). "Spike-and-slab group lassos for grouped regression and sparse generalized additive models." Journal of the American Statistical Association, 117: 184-197.

Polson, N. G., Scott, J. G., and Windle, J. (2013). "Bayesian inference for logistic models using Polya-gamma latent variables." Journal of the American Statistical Association, 108: 1339-1349.

Examples

## Generate data
set.seed(1)
X = matrix(runif(200*17), nrow=200)
X_test = matrix(runif(20*17), nrow=20)

n = dim(X)[1]
n_test = dim(X_test)[1]

groups = c(1,1,1,2,2,2,2,3,3,3,4,4,5,5,6,6,6)
true_beta = c(-2,2,2,0,0,0,0,0,0,0,0,0,2.5,-2.5,0,0,0)
Y = crossprod(t(X), true_beta) + rnorm(n)

## Fit SSGL model. Smaller values of burn and n_mcmc are used here to keep the
## example fast; in practice, use at least the defaults burn=1000 and n_mcmc=2000

SSGL_mod = SSGL_gibbs(Y, X, groups, family="gaussian", X_test, burn=500, n_mcmc=1000)

## Evaluate results
cbind("True Beta" = true_beta, 
      "Posterior Mean" = SSGL_mod$beta_hat, 
      "95 CI lower" = SSGL_mod$beta_lower, 
      "95 CI upper" = SSGL_mod$beta_upper)

## Predictions on test data
cbind("Predicted E(Y)" = SSGL_mod$Y_pred_hat, 
      "95 CI lower" = SSGL_mod$Y_pred_lower, 
      "95 CI upper" = SSGL_mod$Y_pred_upper)


## Example with binary logistic regression

## Generate data
set.seed(123)
X = matrix(runif(200*16), nrow=200)
X_test = matrix(runif(50*16), nrow=50)
n = dim(X)[1]
n_test = dim(X_test)[1]
groups = c(1,1,1,1,2,2,2,2,3,4,4,5,5,6,6,6)
true_beta = c(-2,2,2,-2,0,0,0,0,0,0,0,2.5,-2.5,0,0,0)

## Generate binary responses
eta = crossprod(t(X), true_beta)
Y = rbinom(n, 1, 1/(1+exp(-eta)))

## Fit SSGL logistic model
SSGL_logistic_mod = SSGL_gibbs(Y, X, groups, family="binomial", X_test)

## Evaluate results
cbind("True Beta" = true_beta, 
      "Posterior Mean" = SSGL_logistic_mod$beta_hat, 
      "95 CI lower" = SSGL_logistic_mod$beta_lower, 
      "95 CI upper" = SSGL_logistic_mod$beta_upper)

## Predictions on test data
cbind("Predicted E(Y)" = SSGL_logistic_mod$Y_pred_hat, 
      "95 CI lower" = SSGL_logistic_mod$Y_pred_lower, 
      "95 CI upper" = SSGL_logistic_mod$Y_pred_upper)
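
A Poisson example follows the same pattern as the Gaussian and binomial examples above; the coefficient magnitudes below are illustrative choices, kept small so that the simulated counts under the log link stay moderate:

```r
## Example with Poisson regression

## Generate data
set.seed(456)
X = matrix(runif(200*16), nrow=200)
X_test = matrix(runif(50*16), nrow=50)
n = dim(X)[1]
groups = c(1,1,1,1,2,2,2,2,3,4,4,5,5,6,6,6)
true_beta = c(-1,1,1,-1,0,0,0,0,0,0,0,1,-1,0,0,0)

## Generate count responses with the log link
eta = crossprod(t(X), true_beta)
Y = rpois(n, exp(eta))

## Fit SSGL Poisson model
SSGL_poisson_mod = SSGL_gibbs(Y, X, groups, family="poisson", X_test)

## Evaluate results
cbind("True Beta" = true_beta,
      "Posterior Mean" = SSGL_poisson_mod$beta_hat,
      "95 CI lower" = SSGL_poisson_mod$beta_lower,
      "95 CI upper" = SSGL_poisson_mod$beta_upper)
```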


[Package SSGL version 2.0 Index]