gclm {gclm}                R Documentation
Description

Estimates a sparse continuous time Lyapunov parametrization of a covariance matrix using a lasso (L1) penalty.
Usage

gclm(
  Sigma,
  B = -0.5 * diag(ncol(Sigma)),
  C = rep(1, ncol(Sigma)),
  C0 = rep(1, ncol(Sigma)),
  loss = "loglik",
  eps = 0.01,
  alpha = 0.5,
  maxIter = 100,
  lambda = 0,
  lambdac = 0,
  job = 0
)

gclm.path(
  Sigma,
  lambdas = NULL,
  B = -0.5 * diag(ncol(Sigma)),
  C = rep(1, ncol(Sigma)),
  ...
)
Arguments

Sigma      covariance matrix
B          initial B matrix
C          diagonal of the initial C matrix
C0         diagonal of the penalization matrix
loss       one of "loglik" (default) or "frobenius"
eps        convergence threshold
alpha      line search parameter
maxIter    maximum number of iterations
lambda     penalization coefficient for B
lambdac    penalization coefficient for C
job        integer: 0, 1, 10 or 11
lambdas    sequence of lambda values
...        additional arguments passed to gclm
Details

gclm performs proximal gradient descent for the optimization problem

    argmin_{B,C}  L(Σ(B,C)) + λ ρ(B) + λ_C ||C - C0||_F^2

subject to B stable and C diagonal, where ρ(B) is the l1 norm of the off-diagonal elements of B.
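As a rough illustration, the objective above can be evaluated directly for a given B and diagonal C. The sketch below assumes the "loglik" loss is the Gaussian negative log-likelihood up to additive constants and that Σ(B,C) is obtained by solving the continuous Lyapunov equation B Σ + Σ B' + C = 0; it is not the package's internal implementation, and the function name is made up for illustration.

## Minimal sketch of the penalized objective (illustrative only).
## Assumes the "loglik" loss is the Gaussian negative log-likelihood up to
## constants and Sigma(B,C) solves B S + S B' + C = 0 (via vectorization).
objective_sketch <- function(Sigma, B, Cdiag, C0diag, lambda, lambdac) {
  p <- ncol(Sigma)
  ## solve (I %x% B + B %x% I) vec(S) = -vec(C) for S = Sigma(B, C)
  K <- kronecker(diag(p), B) + kronecker(B, diag(p))
  SigmaBC <- matrix(solve(K, -as.vector(diag(Cdiag, p))), p, p)
  ## Gaussian negative log-likelihood loss (up to an additive constant)
  loss <- as.numeric(determinant(SigmaBC, logarithm = TRUE)$modulus) +
    sum(diag(solve(SigmaBC, Sigma)))
  ## l1 penalty on the off-diagonal of B, squared penalty on C - C0
  loss + lambda * sum(abs(B[row(B) != col(B)])) +
    lambdac * sum((Cdiag - C0diag)^2)
}

S <- cov(matrix(rnorm(50 * 20), ncol = 20))
objective_sketch(S, B = -0.5 * diag(20), Cdiag = rep(1, 20),
                 C0diag = rep(1, 20), lambda = 0.1, lambdac = 0.01)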
gclm.path simply calls gclm iteratively with different lambda values. Warm starts are used, that is, in the i-th call to gclm the B and C matrices are initialized as the ones obtained in the (i-1)-th call.
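The warm-start scheme can be pictured roughly as follows. This is only a sketch, and it assumes that the list returned by gclm contains B and C components in the format accepted by the corresponding arguments.

## Rough sketch of the warm-start loop performed by gclm.path
## (illustrative; assumes the list returned by gclm has components
## B and C in the format accepted by the B and C arguments).
S <- cov(matrix(rnorm(50 * 20), ncol = 20))
lambdas <- c(0.5, 0.2, 0.1, 0.05)        # illustrative sequence
B <- -0.5 * diag(ncol(S))                # default initializations
C <- rep(1, ncol(S))
path <- vector("list", length(lambdas))
for (i in seq_along(lambdas)) {
  fit <- gclm(S, B = B, C = C, lambda = lambdas[i])
  B <- fit$B                             # warm start the next call
  C <- fit$C
  path[[i]] <- fit
}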
Value

For gclm: a list with the result of the optimization.

For gclm.path: a list of the same length as lambdas with the results of the optimization for the different lambda values.
Examples

x <- matrix(rnorm(50 * 20), ncol = 20)
S <- cov(x)

## l1 penalized log-likelihood
res <- gclm(S, eps = 0, lambda = 0.1, lambdac = 0.01)

## l1 penalized log-likelihood with fixed C
res <- gclm(S, eps = 0, lambda = 0.1, lambdac = -1)

## l1 penalized frobenius loss
res <- gclm(S, eps = 0, lambda = 0.1, loss = "frobenius")
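For a full solution path, gclm.path can be called directly; the lambda sequence below is purely illustrative.

## l1 penalized solution path over several lambda values
## (the lambda sequence here is illustrative only)
path <- gclm.path(S, lambdas = c(0.5, 0.2, 0.1, 0.05))
length(path)   # one optimization result per lambda value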