psdr {psvmSDR}	R Documentation
Unified linear principal sufficient dimension reduction methods
Description
A unified function for linear principal sufficient dimension reduction.
Usage
psdr(
  x,
  y,
  loss = "svm",
  h = 10,
  lambda = 1,
  eps = 1e-05,
  max.iter = 100,
  eta = 0.1,
  mtype = "m",
  plot = FALSE
)
Arguments
x
    input matrix, of dimension n x p; each row is an observation vector.

y
    response variable; either a continuous variable or a (+1, -1) coded binary response vector.

loss
    pre-specified loss function: one of the built-in losses (the default is "svm") or the name of a user-defined loss function given as a character string (see Details).

h
    the number of slices, with slicing probabilities equally spaced in (0, 1). The default value is 10.

lambda
    the cost parameter for the svm loss function. The default value is 1.

eps
    the threshold for stopping the iteration, with respect to the magnitude of the change of the derivative. The default value is 1.0e-5.

max.iter
    the maximum number of iterations for the optimization process. The default value is 100.

eta
    the learning rate for the gradient descent algorithm. The default value is 0.1.

mtype
    the margin type, either margin ("m") or residual ("r") (see Table 1 in the manuscript). Only needed when a user-defined loss is used. The default is "m".

plot
    if TRUE, a plot is produced. The default is FALSE.
Details
Two examples of user-defined losses are presented below (u represents a margin):

mylogit <- function(u, ...) log(1 + exp(-u))

myls <- function(u, ...) u^2

The argument u is a function variable (any symbol is possible), and the argument mtype for psdr() determines the type of margin, either margin (mtype = "m") or residual (mtype = "r"); mtype = "m" is the default. Users have to set mtype = "r" when applying a residual-type loss. Any additional parameters of the loss can be specified via the ... argument.
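As a sketch of a residual-type user-defined loss with an extra parameter passed through ..., the example below defines a hypothetical asymmetric squared-error loss myals(); the names myals and tau are illustrative and not part of the package. The psdr() call is guarded so the sketch only runs when psvmSDR is installed.

```r
## Hypothetical residual-type loss: asymmetric squared error.
## `tau` is an extra loss parameter, forwarded through `...`.
myals <- function(u, tau = 0.5, ...) ifelse(u > 0, tau, 1 - tau) * u^2

if (requireNamespace("psvmSDR", quietly = TRUE)) {
  library(psvmSDR)
  set.seed(1)
  n <- 200; p <- 5
  x <- matrix(rnorm(n * p), n, p)
  y <- x[, 1] + 0.2 * rnorm(n)
  ## Residual-type loss, so mtype must be switched to "r";
  ## `tau = 0.3` is passed through `...` to myals().
  fit <- psdr(x, y, loss = "myals", mtype = "r", tau = 0.3)
}
```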
Value
An object with S3 class "psdr". Details are listed below.

Mn
    the estimated working matrix, obtained by the cumulative outer product of the estimated parameters over the slices. It is not printed unless called explicitly.

evalues
    eigenvalues of the working matrix.

evectors
    eigenvectors of the working matrix Mn; the leading eigenvectors form an estimated basis of the central subspace.
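The returned components can be combined to form the reduced predictors. A minimal sketch, assuming psvmSDR is installed, using data as in the Examples section below; the choice d = 2 is illustrative:

```r
if (requireNamespace("psvmSDR", quietly = TRUE)) {
  library(psvmSDR)
  set.seed(1)
  n <- 200; p <- 5
  x <- matrix(rnorm(n * p, 0, 2), n, p)
  y <- x[, 1] / (0.5 + (x[, 2] + 1)^2) + 0.2 * rnorm(n)
  obj <- psdr(x, y)
  d <- 2                    # illustrative structural dimension
  B <- obj$evectors[, 1:d]  # estimated basis of the central subspace
  x.reduced <- x %*% B      # n x d reduced predictors
}
```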
Author(s)
Jungmin Shin, jungminshin@korea.ac.kr, Seung Jun Shin, sjshin@korea.ac.kr, and Andreas Artemiou, artemiou@uol.ac.cy
References
Artemiou, A. and Dong, Y. (2016)
Sufficient dimension reduction via principal Lq support vector machine,
Electronic Journal of Statistics 10: 783–805.
Artemiou, A., Dong, Y. and Shin, S. J. (2021)
Real-time sufficient dimension reduction through principal least
squares support vector machines, Pattern Recognition 112: 107768.
Kim, B. and Shin, S. J. (2019)
Principal weighted logistic regression for sufficient dimension
reduction in binary classification, Journal of the Korean Statistical Society 48(2): 194–206.
Li, B., Artemiou, A. and Li, L. (2011)
Principal support vector machines for linear and
nonlinear sufficient dimension reduction, Annals of Statistics 39(6): 3182–3210.
Soale, A.-N. and Dong, Y. (2022)
On sufficient dimension reduction via principal asymmetric
least squares, Journal of Nonparametric Statistics 34(1): 77–94.
Wang, C., Shin, S. J. and Wu, Y. (2018)
Principal quantile regression for sufficient dimension
reduction with heteroscedasticity, Electronic Journal of Statistics 12(2): 2114–2140.
Shin, S. J., Wu, Y., Zhang, H. H. and Liu, Y. (2017)
Principal weighted support vector machines for sufficient dimension reduction in
binary classification, Biometrika 104(1): 67–81.
Li, L. (2007)
Sparse sufficient dimension reduction, Biometrika 94(3): 603–613.
Examples
## ----------------------------
## Linear PM
## ----------------------------
set.seed(1)
n <- 200; p <- 5;
x <- matrix(rnorm(n*p, 0, 2), n, p)
y <- x[,1]/(0.5 + (x[,2] + 1)^2) + 0.2*rnorm(n)
y.tilde <- sign(y)
obj <- psdr(x, y)
print(obj)
plot(obj, d=2)
## ----------------------------
## Weighted PM (binary response)
## ----------------------------
obj_wsvm <- psdr(x, y.tilde, loss="wsvm")
plot(obj_wsvm)
## ----------------------------
## User-defined loss function
## ----------------------------
mylogistic <- function(u) log(1+exp(-u))
psdr(x, y, loss="mylogistic")