vorob_optim_parallel_gpc {ARCHISSUR}    R Documentation

Parallel Vorob'ev criterion

Description

Evaluates the parallel Vorob'ev criterion at a set of candidate points. Intended to be used inside optimization routines, such as max_vorob_parallel_gpc. To avoid numerical instabilities, the new points are evaluated only if they are not too close to an existing observation, or if there is some observation noise. The criterion is the integral of the posterior Vorob'ev uncertainty.
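As a rough illustration of the quantity being integrated: at a point with excursion probability pn and threshold alpha, the Vorob'ev uncertainty is pn when the point lies outside the Vorob'ev quantile (pn <= alpha) and 1 - pn when it lies inside. A minimal base-R sketch (vorob_uncertainty is a hypothetical helper, not part of ARCHISSUR), approximating the integral by a plain average over integration points:

```r
## Hypothetical sketch: average Vorob'ev uncertainty of a vector of
## excursion probabilities pn at threshold alpha.
vorob_uncertainty <- function(pn, alpha) {
  above <- pn > alpha                # points inside the Vorob'ev quantile
  mean(ifelse(above, 1 - pn, pn))    # expected misclassification volume
}

pn <- c(0.05, 0.40, 0.80, 0.95)
vorob_uncertainty(pn, alpha = 0.5)   # -> 0.175
```

vorob_optim_parallel_gpc evaluates the analogous average under the posterior updated with the batch x (virtually) added to the design.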

Usage

vorob_optim_parallel_gpc(
  x,
  integration.points,
  integration.weights = NULL,
  intpoints.oldmean,
  intpoints.oldsd,
  precalc.data,
  object,
  new.noise.var = NULL,
  batchsize,
  alpha,
  current.vorob,
  seed = NULL
)

Arguments

x

input vector of length batchsize*d at which one wants to evaluate the criterion. This argument is NOT a matrix.

integration.points

matrix of points for numerical integration in the design space.

integration.weights

vector of size nrow(integration.points) corresponding to the weights of these integration points.

intpoints.oldmean

vector of size nrow(integration.points) corresponding to the mean at the integration points before adding the batchsize points x to the design of experiments.

intpoints.oldsd

vector of size nrow(integration.points) corresponding to the standard deviation at the integration points before adding the batchsize points x to the design of experiments.

precalc.data

list containing precalculated data. This list can be generated using the precomputeUpdateData function.

object

object of class gpcm.

new.noise.var

optional scalar value of the noise variance for the new observations.

batchsize

number of points to sample simultaneously; the sampling criterion returns batchsize points at a time.

alpha

a scalar representing the Vorob'ev threshold.

current.vorob

current value of the Vorob'ev criterion (before adding the new observations).

seed

optional value used to fix the random seed, for reproducible results.
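
Details

The Vorob'ev threshold alpha is typically computed with KrigInv::vorob_threshold, as in the example below. The underlying idea can be sketched in a few lines of base R (vorob_threshold_sketch is a hypothetical helper, not part of ARCHISSUR): alpha is chosen, here by dichotomy, so that the volume of the excursion set {pn > alpha} matches the expected volume mean(pn).

```r
## Hypothetical sketch of a Vorob'ev threshold: pick alpha so that the
## volume of {pn > alpha} matches the expected excursion volume mean(pn).
vorob_threshold_sketch <- function(pn, tol = 1e-8) {
  target <- mean(pn)                 # expected excursion volume
  lo <- 0; hi <- 1
  while (hi - lo > tol) {            # dichotomy on alpha
    mid <- (lo + hi) / 2
    if (mean(pn > mid) > target) lo <- mid else hi <- mid
  }
  (lo + hi) / 2
}

pn <- c(0.1, 0.3, 0.6, 0.9)
vorob_threshold_sketch(pn)           # converges to 0.6 for this pn
```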

Value

Value of the parallel Vorob'ev criterion at x.

Author(s)

Morgane MENZ, Delphine SINOQUET, Miguel MUNOZ-ZUNIGA. Contributors: Naoual SERRAJI.

References

Menz, M., Munoz-Zuniga, M., Sinoquet, D. Estimation of simulation failure set with active learning based on Gaussian Process classifiers and random set theory (2023). https://hal.science/hal-03848238.

Chevalier, C. Fast uncertainty reduction strategies relying on Gaussian process models. PhD thesis, University of Bern (2013).

El Amri, M.R. Analyse d’incertitudes et de robustesse pour les modèles à entrées et sorties fonctionnelles. PhD thesis, Université Grenoble Alpes (2019), https://theses.hal.science/tel-02433324.

El Amri, M.R., Helbert, C., Munoz-Zuniga, M. Feasible set estimation under functional uncertainty by Gaussian process modelling. Physica D: Nonlinear Phenomena, 455, 133893 (2023). doi:10.1016/j.physd.2023.133893.

Bachoc, F., Helbert, C. & Picheny, V. Gaussian process optimization with failures: classification and convergence proof. J Glob Optim 78, 483–506 (2020). doi:10.1007/s10898-020-00920-0.

Examples

#-------------------------------------------------------------------
#-------------------vorob_optim_parallel_gpc------------------------
#-------------------------------------------------------------------

## 20-points DoE, and the corresponding response
d <- 2
nb_PX <- 20
x_ <- matrix(c(0.205293785978832, 0.0159983370750337,
              0.684774733109666, 0.125251417595962,
              0.787208786290006, 0.700475706055049,
              0.480507717105934, 0.359730889653793,
              0.543665267336735, 0.565974761807069,
              0.303412043992361, 0.471502352650857,
              0.839505250127309, 0.504914690245002,
              0.573294917143728, 0.784444726564573,
              0.291681289223421, 0.255053812451938,
              0.87233450888786, 0.947168337730927,
              0.648262257638515, 0.973264712407035,
              0.421877310273815, 0.0686662506387988,
              0.190976166753807, 0.810964668176754,
              0.918527262507395, 0.161973686467513,
              0.0188128700859558, 0.43522031347403,
              0.99902788789426, 0.655561821513544,
              0.741113863862512, 0.321050086076934,
              0.112003007565305, 0.616551317575545,
              0.383511473487687, 0.886611679106771,
              0.0749211435982952, 0.205805968972305),
            byrow = TRUE, ncol = d)
require(DiceKriging)
fx <- apply(x_, 1, branin)
f <- ifelse(fx < 14, -1, 1)
Xf <- as.matrix(x_)

require(future)
plan(multisession)
## gpcm object
require(GPCsign)
model <- gpcm(f, Xf, coef.m = -1.25, coef.cov = c(1.17,0.89))

## parameters for the vorob_optim_parallel_gpc function
batchsize <- 1
x <- matrix(c(0.1, 0.2), ncol = 2, byrow = TRUE)
require(randtoolbox)
nb.integration <- d * 1000
integration.points <- sobol(n = nb.integration, dim = d, scrambling = 0)
integration.points <- scale(x = integration.points, center = model@X.mean, scale = model@X.std)
precalc.data <- precomputeUpdateData(model=model, integration.points=integration.points)
intpoints.oldmean <- precalc.data$intpoints.oldmean
intpoints.oldsd <- precalc.data$intpoints.oldsd

pn <- precalc.data$pn

require(KrigInv)
alpha <- KrigInv::vorob_threshold(pn)
pn_bigger_than_alpha <- (pn > alpha) + 0  # indicator of the Vorob'ev quantile {pn > alpha}
pn_lower_than_alpha <- 1 - pn_bigger_than_alpha
penalisation <- 1
current.vorob <- mean(pn * pn_lower_than_alpha + penalisation * (1 - pn) * pn_bigger_than_alpha)

x <- scale(x = x, center = model@X.mean, scale = model@X.std)
criter <- vorob_optim_parallel_gpc(x = x, integration.points = integration.points,
                                   intpoints.oldmean = intpoints.oldmean,
                                   intpoints.oldsd = intpoints.oldsd,
                                   precalc.data = precalc.data,
                                   object = model,
                                   batchsize = batchsize,
                                   alpha = alpha,
                                   current.vorob = current.vorob)
plan(sequential)
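
The criterion can then be handed to a generic optimizer. This hypothetical continuation of the example above (it reuses model, d, and the other objects defined there) sketches a crude one-point search with stats::optim, assuming the criterion, as an integral of future uncertainty, is to be minimized; in practice the package's max_vorob_parallel_gpc should be used instead:

```r
## Hypothetical continuation: one-point search over the unit square with
## base R's optim; max_vorob_parallel_gpc does this properly.
obj <- function(xx) {
  x_scaled <- scale(matrix(xx, ncol = d), center = model@X.mean, scale = model@X.std)
  vorob_optim_parallel_gpc(x = x_scaled,
                           integration.points = integration.points,
                           intpoints.oldmean = intpoints.oldmean,
                           intpoints.oldsd = intpoints.oldsd,
                           precalc.data = precalc.data, object = model,
                           batchsize = batchsize, alpha = alpha,
                           current.vorob = current.vorob)
}
opt <- optim(par = c(0.5, 0.5), fn = obj, method = "L-BFGS-B",
             lower = rep(0, d), upper = rep(1, d))
```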


[Package ARCHISSUR version 0.0.1 Index]