optim_ignite_adam {torch}
LibTorch implementation of Adam
Description
The Adam algorithm was proposed in Adam: A Method for Stochastic Optimization; this optimizer is implemented directly in LibTorch.
Usage
optim_ignite_adam(
params,
lr = 0.001,
betas = c(0.9, 0.999),
eps = 1e-08,
weight_decay = 0,
amsgrad = FALSE
)
Arguments
params: (iterable) iterable of parameters to optimize or dicts defining parameter groups
lr: (float, optional) learning rate (default: 1e-3)
betas: (tuple of two floats, optional) coefficients used for computing running averages of the gradient and its square (default: c(0.9, 0.999))
eps: (float, optional) term added to the denominator to improve numerical stability (default: 1e-8)
weight_decay: (float, optional) weight decay (L2 penalty) (default: 0)
amsgrad: (boolean, optional) whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond (default: FALSE)
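As a hedged illustration of the arguments above, the following sketch constructs the optimizer with non-default hyperparameters; the nn_linear module and its shapes are arbitrary placeholders, not part of this page:

library(torch)
model <- nn_linear(10, 1)          # placeholder module
opt <- optim_ignite_adam(
  model$parameters,                # list of parameters to optimize
  lr = 1e-3,
  betas = c(0.9, 0.999),
  eps = 1e-8,
  weight_decay = 1e-2,             # L2 penalty
  amsgrad = TRUE                   # use the AMSGrad variant
)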
Fields and Methods
See OptimizerIgnite.
Examples
if (torch_is_installed()) {
## Not run:
optimizer <- optim_ignite_adam(model$parameters, lr = 0.1)
optimizer$zero_grad()                      # clear gradients from the previous step
loss_fn(model(input), target)$backward()   # compute gradients of the loss
optimizer$step()                           # update the parameters
## End(Not run)
}
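The example above leaves model, input, target and loss_fn undefined. The following self-contained sketch fills them in with arbitrary placeholders (a small nn_linear regression on random data, an assumption made here for illustration) so the same step sequence can be run end to end:

library(torch)
model <- nn_linear(10, 1)            # placeholder model
input <- torch_randn(16, 10)         # placeholder batch of 16 examples
target <- torch_randn(16, 1)         # placeholder targets
optimizer <- optim_ignite_adam(model$parameters, lr = 0.1)
for (i in 1:5) {
  optimizer$zero_grad()
  loss <- nnf_mse_loss(model(input), target)
  loss$backward()
  optimizer$step()
}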
[Package torch version 0.15.1 Index]