module core.optim
Global Variables
- TYPE_CHECKING
class Optimizer
Optimizer configuration API for ice-learn.

This is an extension of torch.optim.Optimizer that:

- allows the user to update the optimizer states using ice.DictProcessor,
- leverages torch.distributed.optim.ZeroRedundancyOptimizer internally for memory-efficient distributed training,
- is able to accumulate gradients to simulate a larger batch size (see the sketch after this list),
- etc.
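To make the gradient-accumulation feature concrete, below is a minimal sketch of the pattern this class automates, written with plain torch.optim and torch.cuda.amp rather than ice-learn's own API. The dummy `loader` and a CUDA device are assumptions for the sake of a self-contained example.

```python
import torch
from torch.cuda.amp import GradScaler, autocast

model = torch.nn.Linear(10, 2).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = GradScaler()
grad_acc_steps = 4  # simulate a 4x larger batch

# Dummy data stands in for a real DataLoader.
loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(16)]

for step, (x, y) in enumerate(loader):
    with autocast():
        loss = torch.nn.functional.cross_entropy(model(x.cuda()), y.cuda())
    # Divide so the accumulated gradient averages over the virtual batch.
    scaler.scale(loss / grad_acc_steps).backward()
    if (step + 1) % grad_acc_steps == 0:
        scaler.step(opt)   # unscales gradients, then calls opt.step()
        scaler.update()
        opt.zero_grad()
```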
Inspired by:

- https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer
- https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html
- https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/lr_updater.py
method __init__
method load_state_dict
method state_dict
method update
update(self, grad_scaler: GradScaler, grad_acc_steps: int, *, current_epoch, epoch_steps, global_steps, epoch_size)
...
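The method body is elided upstream, so the following is a speculative usage sketch only: it assumes update applies the scaler-mediated optimizer step and zeroes gradients once every grad_acc_steps calls, with the keyword arguments mirroring the signature above. `optimizer`, `model`, `loader`, and `compute_loss` are placeholders, not ice-learn API.

```python
# Speculative sketch: drive Optimizer.update from a training loop.
from torch.cuda.amp import GradScaler

scaler = GradScaler()
num_epochs = 10

for epoch in range(num_epochs):
    for i, (x, y) in enumerate(loader):          # `loader` is a placeholder
        loss = compute_loss(model, x, y)         # placeholder loss function
        scaler.scale(loss).backward()
        optimizer.update(
            scaler, 4,                           # grad_acc_steps
            current_epoch=epoch,
            epoch_steps=i,                       # step index within this epoch
            global_steps=epoch * len(loader) + i,
            epoch_size=len(loader),
        )
```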