module core.optim

Global Variables

  • TYPE_CHECKING

class Optimizer

Optimizer configuration API for ice-learn.

This is an extension of torch.optim.Optimizer that:

  • allows the user to update the optimizer states using ice.DictProcessor,

  • leverages torch.distributed.optim.ZeroRedundancyOptimizer internally for memory-efficient distributed training,

  • accumulates gradients to simulate a larger batch size,

  • etc.

Inspired by:

  • https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer

  • https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html

  • https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/lr_updater.py
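The gradient-accumulation bullet above can be sketched in a few lines. The class and attribute names below are illustrative stand-ins, not ice-learn's actual implementation: the idea is simply that the wrapper counts micro-batches and only triggers the real optimizer step every `grad_acc_steps` calls.

```python
# Hypothetical sketch of gradient-accumulation bookkeeping; the name
# `AccumulatingOptimizer` and its attributes are assumptions for
# illustration, not ice-learn's real Optimizer internals.

class AccumulatingOptimizer:
    def __init__(self, step_fn, grad_acc_steps: int):
        self.step_fn = step_fn          # underlying optimizer step (e.g. torch's)
        self.grad_acc_steps = grad_acc_steps
        self._pending = 0               # micro-batches since the last real step

    def update(self):
        """Call once per micro-batch; fires the real optimizer step every
        `grad_acc_steps` calls, simulating a larger effective batch size."""
        self._pending += 1
        if self._pending == self.grad_acc_steps:
            self.step_fn()              # apply the accumulated gradients
            self._pending = 0           # reset for the next accumulation window


steps_taken = []
opt = AccumulatingOptimizer(lambda: steps_taken.append(1), grad_acc_steps=3)
for _ in range(7):                      # 7 micro-batches
    opt.update()
print(len(steps_taken))                 # 2 real steps (after batches 3 and 6)
```

In the real class, `step_fn` would also need to zero the gradients after stepping, so that the next accumulation window starts fresh.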

method __init__

__init__(*args, **kwds) → None

method load_state_dict

load_state_dict(_state_dict)

method state_dict

state_dict(rank)

method update

update(grad_scaler: GradScaler, grad_acc_steps: int, *, current_epoch, epoch_steps, global_steps, epoch_size)
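The signature pairs a `GradScaler` with `grad_acc_steps`, which suggests an AMP-aware step that only fires on accumulation boundaries. Below is a hedged sketch of that likely call order; `FakeScaler` stands in for `torch.cuda.amp.GradScaler`, and the body of `update` is an assumption based on the parameter names, not ice-learn's actual code.

```python
# Illustrative call-order sketch for an update(grad_scaler, grad_acc_steps, ...)
# style method. FakeScaler mimics the step()/update() interface of
# torch.cuda.amp.GradScaler so the ordering is visible without a GPU.

class FakeScaler:
    def __init__(self):
        self.calls = []

    def step(self, optimizer):
        self.calls.append("step")       # real GradScaler: unscale grads, call optimizer.step()

    def update(self):
        self.calls.append("update")     # real GradScaler: adjust the loss scale


def update(grad_scaler, grad_acc_steps, *, global_steps):
    # Assumed logic: step only once per grad_acc_steps micro-batches;
    # between real steps, gradients are left in place to accumulate.
    if (global_steps + 1) % grad_acc_steps == 0:
        grad_scaler.step(optimizer=None)    # placeholder optimizer for the sketch
        grad_scaler.update()


scaler = FakeScaler()
for gs in range(6):
    update(scaler, grad_acc_steps=2, global_steps=gs)
print(scaler.calls)   # ['step', 'update'] repeated 3 times
```

Note the ordering: `scaler.step(...)` must precede `scaler.update()`, since the scale adjustment depends on whether the step found inf/NaN gradients.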
