dicee.models.adopt

Classes

ADOPT

Optimizer implementing the ADOPT algorithm (a subclass of torch.optim.Optimizer).

Functions

adopt(params, grads, exp_avgs, exp_avg_sqs, state_steps)

Functional API that performs ADOPT algorithm computation.

Module Contents

class dicee.models.adopt.ADOPT(params: torch.optim.optimizer.ParamsT, lr: float | torch.Tensor = 0.001, betas: Tuple[float, float] = (0.9, 0.9999), eps: float = 1e-06, clip_lambda: Callable[[int], float] | None = lambda step: ..., weight_decay: float = 0.0, decouple: bool = False, *, foreach: bool | None = None, maximize: bool = False, capturable: bool = False, differentiable: bool = False, fused: bool | None = None)[source]

Bases: torch.optim.optimizer.Optimizer

Implements the ADOPT optimization algorithm as a subclass of torch.optim.Optimizer.

Warning

Parameters need to be specified as collections that have a deterministic ordering that is consistent between runs. Examples of objects that don’t satisfy those properties are sets and iterators over values of dictionaries.

Parameters:
  • params (iterable) – an iterable of torch.Tensor objects or dicts. Specifies what Tensors should be optimized.

  • defaults (dict) – a dict containing default values of optimization options (used when a parameter group doesn’t specify them).
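
A minimal construction sketch, assuming only the signature shown above; the torch.nn.Linear model is a placeholder and the hyperparameter values are illustrative, not recommendations:

    import torch
    from dicee.models.adopt import ADOPT

    # Any torch.nn.Module works; a single linear layer keeps the sketch small.
    model = torch.nn.Linear(10, 1)

    # model.parameters() is a deterministically ordered iterable, which
    # satisfies the warning above (sets and dict-value iterators would not).
    optimizer = ADOPT(
        model.parameters(),
        lr=1e-3,
        betas=(0.9, 0.9999),
        eps=1e-6,
        weight_decay=0.0,
    )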

clip_lambda
__setstate__(state)[source]
step(closure=None)[source]

Perform a single optimization step.

Parameters:

closure (Callable, optional) – A closure that reevaluates the model and returns the loss.
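
A hedged training-loop sketch showing both ways of calling step(); model, optimizer, loader, and loss_fn are assumed to exist (e.g. built as in the sketch above) and are not part of this module:

    # Standard pattern: compute the loss, call backward(), then a plain step().
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

    # Closure pattern: step() reevaluates the model via the closure and
    # returns the loss the closure produced.
    def closure():
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        return loss

    loss = optimizer.step(closure)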

dicee.models.adopt.adopt(params: List[torch.Tensor], grads: List[torch.Tensor], exp_avgs: List[torch.Tensor], exp_avg_sqs: List[torch.Tensor], state_steps: List[torch.Tensor], foreach: bool | None = None, capturable: bool = False, differentiable: bool = False, fused: bool | None = None, grad_scale: torch.Tensor | None = None, found_inf: torch.Tensor | None = None, has_complex: bool = False, *, beta1: float, beta2: float, lr: float | torch.Tensor, clip_lambda: Callable[[int], float] | None, weight_decay: float, decouple: bool, eps: float, maximize: bool)[source]

Functional API that performs ADOPT algorithm computation.
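
In normal use ADOPT.step() assembles the per-parameter tensor lists from its internal state before delegating to this function. The following is only an illustrative direct call, assuming nothing beyond the signature above, with freshly initialised buffers and a stand-in gradient:

    import torch
    from dicee.models.adopt import adopt

    p = torch.zeros(3)                    # parameter to be updated in place
    grads = [torch.ones_like(p)]          # stand-in gradient for the sketch
    exp_avgs = [torch.zeros_like(p)]      # first-moment buffers
    exp_avg_sqs = [torch.zeros_like(p)]   # second-moment buffers
    state_steps = [torch.zeros(())]       # per-parameter step counters

    adopt(
        [p], grads, exp_avgs, exp_avg_sqs, state_steps,
        beta1=0.9,
        beta2=0.9999,
        lr=1e-3,
        clip_lambda=None,      # or a callable mapping step index -> clip value
        weight_decay=0.0,
        decouple=False,
        eps=1e-6,
        maximize=False,
    )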