scimba_torch.optimizers.scimba_optimizers

A module defining scimba optimizers.

Classes

AbstractScimbaOptimizer(params[, ...])

Abstract base class for Scimba optimizers with optional learning rate scheduler.

NoScheduler()

A placeholder class to indicate the absence of a scheduler.

ScimbaAdam(params[, optimizer_args, ...])

Scimba wrapper for Adam optimizer with optional learning rate scheduler.

ScimbaCustomOptomizer(params[, ...])

An abstract class from which user-defined optimizers must inherit.

ScimbaLBFGS(params[, optimizer_args])

Scimba wrapper for LBFGS optimizer with optional learning rate scheduler.

ScimbaMomentum(params[, lr, momentum])

Custom Momentum optimizer with scheduler.

ScimbaSGD(params[, optimizer_args, ...])

Scimba wrapper for SGD optimizer with optional learning rate scheduler.

class NoScheduler[source]

Bases: object

A placeholder class to indicate the absence of a scheduler.

class AbstractScimbaOptimizer(params, optimizer_args={}, scheduler=<class 'scimba_torch.optimizers.scimba_optimizers.NoScheduler'>, scheduler_args={}, **kwargs)[source]

Bases: Optimizer, ABC

Abstract base class for Scimba optimizers with optional learning rate scheduler.

Parameters:
  • params (Union[Iterable[Tensor], Iterable[dict[str, Any]], Iterable[tuple[str, Tensor]]]) – Iterable of parameters to optimize or dicts defining parameter groups.

  • optimizer_args (dict[str, Any]) – Additional arguments for the optimizer. Defaults to {}.

  • scheduler (type) – Learning rate scheduler class. Defaults to NoScheduler.

  • scheduler_args (dict[str, Any]) – Additional arguments for the scheduler. Defaults to {}.

  • **kwargs – Arbitrary keyword arguments.

Raises:

ValueError – If scheduler is not a subclass of torch.optim.lr_scheduler.LRScheduler.

scheduler_exists

Flag indicating whether a scheduler is set.

scheduler

List containing the scheduler.

best_optimizer

Dictionary containing the best state of the optimizer.

best_scheduler

List containing the best state of the scheduler.

optimizer_step(closure)[source]

Performs an optimization step and updates the scheduler if it exists.

Parameters:

closure (Callable[[], float]) – A closure that reevaluates the model and returns the loss.

Return type:

None
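
The closure follows the usual PyTorch convention: it zeroes the gradients, recomputes the loss, backpropagates, and returns the loss as a float. A minimal sketch with a concrete subclass (model, x, y, and all hyperparameter values are illustrative; step_size is StepLR's required argument):

    import torch
    from scimba_torch.optimizers.scimba_optimizers import ScimbaAdam

    model = torch.nn.Linear(2, 1)                      # toy model
    x, y = torch.randn(8, 2), torch.randn(8, 1)        # toy data
    optimizer = ScimbaAdam(model.parameters(), scheduler_args={"step_size": 100})

    def closure() -> float:
        optimizer.zero_grad()                          # reset accumulated gradients
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()                                # populate parameter gradients
        return loss.item()                             # return the loss as a float

    optimizer.optimizer_step(closure)                  # one step, then scheduler update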

abstract inner_step(closure)[source]

Abstract method for performing the inner optimization step.

Parameters:

closure (Callable[[], float]) – A closure that reevaluates the model and returns the loss.

Return type:

None

update_best_optimizer()[source]

Updates the best optimizer state.

Return type:

None

dict_for_save()[source]

Returns a dictionary containing the best optimizer and scheduler states.

Returns:

Dictionary containing the best optimizer and scheduler states.

Return type:

dict

load(checkpoint)[source]

Loads the optimizer and scheduler states from a checkpoint.

Parameters:

checkpoint (dict) – Dictionary containing the optimizer and scheduler states, as produced by dict_for_save().

Return type:

None
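
A sketch of the checkpointing round trip, assuming the best states have already been recorded (the ScimbaAdam instance and the file name are illustrative):

    import torch
    from scimba_torch.optimizers.scimba_optimizers import ScimbaAdam

    model = torch.nn.Linear(2, 1)
    optimizer = ScimbaAdam(model.parameters(), scheduler_args={"step_size": 100})

    checkpoint = optimizer.dict_for_save()         # best optimizer/scheduler states
    torch.save(checkpoint, "optimizer.pt")         # file name is illustrative

    optimizer.load(torch.load("optimizer.pt"))     # restore both states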

class ScimbaAdam(params, optimizer_args={}, scheduler=<class 'torch.optim.lr_scheduler.StepLR'>, scheduler_args={}, **kwargs)[source]

Bases: AbstractScimbaOptimizer, Adam

Scimba wrapper for Adam optimizer with optional learning rate scheduler.

Parameters:
  • params (Union[Iterable[Tensor], Iterable[dict[str, Any]], Iterable[tuple[str, Tensor]]]) – Iterable of parameters to optimize or dicts defining parameter groups.

  • optimizer_args (dict[str, Any]) – Additional arguments for the Adam optimizer. Defaults to {}.

  • scheduler (type) – Learning rate scheduler class. Defaults to torch.optim.lr_scheduler.StepLR.

  • scheduler_args (dict[str, Any]) – Additional arguments for the scheduler. Defaults to {}.

  • **kwargs – Arbitrary keyword arguments.

inner_step(closure)[source]

Performs the inner optimization step for ScimbaAdam.

Parameters:

closure (Callable[[], float]) – A closure that reevaluates the model and returns the loss.

Return type:

None
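
A construction sketch: optimizer_args is forwarded to torch.optim.Adam and scheduler_args to the chosen scheduler class. The values and the choice of ExponentialLR are illustrative:

    import torch
    from scimba_torch.optimizers.scimba_optimizers import ScimbaAdam

    model = torch.nn.Linear(2, 1)
    optimizer = ScimbaAdam(
        model.parameters(),
        optimizer_args={"lr": 1e-3, "betas": (0.9, 0.99)},  # forwarded to Adam
        scheduler=torch.optim.lr_scheduler.ExponentialLR,   # replaces the StepLR default
        scheduler_args={"gamma": 0.999},                    # forwarded to the scheduler
    )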

class ScimbaSGD(params, optimizer_args={}, scheduler=<class 'torch.optim.lr_scheduler.StepLR'>, scheduler_args={}, **kwargs)[source]

Bases: AbstractScimbaOptimizer, SGD

Scimba wrapper for SGD optimizer with optional learning rate scheduler.

Parameters:
  • params (Union[Iterable[Tensor], Iterable[dict[str, Any]], Iterable[tuple[str, Tensor]]]) – Iterable of parameters to optimize or dicts defining parameter groups.

  • optimizer_args (dict[str, Any]) – Additional arguments for the SGD optimizer. Defaults to {}.

  • scheduler (type) – Learning rate scheduler class. Defaults to torch.optim.lr_scheduler.StepLR.

  • scheduler_args (dict[str, Any]) – Additional arguments for the scheduler. Defaults to {}.

  • **kwargs – Arbitrary keyword arguments.

inner_step(closure)[source]

Performs the inner optimization step for ScimbaSGD.

Parameters:

closure (Callable[[], float]) – A closure that reevaluates the model and returns the loss.

Return type:

None
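
Construction mirrors ScimbaAdam, with optimizer_args forwarded to torch.optim.SGD (values illustrative):

    import torch
    from scimba_torch.optimizers.scimba_optimizers import ScimbaSGD

    model = torch.nn.Linear(2, 1)
    optimizer = ScimbaSGD(
        model.parameters(),
        optimizer_args={"lr": 1e-2, "momentum": 0.9},  # forwarded to torch.optim.SGD
        scheduler_args={"step_size": 50},              # forwarded to the StepLR default
    )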

class ScimbaLBFGS(params, optimizer_args={}, **kwargs)[source]

Bases: AbstractScimbaOptimizer, LBFGS

Scimba wrapper for LBFGS optimizer with optional learning rate scheduler.

Parameters:
  • params (Union[Iterable[Tensor], Iterable[dict[str, Any]], Iterable[tuple[str, Tensor]]]) – Iterable of parameters to optimize or dicts defining parameter groups.

  • optimizer_args (dict[str, Any]) – Additional arguments for the LBFGS optimizer. Defaults to {}.

  • **kwargs – Arbitrary keyword arguments.

inner_step(closure)[source]

Performs the inner optimization step for ScimbaLBFGS.

Parameters:

closure (Callable[[], float]) – A closure that reevaluates the model and returns the loss.

Return type:

None
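
Note that ScimbaLBFGS exposes no explicit scheduler parameters in its signature, and torch.optim.LBFGS may re-evaluate the closure several times per step, so the closure must be safe to call repeatedly. A construction sketch (values illustrative):

    import torch
    from scimba_torch.optimizers.scimba_optimizers import ScimbaLBFGS

    model = torch.nn.Linear(2, 1)
    optimizer = ScimbaLBFGS(
        model.parameters(),
        optimizer_args={"max_iter": 20, "history_size": 50},  # torch.optim.LBFGS args
    )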

class ScimbaCustomOptomizer(params, optimizer_args={}, scheduler=<class 'scimba_torch.optimizers.scimba_optimizers.NoScheduler'>, scheduler_args={}, **kwargs)[source]

Bases: AbstractScimbaOptimizer, ABC

An abstract class from which user-defined optimizers must inherit.

abstract step(closure)[source]

To be implemented in subclasses: applies one optimization step.

Parameters:

closure (Callable[[], float]) – A closure that reevaluates the model and returns the loss.
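
A schematic sketch of the overrides a subclass must provide: step() declared here, and inner_step() inherited as abstract from AbstractScimbaOptimizer. The update rule, the "lr" parameter-group key, and inner_step() delegating to step() are assumptions modeled on the ScimbaMomentum example below:

    from scimba_torch.optimizers.scimba_optimizers import ScimbaCustomOptomizer

    class MyGradientDescent(ScimbaCustomOptomizer):
        """Hypothetical plain gradient-descent optimizer (illustration only)."""

        def step(self, closure=None):
            # If a closure is given, re-evaluate the loss; its backward()
            # call is expected to have populated the parameter gradients.
            loss = closure() if closure is not None else None
            for group in self.param_groups:
                lr = group.get("lr", 1e-3)              # assumed parameter-group key
                for p in group["params"]:
                    if p.grad is not None:
                        p.data.add_(p.grad, alpha=-lr)  # gradient-descent update
            return loss

        def inner_step(self, closure):
            # Assumption: the inner step simply delegates to step().
            self.step(closure)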

class ScimbaMomentum(params, lr=0.001, momentum=0.0)[source]

Bases: ScimbaCustomOptomizer

Custom Momentum optimizer with scheduler.

Serves as an example of a user-defined optimizer inheriting from AbstractScimbaOptimizer.

Parameters:
  • params (Union[Iterable[Tensor], Iterable[dict[str, Any]], Iterable[tuple[str, Tensor]]]) – Iterable of parameters to optimize or dicts defining parameter groups.

  • lr (float) – Learning rate. Defaults to 0.001.

  • momentum (float) – Momentum factor. Defaults to 0.0.

step(closure=None)[source]

Implements the step method required by ScimbaCustomOptomizer.

Parameters:

closure (Optional[Callable[[], float]]) – A closure that reevaluates the model and returns the loss.

inner_step(closure)[source]

Performs the inner optimization step for ScimbaMomentum.

Parameters:

closure (Callable[[], float]) – A closure that reevaluates the model and returns the loss.

Return type:

None
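
A usage sketch exercising the closure=None path of step(), in which gradients must already be populated (the model and data are illustrative):

    import torch
    from scimba_torch.optimizers.scimba_optimizers import ScimbaMomentum

    model = torch.nn.Linear(2, 1)
    optimizer = ScimbaMomentum(model.parameters(), lr=1e-3, momentum=0.9)

    loss = model(torch.randn(8, 2)).pow(2).mean()  # toy quadratic loss
    optimizer.zero_grad()
    loss.backward()                                # populate gradients first
    optimizer.step()                               # update without a closure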