scimba_torch.optimizers.scimba_optimizers¶
A module defining Scimba optimizers.
Classes

| AbstractScimbaOptimizer | Abstract base class for Scimba optimizers with optional learning rate scheduler. |
| NoScheduler | A placeholder class to indicate the absence of a scheduler. |
| ScimbaAdam | Scimba wrapper for Adam optimizer with optional learning rate scheduler. |
| ScimbaCustomOptomizer | An abstract class from which user-defined optimizers must inherit. |
| ScimbaLBFGS | Scimba wrapper for LBFGS optimizer with optional learning rate scheduler. |
| ScimbaMomentum | Custom Momentum optimizer with scheduler. |
| ScimbaSGD | Scimba wrapper for SGD optimizer with optional learning rate scheduler. |
- class NoScheduler[source]¶
Bases: object

A placeholder class to indicate the absence of a scheduler.
- class AbstractScimbaOptimizer(params, optimizer_args={}, scheduler=<class 'scimba_torch.optimizers.scimba_optimizers.NoScheduler'>, scheduler_args={}, **kwargs)[source]¶
Bases: Optimizer, ABC

Abstract base class for Scimba optimizers with optional learning rate scheduler.

- Parameters:
  - params (Union[Iterable[Tensor], Iterable[dict[str, Any]], Iterable[tuple[str, Tensor]]]) – Iterable of parameters to optimize or dicts defining parameter groups.
  - optimizer_args (dict[str, Any]) – Additional arguments for the optimizer. Defaults to {}.
  - scheduler (type) – Learning rate scheduler class. Defaults to NoScheduler.
  - scheduler_args (dict[str, Any]) – Additional arguments for the scheduler. Defaults to {}.
  - **kwargs – Arbitrary keyword arguments.
- Raises:
ValueError – If scheduler is not a subclass of torch.optim.lr_scheduler.LRScheduler.
- scheduler_exists¶
Flag indicating if a scheduler is set.
- scheduler¶
  List containing the scheduler.
- best_optimizer¶
  Dictionary containing the best state of the optimizer.
- best_scheduler¶
  List containing the best state of the scheduler.
- optimizer_step(closure)[source]¶
Performs an optimization step and updates the scheduler if it exists.
- Parameters:
closure (Callable[[], float]) – A closure that reevaluates the model and returns the loss.
- Return type:
  None
- abstract inner_step(closure)[source]¶
Abstract method for performing the inner optimization step.
- Parameters:
closure (Callable[[], float]) – A closure that reevaluates the model and returns the loss.
- Return type:
  None
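Both optimizer_step() and inner_step() expect the same closure contract. The sketch below builds such a closure; the model, data, and loss function are hypothetical placeholders, and opt stands for any concrete Scimba optimizer (e.g. ScimbaAdam below):

    import torch
    from typing import Callable

    def make_closure(opt, model, x, y) -> Callable[[], float]:
        """Build a closure matching the documented Callable[[], float] contract."""
        def closure() -> float:
            opt.zero_grad()                                   # torch.optim.Optimizer method
            loss = torch.nn.functional.mse_loss(model(x), y)  # reevaluate the model
            loss.backward()                                   # repopulate gradients
            return loss.item()                                # return the loss as a float
        return closure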
- class ScimbaAdam(params, optimizer_args={}, scheduler=<class 'torch.optim.lr_scheduler.StepLR'>, scheduler_args={}, **kwargs)[source]¶
Bases: AbstractScimbaOptimizer, Adam

Scimba wrapper for Adam optimizer with optional learning rate scheduler.

- Parameters:
  - params (Union[Iterable[Tensor], Iterable[dict[str, Any]], Iterable[tuple[str, Tensor]]]) – Iterable of parameters to optimize or dicts defining parameter groups.
  - optimizer_args (dict[str, Any]) – Additional arguments for the Adam optimizer. Defaults to {}.
  - scheduler (type) – Learning rate scheduler class. Defaults to torch.optim.lr_scheduler.StepLR.
  - scheduler_args (dict[str, Any]) – Additional arguments for the scheduler. Defaults to {}.
  - **kwargs – Arbitrary keyword arguments.
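A minimal usage sketch, assuming optimizer_args is forwarded to torch.optim.Adam and scheduler_args to the StepLR constructor (the hyperparameter values shown are illustrative, not defaults of this module):

    import torch
    from scimba_torch.optimizers.scimba_optimizers import ScimbaAdam

    model = torch.nn.Linear(2, 1)
    x, y = torch.randn(8, 2), torch.randn(8, 1)

    # step_size is required by torch.optim.lr_scheduler.StepLR.
    opt = ScimbaAdam(
        model.parameters(),
        optimizer_args={"lr": 1e-3},
        scheduler_args={"step_size": 100, "gamma": 0.5},
    )

    def closure() -> float:
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        return loss.item()

    for _ in range(200):
        opt.optimizer_step(closure)  # steps Adam and advances the StepLR schedule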
- class ScimbaSGD(params, optimizer_args={}, scheduler=<class 'torch.optim.lr_scheduler.StepLR'>, scheduler_args={}, **kwargs)[source]¶
Bases: AbstractScimbaOptimizer, SGD

Scimba wrapper for SGD optimizer with optional learning rate scheduler.

- Parameters:
  - params (Union[Iterable[Tensor], Iterable[dict[str, Any]], Iterable[tuple[str, Tensor]]]) – Iterable of parameters to optimize or dicts defining parameter groups.
  - optimizer_args (dict[str, Any]) – Additional arguments for the SGD optimizer. Defaults to {}.
  - scheduler (type) – Learning rate scheduler class. Defaults to torch.optim.lr_scheduler.StepLR.
  - scheduler_args (dict[str, Any]) – Additional arguments for the scheduler. Defaults to {}.
  - **kwargs – Arbitrary keyword arguments.
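The scheduler argument accepts any torch.optim.lr_scheduler class; a hedged sketch swapping StepLR for ExponentialLR (assuming scheduler_args is forwarded to the scheduler constructor):

    import torch
    from scimba_torch.optimizers.scimba_optimizers import ScimbaSGD

    model = torch.nn.Linear(2, 1)

    opt = ScimbaSGD(
        model.parameters(),
        optimizer_args={"lr": 0.1, "momentum": 0.9},      # assumed pass-through to torch.optim.SGD
        scheduler=torch.optim.lr_scheduler.ExponentialLR,
        scheduler_args={"gamma": 0.99},                   # ExponentialLR's per-step decay factor
    )
    # Training then proceeds via opt.optimizer_step(closure), with a closure
    # built as sketched under AbstractScimbaOptimizer above.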
- class ScimbaLBFGS(params, optimizer_args={}, **kwargs)[source]¶
Bases: AbstractScimbaOptimizer, LBFGS

Scimba wrapper for LBFGS optimizer with optional learning rate scheduler.

- Parameters:
  - params (Union[Iterable[Tensor], Iterable[dict[str, Any]], Iterable[tuple[str, Tensor]]]) – Iterable of parameters to optimize or dicts defining parameter groups.
  - optimizer_args (dict[str, Any]) – Additional arguments for the LBFGS optimizer. Defaults to {}.
  - **kwargs – Arbitrary keyword arguments.
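torch.optim.LBFGS may reevaluate the closure several times per step, so the closure must recompute the loss in full on every call. A sketch, assuming optimizer_args is forwarded to torch.optim.LBFGS (max_iter is a standard LBFGS option):

    import torch
    from scimba_torch.optimizers.scimba_optimizers import ScimbaLBFGS

    model = torch.nn.Linear(2, 1)
    x, y = torch.randn(16, 2), torch.randn(16, 1)

    opt = ScimbaLBFGS(model.parameters(), optimizer_args={"max_iter": 20})

    def closure() -> float:
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        return loss.item()  # LBFGS may call this several times per step

    opt.optimizer_step(closure)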
- class ScimbaCustomOptomizer(params, optimizer_args={}, scheduler=<class 'scimba_torch.optimizers.scimba_optimizers.NoScheduler'>, scheduler_args={}, **kwargs)[source]¶
Bases: AbstractScimbaOptimizer, ABC

An abstract class from which user-defined optimizers must inherit.
- class ScimbaMomentum(params, lr=0.001, momentum=0.0)[source]¶
Bases: ScimbaCustomOptomizer

Custom Momentum optimizer with scheduler.

Serves as an example of a custom optimizer inheriting from AbstractScimbaOptimizer.

- Parameters:
  - params (Union[Iterable[Tensor], Iterable[dict[str, Any]], Iterable[tuple[str, Tensor]]]) – Iterable of parameters to optimize or dicts defining parameter groups.
  - lr (float) – Learning rate. Defaults to 0.001.
  - momentum (float) – Momentum factor. Defaults to 0.0.
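Usage mirrors the built-in wrappers; a minimal sketch based only on the signature above (model and data are hypothetical placeholders):

    import torch
    from scimba_torch.optimizers.scimba_optimizers import ScimbaMomentum

    model = torch.nn.Linear(2, 1)
    x, y = torch.randn(8, 2), torch.randn(8, 1)

    # Hyperparameters come straight from the documented signature.
    opt = ScimbaMomentum(model.parameters(), lr=0.01, momentum=0.9)

    def closure() -> float:
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        return loss.item()

    opt.optimizer_step(closure)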