scimba_torch.optimizers.losses¶
A module to handle losses.
Classes
- GenericLoss: A class for a loss with a coefficient and history.
- GenericLosses: A class to handle several losses: residual, boundary conditions, etc.
- MassLoss: Custom loss function for the difference in mass between input and target tensors.
- class MassLoss(size_average=None, reduce=None, reduction='mean')[source]¶
Bases: _Loss
Custom loss function for the difference in mass between input and target tensors.
This loss returns either the mean or sum of the element-wise difference between input and target, depending on the reduction parameter.
- Parameters:
size_average (bool | None) – Deprecated (unused). Included for API compatibility.
reduce (bool | None) – Deprecated (unused). Included for API compatibility.
reduction (str) – Specifies the reduction to apply to the output. Must be 'mean' (default) or 'sum'.
Example
>>> import torch
>>> loss = MassLoss(reduction='sum')
>>> input = torch.tensor([1.0, 2.0, 3.0])
>>> target = torch.tensor([0.5, 1.5, 2.5])
>>> output = loss(input, target)
>>> print(output)
tensor(1.5)
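The reduction behaviour described above can be sketched as follows. This is a minimal, hypothetical re-implementation for illustration (the class name `MassLossSketch` is ours, not the library's); it reduces the raw signed difference, not its square or absolute value:

```python
import torch
from torch.nn.modules.loss import _Loss


class MassLossSketch(_Loss):
    """Illustrative sketch of the documented MassLoss behaviour:
    reduce the raw element-wise difference (input - target)."""

    def __init__(self, reduction: str = "mean"):
        # Only 'mean' and 'sum' are documented as valid reductions.
        if reduction not in ("mean", "sum"):
            raise ValueError("reduction must be 'mean' or 'sum'")
        super().__init__(reduction=reduction)

    def forward(self, input: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        diff = input - target
        return diff.mean() if self.reduction == "mean" else diff.sum()
```

Because the difference is signed, positive and negative deviations can cancel; the loss measures a net "mass" imbalance rather than a pointwise distance.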
- class GenericLoss(loss_function, coeff)[source]¶
Bases: object
A class for a loss with a coefficient and history.
- Parameters:
loss_function (Callable[[Tensor, Tensor], Tensor]) – The loss function.
coeff (float) – A coefficient that scales the computed loss value.
- func¶
The loss function.
- coeff¶
The coefficient that scales the loss.
- coeff_history: list[float]¶
The history of coefficient values.
- loss¶
The current loss value.
- weighted_loss¶
The current weighted loss value.
- loss_history: list[float]¶
The history of losses.
- get_loss()[source]¶
Returns the current loss value.
- Return type:
Tensor
- Returns:
The current loss value.
- get_weighted_loss()[source]¶
Returns the current weighted loss value.
- Return type:
Tensor
- Returns:
The current weighted loss value (coeff * loss).
- get_loss_history()[source]¶
Returns the history of computed loss values.
- Return type:
list[float]
- Returns:
A list of loss values (in float).
- get_coeff()[source]¶
Returns the current coefficient value.
- Return type:
float
- Returns:
The current coefficient value.
- get_coeff_history()[source]¶
Returns the history of coefficient values.
- Return type:
list[float]
- Returns:
A list of coefficient values representing the history of coefficients used.
- update_loss(value)[source]¶
Updates the current loss value and recalculates the weighted loss.
- Parameters:
value (Tensor) – The new loss value to be set.
- update_history(loss_factor=1.0)[source]¶
Appends the current loss (optionally scaled by a factor) to the loss history.
- Parameters:
loss_factor (float) – A factor by which to scale the loss before adding it to the history. Defaults to 1.0.
- Return type:
None
- set_history(history)[source]¶
Sets the history of loss values to the provided list of floats.
- Parameters:
history (list[float]) – A list of float values representing the new loss history.
- Return type:
None
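The coefficient, weighted-loss, and history mechanics documented above can be sketched with a minimal stand-in class. This is an illustrative re-implementation of the documented API, not the library's actual code; in particular, recording the coefficient alongside the loss in `update_history` is an assumption:

```python
import torch


class GenericLossSketch:
    """Minimal sketch of the documented GenericLoss API (illustration only)."""

    def __init__(self, loss_function, coeff: float):
        self.func = loss_function
        self.coeff = coeff
        self.coeff_history: list[float] = []
        self.loss = torch.tensor(float("inf"))
        self.weighted_loss = torch.tensor(float("inf"))
        self.loss_history: list[float] = []

    def update_loss(self, value: torch.Tensor) -> None:
        # Setting a new loss also refreshes the weighted loss (coeff * loss),
        # as described for update_loss above.
        self.loss = value
        self.weighted_loss = self.coeff * value

    def update_history(self, loss_factor: float = 1.0) -> None:
        self.loss_history.append(loss_factor * self.loss.item())
        # Assumption: the coefficient history is recorded at the same time.
        self.coeff_history.append(self.coeff)

    def get_weighted_loss(self) -> torch.Tensor:
        return self.weighted_loss
```

For example, wrapping `torch.nn.MSELoss()` with `coeff=10.0` and calling `update_loss(torch.tensor(0.25))` yields a weighted loss of 2.5.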
- class GenericLosses(losses=None, **kwargs)[source]¶
Bases: object
A class to handle several losses: residual, boundary conditions, etc.
Manages multiple instances of GenericLoss and computes the full loss as a combination of all individual losses.
- Parameters:
losses (Sequence[tuple[str, Callable[[Tensor, Tensor], Tensor], float | int]] | None) – A list of tuples; each tuple contains a loss name, a callable loss function, and a coefficient. Default is None.
**kwargs – Additional keyword arguments:
"adaptive_weights": the method for adaptive weighting of losses; currently only "annealing" is supported.
"principal_weights": the name of the reference loss for adapting weights.
"epochs_adapt": the number of epochs between adaptive weight updates.
"alpha_lr_annealing": the learning rate annealing factor for adaptive weighting.
- Raises:
ValueError – If the input list is empty.
TypeError – If the input list contains elements with incorrect types.
- losses_dict: dict[str, GenericLoss]¶
A dictionary mapping loss names to GenericLoss instances.
- loss: Tensor¶
The current full loss value, which is the sum of all weighted losses.
- loss_history: list[float]¶
A list storing the history of computed full loss values.
- adaptive_weights: str | None¶
The method for adaptive weighting of losses. Default is None.
- principal_weights: str | None¶
The name of the principal loss used for adaptive weighting.
- epochs_adapt: int¶
The number of epochs between adaptive weight updates. Default is 10.
- alpha_lr_annealing: float¶
The learning rate annealing factor for adaptive weighting. Default is 0.9.
- get_full_loss()[source]¶
Returns the current full loss value.
- Return type:
Tensor
- Returns:
The current full loss value.
- get_loss(key)[source]¶
Returns the current loss value for a specific loss function.
- Parameters:
key (str) – The name of the loss function.
- Return type:
Tensor
- Returns:
The current loss value for the specified loss function.
- Raises:
KeyError – If the key is not found in the losses dictionary.
- get_history(key)[source]¶
Returns the history of computed loss values for a specific loss function.
- Parameters:
key (str) – The name of the loss function.
- Return type:
list[float]
- Returns:
The history of computed loss values for the specified loss function.
- Raises:
KeyError – If the key is not found in the losses dictionary.
- get_coeff(key)[source]¶
Returns the current coefficient value for a specific loss function.
- Parameters:
key (str) – The name of the loss function.
- Return type:
float
- Returns:
The current coefficient value for the specified loss function.
- Raises:
KeyError – If the key is not found in the losses dictionary.
- get_coeff_history(key)[source]¶
Returns the history of coefficient values for a specific loss function.
- Parameters:
key (str) – The name of the loss function.
- Return type:
list[float]
- Returns:
The history of coefficient values for the specified loss function.
- Raises:
KeyError – If the key is not found in the losses dictionary.
- init_loss(key)[source]¶
Resets the loss value for a specific loss function to infinity.
- Parameters:
key (str) – The name of the loss function.
- Raises:
KeyError – If the key is not found in the losses dictionary.
- Return type:
None
- update_loss(key, value)[source]¶
Updates the loss value for a specific loss function.
- Parameters:
key (str) – The name of the loss function.
value (Tensor) – The new loss value to be set.
- Raises:
KeyError – If the key is not found in the losses dictionary.
- Return type:
None
- update_histories(loss_factor=1.0)[source]¶
Appends each current loss (optionally scaled by a factor) to its loss history.
- Parameters:
loss_factor (float) – A factor by which to scale each loss before adding it to its history. Defaults to 1.0.
- Return type:
None
- update_coeff(key, value)[source]¶
Updates the coefficient value for a specific loss function.
- Parameters:
key (str) – The name of the loss function.
value (float) – The new coefficient value to be set.
- Raises:
KeyError – If the key is not found in the losses dictionary.
- Return type:
None
- call_and_update(key, a, b)[source]¶
Calls the loss function, updates the loss, and returns the updated loss.
- Parameters:
key (str) – The name of the loss function.
a (Tensor) – The first input tensor.
b (Tensor) – The second input tensor.
- Return type:
Tensor
- Returns:
The updated loss value.
- Raises:
KeyError – If the key is not found in the losses dictionary.
- compute_all_losses(left, right, update=True)[source]¶
Computes all losses.
Returns the combination of all the losses, possibly updates the loss values.
- Parameters:
left (tuple[Tensor, ...]) – The left tensors.
right (tuple[Tensor, ...]) – The right tensors.
update (bool) – Whether to update the current loss.
- Returns:
The computed full loss value.
- Return type:
torch.Tensor
- Raises:
ValueError – If left and right do not have the same length, or if their common length is not a divisor of the number of losses.
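One plausible reading of the divisor constraint is that the input pairs are reused cyclically across the losses. The sketch below illustrates that interpretation; the pairing rule, the `compute_all_losses_sketch` name, and the `(coeff, fn)` tuple layout are all assumptions for illustration, not the library's confirmed behaviour:

```python
import torch


def compute_all_losses_sketch(losses, left, right):
    """Illustrative sketch: each pair (left[i], right[i]) is applied
    cyclically over the list of (coeff, loss_fn) entries, and the full
    loss is the weighted sum. Pairing rule is an assumption."""
    if len(left) != len(right):
        raise ValueError("left and right must have the same length")
    if len(losses) % len(left) != 0:
        raise ValueError("len(left) must divide the number of losses")
    total = torch.tensor(0.0)
    for i, (coeff, fn) in enumerate(losses):
        a, b = left[i % len(left)], right[i % len(right)]
        total = total + coeff * fn(a, b)
    return total
```

With two MSE losses weighted 1.0 and 2.0 sharing one tensor pair, the full loss is the MSE scaled by 3.0.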
- compute_full_loss_without_updating(left, right)[source]¶
Computes the full loss without updating the loss values.
- Parameters:
left (tuple[Tensor, ...]) – The left tensors.
right (tuple[Tensor, ...]) – The right tensors.
- Return type:
Tensor
- Returns:
The computed full loss value.
- compute_full_loss(optimizers, epoch)[source]¶
Computes the full loss as the combination of all the losses.
- Parameters:
optimizers (OptimizerData) – The optimizer data object.
epoch (int) – The current epoch.
- Return type:
Tensor
- Returns:
The computed full loss value.
- Raises:
ValueError – If the adaptive_weights method is not recognized.
- dict_for_save()[source]¶
Returns a dictionary of best loss values for saving.
- Return type:
dict[str, Tensor | list[float]]
- Returns:
A dictionary containing the best loss value and loss history.
- try_to_load(checkpoint, string)[source]¶
Tries to load a value from the checkpoint.
- Parameters:
checkpoint (dict) – The checkpoint dictionary.
string (str) – The key to look for in the checkpoint.
- Return type:
Any
- Returns:
The loaded value if found, otherwise None.
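The "loaded value if found, otherwise None" contract can be sketched as a tolerant dictionary lookup. This is an illustrative stand-in (the `try_to_load_sketch` name is ours), not the library's actual implementation:

```python
def try_to_load_sketch(checkpoint: dict, string: str):
    """Illustrative sketch: return the stored value when the key is
    present in the checkpoint, and None otherwise, so that restoring
    from an older checkpoint missing some keys does not raise."""
    try:
        return checkpoint[string]
    except KeyError:
        return None
```

This pattern lets a training loop resume from checkpoints saved by earlier versions of the code that did not record every field.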