scimba_torch.flows.flow_trainer¶
Flow trainers for optimization and projection in flow spaces.
Classes

| FlowTrainer | Abstract class for a nonlinear projector. |
| NaturalGradientFlowTrainer | Natural gradient flow trainer for optimization. |
- class FlowTrainer(flow, full_data, **kwargs)[source]¶
Bases: AbstractNonlinearProjector

Abstract class for a nonlinear projector.

This class defines a nonlinear projector with various projection options and an optimization method. It is used to solve projection problems in a given approximation space by means of optimization.
- Parameters:
  - flow (DiscreteFlowSpace | ContinuousFlowSpace) – The flow space (discrete or continuous) where the projection takes place.
  - full_data (tuple) – Tuple containing the full dataset for training.
  - **kwargs (Any) – Additional parameters, such as the type of projection, losses, and optimizers.
- space: DiscreteFlowSpace | ContinuousFlowSpace¶
  The approximation space where the projection takes place.
- y: Tensor¶
  Target tensor for training.
- full_data: tuple¶
  Tuple containing the full dataset for training.
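A minimal construction sketch, not taken from the library's own documentation: it assumes full_data is laid out as an (inputs, targets) pair, and the flow space itself is only a placeholder to be built with the rest of the library.

```python
import torch

from scimba_torch.flows.flow_trainer import FlowTrainer

# Placeholder: a DiscreteFlowSpace or ContinuousFlowSpace built elsewhere;
# its construction is outside the scope of this sketch.
flow = ...

# Hypothetical dataset: 128 two-dimensional inputs with scalar targets.
x = torch.rand(128, 2)
y = torch.sin(x).sum(dim=1, keepdim=True)
full_data = (x, y)  # assumed layout: (inputs, targets)

trainer = FlowTrainer(flow, full_data)
```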
- get_dof(flag_scope='all', flag_format='list')[source]¶
Get degrees of freedom from the space.
- Parameters:
  - flag_scope (str) – Scope for the degrees of freedom. Defaults to “all”.
  - flag_format (str) – Format for the degrees of freedom. Defaults to “list”.
- Return type:
  Any
- Returns:
  Degrees of freedom in the specified format.
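For illustration, a call with the documented defaults, continuing the construction sketch above; the exact structure of the returned object depends on flag_format.

```python
# Trainable degrees of freedom of the underlying space, returned as a list.
# `trainer` is the FlowTrainer instance built in the sketch above.
dofs = trainer.get_dof(flag_scope="all", flag_format="list")
```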
- sample_all_vars(**kwargs)[source]¶
Sample variables from the full dataset.
- Parameters:
  **kwargs (Any) – Additional keyword arguments, including:
  - batch_size (int): Number of samples to draw. Defaults to 10.
- Return type:
  tuple
- Returns:
  A tuple containing sampled inputs and corresponding targets.
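A short usage sketch, assuming the returned tuple unpacks into a batch of inputs and the matching targets.

```python
# Draw a mini-batch of 32 samples from the stored full dataset.
# `trainer` is the FlowTrainer instance built in the sketch above.
inputs, targets = trainer.sample_all_vars(batch_size=32)
```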
- assembly_post_sampling(data, **kwargs)[source]¶
Assemble the system after sampling.
- Parameters:
  - data (tuple) – Tuple containing sampled inputs and parameters.
  - **kwargs (Any) – Additional keyword arguments, including:
    - flag_scope (str): Scope for the last layer. Defaults to “all”.
- Return type:
  tuple
- Returns:
  A tuple containing the left-hand side and right-hand side of the system.
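A sketch under the assumption that the tuple returned by sample_all_vars can be passed directly as data; reading the returned pair as (left-hand side, right-hand side) follows the entry above.

```python
# Assemble the system for an already-sampled batch.
# `trainer` is the FlowTrainer instance built in the sketch above.
data = trainer.sample_all_vars(batch_size=32)
lhs, rhs = trainer.assembly_post_sampling(data, flag_scope="all")
```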
- assembly(**kwargs)[source]¶
Assemble the system of equations for the PDE (and weak boundary conditions, if needed).
- Parameters:
  **kwargs (Any) – Additional keyword arguments, including:
  - n_collocation (int): Number of collocation points for the PDE. Defaults to 1000.
  - n_bc_collocation (int): Number of collocation points for the boundary conditions. Defaults to 1000.
- Returns:
  A tuple containing the assembled system of equations (Lo, f).
- Return type:
  tuple
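A sketch using the documented keyword defaults; interpreting the result as the pair (Lo, f) follows the Returns entry above.

```python
# Sample collocation points and assemble the full system in one call.
# `trainer` is the FlowTrainer instance built in the sketch above.
Lo, f = trainer.assembly(n_collocation=1000, n_bc_collocation=1000)
```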
- class NaturalGradientFlowTrainer(flow, full_data, **kwargs)[source]¶
Bases: FlowTrainer

Natural gradient flow trainer for optimization.

This class extends FlowTrainer to use natural gradient optimization with preconditioning and line search capabilities.
- Parameters:
  - flow (DiscreteFlowSpace | ContinuousFlowSpace) – The flow space (discrete or continuous) where the projection takes place.
  - full_data (tuple) – Tuple containing the full dataset for training.
  - **kwargs (Any) – Additional parameters including learning rate, line search options, etc.
- default_lr: float¶
  Default learning rate for the optimizer.
- optimizer: OptimizerData¶
  The optimizer used for parameter updates.
- bool_linesearch: bool¶
  Whether to use line search.
- bool_preconditioner: bool¶
  Whether to use preconditioning.
- nb_epoch_preconditioner_computing: int¶
  Number of epochs for preconditioner computation.
- type_linesearch: str¶
  Type of line search algorithm.
- projection_data: dict¶
  Data structure for projection settings.
- preconditioner: EnergyNaturalGradientPreconditionerProjector¶
  The preconditioner instance.
- data_linesearch: dict¶
  Parameters for line search configuration.
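A construction sketch mirroring the FlowTrainer example above; only the positional arguments are shown, since the keyword names for the learning rate, line search, and preconditioner options are not spelled out here.

```python
from scimba_torch.flows.flow_trainer import NaturalGradientFlowTrainer

# `flow` and `full_data` as in the FlowTrainer sketch above.
ng_trainer = NaturalGradientFlowTrainer(flow, full_data)

# Documented attributes can be inspected after construction.
print(ng_trainer.default_lr, ng_trainer.bool_linesearch, ng_trainer.bool_preconditioner)
```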