Scimba 1.3.3 documentation

scimba_torch.optimizers.line_search

Line search functions.

Examples: usage of all three types of line search, with a PINN and a Deep Ritz solver.

import torch

from scimba_torch.approximation_space.nn_space import NNxSpace
from scimba_torch.domain.meshless_domain.domain_2d import Disk2D, Square2D
from scimba_torch.integration.monte_carlo import (
    DomainSampler,
    TensorizedSampler,
)
from scimba_torch.integration.monte_carlo_parameters import (
    UniformParametricSampler,
)
from scimba_torch.neural_nets.coordinates_based_nets.mlp import GenericMLP
from scimba_torch.numerical_solvers.elliptic_pde.deep_ritz import (
    DeepRitzElliptic,
)
from scimba_torch.numerical_solvers.elliptic_pde.pinns import PinnsElliptic
from scimba_torch.optimizers.line_search import (
    backtracking_armijo_line_search,
    backtracking_armijo_line_search_with_loss_theta_grad_loss_theta,
    logarithmic_grid_line_search,
)
from scimba_torch.optimizers.losses import GenericLosses
from scimba_torch.optimizers.optimizers_data import OptimizerData
from scimba_torch.physical_models.elliptic_pde.laplacians import (
    Laplacian2DDirichletRitzForm,
    Laplacian2DDirichletStrongForm,
)
from scimba_torch.utils.scimba_tensors import LabelTensor

print(" ######################################################## ")
print(" # line_search with a pinn with weak boundary condition # ")
print(" ######################################################## ")

def f_rhs(x: LabelTensor, mu: LabelTensor) -> torch.Tensor:
    x1, x2 = x.get_components()
    mu1 = mu.get_components()
    return (
        mu1
        * 8.0
        * torch.pi
        * torch.pi
        * torch.sin(2.0 * torch.pi * x1)
        * torch.sin(2.0 * torch.pi * x2)
    )

def f_bc(x: LabelTensor, mu: LabelTensor) -> torch.Tensor:
    x1, _ = x.get_components()
    return x1 * 0.0

domain_x = Square2D([(0.0, 1.0), (0.0, 1.0)], is_main_domain=True)
sampler = TensorizedSampler(
    [DomainSampler(domain_x), UniformParametricSampler([(1.0, 2.0)])]
)
space = NNxSpace(
    1,
    1,
    GenericMLP,
    domain_x,
    sampler,
    layer_sizes=[60] * 3,
)
pde = Laplacian2DDirichletStrongForm(space, f=f_rhs, g=f_bc)
losses = GenericLosses(
    [
        ("residual", torch.nn.MSELoss(), 1.0),
        ("bc", torch.nn.MSELoss(), 40.0),
    ],
)
opt_1 = {
    "name": "adam",
    "optimizer_args": {"lr": 2.5e-2, "betas": (0.9, 0.999)},
}
opt = OptimizerData(opt_1)
pinns = PinnsElliptic(pde, bc_type="weak", optimizers=opt, losses=losses)
n_collocation = 2000
n_bc_collocation = 1500
# get current parameters of the nn
params_vect = pinns.space.get_dof(flag_scope="all", flag_format="tensor")
# get func and derivative
Lpinn, GradLpinn = pinns.get_loss_grad_loss(
    n_collocation=n_collocation, n_bc_collocation=n_bc_collocation
)
loss = Lpinn(params_vect)
print("loss at theta = initial parameters: ", loss)
theta = params_vect.clone().detach().requires_grad_(False)
# with fixed sampling points, repeated evaluations are deterministic
loss = Lpinn(theta)
gradltheta = GradLpinn(theta)
loss2 = Lpinn(theta)
gradltheta2 = GradLpinn(theta)
assert torch.equal(loss, loss2)
assert torch.equal(gradltheta, gradltheta2)
# perform a line search along gradltheta
print("Lpinn(theta): ", Lpinn(theta))
eta = backtracking_armijo_line_search_with_loss_theta_grad_loss_theta(
    Lpinn,
    theta,
    loss,
    gradltheta,
    gradltheta,
    alpha=0.2,
    beta=0.5,
    n_step_max=1000,
)
print("eta with Armijo : ", eta)
print("Lpinn(theta): ", Lpinn(theta))
print("Lpinn(theta - eta * dsearch): ", Lpinn(theta - eta * gradltheta))
assert torch.all(Lpinn(theta) > Lpinn(theta - eta * gradltheta))
print("\n")

eta = logarithmic_grid_line_search(
    Lpinn, theta, gradltheta, m=10, interval=[0.0, 1.0]
)
print("eta with logarithmic grid : ", eta)
print("Lpinn(theta): ", Lpinn(theta))
print("Lpinn(theta - eta * dsearch): ", Lpinn(theta - eta * gradltheta))
assert torch.all(Lpinn(theta) > Lpinn(theta - eta * gradltheta))
print("\n")

# get func and derivative with new sampling points
loss = Lpinn(theta)
Lpinn, GradLpinn = pinns.get_loss_grad_loss(
    n_collocation=n_collocation, n_bc_collocation=n_bc_collocation
)
assert not torch.equal(Lpinn(theta), loss)
# update theta
theta = theta - eta * gradltheta
loss = Lpinn(theta)
gradltheta = GradLpinn(theta)
# perform a line search along gradltheta
print("Lpinn(theta): ", Lpinn(theta))
eta = backtracking_armijo_line_search_with_loss_theta_grad_loss_theta(
    Lpinn,
    theta,
    loss,
    gradltheta,
    gradltheta,
    alpha=0.2,
    beta=0.5,
    n_step_max=1000,
)
print("eta with Armijo : ", eta)
print("Lpinn(theta): ", Lpinn(theta))
print("Lpinn(theta - eta * dsearch): ", Lpinn(theta - eta * gradltheta))
assert torch.all(Lpinn(theta) > Lpinn(theta - eta * gradltheta))
print("\n")
# get func and derivative with new sampling points
loss = Lpinn(theta)
Lpinn, GradLpinn = pinns.get_loss_grad_loss(
    n_collocation=n_collocation, n_bc_collocation=n_bc_collocation
)
assert not torch.equal(Lpinn(theta), loss)
# update theta
theta = theta - eta * gradltheta
loss = Lpinn(theta)
gradltheta = GradLpinn(theta)
# perform a line search along gradltheta
print("Lpinn(theta): ", Lpinn(theta))
eta = backtracking_armijo_line_search_with_loss_theta_grad_loss_theta(
    Lpinn,
    theta,
    loss,
    gradltheta,
    gradltheta,
    alpha=0.2,
    beta=0.5,
    n_step_max=1000,
)
print("eta with Armijo : ", eta)
print("Lpinn(theta): ", Lpinn(theta))
print("Lpinn(theta - eta * dsearch): ", Lpinn(theta - eta * gradltheta))
assert torch.all(Lpinn(theta) > Lpinn(theta - eta * gradltheta))
print("\n")

print(" ############################################################ ")
print(" # line_search with a deep_ritz with weak boundary condition # ")
print(" ############################################################ ")

domain_x = Disk2D(torch.tensor([0.0, 0.0]), radius=1.0, is_main_domain=True)
sampler = TensorizedSampler(
    [DomainSampler(domain_x), UniformParametricSampler([(1.0, 1.0001)])]
)
space = NNxSpace(
    1,
    1,
    GenericMLP,
    domain_x,
    sampler,
    layer_sizes=[30] * 3,
)
pde = Laplacian2DDirichletRitzForm(space, f=f_rhs, g=f_bc)
losses = GenericLosses(
    [
        ("residual", torch.nn.MSELoss(), 1.0),
        ("bc", torch.nn.MSELoss(), 40.0),
    ],
)
opt_1 = {
    "name": "adam",
    "optimizer_args": {"lr": 2.5e-2, "betas": (0.9, 0.999)},
}
opt = OptimizerData(opt_1)
ritz = DeepRitzElliptic(pde, bc_type="weak", optimizers=opt, losses=losses)
n_collocation = 2000
n_bc_collocation = 1500

# get current parameters of the nn
params_vect = ritz.space.get_dof(flag_scope="all", flag_format="tensor")
# get func and derivative
Lritz, GradLritz = ritz.get_loss_grad_loss(
    n_collocation=n_collocation, n_bc_collocation=n_bc_collocation
)
loss = Lritz(params_vect)
print("loss at theta = initial parameters: ", loss)
theta = params_vect.clone().detach().requires_grad_(False)
# with fixed sampling points, repeated evaluations are deterministic
loss = Lritz(theta)
gradltheta = GradLritz(theta)
loss2 = Lritz(theta)
gradltheta2 = GradLritz(theta)
assert torch.equal(loss, loss2)
assert torch.equal(gradltheta, gradltheta2)
# perform a line search along gradltheta
print("Lritz(theta): ", Lritz(theta))
eta = backtracking_armijo_line_search_with_loss_theta_grad_loss_theta(
    Lritz,
    theta,
    loss,
    gradltheta,
    gradltheta,
    alpha=0.2,
    beta=0.5,
    n_step_max=1000,
)
print("eta with Armijo : ", eta)
print("Lritz(theta): ", Lritz(theta))
print("Lritz(theta - eta * dsearch): ", Lritz(theta - eta * gradltheta))
assert torch.all(Lritz(theta) > Lritz(theta - eta * gradltheta))
print("\n")

eta = logarithmic_grid_line_search(
    Lritz, theta, gradltheta, m=10, interval=[0.0, 1.0]
)
print("eta with logarithmic grid : ", eta)
print("Lritz(theta): ", Lritz(theta))
print("Lritz(theta - eta * dsearch): ", Lritz(theta - eta * gradltheta))
# assert torch.all(Lritz(theta) > Lritz(theta - eta * gradltheta))
print("\n")

# get func and derivative with new sampling points
loss = Lritz(theta)
Lritz, GradLritz = ritz.get_loss_grad_loss(
    n_collocation=n_collocation, n_bc_collocation=n_bc_collocation
)
assert not torch.equal(Lritz(theta), loss)
# update theta
theta = theta - eta * gradltheta
loss = Lritz(theta)
gradltheta = GradLritz(theta)
# perform a line search along gradltheta
print("Lritz(theta): ", Lritz(theta))
eta = backtracking_armijo_line_search_with_loss_theta_grad_loss_theta(
    Lritz,
    theta,
    loss,
    gradltheta,
    gradltheta,
    alpha=0.2,
    beta=0.5,
    n_step_max=1000,
)
print("eta with Armijo : ", eta)
print("Lritz(theta): ", Lritz(theta))
print("Lritz(theta - eta * dsearch): ", Lritz(theta - eta * gradltheta))
assert torch.all(Lritz(theta) > Lritz(theta - eta * gradltheta))
print("\n")

eta = logarithmic_grid_line_search(
    Lritz, theta, gradltheta, m=10, interval=[0.0, 1.0]
)
print("eta with logarithmic grid : ", eta)
print("Lritz(theta): ", Lritz(theta))
print("Lritz(theta - eta * dsearch): ", Lritz(theta - eta * gradltheta))
# assert torch.all(Lritz(theta) > Lritz(theta - eta * gradltheta))
print("\n")

# get func and derivative with new sampling points
loss = Lritz(theta)
Lritz, GradLritz = ritz.get_loss_grad_loss(
    n_collocation=n_collocation, n_bc_collocation=n_bc_collocation
)
assert not torch.equal(Lritz(theta), loss)
# update theta
theta = theta - eta * gradltheta
loss = Lritz(theta)
gradltheta = GradLritz(theta)
# perform a line search along gradltheta
print("Lritz(theta): ", Lritz(theta))
eta = backtracking_armijo_line_search_with_loss_theta_grad_loss_theta(
    Lritz,
    theta,
    loss,
    gradltheta,
    gradltheta,
    alpha=0.2,
    beta=0.5,
    n_step_max=1000,
)
print("eta with Armijo : ", eta)
print("Lritz(theta): ", Lritz(theta))
print("Lritz(theta - eta * dsearch): ", Lritz(theta - eta * gradltheta))
assert torch.all(Lritz(theta) > Lritz(theta - eta * gradltheta))
print("\n")

Functions

backtracking_armijo_line_search(loss, ...[, ...]) – Line search algorithm based on the Armijo condition.
backtracking_armijo_line_search_with_loss_theta_grad_loss_theta(...) – Line search algorithm based on the Armijo condition.
logarithmic_grid_line_search(loss, theta, ...) – Line search algorithm based on a logarithmic grid.
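
Both backtracking functions implement classical Armijo backtracking. As background (the usual textbook form of the condition, which may differ in detail from the implementation): for a descent step theta - eta * dsearch, a candidate step size eta is accepted as soon as

\mathcal{L}(\theta - \eta\, d) \;\le\; \mathcal{L}(\theta) - \alpha\, \eta\, \nabla \mathcal{L}(\theta)^{\top} d,

where d stands for dsearch; otherwise eta is shrunk by the factor beta, for at most n_step_max backtracking steps.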

logarithmic_grid_line_search(loss, theta, dsearch, m=10, interval=[0.0, 1.0], log_basis=2.0, **kwargs)

Line search algorithm based on a logarithmic grid.

Parameters:
  • loss (Callable[[Tensor], Tensor]) – The loss function.
  • theta (Tensor) – The current parameters of the loss.
  • dsearch (Tensor) – The search direction.
  • m (int) – The number of points in the logarithmic grid.
  • interval (list[float]) – The interval covered by the logarithmic grid.
  • log_basis (float) – The logarithmic basis used to generate the grid.
  • **kwargs – Arbitrary keyword arguments.

Return type: Tensor

Returns: The eta from the grid that minimizes the loss along the search direction from theta.

Raises: ValueError – when log_basis <= 0.
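
A minimal sketch of logarithmic_grid_line_search on a toy quadratic loss, independent of any PDE solver. The quadratic loss and the gradient direction are illustrative assumptions; only the call signature comes from the documentation above.

import torch

from scimba_torch.optimizers.line_search import logarithmic_grid_line_search

# toy quadratic loss L(theta) = 0.5 * ||theta||^2 (illustrative assumption)
def quad_loss(theta: torch.Tensor) -> torch.Tensor:
    return 0.5 * torch.sum(theta**2)

theta = torch.ones(5)
dsearch = theta.clone()  # exact gradient of the toy loss at theta

# evaluate the loss at m step sizes on a logarithmic grid over [0, 1],
# base 2, and keep the best one
eta = logarithmic_grid_line_search(
    quad_loss, theta, dsearch, m=10, interval=[0.0, 1.0], log_basis=2.0
)
assert quad_loss(theta - eta * dsearch) <= quad_loss(theta)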

backtracking_armijo_line_search_with_loss_theta_grad_loss_theta(loss, theta, loss_theta, grad_loss_theta, dsearch, alpha=0.01, beta=0.5, n_step_max=10, **kwargs)

Line search algorithm based on the Armijo condition, for callers that have already evaluated the loss and its gradient at theta.

Parameters:
  • loss (Callable[[Tensor], Tensor]) – The loss function.
  • theta (Tensor) – The current parameters of the loss.
  • loss_theta (Tensor) – The loss at theta.
  • grad_loss_theta (Tensor) – The gradient of the loss at theta.
  • dsearch (Tensor) – The search direction.
  • alpha (float) – The sufficient-decrease parameter of the Armijo condition.
  • beta (float) – The backtracking factor used to shrink the step size.
  • n_step_max (int) – The maximum number of steps in the backtracking algorithm.
  • **kwargs – Arbitrary keyword arguments.

Return type: Tensor

Returns: An eta giving sufficient decrease of the loss along the search direction from theta.
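
This variant takes the loss and the gradient at theta as precomputed tensors, which avoids re-evaluating them when the caller already has them, as in the PINN and Deep Ritz examples above. A minimal sketch on the same toy quadratic loss (the loss is an illustrative assumption; only the call signature comes from the documentation above):

import torch

from scimba_torch.optimizers.line_search import (
    backtracking_armijo_line_search_with_loss_theta_grad_loss_theta,
)

# toy quadratic loss (illustrative assumption)
def quad_loss(theta: torch.Tensor) -> torch.Tensor:
    return 0.5 * torch.sum(theta**2)

theta = torch.ones(5)
loss_theta = quad_loss(theta)
grad_loss_theta = theta.clone()  # exact gradient of the toy loss
dsearch = grad_loss_theta  # search along the gradient, as in the examples above

eta = backtracking_armijo_line_search_with_loss_theta_grad_loss_theta(
    quad_loss, theta, loss_theta, grad_loss_theta, dsearch,
    alpha=0.2, beta=0.5, n_step_max=100,
)
# for this quadratic, any eta in (0, 2) strictly decreases the loss
assert quad_loss(theta - eta * dsearch) < loss_theta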

backtracking_armijo_line_search(loss, grad_loss, theta, dsearch, alpha=0.1, beta=0.5, n_step_max=10, **kwargs)

Line search algorithm based on the Armijo condition.

Parameters:
  • loss (Callable[[Tensor], Tensor]) – The loss function.
  • grad_loss (Callable[[Tensor], Tensor]) – The gradient function of the loss function.
  • theta (Tensor) – The current parameters of the loss.
  • dsearch (Tensor) – The search direction.
  • alpha (float) – The sufficient-decrease parameter of the Armijo condition.
  • beta (float) – The backtracking factor used to shrink the step size.
  • n_step_max (int) – The maximum number of steps in the backtracking algorithm.
  • **kwargs – Arbitrary keyword arguments.

Return type: Tensor

Returns: An eta giving sufficient decrease of the loss along the search direction from theta.
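
The long example above imports this variant but only exercises the other two functions; a minimal sketch on a toy quadratic loss (the loss and its gradient are illustrative assumptions; only the call signature comes from the documentation above):

import torch

from scimba_torch.optimizers.line_search import backtracking_armijo_line_search

# toy quadratic loss and its exact gradient (illustrative assumptions)
def quad_loss(theta: torch.Tensor) -> torch.Tensor:
    return 0.5 * torch.sum(theta**2)

def quad_grad(theta: torch.Tensor) -> torch.Tensor:
    return theta

theta = torch.ones(5)
dsearch = quad_grad(theta)

# unlike the variant above, this one evaluates the loss and its gradient itself
eta = backtracking_armijo_line_search(
    quad_loss, quad_grad, theta, dsearch, alpha=0.2, beta=0.5, n_step_max=100
)
assert quad_loss(theta - eta * dsearch) < quad_loss(theta)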
