scimba_torch.neural_nets.structure_preserving_nets.gradpotential

Defines the GradPotential class for neural networks.

Classes

GradPotential(y_dim, p_dim, width, **kwargs)

Combines linear transformations of two input tensors y and p.

class GradPotential(y_dim, p_dim, width, **kwargs)

Bases: Module

Combines linear transformations of two input tensors y and p.

Applies an activation function to the sum of the two linear transformations, scales the result by a linear transformation of p, and returns a matrix product of the transformed tensors.

The module is used to model gradients of potentials in neural network architectures, in particular structure-preserving networks.

Parameters:
  • y_dim (int) – Dimension of the input tensor y.

  • p_dim (int) – Dimension of the input tensor p.

  • width (int) – Width of the internal layers (i.e., the number of units in the hidden layers).

  • **kwargs – Additional keyword arguments. The activation function type can be passed as a keyword argument (e.g., “tanh”, “relu”).

linear_y: nn.Linear

Linear transformation for the y input tensor.

linear_p: nn.Linear

Linear transformation for the p input tensor.

activation_type: str

Activation function type (e.g., ‘tanh’) applied to the sum of the linear transformations.

scaling: nn.Linear

Linear scaling transformation for the p tensor.

activation

Activation function applied to the sum of the linear transformations.
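The attributes above suggest how the module could be wired. The following is a minimal sketch, not the library source: the default activation type, the supported activation names, and the exact forward rule (contracting back to y_dim through the weight of linear_y) are assumptions consistent with, but not stated by, the documentation.

```python
import torch
import torch.nn as nn


class GradPotential(nn.Module):
    """Illustrative sketch of the documented attributes (hypothetical wiring)."""

    def __init__(self, y_dim: int, p_dim: int, width: int, **kwargs):
        super().__init__()
        # Linear transformations for the y and p input tensors.
        self.linear_y = nn.Linear(y_dim, width)
        self.linear_p = nn.Linear(p_dim, width)
        # Linear scaling transformation for the p tensor.
        self.scaling = nn.Linear(p_dim, width)
        # Activation type taken from kwargs; default and mapping are assumptions.
        self.activation_type = kwargs.get("activation_type", "tanh")
        self.activation = {"tanh": torch.tanh, "relu": torch.relu}[self.activation_type]

    def forward(self, y: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        # Activation applied to the sum of the two linear transformations.
        h = self.activation(self.linear_y(y) + self.linear_p(p))
        # Scale based on p, then return a matrix product that maps back to y_dim.
        return (self.scaling(p) * h) @ self.linear_y.weight
```

With this wiring, the output has the same dimension as y, which is what one expects of a quantity modeling a gradient with respect to y.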

forward(y, p)

Computes the forward pass.

This method combines the transformations of the input tensors y and p, applies an activation function, scales the result based on p, and returns the matrix product.

Parameters:
  • y (Tensor) – The input tensor of shape (batch_size, y_dim).

  • p (Tensor) – The input tensor of shape (batch_size, p_dim).

Return type:

Tensor

Returns:

The output tensor after applying the transformation and scaling.
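The forward pass is described only qualitatively above. The computation can be sketched with plain tensors; the shapes follow the parameter docs, while the weight matrices and the exact combine/scale/matmul rule are illustrative assumptions:

```python
import torch

batch_size, y_dim, p_dim, width = 8, 3, 2, 16
y = torch.randn(batch_size, y_dim)
p = torch.randn(batch_size, p_dim)

# Illustrative weights standing in for linear_y, linear_p, and scaling
# (biases omitted for brevity).
W_y = torch.randn(width, y_dim)
W_p = torch.randn(width, p_dim)
W_s = torch.randn(width, p_dim)

# Activation applied to the sum of the two linear transformations.
h = torch.tanh(y @ W_y.T + p @ W_p.T)   # shape (batch_size, width)
# Scale based on p, then a matrix product maps the result back to y_dim.
out = ((p @ W_s.T) * h) @ W_y           # shape (batch_size, y_dim)
```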