scimba_torch.neural_nets.structure_preserving_nets.separated_symplectic_layers¶
Defines the SympNet class for symplectic neural networks.
Classes

| LinearSymplecticLayer | Combines a linear transformation on two input tensors |
| ActivationSymplecticLayer | Combines a linear transformation on two input tensors |
| GradPotentialSymplecticLayer | Combines a linear transformation on two input tensors |
| PeriodicGradPotentialSymplecticLayer | Combines a linear transformation on two input tensors |
- class LinearSymplecticLayer(size, conditional_size, **kwargs)[source]¶
Bases: InvertibleLayer

Combines a linear transformation on two input tensors y and p. Applies an activation function, scales the result based on p, and returns a matrix product of the transformed tensors.

The module is used to model potential gradients in neural network architectures, especially in problems involving structured data.

- Parameters:
  - size (int) – Total dimension of the state space (will be split into p and q).
  - conditional_size (int) – Dimension of the conditional input tensor.
  - **kwargs – Additional keyword arguments. The activation function type can be passed as a keyword argument (e.g., “tanh”, “relu”).
- linear_q: nn.Linear¶
Linear transformation for the q input tensor.
- linear_mu: nn.Linear¶
Linear transformation for the mu (conditional) input tensor.
- forward(p, q, mu)[source]¶
Computes the forward pass.
This method combines the transformations of the input tensors, applies an activation function, scales the result, and returns the matrix product.
- Parameters:
  - p (Tensor) – The momentum tensor.
  - q (Tensor) – The position tensor.
  - mu (Tensor) – The conditional input tensor.
- Return type: Tensor
- Returns: The output tensor after applying the transformation and scaling.
- backward(p, q, mu)[source]¶
Computes the backward (inverse) pass.
This method inverts the transformation applied in the forward pass.
- Parameters:
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The concatenated tensor of the original (p, q).
- log_abs_det_jacobian(p, q, mu)[source]¶
Computes the log absolute value of the determinant of the Jacobian.
- Parameters:
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The log absolute determinant of the Jacobian as a tensor of shape (batch_size,).
- abs_det_jacobian(p, q, mu)[source]¶
Computes the absolute value of the determinant of the Jacobian.
- Parameters:
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The absolute determinant of the Jacobian as a tensor of shape (batch_size,).
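The invertibility that `forward` and `backward` document can be illustrated with a minimal standalone sketch. This is an assumption-laden illustration of the shear-type symplectic update such layers implement, not the scimba_torch code: a single `torch.nn.Linear` (standing in for the `linear_q`/`linear_mu` pair) shifts only p from (q, mu), so the layer is exactly invertible.

```python
import torch

torch.manual_seed(0)

# Hypothetical sketch: a symplectic "shear" update changes only p using a
# function of (q, mu), leaving q untouched, so the inverse is the same shift
# with the opposite sign.
shift = torch.nn.Linear(3 + 2, 3)  # stands in for linear_q/linear_mu combined

def forward(p, q, mu):
    # p' = p + f(q, mu), q' = q
    return p + shift(torch.cat([q, mu], dim=-1)), q

def backward(p_new, q, mu):
    # exact inverse: subtract the same shift
    return p_new - shift(torch.cat([q, mu], dim=-1)), q

p, q, mu = torch.randn(4, 3), torch.randn(4, 3), torch.randn(4, 2)
p_out, q_out = forward(p, q, mu)
p_rec, q_rec = backward(p_out, q_out, mu)
```

Because q passes through unchanged, `backward(forward(p, q, mu))` recovers (p, q) exactly, which is the round-trip property the documented `backward` method guarantees.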
- class ActivationSymplecticLayer(size, conditional_size, **kwargs)[source]¶
Bases: InvertibleLayer

Combines a linear transformation on two input tensors y and p. Applies an activation function, scales the result based on p, and returns a matrix product of the transformed tensors.

The module is used to model potential gradients in neural network architectures, especially in problems involving structured data.

- Parameters:
  - size (int) – Total dimension of the state space (will be split into p and q).
  - conditional_size (int) – Dimension of the conditional input tensor.
  - **kwargs – Additional keyword arguments. The activation function type can be passed as a keyword argument (e.g., “tanh”, “relu”).
- linear_a: nn.Linear¶
Linear transformation for the y input tensor.
- forward(p, q, mu)[source]¶
Computes the forward pass.
This method combines the transformations of the input tensors, applies an activation function, scales the result, and returns the matrix product.
- Parameters:
  - p (Tensor) – The momentum tensor.
  - q (Tensor) – The position tensor.
  - mu (Tensor) – The conditional input tensor.
- Return type: Tensor
- Returns: The output tensor after applying the transformation and scaling.
- backward(p, q, mu)[source]¶
Computes the backward (inverse) pass.
This method inverts the transformation applied in the forward pass.
- Parameters:
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The concatenated tensor of the original (p, q).
- log_abs_det_jacobian(p, q, mu)[source]¶
Computes the log absolute value of the determinant of the Jacobian.
- Parameters:
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The log absolute determinant of the Jacobian as a tensor of shape (batch_size,).
- abs_det_jacobian(p, q, mu)[source]¶
Computes the absolute value of the determinant of the Jacobian.
- Parameters:
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The absolute determinant of the Jacobian as a tensor of shape (batch_size,).
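The Jacobian-determinant methods above return a constant for shear-type updates, and this can be checked numerically. The sketch below is an assumption about the update rule (elementwise `p' = p + a * tanh(q)`, `q' = q`), not the scimba_torch implementation: since q is unchanged and p enters p' only through the identity, the Jacobian of the map is unit-triangular, so `abs_det_jacobian` is 1 and `log_abs_det_jacobian` is 0.

```python
import torch

torch.manual_seed(0)

# Hypothetical activation-type symplectic update on a single flattened
# state x = (p, q): p' = p + a * tanh(q), q' = q.
n = 3
a = torch.randn(n)

def step(x):
    p, q = x[:n], x[n:]
    return torch.cat([p + a * torch.tanh(q), q])

x = torch.randn(2 * n)
# Full 6x6 Jacobian of the map; it is block upper-triangular with identity
# blocks on the diagonal, hence determinant 1.
J = torch.autograd.functional.jacobian(step, x)
det = torch.det(J)
```

Volume preservation (unit Jacobian determinant) is exactly the property that makes these layers symplectic building blocks.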
- class GradPotentialSymplecticLayer(size, conditional_size, width, **kwargs)[source]¶
Bases: InvertibleLayer

Combines a linear transformation on two input tensors y and p. Applies an activation function, scales the result based on p, and returns a matrix product of the transformed tensors.

The module is used to model potential gradients in neural network architectures, especially in problems involving structured data.

- Parameters:
  - size (int) – Total dimension of the state space.
  - conditional_size (int) – Dimension of the conditional input tensor.
  - width (int) – Width of the internal layers (i.e., the number of units in the hidden layers).
  - **kwargs – Additional keyword arguments. The activation function type can be passed as a keyword argument (e.g., “tanh”, “relu”).
- linear_q: nn.Linear¶
Linear transformation for the q input tensor.
- linear_mu: nn.Linear¶
Linear transformation for the mu (conditional) input tensor.
- activation_type: str¶
Activation function type (e.g., ‘tanh’) applied to the sum of the linear transformations.
- scaling: nn.Linear¶
Linear scaling transformation for the p tensor.
- activation¶
Activation function applied to the sum of the linear transformations.
- forward(q, p, mu)[source]¶
Computes the forward pass.
This method combines the transformations of the input tensors, applies an activation function, scales the result, and returns the matrix product.
- Parameters:
  - q (Tensor) – The position tensor.
  - p (Tensor) – The momentum tensor.
  - mu (Tensor) – The conditional input tensor.
- Return type: Tensor
- Returns: The output tensor after applying the transformation and scaling.
- backward(q, p, mu)[source]¶
Computes the backward (inverse) pass.
This method inverts the transformation applied in the forward pass.
- Parameters:
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The concatenated tensor of the original (p, q).
- log_abs_det_jacobian(p, q, mu)[source]¶
Computes the log absolute value of the determinant of the Jacobian.
- Parameters:
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The log absolute determinant of the Jacobian as a tensor of shape (batch_size,).
- abs_det_jacobian(p, q, mu)[source]¶
Computes the absolute value of the determinant of the Jacobian.
- Parameters:
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The absolute determinant of the Jacobian as a tensor of shape (batch_size,).
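A gradient-of-potential layer can be sketched in a few lines of plain PyTorch. The names and the exact update rule below are assumptions for illustration, not the scimba_torch API: a scalar potential V(q, mu) is modeled by a small MLP of the given `width`, and the momentum is shifted by its q-gradient, `p' = p + grad_q V(q, mu)`, `q' = q`; the inverse subtracts the same gradient.

```python
import torch

torch.manual_seed(0)

# Hypothetical potential network V(q, mu) -> scalar; "width" matches the
# constructor argument documented above.
n, cond, width = 2, 1, 8
potential = torch.nn.Sequential(
    torch.nn.Linear(n + cond, width), torch.nn.Tanh(), torch.nn.Linear(width, 1)
)

def grad_q(q, mu):
    # d/dq of the summed potential, computed with autograd
    q = q.detach().requires_grad_(True)
    V = potential(torch.cat([q, mu], dim=-1)).sum()
    return torch.autograd.grad(V, q)[0]

def forward(q, p, mu):
    return q, p + grad_q(q, mu)

def backward(q, p, mu):
    return q, p - grad_q(q, mu)

q, p, mu = torch.randn(4, n), torch.randn(4, n), torch.randn(4, cond)
q_out, p_out = forward(q, p, mu)
q_rec, p_rec = backward(q_out, p_out, mu)
```

Because the shift depends only on the unchanged q (and mu), `backward` evaluates the identical gradient and the round trip is exact, consistent with the `backward` method documented above.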
- class PeriodicGradPotentialSymplecticLayer(size, conditional_size, width, period, **kwargs)[source]¶
Bases: InvertibleLayer

Combines a linear transformation on two input tensors y and p. Applies an activation function, scales the result based on p, and returns a matrix product of the transformed tensors.

The module is used to model periodic potential gradients in neural network architectures, especially in problems involving structured data.

- Parameters:
  - size (int) – Total dimension of the state space.
  - conditional_size (int) – Dimension of the conditional input tensor.
  - width (int) – Width of the internal layers (i.e., the number of units in the hidden layers).
  - period (Tensor) – The period of the potential.
  - **kwargs – Additional keyword arguments. The activation function type can be passed as a keyword argument (e.g., “tanh”, “relu”).
- linear_q1: nn.Linear¶
Linear transformation for the q input tensor.
- linear_mu: nn.Linear¶
Linear transformation for the mu (conditional) input tensor.
- activation_type: str¶
Activation function type (e.g., ‘tanh’) applied to the sum of the linear transformations.
- scaling: nn.Linear¶
Linear scaling transformation for the p tensor.
- activation¶
Activation function applied to the sum of the linear transformations.
- forward(q, p, mu)[source]¶
Computes the forward pass.
This method combines the transformations of the input tensors, applies an activation function, scales the result, and returns the matrix product.
- Parameters:
  - q (Tensor) – The position tensor.
  - p (Tensor) – The momentum tensor.
  - mu (Tensor) – The conditional input tensor.
- Return type: Tensor
- Returns: The output tensor after applying the transformation and scaling.
- backward(q, p, mu)[source]¶
Computes the backward (inverse) pass.
This method inverts the transformation applied in the forward pass.
- Parameters:
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The concatenated tensor of the original (p, q).
- log_abs_det_jacobian(p, q, mu)[source]¶
Computes the log absolute value of the determinant of the Jacobian.
- Parameters:
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The log absolute determinant of the Jacobian as a tensor of shape (batch_size,).
- abs_det_jacobian(p, q, mu)[source]¶
Computes the absolute value of the determinant of the Jacobian.
- Parameters:
  - p (Tensor) – The momentum tensor of shape (batch_size, n).
  - q (Tensor) – The position tensor of shape (batch_size, n).
  - mu (Tensor) – The conditional input tensor of shape (batch_size, conditional_size).
- Return type: Tensor
- Returns: The absolute determinant of the Jacobian as a tensor of shape (batch_size,).
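One common way to make a learned potential periodic, sketched below, is to feed q through sin/cos features of the given period before the MLP; this is an assumption about the construction (not the library's code), but it guarantees that V, and hence grad_q V, is exactly period-periodic in q.

```python
import math
import torch

torch.manual_seed(0)

# Hypothetical periodic potential: q enters only through sin/cos of
# 2*pi*q/period, so shifting q by one period leaves V unchanged.
n, width = 2, 8
period = 2.0
net = torch.nn.Sequential(
    torch.nn.Linear(2 * n, width), torch.nn.Tanh(), torch.nn.Linear(width, 1)
)

def potential(q):
    feats = torch.cat(
        [torch.sin(2 * math.pi * q / period), torch.cos(2 * math.pi * q / period)],
        dim=-1,
    )
    return net(feats)

def grad_potential(q):
    # q-gradient of the summed potential via autograd
    q = q.detach().requires_grad_(True)
    return torch.autograd.grad(potential(q).sum(), q)[0]

q = torch.randn(4, n)
g1 = grad_potential(q)
g2 = grad_potential(q + period)  # shifted by one period: same gradient
```

The momentum update built from this gradient (as in the non-periodic layer above) then inherits the periodicity in q that this class advertises.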