scimba_torch.neural_nets.coordinates_based_nets.activation
Different activation layers and adaptive activation layers.
All the activation functions take **kwargs at initialization so that every activation function has the same signature.
Functions
- activation_function: Function to choose the activation function.

Classes
- AdaptativeTanh: Class for tanh activation function with adaptive parameter.
- AnisotropicRadial: Anisotropic radial basis activation.
- Cosin: Class for Cosine activation function.
- Hat: Class for Hat activation function.
- Heaviside: Class for Regularized Heaviside activation function.
- Identity activation function.
- IsotropicRadial: Isotropic radial basis activation.
- Rational: Class for a rational activation function with adaptive parameters.
- RegularizedHat: Class for Regularized Hat activation function.
- SiLU: SiLU activation function.
- Sigmoid: Sigmoid activation function.
- Sine: Class for Sine activation function.
- Swish: Swish activation function.
- Tanh: Tanh activation function.
|
- class AdaptativeTanh(**kwargs)
  Bases: Module
  Class for tanh activation function with adaptive parameter.
  - Parameters:
    **kwargs (Any) – Keyword arguments including:
    - mu (float): the mean of the Gaussian law. Defaults to 0.0.
    - sigma (float): the standard deviation of the Gaussian law. Defaults to 0.1.
  - a
    The parameter of the tanh.
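  A minimal sketch of one common adaptive-tanh rule, \(\tanh(a x)\) with the slope \(a\) drawn once from \(\mathcal{N}(\mu, \sigma)\) and then trained; the exact forward rule used by the library's class is an assumption here:

  ```python
  import torch

  class AdaptiveTanhSketch(torch.nn.Module):
      """Minimal sketch: tanh(a * x) with a trainable slope a ~ N(mu, sigma)."""

      def __init__(self, **kwargs):
          super().__init__()
          mu = kwargs.get("mu", 0.0)        # mean of the Gaussian law
          sigma = kwargs.get("sigma", 0.1)  # standard deviation of the Gaussian law
          # the slope is sampled once at construction, then trained
          self.a = torch.nn.Parameter(mu + sigma * torch.randn(1))

      def forward(self, x: torch.Tensor) -> torch.Tensor:
          return torch.tanh(self.a * x)
  ```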
- class Hat(**kwargs)
  Bases: Module
  Class for Hat activation function.
  - Parameters:
    **kwargs (Any) – Keyword arguments (not used here).
- class RegularizedHat(**kwargs)
  Bases: Module
  Class for Regularized Hat activation function.
  - Parameters:
    **kwargs (Any) – Keyword arguments (not used here).
- class Sine(**kwargs)
  Bases: Module
  Class for Sine activation function.
  - Parameters:
    **kwargs (Any) – Keyword arguments including:
    - freq (float): the frequency of the sine. Defaults to 1.0.
  - freq
    The frequency of the sine.
- class Cosin(**kwargs)
  Bases: Module
  Class for Cosine activation function.
  - Parameters:
    **kwargs (Any) – Keyword arguments including:
    - freq (float): the frequency of the cosine. Defaults to 1.0.
  - freq
    The frequency of the cosine.
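  The sketch below illustrates the assumed behaviour of Sine (and, with torch.cos, of Cosin): scale the input by freq, then apply the trigonometric function. Whether the library stores freq as a plain float or a trainable parameter is an assumption:

  ```python
  import torch

  class SineSketch(torch.nn.Module):
      """Minimal sketch of a sine activation with a fixed frequency."""

      def __init__(self, **kwargs):
          super().__init__()
          self.freq = kwargs.get("freq", 1.0)  # frequency of the sine

      def forward(self, x: torch.Tensor) -> torch.Tensor:
          # Cosin would return torch.cos(self.freq * x) instead
          return torch.sin(self.freq * x)
  ```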
- class Heaviside(**kwargs)
  Bases: Module
  Class for Regularized Heaviside activation function.
  \[H_k(x) = \frac{1}{1 + e^{-2 k x}}, \qquad H_k(x) \approx H(x) \text{ for } k \gg 1\]
  - Parameters:
    **kwargs (Any) – Keyword arguments including:
    - k (float): the regularization parameter. Defaults to 100.0.
  - k
    The regularization parameter.
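  Since the entry above gives the formula explicitly, a direct sketch is straightforward; only the class internals (attribute storage, kwargs handling) are assumptions:

  ```python
  import torch

  class HeavisideSketch(torch.nn.Module):
      """Minimal sketch of the regularized Heaviside H_k(x) = 1 / (1 + e^{-2kx})."""

      def __init__(self, **kwargs):
          super().__init__()
          self.k = kwargs.get("k", 100.0)  # regularization parameter

      def forward(self, x: torch.Tensor) -> torch.Tensor:
          # identical to torch.sigmoid(2.0 * self.k * x)
          return 1.0 / (1.0 + torch.exp(-2.0 * self.k * x))
  ```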
- class Tanh(**kwargs)
  Bases: Module
  Tanh activation function.
  - Parameters:
    **kwargs (Any) – Keyword arguments (not used here).
- class SiLU(**kwargs)
  Bases: Module
  SiLU activation function.
  - Parameters:
    **kwargs (Any) – Keyword arguments (not used here).
- class Swish(**kwargs)
  Bases: Module
  Swish activation function.
  - Parameters:
    **kwargs (Any) – Keyword arguments including:
    - learnable (bool): whether the beta parameter is learnable. Defaults to False.
    - beta (float): the beta parameter. Defaults to 1.0.
  - learnable
    Whether beta is learnable.
  - beta
    The beta parameter.
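  A minimal sketch of the usual Swish rule \(x \cdot \mathrm{sigmoid}(\beta x)\), with beta promoted to a trainable parameter when learnable is set; the exact internals of the library's class are assumptions:

  ```python
  import torch

  class SwishSketch(torch.nn.Module):
      """Minimal sketch: x * sigmoid(beta * x), with an optionally learnable beta."""

      def __init__(self, **kwargs):
          super().__init__()
          beta = torch.tensor(kwargs.get("beta", 1.0))
          self.learnable = kwargs.get("learnable", False)
          if self.learnable:
              self.beta = torch.nn.Parameter(beta)  # beta is trained with the model
          else:
              self.register_buffer("beta", beta)    # beta stays fixed

      def forward(self, x: torch.Tensor) -> torch.Tensor:
          return x * torch.sigmoid(self.beta * x)
  ```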
- class Sigmoid(**kwargs)
  Bases: Module
  Sigmoid activation function.
  - Parameters:
    **kwargs (Any) – Keyword arguments (not used here).
- class IsotropicRadial(in_size, m, **kwargs)
  Bases: Module
  Isotropic radial basis activation.
  It is of the form \(\phi(x, m, \sigma)\), with \(m\) the center of the function and \(\sigma\) the shape parameter.
  Currently implemented:
  - \(\phi(x, m, \sigma) = e^{-|x - m|^2 \sigma^2}\)
  - \(\phi(x, m, \sigma) = 1 / \sqrt{1 + (|x - m| \, \sigma^2)^2}\)
  where \(|\cdot|\) denotes the Lp norm.
  - Parameters:
    - in_size (int) – Size of the inputs.
    - m (Tensor) – Center tensor for the radial basis function.
    - **kwargs – Keyword arguments including:
      - norm (int): order p of the Lp norm. Defaults to 2.
      - type_rbf (str): type of RBF ("gaussian" or other). Defaults to "gaussian".
  - Learnable Parameters:
    - mu: the center of the radial basis function (size in_size).
    - sigma: the shape parameter of the radial basis function.
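  A minimal sketch of the Gaussian variant under the conventions stated above (Lp distance to a trainable center mu, trainable scalar sigma); initial values and tensor shapes are assumptions:

  ```python
  import torch

  class IsotropicRadialSketch(torch.nn.Module):
      """Minimal sketch of the Gaussian isotropic RBF: exp(-|x - mu|^2 sigma^2)."""

      def __init__(self, in_size: int, m: torch.Tensor, **kwargs):
          super().__init__()
          self.norm = kwargs.get("norm", 2)               # order p of the Lp norm
          self.mu = torch.nn.Parameter(m.clone())         # trainable center, size in_size
          self.sigma = torch.nn.Parameter(torch.ones(1))  # trainable shape parameter

      def forward(self, x: torch.Tensor) -> torch.Tensor:
          # x: (batch, in_size); Lp distance of each sample to the center
          r = torch.linalg.vector_norm(x - self.mu, ord=self.norm, dim=-1, keepdim=True)
          return torch.exp(-(r**2) * self.sigma**2)
  ```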
- class AnisotropicRadial(in_size, m, **kwargs)
  Bases: Module
  Anisotropic radial basis activation.
  It is of the form \(\phi(x, m, \Sigma)\), with \(m\) the center of the function and \(\Sigma = A A^T + 0.01 I_d\) the matrix shape parameter.
  Currently implemented:
  - \(\phi(x, m, \Sigma) = e^{-\langle x - m,\, \Sigma (x - m) \rangle}\)
  - \(\phi(x, m, \Sigma) = 1 / \sqrt{1 + \langle x - m,\, \Sigma (x - m) \rangle^2}\)
  - Parameters:
    - in_size (int) – Size of the inputs.
    - m (Tensor) – Center tensor for the radial basis function.
    - **kwargs – Keyword arguments including:
      - type_rbf (str): type of RBF ("gaussian" or other). Defaults to "gaussian".
  - Learnable Parameters:
    - mu: the center of the radial basis function (size in_size).
    - A: the shape matrix factor of the radial basis function (size in_size × in_size).
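  A minimal sketch of the Gaussian variant: the quadratic form uses \(\Sigma = A A^T + 0.01 I_d\), which is symmetric positive definite for any \(A\), so the trainable matrix \(A\) is unconstrained. The initialization of A is an assumption:

  ```python
  import torch

  class AnisotropicRadialSketch(torch.nn.Module):
      """Minimal sketch of the Gaussian anisotropic RBF: exp(-<x-m, Sigma (x-m)>)."""

      def __init__(self, in_size: int, m: torch.Tensor, **kwargs):
          super().__init__()
          self.mu = torch.nn.Parameter(m.clone())          # trainable center
          self.A = torch.nn.Parameter(torch.eye(in_size))  # trainable shape factor

      def forward(self, x: torch.Tensor) -> torch.Tensor:
          # Sigma = A A^T + 0.01 I_d is positive definite by construction
          sigma = self.A @ self.A.T + 0.01 * torch.eye(self.A.shape[0])
          d = x - self.mu                                  # (batch, in_size)
          quad = torch.einsum("bi,ij,bj->b", d, sigma, d)  # <x - m, Sigma (x - m)>
          return torch.exp(-quad).unsqueeze(-1)
  ```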
- class Rational(**kwargs)
  Bases: Module
  Class for a rational activation function with adaptive parameters.
  The function takes the form \(P(x) / Q(x)\), with \(P\) a degree-3 polynomial and \(Q\) a degree-2 polynomial. It is initialized as the best approximation of the ReLU function on \([-1, 1]\). The polynomials take the form:
  \(P(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3\)
  \(Q(x) = q_0 + q_1 x + q_2 x^2\)
  The coefficients \(p_0, p_1, p_2, p_3, q_0, q_1, q_2\) are learnable parameters.
  - Parameters:
    **kwargs (Any) – Additional keyword arguments (not used here).
  - p0
    Coefficient \(p_0\) of the polynomial \(P\).
  - p1
    Coefficient \(p_1\) of the polynomial \(P\).
  - p2
    Coefficient \(p_2\) of the polynomial \(P\).
  - p3
    Coefficient \(p_3\) of the polynomial \(P\).
  - q0
    Coefficient \(q_0\) of the polynomial \(Q\).
  - q1
    Coefficient \(q_1\) of the polynomial \(Q\).
  - q2
    Coefficient \(q_2\) of the polynomial \(Q\).
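  A minimal sketch with the forward rule spelled out; the coefficient values below are the classical degree-(3, 2) best approximation of ReLU on \([-1, 1]\) from the rational-networks literature, and whether the library uses exactly these numbers is an assumption:

  ```python
  import torch

  class RationalSketch(torch.nn.Module):
      """Minimal sketch of a (3, 2) rational activation P(x) / Q(x)."""

      def __init__(self, **kwargs):
          super().__init__()
          # p0..p3 and q0..q2, set near the best rational approximation of
          # ReLU on [-1, 1] (the library's exact values are an assumption)
          self.p = torch.nn.Parameter(torch.tensor([0.0218, 0.5, 1.5957, 1.1915]))
          self.q = torch.nn.Parameter(torch.tensor([2.383, 0.0, 1.0]))

      def forward(self, x: torch.Tensor) -> torch.Tensor:
          P = self.p[0] + self.p[1] * x + self.p[2] * x**2 + self.p[3] * x**3
          Q = self.q[0] + self.q[1] * x + self.q[2] * x**2
          return P / Q
  ```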
- activation_function(ac_type, in_size=1, **kwargs)
  Function to choose the activation function.
  - Parameters:
    - ac_type (str) – The name of the activation function.
    - in_size (int) – The input dimension (used by the radial basis activations). Defaults to 1.
    - **kwargs – Additional keyword arguments passed to the activation function.
  - Returns:
    The activation function instance.
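  A usage sketch of the factory following the signature above; the string key "tanh" passed as ac_type is an assumption (check the module source for the exact names it accepts):

  ```python
  import torch
  from scimba_torch.neural_nets.coordinates_based_nets.activation import (
      activation_function,
  )

  # "tanh" as the ac_type key is an assumption; see the module source for
  # the exact names it accepts.
  act = activation_function("tanh")
  x = torch.linspace(-1.0, 1.0, 5).unsqueeze(-1)
  y = act(x)  # the returned object is a torch.nn.Module and is called like one
  ```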