scimba_torch.neural_nets.coordinates_based_nets.activation

Different activation layers and adaptive activation layers.

All activation functions take **kwargs at initialization so that every activation shares the same constructor signature.
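Because the constructors share this signature, all activations can be built the same way. A minimal usage sketch (assuming the classes are importable from the module named above; Sine's freq keyword is documented below):

    import torch
    from scimba_torch.neural_nets.coordinates_based_nets.activation import Sine, Tanh

    x = torch.linspace(-1.0, 1.0, 5)
    y_tanh = Tanh()(x)          # keyword arguments are accepted but unused by Tanh
    y_sine = Sine(freq=2.0)(x)  # freq is the documented keyword of Sine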

Functions

activation_function(ac_type[, in_size])

Choose and instantiate an activation function by name.

Classes

AdaptativeTanh(**kwargs)

Class for tanh activation function with adaptive parameter.

AnisotropicRadial(in_size, m, **kwargs)

Anisotropic radial basis activation.

Cosin(**kwargs)

Class for Cosine activation function.

Hat(**kwargs)

Class for Hat activation function.

Heaviside(**kwargs)

Class for Regularized Heaviside activation function.

Id()

Identity activation function.

IsotropicRadial(in_size, m, **kwargs)

Isotropic radial basis activation.

Rational(**kwargs)

Class for a rational activation function with adaptive parameters.

RbfSinus(*args, **kwargs)

RegularizedHat(**kwargs)

Class for Regularized Hat activation function.

SiLU(**kwargs)

SiLU activation function.

Sigmoid(**kwargs)

Sigmoid activation function.

Sine(**kwargs)

Class for Sine activation function.

Swish(**kwargs)

Swish activation function.

Tanh(**kwargs)

Tanh activation function.

Wavelet(*args, **kwargs)

class AdaptativeTanh(**kwargs)[source]

Bases: Module

Class for tanh activation function with adaptive parameter.

Parameters:

**kwargs (Any) –

Keyword arguments including:

  • mu (float): the mean of the Gaussian distribution. Defaults to 0.0.

  • sigma (float): the standard deviation of the Gaussian distribution. Defaults to 0.1.

a

The adaptive parameter of the tanh.

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the tanh function.
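A minimal usage sketch for AdaptativeTanh (assuming, as the mu/sigma keywords suggest, that the adaptive parameter a is drawn from that Gaussian distribution at initialization):

    import torch
    from scimba_torch.neural_nets.coordinates_based_nets.activation import AdaptativeTanh

    act = AdaptativeTanh(mu=0.0, sigma=0.1)  # documented defaults for the Gaussian initialization
    x = torch.linspace(-2.0, 2.0, 9)
    y = act(x)                               # tensor of the same shape as x
    print(act.a)                             # the adaptive parameter, updated during training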

class Hat(**kwargs)[source]

Bases: Module

Class for Hat activation function.

Parameters:

**kwargs (Any) – Keyword arguments (not used here).

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the activation function.

class RegularizedHat(**kwargs)[source]

Bases: Module

Class for Regularized Hat activation function.

Parameters:

**kwargs (Any) – Keyword arguments (not used here).

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the activation function.

class Sine(**kwargs)[source]

Bases: Module

Class for Sine activation function.

Parameters:

**kwargs (Any) –

Keyword arguments including:

  • freq (float): The frequency of the sine. Defaults to 1.0.

freq

The frequency of the sine.

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the sine function.

class Cosin(**kwargs)[source]

Bases: Module

Class for Cosine activation function.

Parameters:

**kwargs (Any) –

Keyword arguments including:

  • freq (float): The frequency of the cosine. Defaults to 1.0.

freq

The frequency of the cosine.

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the cosine function.
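A rough sketch of what the freq keyword does for Sine and Cosin. The exact scaling convention is not stated on this page, so the placement of the factor below is an assumption:

    import torch

    def sine_sketch(x: torch.Tensor, freq: float = 1.0) -> torch.Tensor:
        # presumed form: sin(freq * x); Cosin would use cos(freq * x) analogously
        return torch.sin(freq * x)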

class Heaviside(**kwargs)[source]

Bases: Module

Class for Regularized Heaviside activation function.

\[H_k(x) = \frac{1}{1 + e^{-2 k x}}, \qquad H_k(x) \approx H(x) \quad \text{for } k \gg 1\]
Parameters:

**kwargs (Any) –

Keyword arguments including:

  • k (float): the regularization parameter. Defaults to 100.0.

k

The regularization parameter.

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the regularized Heaviside (sigmoid) function.
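The documented formula can be reproduced directly. A minimal sketch of the regularized Heaviside (an illustration of the formula, not the library's code):

    import torch

    def regularized_heaviside(x: torch.Tensor, k: float = 100.0) -> torch.Tensor:
        # H_k(x) = 1 / (1 + exp(-2 k x)) = sigmoid(2 k x); tends to the step function as k grows
        return torch.sigmoid(2.0 * k * x)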

class Tanh(**kwargs)[source]

Bases: Module

Tanh activation function.

Parameters:

**kwargs (Any) – Keyword arguments (not used here).

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the tanh function.

class Id[source]

Bases: Module

Identity activation function.

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The input tensor unchanged (identity function).

class SiLU(**kwargs)[source]

Bases: Module

SiLU activation function.

Parameters:

**kwargs (Any) – Keyword arguments (not used here).

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the SiLU function.

class Swish(**kwargs)[source]

Bases: Module

Swish activation function.

Parameters:

**kwargs (Any) –

Keyword arguments including:

  • learnable (bool): Whether the beta parameter is learnable. Defaults to False.

  • beta (float): The beta parameter. Defaults to 1.0.

learnable

Whether beta is learnable.

beta

The beta parameter.

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the Swish function.
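A minimal re-implementation sketch of Swish with the documented beta and learnable options. The standard Swish form x * sigmoid(beta * x) is assumed; the page does not state the formula explicitly:

    import torch
    import torch.nn as nn

    class SwishSketch(nn.Module):
        """Illustrative sketch: x * sigmoid(beta * x), with beta optionally learnable."""

        def __init__(self, beta: float = 1.0, learnable: bool = False):
            super().__init__()
            value = torch.tensor(float(beta))
            # register beta as a trainable parameter only when requested
            self.beta = nn.Parameter(value) if learnable else value

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x * torch.sigmoid(self.beta * x)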

class Sigmoid(**kwargs)[source]

Bases: Module

Sigmoid activation function.

Parameters:

**kwargs (Any) – Keyword arguments (not used here).

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the sigmoid function.

class Wavelet(*args, **kwargs)[source]

Bases: Module

class RbfSinus(*args, **kwargs)[source]

Bases: Module

class IsotropicRadial(in_size, m, **kwargs)[source]

Bases: Module

Isotropic radial basis activation.

It is of the form \(\phi(x,m,\sigma)\), with \(m\) the center of the function and \(\sigma\) the shape parameter.

Currently implemented:
  • \(\phi(x,m,\sigma) = \exp\left(-\lVert x-m \rVert^2 \sigma^2\right)\)

  • \(\phi(x,m,\sigma) = 1/\sqrt{1+(\lVert x-m \rVert \sigma^2)^2}\)

The norm \(\lVert \cdot \rVert\) is the \(L^p\) norm.

Parameters:
  • in_size (int) – Size of the inputs.

  • m (Tensor) – Center tensor for the radial basis function.

  • **kwargs

    Keyword arguments including:

    • norm (int): The order p of the Lp norm. Defaults to 2.

    • type_rbf (str): Type of RBF (“gaussian” or other). Defaults to “gaussian”.

Learnable Parameters:

  • mu: The center of the radial basis function (a vector of size in_size).

  • sigma: The shape parameter of the radial basis function.

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the radial basis function.
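A minimal sketch of the Gaussian variant documented above, using the Lp norm. This mirrors the formula only; in the actual class, mu and sigma are learnable parameters:

    import torch

    def isotropic_gaussian_rbf(x: torch.Tensor, m: torch.Tensor, sigma: float, p: int = 2) -> torch.Tensor:
        # phi(x, m, sigma) = exp(-||x - m||_p^2 * sigma^2)
        r = torch.linalg.vector_norm(x - m, ord=p, dim=-1)
        return torch.exp(-(r ** 2) * sigma ** 2)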

class AnisotropicRadial(in_size, m, **kwargs)[source]

Bases: Module

Anisotropic radial basis activation.

It is of the form \(\phi(x,m,\Sigma)\), with \(m\) the center of the function and \(\Sigma = A A^T + 0.01\, I_d\) the shape matrix.

Currently implemented:

  • \(\phi(x,m,\Sigma) = \exp\left(-\langle x-m,\, \Sigma (x-m)\rangle\right)\)

  • \(\phi(x,m,\Sigma) = 1/\sqrt{1+\langle x-m,\, \Sigma (x-m)\rangle^2}\)

The \(L^p\) norm is used, as in the isotropic case.

Parameters:
  • in_size (int) – Size of the inputs.

  • m (Tensor) – Center tensor for the radial basis function.

  • **kwargs – Keyword arguments including type_rbf (str): Type of RBF (“gaussian” or other). Defaults to “gaussian”.

Learnable Parameters:

  • mu: The center of the radial basis function (a vector of size in_size).

  • A: The shape matrix of the radial basis function (size in_size × in_size).

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the anisotropic radial basis function.
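A minimal sketch of the Gaussian variant with the documented shape matrix Sigma = A A^T + 0.01 I_d. This mirrors the formula only; in the actual class, mu and A are learnable parameters:

    import torch

    def anisotropic_gaussian_rbf(x: torch.Tensor, m: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # Sigma = A A^T + 0.01 I, then phi(x, m, Sigma) = exp(-<x - m, Sigma (x - m)>)
        sigma_mat = A @ A.T + 0.01 * torch.eye(A.shape[0])
        diff = x - m
        quad = torch.einsum("...i,ij,...j->...", diff, sigma_mat, diff)
        return torch.exp(-quad)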

class Rational(**kwargs)[source]

Bases: Module

Class for a rational activation function with adaptive parameters.

The function takes the form \(P(x) / Q(x)\), with \(P\) a degree-3 polynomial and \(Q\) a degree-2 polynomial. It is initialized as the best approximation of the ReLU function on \([-1, 1]\). The polynomials take the form:

  • \(P(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3\)

  • \(Q(x) = q_0 + q_1 x + q_2 x^2\).

The coefficients \(p_0, p_1, p_2, p_3\) and \(q_0, q_1, q_2\) are learnable parameters.

Parameters:

**kwargs (Any) – Additional keyword arguments (not used here).

p0

Coefficient \(p_0\) of the polynomial \(P\).

p1

Coefficient \(p_1\) of the polynomial \(P\).

p2

Coefficient \(p_2\) of the polynomial \(P\).

p3

Coefficient \(p_3\) of the polynomial \(P\).

q0

Coefficient \(q_0\) of the polynomial \(Q\).

q1

Coefficient \(q_1\) of the polynomial \(Q\).

q2

Coefficient \(q_2\) of the polynomial \(Q\).

forward(x)[source]

Apply the activation function to a tensor x.

Parameters:

x (Tensor) – Input tensor.

Return type:

Tensor

Returns:

The tensor after the application of the rational function.
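A minimal sketch of the rational form P(x)/Q(x) with the documented degrees. The coefficient tensors below are placeholders; the class itself initializes them to approximate ReLU on [-1, 1] and learns them during training:

    import torch

    def rational_sketch(x: torch.Tensor, p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
        # P(x) = p0 + p1 x + p2 x^2 + p3 x^3 and Q(x) = q0 + q1 x + q2 x^2
        P = p[0] + p[1] * x + p[2] * x ** 2 + p[3] * x ** 3
        Q = q[0] + q[1] * x + q[2] * x ** 2
        return P / Q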

activation_function(ac_type, in_size=1, **kwargs)[source]

Choose and instantiate an activation function by name.

Parameters:
  • ac_type (str) – The name of the activation function.

  • in_size (int) – The input dimension (used by the radial basis activations). Defaults to 1.

  • **kwargs – Additional keyword arguments passed to the activation function.

Returns:

The activation function instance.
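A minimal usage sketch. The accepted ac_type strings are not listed on this page, so the name below is an assumption based on the class names:

    import torch
    from scimba_torch.neural_nets.coordinates_based_nets.activation import activation_function

    act = activation_function("tanh", in_size=1)  # "tanh" is a hypothetical name string
    y = act(torch.zeros(3))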