Scaled Exponential Linear Unit (SELU).
SELU is equal to: `scale * elu(x, alpha)`, where `alpha` and `scale`
are pre-defined constants. The values of `alpha` and `scale` are
chosen so that the mean and variance of the inputs are preserved
between two consecutive layers as long as the weights are initialized
correctly (see `lecun_normal` initialization) and the number of inputs
is "large enough" (see references for more information).
Arguments:
    x: A tensor or variable to compute the activation function for.
Returns:
    The scaled exponential unit activation: `scale * elu(x, alpha)`.
Notes:
    - To be used together with the initialization "lecun_normal".
    - To be used together with the dropout variant "AlphaDropout"
      (see the usage sketch below).
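As a usage sketch, the pieces above can be wired together in a small
model (assuming the TensorFlow Keras API; the layer sizes and dropout
rate here are arbitrary):

```python
import tensorflow as tf

# Self-normalizing setup: SELU activation, lecun_normal initializer,
# and AlphaDropout in place of regular Dropout.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="selu",
                          kernel_initializer="lecun_normal",
                          input_shape=(32,)),
    tf.keras.layers.AlphaDropout(0.1),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()
```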
References:
    - [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515)