# tfa.activations.gelu

Gaussian Error Linear Unit.

Computes the Gaussian error linear unit:

$\mathrm{gelu}(x) = x \Phi(x),$

where

$\Phi(x) = \frac{1}{2} \left[ 1 + \mathrm{erf}(\frac{x}{\sqrt{2} }) \right]$

when `approximate` is `False`; or

$\Phi(x) = \frac{1}{2} \left[ 1 + \tanh\left(\sqrt{\frac{2}{\pi} } \cdot (x + 0.044715 \cdot x^3)\right) \right]$

when `approximate` is `True`.
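The two variants above can be sketched in pure Python with `math.erf` and `math.tanh`. This is an illustrative scalar version, not the TFA implementation, which operates on tensors:

```python
import math

def gelu(x: float, approximate: bool = False) -> float:
    """Scalar GELU sketch following the formulas above."""
    if approximate:
        # Tanh approximation: 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
        inner = math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)
        return 0.5 * x * (1.0 + math.tanh(inner))
    # Exact form: x * Phi(x), where Phi is the standard normal CDF
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(gelu(1.0))                     # ~0.8413447, matching the exact variant
print(gelu(1.0, approximate=True))   # ~0.841192, matching the approximation
```

The approximation trades a small accuracy loss for avoiding the `erf` evaluation, which can be cheaper on some hardware.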

Consider using `tf.nn.gelu` instead. Note that the default of `approximate` changed to `False` in `tf.nn.gelu`.

#### Usage:

```python
>>> x = tf.constant([-1.0, 0.0, 1.0])
>>> tfa.activations.gelu(x, approximate=False)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.15865529,  0.        ,  0.8413447 ], dtype=float32)>
>>> tfa.activations.gelu(x, approximate=True)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.15880796,  0.        ,  0.841192  ], dtype=float32)>
```


#### Args:

- `x`: A `Tensor`. Must be one of the following types: `float16`, `float32`, `float64`.
- `approximate`: `bool`, whether to enable approximation.

#### Returns:

A `Tensor`. Has the same type as `x`.

[{ "type": "thumb-down", "id": "missingTheInformationINeed", "label":"Missing the information I need" },{ "type": "thumb-down", "id": "tooComplicatedTooManySteps", "label":"Too complicated / too many steps" },{ "type": "thumb-down", "id": "outOfDate", "label":"Out of date" },{ "type": "thumb-down", "id": "samplesCodeIssue", "label":"Samples / code issue" },{ "type": "thumb-down", "id": "otherDown", "label":"Other" }]
[{ "type": "thumb-up", "id": "easyToUnderstand", "label":"Easy to understand" },{ "type": "thumb-up", "id": "solvedMyProblem", "label":"Solved my problem" },{ "type": "thumb-up", "id": "otherUp", "label":"Other" }]