
tfa.activations.gelu


Gaussian Error Linear Unit.

tfa.activations.gelu(
    x,
    approximate=True
)

Computes the Gaussian error linear unit. If approximate is True (the default), returns the tanh approximation 0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3))); otherwise returns the exact form x * P(X <= x) = 0.5 * x * (1 + erf(x / sqrt(2))), where P(X) ~ N(0, 1).
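As a concrete illustration of how the approximate flag selects between the two formulas, the sketch below re-implements both forms with plain TensorFlow ops. gelu_reference is a hypothetical helper written for this example, not the tfa source.

import math

import tensorflow as tf


def gelu_reference(x, approximate=True):
    # Illustrative sketch of the two formulas above; not the tfa implementation.
    x = tf.convert_to_tensor(x)
    if approximate:
        # Tanh approximation: 0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3)))
        coeff = math.sqrt(2.0 / math.pi)
        return 0.5 * x * (1.0 + tf.tanh(coeff * (x + 0.044715 * tf.pow(x, 3.0))))
    # Exact form: x * P(X <= x) = 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + tf.math.erf(x / math.sqrt(2.0)))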

See Gaussian Error Linear Units (GELUs) and BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.

Args:

  • x: A Tensor. Must be one of the following types: float16, float32, float64.
  • approximate: bool, whether to use the tanh approximation (True, the default) or the exact erf-based form (False).

Returns:

A Tensor. Has the same type as x.
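A minimal usage sketch (the example tensor and variable names are made up for illustration):

import tensorflow as tf
import tensorflow_addons as tfa

x = tf.constant([-1.0, 0.0, 1.0], dtype=tf.float32)

y_approx = tfa.activations.gelu(x)                    # tanh approximation (default)
y_exact = tfa.activations.gelu(x, approximate=False)  # exact erf-based form

Because the result is a Tensor of the same type as x, the function can also be passed as the activation of a Keras layer, e.g. tf.keras.layers.Dense(64, activation=tfa.activations.gelu).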