Activation Functions.

The activation ops provide different types of nonlinearities for use in neural networks. These include smooth nonlinearities (sigmoid, tanh, elu, softplus, and softsign), continuous but not everywhere differentiable functions (relu, relu6, crelu and relu_x), and random regularization (dropout).

All activation ops apply componentwise, and produce a tensor of the same shape as the input tensor.

tf.nn.relu(features, name=None)

Computes rectified linear: max(features, 0).

Args:
• features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
• name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as features.
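For illustration, the formula can be reproduced with NumPy (a sketch of the math only, not the TensorFlow kernel):

```python
import numpy as np

def relu(features):
    # Elementwise max(features, 0), as in the formula above.
    return np.maximum(features, 0)

y = relu(np.array([-2.0, -0.5, 0.0, 1.5]))  # → [0. 0. 0. 1.5]
```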

tf.nn.relu6(features, name=None)

Computes Rectified Linear 6: min(max(features, 0), 6).

Args:
• features: A Tensor with type float, double, int32, int64, uint8, int16, or int8.
• name: A name for the operation (optional).
Returns:

A Tensor with the same type as features.
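A minimal NumPy sketch of the same clipping formula (illustrative only):

```python
import numpy as np

def relu6(features):
    # min(max(features, 0), 6): negative inputs clamp to 0, large inputs to 6.
    return np.minimum(np.maximum(features, 0), 6)

y = relu6(np.array([-3.0, 2.0, 8.0]))  # → [0. 2. 6.]
```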

tf.nn.crelu(features, name=None)

Computes Concatenated ReLU.

Concatenates a ReLU which selects only the positive part of the activation with a ReLU which selects only the negative part of the activation. Note that as a result this non-linearity doubles the depth of the activations. Source: https://arxiv.org/abs/1603.05201

Args:
• features: A Tensor with type float, double, int32, int64, uint8, int16, or int8.
• name: A name for the operation (optional).
Returns:

A Tensor with the same type as features.
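The depth-doubling behavior can be sketched in NumPy (an illustration of the definition, assuming concatenation along the last axis):

```python
import numpy as np

def crelu(features, axis=-1):
    # Concatenate the ReLU of the positive part with the ReLU of the
    # negative part; the output depth is twice the input depth.
    return np.concatenate([np.maximum(features, 0),
                           np.maximum(-features, 0)], axis=axis)

y = crelu(np.array([[1.0, -2.0]]))  # shape (1, 4): [[1. 0. 0. 2.]]
```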

tf.nn.elu(features, name=None)

Computes exponential linear: exp(features) - 1 if features < 0, features otherwise.

Args:
• features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
• name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as features.
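A NumPy sketch of the piecewise formula (illustrative, not the TensorFlow implementation):

```python
import numpy as np

def elu(features):
    # exp(features) - 1 on the negative side, identity on the non-negative side.
    return np.where(features < 0, np.exp(features) - 1, features)

y = elu(np.array([-1.0, 0.0, 2.0]))
```

Note that elu is continuous at 0 and saturates toward -1 for large negative inputs, unlike relu's hard zero.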

tf.nn.softplus(features, name=None)

Computes softplus: log(exp(features) + 1).

Args:
• features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
• name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as features.
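The formula is a smooth approximation of relu; a NumPy sketch (illustrative only):

```python
import numpy as np

def softplus(features):
    # log(exp(features) + 1): smooth, always positive, approaches the
    # identity for large inputs and 0 for large negative inputs.
    return np.log(np.exp(features) + 1)

y = softplus(np.array([0.0, 20.0]))  # softplus(0) = log(2)
```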

tf.nn.softsign(features, name=None)

Computes softsign: features / (abs(features) + 1).

Args:
• features: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
• name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as features.
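A NumPy sketch of the formula (illustrative only):

```python
import numpy as np

def softsign(features):
    # features / (|features| + 1): like tanh, bounded in (-1, 1),
    # but approaches its limits polynomially rather than exponentially.
    return features / (np.abs(features) + 1)

y = softsign(np.array([1.0, -3.0]))  # → [0.5 -0.75]
```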

tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)

Computes dropout.

With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.

By default, each element is kept or dropped independently. If noise_shape is specified, it must be broadcastable to the shape of x, and only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions. For example, if shape(x) = [k, l, m, n] and noise_shape = [k, 1, 1, n], each batch and channel component will be kept independently and each row and column will be kept or not kept together.

Args:
• x: A tensor.
• keep_prob: A scalar Tensor with the same type as x. The probability that each element is kept.
• noise_shape: A 1-D Tensor of type int32, representing the shape for randomly generated keep/drop flags.
• seed: A Python integer. Used to create random seeds. See set_random_seed for behavior.
• name: A name for this operation (optional).
Returns:

A Tensor of the same shape as x.

Raises:
• ValueError: If keep_prob is not in (0, 1].
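The keep/scale behavior can be sketched in NumPy (an illustration of the semantics under an assumed random source; `dropout` and `noise_shape` here are re-implementations, not the TensorFlow op):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, keep_prob, noise_shape=None):
    # Keep each element with probability keep_prob, scaling survivors by
    # 1 / keep_prob so the expected sum is unchanged; noise_shape, if given,
    # broadcasts one keep/drop decision across the dimensions it sets to 1.
    shape = noise_shape if noise_shape is not None else x.shape
    mask = rng.random(shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones(1000)
y = dropout(x, keep_prob=0.5)  # entries are 2.0 or 0.0; mean stays near 1.0
```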

tf.nn.bias_add(value, bias, data_format=None, name=None)

Adds bias to value.

This is (mostly) a special case of tf.add where bias is restricted to 1-D. Broadcasting is supported, so value may have any number of dimensions. Unlike tf.add, the type of bias is allowed to differ from value in the case where both types are quantized.

Args:
• value: A Tensor with type float, double, int64, int32, uint8, int16, int8, complex64, or complex128.
• bias: A 1-D Tensor with size matching the last dimension of value. Must be the same type as value unless value is a quantized type, in which case a different quantized type may be used.
• data_format: A string. 'NHWC' and 'NCHW' are supported.
• name: A name for the operation (optional).
Returns:

A Tensor with the same type as value.
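The restriction to a 1-D bias matching the last dimension can be sketched in NumPy (illustrative, assuming NHWC-style layout where the last axis is channels):

```python
import numpy as np

def bias_add(value, bias):
    # bias must be 1-D and match value's last dimension; NumPy broadcasting
    # then adds it across all remaining dimensions, as tf.nn.bias_add does.
    bias = np.asarray(bias)
    assert bias.ndim == 1 and bias.shape[0] == value.shape[-1]
    return value + bias

y = bias_add(np.zeros((2, 3)), [1.0, 2.0, 3.0])  # each row → [1. 2. 3.]
```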

tf.sigmoid(x, name=None)

Computes sigmoid of x element-wise.

Specifically, y = 1 / (1 + exp(-x)).

Args:
• x: A Tensor with type float32, float64, int32, complex64, int64, or qint32.
• name: A name for the operation (optional).
Returns:

A Tensor with the same type as x if x.dtype != qint32 otherwise the return type is quint8.
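A NumPy sketch of the formula (illustrative only):

```python
import numpy as np

def sigmoid(x):
    # 1 / (1 + exp(-x)): maps the real line into (0, 1), with sigmoid(0) = 0.5.
    return 1.0 / (1.0 + np.exp(-x))

y = sigmoid(np.array([0.0]))  # → [0.5]
```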

tf.tanh(x, name=None)

Computes hyperbolic tangent of x element-wise.

Args:
• x: A Tensor or SparseTensor with type float, double, int32, complex64, int64, or qint32.
• name: A name for the operation (optional).
Returns:

A Tensor or SparseTensor respectively with the same type as x if x.dtype != qint32 otherwise the return type is quint8.
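For reference, the same elementwise behavior on a dense array in NumPy (illustrative only; the SparseTensor case applies tanh to the stored values):

```python
import numpy as np

# tanh is bounded in (-1, 1), saturating for inputs of large magnitude.
y = np.tanh(np.array([-20.0, 0.0, 20.0]))  # ≈ [-1. 0. 1.]
```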