TensorFlow provides several operations that you can use to perform common math computations that reduce various dimensions of a tensor.

`tf.reduce_sum(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)`

Computes the sum of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```
# 'x' is [[1, 1, 1]
# [1, 1, 1]]
tf.reduce_sum(x) ==> 6
tf.reduce_sum(x, 0) ==> [2, 2, 2]
tf.reduce_sum(x, 1) ==> [3, 3]
tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
tf.reduce_sum(x, [0, 1]) ==> 6
```
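The `axis`/`keep_dims` semantics above can be checked against the NumPy equivalent (`np.sum`, per the compatibility note in this section); note that `keepdims` is NumPy's spelling of `keep_dims`. A minimal runnable sketch:

```python
import numpy as np

x = np.array([[1, 1, 1],
              [1, 1, 1]])

print(np.sum(x))                         # 6 -- all dimensions reduced
print(np.sum(x, axis=0))                 # [2 2 2] -- rank drops by 1
print(np.sum(x, axis=1))                 # [3 3]
print(np.sum(x, axis=1, keepdims=True))  # [[3] [3]] -- rank preserved
```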

##### Args:

- `input_tensor`: The tensor to reduce. Should have numeric type.
- `axis`: The dimensions to reduce. If `None` (the default), reduces all dimensions.
- `keep_dims`: If true, retains reduced dimensions with length 1.
- `name`: A name for the operation (optional).
- `reduction_indices`: The old (deprecated) name for `axis`.

##### Returns:

The reduced tensor.

@compatibility(numpy) Equivalent to np.sum @end_compatibility

`tf.reduce_prod(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)`

Computes the product of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned.
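This op has no example above; since it is equivalent to `np.prod` (per the compatibility note below), here is a runnable NumPy sketch with illustrative values:

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])

print(np.prod(x))          # 720 -- product of all elements
print(np.prod(x, axis=0))  # [ 4 10 18] -- column-wise products
print(np.prod(x, axis=1))  # [  6 120] -- row-wise products
```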

##### Args:

- `input_tensor`: The tensor to reduce. Should have numeric type.
- `axis`: The dimensions to reduce. If `None` (the default), reduces all dimensions.
- `keep_dims`: If true, retains reduced dimensions with length 1.
- `name`: A name for the operation (optional).
- `reduction_indices`: The old (deprecated) name for `axis`.

##### Returns:

The reduced tensor.

@compatibility(numpy) Equivalent to np.prod @end_compatibility

`tf.reduce_min(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)`

Computes the minimum of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned.
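This op has no example above; since it is equivalent to `np.min` (per the compatibility note below), here is a runnable NumPy sketch with illustrative values:

```python
import numpy as np

x = np.array([[3, 1, 4],
              [1, 5, 9]])

print(np.min(x))          # 1 -- minimum over all elements
print(np.min(x, axis=0))  # [1 1 4] -- column-wise minima
print(np.min(x, axis=1))  # [1 1] -- row-wise minima
```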

##### Args:

- `input_tensor`: The tensor to reduce. Should have numeric type.
- `axis`: The dimensions to reduce. If `None` (the default), reduces all dimensions.
- `keep_dims`: If true, retains reduced dimensions with length 1.
- `name`: A name for the operation (optional).
- `reduction_indices`: The old (deprecated) name for `axis`.

##### Returns:

The reduced tensor.

@compatibility(numpy) Equivalent to np.min @end_compatibility

`tf.reduce_max(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)`

Computes the maximum of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned.
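This op has no example above; since it is equivalent to `np.max` (per the compatibility note below), here is a runnable NumPy sketch with illustrative values:

```python
import numpy as np

x = np.array([[3, 1, 4],
              [1, 5, 9]])

print(np.max(x))          # 9 -- maximum over all elements
print(np.max(x, axis=0))  # [3 5 9] -- column-wise maxima
print(np.max(x, axis=1))  # [4 9] -- row-wise maxima
```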

##### Args:

- `input_tensor`: The tensor to reduce. Should have numeric type.
- `axis`: The dimensions to reduce. If `None` (the default), reduces all dimensions.
- `keep_dims`: If true, retains reduced dimensions with length 1.
- `name`: A name for the operation (optional).
- `reduction_indices`: The old (deprecated) name for `axis`.

##### Returns:

The reduced tensor.

@compatibility(numpy) Equivalent to np.max @end_compatibility

`tf.reduce_mean(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)`

Computes the mean of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```
# 'x' is [[1., 1.]
# [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1., 2.]
```

##### Args:

- `input_tensor`: The tensor to reduce. Should have numeric type.
- `axis`: The dimensions to reduce. If `None` (the default), reduces all dimensions.
- `keep_dims`: If true, retains reduced dimensions with length 1.
- `name`: A name for the operation (optional).
- `reduction_indices`: The old (deprecated) name for `axis`.

##### Returns:

The reduced tensor.

@compatibility(numpy) Equivalent to np.mean @end_compatibility

`tf.reduce_all(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)`

Computes the "logical and" of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```
# 'x' is [[True, True]
# [False, False]]
tf.reduce_all(x) ==> False
tf.reduce_all(x, 0) ==> [False, False]
tf.reduce_all(x, 1) ==> [True, False]
```

##### Args:

- `input_tensor`: The boolean tensor to reduce.
- `axis`: The dimensions to reduce. If `None` (the default), reduces all dimensions.
- `keep_dims`: If true, retains reduced dimensions with length 1.
- `name`: A name for the operation (optional).
- `reduction_indices`: The old (deprecated) name for `axis`.

##### Returns:

The reduced tensor.

@compatibility(numpy) Equivalent to np.all @end_compatibility

`tf.reduce_any(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)`

Computes the "logical or" of elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```
# 'x' is [[True, True]
# [False, False]]
tf.reduce_any(x) ==> True
tf.reduce_any(x, 0) ==> [True, True]
tf.reduce_any(x, 1) ==> [True, False]
```

##### Args:

- `input_tensor`: The boolean tensor to reduce.
- `axis`: The dimensions to reduce. If `None` (the default), reduces all dimensions.
- `keep_dims`: If true, retains reduced dimensions with length 1.
- `name`: A name for the operation (optional).
- `reduction_indices`: The old (deprecated) name for `axis`.

##### Returns:

The reduced tensor.

@compatibility(numpy) Equivalent to np.any @end_compatibility

`tf.reduce_logsumexp(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)`

Computes log(sum(exp(elements across dimensions of a tensor))).

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned.

This function is more numerically stable than log(sum(exp(input))). It avoids overflows caused by taking the exp of large inputs and underflows caused by taking the log of small inputs.

For example:

```
# 'x' is [[0, 0, 0]
#         [0, 0, 0]]
tf.reduce_logsumexp(x) ==> log(6)
tf.reduce_logsumexp(x, 0) ==> [log(2), log(2), log(2)]
tf.reduce_logsumexp(x, 1) ==> [log(3), log(3)]
tf.reduce_logsumexp(x, 1, keep_dims=True) ==> [[log(3)], [log(3)]]
tf.reduce_logsumexp(x, [0, 1]) ==> log(6)
```
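The stability claim can be demonstrated with the standard log-sum-exp trick (shift by the max before exponentiating); the naive form overflows for large inputs. This is a hypothetical NumPy sketch of the technique, not TensorFlow's actual implementation:

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log(sum(exp(x))): shift by the max first."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

x = np.array([1000.0, 1000.0])

# Naive version overflows: exp(1000) is inf in float64.
with np.errstate(over='ignore'):
    naive = np.log(np.sum(np.exp(x)))
print(naive)         # inf
print(logsumexp(x))  # 1000.6931... == 1000 + log(2)
```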

##### Args:

- `input_tensor`: The tensor to reduce. Should have numeric type.
- `axis`: The dimensions to reduce. If `None` (the default), reduces all dimensions.
- `keep_dims`: If true, retains reduced dimensions with length 1.
- `name`: A name for the operation (optional).
- `reduction_indices`: The old (deprecated) name for `axis`.

##### Returns:

The reduced tensor.

`tf.count_nonzero(input_tensor, axis=None, keep_dims=False, dtype=tf.int64, name=None, reduction_indices=None)`

Computes number of nonzero elements across dimensions of a tensor.

Reduces `input_tensor` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

If `axis` has no entries, all dimensions are reduced, and a tensor with a single element is returned.

**NOTE** Floating point comparison to zero is done by exact floating point
equality check. Small values are **not** rounded to zero for purposes of
the nonzero check.

For example:

```
# 'x' is [[0, 1, 0]
# [1, 1, 0]]
tf.count_nonzero(x) ==> 3
tf.count_nonzero(x, 0) ==> [1, 2, 0]
tf.count_nonzero(x, 1) ==> [1, 2]
tf.count_nonzero(x, 1, keep_dims=True) ==> [[1], [2]]
tf.count_nonzero(x, [0, 1]) ==> 3
```
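The exact-comparison behavior in the NOTE can be demonstrated with the NumPy equivalent `np.count_nonzero`: a tiny-but-nonzero value still counts, while `-0.0` (which compares equal to zero) does not:

```python
import numpy as np

x = np.array([0.0, 1e-300, -0.0, 2.0])

# Only exact zeros are excluded; small values are not rounded to zero.
print(np.count_nonzero(x))  # 2
```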

##### Args:

- `input_tensor`: The tensor to reduce. Should be of numeric type, or `bool`.
- `axis`: The dimensions to reduce. If `None` (the default), reduces all dimensions.
- `keep_dims`: If true, retains reduced dimensions with length 1.
- `dtype`: The output dtype; defaults to `tf.int64`.
- `name`: A name for the operation (optional).
- `reduction_indices`: The old (deprecated) name for `axis`.

##### Returns:

The reduced tensor (number of nonzero values).

`tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None)`

Returns the element-wise sum of a list of tensors.

Optionally, pass `shape` and `tensor_dtype` for shape and type checking; otherwise, these are inferred.

NOTE: This operation is not differentiable and cannot be used if inputs depend on trainable variables. Please use `tf.add_n` for such cases.

For example:

```
# tensor 'a' is [[1, 2], [3, 4]]
# tensor `b` is [[5, 0], [0, 6]]
tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]
# Explicitly pass shape and type
tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
==> [[7, 4], [6, 14]]
```

##### Args:

- `inputs`: A list of `Tensor` objects, each with the same shape and type.
- `shape`: Shape of elements of `inputs`.
- `tensor_dtype`: The type of `inputs`.
- `name`: A name for the operation (optional).

##### Returns:

A `Tensor` of the same shape and type as the elements of `inputs`.

##### Raises:

- `ValueError`: If `inputs` don't all have the same shape and dtype or the shape cannot be inferred.

`tf.einsum(equation, *inputs)`

A generalized contraction between tensors of arbitrary dimension.

This function returns a tensor whose elements are defined by `equation`, which is written in a shorthand form inspired by the Einstein summation convention. As an example, consider multiplying two matrices A and B to form a matrix C. The elements of C are given by:

```
C[i,k] = sum_j A[i,j] * B[j,k]
```

The corresponding `equation` is:

```
ij,jk->ik
```

In general, the `equation` is obtained from the more familiar element-wise equation by:

1. removing variable names, brackets, and commas,
2. replacing "*" with ",",
3. dropping summation signs, and
4. moving the output to the right, and replacing "=" with "->".

Many common operations can be expressed in this way. For example:

```
# Matrix multiplication
einsum('ij,jk->ik', m0, m1)  # output[i,k] = sum_j m0[i,j] * m1[j,k]

# Dot product
einsum('i,i->', u, v)  # output = sum_i u[i]*v[i]

# Outer product
einsum('i,j->ij', u, v)  # output[i,j] = u[i]*v[j]

# Transpose
einsum('ij->ji', m)  # output[j,i] = m[i,j]

# Batch matrix multiplication
einsum('aij,ajk->aik', s, t)  # out[a,i,k] = sum_j s[a,i,j] * t[a,j,k]
```
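The patterns above can be verified with `numpy.einsum`, which accepts the same equation format; a small runnable sketch with illustrative inputs:

```python
import numpy as np

m0 = np.array([[1, 2], [3, 4]])
m1 = np.array([[5, 6], [7, 8]])
u = np.array([1, 2, 3])
v = np.array([4, 5, 6])

print(np.einsum('ij,jk->ik', m0, m1))  # matrix product, same as m0 @ m1
print(np.einsum('i,i->', u, v))        # dot product: 32
print(np.einsum('ij->ji', m0))         # transpose of m0
```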

This function behaves like `numpy.einsum`, but does not support:

* Ellipses (subscripts like `ij...,jk...->ik...`)
* Subscripts where an axis appears more than once for a single input (e.g. `ijj,k->ik`).
* Subscripts that are summed across multiple inputs (e.g. `ij,ij,jk->ik`).

##### Args:

- `equation`: a `str` describing the contraction, in the same format as `numpy.einsum`.
- `inputs`: the inputs to contract (each one a `Tensor`), whose shapes should be consistent with `equation`.

##### Returns:

The contracted `Tensor`, with shape determined by `equation`.

##### Raises:

- `ValueError`: If
  - the format of `equation` is incorrect,
  - the number of inputs implied by `equation` does not match `len(inputs)`,
  - an axis appears in the output subscripts but not in any of the inputs,
  - the number of dimensions of an input differs from the number of indices in its subscript, or
  - the input shapes are inconsistent along a particular axis.