TensorFlow provides several operations that you can use to add linear algebra functions on matrices to your graph.

`tf.diag(diagonal, name=None)`

Returns a diagonal tensor with given diagonal values.

Given a `diagonal`, this operation returns a tensor with the `diagonal` and
everything else padded with zeros. The diagonal is computed as follows:

Assume `diagonal` has dimensions `[D1,..., Dk]`, then the output is a tensor of
rank 2k with dimensions `[D1,..., Dk, D1,..., Dk]` where:

`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]`

and 0 everywhere else.

For example:

```
# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
                       [0, 2, 0, 0]
                       [0, 0, 3, 0]
                       [0, 0, 0, 4]]
```

##### Args:

*  `diagonal`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. Rank k tensor where k is at most 3.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `diagonal`.

`tf.diag_part(input, name=None)`

Returns the diagonal part of the tensor.

This operation returns a tensor with the `diagonal` part
of the `input`. The `diagonal` part is computed as follows:

Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a
tensor of rank `k` with dimensions `[D1,..., Dk]` where:

`diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`.

For example:

```
# 'input' is [[1, 0, 0, 0]
#             [0, 2, 0, 0]
#             [0, 0, 3, 0]
#             [0, 0, 0, 4]]
tf.diag_part(input) ==> [1, 2, 3, 4]
```

##### Args:

*  `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. Rank k tensor where k is 2, 4, or 6.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `input`. The extracted diagonal.

`tf.trace(x, name=None)`

Compute the trace of a tensor `x`.

`trace(x)` returns the sum along the main diagonal.

For example:

```
# 'x' is [[1, 1],
#         [1, 1]]
tf.trace(x) ==> 2

# 'x' is [[1, 2, 3],
#         [4, 5, 6],
#         [7, 8, 9]]
tf.trace(x) ==> 15
```

##### Args:

*  `x`: 2-D tensor.
*  `name`: A name for the operation (optional).

##### Returns:

The trace of the input tensor.

`tf.transpose(a, perm=None, name='transpose')`

Transposes `a`. Permutes the dimensions according to `perm`.

The returned tensor's dimension i will correspond to the input dimension
`perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is
the rank of the input tensor. Hence by default, this operation performs a
regular matrix transpose on 2-D input Tensors.

For example:

```
# 'x' is [[1 2 3]
#         [4 5 6]]
tf.transpose(x) ==> [[1 4]
                     [2 5]
                     [3 6]]

# Equivalently
tf.transpose(x, perm=[1, 0]) ==> [[1 4]
                                  [2 5]
                                  [3 6]]

# 'perm' is more useful for n-dimensional tensors, for n > 2
# 'x' is [[[1  2  3]
#          [4  5  6]]
#         [[7  8  9]
#          [10 11 12]]]
# Take the transpose of the matrices in dimension-0
tf.transpose(x, perm=[0, 2, 1]) ==> [[[1  4]
                                      [2  5]
                                      [3  6]]
                                     [[7 10]
                                      [8 11]
                                      [9 12]]]
```

##### Args:

*  `a`: A `Tensor`.
*  `perm`: A permutation of the dimensions of `a`.
*  `name`: A name for the operation (optional).

##### Returns:

A transposed `Tensor`.

`tf.matrix_diag(diagonal, name=None)`

Returns a batched diagonal tensor with given batched diagonal values.

Given a `diagonal`, this operation returns a tensor with the `diagonal` and
everything else padded with zeros. The diagonal is computed as follows:

Assume `diagonal` has `k` dimensions `[I, J, K, ..., N]`, then the output is a
tensor of rank `k+1` with dimensions `[I, J, K, ..., N, N]` where:

`output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]`.

For example:

```
# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]
# and diagonal.shape = (2, 4)

tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
                               [0, 2, 0, 0]
                               [0, 0, 3, 0]
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0]
                               [0, 6, 0, 0]
                               [0, 0, 7, 0]
                               [0, 0, 0, 8]]]

# which has shape (2, 4, 4)
```

##### Args:

*  `diagonal`: A `Tensor`. Rank `k`, where `k >= 1`.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `diagonal`.
Rank `k+1`, with `output.shape = diagonal.shape + [diagonal.shape[-1]]`.

`tf.matrix_diag_part(input, name=None)`

Returns the batched diagonal part of a batched tensor.

This operation returns a tensor with the `diagonal` part
of the batched `input`. The `diagonal` part is computed as follows:

Assume `input` has `k` dimensions `[I, J, K, ..., N, N]`, then the output is a
tensor of rank `k - 1` with dimensions `[I, J, K, ..., N]` where:

`diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n]`.

The input must be at least a matrix.

For example:

```
# 'input' is [[[1, 0, 0, 0]
#              [0, 2, 0, 0]
#              [0, 0, 3, 0]
#              [0, 0, 0, 4]],
#             [[5, 0, 0, 0]
#              [0, 6, 0, 0]
#              [0, 0, 7, 0]
#              [0, 0, 0, 8]]]
# and input.shape = (2, 4, 4)

tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]

# which has shape (2, 4)
```

##### Args:

*  `input`: A `Tensor`. Rank `k` tensor where `k >= 2` and the last two dimensions are equal.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `input`.
The extracted diagonal(s) having shape
`diagonal.shape = input.shape[:-1]`.

`tf.matrix_band_part(input, num_lower, num_upper, name=None)`

Copy a tensor setting everything outside a central band in each innermost
matrix to zero.

The `band` part is computed as follows:
Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a
tensor with the same shape where

`band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.

The indicator function `in_band(m, n)` is one if
`(num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper)`,
and zero otherwise.

For example:

```
# if 'input' is [[ 0,  1,  2, 3]
#                [-1,  0,  1, 2]
#                [-2, -1,  0, 1]
#                [-3, -2, -1, 0]],

tf.matrix_band_part(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                       [-1,  0,  1, 2]
                                       [ 0, -1,  0, 1]
                                       [ 0,  0, -1, 0]],
tf.matrix_band_part(input, 2, 1) ==>  [[ 0,  1,  0, 0]
                                       [-1,  0,  1, 0]
                                       [-2, -1,  0, 1]
                                       [ 0, -2, -1, 0]]
```

Useful special cases:

```
tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
tf.matrix_band_part(input, 0, 0) ==> Diagonal.
```

##### Args:

*  `input`: A `Tensor`. Rank `k` tensor.
*  `num_lower`: A `Tensor` of type `int64`. 0-D tensor. Number of subdiagonals to keep. If negative, keep entire lower triangle.
*  `num_upper`: A `Tensor` of type `int64`. 0-D tensor. Number of superdiagonals to keep. If negative, keep entire upper triangle.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `input`.
Rank `k` tensor of the same shape as input. The extracted banded tensor.

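The `in_band` rule above can be sketched in NumPy (an illustrative analogue of the op's semantics, not TensorFlow's kernel; the helper name is ours):

```python
import numpy as np

def band_part(x, num_lower, num_upper):
    """Keep entry (m, n) of the matrix when in_band(m, n) holds:
    (num_lower < 0 or m - n <= num_lower) and
    (num_upper < 0 or n - m <= num_upper); zero everything else."""
    m, n = np.indices(x.shape[-2:])
    in_band = ((num_lower < 0) | (m - n <= num_lower)) & \
              ((num_upper < 0) | (n - m <= num_upper))
    return np.where(in_band, x, 0)

x = np.arange(16).reshape(4, 4)
lower_triangular = band_part(x, -1, 0)  # special case: np.tril(x)
main_diagonal = band_part(x, 0, 0)      # special case: diagonal only
```

The three special cases listed above fall out directly: `(0, -1)` is the upper triangle, `(-1, 0)` the lower triangle, and `(0, 0)` the main diagonal.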
`tf.matrix_set_diag(input, diagonal, name=None)`

Returns a batched matrix tensor with new batched diagonal values.

Given `input` and `diagonal`, this operation returns a tensor with the
same shape and values as `input`, except for the diagonals of the innermost
matrices. These will be overwritten by the values in `diagonal`.
The batched matrices must be square.

The output is computed as follows:

Assume `input` has `k+1` dimensions `[I, J, K, ..., N, N]` and `diagonal` has
`k` dimensions `[I, J, K, ..., N]`. Then the output is a
tensor of rank `k+1` with dimensions `[I, J, K, ..., N, N]` where:

`output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n]` for `m == n`.
`output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n]` for `m != n`.

##### Args:

*  `input`: A `Tensor`. Rank `k+1`, where `k >= 1`.
*  `diagonal`: A `Tensor`. Must have the same type as `input`. Rank `k`, where `k >= 1`.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `input`.
Rank `k+1`, with `output.shape = input.shape`.

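The overwrite rule above can be sketched in NumPy (a hypothetical helper illustrating the semantics, not TensorFlow's implementation):

```python
import numpy as np

def matrix_set_diag(inp, diagonal):
    """Return a copy of `inp` with the main diagonal of each innermost
    (square) matrix replaced by the values in `diagonal`."""
    out = inp.copy()
    idx = np.arange(inp.shape[-1])
    out[..., idx, idx] = diagonal  # writes every batch diagonal at once
    return out

a = np.zeros((2, 3, 3))
d = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = matrix_set_diag(a, d)  # shape (2, 3, 3); off-diagonals keep a's values
```
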
`tf.matrix_transpose(a, name='matrix_transpose')`

Transposes last two dimensions of tensor `a`.

For example:

```
# Matrix with no batch dimension.
# 'x' is [[1 2 3]
#         [4 5 6]]
tf.matrix_transpose(x) ==> [[1 4]
                            [2 5]
                            [3 6]]

# Matrix with two batch dimensions.
# x.shape is [1, 2, 3, 4]
# tf.matrix_transpose(x) is shape [1, 2, 4, 3]
```

##### Args:

*  `a`: A `Tensor` with `rank >= 2`.
*  `name`: A name for the operation (optional).

##### Returns:

A transposed batch matrix `Tensor`.

##### Raises:

*  `ValueError`: If `a` is determined statically to have `rank < 2`.

`tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None)`

Multiplies matrix `a` by matrix `b`, producing `a` * `b`.

The inputs must be two-dimensional matrices, with matching inner dimensions, possibly after transposition.

Both matrices must be of the same type. The supported types are:
`float32`, `float64`, `int32`, `complex64`.

Either matrix can be transposed on the fly by setting the corresponding flag
to `True`. This is `False` by default.

If one or both of the matrices contain a lot of zeros, a more efficient
multiplication algorithm can be used by setting the corresponding
`a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default.

For example:

```
# 2-D tensor `a`
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1. 2. 3.]
                                                      [4. 5. 6.]]
# 2-D tensor `b`
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7. 8.]
                                                         [9. 10.]
                                                         [11. 12.]]
c = tf.matmul(a, b) => [[58 64]
                        [139 154]]
```

##### Args:

*  `a`: `Tensor` of type `float32`, `float64`, `int32` or `complex64`.
*  `b`: `Tensor` with same type as `a`.
*  `transpose_a`: If `True`, `a` is transposed before multiplication.
*  `transpose_b`: If `True`, `b` is transposed before multiplication.
*  `a_is_sparse`: If `True`, `a` is treated as a sparse matrix.
*  `b_is_sparse`: If `True`, `b` is treated as a sparse matrix.
*  `name`: Name for the operation (optional).

##### Returns:

A `Tensor` of the same type as `a`.

`tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None)`

Multiplies slices of two tensors in batches.

Multiplies all slices of `Tensor` `x` and `y` (each slice can be
viewed as an element of a batch), and arranges the individual results
in a single output tensor of the same batch size. Each of the
individual slices can optionally be adjointed (to adjoint a matrix
means to transpose and conjugate it) before multiplication by setting
the `adj_x` or `adj_y` flag to `True`, which are by default `False`.

The input tensors `x` and `y` are 3-D or higher with shape `[..., r_x, c_x]`
and `[..., r_y, c_y]`.

The output tensor is 3-D or higher with shape `[..., r_o, c_o]`, where:

```
r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y
```

It is computed as:

```
output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])
```

##### Args:

*  `x`: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `complex64`, `complex128`. 3-D or higher with shape `[..., r_x, c_x]`.
*  `y`: A `Tensor`. Must have the same type as `x`. 3-D or higher with shape `[..., r_y, c_y]`.
*  `adj_x`: An optional `bool`. Defaults to `False`. If `True`, adjoint the slices of `x`.
*  `adj_y`: An optional `bool`. Defaults to `False`. If `True`, adjoint the slices of `y`.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `x`.
3-D or higher with shape `[..., r_o, c_o]`.

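The slice-wise product can be mimicked in NumPy, whose `matmul` also maps over leading batch dimensions (a sketch of the semantics; the helper name is ours):

```python
import numpy as np

def batch_matmul(x, y, adj_x=False, adj_y=False):
    """Multiply the innermost matrices of x and y, optionally taking the
    adjoint (conjugate transpose) of each slice first."""
    if adj_x:
        x = np.conj(np.swapaxes(x, -1, -2))
    if adj_y:
        y = np.conj(np.swapaxes(y, -1, -2))
    return np.matmul(x, y)  # np.matmul maps over leading batch dims

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 3, 4))
y = rng.standard_normal((10, 4, 5))
out = batch_matmul(x, y)  # shape (10, 3, 5)
```
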
`tf.matrix_determinant(input, name=None)`

Computes the determinant of one or more square matrices.

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
form square matrices. The output is a tensor containing the determinants
for all input submatrices `[..., :, :]`.

##### Args:

*  `input`: A `Tensor`. Must be one of the following types: `float32`, `float64`. Shape is `[..., M, M]`.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `input`. Shape is `[...]`.

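NumPy's `np.linalg.det` has the same batched shape contract, which makes the `[..., M, M] -> [...]` reduction easy to see (shown as an analogue, not the same kernel):

```python
import numpy as np

# np.linalg.det maps over leading batch dimensions, reducing a
# [..., M, M] input to a [...] output of per-matrix determinants.
a = np.array([[[2.0, 0.0], [0.0, 3.0]],
              [[1.0, 2.0], [3.0, 4.0]]])  # shape (2, 2, 2)
dets = np.linalg.det(a)                   # shape (2,)
```
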
`tf.matrix_inverse(input, adjoint=None, name=None)`

Computes the inverse of one or more square invertible matrices or their
adjoints (conjugate transposes).

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
form square matrices. The output is a tensor of the same shape as the input
containing the inverse for all input submatrices `[..., :, :]`.

The op uses LU decomposition with partial pivoting to compute the inverses.

If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.

##### Args:

*  `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. Shape is `[..., M, M]`.
*  `adjoint`: An optional `bool`. Defaults to `False`.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.

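The batched contract mirrors NumPy's `np.linalg.inv` (an analogue for illustration; it does not share the op's kernel, and like the op it gives no guarantees for singular inputs):

```python
import numpy as np

# np.linalg.inv inverts every innermost M x M matrix of a [..., M, M]
# tensor; multiplying back should recover the identity.
a = np.array([[[4.0, 7.0], [2.0, 6.0]],
              [[1.0, 0.0], [0.0, 2.0]]])
inv = np.linalg.inv(a)  # shape (2, 2, 2)
```
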
`tf.cholesky(input, name=None)`

Computes the Cholesky decomposition of one or more square matrices.

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
form square matrices, with the same constraints as the single matrix Cholesky
decomposition above. The output is a tensor of the same shape as the input
containing the Cholesky decompositions for all input submatrices `[..., :, :]`.

##### Args:

*  `input`: A `Tensor`. Must be one of the following types: `float64`, `float32`. Shape is `[..., M, M]`.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `input`. Shape is `[..., M, M]`.

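As an analogue, NumPy's `np.linalg.cholesky` returns the same lower-triangular factor for a symmetric positive-definite matrix:

```python
import numpy as np

# For a symmetric positive-definite matrix A, the Cholesky factor L is
# lower triangular with A = L @ L.T.
a = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(a)
```
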
`tf.cholesky_solve(chol, rhs, name=None)`

Solves systems of linear equations `A X = RHS`, given Cholesky factorizations.

```
# Solve 10 separate 2x2 linear systems:
A = ...  # shape 10 x 2 x 2
RHS = ...  # shape 10 x 2 x 1
chol = tf.cholesky(A)  # shape 10 x 2 x 2
X = tf.cholesky_solve(chol, RHS)  # shape 10 x 2 x 1
# tf.matmul(A, X) ~ RHS
X[3, :, 0]  # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]

# Solve five linear systems (K = 5) for every member of the length 10 batch.
A = ...  # shape 10 x 2 x 2
RHS = ...  # shape 10 x 2 x 5
...
X[3, :, 2]  # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]
```

##### Args:

*  `chol`: A `Tensor`. Must be `float32` or `float64`, shape is `[..., M, M]`. Cholesky factorization of `A`, e.g. `chol = tf.cholesky(A)`. For that reason, only the lower triangular parts (including the diagonal) of the last two dimensions of `chol` are used. The strictly upper part is assumed to be zero and not accessed.
*  `rhs`: A `Tensor`, same type as `chol`, shape is `[..., M, K]`.
*  `name`: A name to give this `Op`. Defaults to `cholesky_solve`.

##### Returns:

Solution to `A x = rhs`, shape `[..., M, K]`.

`tf.matrix_solve(matrix, rhs, adjoint=None, name=None)`

Solves systems of linear equations.

`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions
form square matrices. `rhs` is a tensor of shape `[..., M, K]`. The `output` is
a tensor of shape `[..., M, K]`. If `adjoint` is `False` then each output matrix
satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.
If `adjoint` is `True` then each output matrix satisfies
`adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`.

##### Args:

*  `matrix`: A `Tensor`. Must be one of the following types: `float64`, `float32`. Shape is `[..., M, M]`.
*  `rhs`: A `Tensor`. Must have the same type as `matrix`. Shape is `[..., M, K]`.
*  `adjoint`: An optional `bool`. Defaults to `False`. Boolean indicating whether to solve with `matrix` or its (block-wise) adjoint.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`.

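A NumPy sketch of the adjoint semantics (illustrative only; the helper name is ours):

```python
import numpy as np

def matrix_solve(matrix, rhs, adjoint=False):
    """Solve matrix @ x = rhs, or adjoint(matrix) @ x = rhs when
    adjoint=True, for each innermost system."""
    if adjoint:
        matrix = np.conj(np.swapaxes(matrix, -1, -2))
    return np.linalg.solve(matrix, rhs)

a = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([[9.0], [8.0]])
x = matrix_solve(a, b)  # a @ x recovers b
```
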
`tf.matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)`

Solves systems of linear equations with upper or lower triangular matrices by
backsubstitution.

`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form
square matrices. If `lower` is `True` then the strictly upper triangular part
of each inner-most matrix is assumed to be zero and not accessed.
If `lower` is `False` then the strictly lower triangular part of each inner-most
matrix is assumed to be zero and not accessed.
`rhs` is a tensor of shape `[..., M, K]`.

The output is a tensor of shape `[..., M, K]`. If `adjoint` is `False` then the
innermost matrices in the output satisfy the matrix equations
`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`.
If `adjoint` is `True` then the innermost matrices in the output satisfy the
matrix equations
`adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.

##### Args:

*  `matrix`: A `Tensor`. Must be one of the following types: `float64`, `float32`. Shape is `[..., M, M]`.
*  `rhs`: A `Tensor`. Must have the same type as `matrix`. Shape is `[..., M, K]`.
*  `lower`: An optional `bool`. Defaults to `True`. Boolean indicating whether the innermost matrices in `matrix` are lower or upper triangular.
*  `adjoint`: An optional `bool`. Defaults to `False`. Boolean indicating whether to solve with `matrix` or its (block-wise) adjoint.
*  `name`: A name for the operation (optional).

##### Returns:

A `Tensor`. Has the same type as `matrix`. Shape is `[..., M, K]`.

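A NumPy sketch of the "only one triangle is read" semantics (illustrative; NumPy's general solver stands in for the op's backsubstitution, and the helper name is ours):

```python
import numpy as np

def triangular_solve(matrix, rhs, lower=True, adjoint=False):
    """Solve a triangular system, reading only the relevant triangle of
    `matrix`; the other strict triangle is ignored, as in the op."""
    tri = np.tril(matrix) if lower else np.triu(matrix)
    if adjoint:
        tri = np.conj(tri.T)
    # The real op backsubstitutes; a general solve gives the same result.
    return np.linalg.solve(tri, rhs)

a = np.array([[2.0, 9.0],   # the 9.0 is never read when lower=True
              [1.0, 4.0]])
b = np.array([[2.0], [5.0]])
x = triangular_solve(a, b, lower=True)
```
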
`tf.matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)`

Solves one or more linear least-squares problems.

`matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions
form `M`-by-`N` matrices. `rhs` is a tensor of shape `[..., M, K]` whose
inner-most 2 dimensions form `M`-by-`K` matrices. The computed output is a
`Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `N`-by-`K`
matrices that solve the equations
`matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares
sense.

Below we will use the following notation for each pair of matrix and
right-hand sides in the batch:

`matrix`=\(A \in \Re^{m \times n}\),
`rhs`=\(B \in \Re^{m \times k}\),
`output`=\(X \in \Re^{n \times k}\),
`l2_regularizer`=\(\lambda\).

If `fast` is `True`, then the solution is computed by solving the normal
equations using Cholesky decomposition. Specifically, if \(m \ge n\) then
\(X = (A^T A + \lambda I)^{-1} A^T B\), which solves the least-squares
problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} \|A Z - B\|_F^2 +
\lambda \|Z\|_F^2\). If \(m \lt n\) then `output` is computed as
\(X = A^T (A A^T + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is
the minimum-norm solution to the under-determined linear system, i.e.
\(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} \|Z\|_F^2\), subject to
\(A Z = B\). Notice that the fast path is only numerically stable when
\(A\) is numerically full rank and has a condition number
\(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\) or \(\lambda\) is
sufficiently large.

If `fast` is `False` an algorithm based on the numerically robust complete
orthogonal decomposition is used. This computes the minimum-norm
least-squares solution, even when \(A\) is rank deficient. This path is
typically 6-7 times slower than the fast path. If `fast` is `False` then
`l2_regularizer` is ignored.

##### Args:

*  `matrix`: `Tensor` of shape `[..., M, N]`.
*  `rhs`: `Tensor` of shape `[..., M, K]`.
*  `l2_regularizer`: 0-D `double` `Tensor`. Ignored if `fast=False`.
*  `fast`: bool. Defaults to `True`.
*  `name`: string, optional name of the operation.

##### Returns:

*  `output`: `Tensor` of shape `[..., N, K]` whose inner-most 2 dimensions form `N`-by-`K` matrices that solve the equations `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares sense.

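The fast path described above (normal equations plus Cholesky) can be sketched as follows, assuming \(m \ge n\); this is an illustration of the math, not the actual kernel, and the helper name is ours:

```python
import numpy as np

def solve_ls_fast(A, B, l2_regularizer=0.0):
    """Solve the regularized normal equations
    (A^T A + lambda I) X = A^T B via a Cholesky factorization."""
    n = A.shape[-1]
    gram = A.T @ A + l2_regularizer * np.eye(n)
    L = np.linalg.cholesky(gram)
    # Two triangular solves: L y = A^T B, then L^T X = y.
    y = np.linalg.solve(L, A.T @ B)
    return np.linalg.solve(L.T, y)

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 x 2, full rank
B = np.array([[1.0], [1.0], [2.0]])
X = solve_ls_fast(A, B)  # least-squares solution, shape (2, 1)
```

With \(\lambda = 0\) and a full-rank `A` this agrees with `np.linalg.lstsq`.
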
`tf.self_adjoint_eig(tensor, name=None)`

Computes the eigen decomposition of a batch of self-adjoint matrices.

Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices
in `tensor` such that
`tensor[...,:,:] * v[..., :,i] = e[..., i] * v[...,:,i]`, for i=0...N-1.

##### Args:

*  `tensor`: `Tensor` of shape `[..., N, N]`. Only the lower triangular part of each inner matrix is referenced.
*  `name`: string, optional name of the operation.

##### Returns:

*  `e`: Eigenvalues. Shape is `[..., N]`.
*  `v`: Eigenvectors. Shape is `[..., N, N]`. The columns of the inner most matrices contain eigenvectors of the corresponding matrices in `tensor`.

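NumPy's `np.linalg.eigh` follows the same contract (only one triangle referenced, eigenvalue/eigenvector pairs satisfying `a @ v[:, i] == e[i] * v[:, i]`), shown here as an analogue:

```python
import numpy as np

# np.linalg.eigh references only one triangle of each self-adjoint
# matrix and returns eigenvalues (ascending) with eigenvectors in the
# columns of v, such that a @ v[:, i] == e[i] * v[:, i].
a = np.array([[2.0, 1.0],
              [1.0, 2.0]])
e, v = np.linalg.eigh(a)
```
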
`tf.self_adjoint_eigvals(tensor, name=None)`

Computes the eigenvalues of one or more self-adjoint matrices.

##### Args:

*  `tensor`: `Tensor` of shape `[..., N, N]`.
*  `name`: string, optional name of the operation.

##### Returns:

*  `e`: Eigenvalues. Shape is `[..., N]`. The vector `e[..., :]` contains the `N` eigenvalues of `tensor[..., :, :]`.

`tf.svd(tensor, compute_uv=True, full_matrices=False, name=None)`

Computes the singular value decompositions of one or more matrices.

Computes the SVD of each inner matrix in `tensor` such that

```
tensor[..., :, :] = u[..., :, :] * diag(s[..., :, :]) * transpose(v[..., :, :])
```

```
# a is a tensor.
# s is a tensor of singular values.
# u is a tensor of left singular vectors.
# v is a tensor of right singular vectors.
s, u, v = svd(a)
s = svd(a, compute_uv=False)
```

##### Args:

*  `tensor`: `Tensor` of shape `[..., M, N]`. Let `P` be the minimum of `M` and `N`.
*  `compute_uv`: If `True` then left and right singular vectors will be computed and returned in `u` and `v`, respectively. Otherwise, only the singular values will be computed, which can be significantly faster.
*  `full_matrices`: If true, compute full-sized `u` and `v`. If false (the default), compute only the leading `P` singular vectors. Ignored if `compute_uv` is `False`.
*  `name`: string, optional name of the operation.

##### Returns:

*  `s`: Singular values. Shape is `[..., P]`.
*  `u`: Left singular vectors. If `full_matrices` is `False` (default) then shape is `[..., M, P]`; if `full_matrices` is `True` then shape is `[..., M, M]`. Not returned if `compute_uv` is `False`.
*  `v`: Right singular vectors. If `full_matrices` is `False` (default) then shape is `[..., N, P]`. If `full_matrices` is `True` then shape is `[..., N, N]`. Not returned if `compute_uv` is `False`.
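
For comparison, NumPy's `np.linalg.svd` computes the same factorization, with two convention differences worth noting: its `full_matrices` defaults to `True` (here the default is `False`), and it returns `v` already transposed:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 3))

# With full_matrices=False (matching this op's default), u is M x P,
# s has length P, and vt is P x N, where P = min(M, N).
u, s, vt = np.linalg.svd(a, full_matrices=False)
reconstructed = u @ np.diag(s) @ vt
```
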