Matrix Math Functions

TensorFlow provides several operations that you can use to add linear algebra functions on matrices to your graph.

tf.diag(diagonal, name=None)

Returns a diagonal tensor with the given diagonal values.

Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:

Assume diagonal has dimensions [D1,..., Dk], then the output is a tensor of rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:

output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik] and 0 everywhere else.

For example:

# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
                       [0, 2, 0, 0]
                       [0, 0, 3, 0]
                       [0, 0, 0, 4]]
Args:
  • diagonal: A Tensor. Must be one of the following types: float32, float64, int32, int64, complex64, complex128. Rank k tensor where k is at most 3.
  • name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as diagonal.


tf.diag_part(input, name=None)

Returns the diagonal part of the tensor.

This operation returns a tensor with the diagonal part of the input. The diagonal part is computed as follows:

Assume input has dimensions [D1,..., Dk, D1,..., Dk], then the output is a tensor of rank k with dimensions [D1,..., Dk] where:

diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik].

For example:

# 'input' is [[1, 0, 0, 0]
              [0, 2, 0, 0]
              [0, 0, 3, 0]
              [0, 0, 0, 4]]

tf.diag_part(input) ==> [1, 2, 3, 4]
Args:
  • input: A Tensor. Must be one of the following types: float32, float64, int32, int64, complex64, complex128. Rank k tensor where k is 2, 4, or 6.
  • name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as input. The extracted diagonal.


tf.trace(x, name=None)

Compute the trace of a tensor x.

trace(x) returns the sum along the main diagonal of each inner-most matrix in x. If x is of rank k with shape [I, J, K, ..., L, M, N], then output is a tensor of rank k-2 with dimensions [I, J, K, ..., L] where

output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :])

For example:

# 'x' is [[1, 2],
#         [3, 4]]
tf.trace(x) ==> 5

# 'x' is [[1,2,3],
#         [4,5,6],
#         [7,8,9]]
tf.trace(x) ==> 15

# 'x' is [[[1,2,3],
#          [4,5,6],
#          [7,8,9]],
#         [[-1,-2,-3],
#          [-4,-5,-6],
#          [-7,-8,-9]]]
tf.trace(x) ==> [15,-15]
Args:
  • x: tensor.
  • name: A name for the operation (optional).
Returns:

The trace of input tensor.


tf.transpose(a, perm=None, name='transpose')

Transposes a. Permutes the dimensions according to perm.

The returned tensor's dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.

For example:

# 'x' is [[1 2 3]
#         [4 5 6]]
tf.transpose(x) ==> [[1 4]
                     [2 5]
                     [3 6]]

# Equivalently
tf.transpose(x, perm=[1, 0]) ==> [[1 4]
                                  [2 5]
                                  [3 6]]

# 'perm' is more useful for n-dimensional tensors, for n > 2
# 'x' is   [[[1  2  3]
#            [4  5  6]]
#           [[7  8  9]
#            [10 11 12]]]
# Take the transpose of the matrices in dimension-0
tf.transpose(x, perm=[0, 2, 1]) ==> [[[1  4]
                                      [2  5]
                                      [3  6]]

                                     [[7 10]
                                      [8 11]
                                      [9 12]]]
Args:
  • a: A Tensor.
  • perm: A permutation of the dimensions of a.
  • name: A name for the operation (optional).
Returns:

A transposed Tensor.


tf.eye(num_rows, num_columns=None, batch_shape=None, dtype=tf.float32, name=None)

Construct an identity matrix, or a batch of matrices.

# Construct one identity matrix.
tf.eye(2)
==> [[1., 0.],
     [0., 1.]]

# Construct a batch of 3 identity matrices, each 2 x 2.
# batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2.
batch_identity = tf.eye(2, batch_shape=[3])

# Construct one 2 x 3 "identity" matrix
tf.eye(2, num_columns=3)
==> [[ 1.,  0.,  0.],
     [ 0.,  1.,  0.]]
Args:
  • num_rows: Non-negative int32 scalar Tensor giving the number of rows in each batch matrix.
  • num_columns: Optional non-negative int32 scalar Tensor giving the number of columns in each batch matrix. Defaults to num_rows.
  • batch_shape: int32 Tensor. If provided, returned Tensor will have leading batch dimensions of this shape.
  • dtype: The type of an element in the resulting Tensor
  • name: A name for this Op. Defaults to "eye".
Returns:

A Tensor of shape batch_shape + [num_rows, num_columns]


tf.matrix_diag(diagonal, name=None)

Returns a batched diagonal tensor with the given batched diagonal values.

Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:

Assume diagonal has k dimensions [I, J, K, ..., N], then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., N, N] where:

output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n].

For example:

# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]

and diagonal.shape = (2, 4)

tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
                               [0, 2, 0, 0]
                               [0, 0, 3, 0]
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0]
                               [0, 6, 0, 0]
                               [0, 0, 7, 0]
                               [0, 0, 0, 8]]]

which has shape (2, 4, 4)
Args:
  • diagonal: A Tensor. Rank k, where k >= 1.
  • name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as diagonal. Rank k+1, with output.shape = diagonal.shape + [diagonal.shape[-1]].


tf.matrix_diag_part(input, name=None)

Returns the batched diagonal part of a batched tensor.

This operation returns a tensor with the diagonal part of the batched input. The diagonal part is computed as follows:

Assume input has k dimensions [I, J, K, ..., M, N], then the output is a tensor of rank k - 1 with dimensions [I, J, K, ..., min(M, N)] where:

diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n].

The input must be at least a matrix.

For example:

# 'input' is [[[1, 0, 0, 0]
               [0, 2, 0, 0]
               [0, 0, 3, 0]
               [0, 0, 0, 4]],
              [[5, 0, 0, 0]
               [0, 6, 0, 0]
               [0, 0, 7, 0]
               [0, 0, 0, 8]]]

and input.shape = (2, 4, 4)

tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]

which has shape (2, 4)
Args:
  • input: A Tensor. Rank k tensor where k >= 2.
  • name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as input. The extracted diagonal(s) having shape diagonal.shape = input.shape[:-2] + [min(input.shape[-2:])].


tf.matrix_band_part(input, num_lower, num_upper, name=None)

Copy a tensor setting everything outside a central band in each innermost matrix to zero.

The band part is computed as follows: Assume input has k dimensions [I, J, K, ..., M, N], then the output is a tensor with the same shape where

band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n].

The indicator function

in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper).

For example:

# if 'input' is [[ 0,  1,  2, 3]
                 [-1,  0,  1, 2]
                 [-2, -1,  0, 1]
                 [-3, -2, -1, 0]],

tf.matrix_band_part(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                       [-1,  0,  1, 2]
                                       [ 0, -1,  0, 1]
                                       [ 0,  0, -1, 0]],

tf.matrix_band_part(input, 2, 1) ==> [[ 0,  1,  0, 0]
                                      [-1,  0,  1, 0]
                                      [-2, -1,  0, 1]
                                      [ 0, -2, -1, 0]]

Useful special cases:

 tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
 tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
 tf.matrix_band_part(input, 0, 0) ==> Diagonal.
Args:
  • input: A Tensor. Rank k tensor.
  • num_lower: A Tensor of type int64. 0-D tensor. Number of subdiagonals to keep. If negative, keep entire lower triangle.
  • num_upper: A Tensor of type int64. 0-D tensor. Number of superdiagonals to keep. If negative, keep entire upper triangle.
  • name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as input. Rank k tensor of the same shape as input. The extracted banded tensor.


tf.matrix_set_diag(input, diagonal, name=None)

Returns a batched matrix tensor with new batched diagonal values.

Given input and diagonal, this operation returns a tensor with the same shape and values as input, except for the main diagonal of the innermost matrices. These will be overwritten by the values in diagonal.

The output is computed as follows:

Assume input has k+1 dimensions [I, J, K, ..., M, N] and diagonal has k dimensions [I, J, K, ..., min(M, N)]. Then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., M, N] where:

  • output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n] for m == n.
  • output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n] for m != n.
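
For example, a minimal sketch (illustrative values, following directly from the definition above):

# 'input' is [[0, 1, 2]
#             [3, 4, 5]
#             [6, 7, 8]]
# 'diagonal' is [9, 9, 9]
tf.matrix_set_diag(input, diagonal) ==> [[9, 1, 2]
                                         [3, 9, 5]
                                         [6, 7, 9]]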
Args:
  • input: A Tensor. Rank k+1, where k >= 1.
  • diagonal: A Tensor. Must have the same type as input. Rank k, where k >= 1.
  • name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as input. Rank k+1, with output.shape = input.shape.


tf.matrix_transpose(a, name='matrix_transpose')

Transposes last two dimensions of tensor a.

For example:

# Matrix with no batch dimension.
# 'x' is [[1 2 3]
#         [4 5 6]]
tf.matrix_transpose(x) ==> [[1 4]
                            [2 5]
                            [3 6]]

# Matrix with two batch dimensions.
# x.shape is [1, 2, 3, 4]
# tf.matrix_transpose(x) is shape [1, 2, 4, 3]
Args:
  • a: A Tensor with rank >= 2.
  • name: A name for the operation (optional).
Returns:

A transposed batch matrix Tensor.

Raises:
  • ValueError: If a is determined statically to have rank < 2.

tf.matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None)

Multiplies matrix a by matrix b, producing a * b.

The inputs must be matrices (or tensors of rank > 2, representing batches of matrices), with matching inner dimensions, possibly after transposition.

Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting the corresponding flag to True. These are False by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32.

For example:

# 2-D tensor `a`
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1. 2. 3.]
                                                      [4. 5. 6.]]
# 2-D tensor `b`
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7. 8.]
                                                         [9. 10.]
                                                         [11. 12.]]
c = tf.matmul(a, b) => [[58 64]
                        [139 154]]

# 3-D tensor `a`
a = tf.constant(np.arange(1,13), shape=[2, 2, 3]) => [[[ 1.  2.  3.]
                                                       [ 4.  5.  6.]],
                                                      [[ 7.  8.  9.]
                                                       [10. 11. 12.]]]

# 3-D tensor `b`
b = tf.constant(np.arange(13,25), shape=[2, 3, 2]) => [[[13. 14.]
                                                        [15. 16.]
                                                        [17. 18.]],
                                                       [[19. 20.]
                                                        [21. 22.]
                                                        [23. 24.]]]
c = tf.matmul(a, b) => [[[ 94 100]
                         [229 244]],
                        [[508 532]
                         [697 730]]]
Args:
  • a: Tensor of type float16, float32, float64, int32, complex64, complex128 and rank > 1.
  • b: Tensor with same type and rank as a.
  • transpose_a: If True, a is transposed before multiplication.
  • transpose_b: If True, b is transposed before multiplication.
  • adjoint_a: If True, a is conjugated and transposed before multiplication.
  • adjoint_b: If True, b is conjugated and transposed before multiplication.
  • a_is_sparse: If True, a is treated as a sparse matrix.
  • b_is_sparse: If True, b is treated as a sparse matrix.
  • name: Name for the operation (optional).
Returns:

A Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False:

output[..., :, :] = a[..., :, :] * b[..., :, :]

Raises:
  • ValueError: If transpose_a and adjoint_a, or transpose_b and adjoint_b are both set to True.

tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None)

Multiplies slices of two tensors in batches.

Multiplies all slices of Tensor x and y (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the adj_x or adj_y flag to True, which are by default False.

The input tensors x and y are 3-D or higher with shape [..., r_x, c_x] and [..., r_y, c_y].

The output tensor is 3-D or higher with shape [..., r_o, c_o], where:

r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y

It is computed as:

output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])
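
For example, a shape-level sketch (the tensors x and y below are assumed to have been created elsewhere):

# 'x' has shape [10, 3, 4] and 'y' has shape [10, 4, 5].
# Each slice x[i, :, :] is multiplied by the corresponding slice y[i, :, :].
out = tf.batch_matmul(x, y)  # shape [10, 3, 5]

# With adj_x=True the slices of 'x' are adjointed first, so 'x' would
# instead have shape [10, 4, 3] to produce the same [10, 3, 5] output.
out = tf.batch_matmul(x, y, adj_x=True)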
Args:
  • x: A Tensor. Must be one of the following types: half, float32, float64, int32, complex64, complex128. 3-D or higher with shape [..., r_x, c_x].
  • y: A Tensor. Must have the same type as x. 3-D or higher with shape [..., r_y, c_y].
  • adj_x: An optional bool. Defaults to False. If True, adjoint the slices of x.
  • adj_y: An optional bool. Defaults to False. If True, adjoint the slices of y.
  • name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as x. 3-D or higher with shape [..., r_o, c_o]


tf.matrix_determinant(input, name=None)

Computes the determinant of one or more square matrices.

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor containing the determinants for all input submatrices [..., :, :].
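
For example, a minimal sketch (illustrative values):

# 'input' is [[1., 2.],
#             [3., 4.]]
tf.matrix_determinant(input) ==> -2.

# 'input' is a batch of two 2 x 2 matrices, shape (2, 2, 2):
# [[[1., 0.], [0., 1.]],
#  [[2., 0.], [0., 3.]]]
tf.matrix_determinant(input) ==> [1., 6.]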

Args:
  • input: A Tensor. Must be one of the following types: float32, float64. Shape is [..., M, M].
  • name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as input. Shape is [...].


tf.matrix_inverse(input, adjoint=None, name=None)

Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices [..., :, :].

The op uses LU decomposition with partial pivoting to compute the inverses.

If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.
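
For example, a minimal sketch (illustrative values; floating point output may differ slightly):

# 'input' is [[4., 0.],
#             [0., 2.]]
tf.matrix_inverse(input) ==> [[0.25, 0.],
                              [0., 0.5]]
# tf.matmul(input, tf.matrix_inverse(input)) is approximately the identity.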

Args:
  • input: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M].
  • adjoint: An optional bool. Defaults to False.
  • name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as input. Shape is [..., M, M].

@compatibility(numpy) Equivalent to np.linalg.inv @end_compatibility


tf.cholesky(input, name=None)

Computes the Cholesky decomposition of one or more square matrices.

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices, with the same constraints as the single matrix Cholesky decomposition above. The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices [..., :, :].
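
For example, a minimal sketch (illustrative values; the input must be symmetric positive definite):

# 'input' is [[ 4.,  2.],
#             [ 2., 10.]]
# The result L is lower triangular and satisfies input = L * transpose(L).
tf.cholesky(input) ==> [[2., 0.],
                        [1., 3.]]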

Args:
  • input: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M].
  • name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as input. Shape is [..., M, M].


tf.cholesky_solve(chol, rhs, name=None)

Solves systems of linear eqns A X = RHS, given Cholesky factorizations.

# Solve 10 separate 2x2 linear systems:
A = ... # shape 10 x 2 x 2
RHS = ... # shape 10 x 2 x 1
chol = tf.cholesky(A)  # shape 10 x 2 x 2
X = tf.cholesky_solve(chol, RHS)  # shape 10 x 2 x 1
# tf.matmul(A, X) ~ RHS
X[3, :, 0]  # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]

# Solve five linear systems (K = 5) for every member of the length 10 batch.
A = ... # shape 10 x 2 x 2
RHS = ... # shape 10 x 2 x 5
...
X[3, :, 2]  # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]
Args:
  • chol: A Tensor. Must be float32 or float64, shape is [..., M, M]. Cholesky factorization of A, e.g. chol = tf.cholesky(A). For that reason, only the lower triangular parts (including the diagonal) of the last two dimensions of chol are used. The strictly upper part is assumed to be zero and not accessed.
  • rhs: A Tensor, same type as chol, shape is [..., M, K].
  • name: A name to give this Op. Defaults to cholesky_solve.
Returns:

Solution to A x = rhs, shape [..., M, K].


tf.matrix_solve(matrix, rhs, adjoint=None, name=None)

Solves systems of linear equations.

matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. rhs is a tensor of shape [..., M, K]. The output is a tensor of shape [..., M, K]. If adjoint is False then each output matrix satisfies matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then each output matrix satisfies adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :].
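
For example, a minimal sketch (illustrative values):

# 'matrix' is [[2., 0.],
#              [0., 4.]]
# 'rhs' is    [[2.],
#              [8.]]
tf.matrix_solve(matrix, rhs) ==> [[1.],
                                  [2.]]
# tf.matmul(matrix, output) reproduces rhs.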

Args:
  • matrix: A Tensor. Must be one of the following types: float64, float32, complex64, complex128. Shape is [..., M, M].
  • rhs: A Tensor. Must have the same type as matrix. Shape is [..., M, K].
  • adjoint: An optional bool. Defaults to False. Boolean indicating whether to solve with matrix or its (block-wise) adjoint.
  • name: A name for the operation (optional).
Returns:

A Tensor. Has the same type as matrix. Shape is [..., M, K].


tf.matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)

Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.

matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. If lower is True then the strictly upper triangular part of each inner-most matrix is assumed to be zero and not accessed. If lower is False then the strictly lower triangular part of each inner-most matrix is assumed to be zero and not accessed. rhs is a tensor of shape [..., M, K].

The output is a tensor of shape [..., M, K]. If adjoint is False then the innermost matrices in output satisfy the matrix equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then the innermost matrices in output satisfy the matrix equations adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j].
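
For example, a minimal sketch (illustrative values; with lower=True only the lower triangular part of matrix is read):

# 'matrix' is [[2., 0.],
#              [3., 4.]]
# 'rhs' is    [[ 2.],
#              [11.]]
tf.matrix_triangular_solve(matrix, rhs, lower=True) ==> [[1.],
                                                         [2.]]
# Forward substitution: 2*1 = 2 and 3*1 + 4*2 = 11.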

Args:
  • matrix: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M].
  • rhs: A Tensor. Must have the same type as matrix. Shape is [..., M, K].
  • lower: An optional bool. Defaults to True. Boolean indicating whether the innermost matrices in matrix are lower or upper triangular.
  • adjoint: An optional bool. Defaults to False. Boolean indicating whether to solve with matrix or its (block-wise) adjoint.

    @compatibility(scipy) Equivalent to scipy.linalg.solve_triangular @end_compatibility

  • name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as matrix. Shape is [..., M, K].


tf.matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)

Solves one or more linear least-squares problems.

matrix is a tensor of shape [..., M, N] whose inner-most 2 dimensions form M-by-N matrices. rhs is a tensor of shape [..., M, K] whose inner-most 2 dimensions form M-by-K matrices. The computed output is a Tensor of shape [..., N, K] whose inner-most 2 dimensions form N-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.

Below we will use the following notation for each pair of matrix and right-hand sides in the batch:

matrix=\(A \in \Re^{m \times n}\), rhs=\(B \in \Re^{m \times k}\), output=\(X \in \Re^{n \times k}\), l2_regularizer=\(\lambda\).

If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^T A + \lambda I)^{-1} A^T B\), which solves the least-squares problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||A Z - B||_F^2 + \lambda ||Z||_F^2\). If \(m \lt n\) then output is computed as \(X = A^T (A A^T + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2\), subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\) or \(\lambda\) is sufficiently large.

If fast is False an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \(A\) is rank deficient. This path is typically 6-7 times slower than the fast path. If fast is False then l2_regularizer is ignored.
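
For example, a shape-level sketch (the tensors below are assumed to have been created elsewhere):

# Over-determined problem: 'matrix' has shape [6, 2], 'rhs' has shape [6, 1].
x = tf.matrix_solve_ls(matrix, rhs, l2_regularizer=0.0)  # shape [2, 1]

# A batch of such problems: 'matrix' shape [10, 6, 2], 'rhs' shape [10, 6, 1].
x = tf.matrix_solve_ls(matrix, rhs)  # shape [10, 2, 1]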

Args:
  • matrix: Tensor of shape [..., M, N].
  • rhs: Tensor of shape [..., M, K].
  • l2_regularizer: 0-D double Tensor. Ignored if fast=False.
  • fast: bool. Defaults to True.
  • name: string, optional name of the operation.
Returns:
  • output: Tensor of shape [..., N, K] whose inner-most 2 dimensions form N-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.

tf.self_adjoint_eig(tensor, name=None)

Computes the eigen decomposition of a batch of self-adjoint matrices.

Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices in tensor such that tensor[..., :, :] * v[..., :, i] = e[..., i] * v[..., :, i], for i=0...N-1.
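
For example, a minimal sketch (illustrative values; the sign of each eigenvector column is arbitrary):

# 'tensor' is [[2., 0.],
#              [0., 3.]]
e, v = tf.self_adjoint_eig(tensor)
# e ==> [2., 3.]
# v ==> [[1., 0.],
#        [0., 1.]]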

Args:
  • tensor: Tensor of shape [..., N, N]. Only the lower triangular part of each inner matrix is referenced.
  • name: string, optional name of the operation.
Returns:
  • e: Eigenvalues. Shape is [..., N].
  • v: Eigenvectors. Shape is [..., N, N]. The columns of the innermost matrices contain eigenvectors of the corresponding matrices in tensor.

tf.self_adjoint_eigvals(tensor, name=None)

Computes the eigenvalues of one or more self-adjoint matrices.
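
For example, a minimal sketch (illustrative values; eigenvalues are shown in non-decreasing order):

# 'tensor' is [[2., 1.],
#              [1., 2.]]
tf.self_adjoint_eigvals(tensor) ==> [1., 3.]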

Args:
  • tensor: Tensor of shape [..., N, N].
  • name: string, optional name of the operation.
Returns:
  • e: Eigenvalues. Shape is [..., N]. The vector e[..., :] contains the N eigenvalues of tensor[..., :, :].

tf.svd(tensor, full_matrices=False, compute_uv=True, name=None)

Computes the singular value decompositions of one or more matrices.

Computes the SVD of each inner matrix in tensor such that tensor[..., :, :] = u[..., :, :] * diag(s[..., :]) * transpose(v[..., :, :])

# a is a tensor.
# s is a tensor of singular values.
# u is a tensor of left singular vectors.
# v is a tensor of right singular vectors.
s, u, v = svd(a)
s = svd(a, compute_uv=False)
Args:
  • tensor: Tensor of shape [..., M, N]. Let P be the minimum of M and N.
  • full_matrices: If true, compute full-sized u and v. If false (the default), compute only the leading P singular vectors. Ignored if compute_uv is False.
  • compute_uv: If True then left and right singular vectors will be computed and returned in u and v, respectively. Otherwise, only the singular values will be computed, which can be significantly faster.
  • name: string, optional name of the operation.
Returns:
  • s: Singular values. Shape is [..., P].
  • u: Left singular vectors. If full_matrices is False (default) then shape is [..., M, P]; if full_matrices is True then shape is [..., M, M]. Not returned if compute_uv is False.
  • v: Right singular vectors. If full_matrices is False (default) then shape is [..., N, P]. If full_matrices is True then shape is [..., N, N]. Not returned if compute_uv is False.