TensorFlow provides several operations that you can use to add linear algebra functions on matrices to your graph.
tf.diag(diagonal, name=None)

Returns a diagonal tensor with given diagonal values.

Given a `diagonal`, this operation returns a tensor with the `diagonal` and everything else padded with zeros. The diagonal is computed as follows:

Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:

output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]

and 0 everywhere else.

For example:

# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
                       [0, 2, 0, 0]
                       [0, 0, 3, 0]
                       [0, 0, 0, 4]]

Args:
  diagonal: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. Rank k tensor where k is at most 3.
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `diagonal`.
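The rank-1 case above can be sketched with NumPy's `np.diag` (an illustration of the semantics, not the TF implementation; for higher-rank inputs `tf.diag` generalizes to a rank-2k output that `np.diag` does not produce):

```python
import numpy as np

# Sketch of tf.diag's semantics for a rank-1 input: embed the values
# along the main diagonal, with zeros everywhere else.
diagonal = np.array([1, 2, 3, 4])
out = np.diag(diagonal)
print(out)
# [[1 0 0 0]
#  [0 2 0 0]
#  [0 0 3 0]
#  [0 0 0 4]]
```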
tf.diag_part(input, name=None)

Returns the diagonal part of the tensor.

This operation returns a tensor with the diagonal part of the `input`. The diagonal part is computed as follows:

Assume `input` has dimensions [D1,..., Dk, D1,..., Dk], then the output is a tensor of rank k with dimensions [D1,..., Dk] where:

diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]

For example:

# 'input' is [[1, 0, 0, 0]
#             [0, 2, 0, 0]
#             [0, 0, 3, 0]
#             [0, 0, 0, 4]]
tf.diag_part(input) ==> [1, 2, 3, 4]

Args:
  input: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`. Rank k tensor where k is 2, 4, or 6.
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `input`. The extracted diagonal.
tf.trace(x, name=None)

Computes the trace of a tensor `x`.

trace(x) returns the sum along the main diagonal of each innermost matrix in `x`. If `x` is of rank k with shape [I, J, K, ..., L, M, N], then output is a tensor of rank k-2 with dimensions [I, J, K, ..., L] where

output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :])

For example:

# 'x' is [[1, 2],
#         [3, 4]]
tf.trace(x) ==> 5

# 'x' is [[1, 2, 3],
#         [4, 5, 6],
#         [7, 8, 9]]
tf.trace(x) ==> 15

# 'x' is [[[1, 2, 3],
#          [4, 5, 6],
#          [7, 8, 9]],
#         [[1, 2, 3],
#          [4, 5, 6],
#          [7, 8, 9]]]
tf.trace(x) ==> [15, 15]

Args:
  x: A `Tensor`.
  name: A name for the operation (optional).

Returns:
  The trace of the input tensor.
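The batched behaviour can be reproduced with NumPy's `np.trace` by pointing `axis1`/`axis2` at the two innermost dimensions (an illustration, not the TF implementation):

```python
import numpy as np

# Sum the main diagonal of each innermost matrix, as tf.trace does.
x = np.array([[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
              [[1, 2, 3], [4, 5, 6], [7, 8, 9]]])
traces = np.trace(x, axis1=-2, axis2=-1)
print(traces)  # [15 15]
```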
tf.transpose(a, perm=None, name='transpose')

Transposes `a`. Permutes the dimensions according to `perm`.

The returned tensor's dimension i will correspond to the input dimension `perm[i]`. If `perm` is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.

For example:

# 'x' is [[1 2 3]
#         [4 5 6]]
tf.transpose(x) ==> [[1 4]
                     [2 5]
                     [3 6]]

# Equivalently
tf.transpose(x, perm=[1, 0]) ==> [[1 4]
                                  [2 5]
                                  [3 6]]

# 'perm' is more useful for n-dimensional tensors, for n > 2
# 'x' is [[[1 2 3]
#          [4 5 6]]
#         [[7 8 9]
#          [10 11 12]]]

# Take the transpose of the matrices in dimension 0
tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4]
                                      [2 5]
                                      [3 6]]
                                     [[7 10]
                                      [8 11]
                                      [9 12]]]

Args:
  a: A `Tensor`.
  perm: A permutation of the dimensions of `a`.
  name: A name for the operation (optional).

Returns:
  A transposed `Tensor`.
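NumPy's `np.transpose` accepts the same permutation argument, so the batched example above can be checked directly:

```python
import numpy as np

# Transpose the matrices within each element of dimension 0,
# exactly as tf.transpose(x, perm=[0, 2, 1]) would.
x = np.array([[[1, 2, 3], [4, 5, 6]],
              [[7, 8, 9], [10, 11, 12]]])
y = np.transpose(x, axes=[0, 2, 1])
print(y.shape)  # (2, 3, 2)
print(y[0])
# [[1 4]
#  [2 5]
#  [3 6]]
```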
tf.eye(num_rows, num_columns=None, batch_shape=None, dtype=tf.float32, name=None)

Construct an identity matrix, or a batch of matrices.

# Construct one identity matrix.
tf.eye(2) ==> [[1., 0.],
               [0., 1.]]

# Construct a batch of 3 identity matrices, each 2 x 2.
# batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2.
batch_identity = tf.eye(2, batch_shape=[3])

# Construct one 2 x 3 "identity" matrix.
tf.eye(2, num_columns=3) ==> [[ 1., 0., 0.],
                              [ 0., 1., 0.]]

Args:
  num_rows: Non-negative `int32` scalar `Tensor` giving the number of rows in each batch matrix.
  num_columns: Optional non-negative `int32` scalar `Tensor` giving the number of columns in each batch matrix. Defaults to `num_rows`.
  batch_shape: `int32` `Tensor`. If provided, the returned `Tensor` will have leading batch dimensions of this shape.
  dtype: The type of an element in the resulting `Tensor`.
  name: A name for this `Op`. Defaults to "eye".

Returns:
  A `Tensor` of shape batch_shape + [num_rows, num_columns].
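A NumPy sketch of the two non-trivial cases above (a batch of identities is one identity broadcast over the leading batch dimensions; `num_columns` yields a rectangular "identity"):

```python
import numpy as np

# Batch of 3 identity matrices, each 2 x 2 (mirrors batch_shape=[3]).
batch_identity = np.broadcast_to(np.eye(2), (3, 2, 2))
print(batch_identity.shape)  # (3, 2, 2)

# Rectangular "identity": ones on the main diagonal of a 2 x 3 matrix.
print(np.eye(2, 3))
# [[1. 0. 0.]
#  [0. 1. 0.]]
```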
tf.matrix_diag(diagonal, name=None)

Returns a batched diagonal tensor with given batched diagonal values.

Given a `diagonal`, this operation returns a tensor with the `diagonal` and everything else padded with zeros. The diagonal is computed as follows:

Assume `diagonal` has k dimensions [I, J, K, ..., N], then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., N, N] where:

output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]

For example:

# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]] and diagonal.shape = (2, 4)
tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
                               [0, 2, 0, 0]
                               [0, 0, 3, 0]
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0]
                               [0, 6, 0, 0]
                               [0, 0, 7, 0]
                               [0, 0, 0, 8]]]
# which has shape (2, 4, 4)

Args:
  diagonal: A `Tensor`. Rank k, where k >= 1.
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `diagonal`. Rank k+1, with output.shape = diagonal.shape + [diagonal.shape[-1]].
tf.matrix_diag_part(input, name=None)

Returns the batched diagonal part of a batched tensor.

This operation returns a tensor with the diagonal part of the batched `input`. The diagonal part is computed as follows:

Assume `input` has k dimensions [I, J, K, ..., M, N], then the output is a tensor of rank k-1 with dimensions [I, J, K, ..., min(M, N)] where:

diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n]

The input must be at least a matrix.

For example:

# 'input' is [[[1, 0, 0, 0]
#              [0, 2, 0, 0]
#              [0, 0, 3, 0]
#              [0, 0, 0, 4]],
#             [[5, 0, 0, 0]
#              [0, 6, 0, 0]
#              [0, 0, 7, 0]
#              [0, 0, 0, 8]]]
# and input.shape = (2, 4, 4)
tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]
# which has shape (2, 4)

Args:
  input: A `Tensor`. Rank k tensor where k >= 2.
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `input`. The extracted diagonal(s), having shape diagonal.shape = input.shape[:-2] + [min(input.shape[-2:])].
tf.matrix_band_part(input, num_lower, num_upper, name=None)

Copy a tensor, setting everything outside a central band in each innermost matrix to zero.

The band part is computed as follows:

Assume `input` has k dimensions [I, J, K, ..., M, N], then the output is a tensor with the same shape where

band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]

The indicator function is

in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower) &&
                (num_upper < 0 || (n-m) <= num_upper)

For example:

# if 'input' is [[ 0,  1,  2, 3]
#                [-1,  0,  1, 2]
#                [-2, -1,  0, 1]
#                [-3, -2, -1, 0]],

tf.matrix_band_part(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                       [-1,  0,  1, 2]
                                       [ 0, -1,  0, 1]
                                       [ 0,  0, -1, 0]],
tf.matrix_band_part(input, 2, 1) ==> [[ 0,  1,  0, 0]
                                      [-1,  0,  1, 0]
                                      [-2, -1,  0, 1]
                                      [ 0, -2, -1, 0]]

Useful special cases:

tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
tf.matrix_band_part(input, 0, 0) ==> Diagonal.

Args:
  input: A `Tensor`. Rank k tensor.
  num_lower: A `Tensor` of type `int64`. 0-D tensor. Number of subdiagonals to keep. If negative, keep the entire lower triangle.
  num_upper: A `Tensor` of type `int64`. 0-D tensor. Number of superdiagonals to keep. If negative, keep the entire upper triangle.
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `input`. Rank k tensor of the same shape as `input`. The extracted banded tensor.
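The indicator function above translates directly into NumPy; this hypothetical helper mirrors the documented in_band test for a single matrix:

```python
import numpy as np

def matrix_band_part(x, num_lower, num_upper):
    """Hypothetical NumPy version of the documented indicator:
    keep (m, n) iff (num_lower < 0 or m - n <= num_lower)
    and (num_upper < 0 or n - m <= num_upper)."""
    m, n = np.indices(x.shape[-2:])
    in_band = ((num_lower < 0) | (m - n <= num_lower)) & \
              ((num_upper < 0) | (n - m <= num_upper))
    return np.where(in_band, x, 0)

x = np.arange(16).reshape(4, 4)
banded = matrix_band_part(x, 1, 0)  # one subdiagonal, no superdiagonals
```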
tf.matrix_set_diag(input, diagonal, name=None)

Returns a batched matrix tensor with new batched diagonal values.

Given `input` and `diagonal`, this operation returns a tensor with the same shape and values as `input`, except for the main diagonal of the innermost matrices. These will be overwritten by the values in `diagonal`.

The output is computed as follows:

Assume `input` has k+1 dimensions [I, J, K, ..., M, N] and `diagonal` has k dimensions [I, J, K, ..., min(M, N)]. Then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., M, N] where:

output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n] for m == n.
output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n] for m != n.

Args:
  input: A `Tensor`. Rank k+1, where k >= 1.
  diagonal: A `Tensor`. Must have the same type as `input`. Rank k, where k >= 1.
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `input`. Rank k+1, with output.shape = input.shape.
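A hypothetical NumPy sketch of these two cases: copy the input, then overwrite the main diagonal of each innermost matrix in one indexed assignment:

```python
import numpy as np

def matrix_set_diag(x, diagonal):
    # Copy the input, then overwrite the main diagonal of each
    # innermost matrix with the given diagonal values.
    out = x.copy()
    idx = np.arange(min(out.shape[-2:]))
    out[..., idx, idx] = diagonal
    return out

x = np.zeros((2, 3, 3))
d = np.array([[1, 2, 3], [4, 5, 6]])
r = matrix_set_diag(x, d)
print(r[1])
# [[4. 0. 0.]
#  [0. 5. 0.]
#  [0. 0. 6.]]
```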
tf.matrix_transpose(a, name='matrix_transpose')

Transposes the last two dimensions of tensor `a`.

For example:

# Matrix with no batch dimension.
# 'x' is [[1 2 3]
#         [4 5 6]]
tf.matrix_transpose(x) ==> [[1 4]
                            [2 5]
                            [3 6]]

# Matrix with two batch dimensions.
# x.shape is [1, 2, 3, 4]
# tf.matrix_transpose(x) is shape [1, 2, 4, 3]

Args:
  a: A `Tensor` with rank >= 2.
  name: A name for the operation (optional).

Returns:
  A transposed batch matrix `Tensor`.

Raises:
  ValueError: If `a` is determined statically to have rank < 2.
tf.matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None)

Multiplies matrix `a` by matrix `b`, producing `a` * `b`.

The inputs must be matrices (or tensors of rank > 2, representing batches of matrices), with matching inner dimensions, possibly after transposition.

Both matrices must be of the same type. The supported types are: `float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to `True`. These are `False` by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes `bfloat16` or `float32`.

For example:

# 2-D tensor `a`
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1. 2. 3.]
                                                      [4. 5. 6.]]
# 2-D tensor `b`
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7. 8.]
                                                         [9. 10.]
                                                         [11. 12.]]
c = tf.matmul(a, b) => [[58 64]
                        [139 154]]

# 3-D tensor `a`
a = tf.constant(np.arange(1, 13), shape=[2, 2, 3]) => [[[ 1.  2.  3.]
                                                        [ 4.  5.  6.]],
                                                       [[ 7.  8.  9.]
                                                        [10. 11. 12.]]]
# 3-D tensor `b`
b = tf.constant(np.arange(13, 25), shape=[2, 3, 2]) => [[[13. 14.]
                                                         [15. 16.]
                                                         [17. 18.]],
                                                        [[19. 20.]
                                                         [21. 22.]
                                                         [23. 24.]]]
c = tf.matmul(a, b) => [[[ 94 100]
                         [229 244]],
                        [[508 532]
                         [697 730]]]

Args:
  a: `Tensor` of type `float16`, `float32`, `float64`, `int32`, `complex64`, or `complex128`, and rank > 1.
  b: `Tensor` with same type and rank as `a`.
  transpose_a: If `True`, `a` is transposed before multiplication.
  transpose_b: If `True`, `b` is transposed before multiplication.
  adjoint_a: If `True`, `a` is conjugated and transposed before multiplication.
  adjoint_b: If `True`, `b` is conjugated and transposed before multiplication.
  a_is_sparse: If `True`, `a` is treated as a sparse matrix.
  b_is_sparse: If `True`, `b` is treated as a sparse matrix.
  name: Name for the operation (optional).

Returns:
  A `Tensor` of the same type as `a` and `b`, where each innermost matrix is the product of the corresponding matrices in `a` and `b`, e.g. if all transpose or adjoint attributes are `False`:

  output[..., :, :] = a[..., :, :] * b[..., :, :]

Raises:
  ValueError: If `transpose_a` and `adjoint_a`, or `transpose_b` and `adjoint_b`, are both set to `True`.
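The 3-D example above can be checked with NumPy's `np.matmul`, which has the same batched behaviour (last two axes are the matrix dimensions, leading axes are the batch):

```python
import numpy as np

# Batched matrix product: c[i] = a[i] @ b[i] for each batch element i.
a = np.arange(1, 13).reshape(2, 2, 3)
b = np.arange(13, 25).reshape(2, 3, 2)
c = np.matmul(a, b)
print(c[0])
# [[ 94 100]
#  [229 244]]
```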
tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None)

Multiplies slices of two tensors in batches.

Multiplies all slices of `Tensor` `x` and `y` (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the `adj_x` or `adj_y` flag to `True`, which are by default `False`.

The input tensors `x` and `y` are 3-D or higher with shape [..., r_x, c_x] and [..., r_y, c_y].

The output tensor is 3-D or higher with shape [..., r_o, c_o], where:

r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y

It is computed as:

output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])

Args:
  x: A `Tensor`. Must be one of the following types: `half`, `float32`, `float64`, `int32`, `complex64`, `complex128`. 3-D or higher with shape [..., r_x, c_x].
  y: A `Tensor`. Must have the same type as `x`. 3-D or higher with shape [..., r_y, c_y].
  adj_x: An optional `bool`. Defaults to `False`. If `True`, adjoint the slices of `x`.
  adj_y: An optional `bool`. Defaults to `False`. If `True`, adjoint the slices of `y`.
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `x`. 3-D or higher with shape [..., r_o, c_o].
tf.matrix_determinant(input, name=None)

Computes the determinant of one or more square matrices.

The input is a tensor of shape [..., M, M] whose innermost 2 dimensions form square matrices. The output is a tensor containing the determinants for all input submatrices [..., :, :].

Args:
  input: A `Tensor`. Must be one of the following types: `float32`, `float64`. Shape is [..., M, M].
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `input`. Shape is [...].
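The same batched semantics can be illustrated with NumPy's `np.linalg.det`, which also returns one determinant per innermost square matrix:

```python
import numpy as np

# Batch of two 2 x 2 matrices; one determinant per matrix.
x = np.array([[[1.0, 2.0], [3.0, 4.0]],
              [[2.0, 0.0], [0.0, 2.0]]])
dets = np.linalg.det(x)
print(dets)  # approximately [-2.  4.]
```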
tf.matrix_inverse(input, adjoint=None, name=None)

Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).

The input is a tensor of shape [..., M, M] whose innermost 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices [..., :, :].

The op uses LU decomposition with partial pivoting to compute the inverses. If a matrix is not invertible, there is no guarantee what the op does. It may detect the condition and raise an exception, or it may simply return a garbage result.

Args:
  input: A `Tensor`. Must be one of the following types: `float64`, `float32`. Shape is [..., M, M].
  adjoint: An optional `bool`. Defaults to `False`.
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `input`. Shape is [..., M, M].

@compatibility(numpy) Equivalent to np.linalg.inv @end_compatibility
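Per the compatibility note, `np.linalg.inv` has the same semantics; a quick sanity check:

```python
import numpy as np

# A has determinant 4*6 - 7*2 = 10, so it is invertible.
A = np.array([[4.0, 7.0], [2.0, 6.0]])
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```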
tf.cholesky(input, name=None)

Computes the Cholesky decomposition of one or more square matrices.

The input is a tensor of shape [..., M, M] whose innermost 2 dimensions form square matrices, with the same constraints as the single matrix Cholesky decomposition above. The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices [..., :, :].

Args:
  input: A `Tensor`. Must be one of the following types: `float64`, `float32`. Shape is [..., M, M].
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `input`. Shape is [..., M, M].
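NumPy's `np.linalg.cholesky` computes the same lower-triangular factor L with A = L L^T for a symmetric positive-definite A:

```python
import numpy as np

# Symmetric positive-definite input; L is lower triangular with A = L @ L.T.
A = np.array([[4.0, 2.0], [2.0, 3.0]])
L = np.linalg.cholesky(A)
print(np.allclose(L @ L.T, A))  # True
```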
tf.cholesky_solve(chol, rhs, name=None)

Solves systems of linear equations A X = RHS, given Cholesky factorizations.

# Solve 10 separate 2x2 linear systems:
A = ...    # shape 10 x 2 x 2
RHS = ...  # shape 10 x 2 x 1
chol = tf.cholesky(A)             # shape 10 x 2 x 2
X = tf.cholesky_solve(chol, RHS)  # shape 10 x 2 x 1
# tf.matmul(A, X) ~ RHS
X[3, :, 0]  # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]

# Solve five linear systems (K = 5) for every member of the length 10 batch.
A = ...    # shape 10 x 2 x 2
RHS = ...  # shape 10 x 2 x 5
...
X[3, :, 2]  # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]

Args:
  chol: A `Tensor`. Must be `float32` or `float64`, shape is [..., M, M]. Cholesky factorization of `A`, e.g. chol = tf.cholesky(A). For that reason, only the lower triangular parts (including the diagonal) of the last two dimensions of `chol` are used. The strictly upper part is assumed to be zero and not accessed.
  rhs: A `Tensor`, same type as `chol`, shape is [..., M, K].
  name: A name to give this `Op`. Defaults to `cholesky_solve`.

Returns:
  Solution to A x = rhs, shape [..., M, K].
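A sketch of the underlying idea in NumPy: given L with A = L L^T, solving A X = RHS reduces to two triangular solves (here done with the general `np.linalg.solve` for brevity; a real implementation would use dedicated triangular solvers):

```python
import numpy as np

# A = L @ L.T, so A @ x = rhs becomes L y = rhs followed by L.T x = y.
A = np.array([[4.0, 2.0], [2.0, 3.0]])
rhs = np.array([[6.0], [5.0]])
L = np.linalg.cholesky(A)
y = np.linalg.solve(L, rhs)    # forward solve:  L y = rhs
x = np.linalg.solve(L.T, y)    # backward solve: L^T x = y
print(np.allclose(A @ x, rhs))  # True
```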
tf.matrix_solve(matrix, rhs, adjoint=None, name=None)

Solves systems of linear equations.

`matrix` is a tensor of shape [..., M, M] whose innermost 2 dimensions form square matrices. `rhs` is a tensor of shape [..., M, K]. The output is a tensor of shape [..., M, K]. If `adjoint` is `False` then each output matrix satisfies matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If `adjoint` is `True` then each output matrix satisfies adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :].

Args:
  matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`, `complex64`, `complex128`. Shape is [..., M, M].
  rhs: A `Tensor`. Must have the same type as `matrix`. Shape is [..., M, K].
  adjoint: An optional `bool`. Defaults to `False`. Boolean indicating whether to solve with `matrix` or its (block-wise) adjoint.
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `matrix`. Shape is [..., M, K].
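For the adjoint=False case, NumPy's `np.linalg.solve` matches these semantics: it finds X with matrix X = rhs:

```python
import numpy as np

# Solve 3a + b = 9, a + 2b = 8 -> a = 2, b = 3.
matrix = np.array([[3.0, 1.0], [1.0, 2.0]])
rhs = np.array([[9.0], [8.0]])
x = np.linalg.solve(matrix, rhs)
print(x)
# [[2.]
#  [3.]]
```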
tf.matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)

Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.

`matrix` is a tensor of shape [..., M, M] whose innermost 2 dimensions form square matrices. If `lower` is `True` then the strictly upper triangular part of each innermost matrix is assumed to be zero and not accessed. If `lower` is `False` then the strictly lower triangular part of each innermost matrix is assumed to be zero and not accessed. `rhs` is a tensor of shape [..., M, K].

The output is a tensor of shape [..., M, K]. If `adjoint` is `False` then the innermost matrices in output satisfy matrix equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If `adjoint` is `True` then the innermost matrices in output satisfy matrix equations adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j].

Args:
  matrix: A `Tensor`. Must be one of the following types: `float64`, `float32`. Shape is [..., M, M].
  rhs: A `Tensor`. Must have the same type as `matrix`. Shape is [..., M, K].
  lower: An optional `bool`. Defaults to `True`. Boolean indicating whether the innermost matrices in `matrix` are lower or upper triangular.
  adjoint: An optional `bool`. Defaults to `False`. Boolean indicating whether to solve with `matrix` or its (block-wise) adjoint. @compatibility(numpy) Equivalent to np.linalg.triangular_solve @end_compatibility
  name: A name for the operation (optional).

Returns:
  A `Tensor`. Has the same type as `matrix`. Shape is [..., M, K].
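A minimal forward-substitution sketch for the lower=True, adjoint=False case; like the op, it never reads the strictly upper triangle:

```python
import numpy as np

def triangular_solve_lower(L, b):
    # Forward substitution: x[i] depends only on L[i, :i+1] and x[:i].
    x = np.zeros_like(b, dtype=float)
    for i in range(L.shape[0]):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

# 2*x0 = 4 -> x0 = 2; 3*x0 + x1 = 7 -> x1 = 1.
L = np.array([[2.0, 0.0], [3.0, 1.0]])
b = np.array([4.0, 7.0])
x = triangular_solve_lower(L, b)
print(x)  # [2. 1.]
```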
tf.matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)

Solves one or more linear least-squares problems.

`matrix` is a tensor of shape [..., M, N] whose innermost 2 dimensions form M-by-N matrices. `rhs` is a tensor of shape [..., M, K] whose innermost 2 dimensions form M-by-K matrices. The computed output is a `Tensor` of shape [..., N, K] whose innermost 2 dimensions form N-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.

Below we will use the following notation for each pair of matrix and right-hand sides in the batch:

matrix = \(A \in \Re^{m \times n}\),
rhs = \(B \in \Re^{m \times k}\),
output = \(X \in \Re^{n \times k}\),
l2_regularizer = \(\lambda\).

If `fast` is `True`, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^T A + \lambda I)^{-1} A^T B\), which solves the least-squares problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} \|A Z - B\|_F^2 + \lambda \|Z\|_F^2\). If \(m \lt n\) then `output` is computed as \(X = A^T (A A^T + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} \|Z\|_F^2\), subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\) or \(\lambda\) is sufficiently large.

If `fast` is `False` an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \(A\) is rank deficient. This path is typically 6-7 times slower than the fast path. If `fast` is `False` then `l2_regularizer` is ignored.

Args:
  matrix: `Tensor` of shape [..., M, N].
  rhs: `Tensor` of shape [..., M, K].
  l2_regularizer: 0-D `double` `Tensor`. Ignored if fast=False.
  fast: bool. Defaults to `True`.
  name: string, optional name of the operation.

Returns:
  output: `Tensor` of shape [..., N, K] whose innermost 2 dimensions form N-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.
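For fast=True with m >= n and l2_regularizer = 0, the normal-equations formula above reduces to X = (A^T A)^{-1} A^T B, which can be checked against NumPy's least-squares solver:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0], [2.0], [3.0]])
X_normal = np.linalg.solve(A.T @ A, A.T @ B)      # normal equations
X_lstsq, *_ = np.linalg.lstsq(A, B, rcond=None)   # reference solver
print(np.allclose(X_normal, X_lstsq))  # True
```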
tf.self_adjoint_eig(tensor, name=None)

Computes the eigen decomposition of a batch of self-adjoint matrices.

Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices in `tensor` such that tensor[..., :, :] * v[..., :, i] = e[..., i] * v[..., :, i], for i = 0...N-1.

Args:
  tensor: `Tensor` of shape [..., N, N]. Only the lower triangular part of each inner matrix is referenced.
  name: string, optional name of the operation.

Returns:
  e: Eigenvalues. Shape is [..., N].
  v: Eigenvectors. Shape is [..., N, N]. The columns of the innermost matrices contain eigenvectors of the corresponding matrices in `tensor`.
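NumPy's `np.linalg.eigh` behaves the same way for a single matrix: by default it reads only the lower triangle and returns eigenvectors as columns of v, so A v[:, i] = e[i] v[:, i]:

```python
import numpy as np

# Symmetric matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
e, v = np.linalg.eigh(A)
print(e)  # [1. 3.]
print(np.allclose(A @ v, v * e))  # True
```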
tf.self_adjoint_eigvals(tensor, name=None)

Computes the eigenvalues of one or more self-adjoint matrices.

Args:
  tensor: `Tensor` of shape [..., N, N].
  name: string, optional name of the operation.

Returns:
  e: Eigenvalues. Shape is [..., N]. The vector e[..., :] contains the N eigenvalues of tensor[..., :, :].
tf.svd(tensor, full_matrices=False, compute_uv=True, name=None)

Computes the singular value decompositions of one or more matrices.

Computes the SVD of each inner matrix in `tensor` such that

tensor[..., :, :] = u[..., :, :] * diag(s[..., :]) * transpose(v[..., :, :])

# a is a tensor.
# s is a tensor of singular values.
# u is a tensor of left singular vectors.
# v is a tensor of right singular vectors.
s, u, v = svd(a)
s = svd(a, compute_uv=False)

Args:
  tensor: `Tensor` of shape [..., M, N]. Let P be the minimum of M and N.
  full_matrices: If true, compute full-sized `u` and `v`. If false (the default), compute only the leading P singular vectors. Ignored if `compute_uv` is `False`.
  compute_uv: If `True` then left and right singular vectors will be computed and returned in `u` and `v`, respectively. Otherwise, only the singular values will be computed, which can be significantly faster.
  name: string, optional name of the operation.

Returns:
  s: Singular values. Shape is [..., P].
  u: Left singular vectors. If `full_matrices` is `False` (default) then shape is [..., M, P]; if `full_matrices` is `True` then shape is [..., M, M]. Not returned if `compute_uv` is `False`.
  v: Right singular vectors. If `full_matrices` is `False` (default) then shape is [..., N, P]. If `full_matrices` is `True` then shape is [..., N, N]. Not returned if `compute_uv` is `False`.
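NumPy's `np.linalg.svd` with full_matrices=False gives the same shapes as the default tf.svd output; note that NumPy returns the right singular vectors already transposed (vt), while tf.svd returns v itself:

```python
import numpy as np

# Thin SVD of a 3 x 2 matrix: P = min(3, 2) = 2.
a = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
u, s, vt = np.linalg.svd(a, full_matrices=False)
print(u.shape, s.shape, vt.shape)  # (3, 2) (2,) (2, 2)
print(np.allclose(u @ np.diag(s) @ vt, a))  # True
```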