Multiply SparseTensor (or dense Matrix) (of rank 2) "A" by dense matrix (or SparseTensor) "B". Note that exactly one of the inputs must be a SparseTensor and the other must be a dense matrix.
Compat aliases for migration
See the Migration guide for more details.
`tf.compat.v1.sparse.sparse_dense_matmul`, `tf.compat.v1.sparse_tensor_dense_matmul`
tf.sparse.sparse_dense_matmul(sp_a, b, adjoint_a=False, adjoint_b=False, name=None)
The following input format is recommended (but not required) for optimal performance:
If adjoint_a == false: A should be sorted in lexicographically increasing order. Use tf.sparse.reorder if you're not sure (see the sketch after this list).
If adjoint_a == true: A should be sorted in order of increasing dimension 1 (i.e., "column major" order instead of "row major" order).
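As a quick illustration of the recommended format, here is a minimal sketch (the shapes and values are assumed example data, not part of the API) of multiplying a small SparseTensor by a dense matrix, reordering first so the indices are guaranteed to be in row-major order:

```python
import tensorflow as tf

# A 2x3 sparse matrix "A" with two nonzero entries.
sp_a = tf.sparse.SparseTensor(
    indices=[[0, 0], [1, 2]],
    values=[1.0, 2.0],
    dense_shape=[2, 3],
)
# A no-op if the indices are already lexicographically sorted,
# but it guarantees the recommended input format.
sp_a = tf.sparse.reorder(sp_a)

# A dense 3x2 matrix "B" with the same dtype as sp_a.
b = tf.constant([[1.0, 2.0],
                 [3.0, 4.0],
                 [5.0, 6.0]])

# The result is an ordinary dense 2x2 Tensor.
c = tf.sparse.sparse_dense_matmul(sp_a, b)
print(c)
```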
|Args||
|`sp_a`|SparseTensor (or dense Matrix) A, of rank 2.|
|`b`|dense Matrix (or SparseTensor) B, with the same dtype as sp_a.|
|`adjoint_a`|Use the adjoint of A in the matrix multiply. If A is complex, this is transpose(conj(A)). Otherwise it's transpose(A).|
|`adjoint_b`|Use the adjoint of B in the matrix multiply. If B is complex, this is transpose(conj(B)). Otherwise it's transpose(B).|
|`name`|A name prefix for the returned tensors (optional).|
|Returns||
||A dense matrix (pseudo-code in dense np.matrix notation): `A = A.H if adjoint_a else A`; `B = B.H if adjoint_b else B`; `return A * B`.|
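To make the adjoint flags concrete, here is a small sketch (shapes and values are assumed example data) showing that adjoint_a=True multiplies transpose(A) by B without you having to materialize the transposed tensor yourself:

```python
import tensorflow as tf

# A 3x2 sparse matrix "A", indices in lexicographic order.
sp_a = tf.sparse.SparseTensor(indices=[[0, 1], [2, 0]],
                              values=[3.0, 4.0],
                              dense_shape=[3, 2])
b = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0],
                 [7.0, 8.0, 9.0]])

# Computes transpose(A) @ B (conjugate-transpose for complex dtypes).
c = tf.sparse.sparse_dense_matmul(sp_a, b, adjoint_a=True)
print(c.shape)  # (2, 3): A is 3x2, so transpose(A) is 2x3; B is 3x3.
```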
tf.nn.embedding_lookup_sparse for sparse multiplication:

It's not obvious, but you can consider embedding_lookup_sparse as another sparse-dense multiplication. In some situations you may prefer to use embedding_lookup_sparse even though you're not dealing with embeddings.

There are two questions to ask in the decision process: Do you need gradients computed as sparse too? Is your sparse data represented as two SparseTensors: ids and values? There is more explanation about data format below. If the answer to either question is yes, consider using tf.nn.embedding_lookup_sparse.
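As a sketch of the ids/values representation (the matrices here are assumed example data), the following computes the same product A @ B via tf.nn.embedding_lookup_sparse with combiner="sum". One practical difference is that gradients with respect to the dense matrix come back sparse (as IndexedSlices), since the op only gathers the rows it touches:

```python
import tensorflow as tf

# Dense matrix B, playing the role of the "embedding table".
b = tf.constant([[1.0, 2.0],
                 [3.0, 4.0],
                 [5.0, 6.0]])

# Sparse 2x3 matrix A split into two aligned SparseTensors:
# sp_ids.values gives the column of A for each nonzero,
# sp_weights.values gives its value, and the shared row index
# says which row of A the nonzero belongs to.
# Here A = [[1, 0, 0],
#           [0, 2, 3]].
indices = [[0, 0], [1, 0], [1, 1]]
sp_ids = tf.sparse.SparseTensor(
    indices=indices,
    values=tf.constant([0, 1, 2], dtype=tf.int64),
    dense_shape=[2, 2])
sp_weights = tf.sparse.SparseTensor(
    indices=indices,
    values=tf.constant([1.0, 2.0, 3.0]),
    dense_shape=[2, 2])

# combiner="sum" makes each output row a weighted sum of rows of b,
# i.e. a row of A @ B.
c = tf.nn.embedding_lookup_sparse(b, sp_ids, sp_weights, combiner="sum")
print(c)  # [[1., 2.], [21., 26.]]
```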