tf.sparse.sparse_dense_matmul

Multiply SparseTensor (or dense Matrix) (of rank 2) "A" by dense matrix (or SparseTensor) "B". Please note that one and only one of the inputs MUST be a SparseTensor and the other MUST be a dense matrix.
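
For example, a minimal usage sketch (the 2x3 sparse matrix and 3x2 dense matrix below are illustrative values, not from this doc):

  import tensorflow as tf

  # Sparse 2x3 matrix A: indices are the (row, col) pairs of the nonzero entries.
  sp_a = tf.sparse.SparseTensor(
      indices=[[0, 0], [1, 1], [1, 2]],
      values=[1.0, 2.0, 3.0],
      dense_shape=[2, 3])

  # Dense 3x2 matrix B with the same dtype as sp_a's values.
  b = tf.constant([[1.0, 2.0],
                   [3.0, 4.0],
                   [5.0, 6.0]])

  # Result has shape [2, 2]; equivalent to tf.sparse.to_dense(sp_a) @ b.
  c = tf.sparse.sparse_dense_matmul(sp_a, b)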

The following input format is recommended (but not required) for optimal performance:

  • If adjoint_a == false: A should be sorted in lexicographically increasing order. Use tf.sparse.reorder if you're not sure (see the sketch after this list).
  • If adjoint_a == true: A should be sorted in order of increasing dimension 1 (i.e., "column major" order instead of "row major" order).
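
A small sketch of the adjoint_a == false case, assuming the indices may arrive unsorted (values are illustrative):

  import tensorflow as tf

  # Indices given out of lexicographic (row-major) order.
  sp_a = tf.sparse.SparseTensor(
      indices=[[1, 2], [0, 0], [1, 1]],
      values=[3.0, 1.0, 2.0],
      dense_shape=[2, 3])
  b = tf.random.uniform([3, 2])

  # Put the indices into canonical row-major order before multiplying.
  sp_a = tf.sparse.reorder(sp_a)
  c = tf.sparse.sparse_dense_matmul(sp_a, b)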

Args:

  • sp_a: SparseTensor (or dense Matrix) A, of rank 2.
  • b: dense Matrix (or SparseTensor) B, with the same dtype as sp_a.
  • adjoint_a: Use the adjoint of A in the matrix multiply. If A is complex, this is transpose(conj(A)). Otherwise it's transpose(A).
  • adjoint_b: Use the adjoint of B in the matrix multiply. If B is complex, this is transpose(conj(B)). Otherwise it's transpose(B).
  • name: A name prefix for the returned tensors (optional).

Returns:

A dense matrix (pseudo-code in dense np.matrix notation):

  A = A.H if adjoint_a else A
  B = B.H if adjoint_b else B
  return A*B
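
A small sketch of those adjoint semantics (illustrative values): storing A in transposed, "column major" layout and passing adjoint_a=True yields the same product as multiplying by A itself.

  import tensorflow as tf

  # a_t holds A transposed (shape [3, 2]); A itself is 2x3.
  a_t = tf.sparse.SparseTensor(
      indices=[[0, 0], [1, 1], [2, 1]],
      values=[1.0, 2.0, 3.0],
      dense_shape=[3, 2])
  b = tf.constant([[1.0, 2.0],
                   [3.0, 4.0],
                   [5.0, 6.0]])

  # For real dtypes the adjoint is just the transpose, so this equals
  # tf.transpose(tf.sparse.to_dense(a_t)) @ b and has shape [2, 2].
  c = tf.sparse.sparse_dense_matmul(a_t, b, adjoint_a=True)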

Notes:

Using tf.nn.embedding_lookup_sparse for sparse multiplication:

It's not obvious, but you can think of embedding_lookup_sparse as another form of sparse-dense multiplication. In some situations, you may prefer to use embedding_lookup_sparse even though you're not dealing with embeddings.

There are two questions to ask in the decision process:

  • Do you need gradients computed as sparse too?
  • Is your sparse data represented as two SparseTensors, ids and values?

There is more explanation about the data format below. If you answer yes to either question, consider using tf.nn.embedding_lookup_sparse, as in the sketch below.
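
A sketch of that ids/values representation, using combiner="sum" so the result matches A @ B while the gradient with respect to b stays sparse (the matrices here are illustrative):

  import tensorflow as tf

  # The 2x3 sparse matrix A = [[1, 0, 0], [0, 2, 3]] as two parallel
  # SparseTensors: sp_ids holds the column indices, sp_values the entries.
  # (In sp_ids, the indices only pick a slot within each row; the values
  # carry the actual column ids.)
  sp_ids = tf.sparse.SparseTensor(
      indices=[[0, 0], [1, 0], [1, 1]],
      values=tf.constant([0, 1, 2], dtype=tf.int64),
      dense_shape=[2, 2])
  sp_values = tf.sparse.SparseTensor(
      indices=[[0, 0], [1, 0], [1, 1]],
      values=[1.0, 2.0, 3.0],
      dense_shape=[2, 2])

  # Dense 3x2 matrix B, playing the role of the "embedding" table.
  b = tf.constant([[1.0, 2.0],
                   [3.0, 4.0],
                   [5.0, 6.0]])

  # combiner="sum" gives the same result as tf.sparse.sparse_dense_matmul
  # on the equivalent SparseTensor, but gradients w.r.t. b are IndexedSlices.
  c = tf.nn.embedding_lookup_sparse(b, sp_ids, sp_values, combiner="sum")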