tf.nn.safe_embedding_lookup_sparse
Lookup embedding results, accounting for invalid IDs and empty features.
tf.nn.safe_embedding_lookup_sparse(
    embedding_weights,
    sparse_ids,
    sparse_weights=None,
    combiner='mean',
    default_id=None,
    max_norm=None,
    name=None
)
The partitioned embedding tensors in embedding_weights must all have the same
shape except for the first dimension. The first dimension is allowed to vary
because the vocabulary size is not necessarily a multiple of the number of
shards.
Invalid IDs (< 0) are pruned from input IDs and weights, as are any IDs with
non-positive weight. For an entry with no features, the embedding vector for
default_id is returned, or the 0-vector if default_id is not supplied.

The ids and weights may be multi-dimensional. Embeddings are always aggregated
along the last dimension.
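The pruning and fallback rules above can be sketched in plain Python (illustrative only, not the TF implementation; prune_row is a made-up helper name for this example):

```python
# Sketch of the pruning rules: drop invalid ids (< 0) and entries with
# non-positive weight from one row, then fall back to default_id when
# nothing survives.
def prune_row(ids, weights, default_id=None):
    kept = [(i, w) for i, w in zip(ids, weights) if i >= 0 and w > 0]
    if not kept:
        # The op returns the embedding for default_id here,
        # or the 0-vector if default_id is None.
        return [(default_id, 1.0)] if default_id is not None else []
    return kept

print(prune_row([1, -1, 3], [2.0, 1.0, 0.0]))   # [(1, 2.0)]
print(prune_row([-1], [1.0], default_id=0))     # [(0, 1.0)]
```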
If len(embedding_weights) > 1, each element id of ids is partitioned between
the elements of embedding_weights according to the "div" partition strategy,
which assigns ids to partitions in a contiguous manner. For instance, 13 ids
are split across 5 partitions as
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]].

If the id space does not evenly divide the number of partitions, each of the
first (max_id + 1) % len(embedding_weights) partitions is assigned one more
id.
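The "div" split can be computed with a short sketch (illustrative pure Python; div_partition is a hypothetical helper, not part of the TF API):

```python
# How the "div" strategy splits an id space across shards: each of the
# first (num_ids % num_shards) shards gets one extra id, and ids stay
# contiguous within a shard.
def div_partition(num_ids, num_shards):
    base, extra = divmod(num_ids, num_shards)
    shards, start = [], 0
    for s in range(num_shards):
        size = base + (1 if s < extra else 0)
        shards.append(list(range(start, start + size)))
        start += size
    return shards

print(div_partition(13, 5))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]
```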
Args:
  embedding_weights: A single tensor representing the complete embedding
    tensor, or a list of tensors all of the same shape except for the first
    dimension, representing sharded embedding tensors following the "div"
    partition strategy.
  sparse_ids: SparseTensor of shape [d_0, d_1, ..., d_n] containing the ids.
    d_0 is typically batch size.
  sparse_weights: SparseTensor of the same shape as sparse_ids, containing
    float weights corresponding to sparse_ids, or None if all weights are
    assumed to be 1.0.
  combiner: A string specifying how to combine embedding results for each
    entry. Currently "mean", "sqrtn" and "sum" are supported, with "mean"
    the default.
  default_id: The id to use for an entry with no features. If None, the
    0-vector is returned for such entries.
  max_norm: If not None, all embeddings are l2-normalized to max_norm before
    combining.
  name: A name for this operation (optional).
Returns:
  A dense tensor representing the combined embeddings for the sparse ids. For
  each row in the dense tensor represented by sparse_ids, the op looks up the
  embeddings for all ids in that row, multiplies them by the corresponding
  weight, and combines these embeddings as specified by combiner.

  In other words, if

    shape(combined embedding_weights) = [p0, p1, ..., pm]

  and

    shape(sparse_ids) = shape(sparse_weights) = [d0, d1, ..., dn]

  then

    shape(output) = [d0, d1, ..., dn-1, p1, ..., pm]

  For instance, if params is a 10x20 matrix and sp_ids / sp_weights are

    [0, 0]: id 1, weight 2.0
    [0, 1]: id 3, weight 0.5
    [1, 0]: id -1, weight 1.0
    [2, 3]: id 1, weight 3.0

  with default_id = 0 and combiner = "mean", then the output will be a 3x20
  matrix where

    output[0, :] = (params[1, :] * 2.0 + params[3, :] * 0.5) / (2.0 + 0.5)
    output[1, :] = (params[0, :] * 1.0) / 1.0
    output[2, :] = (params[1, :] * 3.0) / 3.0
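The "mean" combiner arithmetic from the example above can be reproduced with NumPy (a sketch of the weighted-mean computation only, not a call to the TF op; a small 10x4 table stands in for the 10x20 matrix):

```python
import numpy as np

# Toy embedding table: 10 rows of 4-dimensional vectors.
params = np.arange(10 * 4, dtype=np.float64).reshape(10, 4)

def weighted_mean(ids_weights, params):
    """Combine embedding rows as the weighted mean, like combiner='mean'."""
    num = sum(params[i] * w for i, w in ids_weights)
    den = sum(w for _, w in ids_weights)
    return num / den

# Row 0 of the example: ids 1 and 3 with weights 2.0 and 0.5.
row0 = weighted_mean([(1, 2.0), (3, 0.5)], params)
# Row 1: id -1 is pruned, so default_id = 0 is used with weight 1.0.
row1 = weighted_mean([(0, 1.0)], params)
# Row 2: a single id, so the mean is just that embedding.
row2 = weighted_mean([(1, 3.0)], params)
# row0 == (params[1] * 2.0 + params[3] * 0.5) / (2.0 + 0.5)
```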
Raises:
  ValueError: if embedding_weights is empty.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2023-10-06 UTC.