Retrieves K highest scoring items and their ids from a large dataset.
Inherits From: TopK
tfrs.layers.factorized_top_k.Streaming(
query_model: Optional[tf.keras.Model] = None,
k: int = 10,
handle_incomplete_batches: bool = True,
num_parallel_calls: int = tf.data.experimental.AUTOTUNE,
sorted_order: bool = True
) -> None
Used to efficiently retrieve top K query-candidate scores from a dataset,
along with the top scoring candidates' identifiers.
Args
  query_model: Optional Keras model for representing queries. If provided,
    it will be used to transform raw features into query embeddings when
    querying the layer. If not provided, the layer will expect to be given
    query embeddings as inputs.
  k: Number of top scores to retrieve.
  handle_incomplete_batches: When True, candidate batches smaller than k
    will be correctly handled at the price of some performance. As an
    alternative, consider using the drop_remainder option when batching the
    candidate dataset.
  num_parallel_calls: Degree of parallelism when computing scores. Defaults
    to autotuning.
  sorted_order: Whether the resulting scores should be returned in sorted
    order. Setting this to False may result in a small increase in
    performance.
Raises
  ValueError: if candidate elements are not tuples.
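For illustration, a minimal sketch of constructing the layer. The query tower, vocabulary, and embedding dimension below are assumptions made for the example, not part of this API.

import tensorflow as tf
import tensorflow_recommenders as tfrs

# Hypothetical query tower mapping raw user ids to 32-dimensional embeddings.
user_ids = ["user_1", "user_2", "user_3"]
query_model = tf.keras.Sequential([
    tf.keras.layers.StringLookup(vocabulary=user_ids, mask_token=None),
    tf.keras.layers.Embedding(len(user_ids) + 1, 32),
])

# Streaming top-K layer: candidates are scored batch by batch from a dataset,
# so the full candidate set never has to fit in memory at once.
streaming = tfrs.layers.factorized_top_k.Streaming(
    query_model=query_model,
    k=10,
    handle_incomplete_batches=True,
)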
Methods
call
call(
queries: Union[tf.Tensor, Dict[Text, tf.Tensor]],
k: Optional[int] = None
) -> Tuple[tf.Tensor, tf.Tensor]
Computes K highest scores and candidate indices for a given query.
Args
  queries: Query features. If query_model was provided in the constructor,
    these can be raw query features that will be processed by the query
    model before performing retrieval. If query_model was not provided,
    these should be pre-computed query embeddings.
  k: Number of elements to retrieve. If not set, will default to the k set
    in the constructor.

Returns
  Tuple of [query_batch_size, k] tensor of top scores for each query and
  [query_batch_size, k] tensor of indices for the highest scoring candidates.

Raises
  ValueError: if index has not been called.
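A sketch of a call, assuming the layer constructed above has already been indexed (see index below). Because a query_model was supplied in the constructor, raw user ids can be passed directly; the ids are hypothetical.

# Retrieve the 5 best candidates for each query in the batch.
scores, identifiers = streaming(tf.constant(["user_1", "user_2"]), k=5)
# scores:      [2, 5] tensor of top scores, one row per query.
# identifiers: [2, 5] tensor of candidate identifiers (or indices into the
#              candidate dataset if no identifiers were passed to index).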
index
index(
candidates: tf.data.Dataset,
identifiers: Optional[tf.data.Dataset] = None
) -> "Streaming"
Builds the retrieval index.
When called multiple times, the existing index will be dropped and a new one
created.
Args
  candidates: Matrix (or dataset) of candidate embeddings.
  identifiers: Optional tensor (or dataset) of candidate identifiers. If
    given, these will be used as identifiers of the top candidates returned
    when performing searches. If not given, indices into the candidates
    tensor will be returned instead.
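A sketch of building the index from a batched candidate dataset. The corpus size, embedding dimension, and string ids are assumptions made for the example.

# Hypothetical corpus of 1,000 candidates with 32-dimensional embeddings.
candidate_embeddings = tf.data.Dataset.from_tensor_slices(
    tf.random.normal([1000, 32])).batch(128)
candidate_ids = tf.data.Dataset.from_tensor_slices(
    tf.constant([str(i) for i in range(1000)])).batch(128)

# Build the index; calling index again would replace the existing index.
streaming.index(candidate_embeddings, identifiers=candidate_ids)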
query_with_exclusions
@tf.function
query_with_exclusions(
queries: Union[tf.Tensor, Dict[Text, tf.Tensor]],
exclusions: tf.Tensor,
k: Optional[int] = None
) -> Tuple[tf.Tensor, tf.Tensor]
Query the index.
Args
  queries: Query features. If query_model was provided in the constructor,
    these can be raw query features that will be processed by the query
    model before performing retrieval. If query_model was not provided,
    these should be pre-computed query embeddings.
  exclusions: [query_batch_size, num_to_exclude] tensor of identifiers to
    be excluded from the top-k calculation. This is most commonly used to
    exclude previously seen candidates from retrieval. For example, if a
    user has already seen items with ids "42" and "43", you could set
    exclusions to [["42", "43"]].
  k: The number of candidates to retrieve. Defaults to the constructor k
    parameter if not supplied.

Returns
  Tuple of (top candidate scores, top candidate identifiers).

Raises
  ValueError: if index has not been called.
  ValueError: if queries is not a tensor (after being passed through
    the query model).
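A sketch of retrieval with exclusions, reusing the indexed layer from the examples above. The seen-item ids are hypothetical and must have the same type as the identifiers passed to index.

# One row of previously seen candidate ids per query in the batch; shape is
# [query_batch_size, num_to_exclude].
seen_items = tf.constant([["42", "43"], ["7", "8"]])

scores, identifiers = streaming.query_with_exclusions(
    tf.constant(["user_1", "user_2"]),
    exclusions=seen_items,
    k=10,
)
# The excluded ids will not appear among the returned identifiers.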