Computes average precision@k of predictions with respect to sparse labels.
tf.compat.v1.metrics.average_precision_at_k( labels, predictions, k, weights=None, metrics_collections=None, updates_collections=None, name=None )
average_precision_at_k creates two local variables, average_precision_at_<k>/total and average_precision_at_<k>/max, that
are used to compute the frequency. This frequency is ultimately returned as
average_precision_at_<k>: an idempotent operation that simply divides
average_precision_at_<k>/total by average_precision_at_<k>/max.
For estimation of the metric over a stream of data, the function creates an
update_op operation that updates these variables and returns the
average_precision_at_<k>. Internally, a
top_k operation computes a Tensor
indicating the top k
predictions. Set operations applied to top_k and
labels calculate the true positives and false positives weighted by
weights. Then update_op increments true_positive_at_<k> and
false_positive_at_<k> using these values.
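The per-example math described above (rank by score, check which of the top k predictions are true labels, accumulate precision at each hit, then normalize by min(k, number of labels)) can be sketched in plain Python. This is an illustrative sketch of the metric's arithmetic, not the TF op itself; the function name is hypothetical:

```python
def average_precision_at_k(labels, predictions, k):
    """Average precision@k for a single example.

    labels: collection of relevant class ids (the sparse labels).
    predictions: sequence of per-class scores.
    """
    # top_k step: class ids of the k highest-scoring predictions.
    top_k = sorted(range(len(predictions)),
                   key=lambda i: predictions[i], reverse=True)[:k]
    hits = 0
    ap = 0.0
    for rank, cls in enumerate(top_k, start=1):
        # Set-membership step: count a hit when a top-k class is a true label,
        # and add precision@rank at that position.
        if cls in labels:
            hits += 1
            ap += hits / rank
    # Normalize by the best achievable number of hits: min(k, #labels).
    return ap / min(k, len(labels))
```

For example, with scores [0.1, 0.3, 0.2, 0.4] and true labels {0, 3}, the top-2 classes are 3 and 1; only class 3 is relevant, giving AP@2 = (1/1) / 2 = 0.5.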
If weights is None, weights default to 1. Use weights of 0 to mask values.
|k|Integer, k for @k metric. This will calculate an average precision for range(1,k), as documented above.|
|metrics_collections|An optional list of collections that values should be added to.|
|updates_collections|An optional list of collections that updates should be added to.|
|name|Name of new update operation, and namespace for other dependent ops.|
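Under the streaming semantics described above, the two local variables behave like a weighted running mean: update_op folds each new example in, while the value op idempotently divides total by max. A minimal sketch of that bookkeeping, assuming per-example AP values are computed elsewhere (the class and method names are illustrative, not the TF API):

```python
class StreamingAveragePrecision:
    """Mimics the total/max local variables of the streaming metric."""

    def __init__(self):
        self.total = 0.0  # weighted sum of per-example AP@k values
        self.max = 0.0    # sum of weights (example count when weights are 1)

    def update(self, ap_value, weight=1.0):
        # Plays the role of update_op: accumulate one example, then
        # return the current metric value. A weight of 0 masks the example.
        self.total += weight * ap_value
        self.max += weight
        return self.value()

    def value(self):
        # Plays the role of the idempotent value op: simply total / max.
        return self.total / self.max if self.max else 0.0
```

Updating with AP values 1.0 and 0.5 yields 0.75, and a subsequent update with weight=0 leaves the result unchanged, matching the masking behavior of zero weights.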