Computes precision@k of the predictions with respect to sparse labels.
```python
tf.compat.v1.metrics.precision_at_k(
    labels,
    predictions,
    k,
    class_id=None,
    weights=None,
    metrics_collections=None,
    updates_collections=None,
    name=None
)
```
If `class_id` is specified, we calculate precision by considering only the entries in the batch for which `class_id` is in the top-k highest `predictions`, and computing the fraction of them for which `class_id` is indeed a correct label.
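For example, a minimal sketch of the class-restricted case (graph mode; the toy labels and predictions here are illustrative, not from this page):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy batch: 2 examples, 4 classes, one correct label per example.
labels = tf.constant([[2], [1]], dtype=tf.int64)
predictions = tf.constant(
    [[0.1, 0.3, 0.6, 0.0],   # top-2 predictions: classes 2, 1
     [0.6, 0.2, 0.1, 0.1]],  # top-2 predictions: classes 0, 1
    dtype=tf.float32)

# Only entries whose top-2 contains class 2 are considered: just the
# first example, where class 2 is indeed a correct label.
precision, update_op = tf.compat.v1.metrics.precision_at_k(
    labels=labels, predictions=predictions, k=2, class_id=2)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(precision))  # 1.0
```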
If `class_id` is not specified, we calculate precision as how often, on average, a class among the top-k classes with the highest predicted values of a batch entry is correct and can be found in the label for that entry.
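As an illustration of the unrestricted case, here is a sketch with the same toy values; the result follows from counting how many of the top-2 slots across the batch hit a correct label:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([[2], [1]], dtype=tf.int64)
predictions = tf.constant(
    [[0.1, 0.3, 0.6, 0.0],   # top-2: classes 2, 1 -> one hit (class 2)
     [0.6, 0.2, 0.1, 0.1]],  # top-2: classes 0, 1 -> one hit (class 1)
    dtype=tf.float32)

precision, update_op = tf.compat.v1.metrics.precision_at_k(
    labels=labels, predictions=predictions, k=2)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(precision))  # 0.5: 2 hits out of 4 top-2 slots
```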
`precision_at_k` creates two local variables, `true_positive_at_<k>` and `false_positive_at_<k>`, that are used to compute the precision@k frequency. This frequency is ultimately returned as `precision_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by the total (`true_positive_at_<k>` + `false_positive_at_<k>`).
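Because the counters are local (not global) variables, they must be initialized with `tf.compat.v1.local_variables_initializer()` before the first `update_op` run, and they can be listed for inspection. A sketch (the exact variable names depend on scoping and may differ):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([[2]], dtype=tf.int64)
predictions = tf.constant([[0.1, 0.3, 0.6, 0.0]], dtype=tf.float32)
precision, update_op = tf.compat.v1.metrics.precision_at_k(
    labels, predictions, k=2)

# The metric's counters appear among the graph's local variables.
for v in tf.compat.v1.local_variables():
    print(v.name)  # expected to include true_positive_at_2 / false_positive_at_2
```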
For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `precision_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false positives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_positive_at_<k>` using these values.
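A sketch of the streaming pattern, assuming a hypothetical `batches` iterable of `(label_array, prediction_array)` pairs: each `update_op` run accumulates the counters, and the final `precision` read is idempotent.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels_ph = tf.compat.v1.placeholder(tf.int64, shape=[None, 1])
preds_ph = tf.compat.v1.placeholder(tf.float32, shape=[None, 4])

precision, update_op = tf.compat.v1.metrics.precision_at_k(
    labels=labels_ph, predictions=preds_ph, k=2)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    for label_batch, pred_batch in batches:  # hypothetical data source
        sess.run(update_op,
                 feed_dict={labels_ph: label_batch, preds_ph: pred_batch})
    # Reading `precision` does not change the accumulated counters.
    print(sess.run(precision))
```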
If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
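For instance, a sketch of masking with zero weights, reusing the toy batch from above; `weights` here has one entry per batch example, and the zero drops the second example from the counts:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

labels = tf.constant([[2], [1]], dtype=tf.int64)
predictions = tf.constant(
    [[0.1, 0.3, 0.6, 0.0],
     [0.6, 0.2, 0.1, 0.1]], dtype=tf.float32)

# Weight 0 masks the second example entirely; only the first contributes.
weights = tf.constant([1.0, 0.0])

precision, update_op = tf.compat.v1.metrics.precision_at_k(
    labels=labels, predictions=predictions, k=2, weights=weights)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(precision))  # 0.5: the first example's top-2 has 1 hit of 2
```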