Computes recall@k of the predictions with respect to sparse labels.
```python
tf.compat.v1.metrics.recall_at_k(
    labels,
    predictions,
    k,
    class_id=None,
    weights=None,
    metrics_collections=None,
    updates_collections=None,
    name=None
)
```
If `class_id` is specified, we calculate recall by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is in the top-k `predictions`.

If `class_id` is not specified, we'll calculate recall as how often on average a class among the labels of a batch entry is in the top-k `predictions`.
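The two definitions above can be illustrated with a small pure-NumPy sketch. This is not TF's implementation; the function name `recall_at_k` and the label representation (one set of class ids per batch entry) are assumptions for illustration only.

```python
import numpy as np

def recall_at_k(labels, scores, k, class_id=None):
    """Illustrative recall@k over one batch (not TF's implementation).

    labels: list of sets of true class ids, one set per batch entry.
    scores: (batch, num_classes) array of prediction scores.
    """
    # Indices of the k highest-scoring classes per entry.
    top_k = np.argsort(-scores, axis=1)[:, :k]
    if class_id is not None:
        # Only entries whose labels contain class_id count; recall is the
        # fraction of those entries with class_id in the top-k predictions.
        relevant = [i for i, lab in enumerate(labels) if class_id in lab]
        if not relevant:
            return float("nan")
        hits = sum(class_id in top_k[i] for i in relevant)
        return hits / len(relevant)
    # Otherwise: fraction of all label entries found in the top-k predictions.
    tp = sum(len(lab & set(top_k[i])) for i, lab in enumerate(labels))
    total = sum(len(lab) for lab in labels)
    return tp / total

scores = np.array([[0.1, 0.6, 0.3],
                   [0.8, 0.1, 0.1]])
labels = [{1, 2}, {2}]
overall = recall_at_k(labels, scores, k=2)              # 2 of 3 labels hit
per_class = recall_at_k(labels, scores, k=2, class_id=2)  # 1 of 2 entries hit
```

Note that the `class_id` variant averages over qualifying batch entries, while the unrestricted variant averages over individual labels, so the two numbers generally differ on the same batch.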
`sparse_recall_at_k` creates two local variables, `true_positive_at_<k>` and `false_negative_at_<k>`, that are used to compute the recall@k frequency. This frequency is ultimately returned as `recall_at_<k>`: an idempotent operation that simply divides `true_positive_at_<k>` by total (`true_positive_at_<k>` + `false_negative_at_<k>`).
For estimation of the metric over a stream of data, the function creates an `update_op` operation that updates these variables and returns the `recall_at_<k>`. Internally, a `top_k` operation computes a `Tensor` indicating the top `k` `predictions`. Set operations applied to `top_k` and `labels` calculate the true positives and false negatives weighted by `weights`. Then `update_op` increments `true_positive_at_<k>` and `false_negative_at_<k>` using these values.
If `weights` is `None`, weights default to 1. Use weights of 0 to mask values.
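The masking effect of zero weights can be sketched as follows; again this is an illustrative pure-NumPy helper with an assumed name (`weighted_recall_at_k`), not the library's code.

```python
import numpy as np

def weighted_recall_at_k(labels, scores, k, weights):
    """Recall@k with per-entry weights; a weight of 0 masks an entry."""
    top_k = np.argsort(-scores, axis=1)[:, :k]
    tp = fn = 0.0
    for i, lab in enumerate(labels):
        hits = len(lab & set(top_k[i]))
        tp += weights[i] * hits          # weighted true positives
        fn += weights[i] * (len(lab) - hits)  # weighted false negatives
    return tp / (tp + fn)

scores = np.array([[0.1, 0.6, 0.3],
                   [0.8, 0.1, 0.1]])
labels = [{1, 2}, {2}]
# Weight 0 removes the second entry from both numerator and denominator.
masked = weighted_recall_at_k(labels, scores, 2, weights=[1.0, 0.0])
```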