Computes precision@k of the predictions with respect to sparse labels.
tf.contrib.metrics.streaming_sparse_precision_at_k( predictions, labels, k, class_id=None, weights=None, metrics_collections=None, updates_collections=None, name=None )
If class_id is not specified, we calculate precision as the ratio of true positives (i.e., correct predictions: items in the top k highest predictions that are found in the corresponding row in labels) to positives (all top k predictions). If class_id is specified, we calculate precision by considering only the rows in the batch for which class_id is in the top k highest predictions, and computing the fraction of them for which class_id occurs in the corresponding row in labels.

We expect precision to decrease as k increases.
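As an illustrative sketch (plain Python/NumPy, not the TF implementation), the unrestricted definition above can be written as:

```python
import numpy as np

def precision_at_k(predictions, labels, k):
    """precision@k for one batch: true positives over all top-k predictions.

    predictions: list of per-class score rows; labels: list of true class-id
    sets per row. Hypothetical helper, named here for illustration only.
    """
    predictions = np.asarray(predictions)
    true_positives = 0
    for row_scores, row_labels in zip(predictions, labels):
        # Indices of the k highest-scoring classes in this row.
        top_k = np.argsort(row_scores)[::-1][:k]
        true_positives += len(set(top_k) & set(row_labels))
    # Positives are all top-k predictions: k per row.
    return true_positives / (k * len(predictions))
```

For example, with one row of scores [0.1, 0.9, 0.5, 0.2], k=2 selects classes 1 and 2; if the label set is {1}, one of the two top-k predictions is correct, so precision@2 is 0.5.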
streaming_sparse_precision_at_k creates two local variables, true_positive_at_<k> and false_positive_at_<k>, that are used to compute the precision@k frequency. This frequency is ultimately returned as precision_at_<k>: an idempotent operation that simply divides true_positive_at_<k> by total (true_positive_at_<k> + false_positive_at_<k>).
For estimation of the metric over a stream of data, the function creates an update_op operation that updates these variables and returns the precision_at_<k>. Internally, a top_k operation computes a Tensor indicating the top k predictions. Set operations applied to top_k and labels calculate the true positives and false positives weighted by weights. Then update_op increments true_positive_at_<k> and false_positive_at_<k> using these values.

If weights is None, weights default to 1. Use weights of 0 to mask values.
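The streaming behaviour can be sketched in plain Python (a simplified model, not the TF op): two accumulators play the role of the true_positive_at_<k> and false_positive_at_<k> local variables, update plays the role of update_op, and value is the idempotent precision_at_<k> read.

```python
import numpy as np

class StreamingPrecisionAtK:
    """Hypothetical sketch of the streaming accumulation described above."""

    def __init__(self, k):
        self.k = k
        self.true_positives = 0.0
        self.false_positives = 0.0

    def update(self, predictions, labels, weights=None):
        # weights=None defaults to 1 per row; a weight of 0 masks that row.
        if weights is None:
            weights = [1.0] * len(predictions)
        for scores, row_labels, w in zip(predictions, labels, weights):
            top_k = np.argsort(np.asarray(scores))[::-1][:self.k]
            hits = len(set(top_k) & set(row_labels))
            self.true_positives += w * hits
            self.false_positives += w * (self.k - hits)
        # Like update_op, returns the metric value after updating.
        return self.value()

    def value(self):
        # Idempotent read: true positives over total (tp + fp).
        total = self.true_positives + self.false_positives
        return self.true_positives / total if total else float("nan")
```

Calling update repeatedly over batches accumulates counts, so value reflects precision@k over the whole stream rather than the last batch alone.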
Args:

predictions: Float Tensor with shape [D1, ... DN, num_classes] where N >= 1. Commonly, N=1 and predictions has shape [batch_size, num_classes]. The final dimension contains the logit values for each class. [D1, ... DN] must match labels.

labels: int64 Tensor or SparseTensor with shape [D1, ... DN, num_labels], where N >= 1 and num_labels is the number of target classes for the associated prediction. Commonly, N=1 and labels has shape [batch_size, num_labels]. [D1, ... DN] must match predictions. Values should be in range [0, num_classes), where num_classes is the last dimension of predictions. Values outside this range are ignored.
k: Integer, k for @k metric.
class_id: Integer class ID for which we want binary metrics. This should be in range [0, num_classes], where num_classes is the last dimension of predictions. If class_id is outside this range, the method returns NAN.
weights: Tensor whose rank is either 0, or n-1, where n is the rank of labels. If the latter, it must be broadcastable to labels (i.e., all dimensions must be either 1, or the same as the corresponding labels dimension).
metrics_collections: An optional list of collections that values should be added to.
updates_collections: An optional list of collections that updates should be added to.
name: Name of new update operation, and namespace for other dependent ops.
Returns:

precision: Scalar float64 Tensor with the value of true_positives divided by the sum of true_positives and false_positives.

update_op: Operation that increments the true_positives and false_positives variables appropriately, and whose value matches precision.

Raises:

ValueError: If weights is not None and its shape doesn't match predictions, or if either metrics_collections or updates_collections are not a list or tuple.
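The class_id-restricted behaviour described earlier can also be sketched in plain Python (a hypothetical illustration, not the TF op): only rows where class_id appears in the top k predictions are considered, and precision is the fraction of those rows whose labels contain class_id.

```python
import numpy as np

def precision_at_k_for_class(predictions, labels, k, class_id):
    """Sketch of the class_id variant: restrict to rows where class_id
    is in the top-k predictions, then check the labels of those rows."""
    considered = 0
    correct = 0
    for scores, row_labels in zip(predictions, labels):
        top_k = np.argsort(np.asarray(scores))[::-1][:k]
        if class_id in top_k:
            considered += 1
            if class_id in row_labels:
                correct += 1
    # No considered rows: the metric is undefined, mirroring the NAN case.
    return correct / considered if considered else float("nan")
```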