tfma.metrics.SetMatchRecall

Computes recall for sets of labels and predictions.

Inherits From: Recall, Metric

The metric operates on labels and predictions provided as sets (stored as variable-length numpy arrays). The recall is the micro-averaged classification recall, which makes the metric suitable for cases where the number of classes is large or the full list of classes cannot be provided in advance.

Example:

Label: ['cats'], Predictions: {'classes': ['cats', 'dogs']}

The recall is 1, since the only label ('cats') appears in the predicted set.
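
To make the micro-averaging concrete, here is a minimal illustrative sketch of set-match recall in plain Python. This is not the TFMA implementation; the helper name and example data are made up for illustration.

import numpy as np

# Illustrative sketch only, not the TFMA implementation.
# Micro-averaging pools true positives and false negatives across
# all examples before dividing.
def micro_set_recall(labels, predictions):
    tp = sum(len(set(l) & set(p)) for l, p in zip(labels, predictions))
    fn = sum(len(set(l) - set(p)) for l, p in zip(labels, predictions))
    return tp / (tp + fn)

labels = [np.array(['cats']), np.array(['cats', 'birds'])]
preds = [np.array(['cats', 'dogs']), np.array(['cats'])]
print(micro_set_recall(labels, preds))  # 2 TP, 1 FN -> 0.666...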

Args

thresholds: (Optional) A float value or a python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is true, below is false). One metric value is generated for each threshold value. If neither thresholds nor top_k are set, the default is to calculate recall with thresholds=0.5.
top_k: (Optional) Used with a multi-class model to specify that the top-k values should be used to compute the confusion matrix. The net effect is that the non-top-k values are truncated and the matrix is then constructed from the average TP, FP, TN, FN across the classes. When top_k is used, metrics_specs.binarize settings must not be present. When top_k is used, the default threshold is float('-inf'); in this case, unmatched labels are still counted as false negatives, since they are treated as predictions with confidence score float('-inf').
name: (Optional) String name of the metric instance.
prediction_class_key: The key name of the classes in the prediction.
prediction_score_key: The key name of the scores in the prediction.
class_key: (Optional) The key name of the classes in class-weight pairs. If it is not provided, the classes are assumed to be the label classes.
weight_key: (Optional) The key name of the weights of classes in class-weight pairs. The value under this key should be a numpy array of the same length as the classes in class_key. The key should be stored under the features key.
**kwargs: (Optional) Additional args to pass along to init (and eventually on to _metric_computations and _metric_values). The args are passed to the recall metric, the confusion matrix metric, and the binary classification metric.
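
In TFMA, the metric is normally configured through metrics specs rather than instantiated and called directly. A minimal sketch, assuming illustrative key names ('tags', 'classes', 'scores'):

import tensorflow_model_analysis as tfma

# Sketch: add SetMatchRecall to an evaluation via metrics specs.
# The label and prediction key names are assumptions for illustration.
metrics_specs = tfma.metrics.specs_from_metrics([
    tfma.metrics.SetMatchRecall(
        name='set_match_recall',
        thresholds=[0.5],
        prediction_class_key='classes',
        prediction_score_key='scores'),
])

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='tags')],
    metrics_specs=metrics_specs)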

Attributes

compute_confidence_interval: Whether to compute confidence intervals for this metric.

Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method.
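
Confidence intervals are enabled at the evaluation level rather than on the metric itself. A minimal sketch using the standard TFMA options pattern, shown here on the assumption that it applies unchanged to this metric:

from google.protobuf import wrappers_pb2
import tensorflow_model_analysis as tfma

# Sketch: request confidence intervals for all metrics in the run.
eval_config = tfma.EvalConfig(
    options=tfma.Options(
        compute_confidence_intervals=wrappers_pb2.BoolValue(value=True)))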

Methods

computations

Creates computations associated with the metric.

from_config

Creates a metric instance from its config (the output of get_config).

get_config

Returns serializable config.

result

Computes the metric value from TP, TN, FP, FN values.
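
For recall, the value computed from these counts reduces to TP / (TP + FN). An illustrative sketch, not the TFMA source:

# Illustrative only: recall from pooled confusion-matrix counts.
def recall_from_counts(tp: float, fn: float) -> float:
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0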