Module: tf.compat.v1.metrics

Evaluation-related metrics.

Functions

accuracy(...): Calculates how often predictions match labels.
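Like the other functions in this module, accuracy is a streaming metric built on a total/count accumulator pair. A simplified plain-Python sketch of those semantics (not the TensorFlow implementation; the class name here is illustrative only):

```python
class StreamingAccuracy:
    """Plain-Python sketch of the (total, count) accumulator pattern
    behind tf.compat.v1.metrics.accuracy; not TensorFlow code."""

    def __init__(self):
        self.total = 0.0  # weighted number of correct predictions so far
        self.count = 0.0  # total weight seen so far

    def update(self, labels, predictions, weights=None):
        # Analogous to running the metric's update_op on one batch.
        if weights is None:
            weights = [1.0] * len(labels)
        for y, p, w in zip(labels, predictions, weights):
            if y == p:
                self.total += w
            self.count += w
        return self.result()

    def result(self):
        # Analogous to evaluating the metric's value tensor.
        return self.total / self.count if self.count else 0.0


acc = StreamingAccuracy()
acc.update([1, 0, 1], [1, 1, 1])  # 2 of 3 correct
acc.update([0, 0], [0, 1])        # 1 of 2 correct
print(acc.result())               # 3 correct out of 5 -> 0.6
```

Each update folds a new batch into the running accumulators, which is why these metrics can be evaluated incrementally over an arbitrarily long stream of batches.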

auc(...): Computes the approximate AUC via a Riemann sum. (deprecated)

average_precision_at_k(...): Computes average precision@k of predictions with respect to sparse labels.

false_negatives(...): Computes the total number of false negatives.

false_negatives_at_thresholds(...): Computes false negatives at provided threshold values.

false_positives(...): Sum the weights of false positives.

false_positives_at_thresholds(...): Computes false positives at provided threshold values.

mean(...): Computes the (weighted) mean of the given values.

mean_absolute_error(...): Computes the mean absolute error between the labels and predictions.

mean_cosine_distance(...): Computes the cosine distance between the labels and predictions.

mean_iou(...): Calculates per-step mean Intersection-Over-Union (mIOU).
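mean_iou accumulates a confusion matrix and averages per-class IoU (true_positives / (true_positives + false_positives + false_negatives)) over the classes that appear. A single-step plain-Python sketch of that computation (illustrative, not the TF implementation):

```python
def mean_iou(labels, predictions, num_classes):
    """Sketch of confusion-matrix-based mean IoU, the quantity
    tf.compat.v1.metrics.mean_iou accumulates. Plain Python."""
    # Build the confusion matrix: rows = true class, columns = predicted class.
    cm = [[0] * num_classes for _ in range(num_classes)]
    for y, p in zip(labels, predictions):
        cm[y][p] += 1

    ious = []
    for c in range(num_classes):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(num_classes)) - tp
        fn = sum(cm[c][r] for r in range(num_classes)) - tp
        denom = tp + fp + fn
        if denom > 0:  # average only over classes present in labels or predictions
            ious.append(tp / denom)
    return sum(ious) / len(ious) if ious else 0.0


# Class 0: IoU = 1/2; class 1: IoU = 2/3; mean = 7/12.
print(mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2))
```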

mean_per_class_accuracy(...): Calculates the mean of the per-class accuracies.

mean_relative_error(...): Computes the mean relative error by normalizing with the given values.

mean_squared_error(...): Computes the mean squared error between the labels and predictions.

mean_tensor(...): Computes the element-wise (weighted) mean of the given tensors.
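All of the functions above share the same calling convention: each returns a (value_tensor, update_op) pair backed by local variables, which must be initialized before the first update. A minimal end-to-end sketch using accuracy (assumes a TensorFlow 2.x install, driven through the compat.v1 graph APIs):

```python
import tensorflow as tf

# tf.compat.v1.metrics requires graph mode, so disable eager execution first.
tf.compat.v1.disable_eager_execution()

labels = tf.compat.v1.placeholder(tf.int64, [None])
predictions = tf.compat.v1.placeholder(tf.int64, [None])
# Every metric in this module returns a (value, update_op) pair.
acc, update_op = tf.compat.v1.metrics.accuracy(labels, predictions)

with tf.compat.v1.Session() as sess:
    # Metric accumulators live in the LOCAL_VARIABLES collection.
    sess.run(tf.compat.v1.local_variables_initializer())
    sess.run(update_op, {labels: [1, 0, 1], predictions: [1, 1, 1]})
    sess.run(update_op, {labels: [0, 0], predictions: [0, 1]})
    final_acc = sess.run(acc)  # 3 correct out of 5 examples

print(final_acc)
```

Running update_op folds each batch into the accumulators; evaluating the value tensor reads the metric without changing state.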