tf.keras.metrics.experimental.PyMetric

Metric which runs in Python, compiled outside of the TensorFlow graph.

Inherits From: Metric, Layer, Module

Args
name (Optional) string name of the PyMetric instance.
dtype (Optional) data type of the PyMetric result.
**kwargs Additional layer keyword arguments.

Usage of PyMetric is generally identical to keras.metrics.Metric. It can be used in isolation, or in tandem with the compile() API. For more information about the usage of PyMetric, see keras.metrics.Metric.
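
For illustration, here is a minimal sketch of both usage patterns. The EagerMeanAbsoluteError class is hypothetical and written only for this example; it assumes a TensorFlow version in which tf.keras.metrics.experimental.PyMetric is available.

import numpy as np
import tensorflow as tf

class EagerMeanAbsoluteError(tf.keras.metrics.experimental.PyMetric):
  """Hypothetical PyMetric accumulating mean absolute error in plain Python."""

  def update_state(self, y_true, y_pred, sample_weight=None):
    # Inputs arrive as eager tensors; convert them to NumPy on the host.
    errors = np.abs(y_true.numpy() - y_pred.numpy())
    self.total += float(errors.sum())
    self.count += int(errors.size)

  def reset_state(self):
    # Also invoked at construction time to create the Python state.
    self.total = 0.0
    self.count = 0

  def result(self):
    return self.total / max(self.count, 1)

# Standalone usage:
m = EagerMeanAbsoluteError()
m.update_state(tf.constant([1.0, 2.0]), tf.constant([1.5, 2.5]))
print(m.result())  # 0.5

# Usage with the compile() API:
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer='sgd', loss='mse',
              metrics=[EagerMeanAbsoluteError()])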

Unlike regular metrics, PyMetric instances are outside-compiled with respect to the TensorFlow graph during training or evaluation. They have access to the same inputs as a standard in-graph metric, but they run in a Python interpreter on the host CPU. Any data stored in a PyMetric is kept in the host CPU's main memory, and any TensorFlow ops used in a PyMetric are run eagerly on the host CPU.

As a result, PyMetric instances are generally not as performant as in-graph metrics, and should only be used in cases where computing the metric inside of the TensorFlow graph is either impossible or prohibitively expensive.

Methods to be implemented by subclasses:

  • update_state(): Handles updates to internal state variables.
  • result(): Computes and returns a scalar value or a dict of scalar values for the metric from the state variables.
  • reset_state(): Resets all of the metric state variables to their initial values.

This subclass implementation is similar to that of keras.metrics.Metric, with two notable differences:

  • Inputs to update_state() in a PyMetric are eager tensors, and both update_state() and result() run outside of the TensorFlow graph, executing any TensorFlow ops eagerly.
  • reset_state() is also called at initialization time to initialize the Python state of the metric.

Example subclass implementation using sklearn's Jaccard Score:

from sklearn.metrics import jaccard_score
import tensorflow as tf

class JaccardScore(tf.keras.metrics.experimental.PyMetric):

  def __init__(self, name='jaccard_score', **kwargs):
    super().__init__(name=name, **kwargs)

  def update_state(self, y_true, y_pred, sample_weight=None):
    # Inputs are eager tensors, so they can be handed straight to sklearn,
    # which expects the ground truth first: jaccard_score(y_true, y_pred, ...).
    self.jaccard_sum += jaccard_score(y_true, y_pred, average="macro")
    self.count += 1

  def reset_state(self):
    # Called at construction time as well as between epochs, so it also
    # creates the Python state of the metric.
    self.jaccard_sum = 0.
    self.count = 0.

  def result(self):
    return self.jaccard_sum / self.count
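
A quick standalone check of the class above could look like the following. The label values are made up; with average="macro", sklearn's jaccard_score expects class labels (for example integer class ids), not probabilities.

m = JaccardScore()
m.update_state(tf.constant([0, 1, 2, 2]), tf.constant([0, 2, 2, 2]))
print(m.result())  # macro-averaged Jaccard score over classes 0, 1 and 2
m.reset_state()    # clears jaccard_sum and count for the next evaluation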

Methods

merge_state

Merges the state from one or more metrics.

PyMetric instances that intend to support merging state must override this method, as the default implementation in keras.metrics.Metric does not apply to PyMetric.
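
Building on the JaccardScore example above, a sketch of such an override might look as follows; it assumes merge_state receives an iterable of other metric instances of the same type, as in keras.metrics.Metric.merge_state. MergeableJaccardScore is a hypothetical name used only for illustration.

class MergeableJaccardScore(JaccardScore):

  def merge_state(self, metrics):
    # Fold the Python state of the other metric instances into this one.
    for metric in metrics:
      self.jaccard_sum += metric.jaccard_sum
      self.count += metric.count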

reset_state

Resets all of the metric state variables.

This function is called between epochs when a metric is evaluated during training. It's also called when the metric is initialized.

result

Computes and returns the scalar metric value.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.

Returns
A Python scalar.

update_state

Accumulates statistics for the metric.

This function runs outside of the TensorFlow graph on the host CPU. This means:

  • Inputs are eager tensors.
  • Any TensorFlow ops run in this method are run eagerly.
  • Any Tensors created are allocated in the host CPU's main memory.

Args
y_true Target output.
y_pred Predicted output.
sample_weight (Optional) weights for the individual samples in y_true and y_pred.
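
As an illustration of these points, an update_state() implementation can treat its inputs as ordinary eager tensors, converting them to NumPy and applying sample_weight by hand. The WeightedErrorCount class below is hypothetical and only demonstrates the calling convention.

import numpy as np
import tensorflow as tf

class WeightedErrorCount(tf.keras.metrics.experimental.PyMetric):
  """Hypothetical PyMetric counting (optionally weighted) misclassifications."""

  def update_state(self, y_true, y_pred, sample_weight=None):
    # Inputs are eager tensors, so .numpy() is available immediately.
    errors = (y_true.numpy() != y_pred.numpy()).astype(np.float64)
    if sample_weight is not None:
      errors *= sample_weight.numpy()
    self.errors += float(errors.sum())

  def reset_state(self):
    self.errors = 0.0

  def result(self):
    return self.errors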