# tfa.metrics.MultiLabelConfusionMatrix

Computes the multi-label confusion matrix.

A class-wise confusion matrix is computed for the evaluation of classification.

If multi-class input is provided, it will be treated as multi-label data.

Consider a classification problem with two classes (i.e. `num_classes=2`).

The resultant matrix `M` will have shape `(num_classes, 2, 2)`.

Every class `i` has a dedicated 2x2 matrix that contains:

• true negatives for class `i` in `M(0, 0)`
• false positives for class `i` in `M(0, 1)`
• false negatives for class `i` in `M(1, 0)`
• true positives for class `i` in `M(1, 1)`
```python
import tensorflow as tf
from tensorflow_addons.metrics import MultiLabelConfusionMatrix

# multilabel confusion matrix
y_true = tf.constant([[1, 0, 1], [0, 1, 0]],
                     dtype=tf.int32)
y_pred = tf.constant([[1, 0, 0], [0, 1, 1]],
                     dtype=tf.int32)
output = MultiLabelConfusionMatrix(num_classes=3)
output.update_state(y_true, y_pred)
print('Confusion matrix:', output.result().numpy())

# Confusion matrix: [[[1 0] [0 1]] [[1 0] [0 1]] [[0 1] [1 0]]]

# if multiclass input is provided
y_true = tf.constant([[1, 0, 0], [0, 1, 0]],
                     dtype=tf.int32)
y_pred = tf.constant([[1, 0, 0], [0, 0, 1]],
                     dtype=tf.int32)
output = MultiLabelConfusionMatrix(num_classes=3)
output.update_state(y_true, y_pred)
print('Confusion matrix:', output.result().numpy())

# Confusion matrix: [[[1 0] [0 1]] [[1 0] [1 0]] [[1 1] [0 0]]]
```
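The same per-class counts can be reproduced in plain NumPy. This is an illustrative sketch of the bookkeeping only, not the tfa implementation; the helper name `multilabel_confusion_matrix` is ours:

```python
import numpy as np

def multilabel_confusion_matrix(y_true, y_pred, num_classes):
    """Return an array of shape (num_classes, 2, 2) holding
    [[TN, FP], [FN, TP]] for each class, matching the layout
    described above."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    out = np.zeros((num_classes, 2, 2), dtype=np.int64)
    for i in range(num_classes):
        t, p = y_true[:, i], y_pred[:, i]
        out[i, 0, 0] = np.sum((t == 0) & (p == 0))  # true negatives
        out[i, 0, 1] = np.sum((t == 0) & (p == 1))  # false positives
        out[i, 1, 0] = np.sum((t == 1) & (p == 0))  # false negatives
        out[i, 1, 1] = np.sum((t == 1) & (p == 1))  # true positives
    return out

m = multilabel_confusion_matrix([[1, 0, 1], [0, 1, 0]],
                                [[1, 0, 0], [0, 1, 1]],
                                num_classes=3)
# m reproduces the first example's result: classes 0 and 1 are
# [[1 0] [0 1]] and class 2 is [[0 1] [1 0]]
```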

#### Attributes:

• `activity_regularizer`: Optional regularizer function for the output of this layer.
• `dtype`
• `dynamic`
• `input`: Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

• `input_mask`: Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

• `input_shape`: Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

• `input_spec`

• `losses`: Losses which are associated with this `Layer`.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing `losses` under a `tf.GradientTape` will propagate gradients back to the corresponding variables.

• `metrics`

• `name`: Returns the name of this module as passed or determined in the ctor.

NOTE: This is not the same as the `self.name_scope.name` which includes parent module names.

• `name_scope`: Returns a `tf.name_scope` instance for this class.

• `non_trainable_variables`

• `non_trainable_weights`

• `output`: Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

• `output_mask`: Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

• `output_shape`: Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

• `submodules`: Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

```python
a = tf.Module()
b = tf.Module()
c = tf.Module()
a.b = b
b.c = c
assert list(a.submodules) == [b, c]
assert list(b.submodules) == [c]
assert list(c.submodules) == []
```
• `trainable`
• `trainable_variables`: Sequence of trainable variables owned by this module and its submodules.

• `trainable_weights`

• `updates`

• `variables`: Returns the list of all layer variables/weights.

Alias of `self.weights`.

• `weights`: Returns the list of all layer variables/weights.

## Methods

### `__call__`

Accumulates statistics and then computes metric result value.

#### Args:

• `*args`, `**kwargs`: A mini-batch of inputs to the Metric, passed on to `update_state()`.

#### Returns:

The metric value tensor.

### `build`

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of `Layer` or `Model` can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of `Layer` subclasses.

#### Arguments:

• `input_shape`: Instance of `TensorShape`, or list of instances of `TensorShape` if the layer expects a list of inputs (one instance per input).

### `compute_mask`

#### Arguments:

• `inputs`: Tensor or list of tensors.
• `mask`: Tensor or list of tensors.

#### Returns:

None or a tensor (or list of tensors, one per output tensor of the layer).

### `compute_output_shape`

Computes the output shape of the layer.

If the layer has not been built, this method will call `build` on the layer. This assumes that the layer will later be used with inputs that match the input shape provided here.

#### Arguments:

• `input_shape`: Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

#### Returns:

An output shape tuple.

### `count_params`

Count the total number of scalars composing the weights.

#### Returns:

An integer count.

#### Raises:

• `ValueError`: if the layer isn't yet built (in which case its weights aren't yet defined).

### `from_config`

Creates a layer from its config.

This method is the reverse of `get_config`, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by `set_weights`).

#### Arguments:

• `config`: A Python dictionary, typically the output of get_config.

#### Returns:

A layer instance.

### `get_config`

Returns the serializable config of the metric.

### `get_input_at`

Retrieves the input tensor(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A tensor (or list of tensors if the layer has multiple inputs).

#### Raises:

• `RuntimeError`: If called in Eager mode.

### `get_input_mask_at`

Retrieves the input mask tensor(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

### `get_input_shape_at`

Retrieves the input shape(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

#### Raises:

• `RuntimeError`: If called in Eager mode.

### `get_losses_for`

Retrieves losses relevant to a specific set of inputs.

#### Arguments:

• `inputs`: Input tensor or list/tuple of input tensors.

#### Returns:

List of loss tensors of the layer that depend on `inputs`.

### `get_output_at`

Retrieves the output tensor(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A tensor (or list of tensors if the layer has multiple outputs).

#### Raises:

• `RuntimeError`: If called in Eager mode.

### `get_output_mask_at`

Retrieves the output mask tensor(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

### `get_output_shape_at`

Retrieves the output shape(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

#### Raises:

• `RuntimeError`: If called in Eager mode.

### `get_updates_for`

Retrieves updates relevant to a specific set of inputs.

#### Arguments:

• `inputs`: Input tensor or list/tuple of input tensors.

#### Returns:

List of update ops of the layer that depend on `inputs`.

### `get_weights`

Returns the current weights of the layer.

#### Returns:

Weights values as a list of numpy arrays.

### `reset_states`

Resets all of the metric state variables.

This function is called between epochs/steps, when a metric is evaluated during training.

### `result`

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.
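The `update_state` / `result` / `reset_states` contract described above can be sketched with a minimal pure-NumPy stand-in (the class name `TinyMultiLabelCM` is hypothetical; this is not the tfa implementation): state accumulates across batches, `result()` only reads that state, and `reset_states()` zeroes it.

```python
import numpy as np

class TinyMultiLabelCM:
    """Illustrative stand-in for a stateful multi-label
    confusion-matrix metric (not the tfa implementation)."""

    def __init__(self, num_classes):
        self.num_classes = num_classes
        self.reset_states()

    def update_state(self, y_true, y_pred):
        # Accumulate per-class [[TN, FP], [FN, TP]] counts across batches.
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        for i in range(self.num_classes):
            t, p = y_true[:, i], y_pred[:, i]
            self.m[i] += [
                [np.sum((t == 0) & (p == 0)), np.sum((t == 0) & (p == 1))],
                [np.sum((t == 1) & (p == 0)), np.sum((t == 1) & (p == 1))],
            ]

    def result(self):
        # Idempotent: only reads the accumulated state.
        return self.m.copy()

    def reset_states(self):
        # Called between epochs/steps to zero the state variables.
        self.m = np.zeros((self.num_classes, 2, 2), dtype=np.int64)
```

Under this contract, updating with two small batches yields the same result as one combined batch, and `result()` can be read repeatedly without changing the state.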

### `set_weights`

Sets the weights of the layer, from Numpy arrays.

#### Arguments:

• `weights`: a list of Numpy arrays. The number of arrays and their shapes must match the weights of the layer (i.e. it should match the output of `get_weights`).

#### Raises:

• `ValueError`: If the provided weights list does not match the layer's specifications.

### `update_state`

Accumulates statistics for the metric.

Please use `tf.config.experimental_run_functions_eagerly(True)` to execute this function eagerly for debugging or profiling.

#### Args:

• `*args`, `**kwargs`: A mini-batch of inputs to the Metric.

### `with_name_scope`

Decorator to automatically enter the module name scope.

```python
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 64]))
    return tf.matmul(x, self.w)
```

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

```python
mod = MyModule()
mod(tf.ones([8, 32]))
# ==> <tf.Tensor: ...>
mod.w
# ==> <tf.Variable ...'my_module/w:0'>
```

#### Args:

• `method`: The method to wrap.

#### Returns:

The original method wrapped such that it enters the module's name scope.