# tfa.metrics.MultiLabelConfusionMatrix

## Class `MultiLabelConfusionMatrix`

Computes the multi-label confusion matrix.

Class-wise confusion matrix is computed for the evaluation of classification.

If multi-class input is provided, it will be treated as multilabel data.

Consider a classification problem with two classes (i.e. `num_classes=2`).

The resulting matrix `M` has shape `(num_classes, 2, 2)`.

Every class `i` has a dedicated 2x2 matrix that contains:

• true negatives for class i in M(0,0)
• false positives for class i in M(0,1)
• false negatives for class i in M(1,0)
• true positives for class i in M(1,1)
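
The per-class counting above can be sketched in plain Python (a hypothetical `per_class_confusion` helper for illustration, not the library's implementation):

```python
def per_class_confusion(y_true, y_pred, num_classes):
    """Build a num_classes x 2 x 2 nested list: [[TN, FP], [FN, TP]] per class."""
    m = [[[0, 0], [0, 0]] for _ in range(num_classes)]
    for t_row, p_row in zip(y_true, y_pred):
        for i in range(num_classes):
            # row index = ground truth, column index = prediction
            m[i][t_row[i]][p_row[i]] += 1
    return m

y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 1]]
print(per_class_confusion(y_true, y_pred, num_classes=3))
# [[[1, 0], [0, 1]], [[1, 0], [0, 1]], [[0, 1], [1, 0]]]
```

These are the same inputs as the first example below, and the counts match the metric's output.
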
``````
# multilabel confusion matrix
import tensorflow as tf
import tensorflow_addons as tfa

y_true = tf.constant([[1, 0, 1], [0, 1, 0]], dtype=tf.int32)
y_pred = tf.constant([[1, 0, 0], [0, 1, 1]], dtype=tf.int32)
output = tfa.metrics.MultiLabelConfusionMatrix(num_classes=3)
output.update_state(y_true, y_pred)
print('Confusion matrix:', output.result().numpy())

# Confusion matrix: [[[1 0] [0 1]] [[1 0] [0 1]] [[0 1] [1 0]]]

# if multi-class input is provided
y_true = tf.constant([[1, 0, 0], [0, 1, 0]], dtype=tf.int32)
y_pred = tf.constant([[1, 0, 0], [0, 0, 1]], dtype=tf.int32)
output = tfa.metrics.MultiLabelConfusionMatrix(num_classes=3)
output.update_state(y_true, y_pred)
print('Confusion matrix:', output.result().numpy())

# Confusion matrix: [[[1 0] [0 1]] [[1 0] [1 0]] [[1 1] [0 0]]]
``````

## `__init__`

``````
__init__(
    num_classes,
    name='Multilabel_confusion_matrix',
    dtype=tf.int32
)
``````

## `__new__`

``````
__new__(
    cls,
    *args,
    **kwargs
)
``````

Create and return a new object. See help(type) for accurate signature.

## Properties

### `activity_regularizer`

Optional regularizer function for the output of this layer.

### `input`

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

#### Returns:

Input tensor or list of input tensors.

#### Raises:

• `RuntimeError`: If called in Eager mode.
• `AttributeError`: If no inbound nodes are found.

### `input_mask`

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

#### Raises:

• `AttributeError`: if the layer is connected to more than one incoming layer.

### `input_shape`

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

#### Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

#### Raises:

• `AttributeError`: if the layer has no defined input_shape.
• `RuntimeError`: if called in Eager mode.

### `losses`

Losses which are associated with this `Layer`.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing `losses` under a `tf.GradientTape` will propagate gradients back to the corresponding variables.

#### Returns:

A list of tensors.

### `name`

Returns the name of this module as passed or determined in the ctor.

NOTE: This is not the same as the `self.name_scope.name` which includes parent module names.

### `name_scope`

Returns a `tf.name_scope` instance for this class.

### `output`

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

#### Returns:

Output tensor or list of output tensors.

#### Raises:

• `AttributeError`: if the layer is connected to more than one incoming layer.
• `RuntimeError`: if called in Eager mode.

### `output_mask`

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

#### Raises:

• `AttributeError`: if the layer is connected to more than one incoming layer.

### `output_shape`

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

#### Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

#### Raises:

• `AttributeError`: if the layer has no defined output shape.
• `RuntimeError`: if called in Eager mode.

### `submodules`

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

``````
a = tf.Module()
b = tf.Module()
c = tf.Module()
a.b = b
b.c = c
assert list(a.submodules) == [b, c]
assert list(b.submodules) == [c]
assert list(c.submodules) == []
``````

#### Returns:

A sequence of all submodules.

### `trainable_variables`

Sequence of variables owned by this module and its submodules.

#### Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

### `variables`

Returns the list of all layer variables/weights.

Alias of `self.weights`.

#### Returns:

A list of variables.

### `weights`

Returns the list of all layer variables/weights.

#### Returns:

A list of variables.

## Methods

### `__call__`

``````
__call__(
    *args,
    **kwargs
)
``````

Accumulates statistics and then computes metric result value.

#### Args:

• `*args`, `**kwargs`: A mini-batch of inputs to the Metric, passed on to `update_state()`.

#### Returns:

The metric value tensor.

### `build`

``````
build(input_shape)
``````

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of `Layer` or `Model` can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of `Layer` subclasses.

#### Arguments:

• `input_shape`: Instance of `TensorShape`, or list of instances of `TensorShape` if the layer expects a list of inputs (one instance per input).

### `compute_mask`

``````
compute_mask(
    inputs,
    mask=None
)
``````

Computes an output mask tensor.

#### Arguments:

• `inputs`: Tensor or list of tensors.
• `mask`: Tensor or list of tensors.

#### Returns:

None or a tensor (or list of tensors, one per output tensor of the layer).

### `compute_output_shape`

``````
compute_output_shape(input_shape)
``````

Computes the output shape of the layer.

If the layer has not been built, this method will call `build` on the layer. This assumes that the layer will later be used with inputs that match the input shape provided here.

#### Arguments:

• `input_shape`: Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

#### Returns:

An output shape tuple.

### `count_params`

``````
count_params()
``````

Count the total number of scalars composing the weights.

#### Returns:

An integer count.

#### Raises:

• `ValueError`: if the layer isn't yet built (in which case its weights aren't yet defined).

### `from_config`

``````
from_config(
    cls,
    config
)
``````

Creates a layer from its config.

This method is the reverse of `get_config`, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by `set_weights`).

#### Arguments:

• `config`: A Python dictionary, typically the output of get_config.

#### Returns:

A layer instance.
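
The `get_config`/`from_config` round trip can be illustrated with a plain-Python sketch (a hypothetical `SketchMetric` class, not the Keras base class):

```python
class SketchMetric:
    """Minimal stand-in showing the config round-trip pattern."""
    def __init__(self, num_classes, name='Multilabel_confusion_matrix'):
        self.num_classes = num_classes
        self.name = name

    def get_config(self):
        # Everything needed to rebuild an identically configured instance.
        return {'num_classes': self.num_classes, 'name': self.name}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

m = SketchMetric(num_classes=3)
clone = SketchMetric.from_config(m.get_config())
print(clone.num_classes)  # 3
```

Note that the clone shares only the configuration, not accumulated state or weights; weights are restored separately via `set_weights`.
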

### `get_config`

``````
get_config()
``````

Returns the serializable config of the metric.

### `get_input_at`

``````
get_input_at(node_index)
``````

Retrieves the input tensor(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A tensor (or list of tensors if the layer has multiple inputs).

#### Raises:

• `RuntimeError`: If called in Eager mode.

### `get_input_mask_at`

``````
get_input_mask_at(node_index)
``````

Retrieves the input mask tensor(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

### `get_input_shape_at`

``````
get_input_shape_at(node_index)
``````

Retrieves the input shape(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

#### Raises:

• `RuntimeError`: If called in Eager mode.

### `get_losses_for`

``````
get_losses_for(inputs)
``````

Retrieves losses relevant to a specific set of inputs.

#### Arguments:

• `inputs`: Input tensor or list/tuple of input tensors.

#### Returns:

List of loss tensors of the layer that depend on `inputs`.

### `get_output_at`

``````
get_output_at(node_index)
``````

Retrieves the output tensor(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A tensor (or list of tensors if the layer has multiple outputs).

#### Raises:

• `RuntimeError`: If called in Eager mode.

### `get_output_mask_at`

``````
get_output_mask_at(node_index)
``````

Retrieves the output mask tensor(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

### `get_output_shape_at`

``````
get_output_shape_at(node_index)
``````

Retrieves the output shape(s) of a layer at a given node.

#### Arguments:

• `node_index`: Integer, index of the node from which to retrieve the attribute. E.g. `node_index=0` will correspond to the first time the layer was called.

#### Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

#### Raises:

• `RuntimeError`: If called in Eager mode.

### `get_updates_for`

``````
get_updates_for(inputs)
``````

Retrieves updates relevant to a specific set of inputs.

#### Arguments:

• `inputs`: Input tensor or list/tuple of input tensors.

#### Returns:

List of update ops of the layer that depend on `inputs`.

### `get_weights`

``````
get_weights()
``````

Returns the current weights of the layer.

#### Returns:

Weights values as a list of numpy arrays.

### `reset_states`

``````
reset_states()
``````

Resets all of the metric state variables.

This function is called between epochs/steps, when a metric is evaluated during training.
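
The reset pattern can be sketched with a hypothetical stateful counter (for illustration only; the real metric resets its confusion-matrix variables):

```python
class SketchCounter:
    """Hypothetical metric-like object: counts label matches across batches."""
    def __init__(self):
        self.matches = 0

    def update_state(self, y_true, y_pred):
        self.matches += sum(int(t == p) for t, p in zip(y_true, y_pred))

    def result(self):
        return self.matches

    def reset_states(self):
        # Clear accumulated state, e.g. between epochs.
        self.matches = 0

c = SketchCounter()
c.update_state([1, 0, 1], [1, 1, 1])
print(c.result())  # 2
c.reset_states()
print(c.result())  # 0
```
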

### `result`

``````
result()
``````

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.

### `set_weights`

``````
set_weights(weights)
``````

Sets the weights of the layer, from Numpy arrays.

#### Arguments:

• `weights`: a list of Numpy arrays. The number of arrays and their shapes must match the number and shapes of the layer's weights (i.e. it should match the output of `get_weights`).

#### Raises:

• `ValueError`: If the provided weights list does not match the layer's specifications.

### `update_state`

``````
update_state(
    y_true,
    y_pred
)
``````

Accumulates statistics for the metric.

Please use `tf.config.experimental_run_functions_eagerly(True)` to execute this function eagerly for debugging or profiling.

#### Args:

• `y_true`: The ground truth values.
• `y_pred`: The predicted values.
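
How statistics accumulate across repeated `update_state` calls can be sketched in plain Python (a hypothetical `update` helper, not the TensorFlow implementation):

```python
def update(state, y_true, y_pred):
    """Fold one mini-batch of 0/1 labels into per-class [[TN, FP], [FN, TP]] counts."""
    for t_row, p_row in zip(y_true, y_pred):
        for i, (t, p) in enumerate(zip(t_row, p_row)):
            state[i][t][p] += 1
    return state

state = [[[0, 0], [0, 0]] for _ in range(2)]  # num_classes = 2
update(state, [[1, 0]], [[1, 1]])  # first mini-batch
update(state, [[0, 1]], [[0, 1]])  # second mini-batch adds to the same state
print(state)
# [[[1, 0], [0, 1]], [[0, 1], [0, 1]]]
```

Calling `result()` after several such updates reports counts over all batches seen since the last `reset_states()`.
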

### `with_name_scope`

``````
with_name_scope(
    cls,
    method
)
``````

Decorator to automatically enter the module name scope.

``````
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 64]))
    return tf.matmul(x, self.w)
``````

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

``````
mod = MyModule()
mod(tf.ones([8, 32]))
# ==> <tf.Tensor: ...>
mod.w
# ==> <tf.Variable ...'my_module/w:0'>
``````

#### Args:

• `method`: The method to wrap.

#### Returns:

The original method wrapped such that it enters the module's name scope.