tfp.layers.DenseLocalReparameterization

Class DenseLocalReparameterization

Densely-connected layer class with local reparameterization estimator.

This layer implements the Bayesian variational inference analogue to a dense layer by assuming the kernel and/or the bias are drawn from distributions. By default, the layer implements a stochastic forward pass via sampling from the kernel and bias posteriors,

kernel, bias ~ posterior
outputs = activation(matmul(inputs, kernel) + bias)

It uses the local reparameterization estimator [(Kingma et al., 2015)][1], which draws Monte Carlo samples from the distribution over the hidden units induced by the kernel and bias posteriors, rather than sampling the weights themselves; this typically yields lower-variance gradient estimates. The default kernel_posterior_fn is a normal distribution which factorizes across all elements of the weight matrix and bias vector. Unlike the multiplicative parameterization of [1], this distribution has trainable location and scale parameters, which is known as an additive noise parameterization [(Molchanov et al., 2017)][2].
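
As a minimal sketch of the idea behind the estimator (plain TensorFlow, not the layer's internal implementation; the shapes and variable names are illustrative): for a fully factorized normal posterior over the kernel, the induced distribution on each pre-activation is itself normal, so noise can be sampled per example in pre-activation space.

import tensorflow as tf

# Illustrative variational parameters for a [20, 10] kernel posterior,
# q(W) = Normal(loc=w_loc, scale=softplus(w_rho)), fully factorized.
w_loc = tf.Variable(tf.random.normal([20, 10]))
w_rho = tf.Variable(tf.fill([20, 10], -3.0))
w_scale = tf.nn.softplus(w_rho)

x = tf.random.normal([32, 20])  # a batch of inputs

# The induced distribution on pre-activations h = matmul(x, W) is
# Normal(matmul(x, w_loc), sqrt(matmul(x**2, w_scale**2))), elementwise.
h_mean = tf.matmul(x, w_loc)
h_std = tf.sqrt(tf.matmul(tf.square(x), tf.square(w_scale)))

# Sample the hidden units directly: an independent noise draw per example,
# which typically has lower variance than sampling one W for the whole batch.
h = h_mean + h_std * tf.random.normal(tf.shape(h_mean))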

The arguments permit separate specification of the surrogate posterior (q(W|x)), prior (p(W)), and divergence for both the kernel and bias distributions.

Upon being built, this layer adds losses (accessible via the losses property) representing the divergences of kernel and/or bias surrogate posteriors and their respective priors. When doing minibatch stochastic optimization, make sure to scale this loss such that it is applied just once per epoch (e.g. if kl is the sum of losses for each element of the batch, you should pass kl / num_examples_per_epoch to your optimizer).
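
For example, continuing the notation of that parenthetical (num_examples_per_epoch is assumed from context and is not part of the API):

kl = sum(model.losses)  # summed KL over kernel/bias posteriors
loss = neg_log_likelihood + kl / num_examples_per_epoch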

You can access the kernel and/or bias posterior and prior distributions after the layer is built via the kernel_posterior, kernel_prior, bias_posterior and bias_prior properties.
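
For example (a sketch assuming import tensorflow as tf and a 20-dimensional input):

layer = tfp.layers.DenseLocalReparameterization(10)
_ = layer(tf.random.normal([1, 20]))  # first call builds the layer
print(layer.kernel_posterior.mean())  # posterior means, shape [20, 10]
print(layer.kernel_prior)             # standard normal prior by default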

Examples

We illustrate a Bayesian neural network with variational inference, assuming a dataset of features and labels.

import tensorflow as tf
import tensorflow_probability as tfp

model = tf.keras.Sequential([
    tfp.layers.DenseLocalReparameterization(512, activation=tf.nn.relu),
    tfp.layers.DenseLocalReparameterization(10),
])

logits = model(features)
neg_log_likelihood = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
kl = sum(model.losses)
loss = neg_log_likelihood + kl
train_op = tf.train.AdamOptimizer().minimize(loss)

Minimizing this loss performs variational inference: it uses local reparameterization gradients to minimize the negative Evidence Lower Bound (ELBO), which equals the Kullback-Leibler divergence from the true posterior up to a constant. The loss is the sum of two terms: the expected negative log-likelihood, which we approximate via Monte Carlo, and the KL divergence, which is added via the regularizer terms that are arguments to the layer.
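
In symbols, writing q(w) for the surrogate posterior over the kernel and bias and p(w) for the prior, the minimized loss is

loss = E_q[ -log p(labels | features, w) ] + KL(q(w) || p(w))

where the expectation is approximated with the Monte Carlo sample drawn during the forward pass and the KL term is the sum of model.losses.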

References

[1]: Diederik Kingma, Tim Salimans, and Max Welling. Variational Dropout and the Local Reparameterization Trick. In Neural Information Processing Systems, 2015. https://arxiv.org/abs/1506.02557

[2]: Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational Dropout Sparsifies Deep Neural Networks. In International Conference on Machine Learning, 2017. https://arxiv.org/abs/1701.05369

__init__

__init__(
    units,
    activation=None,
    activity_regularizer=None,
    trainable=True,
    kernel_posterior_fn=tfp_layers_util.default_mean_field_normal_fn(),
    kernel_posterior_tensor_fn=(lambda d: d.sample()),
    kernel_prior_fn=tfp.layers.default_multivariate_normal_fn,
    kernel_divergence_fn=(lambda q, p, ignore: tfd.kl_divergence(q, p)),
    bias_posterior_fn=tfp_layers_util.default_mean_field_normal_fn(is_singular=True),
    bias_posterior_tensor_fn=(lambda d: d.sample()),
    bias_prior_fn=None,
    bias_divergence_fn=(lambda q, p, ignore: tfd.kl_divergence(q, p)),
    **kwargs
)

Construct layer.

Args:

  • units: Integer or Long, dimensionality of the output space.
  • activation: Activation function (callable). Set it to None to maintain a linear activation.
  • activity_regularizer: Regularizer function for the output.
  • trainable: Boolean, if True the layer's variables are also added to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  • kernel_posterior_fn: Python callable which creates tfd.Distribution instance representing the surrogate posterior of the kernel parameter. Default value: default_mean_field_normal_fn().
  • kernel_posterior_tensor_fn: Python callable which takes a tfd.Distribution instance and returns a representative value. Default value: lambda d: d.sample().
  • kernel_prior_fn: Python callable which creates a tfd.Distribution instance. See the default_mean_field_normal_fn docstring for the required parameter signature. Default value: tfd.Normal(loc=0., scale=1.).
  • kernel_divergence_fn: Python callable which takes the surrogate posterior distribution, the prior distribution, and random variate sample(s) from the surrogate posterior, and computes or approximates the KL divergence. The distributions are tfd.Distribution-like instances and the sample is a Tensor. A usage sketch that overrides this argument follows this list.
  • bias_posterior_fn: Python callable which creates tfd.Distribution instance representing the surrogate posterior of the bias parameter. Default value: default_mean_field_normal_fn(is_singular=True) (which creates an instance of tfd.Deterministic).
  • bias_posterior_tensor_fn: Python callable which takes a tfd.Distribution instance and returns a representative value. Default value: lambda d: d.sample().
  • bias_prior_fn: Python callable which creates a tfd.Distribution instance. See the default_mean_field_normal_fn docstring for the required parameter signature. Default value: None (no prior, no variational inference).
  • bias_divergence_fn: Python callable which takes the surrogate posterior distribution, the prior distribution, and random variate sample(s) from the surrogate posterior, and computes or approximates the KL divergence. The distributions are tfd.Distribution-like instances and the sample is a Tensor.
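
As a usage sketch (one possible configuration, not the only valid one; NUM_TRAIN_EXAMPLES is an assumed dataset size and not part of the API), the divergence callable can be overridden to amortize the KL penalty over the dataset at the layer level:

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

NUM_TRAIN_EXAMPLES = 60000  # assumed dataset size
layer = tfp.layers.DenseLocalReparameterization(
    64,
    activation=tf.nn.relu,
    kernel_divergence_fn=(
        lambda q, p, _: tfd.kl_divergence(q, p) / NUM_TRAIN_EXAMPLES))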

Properties

activity_regularizer

Optional regularizer function for the output of this layer.

dtype

dynamic

input

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

  • AttributeError: if the layer is connected to more than one incoming layer.
  • RuntimeError: If called in Eager mode.
  • AttributeError: If no inbound nodes are found.

input_mask

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

  • AttributeError: if the layer is connected to more than one incoming layer.

input_shape

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

  • AttributeError: if the layer has no defined input_shape.
  • RuntimeError: if called in Eager mode.

losses

Losses which are associated with this Layer.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Returns:

A list of tensors.
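
A sketch of eager-mode use (model, features, labels, and num_examples are assumed from context; labels are one-hot):

with tf.GradientTape() as tape:
  logits = model(features)
  nll = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
      labels=labels, logits=logits))
  kl = sum(model.losses) / num_examples  # created on access; gradients flow
  loss = nll + kl
grads = tape.gradient(loss, model.trainable_variables)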

name

non_trainable_variables

non_trainable_weights

output

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:

  • AttributeError: if the layer is connected to more than one incoming layer.
  • RuntimeError: if called in Eager mode.

output_mask

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

  • AttributeError: if the layer is connected to more than one incoming layer.

output_shape

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

  • AttributeError: if the layer has no defined output shape.
  • RuntimeError: if called in Eager mode.

trainable_variables

trainable_weights

updates

variables

Returns the list of all layer variables/weights.

Alias of self.weights.

Returns:

A list of variables.

weights

Returns the list of all layer variables/weights.

Returns:

A list of variables.

Methods

__call__

__call__(
    inputs,
    *args,
    **kwargs
)

Wraps call, applying pre- and post-processing steps.

Arguments:

  • inputs: input tensor(s).
  • *args: additional positional arguments to be passed to self.call.
  • **kwargs: additional keyword arguments to be passed to self.call.

Returns:

Output tensor(s).

Raises:

  • ValueError: if the layer's call method returns None (an invalid value).

__setattr__

__setattr__(
    name,
    value
)

apply

apply(
    inputs,
    *args,
    **kwargs
)

Apply the layer on an input.

This is an alias of self.__call__.

Arguments:

  • inputs: Input tensor(s).
  • *args: additional positional arguments to be passed to self.call.
  • **kwargs: additional keyword arguments to be passed to self.call.

Returns:

Output tensor(s).

build

build(input_shape)

compute_mask

compute_mask(
    inputs,
    mask=None
)

Computes an output mask tensor.

Arguments:

  • inputs: Tensor or list of tensors.
  • mask: Tensor or list of tensors.

Returns:

None or a tensor (or list of tensors, one per output tensor of the layer).

compute_output_shape

compute_output_shape(input_shape)

Computes the output shape of the layer.

Args:

  • input_shape: Shape tuple (tuple of integers) or list of shape tuples (one per input tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

  • output_shape: A tuple representing the output shape.

Raises:

  • ValueError: If the innermost dimension of input_shape is not defined.
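
For example (a sketch; the printed form of the shape may vary by TensorFlow version):

layer = tfp.layers.DenseLocalReparameterization(10)
print(layer.compute_output_shape((None, 20)))  # (None, 10)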

count_params

count_params()

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:

  • ValueError: if the layer isn't yet built (in which case its weights aren't yet defined).

from_config

from_config(
    cls,
    config
)

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary.

Args:

  • config: A Python dictionary, typically the output of get_config.

Returns:

  • layer: A layer instance.

get_config

get_config()

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

Returns:

  • config: A Python dictionary of class keyword arguments and their serialized values.
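
A sketch of the round trip (note that callable constructor arguments such as the posterior, prior, and divergence fns may not survive serialization to formats like JSON; this assumes the config is consumed in-process):

config = layer.get_config()
clone = tfp.layers.DenseLocalReparameterization.from_config(config)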

get_input_at

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

Arguments:

  • node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

  • RuntimeError: If called in Eager mode.

get_input_mask_at

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

Arguments:

  • node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

Arguments:

  • node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

  • RuntimeError: If called in Eager mode.

get_losses_for

get_losses_for(inputs)

Retrieves losses relevant to a specific set of inputs.

Arguments:

  • inputs: Input tensor or list/tuple of input tensors.

Returns:

List of loss tensors of the layer that depend on inputs.

Raises:

  • RuntimeError: If called in Eager mode.

get_output_at

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

Arguments:

  • node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

  • RuntimeError: If called in Eager mode.

get_output_mask_at

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

Arguments:

  • node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

Arguments:

  • node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

  • RuntimeError: If called in Eager mode.

get_updates_for

get_updates_for(inputs)

Retrieves updates relevant to a specific set of inputs.

Arguments:

  • inputs: Input tensor or list/tuple of input tensors.

Returns:

List of update ops of the layer that depend on inputs.

Raises:

  • RuntimeError: If called in Eager mode.

get_weights

get_weights()

Returns the current weights of the layer.

Returns:

Weights values as a list of numpy arrays.

set_weights

set_weights(weights)

Sets the weights of the layer, from Numpy arrays.

Arguments:

  • weights: a list of Numpy arrays. The number of arrays and their shapes must match the number and shapes of the layer's weights (i.e. it should match the output of get_weights).

Raises:

  • ValueError: If the provided weights list does not match the layer's specifications.
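
For this layer, the weights are the variational parameters (e.g. the posterior's location and untransformed scale), so a round trip looks like (a sketch; layer is assumed built):

values = layer.get_weights()  # list of numpy arrays of variational parameters
layer.set_weights(values)     # shapes must match the output of get_weights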