tfp.experimental.nn.AffineVariationalReparameterization

Densely-connected layer class with reparameterization estimator.

tfp.experimental.nn.AffineVariationalReparameterization(
    input_size, output_size, init_kernel_fn=None, init_bias_fn=None,
    make_posterior_fn=tfp.experimental.nn.util.make_kernel_bias_posterior_mvn_diag,
    make_prior_fn=tfp.experimental.nn.util.make_kernel_bias_prior_spike_and_slab,
    posterior_value_fn=tfp.distributions.Distribution.sample,
    unpack_weights_fn=unpack_kernel_and_bias, dtype=tf.float32, penalty_weight=None,
    posterior_penalty_fn=kl_divergence_monte_carlo, activation_fn=None, seed=None,
    name=None
)

This layer implements the Bayesian variational inference analogue to a dense layer by assuming the kernel and/or the bias are drawn from distributions. By default, the layer implements a stochastic forward pass via sampling from the kernel and bias posteriors,

kernel, bias ~ posterior
outputs = matmul(inputs, kernel) + bias

It uses the reparameterization estimator [(Kingma and Welling, 2014)][1], which forms a Monte Carlo approximation of the expectation over the kernel and bias by differentiating through samples drawn from their surrogate posteriors.
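
Concretely, for the default diagonal-normal surrogate posterior (a sketch of the idea, not the exact internal parameterization, which is determined by make_posterior_fn), each weight is written as a deterministic transform of parameter-free noise,

  W = loc + scale * eps,  eps ~ Normal(0, 1),

so the Monte Carlo estimate E_{q(W)}[f(W)] ≈ (1/N) sum_i f(loc + scale * eps_i) remains differentiable in the variational parameters loc and scale; gradients flow through the sampled weights rather than through the density q(W).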

The arguments permit separate specification of the surrogate posterior (q(W|x)), prior (p(W)), and divergence for both the kernel and bias distributions.

Upon being built, this layer adds a loss (accessible via the extra_loss property, as used in the example below) representing the divergence between the kernel and bias surrogate posteriors and their respective priors. When doing minibatch stochastic optimization, make sure to scale this loss so that it is counted just once per epoch (e.g. if kl is the total divergence penalty, pass kl / num_examples_per_epoch to your optimizer).
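
A minimal sketch of a single stochastic forward pass with a properly scaled penalty; the sizes, dummy batch, and num_examples_per_epoch below are illustrative assumptions, not part of the API:

import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfn = tfp.experimental.nn

num_examples_per_epoch = 60000   # illustrative
layer = tfn.AffineVariationalReparameterization(input_size=784, output_size=10)
x = tf.random.normal([32, 784])  # dummy batch of flattened inputs
outputs = layer(x)               # samples kernel and bias, then matmul(x, kernel) + bias
kl = layer.extra_loss / num_examples_per_epoch  # extra_loss is populated by the call above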

After the layer is constructed you can access the surrogate posterior and prior distributions over the kernel and bias via the posterior and prior properties (see Attributes below).
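
For example, assuming the layer from the sketch above:

print(layer.posterior)  # surrogate posterior over the kernel and bias
print(layer.prior)      # prior over the kernel and bias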

Examples

We illustrate a Bayesian neural network with variational inference, assuming a dataset of images and length-10 one-hot targets.

import functools
import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
import tensorflow_datasets as tfds
tfb = tfp.bijectors
tfd = tfp.distributions
tfn = tfp.experimental.nn

# 1  Prepare Dataset

[train_dataset, eval_dataset], datasets_info = tfds.load(
    name='mnist',
    split=['train', 'test'],
    with_info=True,
    as_supervised=True,
    shuffle_files=True)
def _preprocess(image, label):
  # image = image < tf.random.uniform(tf.shape(image))   # Randomly binarize.
  image = tf.cast(image, tf.float32) / 255.  # Scale to unit interval.
  lo = 0.001
  image = (1. - 2. * lo) * image + lo  # Rescale to *open* unit interval.
  label = tf.one_hot(label, 10)  # Length-10 one-hot targets, per the text above.
  label = (1. - 10. * lo) * label + lo  # Rescale onto the *open* simplex.
  return image, label
batch_size = 32
train_size = datasets_info.splits['train'].num_examples
train_dataset = tfn.util.tune_dataset(
    train_dataset,
    batch_shape=(batch_size,),
    shuffle_size=int(train_size / 7),
    preprocess_fn=_preprocess)
train_iter = iter(train_dataset)
eval_iter = iter(eval_dataset)
x, y = next(train_iter)
evidence_shape = x.shape[1:]
targets_shape = y.shape[1:]

# 2  Specify Model

BayesConv2D = functools.partial(
    tfn.ConvolutionVariationalReparameterization,
    rank=2,
    padding='same',
    filter_shape=5,
    # Use `he_uniform` because we'll use the `relu` family.
    init_kernel_fn=tf.initializers.he_uniform())

BayesAffine = functools.partial(
    tfn.AffineVariationalReparameterization,
    init_kernel_fn=tf.initializers.he_normal())

scale = tfp.util.TransformedVariable(1., tfb.Softplus())
bnn = tfn.Sequential([
    BayesConv2D(evidence_shape[-1], 32, filter_shape=7, strides=2,
                activation_fn=tf.nn.leaky_relu),           # [b, 14, 14, 32]
    tfn.util.flatten_rightmost(ndims=3),                   # [b, 14 * 14 * 32]
    BayesAffine(14 * 14 * 32, np.prod(targets_shape) - 1), # [b, 9]
    tfn.Lambda(
        eval_fn=lambda loc: tfb.SoftmaxCentered()(
            tfd.Independent(tfd.Normal(loc, scale),
                            reinterpreted_batch_ndims=1)),
        also_track=scale),                                 # [b, 10]
], name='bayesian_neural_network')

print(bnn.summary())

# 3  Train.

def loss_fn():
  x, y = next(train_iter)
  nll = -tf.reduce_mean(bnn(x).log_prob(y), axis=-1)
  kl = bnn.extra_loss / tf.cast(train_size, tf.float32)
  loss = nll + kl
  return loss, (nll, kl)
opt = tf.optimizers.Adam()
fit_op = tfn.util.make_fit_op(loss_fn, opt, bnn.trainable_variables)
for _ in range(200):
  loss, (nll, kl), g = fit_op()
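
Optionally, as an illustrative sketch (not part of the original example), one posterior sample of the predictive probabilities can be scored against held-out data; averaging several samples would give a better Monte Carlo estimate of the predictive distribution. The names eval_batch, probs, and accuracy below are assumptions introduced here:

# 4  Evaluate (illustrative).

eval_batch = tfn.util.tune_dataset(
    eval_dataset,
    batch_shape=(batch_size,),
    preprocess_fn=_preprocess)
x, y = next(iter(eval_batch))
probs = bnn(x).sample()                 # one sample of [b, 10] class probabilities
accuracy = tf.reduce_mean(tf.cast(
    tf.equal(tf.argmax(probs, axis=-1), tf.argmax(y, axis=-1)),
    tf.float32))
print('eval accuracy:', float(accuracy))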

This example uses reparameterization gradients to minimize, up to a constant, the Kullback-Leibler divergence between the surrogate posterior and the true posterior; this objective is also known as the negative Evidence Lower Bound (ELBO). It is the sum of two terms: the expected negative log-likelihood, which is approximated by Monte Carlo sampling, and the KL divergence penalty, which the layer computes via posterior_penalty_fn and exposes through extra_loss.
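
In symbols (a standard identity, not anything specific to this class), with W the kernel and bias, q the surrogate posterior, and p(W) the prior:

  -ELBO(q) = E_{W ~ q}[ -log p(y | x, W) ] + KL( q(W) || p(W) )

The training loop above estimates the first term with one posterior sample per minibatch and divides the KL term by train_size, so that, summed over an epoch, the penalty is counted exactly once.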

References

[1]: Diederik Kingma and Max Welling. Auto-Encoding Variational Bayes. In International Conference on Learning Representations, 2014. https://arxiv.org/abs/1312.6114

Attributes:

  • activation_fn
  • also_track
  • dtype
  • extra_loss
  • extra_result
  • name: Returns the name of this module as passed or determined in the ctor.

    NOTE: This is not the same as the self.name_scope.name which includes parent module names.

  • name_scope: Returns a tf.name_scope instance for this class.

  • penalty_weight

  • posterior

  • posterior_penalty_fn

  • posterior_value_fn

  • prior

  • submodules: Sequence of all sub-modules.

    Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

  a = tf.Module()
  b = tf.Module()
  c = tf.Module()
  a.b = b
  b.c = c
  list(a.submodules) == [b, c]   # ==> True
  list(b.submodules) == [c]      # ==> True
  list(c.submodules) == []       # ==> True
     
  • trainable_variables: Sequence of trainable variables owned by this module and its submodules.

  • unpack_weights_fn

  • variables: Sequence of variables owned by this module and its submodules.

Methods

__call__

__call__(
    inputs, **kwargs
)

Call self as a function.

eval

eval(
    inputs, is_training=True, **kwargs
)

load

load(
    filename
)

save

save(
    filename
)

summary

summary()

with_name_scope

@classmethod
with_name_scope(
    cls, method
)

Decorator to automatically enter the module name scope.

class MyModule(tf.Module): 
  @tf.Module.with_name_scope 
  def __call__(self, x): 
    if not hasattr(self, 'w'): 
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3])) 
    return tf.matmul(x, self.w) 

Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:

mod = MyModule()
mod(tf.ones([1, 2]))
# ==> <tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
# ==> <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=..., dtype=float32)>

Args:

  • method: The method to wrap.

Returns:

The original method wrapped such that it enters the module's name scope.