tfp.distributions.PixelCNN

The Pixel CNN++ distribution.

Inherits From: Distribution

Pixel CNN++ [(Salimans et al., 2017)][1] models a distribution over image
data, parameterized by a neural network. It builds on Pixel CNN and
Conditional Pixel CNN, as originally proposed by [(van den Oord et al.,
2016)][2, 3]. The model expresses the joint distribution over pixels as
the product of conditional distributions:
`p(x|h) = prod{ p(x[i] | x[0:i], h) : i = 0, ..., d }`, in which
`p(x[i] | x[0:i], h)` is the probability of the `i`-th pixel conditional on the
pixels that preceded it in raster order (color channels in RGB order, then left
to right, then top to bottom). `h` is optional additional data on which to
condition the image distribution, such as class labels or VAE embeddings. The
Pixel CNN++ network enforces the dependency structure among pixels by applying
a mask to the kernels of the convolutional layers that ensures that the values
for each pixel depend only on other pixels up and to the left (see
`tfd.PixelCnnNetwork`).

Pixel values are modeled with a mixture of quantized logistic distributions, which can take on a set of distinct integer values (e.g. between 0 and 255 for an 8-bit image).

The color intensity `v` of each pixel is modeled as:

`v ~ sum{ q[i] * quantized_logistic(loc[i], scale[i]) : i = 0, ..., k }`,

in which `k` is the number of mixture components and the `q[i]` are the Categorical probabilities over the components.
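
For concreteness, here is a minimal sketch (not the library's implementation) of the probability such a mixture assigns to an integer value `v`. Each component integrates a logistic density over the unit-width bin centered at `v`; the special handling of the edge bins at `low` and `high` described in [1] is omitted.

import tensorflow as tf

def quantized_logistic_mixture_prob(v, logits, locs, scales):
  # Mixture weights q[i] from unnormalized component logits.
  q = tf.nn.softmax(logits)
  # Logistic CDF evaluated at the upper and lower edges of the bin around v.
  cdf_plus = tf.sigmoid((v + 0.5 - locs) / scales)
  cdf_minus = tf.sigmoid((v - 0.5 - locs) / scales)
  # P(v) = sum_i q[i] * (CDF_i(v + 0.5) - CDF_i(v - 0.5))
  return tf.reduce_sum(q * (cdf_plus - cdf_minus), axis=-1)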

Sampling

Pixels are sampled one at a time, in raster order. This enforces the autoregressive dependency structure, in which the sample for pixel `i` is conditioned on the samples for pixels `0, ..., i-1`. A single color image is sampled as follows:

samples = random_uniform([image_height, image_width, image_channels])
for i in range(image_height):
  for j in range(image_width):
    component_logits, locs, scales, coeffs = pixel_cnn_network(samples)
    components = Categorical(component_logits).sample()
    locs = gather(locs, components)
    scales = gather(scales, components)

    coef_count = 0
    channel_samples = []
    for k in range(image_channels):
      loc = locs[k]
      for m in range(k):
        loc += channel_samples[m] * coeffs[coef_count]
        coef_count += 1
      channel_samp = Logistic(loc, scales[k]).sample()
      channel_samples.append(channel_samp)
    samples[i, j, :] = tf.stack(channel_samples, axis=-1)
samples = round(samples)

Examples


# Build a small Pixel CNN++ model to train on MNIST.

import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_probability as tfp

tfd = tfp.distributions
tfk = tf.keras
tfkl = tf.keras.layers

tf.enable_v2_behavior()

# Load MNIST from tensorflow_datasets
data = tfds.load('mnist')
train_data, test_data = data['train'], data['test']

def image_preprocess(x):
  x['image'] = tf.cast(x['image'], tf.float32)
  return (x['image'],)  # (input, output) of the model

batch_size = 16
train_it = train_data.map(image_preprocess).shuffle(1000).batch(batch_size)

image_shape = (28, 28, 1)
# Define a Pixel CNN network
dist = tfd.PixelCNN(
    image_shape=image_shape,
    num_resnet=1,
    num_hierarchies=2,
    num_filters=32,
    num_logistic_mix=5,
    dropout_p=.3,
)

# Define the model input
image_input = tfkl.Input(shape=image_shape)

# Define the log likelihood for the loss fn
log_prob = dist.log_prob(image_input)

# Define the model
model = tfk.Model(inputs=image_input, outputs=log_prob)
model.add_loss(-tf.reduce_mean(log_prob))

# Compile and train the model
model.compile(
    optimizer=tfk.optimizers.Adam(.001),
    metrics=[])

model.fit(train_it, epochs=10, verbose=True)

# sample five images from the trained model
samples = dist.sample(5)
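
As a possible follow-up (a sketch, not part of the original example), the trained model can be evaluated on the test split in bits per dimension, the metric reported in [1]. This assumes the `dist`, `test_data`, `image_preprocess`, `batch_size`, and `image_shape` defined above.

import numpy as np

test_it = test_data.map(image_preprocess).batch(batch_size)
total_log_prob = 0.
num_images = 0
for (images,) in test_it:
  # Dropout is disabled at evaluation time via `training=False`.
  total_log_prob += tf.reduce_sum(dist.log_prob(images, training=False))
  num_images += int(images.shape[0])
# Average negative log likelihood, converted from nats to bits per dimension.
bits_per_dim = -total_log_prob / (num_images * np.prod(image_shape) * np.log(2.))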

To train a class-conditional model:


data = tfds.load('mnist')
train_data, test_data = data['train'], data['test']

def image_preprocess(x):
  x['image'] = tf.cast(x['image'], tf.float32)
  # return model (inputs, outputs): inputs are (image, label) and there are no
  # outputs
  return ((x['image'], x['label']),)

batch_size = 16
train_ds = train_data.map(image_preprocess).shuffle(1000).batch(batch_size)

image_shape = (28, 28, 1)
label_shape = ()
dist = tfd.PixelCNN(
    image_shape=image_shape,
    conditional_shape=label_shape,
    num_resnet=1,
    num_hierarchies=2,
    num_filters=32,
    num_logistic_mix=5,
    dropout_p=.3,
)

image_input = tfkl.Input(shape=image_shape)
label_input = tfkl.Input(shape=label_shape)

log_prob = dist.log_prob(image_input, conditional_input=label_input)

class_cond_model = tfk.Model(
    inputs=[image_input, label_input], outputs=log_prob)
class_cond_model.add_loss(-tf.reduce_mean(log_prob))
class_cond_model.compile(
    optimizer=tfk.optimizers.Adam(),
    metrics=[])
class_cond_model.fit(train_ds, epochs=10)

# Take 10 samples of the digit '5'
samples = dist.sample(10, conditional_input=5.)

# Take 4 samples each of the digits '1', '2', '3'.
# Note that when a batch of conditional input is passed, the sample shape
# (the first argument of `dist.sample`) must have its last dimension(s) equal
# the batch shape of the conditional input (here, (3,)).
samples = dist.sample((4, 3), conditional_input=[1., 2., 3.])
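
If the shapes compose as described, the resulting samples combine the sample shape, the conditional batch shape, and the event shape:

samples.shape  # ==> (4, 3, 28, 28, 1)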

References

[1]: Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P. Kingma. PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications. In International Conference on Learning Representations, 2017. https://pdfs.semanticscholar.org/9e90/6792f67cbdda7b7777b69284a81044857656.pdf Additional details at https://github.com/openai/pixel-cnn

[2]: Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional Image Generation with PixelCNN Decoders. In Neural Information Processing Systems, 2016. https://arxiv.org/abs/1606.05328

[3]: Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel Recurrent Neural Networks. In International Conference on Machine Learning, 2016. https://arxiv.org/pdf/1601.06759.pdf

Args

image_shape 3D TensorShape or tuple for the [height, width, channels] dimensions of the image.
conditional_shape TensorShape or tuple for the shape of the conditional input, or None if there is no conditional input.
num_resnet int, the number of layers (shown in Figure 2 of [2]) within each highest-level block of Figure 2 of [1].
num_hierarchies int, the number of highest-level blocks (separated by expansions/contractions of dimensions in Figure 2 of [1]).
num_filters int, the number of convolutional filters.
num_logistic_mix int, number of components in the logistic mixture distribution.
receptive_field_dims tuple, height and width in pixels of the receptive field of the convolutional layers above and to the left of a given pixel. The width (second element of the tuple) should be odd. Figure 1 (middle) of [2] shows a receptive field of (3, 5) (the row containing the current pixel is included in the height). The default of (3, 3) was used to produce the results in [1].
dropout_p float, the dropout probability. Should be between 0 and 1.
resnet_activation string, the type of activation to use in the resnet blocks. May be 'concat_elu', 'elu', or 'relu'.
use_weight_norm bool, if True then use weight normalization (works only in Eager mode).
use_data_init bool, if True then use data-dependent initialization (has no effect if use_weight_norm is False).
high int, the maximum value of the input data (255 for an 8-bit image).
low int, the minimum value of the input data.
dtype Data type of the Distribution.
name string, the name of the Distribution.

allow_nan_stats Python bool describing behavior when a stat is undefined.

Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
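
For example (an illustration with a Student's t distribution, using the `tfd` alias from the examples above):

st = tfd.StudentT(df=1., loc=0., scale=1., allow_nan_stats=True)
st.mean()  # ==> nan, since the mean is undefined for df = 1
st = tfd.StudentT(df=1., loc=0., scale=1., allow_nan_stats=False)
st.mean()  # ==> raises an error, because the statistic is undefined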

batch_shape Shape of a single sample from a single event index as a TensorShape.

May be partially defined or unknown.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

dtype The DType of Tensors handled by this Distribution.
event_shape Shape of a single sample from a single batch as a TensorShape.

May be partially defined or unknown.

name Name prepended to all ops created by this Distribution.
name_scope Returns a tf.name_scope instance for this class.
parameters Dictionary of parameters used to instantiate this Distribution.
reparameterization_type Describes how samples from the distribution are reparameterized.

Currently this is one of the static instances tfd.FULLY_REPARAMETERIZED or tfd.NOT_REPARAMETERIZED.

submodules Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

a = tf.Module()
b = tf.Module()
c = tf.Module()
a.b = b
b.c = c
list(a.submodules) == [b, c]
True
list(b.submodules) == [c]
True
list(c.submodules) == []
True

trainable_variables Sequence of trainable variables owned by this module and its submodules.

validate_args Python bool indicating possibly expensive checks are enabled.
variables Sequence of variables owned by this module and its submodules.

Methods

batch_shape_tensor

Shape of a single sample from a single event index as a 1-D Tensor.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Args
name name to give to the op

Returns
batch_shape Tensor.

cdf

Cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

cdf(x) := P[X <= x]

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

copy

Creates a deep copy of the distribution.

Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.

Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).
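
For instance (an illustrative sketch with a simple distribution):

d = tfd.Normal(loc=0., scale=1.)
d2 = d.copy(loc=2.)  # same scale as `d`, new loc; `d` itself is unchanged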

covariance

Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions.

For example, for a length-k, vector-valued distribution, it is calculated as,

Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]

where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,

Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]

where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.

Args
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape).
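
For example (a sketch with a vector-valued distribution):

mvn = tfd.MultivariateNormalDiag(loc=tf.zeros([3, 2]), scale_diag=[1., 2.])
mvn.covariance().shape  # ==> [3, 2, 2]: a batch of k x k matrices, k = 2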

cross_entropy

Computes the (Shannon) cross entropy.

Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:

H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)

where F denotes the support of the random variable X ~ P.

Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.

Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy.
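
For example (a small sketch):

p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=2.)
p.cross_entropy(q)  # == p.kl_divergence(q) + p.entropy()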

entropy

Shannon entropy in nats.

event_shape_tensor

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args
name name to give to the op

Returns
event_shape Tensor.

is_scalar_batch

Indicates that batch_shape == [].

Args
name Python str prepended to names of ops created by this function.

Returns
is_scalar_batch bool scalar Tensor.

is_scalar_event

Indicates that event_shape == [].

Args
name Python str prepended to names of ops created by this function.

Returns
is_scalar_event bool scalar Tensor.

kl_divergence

Computes the Kullback--Leibler divergence.

Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:

KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.

Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.

Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence.
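
For example (a small sketch; the method is equivalent to the module-level function):

p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=2.)
p.kl_divergence(q)  # == tfd.kl_divergence(p, q)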

log_cdf

Log cumulative distribution function.

Given random variable X, the log of the cumulative distribution function is:

log_cdf(x) := Log[ P[X <= x] ]

Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

log_prob

Log probability density/mass function.

Additional documentation from PixelCNN:

Log probability function with optional conditional input.

Calculates the log probability of a batch of data under the modeled distribution (or conditional distribution, if conditional input is provided).

Args
value Tensor or Numpy array of image data. May have leading batch dimension(s), which must broadcast to the leading batch dimensions of conditional_input.
conditional_input Tensor on which to condition the distribution (e.g. class labels), or None. May have leading batch dimension(s), which must broadcast to the leading batch dimensions of value.
training bool or None. If bool, it controls the dropout layer, where True implies dropout is active. If None, it defaults to tf.keras.backend.learning_phase().

Returns
log_prob_values Tensor.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
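
A hypothetical call, assuming the class-conditional `dist` built in the examples above (dummy data, purely to show shapes):

images = tf.random.uniform([4, 28, 28, 1], maxval=256.)
labels = tf.constant([0., 1., 2., 3.])
log_probs = dist.log_prob(images, conditional_input=labels, training=False)
log_probs.shape  # ==> [4]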

log_survival_function

Log survival function.

Given random variable X, the log survival function is defined as:

log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

mean

Mean.

mode

Mode.

param_shapes

Shapes of parameters given the desired shape of a call to sample().

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample().

Subclasses should override class method _param_shapes.

Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.

Returns
dict of parameter name to Tensor shapes.
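
For example, with a scalar distribution such as tfd.Normal (an illustrative sketch):

tfd.Normal.param_shapes([100, 2])
# ==> {'loc': [100, 2], 'scale': [100, 2]} (as int32 Tensors)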

param_static_shapes

param_shapes with static (i.e. TensorShape) shapes.

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically.

Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.

Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().

Returns
dict of parameter name to TensorShape.

Raises
ValueError if sample_shape is a TensorShape and is not fully defined.

prob

Probability density/mass function.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

quantile

Quantile function. Aka 'inverse cdf' or 'percent point function'.

Given random variable X and p in [0, 1], the quantile is:

quantile(p) := x such that P[X <= x] == p

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

sample

Generate samples of the specified shape.

Note that a call to sample() without arguments will generate a single sample.

Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed Python integer or tfp.util.SeedStream instance, for seeding PRNG.
name name to give to the op.
**kwargs Named arguments forwarded to subclass implementation.

Returns
samples a Tensor with prepended dimensions sample_shape.

stddev

Standard deviation.

Standard deviation is defined as,

stddev = E[(X - E[X])**2]**0.5

where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.

Args
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

survival_function

Survival function.

Given random variable X, the survival function is defined as:

survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

variance

Variance.

Variance is defined as,

Var = E[(X - E[X])**2]

where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.

Args
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

with_name_scope

Decorator to automatically enter the module name scope.

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:

mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=...>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=...>

Args
method The method to wrap.

Returns
The original method wrapped such that it enters the module's name scope.

__getitem__

Slices the batch axes of this distribution, returning a new instance.

b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape  # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., -2:, 1::2]
b2.batch_shape  # => [3, 1, 5, 2, 4]

x = tf.random.normal([5, 3, 2, 2])
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.normal([4, 1, 3, 1])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape  # => [4, 5, 3]
mvn.event_shape  # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape  # => [4, 2, 3, 1]
mvn2.event_shape  # => [2]

Args
slices slices from the [] operator

Returns
dist A new tfd.Distribution instance with sliced parameters.

__iter__
