tfp.experimental.substrates.numpy.util.DeferredTensor

View source on GitHub

Variable tracking object which applies a function upon convert_to_tensor.

tfp.experimental.substrates.numpy.util.DeferredTensor(
    pretransformed_input, transform_fn, dtype=None, shape=NONE_SPECIFIED, name=None
)

Example

from tensorflow_probability.python.internal.backend.numpy.compat import v2 as tf
import tensorflow_probability as tfp; tfp = tfp.experimental.substrates.numpy
tfb = tfp.bijectors
tfd = tfp.distributions

# Note: it'd be better to use `tfp.util.TransformedVariable`;
#       this example is for illustration only.
trainable_normal = tfd.Normal(
    loc=tf.Variable(0.),
    scale=tfp.util.DeferredTensor(tf.Variable(0.), tf.math.exp))

trainable_normal.loc
# ==> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>

trainable_normal.scale
# ==> <DeferredTensor: dtype=float32, shape=[], fn=exp>

# Operators work with `DeferredTensor`.
trainable_normal.scale + 1.
# ==> 2.

with tf.GradientTape() as tape:
  negloglik = -trainable_normal.log_prob(0.5)
g = tape.gradient(negloglik, trainable_normal.trainable_variables)
# ==> (-0.5, 0.75)

We could then fit this as:

opt = tf.optimizers.Adam(learning_rate=0.05)
loss = tf.function(lambda: -trainable_normal.log_prob(0.5), autograph=True)
for _ in range(int(1e3)):
  opt.minimize(loss, trainable_normal.trainable_variables)
trainable_normal.mean()
# ==> 0.5
trainable_normal.stddev()
# ==> (approximately) 0.0075

It is also possible to parameterize a DeferredTensor with a bijector, e.g.:

# Note: it'd be better to use `tfp.util.TransformedVariable`;
#       this example is for illustration only.
d = tfd.Normal(loc=0.,
               scale=tfp.util.DeferredTensor(tf.Variable([0.54, 1.85]),
                                             tfb.Softplus()))
d.stddev()
# ==> [1., 2.]
tf.convert_to_tensor(d.scale)
# ==> [1., 2.]

Args:

  • pretransformed_input: object with shape and dtype properties (typically a tf.Variable) passed into transform_fn when this object is acted upon in a Tensor context, e.g., tf.convert_to_tensor, +, tf.math.exp, etc.
  • transform_fn: Python callable or tfp.bijectors.Bijector-like instance. When callable, should take pretransformed_input and return a Tensor (represented by this object).
  • dtype: Equivalent to what would otherwise be transform_fn(pretransformed_input).dtype. Default value: None (i.e., getattr(transform_fn, 'dtype', None) or pretransformed_input.dtype).
  • shape: Equivalent to what would otherwise be transform_fn(pretransformed_input).shape. Default value: None (i.e., getattr(transform_fn, 'forward_event_shape', lambda x: x)(pretransformed_input.shape)).
  • name: Python str representing this object's name; used only in graph mode. Default value: None (i.e., (getattr(transform_fn, 'name', None) or transform_fn.__name__ + '_' + pretransformed_input.name)).
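
For illustration only, here is a minimal sketch of passing dtype, shape, and name explicitly; by default all three are inferred from transform_fn and pretransformed_input as described above.

# A minimal sketch; by default `dtype`, `shape`, and `name` are inferred from
# `transform_fn` and `pretransformed_input`.
x = tfp.util.DeferredTensor(
    tf.Variable([0., 0.]),  # pretransformed_input with shape/dtype properties.
    tf.math.softplus,       # Callable transform_fn.
    dtype=tf.float32,       # Matches tf.math.softplus(pretransformed_input).dtype.
    shape=[2],              # Matches tf.math.softplus(pretransformed_input).shape.
    name='positive_scale')
tf.convert_to_tensor(x)
# ==> [0.693..., 0.693...]  (softplus(0.) == log(2.))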

Attributes:

  • dtype: Represents the type of the elements in a Tensor.
  • name: The string name of this object.
  • pretransformed_input: Input to transform_fn.
  • shape: Represents the shape of a Tensor.
  • trainable_variables
  • transform_fn: Function which characterizes the Tensorization of this object.
  • variables
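
For example, continuing the trainable_normal example above (an illustrative sketch, assuming DeferredTensor tracks its variables like a tf.Module):

trainable_normal.scale.pretransformed_input
# ==> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
trainable_normal.scale.trainable_variables
# ==> (<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>,)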

Raises:

  • TypeError: if transform_fn is not callable.
  • TypeError: if pretransformed_input lacks dtype and/or shape properties (and dtype and/or shape arguments are unspecified).
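
For instance, passing a transform_fn that is neither callable nor bijector-like raises the first error (a sketch; the exact message may differ):

tfp.util.DeferredTensor(tf.Variable(0.), 'not a callable')
# ==> raises TypeError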

Methods

__getitem__

View source

__getitem__(
    i
)
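
An illustrative sketch, assuming indexing applies to the transformed value, i.e., is equivalent to tf.convert_to_tensor(x)[i]:

x = tfp.util.DeferredTensor(tf.Variable([0., 1.]), tf.math.exp)
x[1]
# ==> 2.718...  (i.e., tf.math.exp(1.))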

get_shape

View source

get_shape()

Legacy means of getting Tensor shape, for compat with 2.0.0 LinOp.
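
For example (a small sketch, assuming get_shape() mirrors the shape property):

x = tfp.util.DeferredTensor(tf.Variable([[0., 0.]]), tf.math.exp)
x.get_shape()
# ==> TensorShape([1, 2])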

set_shape

View source

set_shape(
    shape
)

Updates the shape of this pretransformed_input.

This method can be called multiple times, and will merge the given shape with the current shape of this object. It can be used to provide additional information about the shape of this object that cannot be inferred from the graph alone.

Args:

  • shape: A TensorShape representing the shape of this pretransformed_input, a TensorShapeProto, a list, a tuple, or None.

Raises:

  • ValueError: If shape is not compatible with the current shape of this pretransformed_input.
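
A minimal sketch of the merge behavior (illustration only; the exact error text may vary):

x = tfp.util.DeferredTensor(tf.Variable([0., 0., 0.]), tf.math.exp)
x.set_shape([3])   # Compatible with the current shape [3]; merge is a no-op.
x.set_shape(None)  # Also compatible; `None` adds no shape information.
x.set_shape([4])   # ==> raises ValueError (incompatible with shape [3]).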