Variable tracking object which applies a function upon convert_to_tensor.
tfp.util.DeferredTensor(
    pretransformed_input,
    transform_fn,
    dtype=None,
    shape=NONE_SPECIFIED,
    also_track=None,
    name=None
)
Example
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
# Note: it'd be better to use `tfp.util.TransformedVariable`;
# this example is for illustration only.
trainable_normal = tfd.Normal(
    loc=tf.Variable(0.),
    scale=tfp.util.DeferredTensor(tf.Variable(0.), tf.math.exp))
trainable_normal.loc
# ==> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>
trainable_normal.scale
# ==> <DeferredTensor: dtype=float32, shape=[], fn=exp>
# Operators work with `DeferredTensor`.
trainable_normal.scale + 1.
# ==> 2.
with tf.GradientTape() as tape:
  negloglik = -trainable_normal.log_prob(0.5)
g = tape.gradient(negloglik, trainable_normal.trainable_variables)
# ==> (-0.5, 0.75)
Which we could then fit as:
opt = tf.optimizers.Adam(learning_rate=0.05)
loss = tf.function(lambda: -trainable_normal.log_prob(0.5), autograph=True)
for _ in range(int(1e3)):
  opt.minimize(loss, trainable_normal.trainable_variables)
trainable_normal.mean()
# ==> 0.5
trainable_normal.stddev()
# ==> (approximately) 0.0075
It is also possible to parameterize a DeferredTensor
with a bijector, e.g.:
# Note: it'd be better to use `tfp.util.TransformedVariable`;
# this example is for illustration only.
d = tfd.Normal(
    loc=0.,
    scale=tfp.util.DeferredTensor(tf.Variable([0.54, 1.85]),
                                  tfb.Softplus()))
d.stddev()
# ==> [1., 2.]
tf.convert_to_tensor(d.scale)
# ==> [1., 2.]
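Because the transform is applied each time the object is converted to a Tensor, updating the underlying variable changes the value seen by later conversions. A minimal sketch of this behavior (reusing the imports above; the variable name v is illustrative only):
v = tf.Variable(0.)
x = tfp.util.DeferredTensor(v, tf.math.exp)
tf.convert_to_tensor(x)
# ==> 1.
v.assign(tf.math.log(2.))
tf.convert_to_tensor(x)
# ==> 2.  (the exp transform is re-applied to the updated variable)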
Args | |
---|---|
pretransformed_input | object with shape, dtype properties (typically a tf.Variable) passed into transform_fn when this object is acted upon in a Tensor context, e.g., tf.convert_to_tensor, +, tf.math.exp, etc.
transform_fn | Python callable or tfp.bijectors.Bijector-like instance. When callable, it should take pretransformed_input and return a Tensor (representing the value of this object).
dtype | Equivalent to what would otherwise be transform_fn(pretransformed_input).dtype. Default value: None (i.e., getattr(transform_fn, 'dtype', None) or pretransformed_input.dtype).
shape | Equivalent to what would otherwise be transform_fn(pretransformed_input).shape. Default value: None (i.e., getattr(transform_fn, 'forward_event_shape', lambda x: x)(pretransformed_input.shape)).
also_track | Optional instance or structure of instances of tf.Variable and/or tf.Module, containing any additional trainable variables that transform_fn may access beyond the given pretransformed_input. This ensures that such variables are correctly tracked in self.trainable_variables (see the sketch after this table). Default value: None.
name | Python str representing this object's name; used only in graph mode. Default value: None (i.e., getattr(transform_fn, 'name', None) or transform_fn.__name__ + '_' + pretransformed_input.name).
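As a minimal sketch of also_track, assuming a transform_fn that closes over an additional variable (the name extra_scale and the lambda are hypothetical, chosen only for illustration):
extra_scale = tf.Variable(1.)  # Hypothetical extra variable used inside transform_fn.
x = tfp.util.DeferredTensor(
    tf.Variable(0.),
    lambda v: extra_scale * tf.math.exp(v),
    also_track=extra_scale)
len(x.trainable_variables)
# ==> 2  (the pretransformed input plus extra_scale)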
Raises | |
---|---|
TypeError | if transform_fn is not callable.
TypeError | if pretransformed_input lacks dtype and/or shape properties (and the dtype and/or shape arguments are unspecified).
Attributes | |
---|---|
also_track | Additional variables tracked by tf.Module in self.trainable_variables.
dtype | Represents the type of the elements in a Tensor.
name | The string name of this object.
name_scope | Returns a tf.name_scope instance for this class.
non_trainable_variables | Sequence of non-trainable variables owned by this module and its submodules.
pretransformed_input | Input to transform_fn.
shape | Represents the shape of a Tensor.
submodules | Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).
trainable_variables | Sequence of trainable variables owned by this module and its submodules.
transform_fn | Function which characterizes the Tensorization of this object.
variables | Sequence of variables owned by this module and its submodules.
Methods
numpy
numpy()
Returns (copy of) deferred values as a NumPy array or scalar.
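A minimal usage sketch, assuming eager execution and the imports from the examples above:
x = tfp.util.DeferredTensor(tf.Variable(0.), tf.math.exp)
x.numpy()
# ==> 1.0  (the transformed value, exp(0.), as a NumPy scalar)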
set_shape
set_shape(
shape
)
Updates the shape of this pretransformed_input.
This method can be called multiple times, and will merge the given shape
with the current shape of this object. It can be used to provide additional
information about the shape of this object that cannot be inferred from the
graph alone.
Args | |
---|---|
shape | A TensorShape representing the shape of this pretransformed_input, a TensorShapeProto, a list, a tuple, or None.
Raises | |
---|---|
ValueError | If shape is not compatible with the current shape of this pretransformed_input.
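A minimal sketch of the merge behavior described above (shapes chosen for illustration):
x = tfp.util.DeferredTensor(tf.Variable([0., 0.]), tf.math.exp)
x.set_shape([2])    # Compatible with the current shape [2]; merges cleanly.
# x.set_shape([3])  # Would raise ValueError: incompatible with shape [2].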
with_name_scope
@classmethod
with_name_scope(method)
Decorator to automatically enter the module name scope.
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
Using the above module would produce tf.Variables and tf.Tensors whose names include the module name:
mod = MyModule()
mod(tf.ones([1, 2]))
# ==> <tf.Tensor: shape=(1, 3), dtype=float32, numpy=...>
mod.w
# ==> <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=...>
Args | |
---|---|
method | The method to wrap.
Returns | |
---|---|
The original method wrapped such that it enters the module's name scope. |
__abs__
__abs__(
*args, **kwargs
)
__add__
__add__(
*args, **kwargs
)
__and__
__and__(
*args, **kwargs
)
__array__
__array__(
dtype=None
)
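A sketch of how the __array__ hook lets NumPy convert the deferred value directly (output shown approximately; assumes the imports from the examples above):
import numpy as np
x = tfp.util.DeferredTensor(tf.Variable([0., 1.]), tf.math.exp)
np.array(x)
# ==> array([1., 2.7182817], dtype=float32)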
__bool__
__bool__()
Dummy method to prevent a tensor from being used as a Python bool.
This overload raises a TypeError when the user inadvertently treats a Tensor as a boolean (most commonly in an if or while statement), in code that was not converted by AutoGraph. For example:
if tf.constant(True):  # Will raise.
  # ...

if tf.constant(5) < tf.constant(7):  # Will raise.
  # ...
Raises | |
---|---|
TypeError |
__div__
__div__(
*args, **kwargs
)
__floordiv__
__floordiv__(
*args, **kwargs
)
__ge__
__ge__(
*args, **kwargs
)
__getitem__
__getitem__(
*args, **kwargs
)
__gt__
__gt__(
*args, **kwargs
)
__invert__
__invert__(
*args, **kwargs
)
__iter__
__iter__(
*args, **kwargs
)
__le__
__le__(
*args, **kwargs
)
__lt__
__lt__(
*args, **kwargs
)
__matmul__
__matmul__(
*args, **kwargs
)
__mod__
__mod__(
*args, **kwargs
)
__mul__
__mul__(
*args, **kwargs
)
__neg__
__neg__(
*args, **kwargs
)
__nonzero__
__nonzero__()
Dummy method to prevent a tensor from being used as a Python bool.
This is the Python 2.x counterpart to __bool__() above.
Raises | |
---|---|
TypeError |
__or__
__or__(
*args, **kwargs
)
__pow__
__pow__(
*args, **kwargs
)
__radd__
__radd__(
*args, **kwargs
)
__rand__
__rand__(
*args, **kwargs
)
__rdiv__
__rdiv__(
*args, **kwargs
)
__rfloordiv__
__rfloordiv__(
*args, **kwargs
)
__rmatmul__
__rmatmul__(
*args, **kwargs
)
__rmod__
__rmod__(
*args, **kwargs
)
__rmul__
__rmul__(
*args, **kwargs
)
__ror__
__ror__(
*args, **kwargs
)
__rpow__
__rpow__(
*args, **kwargs
)
__rsub__
__rsub__(
*args, **kwargs
)
__rtruediv__
__rtruediv__(
*args, **kwargs
)
__rxor__
__rxor__(
*args, **kwargs
)
__sub__
__sub__(
*args, **kwargs
)
__truediv__
__truediv__(
*args, **kwargs
)
__xor__
__xor__(
*args, **kwargs
)