# Y = X
b = AffineScalar()

# Y = X + shift
b = AffineScalar(shift=[1., 2, 3])

# Y = 2 * X + shift
b = AffineScalar(
    shift=[1., 2, 3],
    scale=2.)
Args:
  shift: Floating-point Tensor. If this is set to None, no shift is applied.
  scale: Floating-point Tensor. If this is set to None, no scale is applied.
    This should not be set if log_scale is set.
  log_scale: Floating-point Tensor. Logarithm of the scale. If this is set to
    None, no scale is applied. This should not be set if scale is set.
  validate_args: Python bool indicating whether arguments should be checked
    for correctness.
  name: Python str name given to ops managed by this object.
Raises:
  ValueError: If both scale and log_scale are specified.
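For illustration, a minimal sketch of the scale/log_scale parameterizations,
assuming the class is exposed as tfp.bijectors.AffineScalar (the import path
and availability may differ across TFP versions):

import tensorflow as tf
import tensorflow_probability as tfp

tfb = tfp.bijectors

# Y = exp(0.5) * X + 1.; log_scale=0.5 is equivalent to scale=tf.exp(0.5).
b = tfb.AffineScalar(shift=1., log_scale=0.5)

# Specifying both parameterizations is ambiguous and raises a ValueError:
# tfb.AffineScalar(scale=2., log_scale=0.5)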
Attributes:
  dtype
  forward_min_event_ndims: Returns the minimal number of dimensions
    bijector.forward operates on. Multipart bijectors return structured
    ndims, which indicates the expected structure of their inputs. Some
    multipart bijectors, notably Composites, may return structures of None.
  graph_parents: Returns this Bijector's graph_parents as a Python list.
  has_static_min_event_ndims: Returns True if the bijector has
    statically-known min_event_ndims.
  inverse_min_event_ndims: Returns the minimal number of dimensions
    bijector.inverse operates on. Multipart bijectors return structured
    event_ndims, which indicates the expected structure of their outputs.
    Some multipart bijectors, notably Composites, may return structures of
    None.
  is_constant_jacobian: Returns true iff the Jacobian matrix is not a
    function of x.
  log_scale: The log_scale term in Y = exp(log_scale) * X + shift.
forward_log_det_jacobian

Returns the log of the forward Jacobian determinant, evaluated at x.

Args:
  x: Tensor (structure). The input to the 'forward' Jacobian determinant
    evaluation.
  event_ndims: Number of dimensions in the probabilistic events being
    transformed. Must be greater than or equal to
    self.forward_min_event_ndims. The result is summed over the final
    dimensions to produce a scalar Jacobian determinant for each event, i.e.
    it has shape rank(x) - event_ndims dimensions. Multipart bijectors
    require structured event_ndims, such that rank(y[i]) - rank(event_ndims[i])
    is the same for all elements i of the structured input. Furthermore, the
    first event_ndims[i] of each x[i].shape must be the same for all i
    (broadcasting is not allowed).
  name: The name to give this op.
  **kwargs: Named arguments forwarded to subclass implementation.
Returns:
  Tensor (structure), if this bijector is injective. If not injective, this
  is not implemented.

Raises:
  TypeError: if y's dtype is incompatible with the expected output dtype.
  NotImplementedError: if neither _forward_log_det_jacobian nor
    {_inverse, _inverse_log_det_jacobian} are implemented, or this is a
    non-injective bijector.
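To illustrate how event_ndims controls the reduction described above, a small
sketch continuing the names from the earlier sketch (values in comments are
the expected results, not captured output):

b = tfb.AffineScalar(shift=0., scale=2.)
x = tf.constant([[1., 2., 3.]])

# event_ndims=0: one log-det value per scalar element, shape [1, 3].
b.forward_log_det_jacobian(x, event_ndims=0)  # each entry equals log(2.)

# event_ndims=1: summed over the last dimension, shape [1].
b.forward_log_det_jacobian(x, event_ndims=1)  # equals 3. * log(2.)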
inverse

Returns the inverse Bijector evaluation, i.e., X = g^{-1}(Y).
Args:
  y: Tensor (structure). The input to the 'inverse' evaluation.
  name: The name to give this op.
  **kwargs: Named arguments forwarded to subclass implementation.

Returns:
  Tensor (structure), if this bijector is injective. If not injective,
  returns the k-tuple containing the unique k points (x1, ..., xk) such that
  g(xi) = y.

Raises:
  TypeError: if y's structured dtype is incompatible with the expected
    output dtype.
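A quick sketch of the forward/inverse round trip for a scalar affine
bijector, using the same names as the earlier sketches (expected values in
comments):

b = tfb.AffineScalar(shift=1., scale=2.)
b.forward(3.)  # ==> 2. * 3. + 1. = 7.
b.inverse(7.)  # ==> (7. - 1.) / 2. = 3.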
inverse_log_det_jacobian

Returns the log of the inverse Jacobian determinant, evaluated at y.
Note that forward_log_det_jacobian is the negative of this function,
evaluated at g^{-1}(y).
Args:
  y: Tensor (structure). The input to the 'inverse' Jacobian determinant
    evaluation.
  event_ndims: Number of dimensions in the probabilistic events being
    transformed. Must be greater than or equal to
    self.inverse_min_event_ndims. The result is summed over the final
    dimensions to produce a scalar Jacobian determinant for each event, i.e.
    it has shape rank(y) - event_ndims dimensions. Multipart bijectors
    require structured event_ndims, such that rank(y[i]) - rank(event_ndims[i])
    is the same for all elements i of the structured input. Furthermore, the
    first event_ndims[i] of each x[i].shape must be the same for all i
    (broadcasting is not allowed).
  name: The name to give this op.
  **kwargs: Named arguments forwarded to subclass implementation.
Returns:
  ildj: Tensor, if this bijector is injective. If not injective, returns the
    tuple of local log det Jacobians, log(det(Dg_i^{-1}(y))), where g_i is
    the restriction of g to the ith partition Di.

Raises:
  TypeError: if x's dtype is incompatible with the expected inverse-dtype.
  NotImplementedError: if _inverse_log_det_jacobian is not implemented.
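A brief sketch of how this relates to forward_log_det_jacobian for the same
bijector (same names as above; expected values in comments):

b = tfb.AffineScalar(shift=1., scale=2.)
y = b.forward(3.)                             # ==> 7.
b.inverse_log_det_jacobian(y, event_ndims=0)  # ==> -log(2.)
# Per the note above, this equals the negative forward term at g^{-1}(y):
-b.forward_log_det_jacobian(b.inverse(y), event_ndims=0)  # same value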
with_name_scope

@classmethod
with_name_scope(
    method
)

Decorator to automatically enter the module name scope.

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
Using the above module would produce tf.Variables and tf.Tensors whose
names include the module name:
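For illustration, a short usage sketch of the decorated module; the point is
the name prefix, which follows tf.Module's class-name-based default of
'my_module' (tf as imported in the earlier sketch):

mod = MyModule()
mod(tf.ones([1, 2]))
mod.w.name  # ==> 'my_module/Variable:0'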
__call__

Applies or composes the Bijector, depending on input type.

This is a convenience function which applies the Bijector instance in three
different ways, depending on the input:

1. If the input is a tfd.Distribution instance, return
   tfd.TransformedDistribution(distribution=input, bijector=self).
2. If the input is a tfb.Bijector instance, return tfb.Chain([self, input]).
3. Otherwise, return self.forward(input).
Args:
  value: A tfd.Distribution, tfb.Bijector, or a (structure of) Tensor.
  name: Python str name given to ops created by this function.
  **kwargs: Additional keyword arguments passed into the created
    tfd.TransformedDistribution, tfb.Bijector, or self.forward.

Returns:
  composition: A tfd.TransformedDistribution if the input was a
    tfd.Distribution, a tfb.Chain if the input was a tfb.Bijector, or a
    (structure of) Tensor computed by self.forward.
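A minimal sketch of the three call modes, continuing the earlier sketches
(tfb and tf as above, plus tfd = tfp.distributions):

tfd = tfp.distributions

b = tfb.AffineScalar(shift=1., scale=2.)

# Distribution input: wraps it in a TransformedDistribution.
dist = b(tfd.Normal(loc=0., scale=1.))

# Bijector input: composes into a Chain, equivalent to tfb.Chain([b, tfb.Exp()]).
chain = b(tfb.Exp())

# Tensor input: same as b.forward.
y = b(3.)  # ==> 7.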