Bijector that approximates clipping as a continuous, differentiable map.
Inherits From: Bijector
tfp.substrates.numpy.bijectors.SoftClip(
    low=None, high=None, hinge_softness=None, validate_args=False,
    name='soft_clip'
)
The forward method takes an unconstrained scalar x to a value y in [low, high]. For values within the interval and far from the bounds (low << x << high), this mapping is approximately the identity mapping.
b = tfb.SoftClip(low=-10., high=10.)
b.forward([-15., -7., 1., 9., 20.])
# => [-9.993284, -6.951412, 0.9998932, 8.686738, 9.999954 ]
The softness of the clipping can be adjusted via the hinge_softness parameter. A sharp constraint (hinge_softness < 1.0) will approximate the identity mapping very well across almost all of its range, but may be numerically ill-conditioned at the boundaries. A soft constraint (hinge_softness > 1.0) corresponds to a smoother, better-conditioned mapping, but creates a larger distortion of its inputs.
b_hard = tfb.SoftClip(low=-10., high=10., hinge_softness=0.1)
b_hard.forward([-15., -7., 1., 9., 20.])
# => [-10., -7., 1., 8.999995, 10.]
b_soft = tfb.SoftClip(low=-10., high=10., hinge_softness=10.0)
b_soft.forward([-15., -7., 1., 9., 20.])
# => [-6.1985435, -3.369276, 0.16719627, 3.6655345, 7.1750355]
Note that the outputs are always in the interval [low, high], regardless of the hinge_softness.
Example use
A trivial application of this bijector is to constrain the values sampled from a distribution:
dist = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.SoftClip(low=-5., high=5.))
samples = dist.sample(100)  # => samples guaranteed in [-5., 5.]
A more useful application is to constrain the values considered during inference, preventing an inference algorithm from proposing values that cause numerical issues. For example, this model will return a log_prob of NaN when z is outside of the range [-5., 5.]:
dist = tfd.JointDistributionNamed({
    'z': tfd.Normal(0., 1.0),
    'x': lambda z: tfd.Normal(
        loc=tf.math.log(25 - z**2),  # Breaks if z >= 5 or z <= -5.
        scale=1.)})
Using SoftClip allows us to keep an inference algorithm in the feasible region without distorting the inference geometry by very much:
target_log_prob_fn = lambda z: dist.log_prob(z=z, x=3.)  # Condition on x==3.

# Use SoftClip to ensure the sampler stays within the numerically valid region.
mcmc_samples = tfp.mcmc.sample_chain(
    kernel=tfp.mcmc.TransformedTransitionKernel(
        tfp.mcmc.HamiltonianMonteCarlo(
            target_log_prob_fn=target_log_prob_fn,
            num_leapfrog_steps=2,
            step_size=0.1),
        bijector=tfb.SoftClip(-5., 5.)),
    trace_fn=None,
    current_state=0.,
    num_results=100)
Mathematical Details
The constraint is built by using softplus(x) = log(1 + exp(x)) as a smooth approximation to max(x, 0). In combination with affine transformations, this can implement a constraint to any scalar interval.
In particular, translating softplus gives a generic lower bound constraint:

max(x, low) = max(x - low, 0) + low
           ~= softplus(x - low) + low
           := softlower(x)
Note that this quantity is always greater than low because softplus is positive-valued. We can also implement a soft upper bound:

min(x, high) = min(x - high, 0) + high
             = -max(high - x, 0) + high
            ~= -softplus(high - x) + high
            := softupper(x)

which, similarly, is always less than high.
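To make these pieces concrete, here is a minimal NumPy sketch of softlower and softupper (the helper names follow the derivation above; hinge_softness is fixed at 1):

import numpy as np

def softplus(x):
    # Numerically stable log(1 + exp(x)).
    return np.logaddexp(0., x)

def softlower(x, low):
    # Smooth approximation to max(x, low); output is always > low.
    return softplus(x - low) + low

def softupper(x, high):
    # Smooth approximation to min(x, high); output is always < high.
    return -softplus(high - x) + high

x = np.array([-15., -7., 1., 9., 20.])
print(softlower(x, low=-10.))   # every entry is > -10
print(softupper(x, high=10.))   # every entry is < 10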
Composing these bounds as softupper(softlower(x)) gives a quantity bounded above by high, and bounded below by softupper(low) (because softupper is monotonic and its input is bounded below by low). In general, we will have softupper(low) < low, so we need to shrink the interval slightly (by (high - low) / (high - softupper(low))) to preserve the lower bound.
The two-sided constraint is therefore:

softclip(x) := (softupper(softlower(x)) - high) *
                 (high - low) / (high - softupper(low)) + high
             = -softplus(high - low - softplus(x - low)) *
                 (high - low) / softplus(high - low) + high
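As a sanity check, here is a direct NumPy transcription of this formula (a sketch with hinge_softness fixed at its default of 1); it should reproduce the bijector's forward output up to floating-point error:

import numpy as np
from tensorflow_probability.substrates import numpy as tfp
tfb = tfp.bijectors

def softplus(x):
    return np.logaddexp(0., x)  # log(1 + exp(x))

def softclip(x, low, high):
    # -softplus(high - low - softplus(x - low))
    #     * (high - low) / softplus(high - low) + high
    return (-softplus(high - low - softplus(x - low))
            * (high - low) / softplus(high - low) + high)

x = np.array([-15., -7., 1., 9., 20.])
print(softclip(x, low=-10., high=10.))
print(tfb.SoftClip(low=-10., high=10.).forward(x))  # should match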
Due to this rescaling, the bijector can be mildly asymmetric. Values of equal distance from the endpoints are mapped to values with slightly unequal distance from the endpoints; for example:

b = tfb.SoftClip(-1., 1.)
b.forward([-0.5, 0.5])
# => [-0.2527727 , 0.19739306]
The degree of the asymmetry is proportional to the size of the rescaling
correction, i.e., the extent to which softupper
fails to be the identity
map at the lower end of the interval. This is maximized when the upper and
lower bounds are very close together relative to the hinge softness, as in
the example above. Conversely, when the interval is wide, the required
correction and asymmetry are very small.
Args | |
---|---|
low | Optional float Tensor lower bound. If None, the lower-bound constraint is omitted. Default value: None.
high | Optional float Tensor upper bound. If None, the upper-bound constraint is omitted. Default value: None.
hinge_softness | Optional nonzero float Tensor. Controls the softness of the constraint at the boundaries; values outside of the constraint set are mapped into intervals of width approximately log(2) * hinge_softness on the interior of each boundary. High softness reserves more space for values outside of the constraint set, leading to greater distortion of inputs within the constraint set, but improved numerical stability near the boundaries. Default value: None (i.e., 1.0).
validate_args | Python bool indicating whether arguments should be checked for correctness.
name | Python str name given to ops managed by this object.
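To illustrate the hinge_softness description above, a small sketch: every input above high is squeezed into an interval just inside the boundary, whose width is roughly log(2) * hinge_softness (the approximation is best when the interval is wide relative to the softness):

import numpy as np
from tensorflow_probability.substrates import numpy as tfp
tfb = tfp.bijectors

for softness in [0.1, 1., 10.]:
    b = tfb.SoftClip(low=-10., high=10., hinge_softness=softness)
    # All x > high land between b.forward(high) and high, so this width
    # bounds the room reserved for out-of-range inputs at the boundary.
    width = 10. - b.forward(10.)
    print(softness, float(width), np.log(2.) * softness)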
Attributes | |
---|---|
dtype |
forward_min_event_ndims | Returns the minimal number of dimensions bijector.forward operates on. Multipart bijectors return structured ndims.
graph_parents | Returns this Bijector's graph_parents as a Python list.
has_static_min_event_ndims | Returns True if the bijector has statically-known min_event_ndims.
high |
hinge_softness |
inverse_min_event_ndims | Returns the minimal number of dimensions bijector.inverse operates on. Multipart bijectors return structured ndims.
is_constant_jacobian | Returns true iff the Jacobian matrix is not a function of x.
low |
name | Returns the string name of this Bijector.
parameters | Dictionary of parameters used to instantiate this Bijector.
trainable_variables |
validate_args | Returns True if Tensor arguments will be validated.
variables |
Methods
forward
forward(
    x, name='forward', **kwargs
)
Returns the forward Bijector evaluation, i.e., Y = g(X).
Args | |
---|---|
x | Tensor (structure). The input to the 'forward' evaluation.
name | The name to give this op.
**kwargs | Named arguments forwarded to subclass implementation.

Returns | |
---|---|
Tensor (structure). |

Raises | |
---|---|
TypeError | if self.dtype is specified and x.dtype is not self.dtype.
NotImplementedError | if _forward is not implemented.
forward_dtype
forward_dtype(
    dtype=UNSPECIFIED, name='forward_dtype', **kwargs
)
Returns the dtype returned by forward for the provided input.
forward_event_ndims
forward_event_ndims(
    event_ndims, **kwargs
)
Returns the number of event dimensions produced by forward.
forward_event_shape
forward_event_shape(
    input_shape
)
Shape of a single sample from a single batch as a TensorShape.
Same meaning as forward_event_shape_tensor. May be only partially defined.
Args | |
---|---|
input_shape | TensorShape (structure) indicating event-portion shape passed into forward function.

Returns | |
---|---|
forward_event_shape_tensor | TensorShape (structure) indicating event-portion shape after applying forward. Possibly unknown.
forward_event_shape_tensor
forward_event_shape_tensor(
    input_shape, name='forward_event_shape_tensor'
)
Shape of a single sample from a single batch as an int32 1D Tensor.
Args | |
---|---|
input_shape | Tensor, int32 vector (structure) indicating event-portion shape passed into forward function.
name | name to give to the op.

Returns | |
---|---|
forward_event_shape_tensor | Tensor, int32 vector (structure) indicating event-portion shape after applying forward.
forward_log_det_jacobian
forward_log_det_jacobian(
    x, event_ndims, name='forward_log_det_jacobian', **kwargs
)
Returns the forward_log_det_jacobian, i.e., log(det(dY/dX))(X).
Args | |
---|---|
x | Tensor (structure). The input to the 'forward' Jacobian determinant evaluation.
event_ndims | Number of dimensions in the probabilistic events being transformed. Must be greater than or equal to self.forward_min_event_ndims. The result is summed over the final dimensions to produce a scalar Jacobian determinant for each event, i.e. it has shape rank(x) - event_ndims dimensions. Multipart bijectors require structured event_ndims, such that rank(x[i]) - rank(event_ndims[i]) is the same for all elements i of the structured input. Furthermore, the first event_ndims[i] of each x[i].shape must be the same for all i (broadcasting is not allowed).
name | The name to give this op.
**kwargs | Named arguments forwarded to subclass implementation.

Returns | |
---|---|
Tensor (structure), if this bijector is injective. If not injective this is not implemented. |

Raises | |
---|---|
TypeError | if x's dtype is incompatible with the expected output dtype.
NotImplementedError | if neither _forward_log_det_jacobian nor {_inverse, _inverse_log_det_jacobian} are implemented, or this is a non-injective bijector.
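For a scalar bijector like SoftClip, a short sketch of the event_ndims behavior:

import numpy as np
from tensorflow_probability.substrates import numpy as tfp
tfb = tfp.bijectors

b = tfb.SoftClip(low=-10., high=10.)
x = np.array([[-7., 1., 9.]])
# event_ndims=0: one log det Jacobian term per element; shape [1, 3].
print(b.forward_log_det_jacobian(x, event_ndims=0))
# event_ndims=1: terms summed over the last axis; shape [1].
print(b.forward_log_det_jacobian(x, event_ndims=1))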
inverse
inverse(
    y, name='inverse', **kwargs
)
Returns the inverse Bijector evaluation, i.e., X = g^{-1}(Y).
Args | |
---|---|
y | Tensor (structure). The input to the 'inverse' evaluation.
name | The name to give this op.
**kwargs | Named arguments forwarded to subclass implementation.

Returns | |
---|---|
Tensor (structure), if this bijector is injective. If not injective, returns the k-tuple containing the unique k points (x1, ..., xk) such that g(xi) = y. |

Raises | |
---|---|
TypeError | if y's structured dtype is incompatible with the expected output dtype.
NotImplementedError | if _inverse is not implemented.
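Since SoftClip is injective, inverse simply recovers the original input; a quick sketch:

import numpy as np
from tensorflow_probability.substrates import numpy as tfp
tfb = tfp.bijectors

b = tfb.SoftClip(low=-10., high=10.)
x = np.array([-7., 1., 9.])
y = b.forward(x)
print(b.inverse(y))  # ~ [-7., 1., 9.], up to floating-point error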
inverse_dtype
inverse_dtype(
    dtype=UNSPECIFIED, name='inverse_dtype', **kwargs
)
Returns the dtype returned by inverse for the provided input.
inverse_event_ndims
inverse_event_ndims(
    event_ndims, **kwargs
)
Returns the number of event dimensions produced by inverse.
inverse_event_shape
inverse_event_shape(
    output_shape
)
Shape of a single sample from a single batch as a TensorShape.
Same meaning as inverse_event_shape_tensor. May be only partially defined.
Args | |
---|---|
output_shape | TensorShape (structure) indicating event-portion shape passed into inverse function.

Returns | |
---|---|
inverse_event_shape_tensor | TensorShape (structure) indicating event-portion shape after applying inverse. Possibly unknown.
inverse_event_shape_tensor
inverse_event_shape_tensor(
    output_shape, name='inverse_event_shape_tensor'
)
Shape of a single sample from a single batch as an int32 1D Tensor.
Args | |
---|---|
output_shape | Tensor, int32 vector (structure) indicating event-portion shape passed into inverse function.
name | name to give to the op.

Returns | |
---|---|
inverse_event_shape_tensor | Tensor, int32 vector (structure) indicating event-portion shape after applying inverse.
inverse_log_det_jacobian
inverse_log_det_jacobian(
    y, event_ndims, name='inverse_log_det_jacobian', **kwargs
)
Returns the (log o det o Jacobian o inverse)(y).
Mathematically, returns: log(det(dX/dY))(Y). (Recall that: X = g^{-1}(Y).)
Note that forward_log_det_jacobian is the negative of this function, evaluated at g^{-1}(y).
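A quick numerical check of that identity (a sketch):

import numpy as np
from tensorflow_probability.substrates import numpy as tfp
tfb = tfp.bijectors

b = tfb.SoftClip(low=-10., high=10.)
x = np.array([-7., 1., 9.])
y = b.forward(x)
# inverse_log_det_jacobian(y) == -forward_log_det_jacobian(x) at x = g^{-1}(y).
print(b.inverse_log_det_jacobian(y, event_ndims=0))
print(-b.forward_log_det_jacobian(x, event_ndims=0))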
Args | |
---|---|
y | Tensor (structure). The input to the 'inverse' Jacobian determinant evaluation.
event_ndims | Number of dimensions in the probabilistic events being transformed. Must be greater than or equal to self.inverse_min_event_ndims. The result is summed over the final dimensions to produce a scalar Jacobian determinant for each event, i.e. it has shape rank(y) - event_ndims dimensions. Multipart bijectors require structured event_ndims, such that rank(y[i]) - rank(event_ndims[i]) is the same for all elements i of the structured input. Furthermore, the first event_ndims[i] of each y[i].shape must be the same for all i (broadcasting is not allowed).
name | The name to give this op.
**kwargs | Named arguments forwarded to subclass implementation.

Returns | |
---|---|
ildj | Tensor, if this bijector is injective. If not injective, returns the tuple of local log det Jacobians, log(det(Dg_i^{-1}(y))), where g_i is the restriction of g to the ith partition Di.

Raises | |
---|---|
TypeError | if y's dtype is incompatible with the expected inverse-dtype.
NotImplementedError | if _inverse_log_det_jacobian is not implemented.
__call__
__call__(
    value, name=None, **kwargs
)
Applies or composes the Bijector, depending on input type.
This is a convenience function which applies the Bijector instance in three different ways, depending on the input:
- If the input is a tfd.Distribution instance, return tfd.TransformedDistribution(distribution=input, bijector=self).
- If the input is a tfb.Bijector instance, return tfb.Chain([self, input]).
- Otherwise, return self.forward(input).
Args | |
---|---|
value | A tfd.Distribution, tfb.Bijector, or a (structure of) Tensor.
name | Python str name given to ops created by this function.
**kwargs | Additional keyword arguments passed into the created tfd.TransformedDistribution, tfb.Bijector, or self.forward.

Returns | |
---|---|
composition | A tfd.TransformedDistribution if the input was a tfd.Distribution, a tfb.Chain if the input was a tfb.Bijector, or a (structure of) Tensor computed by self.forward.
Examples
sigmoid = tfb.Reciprocal()(
    tfb.Shift(shift=1.)(
        tfb.Exp()(
            tfb.Scale(scale=-1.))))
# ==> `tfb.Chain([
#        tfb.Reciprocal(),
#        tfb.Shift(shift=1.),
#        tfb.Exp(),
#        tfb.Scale(scale=-1.),
#      ])`  # i.e., `tfb.Sigmoid()`
log_normal = tfb.Exp()(tfd.Normal(0, 1))
# ==> `tfd.TransformedDistribution(tfd.Normal(0, 1), tfb.Exp())`
tfb.Exp()([-1., 0., 1.])
# ==> tf.exp([-1., 0., 1.])
__eq__
__eq__(
    other
)
Return self==value.