
## Class `SoftClip`

Bijector that approximates clipping as a continuous, differentiable map.

Inherits From: `Bijector`

The `forward` method takes unconstrained scalar `x` to a value `y` in `[low, high]`. For values within the interval and far from the bounds (`low << x << high`), this mapping is approximately the identity mapping.

```
b = tfb.SoftClip(low=-10., high=10.)
b.forward([-15., -7., 1., 9., 20.])
# => [-9.993284, -6.951412, 0.9998932, 8.686738, 9.999954 ]
```
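
Since `SoftClip` is a bijector, the map is invertible; as a quick sketch (assuming the `tfb` alias used above, with approximate outputs), `inverse` recovers the unconstrained inputs up to floating-point error:

```
b = tfb.SoftClip(low=-10., high=10.)
y = b.forward([-15., -7., 1., 9., 20.])
b.inverse(y)
# => approximately [-15., -7., 1., 9., 20.]
```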

The softness of the clipping can be adjusted via the `hinge_softness` parameter. A sharp constraint (`hinge_softness < 1.0`) will approximate the identity mapping very well across almost all of its range, but may be numerically ill-conditioned at the boundaries. A soft constraint (`hinge_softness > 1.0`) corresponds to a smoother, better-conditioned mapping, but creates a larger distortion of its inputs.

```
b_hard = tfb.SoftClip(low=-10., high=10., hinge_softness=0.1)
b_hard.forward([-15., -7., 1., 9., 20.])
# => [-10., -7., 1., 8.999995, 10.]

b_soft = tfb.SoftClip(low=-10., high=10., hinge_softness=10.0)
b_soft.forward([-15., -7., 1., 9., 20.])
# => [-6.1985435, -3.369276, 0.16719627, 3.6655345, 7.1750355]
```

Note that the outputs are always in the interval `[low, high]`, regardless of the `hinge_softness`.

#### Example use

A trivial application of this bijector is to constrain the values sampled from a distribution:

```
dist = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.SoftClip(low=-5., high=5.))
samples = dist.sample(100)  # => samples guaranteed in [-5., 5.]
```
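
As a quick sanity check (a sketch; the exact extremes depend on the random draw), the samples stay strictly inside the bounds:

```
tf.reduce_min(samples), tf.reduce_max(samples)
# => both values lie strictly within (-5., 5.)
```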

A more useful application is to constrain the values considered during inference, preventing an inference algorithm from proposing values that cause numerical issues. For example, this model will return a `log_prob` of `NaN` when `z` is outside of the range `[-5., 5.]`:

```
dist = tfd.JointDistributionNamed({
  'z': tfd.Normal(0., 1.0),
  'x': lambda z: tfd.Normal(
      loc=tf.math.log(25 - z**2),  # Breaks if z >= 5 or z <= -5.
      scale=1.)})
```

Using SoftClip allows us to keep an inference algorithm in the feasible region without distorting the inference geometry by very much:

```
target_log_prob_fn = lambda z: dist.log_prob(z=z, x=3.) # Condition on x==3.
# Use SoftClip to ensure sampler stays within the numerically valid region.
mcmc_samples, _ = tfp.mcmc.sample_chain(
    kernel=tfp.mcmc.TransformedTransitionKernel(
        tfp.mcmc.HamiltonianMonteCarlo(
            target_log_prob_fn=target_log_prob_fn,
            num_leapfrog_steps=2,
            step_size=0.1),
        bijector=tfb.SoftClip(-5., 5.)),
    current_state=0.,
    num_results=100)
```

#### Mathematical Details

The constraint is built by using `softplus(x) = log(1 + exp(x))` as a smooth approximation to `max(x, 0)`. In combination with affine transformations, this can implement a constraint to any scalar interval.

In particular, translating `softplus` gives a generic lower bound constraint:

```
max(x, low) = max(x - low, 0) + low
~= softplus(x - low) + low
:= softlower(x)
```

Note that this quantity is always greater than `low` because `softplus` is positive-valued. We can also implement a soft upper bound:

```
min(x, high) = min(x - high, 0) + high
= -max(high - x, 0) + high
~= -softplus(high - x) + high
:= softupper(x)
```

which, similarly, is always less than `high`.

Composing these bounds as `softupper(softlower(x))` gives a quantity bounded above by `high`, and bounded below by `softupper(low)` (because `softupper` is monotonic and its input is bounded below by `low`). In general, we will have `softupper(low) < low`, so we need to shrink the interval slightly (by `(high - low) / (high - softupper(low))`) to preserve the lower bound. The two-sided constraint is therefore:

```
softclip(x) := (softupper(softlower(x)) - high) *
(high - low) / (high - softupper(low)) + high
= -softplus(high - low - softplus(x - low)) *
(high - low) / (softplus(high-low)) + high
```
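
The following NumPy sketch (the helper names `softplus`, `softlower`, `softupper`, and `softclip` are illustrative, with `hinge_softness` fixed at 1; this is not the library implementation) shows how these pieces compose:

```
import numpy as np

def softplus(x):
  # Numerically stable log(1 + exp(x)).
  return np.logaddexp(0., x)

def softlower(x, low):
  # Smooth approximation to max(x, low); always > low.
  return softplus(x - low) + low

def softupper(x, high):
  # Smooth approximation to min(x, high); always < high.
  return -softplus(high - x) + high

def softclip(x, low, high):
  # Compose both bounds, then rescale so the output interval still reaches down to `low`.
  scale = (high - low) / (high - softupper(low, high))
  return (softupper(softlower(x, low), high) - high) * scale + high

softclip(np.array([-15., -7., 1., 9., 20.]), -10., 10.)
# => approximately [-9.993, -6.951, 0.9999, 8.687, 9.99995],
#    matching the `tfb.SoftClip` example above.
```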

Due to this rescaling, the bijector can be mildly asymmetric: values at equal distances from the two endpoints are mapped to values at slightly unequal distances from them; for example,

```
b = tfb.SoftClip(-1., 1.)
b.forward([-0.5, 0.5])
# => [-0.2527727, 0.19739306]
```

The degree of the asymmetry is proportional to the size of the rescaling correction, i.e., the extent to which `softupper` fails to be the identity map at the lower end of the interval. This is maximized when the upper and lower bounds are very close together relative to the hinge softness, as in the example above. Conversely, when the interval is wide, the required correction and asymmetry are very small.
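
For instance, with a wider interval the mapping is nearly symmetric (a sketch; outputs approximate):

```
b = tfb.SoftClip(-10., 10.)
b.forward([-0.5, 0.5])
# => approximately [-0.49995, 0.49995]
```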

`__init__`

```
__init__(
low=None,
high=None,
hinge_softness=None,
validate_args=False,
name='soft_clip'
)
```

Instantiates the SoftClip bijector.

#### Args:

* `low`: Optional float `Tensor` lower bound. If `None`, the lower-bound constraint is omitted. Default value: `None`.
* `high`: Optional float `Tensor` upper bound. If `None`, the upper-bound constraint is omitted. Default value: `None`.
* `hinge_softness`: Optional nonzero float `Tensor`. Controls the softness of the constraint at the boundaries; values outside of the constraint set are mapped into intervals of width approximately `log(2) * hinge_softness` on the interior of each boundary. High softness reserves more space for values outside of the constraint set, leading to greater distortion of inputs *within* the constraint set, but improved numerical stability near the boundaries. Default value: `None` (`1.0`).
* `validate_args`: Python `bool` indicating whether arguments should be checked for correctness.
* `name`: Python `str` name given to ops managed by this object.
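
Both bounds are optional; as a small sketch (approximate behavior), passing only `high` yields a one-sided constraint:

```
b_upper_only = tfb.SoftClip(high=3.)
b_upper_only.forward([0., 10.])
# => both outputs are strictly less than 3.
```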

## Properties

`dtype`

dtype of `Tensor`s transformable by this bijector.

`forward_min_event_ndims`

Returns the minimal number of dimensions bijector.forward operates on.

`graph_parents`

Returns this `Bijector`'s graph_parents as a Python list.

`high`

`hinge_softness`

`inverse_min_event_ndims`

Returns the minimal number of dimensions bijector.inverse operates on.

`is_constant_jacobian`

Returns true iff the Jacobian matrix is not a function of x.

#### Returns:

* `is_constant_jacobian`: Python `bool`.

`low`

`name`

Returns the string name of this `Bijector`.

`name_scope`

Returns a `tf.name_scope` instance for this class.

`parameters`

Dictionary of parameters used to instantiate this `Bijector`.

`submodules`

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

```
a = tf.Module()
b = tf.Module()
c = tf.Module()
a.b = b
b.c = c
list(a.submodules) == [b, c]  # ==> True
list(b.submodules) == [c]     # ==> True
list(c.submodules) == []      # ==> True
```

#### Returns:

A sequence of all submodules.

`trainable_variables`

Sequence of trainable variables owned by this module and its submodules.

#### Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

`validate_args`

Returns True if Tensor arguments will be validated.

`variables`

Sequence of variables owned by this module and its submodules.

#### Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

## Methods

`__call__`

```
__call__(
value,
name=None,
**kwargs
)
```

Applies or composes the `Bijector`, depending on input type.

This is a convenience function which applies the `Bijector` instance in three different ways, depending on the input:

- If the input is a `tfd.Distribution` instance, return `tfd.TransformedDistribution(distribution=input, bijector=self)`.
- If the input is a `tfb.Bijector` instance, return `tfb.Chain([self, input])`.
- Otherwise, return `self.forward(input)`.
#### Args:

* `value`: A `tfd.Distribution`, `tfb.Bijector`, or a `Tensor`.
* `name`: Python `str` name given to ops created by this function.
* `**kwargs`: Additional keyword arguments passed into the created `tfd.TransformedDistribution`, `tfb.Bijector`, or `self.forward`.

#### Returns:

* `composition`: A `tfd.TransformedDistribution` if the input was a `tfd.Distribution`, a `tfb.Chain` if the input was a `tfb.Bijector`, or a `Tensor` computed by `self.forward`.

#### Examples

```
sigmoid = tfb.Reciprocal()(
    tfb.AffineScalar(shift=1.)(
      tfb.Exp()(
        tfb.AffineScalar(scale=-1.))))
# ==> `tfb.Chain([
#        tfb.Reciprocal(),
#        tfb.AffineScalar(shift=1.),
#        tfb.Exp(),
#        tfb.AffineScalar(scale=-1.),
#      ])`  # ie, `tfb.Sigmoid()`
log_normal = tfb.Exp()(tfd.Normal(0, 1))
# ==> `tfd.TransformedDistribution(tfd.Normal(0, 1), tfb.Exp())`
tfb.Exp()([-1., 0., 1.])
# ==> tf.exp([-1., 0., 1.])
```

`forward`

```
forward(
x,
name='forward',
**kwargs
)
```

Returns the forward `Bijector` evaluation, i.e., `Y = g(X)`.

#### Args:

* `x`: `Tensor`. The input to the 'forward' evaluation.
* `name`: The name to give this op.
* `**kwargs`: Named arguments forwarded to subclass implementation.

#### Returns:

`Tensor`.

#### Raises:

* `TypeError`: if `self.dtype` is specified and `x.dtype` is not `self.dtype`.
* `NotImplementedError`: if `_forward` is not implemented.

`forward_event_shape`

```
forward_event_shape(input_shape)
```

Shape of a single sample from a single batch as a `TensorShape`.

Same meaning as `forward_event_shape_tensor`. May be only partially defined.

#### Args:

* `input_shape`: `TensorShape` indicating event-portion shape passed into `forward` function.

#### Returns:

* `forward_event_shape_tensor`: `TensorShape` indicating event-portion shape after applying `forward`. Possibly unknown.

`forward_event_shape_tensor`

```
forward_event_shape_tensor(
input_shape,
name='forward_event_shape_tensor'
)
```

Shape of a single sample from a single batch as an `int32` 1D `Tensor`.

#### Args:

* `input_shape`: `Tensor`, `int32` vector indicating event-portion shape passed into `forward` function.
* `name`: name to give to the op.

#### Returns:

* `forward_event_shape_tensor`: `Tensor`, `int32` vector indicating event-portion shape after applying `forward`.

`forward_log_det_jacobian`

```
forward_log_det_jacobian(
x,
event_ndims,
name='forward_log_det_jacobian',
**kwargs
)
```

Returns the forward_log_det_jacobian.

#### Args:

* `x`: `Tensor`. The input to the 'forward' Jacobian determinant evaluation.
* `event_ndims`: Number of dimensions in the probabilistic events being transformed. Must be greater than or equal to `self.forward_min_event_ndims`. The result is summed over the final dimensions to produce a scalar Jacobian determinant for each event, i.e. it has shape `rank(x) - event_ndims` dimensions.
* `name`: The name to give this op.
* `**kwargs`: Named arguments forwarded to subclass implementation.

#### Returns:

`Tensor`, if this bijector is injective. If not injective, this is not implemented.

#### Raises:

* `TypeError`: if `self.dtype` is specified and `x.dtype` is not `self.dtype`.
* `NotImplementedError`: if neither `_forward_log_det_jacobian` nor {`_inverse`, `_inverse_log_det_jacobian`} are implemented, or this is a non-injective bijector.
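
A small sketch of how `event_ndims` controls the reduction (illustrative shapes, using a scalar bijector such as `SoftClip`):

```
b = tfb.SoftClip(low=-5., high=5.)
x = tf.zeros([3, 4])
b.forward_log_det_jacobian(x, event_ndims=0).shape  # => [3, 4]
b.forward_log_det_jacobian(x, event_ndims=1).shape  # => [3]  (summed over the last axis)
b.forward_log_det_jacobian(x, event_ndims=2).shape  # => []   (summed over both axes)
```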

`inverse`

```
inverse(
y,
name='inverse',
**kwargs
)
```

Returns the inverse `Bijector` evaluation, i.e., `X = g^{-1}(Y)`.

#### Args:

* `y`: `Tensor`. The input to the 'inverse' evaluation.
* `name`: The name to give this op.
* `**kwargs`: Named arguments forwarded to subclass implementation.

#### Returns:

`Tensor`, if this bijector is injective. If not injective, returns the k-tuple containing the unique `k` points `(x1, ..., xk)` such that `g(xi) = y`.

#### Raises:

* `TypeError`: if `self.dtype` is specified and `y.dtype` is not `self.dtype`.
* `NotImplementedError`: if `_inverse` is not implemented.

`inverse_event_shape`

```
inverse_event_shape(output_shape)
```

Shape of a single sample from a single batch as a `TensorShape`.

Same meaning as `inverse_event_shape_tensor`. May be only partially defined.

#### Args:

* `output_shape`: `TensorShape` indicating event-portion shape passed into `inverse` function.

#### Returns:

* `inverse_event_shape_tensor`: `TensorShape` indicating event-portion shape after applying `inverse`. Possibly unknown.

`inverse_event_shape_tensor`

```
inverse_event_shape_tensor(
output_shape,
name='inverse_event_shape_tensor'
)
```

Shape of a single sample from a single batch as an `int32` 1D `Tensor`.

#### Args:

* `output_shape`: `Tensor`, `int32` vector indicating event-portion shape passed into `inverse` function.
* `name`: name to give to the op.

#### Returns:

* `inverse_event_shape_tensor`: `Tensor`, `int32` vector indicating event-portion shape after applying `inverse`.

`inverse_log_det_jacobian`

```
inverse_log_det_jacobian(
y,
event_ndims,
name='inverse_log_det_jacobian',
**kwargs
)
```

Returns the (log o det o Jacobian o inverse)(y).

Mathematically, returns: `log(det(dX/dY))(Y)`. (Recall that: `X = g^{-1}(Y)`.)

Note that `forward_log_det_jacobian` is the negative of this function, evaluated at `g^{-1}(y)`.
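
A minimal sketch of this identity (approximate values), using `SoftClip` as an example:

```
b = tfb.SoftClip(low=-5., high=5.)
y = tf.constant(2.)
ildj = b.inverse_log_det_jacobian(y, event_ndims=0)
fldj = b.forward_log_det_jacobian(b.inverse(y), event_ndims=0)
ildj + fldj
# => approximately 0.
```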

#### Args:

* `y`: `Tensor`. The input to the 'inverse' Jacobian determinant evaluation.
* `event_ndims`: Number of dimensions in the probabilistic events being transformed. Must be greater than or equal to `self.inverse_min_event_ndims`. The result is summed over the final dimensions to produce a scalar Jacobian determinant for each event, i.e. it has shape `rank(y) - event_ndims` dimensions.
* `name`: The name to give this op.
* `**kwargs`: Named arguments forwarded to subclass implementation.

#### Returns:

* `ildj`: `Tensor`, if this bijector is injective. If not injective, returns the tuple of local log det Jacobians, `log(det(Dg_i^{-1}(y)))`, where `g_i` is the restriction of `g` to the `i`th partition `Di`.

#### Raises:

* `TypeError`: if `self.dtype` is specified and `y.dtype` is not `self.dtype`.
* `NotImplementedError`: if `_inverse_log_det_jacobian` is not implemented.

`with_name_scope`

```
@classmethod
with_name_scope(
cls,
method
)
```

Decorator to automatically enter the module name scope.

```
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
```

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

```
mod = MyModule()
mod(tf.ones([1, 2]))
# ==> <tf.Tensor: shape=(1, 3), dtype=float32, numpy=...>
mod.w
# ==> <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=...>
```

#### Args:

* `method`: The method to wrap.

#### Returns:

The original method wrapped such that it enters the module's name scope.