# Module: tfp.bijectors

Bijective transformations.

Defined in `python/bijectors/__init__.py`.
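A bijector pairs an invertible transformation with the Jacobian terms needed for the change-of-variables formula, `log p_Y(y) = log p_X(g^{-1}(y)) + log |det J_{g^{-1}}(y)|`. As a minimal illustration (pure Python, not the TFP implementation), pushing a standard normal through `Exp` and applying this formula recovers the standard log-normal density:

```python
import math

def normal_logpdf(x):
    """Log density of the standard normal N(0, 1)."""
    return -0.5 * (x * x + math.log(2.0 * math.pi))

def transformed_logpdf(y):
    """Density of Y = exp(X), X ~ N(0, 1), via change of variables.

    g^{-1}(y) = log(y), and log|d/dy log(y)| = -log(y).
    """
    return normal_logpdf(math.log(y)) - math.log(y)

def lognormal_logpdf(y):
    """Standard log-normal log density, written out directly for comparison."""
    return -math.log(y) - 0.5 * (math.log(y) ** 2 + math.log(2.0 * math.pi))
```

Evaluating both at any `y > 0` gives identical values, which is exactly the bookkeeping the bijector classes below automate.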

## Classes

`class AbsoluteValue`: Computes `Y = g(X) = Abs(X)`, element-wise.

`class Affine`: Compute `Y = g(X; shift, scale) = scale @ X + shift`.

`class AffineLinearOperator`: Compute `Y = g(X; shift, scale) = scale @ X + shift`.

`class AffineScalar`: Compute `Y = g(X; shift, scale) = scale * X + shift`.

`class AutoregressiveLayer`: Masked Autoencoder for Distribution Estimation [Germain et al. (2015)].

`class AutoregressiveNetwork`: Masked Autoencoder for Distribution Estimation [Germain et al. (2015)].

`class BatchNormalization`: Compute `Y = g(X) s.t. X = g^-1(Y) = (Y - mean(Y)) / std(Y)`.

`class Bijector`: Interface for transformations of a `Distribution` sample.

`class Blockwise`: Bijector which applies a list of bijectors to blocks of a `Tensor`.

`class Chain`: Bijector which applies a sequence of bijectors.

`class CholeskyOuterProduct`: Compute `g(X) = X @ X.T`; X is lower-triangular, positive-diagonal matrix.

`class CholeskyToInvCholesky`: Maps the Cholesky factor of `M` to the Cholesky factor of `M^{-1}`.

`class ConditionalBijector`: Conditional Bijector is a Bijector that allows intrinsic conditioning.

`class Cumsum`: Computes the cumulative sum of a tensor along a specified axis.

`class DiscreteCosineTransform`: Compute `Y = g(X) = DCT(X)`, where DCT type is indicated by the `type` arg.

`class Exp`: Compute `Y = g(X) = exp(X)`.

`class Expm1`: Compute `Y = g(X) = exp(X) - 1`.

`class FillTriangular`: Transforms vectors to triangular matrices.

`class Gumbel`: Compute `Y = g(X) = exp(-exp(-(X - loc) / scale))`.

`class Identity`: Compute `Y = g(X) = X`.

`class Inline`: Bijector constructed from custom callables.

`class Invert`: Bijector which inverts another Bijector.

`class IteratedSigmoidCentered`: Bijector which applies a Stick Breaking procedure.

`class Kumaraswamy`: Compute `Y = g(X) = (1 - (1 - X)**(1 / b))**(1 / a), X in [0, 1]`.

`class MaskedAutoregressiveFlow`: Affine MaskedAutoregressiveFlow bijector.

`class MatrixInverseTriL`: Computes `g(L) = inv(L)`, where `L` is a lower-triangular matrix.

`class MatvecLU`: Matrix-vector multiply using LU decomposition.

`class NormalCDF`: Compute `Y = g(X) = NormalCDF(X)`.

`class Ordered`: Bijector which maps a tensor with increasing elements in the last dimension to an unconstrained tensor.

`class Permute`: Permutes the rightmost dimension of a `Tensor`.

`class PowerTransform`: Compute `Y = g(X) = (1 + X * c)**(1 / c), X >= -1 / c`.

`class RealNVP`: RealNVP "affine coupling layer" for vector-valued events.

`class Reciprocal`: A `Bijector` that computes the reciprocal `b(x) = 1. / x` entrywise.

`class Reshape`: Reshapes the `event_shape` of a `Tensor`.

`class ScaleTriL`: Transforms unconstrained vectors to TriL matrices with positive diagonal.

`class Sigmoid`: Bijector which computes `Y = g(X) = 1 / (1 + exp(-X))`.

`class SinhArcsinh`: Compute `Y = g(X) = Sinh( (Arcsinh(X) + skewness) * tailweight )`.

`class SoftmaxCentered`: Bijector which computes `Y = g(X) = exp([X 0]) / sum(exp([X 0]))`.

`class Softplus`: Bijector which computes `Y = g(X) = Log[1 + exp(X)]`.

`class Softsign`: Bijector which computes `Y = g(X) = X / (1 + |X|)`.

`class Square`: Compute `g(X) = X^2`; X is a positive real number.

`class Tanh`: Bijector that computes `Y = tanh(X)`, therefore `Y in (-1, 1)`.

`class TransformDiagonal`: Applies a Bijector to the diagonal of a matrix.

`class Transpose`: Compute `Y = g(X) = transpose_rightmost_dims(X, rightmost_perm)`.

`class Weibull`: Compute `Y = g(X) = 1 - exp(-(X / scale) ** concentration), X >= 0`.
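The core contract shared by the classes above is a `forward`/`inverse` pair plus log-det-Jacobian terms, and composition via `Chain` applies the list of bijectors right to left on `forward`. A minimal sketch of that interface in pure Python (class and method names mirror the TFP API, but this is an illustration, not the library's implementation):

```python
import math

class Exp:
    """Sketch of a scalar bijector: Y = g(X) = exp(X)."""
    def forward(self, x):
        return math.exp(x)
    def inverse(self, y):
        return math.log(y)
    def inverse_log_det_jacobian(self, y):
        # d/dy log(y) = 1/y, so log|det J| = -log(y).
        return -math.log(y)

class Chain:
    """Sketch of composition: forward applies the list right to left."""
    def __init__(self, bijectors):
        self.bijectors = bijectors
    def forward(self, x):
        for b in reversed(self.bijectors):
            x = b.forward(x)
        return x
    def inverse(self, y):
        for b in self.bijectors:
            y = b.inverse(y)
        return y
```

With this contract, `Invert` is just a bijector whose `forward` calls the wrapped bijector's `inverse`, and `TransformedDistribution` can compute densities from the `inverse_log_det_jacobian` terms alone.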

## Functions

`masked_autoregressive_default_template(...)`: Build the Masked Autoregressive Density Estimator (Germain et al., 2015).

`masked_dense(...)`: An autoregressively masked dense layer. Analogous to `tf.layers.dense`.

`real_nvp_default_template(...)`: Build a scale-and-shift function using a multi-layer neural network.
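The masking used by `masked_dense` and `masked_autoregressive_default_template` follows MADE (Germain et al., 2015): each input gets a degree, hidden units cycle through degrees, and a weight is kept only if it preserves autoregressive order, so output `i` can depend only on inputs with strictly smaller degree. A pure-Python sketch of the mask construction (the function name and degree assignment here are illustrative, not TFP's exact scheme):

```python
def made_masks(n_inputs, n_hidden):
    """Build MADE-style binary masks for one hidden layer.

    Input i gets degree i+1; hidden units cycle through degrees
    1..n_inputs-1. A hidden connection is kept iff the input degree
    is <= the hidden degree; an output connection is kept iff the
    hidden degree is < the output degree.
    """
    in_deg = list(range(1, n_inputs + 1))
    hid_deg = [(k % (n_inputs - 1)) + 1 for k in range(n_hidden)]
    # hidden_mask[i][j] == 1 iff input i may feed hidden unit j.
    hidden_mask = [[1 if in_deg[i] <= hid_deg[j] else 0
                    for j in range(n_hidden)] for i in range(n_inputs)]
    # out_mask[j][i] == 1 iff hidden unit j may feed output i.
    out_mask = [[1 if hid_deg[j] < in_deg[i] else 0
                 for i in range(n_inputs)] for j in range(n_hidden)]
    return hidden_mask, out_mask
```

Multiplying a dense layer's weights elementwise by these masks enforces the autoregressive property: the first output depends on nothing, and the last input influences no output through the hidden layer.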