# tfp.experimental.substrates.jax.experimental.inference_gym.targets.BayesianModel

Base class for Bayesian models in the Inference Gym.

```python
tfp.experimental.substrates.jax.experimental.inference_gym.targets.BayesianModel(
    default_event_space_bijector, event_shape, dtype, name, pretty_name,
    sample_transformations
)
```

Given a Bayesian model described by a joint distribution `P(x, y)` which we can sample from, we construct the posterior by conditioning the joint model on evidence `y`. The posterior distribution `P(x | y)` is represented as a product of the inverse normalization constant and the un-normalized density: `1/Z tilde{P}(x | y)`. Note that as a special case the evidence is allowed to be empty, in which case both the joint and the posterior are just `P(x)`.

Given a Bayesian model conditioned on evidence, you can access the associated un-normalized density via the `unnormalized_log_prob` method.
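As a concrete, pure-Python sketch of this idea (not the TFP API; the Exponential prior, Normal likelihood, and evidence value below are illustrative assumptions), the un-normalized posterior log density is simply the joint log density with the evidence pinned to its observed value:

```python
import math

def exponential_log_prob(s, rate=1.0):
    # Log density of an Exponential(rate) prior at s > 0.
    return math.log(rate) - rate * s

def normal_log_prob(x, loc, scale):
    # Log density of a Normal(loc, scale) likelihood at x.
    return (-0.5 * ((x - loc) / scale) ** 2
            - math.log(scale) - 0.5 * math.log(2.0 * math.pi))

def joint_log_prob(s, y):
    # log P(s, y) = log P(s) + log P(y | s).
    return exponential_log_prob(s) + normal_log_prob(y, 0.0, s)

EVIDENCE = 1.0  # the observed y

def unnormalized_log_prob(s):
    # log tilde{P}(s | y): the joint with y fixed; the log Z term is dropped.
    return joint_log_prob(s, EVIDENCE)
```

Normalizing this density would require the (often intractable) constant `Z`, which is why the Inference Gym exposes only the un-normalized form.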

The dtype, shape, and constraints over `x` are returned by the `dtype`, `event_shape`, and `default_event_space_bijector` properties. Note that `x` may be structured, in which case `dtype` and `event_shape` will be structured in parallel. `default_event_space_bijector` need not be structured, but it can operate on the structured `x`. A generic way to construct a random value within the event space of this model is:

```python
model = LogisticRegression(...)
unconstrained_values = tf.nest.map_structure(
    # `seed` is a required argument of `tf.random.stateless_normal`.
    lambda d, s: tf.random.stateless_normal(s, seed=[0, 0], dtype=d),
    model.dtype,
    model.event_shape,
)
constrained_values = tf.nest.map_structure_up_to(
    model.default_event_space_bijector,
    lambda b, v: b(v),
    model.default_event_space_bijector,
    unconstrained_values,
)
```

A model has two names. First, the `name` property is used for various name scopes inside the implementation of the model. Second, the pretty name, accessed via the `__str__` method, is meant to be suitable for a table in a publication.

Models come with associated sample transformations, which describe useful ways of looking at the samples from the posterior distribution. Each transformation optionally comes equipped with various ground truth values (computed analytically or via Monte Carlo averages). You can apply the transformations to samples from the model like so:

```python
model = LogisticRegression(...)
for name, sample_transformation in model.sample_transformations.items():
  transformed_samples = sample_transformation(samples)
  if sample_transformation.ground_truth_mean is not None:
    square_diff = tf.nest.map_structure(
        lambda gtm, sm: (gtm - tf.reduce_mean(sm, axis=0))**2,
        sample_transformation.ground_truth_mean,
        transformed_samples,
    )
```
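A pure-Python analogue of the loop above shows what the squared difference measures: the discrepancy between the mean of the transformed samples and the known ground truth. The toy sample values and ground-truth mean here are assumed for illustration, not produced by an actual Inference Gym model.

```python
# Toy posterior draws and an assumed ground-truth mean.
samples = [0.5, 1.5, 1.0, 1.0]
ground_truth_mean = 1.0

identity = lambda x: x  # mirrors the 'identity' sample transformation

transformed_samples = [identity(s) for s in samples]
sample_mean = sum(transformed_samples) / len(transformed_samples)
square_diff = (ground_truth_mean - sample_mean) ** 2
```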

#### Examples

A simple 2-variable Bayesian model:

```python
class SimpleModel(gym.targets.BayesianModel):

  def __init__(self):
    self._joint_distribution_val = tfd.JointDistributionSequential([
        tfd.Exponential(1.),
        lambda s: tfd.Normal(0., s),
    ])
    self._evidence_val = 1.

    super(SimpleModel, self).__init__(
        default_event_space_bijector=tfb.Exp(),
        event_shape=self._joint_distribution_val.event_shape,
        dtype=self._joint_distribution_val.dtype,
        name='simple_model',
        pretty_name='SimpleModel',
        sample_transformations=dict(
            identity=gym.targets.BayesianModel.SampleTransformation(
                fn=lambda x: x,
                pretty_name='Identity',
            ),
        ),
    )

  def _joint_distribution(self):
    return self._joint_distribution_val

  def _evidence(self):
    return self._evidence_val

  def _unnormalized_log_prob(self, x):
    return self.joint_distribution().log_prob([x, self.evidence()])
```

Note how we first constructed a joint distribution and then used its properties to specify the Bayesian model. We don't specify ground truth values for the `identity` sample transformation because they are not known analytically. See the `GermanCreditNumericLogisticRegression` Bayesian model for an example of how to incorporate Monte Carlo-derived ground truth values into a sample transformation.

#### Args:

• `default_event_space_bijector`: A (nest of) bijectors that take unconstrained `R**n` tensors to the event space of the posterior.
• `event_shape`: A (nest of) shapes describing the samples from the posterior.
• `dtype`: A (nest of) dtypes describing the dtype of the posterior.
• `name`: Python `str` name prefixed to Ops created by this class.
• `pretty_name`: A Python `str`. The pretty name of this model.
• `sample_transformations`: A dictionary of Python strings to `SampleTransformation`s.

#### Attributes:

• `default_event_space_bijector`: Bijector mapping the reals (R**n) to the event space of this model.
• `dtype`: The `DType` of `Tensor`s handled by this model.
• `event_shape`: Shape of a single sample from this model as a `TensorShape`.

May be partially defined or unknown.

• `name`: Python `str` name prefixed to Ops created by this class.

• `sample_transformations`: A dictionary of names to `SampleTransformation`s.

## Child Classes

`class SampleTransformation`

## Methods

### `evidence`


```python
evidence(
    name='evidence'
)
```

The evidence that the joint model is conditioned on.

### `joint_distribution`


```python
joint_distribution(
    name='joint_distribution'
)
```

The joint distribution before any conditioning.

### `unnormalized_log_prob`


```python
unnormalized_log_prob(
    value, name='unnormalized_log_prob'
)
```

The un-normalized log density evaluated at a point.

This corresponds to the target distribution associated with the model, often its posterior.

#### Args:

• `value`: A (nest of) `Tensor` to evaluate the log density at.
• `name`: Python `str` name prefixed to Ops created by this method.

#### Returns:

• `unnormalized_log_prob`: A floating point `Tensor`.