
Base class for Bayesian models in the Inference Gym.

```
tfp.experimental.inference_gym.targets.BayesianModel(
default_event_space_bijector, event_shape, dtype, name, pretty_name,
sample_transformations
)
```

Given a Bayesian model described by a joint distribution `P(x, y)` which we
can sample from, we construct the posterior by conditioning the joint model on
evidence `y`. The posterior distribution `P(x | y)` is represented as a
product of the inverse normalization constant and the un-normalized density:
`1/Z tilde{P}(x | y)`. Note that as a special case the evidence is allowed to
be empty, in which case both the joint and the posterior are just `P(x)`.

Given a Bayesian model conditioned on evidence, you can access the associated
un-normalized density via the `unnormalized_log_prob` method.
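To make the `1/Z tilde{P}(x | y)` decomposition concrete, here is a minimal, self-contained sketch using a hypothetical two-state discrete model (pure Python, no TFP); because `x` takes only two values, `Z` can be computed exactly by summing the un-normalized density over `x`:

```python
import math

# Hypothetical toy model, for illustration only:
# x ~ Bernoulli(0.3), y | x ~ Bernoulli(0.9 if x == 1 else 0.2).
prior = {0: 0.7, 1: 0.3}
likelihood = {0: 0.2, 1: 0.9}  # P(y = 1 | x)

def unnormalized_log_prob(x, y=1):
  """log tilde{P}(x | y): the joint log density with the evidence y fixed."""
  p_y = likelihood[x] if y == 1 else 1.0 - likelihood[x]
  return math.log(prior[x]) + math.log(p_y)

# The normalization constant Z = sum_x tilde{P}(x | y) ...
z = sum(math.exp(unnormalized_log_prob(x)) for x in (0, 1))
# ... turns the un-normalized density into the posterior P(x | y).
posterior = {x: math.exp(unnormalized_log_prob(x)) / z for x in (0, 1)}
```

With continuous `x` (as in most Inference Gym models) `Z` is intractable, which is exactly why only the un-normalized log density is exposed.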

The dtype, shape, and constraints over `x` are returned by the `dtype`,
`event_shape`, and `default_event_space_bijector` properties. Note that `x`
could be structured, in which case `dtype` and `shape` will be structured as
well (parallel to the structure of `x`). `default_event_space_bijector` need
not be structured, but could operate on the structured `x`. A generic way of
constructing a random number that is within the event space of this model is
to do:

```
model = LogisticRegression(...)
unconstrained_values = tf.nest.map_structure(
    lambda d, s: tf.random.normal(s, dtype=d),
    model.dtype,
    model.event_shape,
)
constrained_values = tf.nest.map_structure_up_to(
    model.default_event_space_bijector,
    lambda b, v: b(v),
    model.default_event_space_bijector,
    unconstrained_values,
)
```
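For intuition, the unconstrained-to-constrained step can be mimicked in plain Python, with `math.exp` standing in for an `Exp` default event space bijector (a hypothetical sketch, not the TFP API):

```python
import math
import random

# Draw unconstrained reals, then map them through exp, which plays the
# role of a default_event_space_bijector onto the positive half-line.
random.seed(0)
unconstrained_values = [random.gauss(0.0, 1.0) for _ in range(5)]
constrained_values = [math.exp(v) for v in unconstrained_values]

# Every mapped value lands inside the (assumed) event space (0, inf).
assert all(v > 0.0 for v in constrained_values)
```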

A model has two names. First, the `name` property is used for various name
scopes inside the implementation of the model. Second, a pretty name which is
meant to be suitable for a table inside a publication, accessed via the
`__str__` method.

Models come with associated sample transformations, which describe useful ways of looking at the samples from the posterior distribution. Each transformation optionally comes equipped with various ground truth values (computed analytically or via Monte Carlo averages). You can apply the transformations to samples from the model like so:

```
model = LogisticRegression(...)
for name, sample_transformation in model.sample_transformations.items():
  transformed_samples = sample_transformation(samples)
  if sample_transformation.ground_truth_mean is not None:
    square_diff = tf.nest.map_structure(
        lambda gtm, sm: (gtm - tf.reduce_mean(sm, axis=0))**2,
        sample_transformation.ground_truth_mean,
        transformed_samples,
    )
```
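The same bias computation can be sketched without TensorFlow for a single scalar transformation; the draws and the ground-truth value below are made-up stand-ins:

```python
# Hypothetical stand-ins: scalar posterior draws and an assumed
# analytically known ground-truth mean.
samples = [0.9, 1.1, 1.0, 0.8, 1.2]
ground_truth_mean = 1.0

# Squared difference between the ground truth and the Monte Carlo sample
# mean, mirroring the tf.nest.map_structure computation above.
sample_mean = sum(samples) / len(samples)
square_diff = (ground_truth_mean - sample_mean) ** 2
```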

#### Examples

A simple 2-variable Bayesian model:

```
class SimpleModel(gym.targets.BayesianModel):

  def __init__(self):
    self._joint_distribution_val = tfd.JointDistributionSequential([
        tfd.Exponential(1.),
        lambda s: tfd.Normal(0., s),
    ])
    self._evidence_val = 1.
    super(SimpleModel, self).__init__(
        default_event_space_bijector=tfb.Exp(),
        event_shape=self._joint_distribution_val.event_shape[0],
        dtype=self._joint_distribution_val.dtype[0],
        name='simple_model',
        pretty_name='SimpleModel',
        sample_transformations=dict(
            identity=gym.targets.BayesianModel.SampleTransformation(
                fn=lambda x: x,
                pretty_name='Identity',
            ),),
    )

  def _joint_distribution(self):
    return self._joint_distribution_val

  def _evidence(self):
    return self._evidence_val

  def _unnormalized_log_prob(self, x):
    return self.joint_distribution().log_prob([x, self.evidence()])
```

Note how we first constructed a joint distribution, and then used its
properties to specify the Bayesian model. We don't specify the ground truth
values for the `identity` sample transformation as they're not known
analytically. See the `GermanCreditNumericLogisticRegression` Bayesian model
for an example of how to incorporate Monte-Carlo derived values for ground
truth into a sample transformation.
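The same pattern (hold the evidence fixed and evaluate the joint log density at a point) can be sketched without TFP. The `ToyModel` below is a hypothetical stand-in, using a Normal prior and Normal likelihood so the behavior is easy to check:

```python
import math

def log_normal(x, loc, scale):
  """Log density of Normal(loc, scale) evaluated at x."""
  z = (x - loc) / scale
  return -0.5 * z * z - math.log(scale) - 0.5 * math.log(2.0 * math.pi)

class ToyModel:
  """Hypothetical analogue of a BayesianModel subclass, no TFP needed.

  Joint: x ~ Normal(0, 1), y | x ~ Normal(x, 1); evidence y = 1.
  """

  def __init__(self):
    self._evidence = 1.0

  def unnormalized_log_prob(self, x):
    # Joint log density with the evidence plugged in for y.
    return log_normal(x, 0.0, 1.0) + log_normal(self._evidence, x, 1.0)

model = ToyModel()
```

For this conjugate pair the exact posterior is `Normal(0.5, sqrt(0.5))`, so the un-normalized log density peaks at `x = 0.5`, which gives a quick sanity check of the implementation.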

#### Args:

* `default_event_space_bijector`: A (nest of) bijectors that take
  unconstrained `R**n` tensors to the event space of the posterior.
* `event_shape`: A (nest of) shapes describing the samples from the posterior.
* `dtype`: A (nest of) dtypes describing the dtype of the posterior.
* `name`: Python `str` name prefixed to Ops created by this class.
* `pretty_name`: A Python `str`. The pretty name of this model.
* `sample_transformations`: A dictionary of Python strings to
  `SampleTransformation`s.

#### Attributes:

* `default_event_space_bijector`: Bijector mapping the reals (`R**n`) to the
  event space of this model.
* `dtype`: The `DType` of `Tensor`s handled by this model.
* `event_shape`: Shape of a single sample from this model as a `TensorShape`.
  May be partially defined or unknown.
* `name`: Python `str` name prefixed to Ops created by this class.
* `sample_transformations`: A dictionary of names to `SampleTransformation`s.

## Methods

`evidence`

```
evidence(
    name='evidence'
)
```

The evidence that the joint model is conditioned on.

`joint_distribution`

```
joint_distribution(
    name='joint_distribution'
)
```
```

The joint distribution before any conditioning.

`unnormalized_log_prob`

```
unnormalized_log_prob(
    value, name='unnormalized_log_prob'
)
```
```

The un-normalized log density of this model, evaluated at a point.

This corresponds to the target distribution associated with the model, often its posterior.

#### Args:

* `value`: A (nest of) `Tensor` to evaluate the log density at.
* `name`: Python `str` name prefixed to Ops created by this method.

#### Returns:

* `unnormalized_log_prob`: A floating point `Tensor`.