Specifies a mean-value parameterized exponential family.
tfp.glm.ExponentialFamily(
    name=None
)
Subclasses implement exponential-family distribution properties (e.g., log_prob, variance) as a function of a real value which is transformed via some link function to be interpreted as the distribution's mean. The distribution is parameterized by this mean, i.e., it is "mean-value parameterized."
Subclasses are typically used to specify a Generalized Linear Model (GLM). A GLM is a generalization of linear regression which enables efficient fitting of log-likelihood losses beyond just assuming Normal noise. See tfp.glm.fit for more details.
Subclasses must implement _as_distribution, which need not be "tape-safe" or "variable-safe." (The families provided in tfp.glm are, however, guaranteed to be both tape- and variable-safe.)
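For illustration only, a minimal subclass might look like the sketch below. The CustomNormal class and its unit-variance Normal noise model are invented here and are not part of tfp.glm; the sketch only shows where _as_distribution fits in.
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Hypothetical family (not in tfp.glm): identity link with unit-variance
# Normal noise, so the linear response is used directly as the mean.
class CustomNormal(tfp.glm.ExponentialFamily):

  def _as_distribution(self, predicted_linear_response):
    return tfd.Normal(loc=predicted_linear_response,
                      scale=tf.ones_like(predicted_linear_response))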
Subclasses may optionally implement _call and _log_prob, which otherwise default to:
def _call(self, predicted_linear_response):
  with tf.GradientTape(watch_accessed_variables=False) as tape:
    tape.watch(predicted_linear_response)
    likelihood = self.as_distribution(predicted_linear_response)
    mean = likelihood.mean()
    variance = likelihood.variance()
  grad_mean = tape.gradient(mean, predicted_linear_response)
  return mean, variance, grad_mean

def _log_prob(self, response, predicted_linear_response):
  likelihood = self.as_distribution(predicted_linear_response)
  return likelihood.log_prob(response)
In the context of tfp.glm.fit and tfp.glm.fit_sparse, these functions are used to find the best-fitting weights for a given model matrix ("X") and responses ("Y").
Args | |
---|---|
name | Python str used as TF namescope for ops created by member functions. Default value: None (i.e., the subclass name).
Attributes | |
---|---|
name | Returns the name of this module as passed or determined in the ctor.
name_scope | Returns a tf.name_scope instance for this class.
non_trainable_variables | Sequence of non-trainable variables owned by this module and its submodules.
submodules | Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).
trainable_variables | Sequence of trainable variables owned by this module and its submodules.
variables | Sequence of variables owned by this module and its submodules.
Methods
as_distribution
as_distribution(
    predicted_linear_response, name=None
)
Builds a mean parameterized TFP Distribution from linear response.
Example:
model = tfp.glm.Bernoulli()
# x: model matrix; w: model coefficients.
r = tfp.glm.compute_predicted_linear_response(x, w)
yhat = model.as_distribution(r)
Args | |
---|---|
predicted_linear_response | response-shaped Tensor representing linear predictions based on new model_coefficients, i.e., tfp.glm.compute_predicted_linear_response(model_matrix, model_coefficients, offset).
name | Python str used as TF namescope for ops created by member functions. Default value: None (i.e., 'log_prob').
Returns | |
---|---|
model | tfp.distributions.Distribution-like object with mean parameterized by predicted_linear_response.
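Continuing the example above, and assuming observed responses y are available, the returned distribution can be queried directly; a brief illustrative sketch:
probs = yhat.mean()    # for tfp.glm.Bernoulli, this is sigmoid(r)
ll = yhat.log_prob(y)  # log-likelihood of the observed responses y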
log_prob
log_prob(
    response, predicted_linear_response, name=None
)
Computes D(param=mean(r)).log_prob(response) for linear response, r.
Args | |
---|---|
response | float-like Tensor representing observed ("actual") responses.
predicted_linear_response | float-like Tensor corresponding to tf.linalg.matmul(model_matrix, weights).
name | Python str used as TF namescope for ops created by member functions. Default value: None (i.e., 'log_prob').
Returns | |
---|---|
log_prob | Tensor with shape and dtype of predicted_linear_response representing the distribution prescribed log-probability of the observed responses.
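A brief hedged sketch using the Normal family (the values of r and y below are purely illustrative):
import tensorflow as tf
import tensorflow_probability as tfp

model = tfp.glm.Normal()
r = tf.constant([0.5, -1.0, 2.0])   # predicted linear response
y = tf.constant([0.7, -0.9, 1.5])   # observed responses
lp = model.log_prob(y, r)
# Matches model.as_distribution(r).log_prob(y) up to numerics.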
with_name_scope
@classmethod
with_name_scope(
    method
)
Decorator to automatically enter the module name scope.
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
Using the above module would produce tf.Variables and tf.Tensors whose names include the module name:
mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=..., dtype=float32)>
Args | |
---|---|
method | The method to wrap.

Returns | |
---|---|
The original method wrapped such that it enters the module's name scope.
__call__
__call__(
    predicted_linear_response, name=None
)
Computes mean(r), var(mean), d/dr mean(r) for linear response, r.
Here mean and var are the mean and variance of the sufficient statistic, which may not be the same as the mean and variance of the random variable itself. If the distribution's density has the form
p_Y(y) = h(y) Exp[dot(theta, T(y)) - A]
where theta and A are constants and h and T are known functions, then mean and var are the mean and variance of T(Y). In practice, often T(Y) := Y, and in that case the distinction doesn't matter.
Args | |
---|---|
predicted_linear_response | float-like Tensor corresponding to tf.linalg.matmul(model_matrix, weights).
name | Python str used as TF namescope for ops created by member functions. Default value: None (i.e., 'call').
Returns | |
---|---|
mean | Tensor with shape and dtype of predicted_linear_response representing the distribution prescribed mean, given the prescribed linear-response to mean mapping.
variance | Tensor with shape and dtype of predicted_linear_response representing the distribution prescribed variance, given the prescribed linear-response to mean mapping.
grad_mean | Tensor with shape and dtype of predicted_linear_response representing the gradient of the mean with respect to the linear-response and given the prescribed linear-response to mean mapping.
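As a concrete, hedged sketch with the Bernoulli family, where T(Y) := Y so the distinction above does not arise (the value of r is illustrative):
import tensorflow as tf
import tensorflow_probability as tfp

model = tfp.glm.Bernoulli()
r = tf.constant([-2., 0., 3.])
mean, variance, grad_mean = model(r)
# mean      ~= tf.math.sigmoid(r)
# variance  ~= mean * (1. - mean)
# grad_mean ~= mean * (1. - mean)   (derivative of the sigmoid)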