NegativeBinomial(total_count, probs=mean / (mean + total_count)) where
mean = exp(X @ weights).

Inherits From: ExponentialFamily

tfp.glm.NegativeBinomial(
    total_count=1.0, name=None
)
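To make the parameterization above concrete, here is a small pure-Python sketch (not the TFP implementation) showing how a linear response maps to the distribution's parameters, with hypothetical numbers chosen only for illustration:

```python
import math

# Hypothetical values for illustration: a single linear response r = x @ w.
total_count = 1.0          # the default dispersion parameter
r = 0.7                    # predicted linear response, X @ weights
mean = math.exp(r)         # log link: mean = exp(X @ weights)

# The GLM's NegativeBinomial parameterization from above:
probs = mean / (mean + total_count)

# Sanity check: NegativeBinomial(total_count, probs) has mean
# total_count * probs / (1 - probs), which recovers `mean` exactly.
recovered_mean = total_count * probs / (1.0 - probs)
print(abs(recovered_mean - mean) < 1e-12)  # True
```

In other words, the log link guarantees a positive mean for any real-valued linear response, and the probs formula converts that mean into the distribution's native parameterization.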
Args  

name

Python str used as TF namescope for ops created by member
functions. Default value: None (i.e., the subclass name).

Attributes  

name

Returns the name of this module as passed or determined in the ctor. 
name_scope

Returns a tf.name_scope instance for this class.

submodules

Sequence of all submodules.
Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

trainable_variables

Sequence of trainable variables owned by this module and its submodules. 
variables

Sequence of variables owned by this module and its submodules. 
Methods
as_distribution
as_distribution(
predicted_linear_response, name=None
)
Builds a mean-parameterized TFP Distribution from the linear response.

Example:

model = tfp.glm.Bernoulli()
r = tfp.glm.compute_predicted_linear_response(x, w)
yhat = model.as_distribution(r)
Args  

predicted_linear_response

response-shaped Tensor representing linear
predictions based on new model_coefficients, i.e.,
tfp.glm.compute_predicted_linear_response(
    model_matrix, model_coefficients, offset).

name

Python str used as TF namescope for ops created by member
functions. Default value: None (i.e., 'log_prob').

Returns  

model

tfp.distributions.Distribution-like object with mean
parameterized by predicted_linear_response.

log_prob
log_prob(
response, predicted_linear_response, name=None
)
Computes D(param=mean(r)).log_prob(response) for linear response, r.
Args  

response

float-like Tensor representing observed ("actual")
responses.

predicted_linear_response

float-like Tensor corresponding to
tf.linalg.matmul(model_matrix, weights).

name

Python str used as TF namescope for ops created by member
functions. Default value: None (i.e., 'log_prob').

Returns  

log_prob

Tensor with shape and dtype of predicted_linear_response
representing the distribution-prescribed log-probability of the observed
responses.
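Under the mean parameterization described above, the negative binomial log-probability has a closed form that can be checked by hand. The sketch below mirrors, but does not call, the TFP implementation; it assumes total_count k and probs p = mean / (mean + k):

```python
import math

def nb_log_prob(y, total_count, mean):
    # NegativeBinomial(total_count, probs=mean / (mean + total_count)):
    # log p(y) = lgamma(y + k) - lgamma(k) - lgamma(y + 1)
    #            + k*log(1 - p) + y*log(p)
    k = total_count
    p = mean / (mean + k)
    return (math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
            + k * math.log1p(-p) + y * math.log(p))

# With total_count = 1 the distribution reduces to a geometric on {0, 1, ...}:
# p(y) = (1 - p) * p**y, so the two expressions should agree.
y, mean = 3, 2.0
p = mean / (mean + 1.0)
assert abs(nb_log_prob(y, 1.0, mean) - math.log((1 - p) * p ** y)) < 1e-12
```

The geometric special case (total_count = 1) is a convenient self-check, since the combinatorial term collapses to zero.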

with_name_scope
@classmethod
with_name_scope( method )
Decorator to automatically enter the module name scope.
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose
names include the module name:

mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args  

method

The method to wrap. 
Returns  

The original method wrapped such that it enters the module's name scope. 
__call__
__call__(
predicted_linear_response, name=None
)
Computes mean(r), var(mean), d/dr mean(r) for linear response, r.

Here mean and var are the mean and variance of the sufficient statistic,
which may not be the same as the mean and variance of the random variable
itself. If the distribution's density has the form

p_Y(y) = h(y) Exp[dot(theta, T(y)) - A]

where theta and A are constants and h and T are known functions, then mean
and var are the mean and variance of T(Y). In practice, often T(Y) := Y
and in that case the distinction doesn't matter.
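For this family, assuming the log link mean(r) = exp(r) stated at the top of this page and taking the variance to be the ordinary negative binomial variance (T(Y) := Y here, so the distinction above does not matter), all three outputs have simple closed forms. A pure-Python sketch, not the TFP implementation:

```python
import math

def negative_binomial_call(r, total_count=1.0):
    # Assumed log link from this page: mean(r) = exp(r).
    mean = math.exp(r)
    # NegativeBinomial variance with probs = mean / (mean + total_count):
    # var = mean + mean**2 / total_count
    variance = mean + mean ** 2 / total_count
    # d/dr exp(r) = exp(r), so the gradient equals the mean under this link.
    grad_mean = mean
    return mean, variance, grad_mean

m, v, g = negative_binomial_call(0.0)
print(m, v, g)  # 1.0 2.0 1.0
```

Note that the variance always exceeds the mean (by mean**2 / total_count), which is exactly the overdispersion the negative binomial adds over a Poisson GLM.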
Args  

predicted_linear_response

float-like Tensor corresponding to
tf.linalg.matmul(model_matrix, weights).

name

Python str used as TF namescope for ops created by member
functions. Default value: None (i.e., 'call').

Returns  

mean

Tensor with shape and dtype of predicted_linear_response
representing the distribution-prescribed mean, given the prescribed
linear-response-to-mean mapping.

variance

Tensor with shape and dtype of predicted_linear_response
representing the distribution-prescribed variance, given the prescribed
linear-response-to-mean mapping.

grad_mean

Tensor with shape and dtype of predicted_linear_response
representing the gradient of the mean with respect to the
linear response, given the prescribed linear-response-to-mean
mapping.
