
tfp.trainable_distributions.normal

Constructs a trainable tfd.Normal distribution. (deprecated)

tfp.trainable_distributions.normal(
    *args,
    **kwargs
)

This function creates a Normal distribution parameterized by loc and scale. Using default args, this function is mathematically equivalent to:

Y = Normal(loc=matmul(W, x) + b, scale=1)

where,
  W in R^[d, n]
  b in R^d
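
Under the default arguments (with d = 1, since tfd.Normal is a scalar
distribution), the construction above is roughly equivalent to the sketch
below. The default layer_fn (tf.layers.dense) comes from the argument
descriptions further down; the squeeze of the trailing unit dimension is an
assumption for illustration, not a statement about the exact implementation.

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def normal_sketch(x):
  # Affine layer: computes the matmul(W, x) + b term above with one output
  # unit, i.e. output shape [batch_size, 1].
  loc = tf.layers.dense(x, units=1)
  # Drop the trailing unit dimension so `loc` is one scalar per example
  # (an assumed detail, see note above).
  loc = tf.squeeze(loc, axis=-1)
  # The default scale is the constant 1.
  return tfd.Normal(loc=loc, scale=1.)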

Examples

This function can be used as a linear regression loss.

# This example fits a linear regression loss.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

# Create fictitious training data.
dtype = np.float32
n = 3000    # number of samples
x_size = 4  # size of single x
def make_training_data():
  np.random.seed(142)
  x = np.random.randn(n, x_size).astype(dtype)
  w = np.random.randn(x_size).astype(dtype)
  b = np.random.randn(1).astype(dtype)
  true_mean = np.tensordot(x, w, axes=[[-1], [-1]]) + b
  noise = np.random.randn(n).astype(dtype)
  y = true_mean + noise
  return y, x
y, x = make_training_data()

# Build TF graph for fitting Normal maximum likelihood estimator.
normal = tfp.trainable_distributions.normal(x)
loss = -tf.reduce_mean(normal.log_prob(y))
train_op = tf.train.AdamOptimizer(learning_rate=2.**-5).minimize(loss)
mse = tf.reduce_mean(tf.squared_difference(y, normal.mean()))
init_op = tf.global_variables_initializer()

# Run graph 1000 times.
num_steps = 1000
loss_ = np.zeros(num_steps)   # Style: `_` to indicate sess.run result.
mse_ = np.zeros(num_steps)
with tf.Session() as sess:
  sess.run(init_op)
  for it in range(loss_.size):
    _, loss_[it], mse_[it] = sess.run([train_op, loss, mse])
    if it % 200 == 0 or it == loss_.size - 1:
      print("iteration:{}  loss:{}  mse:{}".format(it, loss_[it], mse_[it]))

# ==> iteration:0    loss:6.34114170074  mse:10.8444051743
#     iteration:200  loss:1.40146839619  mse:0.965059816837
#     iteration:400  loss:1.40052902699  mse:0.963181257248
#     iteration:600  loss:1.40052902699  mse:0.963181257248
#     iteration:800  loss:1.40052902699  mse:0.963181257248
#     iteration:999  loss:1.40052902699  mse:0.963181257248
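
To inspect the fitted affine parameters, evaluate the graph's trainable
variables while the session above is still open. This sketch assumes the
default layer_fn (tf.layers.dense), which creates exactly two trainable
variables here, the dense layer's kernel and bias, in that creation order.

# Run inside the `with tf.Session() as sess:` block above, after the loop.
w_hat, b_hat = sess.run(tf.trainable_variables()[:2])
print("fitted w:{}  fitted b:{}".format(w_hat.squeeze(), b_hat))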

Args:

  • x: Tensor with floating type. Must have statically defined rank and statically known right-most dimension.
  • layer_fn: Python callable which takes the input x and an int scalar d, and returns a transformation of x with shape tf.concat([tf.shape(x)[:-1], [1]], axis=0). Default value: tf.layers.dense.
  • loc_fn: Python callable which transforms the loc parameter. Takes a (batch of) length-dims vectors and returns a Tensor of same shape and dtype. Default value: lambda x: x.
  • scale_fn: Python callable or Tensor. If a callable, it transforms the scale parameters; if a Tensor, it is used as the tfd.Normal scale argument. Takes a (batch of) length-dims vectors and returns a Tensor of the same size. (Accepting either a callable or a Tensor mirrors how tf.Variable initializers behave.) Default value: 1. An illustrative override of these defaults is sketched after the Returns section below.
  • name: A name_scope name for operations created by this function. Default value: None (i.e., "normal").

Returns:

  • normal: An instance of tfd.Normal.
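
As an illustrative sketch of overriding the defaults above (the specific
initializer, scale value, and name are arbitrary choices for this example,
not part of the API):

import functools

import tensorflow as tf
import tensorflow_probability as tfp

# `x` is a floating-point Tensor as described in Args. `layer_fn` is still
# called with the input and an int scalar, so extra tf.layers.dense keyword
# arguments can be bound with functools.partial. A non-callable `scale_fn`
# is used directly as the tfd.Normal scale.
normal = tfp.trainable_distributions.normal(
    x,
    layer_fn=functools.partial(
        tf.layers.dense, kernel_initializer=tf.zeros_initializer()),
    scale_fn=0.5,
    name='custom_normal')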