Additive attention layer, a.k.a. Bahdanau-style attention.

Inputs are a query tensor of shape [batch_size, Tq, dim], a value tensor of shape [batch_size, Tv, dim], and a key tensor of shape [batch_size, Tv, dim]. The calculation follows the steps:

  1. Reshape query and key into shapes [batch_size, Tq, 1, dim] and [batch_size, 1, Tv, dim] respectively.
  2. Calculate scores with shape [batch_size, Tq, Tv] as a non-linear sum: scores = tf.reduce_sum(tf.tanh(query + key), axis=-1)
  3. Use scores to calculate a softmax distribution with shape [batch_size, Tq, Tv]: distribution = tf.nn.softmax(scores).
  4. Use distribution to create a linear combination of value with shape [batch_size, Tq, dim]: return tf.matmul(distribution, value).
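The four steps above can be sketched in NumPy. This is a hedged illustration of the math only: it omits the learned scale variable that use_scale would add, and implements the reshapes of step 1 via broadcasting.

```python
import numpy as np

def additive_attention(query, key, value):
    """Sketch of additive (Bahdanau-style) attention.

    query: [batch_size, Tq, dim]; key, value: [batch_size, Tv, dim].
    """
    # Step 1: reshape so that query + key broadcasts to
    # [batch_size, Tq, Tv, dim].
    q = query[:, :, np.newaxis, :]   # [batch_size, Tq, 1, dim]
    k = key[:, np.newaxis, :, :]     # [batch_size, 1, Tv, dim]
    # Step 2: non-linear sum over the feature axis -> [batch_size, Tq, Tv].
    scores = np.tanh(q + k).sum(axis=-1)
    # Step 3: softmax over the Tv axis (numerically stabilized).
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    distribution = e / e.sum(axis=-1, keepdims=True)
    # Step 4: linear combination of value -> [batch_size, Tq, dim].
    return distribution @ value

query = np.random.rand(2, 3, 4)
value = np.random.rand(2, 5, 4)
out = additive_attention(query, value, value)  # key defaults to value
print(out.shape)  # (2, 3, 4)
```

Passing value as the key mirrors the common case where no separate key tensor is given.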

Arguments:

  • use_scale: If True, will create a variable to scale the attention scores.
  • causal: Boolean. Set to True for decoder self-attention. Adds a mask such that position i cannot attend to positions j > i, preventing the flow of information from the future to the past.
  • dropout: Float between 0 and 1. Fraction of the units to drop for the attention scores.
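A minimal NumPy sketch of what the causal option does: scores at positions j > i are set to a large negative number before the softmax, so position i attends only to positions j <= i. This assumes a self-attention setting where Tq == Tv.

```python
import numpy as np

T = 4
scores = np.random.rand(1, T, T)  # [batch_size, Tq, Tv] with Tq == Tv

# Lower-triangular mask: position i may attend only to j <= i.
causal_mask = np.tril(np.ones((T, T), dtype=bool))
masked = np.where(causal_mask, scores, -1e9)

# Softmax over the last axis; masked entries get ~zero weight.
e = np.exp(masked - masked.max(axis=-1, keepdims=True))
weights = e / e.sum(axis=-1, keepdims=True)

# Weights above the diagonal are (numerically) zero: no future leakage.
print(np.triu(weights[0], k=1).max())
```

Each row of weights still sums to 1, but all mass sits on present and past positions.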

Call Arguments:

  • inputs: List of the following tensors:
    • query: Query Tensor of shape [batch_size, Tq, dim].
    • value: Value Tensor of shape [batch_size, Tv, dim].
    • key: Optional key Tensor of shape [batch_size, Tv, dim]. If not given, will use value for both key and value, which is the most common case.
  • mask: List of the following tensors:
    • query_mask: A boolean mask Tensor of shape [batch_size, Tq]. If given, the output will be zero at the positions where mask==False.
    • value_mask: A boolean mask Tensor of shape [batch_size, Tv]. If given, will apply the mask such that values at positions where mask==False do not contribute to the result.
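To illustrate the mask call argument, here is a hedged NumPy sketch of how a value mask is typically applied: masked value positions (e.g. padding) get a large negative score before the softmax, so they receive essentially zero attention weight. The mask values below are illustrative.

```python
import numpy as np

batch_size, Tq, Tv, dim = 2, 3, 5, 4
query = np.random.rand(batch_size, Tq, dim)
value = np.random.rand(batch_size, Tv, dim)

# Additive scores as in the steps above: [batch_size, Tq, Tv].
scores = np.tanh(query[:, :, None, :] + value[:, None, :, :]).sum(axis=-1)

# Hypothetical value mask: True = valid, last two positions are padding.
value_mask = np.array([[True, True, True, False, False]] * batch_size)
scores = np.where(value_mask[:, None, :], scores, -1e9)

# Softmax over the Tv axis; padded positions get ~zero weight.
e = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights = e / e.sum(axis=-1, keepdims=True)
out = weights @ value  # padded positions contribute ~0 to the output

print(weights[..., 3:].max())  # attention on padded positions is ~0
```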