tfm.nlp.layers.TalkingHeadsAttention

Implements Talking-Heads Attention.

This is an implementation of Talking-Heads Attention based on the paper "Talking-Heads Attention" (https://arxiv.org/abs/2003.02436): it enhances multi-head attention by including linear projections across the attention-heads dimension, immediately before and after the softmax operation.

See the base class tf.keras.layers.MultiHeadAttention for more details.
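For example, a minimal self-attention usage sketch (the batch and sequence shapes below are illustrative assumptions, not part of the API):

import tensorflow as tf
import tensorflow_models as tfm

# 4 talking heads of size 32, applied to 8 sequences of 16 tokens with 64 features.
layer = tfm.nlp.layers.TalkingHeadsAttention(num_heads=4, key_dim=32)
query = tf.random.normal([8, 16, 64])

# Self-attention: the same tensor is used as query and value.
output, scores = layer(query, query, return_attention_scores=True)
print(output.shape)  # (8, 16, 64), projected back to the input feature dim
print(scores.shape)  # (8, 4, 16, 16): per-head attention weights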

Args

num_heads Number of attention heads.
key_dim Size of each attention head for query and key.
value_dim Size of each attention head for value.
dropout Dropout probability.
use_bias Boolean, whether the dense layers use bias vectors/matrices.
output_shape The expected shape of an output tensor, besides the batch and sequence dims. If not specified, projects back to the key feature dim.
attention_axes Axes over which the attention is applied. None means attention over all axes except batch, heads, and features.
return_attention_scores bool, if True, returns the multi-head attention scores as an additional output argument.
kernel_initializer Initializer for dense layer kernels.
bias_initializer Initializer for dense layer biases.
kernel_regularizer Regularizer for dense layer kernels.
bias_regularizer Regularizer for dense layer biases.
activity_regularizer Regularizer for dense layer activity.
kernel_constraint Constraint for dense layer kernels.
bias_constraint Constraint for dense layer biases.

Methods

call

This is where the layer's logic lives.

The call() method may not create state (except in its first invocation, wrapping the creation of variables or other resources in tf.init_scope()). It is recommended to create state, including tf.Variable instances and nested Layer instances, in __init__(), or in the build() method that is called automatically before call() executes for the first time.
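As a generic illustration of this recommendation (a sketch of the usual Keras pattern, not code taken from this layer; the class and names below are hypothetical), a custom layer creates its variables in build() and only computes in call():

import tensorflow as tf

class ScaledProjection(tf.keras.layers.Layer):
  # Hypothetical toy layer used only to illustrate where state is created.

  def __init__(self, units, **kwargs):
    super().__init__(**kwargs)
    self.units = units

  def build(self, input_shape):
    # State is created here, once, before the first call() executes.
    self.kernel = self.add_weight(
        name="kernel", shape=(input_shape[-1], self.units),
        initializer="glorot_uniform", trainable=True)

  def call(self, inputs):
    # call() only computes; it creates no new variables.
    return tf.matmul(inputs, self.kernel)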

Args
inputs Input tensor, or dict/list/tuple of input tensors. The first positional inputs argument is subject to special rules:

  • inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.
  • NumPy array or Python scalar values in inputs get cast as tensors.
  • Keras mask metadata is only collected from inputs.
  • Layers are built (build(input_shape) method) using shape info from inputs only.
  • input_spec compatibility is only checked against inputs.
  • Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.
  • The SavedModel input specification is generated using inputs only.
  • Integration with various ecosystem packages like TFMOT, TFLite, TF.js, etc. is only supported for inputs and not for tensors in positional and keyword arguments.
*args Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.
**kwargs Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:
  • training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
  • mask: Boolean input mask. If the layer's call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if inputs did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).
Returns
  A tensor or list/tuple of tensors.
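As a sketch of how the reserved keyword arguments are typically passed in practice (the shapes and values below are illustrative assumptions):

import tensorflow as tf
import tensorflow_models as tfm

layer = tfm.nlp.layers.TalkingHeadsAttention(num_heads=2, key_dim=16, dropout=0.1)

query = tf.random.normal([4, 10, 32])  # (batch, target_len, features)
value = tf.random.normal([4, 20, 32])  # (batch, source_len, features)

# Boolean mask of shape (batch, target_len, source_len); True marks positions
# the query may attend to.
attention_mask = tf.ones([4, 10, 20], dtype=tf.bool)

# training=True enables dropout; at inference time pass training=False or omit it.
output = layer(query, value, attention_mask=attention_mask, training=True)
print(output.shape)  # (4, 10, 32)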