Module: tfa.seq2seq.attention_wrapper


Dynamic attention wrappers and attention mechanisms for sequence-to-sequence models.

Classes

class AttentionMechanism: Base class for attention mechanisms.

class AttentionWrapper: Wraps another RNNCell with attention (see the usage sketch after this list).

class AttentionWrapperState: namedtuple storing the state of an AttentionWrapper.

class BahdanauAttention: Implements Bahdanau-style (additive) attention.

class BahdanauMonotonicAttention: Monotonic attention mechanism with Bahdanau-style energy function.

class LuongAttention: Implements Luong-style (multiplicative) attention scoring.

class LuongMonotonicAttention: Monotonic attention mechanism with Luong-style energy function.
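
These classes compose with any Keras RNN cell. Below is a minimal sketch of the typical wiring: an attention mechanism is built over encoder outputs (the "memory") and passed to AttentionWrapper, which then behaves like an ordinary RNN cell. The tensor shapes and random inputs are illustrative only.

    import tensorflow as tf
    import tensorflow_addons as tfa

    # Illustrative sizes; any consistent values work.
    batch_size, max_time, units = 4, 7, 16

    # Encoder outputs act as the attention "memory".
    memory = tf.random.normal([batch_size, max_time, units])
    attention_mechanism = tfa.seq2seq.LuongAttention(units=units, memory=memory)

    # Wrap a plain LSTM cell with the attention mechanism.
    cell = tf.keras.layers.LSTMCell(units)
    attn_cell = tfa.seq2seq.AttentionWrapper(
        cell, attention_mechanism, attention_layer_size=units)

    # Run a single decoder step; state is an AttentionWrapperState.
    state = attn_cell.get_initial_state(batch_size=batch_size, dtype=tf.float32)
    step_input = tf.random.normal([batch_size, units])
    output, state = attn_cell(step_input, state)  # output: [batch_size, units]

Swapping LuongAttention for BahdanauAttention (or either monotonic variant) leaves the wiring unchanged; only the scoring function differs.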

Functions

hardmax(...): Returns batched one-hot vectors (see the usage sketch after this list).

monotonic_attention(...): Compute monotonic attention distribution from choosing probabilities.

safe_cumprod(...): Computes cumprod of x in logspace using cumsum to avoid underflow.
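
A short sketch of the three utilities above; the shapes and random inputs are illustrative only, and the axis argument to safe_cumprod is forwarded to the underlying cumsum.

    import tensorflow as tf
    import tensorflow_addons as tfa

    batch_size, max_time = 2, 5

    # hardmax: one-hot vector at the argmax of each row of logits.
    logits = tf.random.normal([batch_size, max_time])
    one_hot = tfa.seq2seq.attention_wrapper.hardmax(logits)

    # monotonic_attention: converts per-timestep "choose" probabilities into
    # an attention distribution that can only move forward through memory.
    # Here attention starts focused on the first memory entry.
    p_choose = tf.random.uniform([batch_size, max_time])
    previous = tf.one_hot(tf.zeros([batch_size], dtype=tf.int32), max_time)
    attention = tfa.seq2seq.attention_wrapper.monotonic_attention(
        p_choose, previous, mode="parallel")

    # safe_cumprod: cumulative product computed as exp(cumsum(log(x))),
    # which avoids underflow when many small factors are multiplied.
    x = tf.random.uniform([batch_size, max_time], minval=0.01, maxval=1.0)
    cumprod = tfa.seq2seq.attention_wrapper.safe_cumprod(x, axis=1)

monotonic_attention also accepts mode="recursive" and mode="hard"; the "parallel" mode relies on safe_cumprod internally for numerical stability.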