Basic attention cell wrapper.
Implementation based on https://arxiv.org/abs/1601.06733.
__init__( cell, attn_length, attn_size=None, attn_vec_size=None, input_size=None, state_is_tuple=True, reuse=None )
Create a cell with attention.
cell: an RNNCell, to which attention is added.
attn_length: integer, the size of an attention window.
attn_size: integer, the size of an attention vector. Equal to cell.output_size by default.
attn_vec_size: integer, the number of convolutional features calculated on the attention state, and the size of the hidden layer built from the base cell state. Equal to attn_size by default.
input_size: integer, the size of a hidden linear layer, built from inputs and attention. Derived from the input tensor by default.
state_is_tuple: If True (the default), accepted and returned states are n-tuples, where n = len(cells). If False, the states are all concatenated along the column axis.
reuse: (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised.
TypeError: if cell is not an RNNCell.
ValueError: if cell returns a state tuple but the flag state_is_tuple is False, or if attn_length is zero or less.
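Putting the arguments above together, here is a minimal construction sketch. It assumes TensorFlow 1.x, where this wrapper is exposed as tf.contrib.rnn.AttentionCellWrapper (the class name is inferred from the docstring above, not stated in it):

```python
import tensorflow as tf

# Assumption: TF 1.x, with the wrapper at tf.contrib.rnn.AttentionCellWrapper.
base_cell = tf.contrib.rnn.BasicLSTMCell(num_units=64)
attn_cell = tf.contrib.rnn.AttentionCellWrapper(
    base_cell,
    attn_length=16,       # attention window covers the last 16 outputs
    state_is_tuple=True)  # match BasicLSTMCell's tuple state

# Unroll the wrapped cell over [batch, time, features] inputs.
inputs = tf.placeholder(tf.float32, [None, 20, 32])
outputs, state = tf.nn.dynamic_rnn(attn_cell, inputs, dtype=tf.float32)
```

Since attn_size is left as None here, the attention vector defaults to cell.output_size (64 in this sketch).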
get_initial_state( inputs=None, batch_size=None, dtype=None )
zero_state( batch_size, dtype )
Return zero-filled state tensor(s).
batch_size: int, float, or unit Tensor representing the batch size.
dtype: the data type to use for the state.
If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros.
If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
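A short sketch of how this is typically used (same TF 1.x assumption as above). With state_is_tuple=True, the wrapped cell's state_size is a nested structure, so zero_state returns a matching nested structure of 2-D zero tensors:

```python
import tensorflow as tf

# Assumption: TF 1.x, wrapper at tf.contrib.rnn.AttentionCellWrapper.
cell = tf.contrib.rnn.AttentionCellWrapper(
    tf.contrib.rnn.BasicLSTMCell(num_units=8), attn_length=4)

# Nested state_size, so this is a nested tuple of [batch_size, s]
# zero tensors, one per component of the state.
init_state = cell.zero_state(batch_size=4, dtype=tf.float32)

# Pass the zero state explicitly instead of relying on dtype inference.
inputs = tf.placeholder(tf.float32, [4, 10, 8])
outputs, final_state = tf.nn.dynamic_rnn(
    cell, inputs, initial_state=init_state)
```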