A list of AttentionMechanism instances or a single
instance.
attention_layer_size
A list of Python integers or a single Python
integer, the depth of the attention (output) layer(s). If None
(default), use the context as attention at each time step. Otherwise,
feed the context and cell output into the attention layer to generate
attention at each time step. If attention_mechanism is a list,
attention_layer_size must be a list of the same length. If
attention_layer is set, this must be None. If attention_fn is set, it
must be guaranteed that the outputs of attention_fn also meet the above
requirements. See the construction sketch following this argument list.
alignment_history
Python boolean, whether to store alignment history from
all time steps in the final output state (currently stored as a
time-major TensorArray on which you must call stack()).
cell_input_fn
(optional) A callable. The default is:
lambda inputs, attention: array_ops.concat([inputs, attention], -1).
output_attention
Python bool. If True (default), the output at each
time step is the attention value. This is the behavior of Luong-style
attention mechanisms. If False, the output at each time step is the
output of the wrapped cell. This is the behavior of Bahdanau-style attention
mechanisms. In both cases, the attention tensor is propagated to the
next time step via the state and is used there. This flag only controls
whether the attention mechanism is propagated up to the next cell in an
RNN stack or to the top RNN output.
initial_cell_state
The initial state value to use for the cell when the user calls
zero_state(). Note that if this value is provided and the user later
calls zero_state() with a batch_size that does not match the batch size
of initial_cell_state, proper behavior is not guaranteed.
name
Name to use when creating ops.
attention_layer
A list of tf.compat.v1.layers.Layer instances or a
single tf.compat.v1.layers.Layer instance taking the context and cell
output as inputs to generate attention at each time step. If None
(default), use the context as attention at each time step. If
attention_mechanism is a list, attention_layer must be a list of the
same length. If attention_layer_size is set, this must be None.
attention_fn
An optional callable function that allows users to provide
their own customized attention function, which takes input
(attention_mechanism, cell_output, attention_state, attention_layer) and
outputs (attention, alignments, next_attention_state). If provided,
attention_layer_size should be the size of the outputs of attention_fn.
See the attention_fn sketch following this argument list.
dtype
The cell dtype.
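The following is a minimal construction sketch, assuming TensorFlow 1.x where tf.contrib.seq2seq and tf.contrib.rnn are available; the encoder tensors, sequence lengths, and dimensions are hypothetical placeholders, not part of this API.

```python
import tensorflow as tf
from tensorflow.contrib import rnn, seq2seq

# Hypothetical encoder outputs serving as the attention memory.
batch_size, max_time, num_units = 32, 50, 256
encoder_outputs = tf.placeholder(tf.float32, [batch_size, max_time, num_units])
encoder_sequence_length = tf.placeholder(tf.int32, [batch_size])

# A single attention mechanism. If a list of mechanisms is passed instead,
# attention_layer_size (or attention_layer) must be a list of the same length.
attention_mechanism = seq2seq.BahdanauAttention(
    num_units=num_units,
    memory=encoder_outputs,
    memory_sequence_length=encoder_sequence_length)

decoder_cell = rnn.LSTMCell(num_units)

attn_cell = seq2seq.AttentionWrapper(
    decoder_cell,
    attention_mechanism,
    attention_layer_size=num_units,  # feed [cell output; context] through an attention layer
    alignment_history=True,          # keep alignments in a time-major TensorArray
    output_attention=False,          # Bahdanau-style: emit the cell output, not the attention
    # The default cell_input_fn, written out explicitly:
    cell_input_fn=lambda inputs, attention: tf.concat([inputs, attention], -1))
```

After decoding, the stored alignments can be recovered from the final AttentionWrapperState with final_state.alignment_history.stack().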
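The sketch below shows the shape of a custom attention_fn. It mirrors the wrapper's default computation (score the query, form the context vector, optionally apply the attention layer) and is an illustration of the documented signature rather than the library's implementation.

```python
import tensorflow as tf

def my_attention_fn(attention_mechanism, cell_output, attention_state, attention_layer):
    # The mechanism scores the cell output against its memory and returns the
    # normalized alignments together with its next internal state.
    alignments, next_attention_state = attention_mechanism(
        cell_output, state=attention_state)
    # Context vector: alignment-weighted sum over the memory.
    expanded_alignments = tf.expand_dims(alignments, 1)
    context = tf.squeeze(
        tf.matmul(expanded_alignments, attention_mechanism.values), [1])
    # With an attention layer configured, mix the cell output and context
    # through it; otherwise the context itself is used as the attention.
    if attention_layer is not None:
        attention = attention_layer(tf.concat([cell_output, context], 1))
    else:
        attention = context
    return attention, alignments, next_attention_state

# Passed alongside the other constructor arguments, e.g.:
# attn_cell = seq2seq.AttentionWrapper(decoder_cell, attention_mechanism,
#                                      attention_fn=my_attention_fn, ...)
```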
Raises
TypeError
attention_layer_size is not None and (attention_mechanism
is a list but attention_layer_size is not; or vice versa).
ValueError
if attention_layer_size is not None, attention_mechanism
is a list, and its length does not match that of attention_layer_size;
or if attention_layer_size and attention_layer are set simultaneously.
Attributes
graph
DEPRECATED FUNCTION
output_size
Integer or TensorShape: size of outputs produced by this cell.
zero_state(batch_size, dtype)
Return an initial (zero) state tuple for this AttentionWrapper.
Args
batch_size
0D integer tensor: the batch size.
dtype
The internal state data type.
Returns
An AttentionWrapperState tuple containing zeroed out tensors and,
possibly, empty TensorArray objects.
Raises
ValueError
(or, possibly at runtime, InvalidArgument), if
batch_size does not match the output size of the encoder passed
to the wrapper object at initialization time.
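A short usage sketch, assuming the attn_cell from the construction example above and a hypothetical encoder_final_state; batch_size must match the batch dimension of the memory given to the attention mechanism.

```python
# Zeroed initial state (cell state, attention, alignments, histories).
decoder_initial_state = attn_cell.zero_state(batch_size, dtype=tf.float32)

# To start decoding from the encoder's final state rather than zeros, clone the
# AttentionWrapperState and replace only its cell_state field.
decoder_initial_state = decoder_initial_state.clone(cell_state=encoder_final_state)
```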
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2020-10-01 UTC."],[],[]]