tf.contrib.seq2seq.dynamic_decode(
    decoder,
    output_time_major=False,
    impute_finished=False,
    maximum_iterations=None,
    parallel_iterations=32,
    swap_memory=False,
    scope=None
)
See the guide: Seq2seq Library (contrib) > Dynamic Decoding
Perform dynamic decoding with decoder.
Calls initialize() once and step() repeatedly on the Decoder object.
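The control flow above can be sketched in plain Python. This is a hypothetical stand-in, not the real tf.contrib.seq2seq API: ToyDecoder and dynamic_decode_sketch are illustrative names, and the real function operates on tensors inside a tf.while_loop rather than Python lists.

```python
class ToyDecoder:
    """Illustrative decoder that emits 0..4 for a batch of size 1, then finishes."""

    def initialize(self):
        # Returns (finished, first_inputs, initial_state), mirroring the
        # shape of the Decoder interface's initialize() contract.
        return [False], 0, "state0"

    def step(self, time, inputs, state):
        # Returns (outputs, next_state, next_inputs, finished).
        outputs = [time]
        finished = [time >= 4]
        return outputs, "state%d" % (time + 1), inputs, finished


def dynamic_decode_sketch(decoder, maximum_iterations=None):
    # initialize() is called exactly once...
    finished, inputs, state = decoder.initialize()
    time, all_outputs = 0, []
    # ...then step() repeatedly, until every batch entry is finished
    # or maximum_iterations is reached.
    while not all(finished):
        if maximum_iterations is not None and time >= maximum_iterations:
            break
        outputs, state, inputs, finished = decoder.step(time, inputs, state)
        all_outputs.append(outputs)
        time += 1
    # final_sequence_lengths: number of steps taken per batch entry.
    return all_outputs, state, [time]


outputs, final_state, lengths = dynamic_decode_sketch(ToyDecoder())
# outputs collects one emission per step; lengths reports 5 steps taken.
```

Passing maximum_iterations=2 to the sketch stops the loop after two steps even though the decoder is not yet finished, matching the role of the real argument.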
Args:

decoder: A Decoder instance.
output_time_major: Python boolean. Default: False (batch major). If True, outputs are returned as time major tensors (this mode is faster). Otherwise, outputs are returned as batch major tensors (this adds extra time to the computation).
impute_finished: Python boolean. If True, then states for batch entries which are marked as finished get copied through and the corresponding outputs get zeroed out. This causes some slowdown at each time step, but ensures that the final state and outputs have the correct values and that backprop ignores time steps that were marked as finished.
maximum_iterations: int32 scalar, maximum allowed number of decoding steps. Default is None (decode until the decoder is fully done).
parallel_iterations: Argument passed to tf.while_loop.
swap_memory: Argument passed to tf.while_loop.
scope: Optional variable scope to use.
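The effect of impute_finished at a single time step can be sketched in plain Python. The function and names below are illustrative assumptions, not part of the real API: for entries already marked finished, the freshly computed output is zeroed and the previous state is copied through unchanged.

```python
def impute_step(finished, new_outputs, new_states, prev_states):
    """Illustrative per-step imputation: zero outputs and copy states
    through for batch entries that are already finished."""
    outputs, states = [], []
    for done, out, new_s, prev_s in zip(
        finished, new_outputs, new_states, prev_states
    ):
        if done:
            outputs.append(0.0)    # finished entry: output is zeroed out
            states.append(prev_s)  # finished entry: state copied through
        else:
            outputs.append(out)    # active entry: keep the new output
            states.append(new_s)   # active entry: keep the new state
    return outputs, states


# Batch of 2: entry 0 is still decoding, entry 1 has finished.
outs, states = impute_step(
    finished=[False, True],
    new_outputs=[1.5, 2.5],
    new_states=[3, 9],
    prev_states=[4, 7],
)
# Entry 1's output becomes 0.0 and its state stays at 7.
```

This per-entry bookkeeping is why impute_finished=True costs a little extra work at each time step but guarantees the final state and outputs are correct for sequences that finish early.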
Returns:

(final_outputs, final_state, final_sequence_lengths).
Raises:

TypeError: if decoder is not an instance of Decoder.
ValueError: if maximum_iterations is provided but is not a scalar.