An RNN backed by cuDNN.
tf.raw_ops.CudnnRNNV2(
input,
input_h,
input_c,
params,
rnn_mode='lstm',
input_mode='linear_input',
direction='unidirectional',
dropout=0,
seed=0,
seed2=0,
is_training=True,
name=None
)
Computes the RNN from the input and initial states, with respect to the params buffer. Produces one extra output, "host_reserved", compared to CudnnRNN.
rnn_mode: Indicates the type of the RNN model.
input_mode: Indicates whether there is a linear projection between the input and the actual computation before the first layer. 'skip_input' is only allowed when input_size == num_units; 'auto_select' implies 'skip_input' when input_size == num_units; otherwise, it implies 'linear_input'.
direction: Indicates whether a bidirectional model will be used. Should be "unidirectional" or "bidirectional".
dropout: Dropout probability. When set to 0., dropout is disabled.
seed: The 1st part of a seed to initialize dropout.
seed2: The 2nd part of a seed to initialize dropout.
input: A 3-D tensor with the shape of [seq_length, batch_size, input_size].
input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, num_units].
input_c: For LSTM, a 3-D tensor with the shape of [num_layer * dir, batch_size, num_units]. For other models, it is ignored.
params: A 1-D tensor that contains the weights and biases in an opaque layout. Its size must be determined through CudnnRNNParamsSize, and the buffer initialized separately. Note that the layout might not be compatible across different cuDNN generations, so it is a good idea to save and restore the params in a canonical form.
is_training: Indicates whether this operation is used for inference or training.
output: A 3-D tensor with the shape of [seq_length, batch_size, dir * num_units].
output_h: The same shape as input_h.
output_c: The same shape as input_c for LSTM. An empty tensor for other models.
reserve_space: An opaque tensor that can be used in backprop calculation. It is only produced if is_training is true.
host_reserved: An opaque tensor that can be used in backprop calculation. It is only produced if is_training is true. It is output on host memory rather than device memory.
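The sketch below illustrates one way to call this op; it is not the canonical usage, and the dimensions (seq_length, batch_size, input_size, num_units, num_layers) are placeholder values chosen for illustration. It assumes a CUDA-enabled GPU with cuDNN is available, sizes the opaque params buffer with tf.raw_ops.CudnnRNNParamsSize, and fills it with random values rather than a properly initialized canonical weight layout.

import tensorflow as tf

# Hypothetical dimensions for illustration only.
seq_length, batch_size, input_size = 8, 4, 16
num_layers, num_units, dir_count = 1, 16, 1  # dir_count is 1 for 'unidirectional'

# Query the required size of the opaque params buffer for this configuration.
params_size = tf.raw_ops.CudnnRNNParamsSize(
    num_layers=num_layers,
    num_units=num_units,
    input_size=input_size,
    T=tf.float32,           # dtype of the params buffer
    S=tf.int32,             # dtype of the returned size
    rnn_mode='lstm',
    input_mode='linear_input',
    direction='unidirectional')

# For illustration, fill the params buffer with random values; a real model
# would initialize it from canonical weights and biases.
params = tf.random.uniform(tf.reshape(params_size, [-1]), dtype=tf.float32)

x = tf.random.uniform([seq_length, batch_size, input_size], dtype=tf.float32)
h0 = tf.zeros([num_layers * dir_count, batch_size, num_units], dtype=tf.float32)
c0 = tf.zeros([num_layers * dir_count, batch_size, num_units], dtype=tf.float32)

# With is_training=True, the op also returns reserve_space (device memory)
# and host_reserved (host memory) for use in the backprop pass.
output, output_h, output_c, reserve_space, host_reserved = tf.raw_ops.CudnnRNNV2(
    input=x,
    input_h=h0,
    input_c=c0,
    params=params,
    rnn_mode='lstm',
    input_mode='linear_input',
    direction='unidirectional',
    dropout=0,
    is_training=True)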