tf.contrib.rnn.BidirectionalGridLSTMCell

Bidirectional GridLSTM cell.

Inherits From: GridLSTMCell

The bidirectional connection is used only in the frequency direction, so it does not affect the time direction's real-time processing, which online recognition systems require. The current implementation uses different weights for the two directions.

Args
num_units: int, the number of units in the LSTM cell.
use_peepholes: (optional) bool, default False. Set True to enable diagonal/peephole connections.
share_time_frequency_weights: (optional) bool, default False. Set True to share cell weights between the time and frequency LSTMs.
cell_clip: (optional) float, default None. If provided, the cell state is clipped by this value prior to the cell output activation.
initializer: (optional) the initializer to use for the weight and projection matrices, default None.
num_unit_shards: (optional) int, default 1. How to split the weight matrix; if > 1, the weight matrix is stored across num_unit_shards.
forget_bias: (optional) float, default 1.0. The initial bias of the forget gates, used to reduce the scale of forgetting at the beginning of training.
feature_size: (optional) int, default None. The size of the input feature the LSTM spans over.
frequency_skip: (optional) int, default None. The amount the LSTM filter is shifted by in frequency.
num_frequency_blocks: (required) a list of frequency blocks needed to cover the whole input feature splitting defined by start_freqindex_list and end_freqindex_list.
start_freqindex_list: (optional) list of ints, default None. The starting frequency index for each frequency block.
end_freqindex_list: (optional) list of ints, default None. The ending frequency index for each frequency block.
couple_input_forget_gates: (optional) bool, default False. Whether to couple the input and forget gates, i.e. f_gate = 1.0 - i_gate, to reduce model parameters and computation cost.
backward_slice_offset: (optional) int32, default 0. The starting offset to slice the feature for backward processing.
reuse: (optional) Python boolean describing whether to reuse variables in an existing scope. If not True and the existing scope already has the given variables, an error is raised.
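For example, couple_input_forget_gates replaces the separately parameterized forget gate with 1.0 - i_gate. A minimal pure-Python sketch of that coupling (the pre-activation values below are made up for illustration, not the cell's actual gate inputs):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical pre-activation values for the input gate.
pre_activations = [-2.0, 0.0, 3.0]

i_gate = [sigmoid(x) for x in pre_activations]
# Coupled forget gate: f_gate = 1.0 - i_gate, so no separate
# forget-gate weights or biases are needed.
f_gate = [1.0 - i for i in i_gate]

# The two gates always sum to one.
for i, f in zip(i_gate, f_gate):
    assert abs((i + f) - 1.0) < 1e-12
```

Because the forget gate is derived rather than learned, the coupled variant roughly halves the gate parameters, at the cost of less flexible gating.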

Attributes
graph: DEPRECATED FUNCTION.
output_size: Integer or TensorShape: size of outputs produced by this cell.
scope_name
state_size: size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape, or a tuple of Integers or TensorShapes.
state_tuple_type

Methods

get_initial_state

zero_state

Return zero-filled state tensor(s).

Args
batch_size: int, float, or scalar (unit) Tensor representing the batch size.
dtype: the data type to use for the state.

Returns
If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
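The shape logic above can be sketched without TensorFlow. The helper below is a hypothetical stand-in, not the real implementation: it uses nested Python lists in place of tensors to mirror how zero_state maps a flat or nested state_size to zero-filled structures:

```python
def zero_state_shapes(batch_size, state_size):
    """Mimic zero_state's structure: an int state_size yields a
    [batch_size, state_size] matrix of zeros; a nested tuple/list of
    sizes is mapped recursively, preserving its structure."""
    if isinstance(state_size, int):
        return [[0.0] * state_size for _ in range(batch_size)]
    # Nested case: rebuild the same tuple/list structure as state_size.
    return type(state_size)(
        zero_state_shapes(batch_size, s) for s in state_size)

# Flat case: shape [batch_size, state_size] = [4, 3].
flat = zero_state_shapes(4, 3)

# Nested case: the result has the same nesting as state_size.
nested = zero_state_shapes(2, (3, (5,)))
```

Here `flat` is a 4x3 matrix of zeros, and `nested` is a tuple containing a 2x3 matrix and an inner tuple with a 2x5 matrix, matching the (3, (5,)) structure of state_size.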