This package provides additional contributed RNNCells.

Fused RNNCells

class tf.contrib.rnn.LSTMBlockCell

Basic LSTM recurrent network cell.

The implementation is based on: http://arxiv.org/abs/1409.2329.

We add forget_bias (default: 1) to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training.

Unlike BasicLSTMCell, this is a monolithic op and should be much faster. The weight and bias matrices should be compatible as long as the variable scope matches.
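
A minimal usage sketch, assuming a TF 1.x-era graph-mode setup; the batch size, sequence length, and feature dimension below are illustrative placeholders rather than part of the API:

import tensorflow as tf

# Batch size, sequence length, feature dimension, and cell size are
# placeholders chosen only for illustration.
batch_size, num_steps, input_dim, num_units = 32, 20, 128, 256
inputs = tf.placeholder(tf.float32, [batch_size, num_steps, input_dim])

# Fused LSTM block cell, unrolled over the sequence with dynamic_rnn.
cell = tf.contrib.rnn.LSTMBlockCell(num_units, forget_bias=1.0)
with tf.variable_scope("lstm"):
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

Because the weight and bias layout matches BasicLSTMCell, a checkpoint written under the same variable scope should load with either cell.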


tf.contrib.rnn.LSTMBlockCell.__call__(x, states_prev, scope=None) {:#LSTMBlockCell.call}

Long short-term memory cell (LSTM).


tf.contrib.rnn.LSTMBlockCell.__init__(num_units, forget_bias=1.0, use_peephole=False) {:#LSTMBlockCell.init}

Initialize the basic LSTM cell.

Args:
  • num_units: int, The number of units in the LSTM cell.
  • forget_bias: float, The bias added to forget gates (see above).
  • use_peephole: Whether to use peephole connections or not.

tf.contrib.rnn.LSTMBlockCell.output_size


tf.contrib.rnn.LSTMBlockCell.state_size


tf.contrib.rnn.LSTMBlockCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  • batch_size: int, float, or unit Tensor representing the batch size.
  • dtype: the data type to use for the state.
Returns:

If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
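
An illustrative sketch of zero_state; the shapes are placeholders, the structure of the returned state mirrors the cell's state_size, and the call follows the standard RNNCell contract of returning (output, new_state):

import tensorflow as tf

# Build an all-zeros initial state and use it for the first time step.
cell = tf.contrib.rnn.LSTMBlockCell(num_units=256)
init_state = cell.zero_state(batch_size=32, dtype=tf.float32)

x_t = tf.placeholder(tf.float32, [32, 128])   # input for one time step
output, next_state = cell(x_t, init_state)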


class tf.contrib.rnn.GRUBlockCell

Block GRU cell implementation.

The implementation is based on: http://arxiv.org/abs/1406.1078. This op computes the GRU cell forward propagation for one time step.

This kernel op implements the following mathematical equations:

Biases are initialized with:
  • b_ru: constant_initializer(1.0)
  • b_c: constant_initializer(0.0)

x_h_prev = [x, h_prev]

[r_bar u_bar] = x_h_prev * w_ru + b_ru

r = sigmoid(r_bar)
u = sigmoid(u_bar)

h_prevr = h_prev \circ r

x_h_prevr = [x h_prevr]

c_bar = x_h_prevr * w_c + b_c
c = tanh(c_bar)

h = (1-u) \circ c + u \circ h_prev
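
The equations above can be sketched directly in NumPy. This is a hand-written illustration of the math, not the kernel's implementation; w_ru, w_c, b_ru, and b_c are assumed to exist with compatible shapes:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_block_step(x, h_prev, w_ru, w_c, b_ru, b_c):
    cell_size = h_prev.shape[1]
    x_h_prev = np.concatenate([x, h_prev], axis=1)      # x_h_prev = [x, h_prev]
    ru_bar = x_h_prev.dot(w_ru) + b_ru                  # [r_bar u_bar]
    r = sigmoid(ru_bar[:, :cell_size])                  # r = sigmoid(r_bar)
    u = sigmoid(ru_bar[:, cell_size:])                  # u = sigmoid(u_bar)
    h_prevr = h_prev * r                                # h_prev \circ r
    x_h_prevr = np.concatenate([x, h_prevr], axis=1)    # [x h_prevr]
    c = np.tanh(x_h_prevr.dot(w_c) + b_c)               # c = tanh(c_bar)
    return (1.0 - u) * c + u * h_prev                   # new hidden state h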

tf.contrib.rnn.GRUBlockCell.__call__(x, h_prev, scope=None) {:#GRUBlockCell.call}

GRU cell.


tf.contrib.rnn.GRUBlockCell.__init__(cell_size) {:#GRUBlockCell.init}

Initialize the Block GRU cell.

Args:
  • cell_size: int, GRU cell size.

tf.contrib.rnn.GRUBlockCell.output_size


tf.contrib.rnn.GRUBlockCell.state_size


tf.contrib.rnn.GRUBlockCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  • batch_size: int, float, or unit Tensor representing the batch size.
  • dtype: the data type to use for the state.
Returns:

If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.

LSTM-like cells


class tf.contrib.rnn.CoupledInputForgetGateLSTMCell

Long short-term memory unit (LSTM) recurrent network cell.

The default non-peephole implementation is based on:

http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf

S. Hochreiter and J. Schmidhuber. "Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.

The peephole implementation is based on:

https://research.google.com/pubs/archive/43905.pdf

Hasim Sak, Andrew Senior, and Francoise Beaufays. "Long short-term memory recurrent neural network architectures for large scale acoustic modeling." INTERSPEECH, 2014.

The coupling of input and forget gate is based on:

http://arxiv.org/pdf/1503.04069.pdf

Greff et al. "LSTM: A Search Space Odyssey"

The class uses optional peephole connections and an optional projection layer.
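
A hedged construction sketch; the parameter values are illustrative, and inputs is assumed to be a [batch, time, features] float Tensor:

import tensorflow as tf

# inputs: [batch, time, features]; shapes are placeholders.
inputs = tf.placeholder(tf.float32, [32, 20, 128])

# Coupled input/forget gates, peepholes, and a projection layer that
# maps the 512-unit cell output down to 256 dimensions.
cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(
    num_units=512,
    use_peepholes=True,
    num_proj=256,
    state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)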


tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__call__(inputs, state, scope=None) {:#CoupledInputForgetGateLSTMCell.call}

Run one step of LSTM.

Args:
  • inputs: input Tensor, 2D, batch x num_units.
  • state: if state_is_tuple is False, this must be a state Tensor, 2-D, batch x state_size. If state_is_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c_state and m_state.
  • scope: VariableScope for the created subgraph; defaults to "LSTMCell".
Returns:

A tuple containing:
  • A 2-D, [batch x output_dim], Tensor representing the output of the LSTM after reading inputs when the previous state was state. Here output_dim is num_proj if num_proj was set, num_units otherwise.
  • Tensor(s) representing the new state of the LSTM after reading inputs when the previous state was state. Same type and shape(s) as state.

Raises:
  • ValueError: If input size cannot be inferred from inputs via static shape inference.

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__init__(num_units, use_peepholes=False, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=False, activation=tanh) {:#CoupledInputForgetGateLSTMCell.init}

Initialize the parameters for an LSTM cell.

Args:
  • num_units: int, The number of units in the LSTM cell
  • use_peepholes: bool, set True to enable diagonal/peephole connections.
  • initializer: (optional) The initializer to use for the weight and projection matrices.
  • num_proj: (optional) int, The output dimensionality for the projection matrices. If None, no projection is performed.
  • proj_clip: (optional) A float value. If num_proj > 0 and proj_clip is provided, then the projected values are clipped elementwise to within [-proj_clip, proj_clip].
  • num_unit_shards: How to split the weight matrix. If >1, the weight matrix is stored across num_unit_shards.
  • num_proj_shards: How to split the projection matrix. If >1, the projection matrix is stored across num_proj_shards.
  • forget_bias: Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of the training.
  • state_is_tuple: If True, accepted and returned states are 2-tuples of the c_state and m_state. By default (False), they are concatenated along the column axis. This default behavior will soon be deprecated.
  • activation: Activation function of the inner states.

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.output_size


tf.contrib.rnn.CoupledInputForgetGateLSTMCell.state_size


tf.contrib.rnn.CoupledInputForgetGateLSTMCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  • batch_size: int, float, or unit Tensor representing the batch size.
  • dtype: the data type to use for the state.
Returns:

If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.


class tf.contrib.rnn.TimeFreqLSTMCell

Time-Frequency Long short-term memory unit (LSTM) recurrent network cell.

This implementation is based on:

Tara N. Sainath and Bo Li "Modeling Time-Frequency Patterns with LSTM vs. Convolutional Architectures for LVCSR Tasks." submitted to INTERSPEECH, 2016.

It uses peephole connections and optional cell clipping.
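
A construction sketch; the values are placeholders, and feature_size and frequency_skip have to match how the input feature vector is laid out along frequency:

import tensorflow as tf

# The cell spans a patch of feature_size frequency bins per step and
# shifts that patch by frequency_skip bins between frequency steps.
cell = tf.contrib.rnn.TimeFreqLSTMCell(
    num_units=64,
    use_peepholes=True,
    cell_clip=10.0,
    feature_size=8,
    frequency_skip=1)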


tf.contrib.rnn.TimeFreqLSTMCell.__call__(inputs, state, scope=None) {:#TimeFreqLSTMCell.call}

Run one step of LSTM.

Args:
  • inputs: input Tensor, 2D, batch x num_units.
  • state: state Tensor, 2D, batch x state_size.
  • scope: VariableScope for the created subgraph; defaults to "TimeFreqLSTMCell".
Returns:

A tuple containing:
  • A 2D, batch x output_dim, Tensor representing the output of the LSTM after reading "inputs" when the previous state was "state". Here output_dim is num_units.
  • A 2D, batch x state_size, Tensor representing the new state of the LSTM after reading "inputs" when the previous state was "state".

Raises:
  • ValueError: if an input_size was specified and the provided inputs have a different dimension.

tf.contrib.rnn.TimeFreqLSTMCell.__init__(num_units, use_peepholes=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, feature_size=None, frequency_skip=None) {:#TimeFreqLSTMCell.init}

Initialize the parameters for an LSTM cell.

Args:
  • num_units: int, The number of units in the LSTM cell
  • use_peepholes: bool, set True to enable diagonal/peephole connections.
  • cell_clip: (optional) A float value, if provided the cell state is clipped by this value prior to the cell output activation.
  • initializer: (optional) The initializer to use for the weight and projection matrices.
  • num_unit_shards: int, How to split the weight matrix. If >1, the weight matrix is stored across num_unit_shards.
  • forget_bias: float, Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of the training.
  • feature_size: int, The size of the input feature the LSTM spans over.
  • frequency_skip: int, The amount the LSTM filter is shifted by in frequency.

tf.contrib.rnn.TimeFreqLSTMCell.output_size


tf.contrib.rnn.TimeFreqLSTMCell.state_size


tf.contrib.rnn.TimeFreqLSTMCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  • batch_size: int, float, or unit Tensor representing the batch size.
  • dtype: the data type to use for the state.
Returns:

If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.


class tf.contrib.rnn.GridLSTMCell

Grid Long short-term memory unit (LSTM) recurrent network cell.

The default is based on: Nal Kalchbrenner, Ivo Danihelka and Alex Graves "Grid Long Short-Term Memory," Proc. ICLR 2016. http://arxiv.org/abs/1507.01526

When peephole connections are used, the implementation is based on: Tara N. Sainath and Bo Li "Modeling Time-Frequency Patterns with LSTM vs. Convolutional Architectures for LVCSR Tasks." submitted to INTERSPEECH, 2016.

The code uses optional peephole connections, shared time-frequency weights, and cell clipping.
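
A construction sketch; the values are illustrative, and num_frequency_blocks must cover the whole input feature as described under __init__ below:

import tensorflow as tf

# Grid LSTM over time and frequency, sharing weights between the two
# dimensions and coupling the input and forget gates.
cell = tf.contrib.rnn.GridLSTMCell(
    num_units=64,
    use_peepholes=True,
    share_time_frequency_weights=True,
    feature_size=8,
    frequency_skip=1,
    num_frequency_blocks=4,
    couple_input_forget_gates=True,
    state_is_tuple=True)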


tf.contrib.rnn.GridLSTMCell.__call__(inputs, state, scope=None) {:#GridLSTMCell.call}

Run one step of LSTM.

Args:
  • inputs: input Tensor, 2D, batch x num_units.
  • state: state Tensor, 2D, batch x state_size.
  • scope: VariableScope for the created subgraph; defaults to "LSTMCell".
Returns:

A tuple containing:
  • A 2D, batch x output_dim, Tensor representing the output of the LSTM after reading "inputs" when the previous state was "state". Here output_dim is num_units.
  • A 2D, batch x state_size, Tensor representing the new state of the LSTM after reading "inputs" when the previous state was "state".

Raises:
  • ValueError: if an input_size was specified and the provided inputs have a different dimension.

tf.contrib.rnn.GridLSTMCell.__init__(num_units, use_peepholes=False, share_time_frequency_weights=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, feature_size=None, frequency_skip=None, num_frequency_blocks=1, couple_input_forget_gates=False, state_is_tuple=False) {:#GridLSTMCell.init}

Initialize the parameters for an LSTM cell.

Args:
  • num_units: int, The number of units in the LSTM cell
  • use_peepholes: bool, default False. Set True to enable diagonal/peephole connections.
  • share_time_frequency_weights: bool, default False. Set True to enable shared cell weights between time and frequency LSTMs.
  • cell_clip: (optional) A float value, if provided the cell state is clipped by this value prior to the cell output activation.
  • initializer: (optional) The initializer to use for the weight and projection matrices.
  • num_unit_shards: int, How to split the weight matrix. If >1, the weight matrix is stored across num_unit_shards.
  • forget_bias: float, Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of the training.
  • feature_size: int, The size of the input feature the LSTM spans over.
  • frequency_skip: int, The amount the LSTM filter is shifted by in frequency.
  • num_frequency_blocks: int, The total number of frequency blocks needed to cover the whole input feature.
  • couple_input_forget_gates: bool, Whether to couple the input and forget gates, i.e. f_gate = 1.0 - i_gate, to reduce model parameters and computation cost.
  • state_is_tuple: If True, accepted and returned states are 2-tuples of the c_state and m_state. By default (False), they are concatenated along the column axis. This default behavior will soon be deprecated.

tf.contrib.rnn.GridLSTMCell.output_size


tf.contrib.rnn.GridLSTMCell.state_size


tf.contrib.rnn.GridLSTMCell.state_tuple_type


tf.contrib.rnn.GridLSTMCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  • batch_size: int, float, or unit Tensor representing the batch size.
  • dtype: the data type to use for the state.
Returns:

If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.

RNNCell wrappers


class tf.contrib.rnn.AttentionCellWrapper

Basic attention cell wrapper.

Implementation based on https://arxiv.org/pdf/1601.06733.pdf.
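
A wrapping sketch; the base cell and attention window length are illustrative, and inputs is assumed to be a [batch, time, features] float Tensor:

import tensorflow as tf

# inputs: [batch, time, features]; shapes are placeholders.
inputs = tf.placeholder(tf.float32, [32, 20, 128])

# Wrap a base cell so that each step attends over a window of the last
# attn_length outputs.
base_cell = tf.contrib.rnn.LSTMBlockCell(num_units=128)
attn_cell = tf.contrib.rnn.AttentionCellWrapper(
    base_cell, attn_length=16, state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(attn_cell, inputs, dtype=tf.float32)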


tf.contrib.rnn.AttentionCellWrapper.__call__(inputs, state, scope=None) {:#AttentionCellWrapper.call}

Long short-term memory cell with attention (LSTMA).


tf.contrib.rnn.AttentionCellWrapper.__init__(cell, attn_length, attn_size=None, attn_vec_size=None, input_size=None, state_is_tuple=False) {:#AttentionCellWrapper.init}

Create a cell with attention.

Args:
  • cell: an RNNCell to which attention is added.
  • attn_length: integer, the size of an attention window.
  • attn_size: integer, the size of an attention vector. Equal to cell.output_size by default.
  • attn_vec_size: integer, the number of convolutional features calculated on the attention state and the size of the hidden layer built from the base cell state. Equal to attn_size by default.
  • input_size: integer, the size of a hidden linear layer, built from inputs and attention. Derived from the input tensor by default.
  • state_is_tuple: If True, accepted and returned states are n-tuples, where n = len(cells). By default (False), the states are all concatenated along the column axis.
Raises:
  • TypeError: if cell is not an RNNCell.
  • ValueError: if cell returns a state tuple but the flag state_is_tuple is False or if attn_length is zero or less.

tf.contrib.rnn.AttentionCellWrapper.output_size


tf.contrib.rnn.AttentionCellWrapper.state_size


tf.contrib.rnn.AttentionCellWrapper.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  • batch_size: int, float, or unit Tensor representing the batch size.
  • dtype: the data type to use for the state.
Returns:

If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.