RNN Cells for use with TensorFlow's core RNN methods

class tf.nn.rnn_cell.BasicRNNCell

The most basic RNN cell.


tf.nn.rnn_cell.BasicRNNCell.__call__(inputs, state, scope=None) {:#BasicRNNCell.call}

Most basic RNN: output = new_state = activation(W * input + U * state + B).
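
A minimal sketch of one step with this cell (the sizes below are hypothetical, chosen only for illustration):

```python
import tensorflow as tf

# Hypothetical sizes for illustration.
batch_size, input_size, num_units = 32, 8, 64

cell = tf.nn.rnn_cell.BasicRNNCell(num_units)
inputs = tf.placeholder(tf.float32, [batch_size, input_size])
state = cell.zero_state(batch_size, tf.float32)

# For BasicRNNCell the output and the new state are the same tensor:
# activation(W * input + U * state + B), shape [batch_size, num_units].
output, new_state = cell(inputs, state)
```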


tf.nn.rnn_cell.BasicRNNCell.__init__(num_units, input_size=None, activation=tanh) {:#BasicRNNCell.init}


tf.nn.rnn_cell.BasicRNNCell.output_size


tf.nn.rnn_cell.BasicRNNCell.state_size


tf.nn.rnn_cell.BasicRNNCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  • batch_size: int, float, or scalar (0-D) Tensor representing the batch size.
  • dtype: the data type to use for the state.
Returns:

If state_size is an int or TensorShape, then the return value is a 2-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
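
For example, BasicRNNCell's state_size is a plain int, so the first case applies (a sketch with hypothetical sizes):

```python
import tensorflow as tf

cell = tf.nn.rnn_cell.BasicRNNCell(64)
state = cell.zero_state(32, tf.float32)
print(state.get_shape())  # (32, 64): a 2-D tensor of zeros
```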


class tf.nn.rnn_cell.BasicLSTMCell

Basic LSTM recurrent network cell.

The implementation is based on: http://arxiv.org/abs/1409.2329.

We add forget_bias (default: 1) to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training.

It does not allow cell clipping or a projection layer, and it does not use peephole connections: it is the basic baseline.

For advanced models, please use the full LSTMCell that follows.
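
A minimal sketch of one step with this cell (the sizes below are hypothetical):

```python
import tensorflow as tf

batch_size, input_size, num_units = 32, 8, 64

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units, forget_bias=1.0)
inputs = tf.placeholder(tf.float32, [batch_size, input_size])
state = cell.zero_state(batch_size, tf.float32)  # a (c_state, m_state) tuple

output, new_state = cell(inputs, state)
# output: [batch_size, num_units]; new_state: a new (c_state, m_state) tuple.
```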


tf.nn.rnn_cell.BasicLSTMCell.__call__(inputs, state, scope=None) {:#BasicLSTMCell.call}

Long short-term memory cell (LSTM).


tf.nn.rnn_cell.BasicLSTMCell.__init__(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=tanh) {:#BasicLSTMCell.init}

Initialize the basic LSTM cell.

Args:
  • num_units: int, The number of units in the LSTM cell.
  • forget_bias: float, The bias added to forget gates (see above).
  • input_size: Deprecated and unused.
  • state_is_tuple: If True, accepted and returned states are 2-tuples of the c_state and m_state. If False, they are concatenated along the column axis; this behavior will soon be deprecated (see the sketch after this list).
  • activation: Activation function of the inner states.
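
A minimal sketch of the state_is_tuple distinction (num_units=64 is hypothetical):

```python
import tensorflow as tf

tuple_cell = tf.nn.rnn_cell.BasicLSTMCell(64, state_is_tuple=True)
print(tuple_cell.state_size)   # LSTMStateTuple(c=64, h=64)

concat_cell = tf.nn.rnn_cell.BasicLSTMCell(64, state_is_tuple=False)
print(concat_cell.state_size)  # 128: c_state and m_state concatenated
```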

tf.nn.rnn_cell.BasicLSTMCell.output_size


tf.nn.rnn_cell.BasicLSTMCell.state_size


tf.nn.rnn_cell.BasicLSTMCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  • batch_size: int, float, or scalar (0-D) Tensor representing the batch size.
  • dtype: the data type to use for the state.
Returns:

If state_size is an int or TensorShape, then the return value is a 2-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
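
With the default state_is_tuple=True, the second case applies: zero_state returns a state tuple of zero tensors (a sketch; sizes are hypothetical):

```python
import tensorflow as tf

cell = tf.nn.rnn_cell.BasicLSTMCell(64)  # state_is_tuple=True by default
state = cell.zero_state(32, tf.float32)  # an LSTMStateTuple of zeros
print(state.c.get_shape(), state.h.get_shape())  # (32, 64) (32, 64)
```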


class tf.nn.rnn_cell.GRUCell

Gated Recurrent Unit cell (cf. http://arxiv.org/abs/1406.1078).


tf.nn.rnn_cell.GRUCell.__call__(inputs, state, scope=None) {:#GRUCell.call}

Gated recurrent unit (GRU) with num_units cells.
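
A minimal sketch of one step (sizes are hypothetical):

```python
import tensorflow as tf

batch_size, input_size, num_units = 32, 8, 64

cell = tf.nn.rnn_cell.GRUCell(num_units)
inputs = tf.placeholder(tf.float32, [batch_size, input_size])
state = cell.zero_state(batch_size, tf.float32)  # GRU state is a single tensor

output, new_state = cell(inputs, state)  # both [batch_size, num_units]
```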


tf.nn.rnn_cell.GRUCell.__init__(num_units, input_size=None, activation=tanh) {:#GRUCell.init}


tf.nn.rnn_cell.GRUCell.output_size


tf.nn.rnn_cell.GRUCell.state_size


tf.nn.rnn_cell.GRUCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  • batch_size: int, float, or scalar (0-D) Tensor representing the batch size.
  • dtype: the data type to use for the state.
Returns:

If state_size is an int or TensorShape, then the return value is a 2-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.


class tf.nn.rnn_cell.LSTMCell

Long short-term memory unit (LSTM) recurrent network cell.

The default non-peephole implementation is based on:

http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf

S. Hochreiter and J. Schmidhuber. "Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.

The peephole implementation is based on:

https://research.google.com/pubs/archive/43905.pdf

Hasim Sak, Andrew Senior, and Francoise Beaufays. "Long short-term memory recurrent neural network architectures for large scale acoustic modeling." INTERSPEECH, 2014.

The class uses optional peephole connections, optional cell clipping, and an optional projection layer.
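
A sketch of a configuration exercising these options (all values are hypothetical):

```python
import tensorflow as tf

cell = tf.nn.rnn_cell.LSTMCell(
    num_units=128,
    use_peepholes=True,  # diagonal/peephole connections
    cell_clip=3.0,       # clip the cell state before the output activation
    num_proj=64,         # project outputs down to 64 dimensions
    proj_clip=3.0)       # clip projected values to [-3.0, 3.0]

print(cell.output_size)  # 64: num_proj, since a projection layer is used
```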


tf.nn.rnn_cell.LSTMCell.__call__(inputs, state, scope=None) {:#LSTMCell.call}

Run one step of LSTM.

Args:
  • inputs: input Tensor, 2-D, [batch x input_size].
  • state: if state_is_tuple is False, this must be a state Tensor, 2-D, batch x state_size. If state_is_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c_state and m_state.
  • scope: VariableScope for the created subgraph; defaults to "LSTMCell".
Returns:

A tuple containing:

  • A 2-D, [batch x output_dim], Tensor representing the output of the LSTM after reading inputs when the previous state was state. Here output_dim is num_proj if num_proj was set, num_units otherwise (see the sketch below).
  • Tensor(s) representing the new state of LSTM after reading inputs when the previous state was state. Same type and shape(s) as state.
Raises:
  • ValueError: If input size cannot be inferred from inputs via static shape inference.
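
A sketch of one step, showing that output_dim follows num_proj when it is set (sizes are hypothetical):

```python
import tensorflow as tf

batch_size, input_size = 32, 8

cell = tf.nn.rnn_cell.LSTMCell(num_units=128, num_proj=64)
inputs = tf.placeholder(tf.float32, [batch_size, input_size])
state = cell.zero_state(batch_size, tf.float32)

output, new_state = cell(inputs, state)
print(output.get_shape())  # (32, 64): output_dim == num_proj
```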

tf.nn.rnn_cell.LSTMCell.__init__(num_units, input_size=None, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=True, activation=tanh) {:#LSTMCell.init}

Initialize the parameters for an LSTM cell.

Args:
  • num_units: int, The number of units in the LSTM cell.
  • input_size: Deprecated and unused.
  • use_peepholes: bool, set True to enable diagonal/peephole connections.
  • cell_clip: (optional) A float value, if provided the cell state is clipped by this value prior to the cell output activation.
  • initializer: (optional) The initializer to use for the weight and projection matrices.
  • num_proj: (optional) int, The output dimensionality for the projection matrices. If None, no projection is performed.
  • proj_clip: (optional) A float value. If num_proj > 0 and proj_clip is provided, then the projected values are clipped elementwise to within [-proj_clip, proj_clip].
  • num_unit_shards: How to split the weight matrix. If > 1, the weight matrix is stored across num_unit_shards.
  • num_proj_shards: How to split the projection matrix. If > 1, the projection matrix is stored across num_proj_shards.
  • forget_bias: Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of training.
  • state_is_tuple: If True, accepted and returned states are 2-tuples of the c_state and m_state. If False, they are concatenated along the column axis. This latter behavior will soon be deprecated.
  • activation: Activation function of the inner states.

tf.nn.rnn_cell.LSTMCell.output_size


tf.nn.rnn_cell.LSTMCell.state_size


tf.nn.rnn_cell.LSTMCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  • batch_size: int, float, or scalar (0-D) Tensor representing the batch size.
  • dtype: the data type to use for the state.
Returns:

If state_size is an int or TensorShape, then the return value is a 2-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.