tf.contrib.rnn.LSTMBlockFusedCell

class tf.contrib.rnn.LSTMBlockFusedCell

See the guide: RNN and Cells (contrib) > Core RNN Cell wrappers (RNNCells that wrap other RNNCells)

FusedRNNCell implementation of LSTM.

This is an extremely efficient LSTM implementation that uses a single TF op for the entire LSTM. It should be both faster and more memory-efficient than LSTMBlockCell.

The implementation is based on: http://arxiv.org/abs/1409.2329.

We add forget_bias (default: 1) to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training.
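
For reference, in the standard LSTM formulation of the paper above, this extra bias enters the forget gate roughly as follows (a sketch; W_f, R_f, and b_f denote the usual input, recurrent, and bias parameters, not identifiers from this module):

  f_t = \sigma(W_f x_t + R_f h_{t-1} + b_f + forget_bias)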

The variable naming is consistent with core_rnn_cell.LSTMCell.
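
A minimal usage sketch (TF 1.x graph mode is assumed; the time-major input shape and the (outputs, final_state) return convention follow the FusedRNNCell interface and should be verified against lstm_ops.py):

  import numpy as np
  import tensorflow as tf

  time_len, batch_size, input_size, num_units = 20, 8, 32, 64

  # Fused cells take time-major input: [time_len, batch_size, input_size].
  inputs = tf.placeholder(tf.float32, [time_len, batch_size, input_size])

  cell = tf.contrib.rnn.LSTMBlockFusedCell(num_units, forget_bias=1.0)

  # A single call builds the whole unrolled LSTM as one fused op.
  outputs, (final_c, final_h) = cell(inputs, dtype=tf.float32)

  with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(outputs, feed_dict={
        inputs: np.zeros((time_len, batch_size, input_size), np.float32)})
    print(out.shape)  # (20, 8, 64)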

Properties

num_units

Number of units in this cell (output dimension).

Methods

__init__(num_units, forget_bias=1.0, cell_clip=None, use_peephole=False)

Initialize the LSTM cell.

Args:

  • num_units: int, The number of units in the LSTM cell.
  • forget_bias: float, The bias added to forget gates (see above).
  • cell_clip: clip the cell state to this value. If None (the default), no clipping is applied.
  • use_peephole: Whether to use peephole connections or not.

Defined in tensorflow/contrib/rnn/python/ops/lstm_ops.py.
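
A sketch showing the constructor arguments together with per-example sequence lengths (sequence_length is an argument of the fused cell's call, not of __init__; the exact masking behaviour should be checked against lstm_ops.py):

  import tensorflow as tf

  inputs = tf.placeholder(tf.float32, [100, 16, 128])  # [time, batch, input]
  seq_len = tf.placeholder(tf.int32, [16])             # per-example lengths

  cell = tf.contrib.rnn.LSTMBlockFusedCell(
      num_units=256,
      forget_bias=1.0,     # added to the forget-gate bias (see above)
      cell_clip=None,      # None: no clipping of the cell state
      use_peephole=True)   # enable peephole connections

  outputs, final_state = cell(
      inputs, dtype=tf.float32, sequence_length=seq_len)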