# tf.contrib.rnn.BasicLSTMCell

### class tf.contrib.rnn.core_rnn_cell.BasicLSTMCell

Basic LSTM recurrent network cell.

The implementation is based on: http://arxiv.org/abs/1409.2329.

We add forget_bias (default: 1.0) to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training.

It does not allow cell clipping or a projection layer, and it does not use peephole connections: it is the basic baseline.

## Methods

### __init__(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=tf.tanh)

Initialize the basic LSTM cell.

#### Args:

• num_units: int, The number of units in the LSTM cell.
• forget_bias: float, The bias added to forget gates (see above).
• input_size: Deprecated and unused.
• state_is_tuple: If True, accepted and returned states are 2-tuples of the c_state and m_state. If False, they are concatenated along the column axis. The latter behavior will soon be deprecated.
• activation: Activation function of the inner states.
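The gating math these parameters control can be sketched in plain NumPy. The `basic_lstm_step` helper below is a hypothetical illustration of the cell's update equations, not the TensorFlow implementation; note how `forget_bias` is added only to the forget-gate pre-activation.

```python
import numpy as np

def basic_lstm_step(x, c, h, W, b, forget_bias=1.0, activation=np.tanh):
    """One step of a basic LSTM cell (illustrative sketch).

    x: (batch, input_size) input at this time step
    c, h: (batch, num_units) previous cell state and hidden state
    W: (input_size + num_units, 4 * num_units) combined weight matrix
    b: (4 * num_units,) biases
    """
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One matmul produces the input (i), new-input (j), forget (f),
    # and output (o) pre-activations, split into four equal blocks.
    gates = np.concatenate([x, h], axis=1) @ W + b
    i, j, f, o = np.split(gates, 4, axis=1)

    # forget_bias is added to the forget gate only: with the default of
    # 1.0, sigmoid(f + 1.0) starts close to 1, so the cell initially
    # tends to remember rather than forget.
    new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * activation(j)
    new_h = activation(new_c) * sigmoid(o)
    return new_c, new_h
```

With `state_is_tuple=True`, the pair `(new_c, new_h)` corresponds to the `(c_state, m_state)` 2-tuple the cell accepts and returns.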

### zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

#### Args:

• batch_size: int, float, or unit (scalar) Tensor representing the batch size.
• dtype: the data type to use for the state.

#### Returns:

If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with shape [batch_size x s] for each s in state_size.
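A minimal NumPy sketch of these semantics (a hypothetical `zero_state` helper, not the TensorFlow API) makes the two cases concrete:

```python
import numpy as np

def zero_state(batch_size, state_size, dtype=np.float32):
    """Sketch of zero_state semantics (illustrative helper).

    An int state_size yields a single [batch_size, state_size] zero
    array; a nested tuple yields a tuple of the same structure, with
    one zero array of shape [batch_size, s] per element s.
    """
    if isinstance(state_size, tuple):
        return tuple(zero_state(batch_size, s, dtype) for s in state_size)
    return np.zeros((batch_size, state_size), dtype=dtype)
```

For a BasicLSTMCell with `state_is_tuple=True` and `num_units=4`, the state size is the pair `(4, 4)`, so `zero_state(32, (4, 4))` returns a 2-tuple of zero arrays, each of shape `(32, 4)`, matching the `(c_state, m_state)` structure.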