Basic LSTM recurrent network cell with pruning.
Overrides the call method of the TensorFlow BasicLSTMCell and injects the weight masks.
The implementation is based on: http://arxiv.org/abs/1409.2329.
We add forget_bias (default: 1) to the biases of the forget gate in order to reduce the scale of forgetting in the beginning of the training.
It does not allow cell clipping or a projection layer, and does not use peep-hole connections: it is the basic baseline.
For advanced models, please use the full LSTMCell that follows.
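To make "injects the weight masks" concrete, here is a minimal sketch of the idea behind the overridden call: the LSTM kernel is multiplied elementwise by a non-trainable binary mask before the gate matmul, so pruned weights stay zero. This is an illustration under assumed names (`masked_lstm_matmul`, `mask`, `kernel` are hypothetical), not the library's exact code.

```python
import tensorflow as tf

def masked_lstm_matmul(inputs, h_prev, num_units):
  # Illustrative sketch of mask injection; names and shapes are assumptions.
  input_depth = inputs.get_shape()[-1].value
  kernel = tf.get_variable(
      "kernel", [input_depth + num_units, 4 * num_units])
  # Binary mask with the same shape as the kernel; a pruning schedule
  # updates it, and it is excluded from training.
  mask = tf.get_variable(
      "mask", kernel.get_shape(), initializer=tf.ones_initializer(),
      trainable=False)
  bias = tf.get_variable(
      "bias", [4 * num_units], initializer=tf.zeros_initializer())
  # The mask is applied elementwise to the kernel before the gate matmul.
  gate_inputs = tf.matmul(tf.concat([inputs, h_prev], 1), mask * kernel)
  return tf.nn.bias_add(gate_inputs, bias)
```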
__init__( num_units, forget_bias=1.0, state_is_tuple=True, activation=None, reuse=None, name=None )
Initialize the basic LSTM cell with pruning.
num_units: int, The number of units in the LSTM cell.
forget_bias: float, The bias added to forget gates (see above). Must be set to 0.0 manually when restoring from CudnnLSTM-trained checkpoints.
state_is_tuple: If True, accepted and returned states are 2-tuples of the c_state and m_state. If False, they are concatenated along the column axis. The latter behavior will soon be deprecated.
activation: Activation function of the inner states. Default: tanh.
reuse: (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised.
name: String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases.
When restoring from CudnnLSTM-trained checkpoints, you must use CudnnCompatibleLSTMCell instead.
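A minimal usage sketch, assuming the cell is exposed as MaskedBasicLSTMCell in tf.contrib.model_pruning (TF 1.x); the batch, time, and depth sizes below are illustrative:

```python
import tensorflow as tf
from tensorflow.contrib import model_pruning

# Build a pruned basic LSTM and unroll it over a batch of sequences.
inputs = tf.placeholder(tf.float32, [32, 20, 64])  # [batch, time, depth]
cell = model_pruning.MaskedBasicLSTMCell(
    num_units=128, forget_bias=1.0, state_is_tuple=True)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
# outputs: [32, 20, 128]; final_state: LSTMStateTuple(c, h), each [32, 128]
```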
get_initial_state( inputs=None, batch_size=None, dtype=None )
zero_state( batch_size, dtype )
Return zero-filled state tensor(s).
batch_size: int, float, or unit Tensor representing the batch size.
dtype: the data type to use for the state.
If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros.
If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
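For example, with state_is_tuple=True (the default), zero_state returns an LSTMStateTuple of two 2-D tensors. A short sketch, again assuming the tf.contrib.model_pruning export:

```python
import tensorflow as tf
from tensorflow.contrib import model_pruning

cell = model_pruning.MaskedBasicLSTMCell(num_units=128)
# zero_state returns an LSTMStateTuple of c and h, each [batch_size, num_units].
state = cell.zero_state(batch_size=32, dtype=tf.float32)
print(state.c.shape, state.h.shape)  # (32, 128) (32, 128)
```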