Group LSTM cell (G-LSTM).
The implementation is based on:
O. Kuchaiev and B. Ginsburg, "Factorization Tricks for LSTM Networks", ICLR 2017 workshop.
In brief, a G-LSTM cell consists of one LSTM sub-cell per group, where each sub-cell operates on an evenly-sized sub-vector of the input and produces an evenly-sized sub-vector of the output. For example, a G-LSTM cell with 128 units and 4 groups consists of 4 LSTM sub-cells with 32 units each. If that G-LSTM cell is fed a 200-dim input, then each sub-cell receives a 50-dim part of the input and produces a 32-dim part of the output.
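As a minimal sketch of that example, assuming TensorFlow 1.x (where this cell is exposed as tf.contrib.rnn.GLSTMCell), the shapes in the comments mirror the numbers above:

```python
import tensorflow as tf  # TensorFlow 1.x assumed

# 128 units split across 4 groups -> four 32-unit LSTM sub-cells.
cell = tf.contrib.rnn.GLSTMCell(num_units=128, number_of_groups=4)

# A 200-dim input: each sub-cell receives a 50-dim part of it.
inputs = tf.placeholder(tf.float32, shape=[8, 200])  # batch of 8
state = cell.zero_state(batch_size=8, dtype=tf.float32)

# Each sub-cell produces a 32-dim part; the concatenated output is 128-dim.
output, new_state = cell(inputs, state)  # output shape: [8, 128]
```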
__init__( num_units, initializer=None, num_proj=None, number_of_groups=1, forget_bias=1.0, activation=tf.math.tanh, reuse=None )
Initialize the parameters of the G-LSTM cell.
num_units: int, the number of units in the G-LSTM cell.
initializer: (optional) The initializer to use for the weight and projection matrices.
num_proj: (optional) int, the output dimensionality for the projection matrices. If None, no projection is performed.
number_of_groups: (optional) int, the number of groups to use. If number_of_groups is 1, the cell is equivalent to a standard LSTM cell.
forget_bias: Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of training.
activation: Activation function of the inner states.
reuse: (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised.
Raises ValueError if num_units or num_proj is not divisible by number_of_groups.
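A hedged sketch of the constructor arguments (again assuming the TF 1.x tf.contrib.rnn API); note that both num_units and num_proj must be divisible by number_of_groups:

```python
import tensorflow as tf  # TensorFlow 1.x assumed

cell = tf.contrib.rnn.GLSTMCell(
    num_units=128,            # divisible by number_of_groups
    num_proj=64,              # also divisible by number_of_groups
    number_of_groups=4,
    forget_bias=1.0,
    activation=tf.math.tanh)

# This would raise ValueError: 100 is not divisible by 3.
# tf.contrib.rnn.GLSTMCell(num_units=100, number_of_groups=3)
```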
get_initial_state( inputs=None, batch_size=None, dtype=None )
zero_state( batch_size, dtype )
Return zero-filled state tensor(s).
batch_size: int, float, or unit Tensor representing the batch size.
dtype: the data type to use for the state.
If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros.
If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with shapes [batch_size, s] for each s in state_size.
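For this cell, a short sketch of what zero_state returns (TF 1.x assumed): the state is an LSTM-style nested tuple, so the result is a pair of 2-D zero tensors of shape [batch_size, s].

```python
import tensorflow as tf  # TensorFlow 1.x assumed

cell = tf.contrib.rnn.GLSTMCell(num_units=128, number_of_groups=4)
state = cell.zero_state(batch_size=8, dtype=tf.float32)

# state is an LSTMStateTuple(c, h): one [batch_size, s] tensor per component.
print(state.c.shape)  # (8, 128) -- cell state
print(state.h.shape)  # (8, 128) -- hidden/output state
```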