Other Functions and Classes

class tf.contrib.rnn.LayerNormBasicLSTMCell

LSTM unit with layer normalization and recurrent dropout.

This class adds layer normalization and recurrent dropout to a basic LSTM unit. The layer normalization implementation is based on:


"Layer Normalization" Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton

and is applied before the internal nonlinearities. Recurrent dropout is based on:


"Recurrent Dropout without Memory Loss" Stanislau Semeniuta, Aliaksei Severyn, Erhardt Barth.
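The layer normalization itself is straightforward: each pre-activation vector is rescaled to zero mean and unit variance across its features, then scaled and shifted by learned parameters (the `norm_gain` and `norm_shift` initial values below). A minimal NumPy sketch, using plain floats in place of the learned per-feature parameters:

```python
import numpy as np

def layer_norm(x, gain=1.0, shift=0.0, eps=1e-6):
    """Normalize each row of x to zero mean and unit variance over
    the last axis, then apply a gain and shift (sketch: plain floats
    stand in for the learned per-feature parameters)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gain * (x - mean) / np.sqrt(var + eps) + shift
```

In the cell, this normalization is applied to each gate's pre-activation before the sigmoid/tanh nonlinearity.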

tf.contrib.rnn.LayerNormBasicLSTMCell.__call__(inputs, state, scope=None) {:#LayerNormBasicLSTMCell.call}

LSTM cell with layer normalization and recurrent dropout.

tf.contrib.rnn.LayerNormBasicLSTMCell.__init__(num_units, forget_bias=1.0, input_size=None, activation=tanh, layer_norm=True, norm_gain=1.0, norm_shift=0.0, dropout_keep_prob=1.0, dropout_prob_seed=None) {:#LayerNormBasicLSTMCell.init}

Initializes the basic LSTM cell.

  • num_units: int, The number of units in the LSTM cell.
  • forget_bias: float, The bias added to forget gates (see above).
  • input_size: Deprecated and unused.
  • activation: Activation function of the inner states.
  • layer_norm: If True, layer normalization will be applied.
  • norm_gain: float, The layer normalization gain initial value. If layer_norm has been set to False, this argument will be ignored.
  • norm_shift: float, The layer normalization shift initial value. If layer_norm has been set to False, this argument will be ignored.
  • dropout_keep_prob: float or scalar (0-D) Tensor between 0 and 1, the probability of keeping each recurrent unit. If a float equal to 1.0, no dropout is applied.
  • dropout_prob_seed: (optional) integer, the randomness seed.
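To make the parameters above concrete, here is a hedged NumPy sketch of a single step of a layer-normalized LSTM with recurrent dropout, not the TF implementation itself. It assumes a single weight matrix `W` mapping the concatenated `[x; h]` to the four stacked gate pre-activations, applies `layer_norm` before each nonlinearity, and (per the Semeniuta et al. scheme) drops out only the candidate update `g`:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_norm(x, gain=1.0, shift=0.0, eps=1e-6):
    m = x.mean(axis=-1, keepdims=True)
    v = x.var(axis=-1, keepdims=True)
    return gain * (x - m) / np.sqrt(v + eps) + shift

def lstm_step(x, h, c, W, forget_bias=1.0, keep_prob=1.0, rng=None):
    """One layer-normalized LSTM step (illustrative sketch).

    W maps [x; h] to the four stacked pre-activations i, j, f, o.
    """
    concat = np.concatenate([x, h], axis=-1) @ W
    i, j, f, o = np.split(concat, 4, axis=-1)
    # layer normalization is applied to each pre-activation
    # before the internal nonlinearity
    i, j, f, o = (layer_norm(t) for t in (i, j, f, o))
    g = np.tanh(j)  # candidate update
    if keep_prob < 1.0 and rng is not None:
        # recurrent dropout on the candidate update only,
        # so the cell state's memory is not erased by dropout
        mask = rng.binomial(1, keep_prob, g.shape) / keep_prob
        g = g * mask
    new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * g
    # the new cell state is also normalized before the output tanh
    new_h = np.tanh(layer_norm(new_c)) * sigmoid(o)
    return new_h, new_c
```

Note how `forget_bias` is added to the forget-gate pre-activation, biasing the cell toward remembering early in training, and how dividing the dropout mask by `keep_prob` keeps the expected value of `g` unchanged.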



tf.contrib.rnn.LayerNormBasicLSTMCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

  • batch_size: int, float, or scalar (0-D) Tensor representing the batch size.
  • dtype: the data type to use for the state.

If state_size is an int or TensorShape, then the return value is a 2-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
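The int-versus-nested behavior can be sketched in plain NumPy (an illustrative stand-in for the Tensor-returning method, with a hypothetical `state_size` argument passed in explicitly):

```python
import numpy as np

def zero_state(batch_size, state_size, dtype=np.float32):
    """Sketch of zero_state: an int state_size yields one
    [batch_size, state_size] array of zeros; a tuple or list yields
    a nested structure of the same shape, one array per element."""
    if isinstance(state_size, int):
        return np.zeros((batch_size, state_size), dtype=dtype)
    return type(state_size)(
        zero_state(batch_size, s, dtype) for s in state_size
    )
```

For this cell, state_size is a pair (one entry for the cell state c and one for the hidden state h), so the returned state is a 2-tuple of [batch_size x num_units] zero tensors.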