Cudnn Compatible GRUCell.
Inherits From: GRUCell
tf.contrib.cudnn_rnn.CudnnCompatibleGRUCell(
num_units, reuse=None, kernel_initializer=None
)
A GRU implementation akin to tf.compat.v1.nn.rnn_cell.GRUCell, intended for use together with tf.contrib.cudnn_rnn.CudnnGRU; parameters trained with the latter can be consumed by this cell seamlessly.
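A minimal usage sketch, assuming TF 1.x; the shapes, the dynamic_rnn wrapper, and the single-layer setup are illustrative, and in practice this cell's variables must be restored from a checkpoint converted from the CudnnGRU parameters:

```python
import tensorflow as tf  # TF 1.x; tf.contrib is unavailable in TF 2.x

# Illustrative shapes (not from the original doc).
batch_size, time_steps, input_dim, num_units = 32, 20, 128, 256

inputs = tf.placeholder(tf.float32, [batch_size, time_steps, input_dim])

# Platform-independent (e.g. CPU) inference path. To reuse parameters
# trained with tf.contrib.cudnn_rnn.CudnnGRU, restore this cell's
# variables from the converted Cudnn checkpoint; variable scopes must
# line up with how that checkpoint was written.
cell = tf.contrib.cudnn_rnn.CudnnCompatibleGRUCell(num_units)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
```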
It differs from platform-independent GRUs in how the new memory gate is calculated. Nvidia chose this variant based on the GRU authors' suggestion [1] and the observation that it has no impact on accuracy [2].

[1] https://arxiv.org/abs/1406.1078
[2] http://svail.github.io/diff_graphs/
Cudnn-compatible GRU (from the Cudnn library user guide):

# reset gate
$$r_t = \sigma(x_t * W_r + h_{t-1} * R_r + b_{Wr} + b_{Rr})$$
# update gate
$$u_t = \sigma(x_t * W_u + h_{t-1} * R_u + b_{Wu} + b_{Ru})$$
# new memory gate
$$h'_t = \tanh(x_t * W_h + r_t .* (h_{t-1} * R_h + b_{Rh}) + b_{Wh})$$
$$h_t = (1 - u_t) .* h'_t + u_t .* h_{t-1}$$
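For reference, a NumPy sketch of one step of the Cudnn-variant recurrence above; the array names mirror the symbols in the formulas and are illustrative, not cuDNN's actual parameter layout:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cudnn_gru_step(x_t, h_prev,
                   W_r, W_u, W_h, R_r, R_u, R_h,
                   b_Wr, b_Rr, b_Wu, b_Ru, b_Wh, b_Rh):
    """One Cudnn-variant GRU step; x_t: [input_dim], h_prev: [num_units]."""
    r_t = sigmoid(x_t @ W_r + h_prev @ R_r + b_Wr + b_Rr)  # reset gate
    u_t = sigmoid(x_t @ W_u + h_prev @ R_u + b_Wu + b_Ru)  # update gate
    # Cudnn variant: the reset gate scales the recurrent term *after*
    # the matmul and its bias, i.e. r_t .* (h_{t-1} * R_h + b_Rh).
    h_new = np.tanh(x_t @ W_h + r_t * (h_prev @ R_h + b_Rh) + b_Wh)
    return (1.0 - u_t) * h_new + u_t * h_prev
```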
Other GRU (see tf.compat.v1.nn.rnn_cell.GRUCell and tf.contrib.rnn.GRUBlockCell):
# new memory gate
$$h'_t = \tanh(x_t * W_h + (r_t .* h_{t-1}) * R_h + b_{Wh})$$
which is not equivalent to the Cudnn GRU: in addition to the extra bias term b_{Rh},
$$r .* (h * R) \ne (r .* h) * R$$
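A small numeric sketch of the non-equivalence (shapes and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(size=4)        # a reset-gate activation
h = rng.normal(size=4)         # a previous hidden state
R = rng.normal(size=(4, 4))    # a recurrent weight matrix

cudnn_style = r * (h @ R)      # gate applied after the matmul
other_style = (r * h) @ R      # gate applied before the matmul
print(np.allclose(cudnn_style, other_style))  # False in general
```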
Attributes | |
---|---|
graph | DEPRECATED FUNCTION |
output_size | Integer or TensorShape: size of outputs produced by this cell. |
scope_name | |
state_size | size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape or a tuple of Integers or TensorShapes. |
Methods
get_initial_state
get_initial_state(
inputs=None, batch_size=None, dtype=None
)
zero_state
zero_state(
batch_size, dtype
)
Return zero-filled state tensor(s).
Args | |
---|---|
batch_size | int, float, or unit Tensor representing the batch size. |
dtype | the data type to use for the state. |
Returns | |
---|---|
If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size. |
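A hedged usage sketch, assuming TF 1.x; for this cell state_size is the integer num_units:

```python
import tensorflow as tf  # TF 1.x

cell = tf.contrib.cudnn_rnn.CudnnCompatibleGRUCell(num_units=256)
initial_state = cell.zero_state(batch_size=32, dtype=tf.float32)
# initial_state is a [32, 256] float32 tensor filled with zeros.
```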