Creates a dynamic version of bidirectional recurrent neural network. (deprecated)
```python
tf.compat.v1.nn.bidirectional_dynamic_rnn(
    cell_fw,
    cell_bw,
    inputs,
    sequence_length=None,
    initial_state_fw=None,
    initial_state_bw=None,
    dtype=None,
    parallel_iterations=None,
    swap_memory=False,
    time_major=False,
    scope=None
)
```
Takes input and builds independent forward and backward RNNs. The input_size of the forward and backward cells must match. The initial state for both directions is zero by default (but can be set optionally), and no intermediate states are ever returned -- the network is fully unrolled for the given (passed-in) length(s) of the sequence(s), or completely unrolled if no lengths are given.
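A minimal usage sketch follows. The cell type (LSTMCell), the unit count, the tensor shapes, and the per-example sequence lengths are illustrative assumptions, not values taken from this page; the code assumes TF 1.x behavior or the tf.compat.v1 endpoints in TF 2.x.

```python
import tensorflow as tf

# Illustrative shapes: 4 sequences, up to 10 time steps, 8 input features.
batch_size, max_time, input_depth = 4, 10, 8
num_units = 16

# Batch-major input (time_major=False is the default).
inputs = tf.random.normal([batch_size, max_time, input_depth])
# True length of each sequence in the batch; steps past these are ignored.
sequence_length = tf.constant([10, 7, 10, 5])

# Forward and backward cells are built independently but must share input_size.
cell_fw = tf.compat.v1.nn.rnn_cell.LSTMCell(num_units)
cell_bw = tf.compat.v1.nn.rnn_cell.LSTMCell(num_units)

(output_fw, output_bw), (state_fw, state_bw) = tf.compat.v1.nn.bidirectional_dynamic_rnn(
    cell_fw,
    cell_bw,
    inputs,
    sequence_length=sequence_length,
    dtype=tf.float32)

# With time_major=False, output_fw and output_bw are each shaped
# [batch_size, max_time, num_units].
```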
Returns

A tuple (outputs, output_states) where:

- outputs: A tuple (output_fw, output_bw) containing the forward and the backward rnn output Tensor.
  - If time_major == False (default), output_fw will be a Tensor shaped [batch_size, max_time, cell_fw.output_size] and output_bw will be a Tensor shaped [batch_size, max_time, cell_bw.output_size].
  - If time_major == True, output_fw will be a Tensor shaped [max_time, batch_size, cell_fw.output_size] and output_bw will be a Tensor shaped [max_time, batch_size, cell_bw.output_size].
  - A tuple is returned instead of a single concatenated Tensor, unlike in bidirectional_rnn. If a concatenated Tensor is preferred, the forward and backward outputs can be concatenated as tf.concat(outputs, 2).
- output_states: A tuple (output_state_fw, output_state_bw) containing the forward and the backward final states of the bidirectional rnn.
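Continuing the sketch above (names output_fw and output_bw are from that illustrative example), the two directions can be joined along the feature axis when a single Tensor is wanted:

```python
# Concatenate forward and backward outputs along the last axis; with
# time_major=False the result is shaped
# [batch_size, max_time, cell_fw.output_size + cell_bw.output_size].
outputs_concat = tf.concat([output_fw, output_bw], 2)
```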
Raises

TypeError: If cell_fw or cell_bw is not an instance of RNNCell.