Process a history of sequences that are concatenated without padding.
tf_agents.keras_layers.DynamicUnroll(cell, parallel_iterations=20, swap_memory=None, **kwargs)
Given batched, batch-major inputs, DynamicUnroll unrolls an RNN using cell; at each time step it feeds a frame of inputs as input to a call of cell.
If at least one tensor in inputs has rank 3 or above (shaped [batch_size, n, ...], where n is the number of time steps), the RNN will run for exactly n steps. If n == 1 is known statically, then only a single step is executed; this is done via a static unroll without using a tf.while_loop.
If all of the tensors in inputs have rank at most 2 (i.e., shaped [batch_size, d]), then it is assumed that a single step is being taken (i.e., n = 1) and the outputs will not have a time dimension.
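To illustrate this shape handling, here is a plain-Python sketch of the unroll semantics for one batch element (not the TensorFlow implementation; toy_cell and dynamic_unroll are hypothetical stand-ins for cell and the layer's call):

```python
def toy_cell(frame, state):
    """Hypothetical RNN cell: output and state are the running per-feature sum."""
    new_state = [s + x for s, x in zip(state, frame)]
    return new_state, new_state  # (output, next_state)

def dynamic_unroll(cell, inputs, state):
    """Sketch of the unroll semantics for a single batch element.

    inputs shaped [n, d] (a sequence of n frames) runs the cell for exactly
    n steps and returns outputs shaped [n, d]; a single frame shaped [d] is
    treated as n = 1 and the returned output keeps no time dimension.
    """
    single_step = not isinstance(inputs[0], list)  # rank check: [d] vs [n, d]
    if single_step:
        output, state = cell(inputs, state)
        return output, state             # no time dimension in the output
    outputs = []
    for frame in inputs:                 # exactly n iterations
        output, state = cell(frame, state)
        outputs.append(output)
    return outputs, state                # outputs shaped [n, d]

# Usage: three time steps of 2-d frames, starting from a zero state.
seq = [[1, 0], [2, 1], [3, 2]]
outs, final = dynamic_unroll(toy_cell, seq, [0, 0])
# outs == [[1, 0], [3, 1], [6, 3]], final == [6, 3]
```

The real layer does the same thing batch-wise over tensors, using a tf.while_loop when the number of steps is not known statically.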
Args:

  parallel_iterations: Parallel iterations to pass to tf.while_loop.
  swap_memory: Python bool. Whether to swap memory from GPU to CPU when
    storing activations for backprop. This may sometimes have a negligible
    performance impact, but can improve memory usage. See the documentation
    of tf.while_loop for more details.
  **kwargs: Additional layer arguments, such as dtype and name.
Methods:

  get_initial_state(inputs=None, batch_size=None, dtype=None)
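In the Keras RNN-cell convention, get_initial_state produces the starting (typically zero) state for a given batch size. A hedged plain-Python analogue, assuming a flat state of a fixed size (the names here are illustrative, not the layer's actual implementation):

```python
def get_initial_state(state_size, batch_size):
    """Illustrative analogue: a zero state shaped [batch_size, state_size]."""
    return [[0.0] * state_size for _ in range(batch_size)]

# Usage: zero state for a batch of 2 with a 4-unit cell.
state = get_initial_state(state_size=4, batch_size=2)
# state == [[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]
```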