Reverb trajectory sequence observer.
    tf_agents.replay_buffers.reverb_utils.ReverbTrajectorySequenceObserver(
        py_client: tf_agents.typing.types.ReverbClient,
        table_name: Union[Text, Sequence[Text]],
        sequence_length: int,
        stride_length: int = 1,
        priority: Union[float, int] = 1,
        pad_end_of_episodes: bool = False
    )
This observer is equivalent to `ReverbAddTrajectoryObserver`, except that sequences are not cut when a boundary trajectory is seen. This allows sequences to be sampled with episode boundaries anywhere in the sequence, rather than only at the end.
Consider using this observer when you want to create training experience that can encompass any subsequence of the observed trajectories.
Args

| | |
|---|---|
| `py_client` | Python client for the reverb replay server. |
| `table_name` | The table name(s) where samples will be written to. |
| `sequence_length` | The `sequence_length` used to write to the given table. |
| `stride_length` | The integer stride for the sliding window for overlapping sequences. The default value of 1 creates an item for every window. |
| `priority` | Initial priority for new samples in the replay buffer (RB). |
| `pad_end_of_episodes` | At the end of an episode, the cache is dropped by default. When `pad_end_of_episodes` is `True`, the cached steps are instead padded out to `sequence_length` so that the final steps of the episode can still be written. |
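To make the sliding-window parameters concrete, here is a minimal plain-Python sketch (not the library implementation) of which step indices end up in each written item, assuming an item is emitted for every full window of `sequence_length` steps, advanced by `stride_length`:

```python
def sequence_windows(num_steps, sequence_length, stride_length=1):
    """Return (start, end) index pairs of the windows that would be written."""
    return [(start, start + sequence_length)
            for start in range(0, num_steps - sequence_length + 1, stride_length)]

# With 6 observed steps and sequence_length=3, the default stride of 1
# produces fully overlapping windows:
print(sequence_windows(6, 3, stride_length=1))
# -> [(0, 3), (1, 4), (2, 5), (3, 6)]

# stride_length equal to sequence_length yields non-overlapping windows:
print(sequence_windows(6, 3, stride_length=3))
# -> [(0, 3), (3, 6)]
```

Because episode boundaries do not cut sequences here, windows near the end of one episode may span into the next one when padding is disabled.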
close() -> None
Closes the writer of the observer.
open() -> None
Opens the writer of the observer.
reset( write_cached_steps: bool = True ) -> None
Resets the state of the observer.
Args

| | |
|---|---|
| `write_cached_steps` | Boolean flag indicating whether to write the cached trajectory. When `True`, the function attempts to write the cached data before resetting (optionally with padding); otherwise, the cached data is dropped. |
__call__( trajectory: tf_agents.trajectories.Trajectory ) -> None

Writes the trajectory into the underlying replay buffer.

Allows `trajectory` to be a flattened trajectory. No batch dimension is allowed.

Args

| | |
|---|---|
| `trajectory` | The trajectory to be written, which can be a (possibly nested) trajectory object or a flattened version of a trajectory. It is assumed there is no batch dimension. |
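The `__call__`/`reset` contract can be illustrated with a toy stand-in (plain Python, not the TF-Agents implementation; `written` stands in for items sent to the Reverb server, and `None` stands in for zero-valued padding steps):

```python
class ToySequenceObserver:
    """Toy sketch of the observer protocol: cache steps, emit windows."""

    def __init__(self, sequence_length, stride_length=1):
        self._sequence_length = sequence_length
        self._stride_length = stride_length
        self._cache = []
        self.written = []  # stands in for items written to Reverb

    def __call__(self, step):
        # Cache the step; emit a window each time a full sequence_length of
        # steps is available at the configured stride.
        self._cache.append(step)
        overshoot = len(self._cache) - self._sequence_length
        if overshoot >= 0 and overshoot % self._stride_length == 0:
            self.written.append(tuple(self._cache[-self._sequence_length:]))

    def reset(self, write_cached_steps=True):
        # When requested, pad a trailing partial window so it can be written
        # before the cache is dropped.
        if write_cached_steps and 0 < len(self._cache) < self._sequence_length:
            padding = [None] * (self._sequence_length - len(self._cache))
            self.written.append(tuple(self._cache + padding))
        self._cache = []


obs = ToySequenceObserver(sequence_length=3)
for step in range(4):
    obs(step)
print(obs.written)  # -> [(0, 1, 2), (1, 2, 3)]

short = ToySequenceObserver(sequence_length=3)
short(0)
short(1)
short.reset(write_cached_steps=True)
print(short.written)  # -> [(0, 1, None)]
```

The real observer accepts (possibly nested) `Trajectory` structures rather than scalars, and writes items to the configured Reverb table(s) instead of a local list, but the cache/emit/reset flow follows this shape.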