

Reverb ReplayBuffer exposed as a TF-Agents replay buffer.

Inherits From: ReplayBuffer


data_spec Spec for the data held in the replay buffer.
table_name Name of the table that will be sampled.
sequence_length (Can be set to None, i.e., unknown.) The number of timesteps that each sample consists of. If not None, then the lengths of samples received from the server will be validated against this number.

server_address (Optional) Address of the Reverb replay server. One of server_address or local_server must be provided.
local_server (Optional) An instance of reverb.Server that holds the replay's data.
dataset_buffer_size (Optional) The prefetch buffer size (in number of items) of the Reverb Dataset object. A good rule of thumb is to set this value to 2-3x the sample_batch_size you will use.
max_cycle_length (Optional) The number of sequences used to populate the batches of as_dataset. By default, min(32, sample_batch_size) is used, but the number can be between 1 and sample_batch_size.
num_workers_per_iterator (Defaults to -1, i.e., auto-selected.) The number of worker threads to create per dataset iterator. When the selected table uses a FIFO or Heap sampler (i.e., a queue), then exactly 1 worker must be used to avoid races causing invalid ordering of items. For all other samplers, this value should be roughly equal to the number of threads available on the CPU.
max_samples_per_stream (Defaults to -1, i.e auto selected) The maximum number of samples to fetch from a stream before a new call is made. Keeping this number low ensures that the data is fetched uniformly from all servers.
rate_limiter_timeout_ms (Defaults to -1: infinite). Timeout (in milliseconds) to wait on the rate limiter when sampling from the table. If rate_limiter_timeout_ms >= 0, this is the timeout passed to Table::Sample describing how long to wait for the rate limiter to allow sampling.

capacity Returns the capacity of the replay buffer.
data_spec Returns the spec for items in the replay buffer.


stateful_dataset Returns whether the dataset of the replay buffer has stateful ops.



add_batch

Adds a batch of items to the replay buffer.

items Ignored.


Raises: NotImplementedError


as_dataset

Creates and returns a dataset that returns entries from the buffer.

A single entry from the dataset is the result of the following pipeline:

  • Sample sequences from the underlying data store,
  • (optionally) process them with sequence_preprocess_fn,
  • (optionally) split them into subsequences of length num_steps,
  • (optionally) batch them into batches of size sample_batch_size.

In practice, this pipeline is executed in parallel as much as possible if num_parallel_calls != 1.
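The pipeline above can be sketched in pure Python (the real implementation uses tf.data and runs in parallel; this plain-list version only illustrates the sample → preprocess → split → batch order of operations):

```python
def as_dataset_sketch(sequences, sequence_preprocess_fn=None,
                      num_steps=None, sample_batch_size=None):
    # 1. (optionally) preprocess each sampled sequence.
    if sequence_preprocess_fn is not None:
        sequences = [sequence_preprocess_fn(s) for s in sequences]
    # 2. (optionally) split each sequence into subsequences of num_steps.
    if num_steps is not None:
        items = [s[i:i + num_steps]
                 for s in sequences
                 for i in range(0, len(s) - num_steps + 1, num_steps)]
    else:
        items = list(sequences)
    # 3. (optionally) group the items into batches of sample_batch_size.
    if sample_batch_size is not None:
        return [items[i:i + sample_batch_size]
                for i in range(0, len(items), sample_batch_size)]
    return items

# Two sampled sequences, split into length-2 subsequences, batched in pairs.
batches = as_dataset_sketch(
    [[1, 2, 3, 4], [5, 6, 7, 8]], num_steps=2, sample_batch_size=2)
# → [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
```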

Some additional notes:

If num_steps is None, different replay buffers will behave differently. For example, TFUniformReplayBuffer will return single time steps without a time dimension. In contrast, e.g., EpisodicReplayBuffer will return full sequences (since each sequence may be an episode of unknown length, the outermost shape dimension will be None).

If sample_batch_size is None, no batching is performed, and there is no outer batch dimension in the returned Dataset entries. This setting is useful with variable episode lengths using, e.g., EpisodicReplayBuffer, because it allows the user to get full episodes back and use them to build padded or truncated batches themselves.

If single_deterministic_pass == True, the replay buffer will make every attempt to ensure every time step is visited exactly once in a deterministic manner (though true determinism depends on the underlying data store). Additional work may be done to ensure minibatches do not have multiple rows from the same episode. In some cases, this may mean arguments like num_parallel_calls are ignored.

sample_batch_size (Optional.) An optional batch_size to specify the number of items to return. If None (default), a single item is returned which matches the data_spec of this class (without a batch dimension). Otherwise, a batch of sample_batch_size items is returned, where each tensor in items will have its first dimension equal to sample_batch_size and the rest of the dimensions match the corresponding data_spec.
num_steps (Optional.) Optional way to specify that sub-episodes are desired. If None (default), a batch of single items is returned. Otherwise, a batch of sub-episodes is returned, where a sub-episode is a sequence of consecutive items in the replay_buffer. The returned tensors will have first dimension equal to sample_batch_size (if sample_batch_size is not None), subsequent dimension equal to num_steps, and remaining dimensions which match the data_spec of this class.
num_parallel_calls (Optional.) A tf.int32 scalar tf.Tensor, representing the number of elements to process in parallel. If not specified, elements will be processed sequentially.
sequence_preprocess_fn (Optional) A function for preprocessing the collected data before it is split into subsequences of length num_steps. Defined in TFAgent.preprocess_sequence. Defaults to pass-through.
single_deterministic_pass Python boolean. If True, the dataset will return a single deterministic pass through its underlying data.

A dataset of type tf.data.Dataset, elements of which are 2-tuples of:

  • An item or sequence of items or batch thereof
  • Auxiliary info for the items (i.e. ids, probs).

NotImplementedError If a non-default argument value is not supported.
ValueError If the data spec contains lists that must be converted to tuples.
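To make the shape conventions for sample_batch_size and num_steps concrete, here is a small sketch using NumPy stand-ins (not real buffer output); the 4-element observation spec is made up for illustration:

```python
import numpy as np

spec_shape = (4,)  # hypothetical data_spec shape

# sample_batch_size=None, num_steps=None: a single item, spec shape only.
single = np.zeros(spec_shape)

# sample_batch_size=32, num_steps=None: a leading batch dimension is added.
batch = np.zeros((32,) + spec_shape)

# sample_batch_size=32, num_steps=2: [batch, time] precede the spec shape.
sub_episodes = np.zeros((32, 2) + spec_shape)
```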


clear

Resets (clears) the contents of the replay buffer.


gather_all

Returns all the items in the buffer.




get_next

Returns an item or batch of items from the buffer.

sample_batch_size Ignored.
num_steps Ignored.
time_stacked Ignored.




num_frames

Returns the number of frames in the replay buffer.


update_priorities

Updates the priorities for the given keys.