

Makes a TensorFlow dataset.

Args:
  server_address: The server address of the replay server.
  table: The name of the table to sample from replay.
  data_spec: The spec of the data.
  max_in_flight_samples_per_worker: Optional. The buffer capacity of the dataset.
  batch_size: Optional. If specified, the returned dataset combines consecutive elements into batches. This argument is also used to determine the cycle length for interleaving; if unspecified, a default cycle length is used.
  prefetch_size: How many batches to prefetch in the pipeline.
  sequence_length: Optional. If specified, consecutive elements of each interleaved dataset are combined into sequences.
  cycle_length: Optional. When equal to batch_size, each element of a batch is taken from a different sequence. Use a smaller number to reduce memory usage.
  num_parallel_calls: Optional. If specified, the number of parallel calls used in interleave; otherwise a default is used.
  per_sequence_fn: Optional. A function applied to each sequence.
  dataset_transformation: Optional. A transformation applied to each interleaved dataset.
  num_workers_per_iterator: (Defaults to -1, i.e. auto-selected.) The number of worker threads to create per dataset iterator. When the selected table uses a FIFO sampler (i.e. a queue), exactly 1 worker must be used to avoid races causing invalid ordering of items. For all other samplers, this value should be roughly equal to the number of threads available on the CPU.
  max_samples_per_stream: (Defaults to -1, i.e. auto-selected.) The maximum number of samples to fetch from a stream before a new call is made. Keeping this number low ensures that data is fetched uniformly from all servers.
  rate_limiter_timeout_ms: Timeout (in milliseconds) to wait on the rate limiter when sampling from the table. If rate_limiter_timeout_ms >= 0, this is the timeout passed to Table::Sample describing how long to wait for the rate limiter to allow sampling.
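The interaction between cycle_length and batch_size described above can be sketched with a simplified, pure-Python model of round-robin interleaving. The interleave helper below is illustrative only (a deterministic stand-in for tf.data.Dataset.interleave, not this library's API):

```python
from itertools import islice

def interleave(sequences, cycle_length, block_length=1):
    # Simplified, deterministic model of tf.data.Dataset.interleave:
    # keep `cycle_length` sequences open at once and take `block_length`
    # items from each in round-robin order.
    active = [iter(s) for s in sequences[:cycle_length]]
    waiting = [iter(s) for s in sequences[cycle_length:]]
    out = []
    while active:
        still_open = []
        for it in active:
            taken = list(islice(it, block_length))
            out.extend(taken)
            if len(taken) == block_length:
                still_open.append(it)  # sequence not yet exhausted
            elif waiting:
                still_open.append(waiting.pop(0))  # open the next sequence
        active = still_open
    return out

# Three "sampled sequences" of four items each.
seqs = [[0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23]]

# cycle_length == batch_size: every batch of 3 mixes all 3 sequences,
# at the cost of keeping 3 streams open at once.
flat = interleave(seqs, cycle_length=3)
batches = [flat[i:i + 3] for i in range(0, len(flat), 3)]
# batches[0] == [0, 10, 20]

# cycle_length == 1: batches come from one sequence at a time, so only
# a single stream is open, reducing memory usage.
low_mem = interleave(seqs, cycle_length=1)
# low_mem[:4] == [0, 1, 2, 3]
```

This is why a smaller cycle_length reduces memory: fewer streams are held open concurrently, at the cost of less mixing of sequences within each batch.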

Returns:
  A tf.data.Dataset that streams data from the replay server.