Inputs and Readers

Placeholders

TensorFlow provides a placeholder operation that must be fed with data on execution. For more information, see the section on Feeding data.
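A minimal sketch of feeding a placeholder at run time (this assumes the v1 graph-mode API, accessed through the tf.compat.v1 shim on TensorFlow 2 installs):

```python
import tensorflow.compat.v1 as tf  # v1 API shim; plain `import tensorflow` on TF 1.x
tf.disable_eager_execution()

# A placeholder has no value until it is fed; the batch dimension is left open.
x = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.reduce_sum(x)

with tf.Session() as sess:
    result = sess.run(y, feed_dict={x: [[1.0, 2.0], [3.0, 4.0]]})
# result is 10.0; running y without feeding x would raise an error
```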

For feeding SparseTensors, which are a composite type, there is a convenience function: tf.sparse_placeholder.
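A sparse placeholder is fed with a tf.SparseTensorValue rather than a dense array. A sketch, again assuming the v1 graph API via tf.compat.v1:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

sp = tf.sparse_placeholder(tf.float32)
dense = tf.sparse_tensor_to_dense(sp)  # densify just to inspect the result

with tf.Session() as sess:
    value = tf.SparseTensorValue(indices=[[0, 0], [1, 2]],
                                 values=[1.0, 2.0],
                                 dense_shape=[2, 3])
    result = sess.run(dense, feed_dict={sp: value})
# result is [[1, 0, 0], [0, 0, 2]]
```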

Readers

TensorFlow provides a set of Reader classes for reading data formats. For more information on inputs and readers, see Reading data.
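Readers produce one record per read call from a queue of filenames. The sketch below wires tf.TextLineReader to a string_input_producer queue; the temporary file and its contents are made up for the example, and the v1 graph API is assumed via tf.compat.v1:

```python
import os
import tempfile

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Hypothetical input file for the sketch.
path = os.path.join(tempfile.mkdtemp(), "lines.txt")
with open(path, "w") as f:
    f.write("alpha\nbeta\n")

filename_queue = tf.train.string_input_producer([path])
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)  # one line of text per read

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    lines = [sess.run(value) for _ in range(2)]
    coord.request_stop()
    coord.join(threads)
# lines is [b"alpha", b"beta"]
```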

Converting

TensorFlow provides several operations that you can use to convert various data formats into tensors.
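One such conversion is tf.decode_csv, which turns CSV records into a list of typed column tensors; record_defaults fixes each column's type and supplies values for missing fields. A sketch with a made-up record, assuming the v1 API via tf.compat.v1:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

record = tf.constant("1,2.5,hello")  # hypothetical CSV record
# One default per column: int, float, and string.
col1, col2, col3 = tf.decode_csv(record, record_defaults=[[0], [0.0], [""]])

with tf.Session() as sess:
    values = sess.run([col1, col2, col3])
# values is [1, 2.5, b"hello"]
```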


Example protocol buffer

TensorFlow's recommended format for training examples is serialized Example protocol buffers. Each Example contains Features, which map feature names to lists of values.
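The round trip looks like this: build an Example proto, serialize it, then parse it back into tensors with tf.parse_single_example. The feature names and values are made up for the sketch; the v1 graph API is assumed via tf.compat.v1:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Build and serialize an Example with two hypothetical features.
example = tf.train.Example(features=tf.train.Features(feature={
    "age": tf.train.Feature(int64_list=tf.train.Int64List(value=[29])),
    "name": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"ada"])),
}))
serialized = example.SerializeToString()

# Parse the serialized record back into a dict of tensors.
parsed = tf.parse_single_example(serialized, features={
    "age": tf.FixedLenFeature([], tf.int64),
    "name": tf.FixedLenFeature([], tf.string),
})

with tf.Session() as sess:
    result = sess.run(parsed)
# result["age"] is 29, result["name"] is b"ada"
```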

Queues

TensorFlow provides several implementations of Queues, which are structures within the TensorFlow computation graph to stage pipelines of tensors together. The following sections describe the basic Queue interface and some implementations. For an example use, see Threading and Queues.
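The basic interface is the same across implementations: enqueue ops put tensors in, dequeue ops take them out in the queue's order. A minimal FIFOQueue sketch, assuming the v1 graph API via tf.compat.v1:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

queue = tf.FIFOQueue(capacity=3, dtypes=[tf.int32])
enqueue = queue.enqueue_many([[10, 20, 30]])  # one list per queue component
dequeue = queue.dequeue()

with tf.Session() as sess:
    sess.run(enqueue)
    items = [sess.run(dequeue) for _ in range(3)]
# items is [10, 20, 30], in FIFO order
```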

Conditional Accumulators
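A ConditionalAccumulator aggregates gradients and only releases their average once a required number have been applied, dropping stale gradients by local step. A sketch with made-up gradient values, assuming the v1 graph API via tf.compat.v1:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

acc = tf.ConditionalAccumulator(dtype=tf.float32, shape=[2])
apply_a = acc.apply_grad([1.0, 3.0], local_step=0)
apply_b = acc.apply_grad([3.0, 5.0], local_step=0)
take = acc.take_grad(num_required=2)  # blocks until 2 gradients accumulated

with tf.Session() as sess:
    sess.run([apply_a, apply_b])
    mean_grad = sess.run(take)
# mean_grad is [2.0, 4.0], the average of the two applied gradients
```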

Dealing with the filesystem
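Filesystem ops such as tf.read_file and tf.matching_files put file access on the graph side. A sketch using a temporary file made up for the example, assuming the v1 graph API via tf.compat.v1:

```python
import os
import tempfile

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

d = tempfile.mkdtemp()
path = os.path.join(d, "data.txt")  # hypothetical file for the sketch
with open(path, "w") as f:
    f.write("hello")

contents = tf.read_file(path)                           # whole file as a string tensor
matches = tf.matching_files(os.path.join(d, "*.txt"))   # glob evaluated at run time

with tf.Session() as sess:
    data, files = sess.run([contents, matches])
# data is b"hello"; files lists the matching path
```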

Input pipeline

TensorFlow functions for setting up an input-prefetching pipeline. Please see the reading data how-to for context.

Beginning of an input pipeline

The "producer" functions add a queue to the graph and a corresponding QueueRunner for running the subgraph that fills that queue.
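For example, tf.train.string_input_producer adds a string queue plus its QueueRunner; once the runners are started, dequeuing yields the strings in order (or shuffled). The filenames below are made up, and the v1 graph API is assumed via tf.compat.v1:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Adds a FIFO queue of strings and a QueueRunner that keeps it filled.
queue = tf.train.string_input_producer(["a.txt", "b.txt"], shuffle=False)
next_file = queue.dequeue()

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    first = [sess.run(next_file) for _ in range(2)]
    coord.request_stop()
    coord.join(threads)
# first is [b"a.txt", b"b.txt"]; with no epoch limit the queue cycles forever
```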

Batching at the end of an input pipeline

These functions add a queue to the graph to assemble a batch of examples, with possible shuffling. They also add a QueueRunner for running the subgraph that fills that queue.

Use tf.train.batch or tf.train.batch_join for batching examples that have already been well shuffled. Use tf.train.shuffle_batch or tf.train.shuffle_batch_join for examples that would benefit from additional shuffling.

Use tf.train.batch or tf.train.shuffle_batch if you want a single thread producing examples to batch, or if you have a single subgraph producing examples but you want to run it in N threads (where you increase N until it can keep the queue full). Use tf.train.batch_join or tf.train.shuffle_batch_join if you have N different subgraphs producing examples to batch and you want them run by N threads. Use maybe_* to enqueue conditionally.
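The single-thread, no-extra-shuffling case above can be sketched as follows; range_input_producer stands in for a real example-producing subgraph, and the v1 graph API is assumed via tf.compat.v1:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A single producer subgraph feeding tf.train.batch (already-ordered examples,
# so no additional shuffling is needed).
counter = tf.train.range_input_producer(limit=100, shuffle=False).dequeue()
batch = tf.train.batch([counter], batch_size=5)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    first_batch = sess.run(batch)  # the first five produced examples
    coord.request_stop()
    coord.join(threads)
```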