Queues

TensorFlow provides several implementations of 'Queues', which are structures within the TensorFlow computation graph that stage tensors as they move through an input pipeline. The following sections describe the basic Queue interface and several concrete implementations. For an example of their use, see Threading and Queues.

class tf.QueueBase

Base class for queue implementations.

A queue is a TensorFlow data structure that stores tensors across multiple steps, and exposes operations that enqueue and dequeue tensors.

Each queue element is a tuple of one or more tensors, where each tuple component has a static dtype, and may have a static shape. The queue implementations support versions of enqueue and dequeue that handle single elements, as well as versions that enqueue and dequeue a batch of elements at once.

See tf.FIFOQueue and tf.RandomShuffleQueue for concrete implementations of this class, and instructions on how to create them.
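
For example, the following sketch (assuming a TensorFlow 1.x graph-mode environment) builds a small FIFOQueue whose elements are (float32, int32) tuples and runs one enqueue/dequeue round trip:

  import tensorflow as tf

  # Each element of this queue is a tuple of a float32 scalar and an int32 scalar.
  q = tf.FIFOQueue(capacity=3, dtypes=[tf.float32, tf.int32], shapes=[[], []])
  enqueue_op = q.enqueue([1.5, 7])   # stages one (float32, int32) element
  x, y = q.dequeue()                 # tensors for the dequeued tuple

  with tf.Session() as sess:
      sess.run(enqueue_op)
      print(sess.run([x, y]))        # [1.5, 7]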


tf.QueueBase.enqueue(vals, name=None)

Enqueues one element to this queue.

If the queue is full when this operation executes, it will block until the element has been enqueued.

At runtime, this operation may raise an error if the queue is closed before or during its execution. If the queue is closed before this operation runs, tf.errors.CancelledError will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with cancel_pending_enqueues=True, or (ii) the session is closed, tf.errors.CancelledError will be raised.

Args:
  • vals: A tensor, a list or tuple of tensors, or a dictionary containing the values to enqueue.
  • name: A name for the operation (optional).
Returns:

The operation that enqueues a new tuple of tensors to the queue.
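
As a sketch of the dictionary form of vals (a queue constructed with component names; the names "feature" and "label" here are illustrative), enqueue takes a dict keyed by those names and dequeue returns one:

  import tensorflow as tf

  q = tf.FIFOQueue(capacity=10, dtypes=[tf.float32, tf.int32],
                   shapes=[[], []], names=["feature", "label"])
  enqueue_op = q.enqueue({"feature": 0.5, "label": 3})
  element = q.dequeue()   # a dict: {"feature": <float32>, "label": <int32>}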


tf.QueueBase.enqueue_many(vals, name=None)

Enqueues zero or more elements to this queue.

This operation slices each component tensor along the 0th dimension to make multiple queue elements. All of the tensors in vals must have the same size in the 0th dimension.

If the queue is full when this operation executes, it will block until all of the elements have been enqueued.

At runtime, this operation may raise an error if the queue is closed before or during its execution. If the queue is closed before this operation runs, tf.errors.CancelledError will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with cancel_pending_enqueues=True, or (ii) the session is closed, tf.errors.CancelledError will be raised.

Args:
  • vals: A tensor, a list or tuple of tensors, or a dictionary from which the queue elements are taken.
  • name: A name for the operation (optional).
Returns:

The operation that enqueues a batch of tuples of tensors to the queue.
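
For instance, in the sketch below a single-component queue receives three scalar elements from one enqueue_many call; the component tensor of shape [3] is sliced along its 0th dimension:

  import tensorflow as tf

  q = tf.FIFOQueue(capacity=10, dtypes=[tf.float32], shapes=[[]])
  # One component tensor of shape [3] becomes three queue elements.
  enqueue_op = q.enqueue_many([[1.0, 2.0, 3.0]])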


tf.QueueBase.dequeue(name=None)

Dequeues one element from this queue.

If the queue is empty when this operation executes, it will block until there is an element to dequeue.

At runtime, this operation may raise an error if the queue is closed before or during its execution. If the queue is closed, the queue is empty, and there are no pending enqueue operations that can fulfill this request, tf.errors.OutOfRangeError will be raised. If the session is closed, tf.errors.CancelledError will be raised.

Args:
  • name: A name for the operation (optional).
Returns:

The tuple of tensors that was dequeued.


tf.QueueBase.dequeue_many(n, name=None)

Dequeues and concatenates n elements from this queue.

This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size n in the 0th dimension.

If the queue is closed and there are fewer than n elements left, then a tf.errors.OutOfRangeError is raised.

At runtime, this operation may raise an error if the queue is closed before or during its execution. If the queue is closed, the queue contains fewer than n elements, and there are no pending enqueue operations that can fulfill this request, tf.errors.OutOfRangeError will be raised. If the session is closed, tf.errors.CancelledError will be raised.

Args:
  • n: A scalar Tensor containing the number of elements to dequeue.
  • name: A name for the operation (optional).
Returns:

The tuple of concatenated tensors that was dequeued.
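
Continuing the single-component sketch from enqueue_many, dequeue_many stacks the requested elements along a new 0th dimension:

  import tensorflow as tf

  q = tf.FIFOQueue(capacity=10, dtypes=[tf.float32], shapes=[[]])
  enqueue_op = q.enqueue_many([[1.0, 2.0, 3.0, 4.0]])
  batch = q.dequeue_many(2)   # a float32 tensor of shape [2]

  with tf.Session() as sess:
      sess.run(enqueue_op)
      print(sess.run(batch))  # [1. 2.]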


tf.QueueBase.size(name=None)

Computes the number of elements in this queue.

Args:
  • name: A name for the operation (optional).
Returns:

A scalar tensor containing the number of elements in this queue.


tf.QueueBase.close(cancel_pending_enqueues=False, name=None)

Closes this queue.

This operation signals that no more elements will be enqueued in the given queue. Subsequent enqueue and enqueue_many operations will fail. Subsequent dequeue and dequeue_many operations will continue to succeed if sufficient elements remain in the queue. Subsequent dequeue and dequeue_many operations that would block will fail immediately.

If cancel_pending_enqueues is True, all pending enqueue requests will also be cancelled.

Args:
  • cancel_pending_enqueues: (Optional.) A boolean, defaulting to False (described above).
  • name: A name for the operation (optional).
Returns:

The operation that closes the queue.
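
A sketch of building the two variants of the close operation (only the graph is constructed here; running either op performs the close):

  import tensorflow as tf

  q = tf.FIFOQueue(capacity=10, dtypes=[tf.float32], shapes=[[]])
  close_op = q.close()
  # After running close_op, enqueues fail, while dequeues keep succeeding
  # until the queue is empty. To also cancel enqueues that are currently
  # blocked on a full queue, set cancel_pending_enqueues:
  close_and_cancel_op = q.close(cancel_pending_enqueues=True)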

Other Methods


tf.QueueBase.__init__(dtypes, shapes, names, queue_ref)

Constructs a queue object from a queue reference.

The two optional lists, shapes and names, must be of the same length as dtypes if provided. The values at a given index i indicate the shape and name to use for the corresponding queue component in dtypes.

Args:
  • dtypes: A list of types. The length of dtypes must equal the number of tensors in each element.
  • shapes: Constraints on the shapes of the tensors in an element: a list of shape tuples or None. This list has the same length as dtypes. If the shape of any tensor in an element is constrained, all of them must be; pass None if the shapes should not be constrained.
  • names: Optional list of names. If provided, the enqueue() and dequeue() methods will use dictionaries with these names as keys. Must be None or a list or tuple of the same length as dtypes.
  • queue_ref: The queue reference, i.e. the output of the queue op.
Raises:
  • ValueError: If one of the arguments is invalid.

tf.QueueBase.dequeue_up_to(n, name=None)

Dequeues and concatenates n elements from this queue.

Note: This operation is not supported by all queues. If a queue does not support DequeueUpTo, a tf.errors.UnimplementedError is raised.

This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. If the queue has not been closed, all of the components in the dequeued tuple will have size n in the 0th dimension.

If the queue is closed and there are more than 0 but fewer than n elements remaining, then instead of raising a tf.errors.OutOfRangeError like dequeue_many, fewer than n elements are returned immediately. If the queue is closed and there are 0 elements left in the queue, then a tf.errors.OutOfRangeError is raised just as in dequeue_many. Otherwise the behavior is identical to dequeue_many.

Args:
  • n: A scalar Tensor containing the number of elements to dequeue.
  • name: A name for the operation (optional).
Returns:

The tuple of concatenated tensors that was dequeued.
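
A sketch of the closed-queue case, assuming a FIFOQueue (which supports DequeueUpTo): after the queue is closed, a request for 5 elements returns the 3 that remain:

  import tensorflow as tf

  q = tf.FIFOQueue(capacity=10, dtypes=[tf.float32], shapes=[[]])
  enqueue_op = q.enqueue_many([[1.0, 2.0, 3.0]])
  close_op = q.close()
  batch = q.dequeue_up_to(5)

  with tf.Session() as sess:
      sess.run(enqueue_op)
      sess.run(close_op)
      print(sess.run(batch))  # [1. 2. 3.] -- only three elements remained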


tf.QueueBase.dtypes

The list of dtypes for each component of a queue element.


tf.QueueBase.from_list(index, queues)

Create a queue using the queue reference from queues[index].

Args:
  • index: An integer scalar tensor that determines which queue in queues is selected.
  • queues: A list of QueueBase objects.
Returns:

A QueueBase object.

Raises:
  • TypeError: When queues is not a list of QueueBase objects, or when the data types of queues are not all the same.
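
A sketch that multiplexes between two queues at run time; the index placeholder selects which queue the dequeue reads from:

  import tensorflow as tf

  q0 = tf.FIFOQueue(capacity=10, dtypes=[tf.float32], shapes=[[]])
  q1 = tf.FIFOQueue(capacity=10, dtypes=[tf.float32], shapes=[[]])
  index = tf.placeholder(tf.int32, shape=[])
  selected = tf.QueueBase.from_list(index, [q0, q1])
  value = selected.dequeue()   # reads from q0 or q1 depending on index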

tf.QueueBase.name

The name of the underlying queue.


tf.QueueBase.names

The list of names for each component of a queue element.


tf.QueueBase.queue_ref

The underlying queue reference.


tf.QueueBase.shapes

The list of shapes for each component of a queue element.


class tf.FIFOQueue

A queue implementation that dequeues elements in first-in first-out order.

See tf.QueueBase for a description of the methods on this class.


tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, names=None, shared_name=None, name='fifo_queue')

Creates a queue that dequeues elements in a first-in first-out order.

A FIFOQueue has bounded capacity; supports multiple concurrent producers and consumers; and provides exactly-once delivery.

A FIFOQueue holds a list of up to capacity elements. Each element is a fixed-length tuple of tensors whose dtypes are described by dtypes, and whose shapes are optionally described by the shapes argument.

If the shapes argument is specified, each component of a queue element must have the respective fixed shape. If it is unspecified, different queue elements may have different shapes, but the use of dequeue_many is disallowed.

Args:
  • capacity: An integer. The upper bound on the number of elements that may be stored in this queue.
  • dtypes: A list of DType objects. The length of dtypes must equal the number of tensors in each queue element.
  • shapes: (Optional.) A list of fully-defined TensorShape objects with the same length as dtypes, or None.
  • names: (Optional.) A list of strings naming the components in the queue, with the same length as dtypes, or None. If specified, the dequeue methods return a dictionary with the names as keys.
  • shared_name: (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions.
  • name: Optional name for the queue operation.
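
A sketch of a FIFOQueue created without the shapes argument; its elements may then differ in shape, but dequeue_many cannot be used on it:

  import tensorflow as tf

  q = tf.FIFOQueue(capacity=10, dtypes=[tf.float32])
  enqueue_a = q.enqueue([tf.constant([1.0, 2.0])])      # a length-2 vector
  enqueue_b = q.enqueue([tf.constant([[3.0], [4.0]])])  # a 2x1 matrix
  value = q.dequeue()       # static shape is unknown
  # q.dequeue_many(n) would be disallowed for this queue.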

class tf.PaddingFIFOQueue

A FIFOQueue that supports batching variable-sized tensors by padding.

A PaddingFIFOQueue may contain components with dynamic shape, while also supporting dequeue_many. See the constructor for more details.

See tf.QueueBase for a description of the methods on this class.


tf.PaddingFIFOQueue.__init__(capacity, dtypes, shapes, names=None, shared_name=None, name='padding_fifo_queue')

Creates a queue that dequeues elements in a first-in first-out order.

A PaddingFIFOQueue has bounded capacity; supports multiple concurrent producers and consumers; and provides exactly-once delivery.

A PaddingFIFOQueue holds a list of up to capacity elements. Each element is a fixed-length tuple of tensors whose dtypes are described by dtypes, and whose shapes are described by the shapes argument.

The shapes argument must be specified; each component of a queue element must have the respective shape. Shapes of fixed rank but variable size are allowed by setting any shape dimension to None. In this case, the inputs' shape may vary along the given dimension, and dequeue_many will pad the given dimension with zeros up to the maximum shape of all elements in the given batch.

Args:
  • capacity: An integer. The upper bound on the number of elements that may be stored in this queue.
  • dtypes: A list of DType objects. The length of dtypes must equal the number of tensors in each queue element.
  • shapes: A list of TensorShape objects, with the same length as dtypes. Any dimension in the TensorShape containing value None is dynamic and allows values to be enqueued with variable size in that dimension.
  • names: (Optional.) A list of strings naming the components in the queue, with the same length as dtypes, or None. If specified, the dequeue methods return a dictionary with the names as keys.
  • shared_name: (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions.
  • name: Optional name for the queue operation.
Raises:
  • ValueError: If shapes is not a list of shapes, or the lengths of dtypes and shapes do not match, or if names is specified and the lengths of dtypes and names do not match.
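
A sketch of padding along a dynamic dimension: the queue below holds variable-length int32 vectors (shape [None]), and dequeue_many zero-pads them to the longest vector in the batch:

  import tensorflow as tf

  q = tf.PaddingFIFOQueue(capacity=10, dtypes=[tf.int32], shapes=[[None]])
  enqueue_a = q.enqueue([[1, 2]])
  enqueue_b = q.enqueue([[3, 4, 5]])
  batch = q.dequeue_many(2)   # an int32 tensor of shape [2, 3]

  with tf.Session() as sess:
      sess.run(enqueue_a)
      sess.run(enqueue_b)
      print(sess.run(batch))  # [[1 2 0]
                              #  [3 4 5]]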

class tf.RandomShuffleQueue

A queue implementation that dequeues elements in a random order.

See tf.QueueBase for a description of the methods on this class.


tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, names=None, seed=None, shared_name=None, name='random_shuffle_queue')

Creates a queue that dequeues elements in a random order.

A RandomShuffleQueue has bounded capacity; supports multiple concurrent producers and consumers; and provides exactly-once delivery.

A RandomShuffleQueue holds a list of up to capacity elements. Each element is a fixed-length tuple of tensors whose dtypes are described by dtypes, and whose shapes are optionally described by the shapes argument.

If the shapes argument is specified, each component of a queue element must have the respective fixed shape. If it is unspecified, different queue elements may have different shapes, but the use of dequeue_many is disallowed.

The min_after_dequeue argument allows the caller to specify a minimum number of elements that will remain in the queue after a dequeue or dequeue_many operation completes, to ensure a minimum level of mixing of elements. This invariant is maintained by blocking those operations until sufficient elements have been enqueued. The min_after_dequeue argument is ignored after the queue has been closed.

Args:
  • capacity: An integer. The upper bound on the number of elements that may be stored in this queue.
  • min_after_dequeue: An integer (described above).
  • dtypes: A list of DType objects. The length of dtypes must equal the number of tensors in each queue element.
  • shapes: (Optional.) A list of fully-defined TensorShape objects with the same length as dtypes, or None.
  • names: (Optional.) A list of strings naming the components in the queue, with the same length as dtypes, or None. If specified, the dequeue methods return a dictionary with the names as keys.
  • seed: A Python integer. Used to create a random seed. See set_random_seed for behavior.
  • shared_name: (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions.
  • name: Optional name for the queue operation.
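
A sketch of the mixing behavior: with min_after_dequeue=10 and 20 elements enqueued, at most 10 elements can be dequeued before the queue blocks waiting for more input (the seed is set only to make the shuffling reproducible):

  import tensorflow as tf

  q = tf.RandomShuffleQueue(capacity=100, min_after_dequeue=10,
                            dtypes=[tf.int32], shapes=[[]], seed=42)
  enqueue_op = q.enqueue_many([tf.range(20)])
  sample = q.dequeue()   # a randomly chosen element

  with tf.Session() as sess:
      sess.run(enqueue_op)
      print([sess.run(sample) for _ in range(10)])  # 10 shuffled values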

class tf.PriorityQueue

A queue implementation that dequeues elements in prioritized order.

See tf.QueueBase for a description of the methods on this class.


tf.PriorityQueue.__init__(capacity, types, shapes=None, names=None, shared_name=None, name='priority_queue')

Creates a queue that dequeues elements in prioritized order.

A PriorityQueue has bounded capacity; supports multiple concurrent producers and consumers; and provides exactly-once delivery.

A PriorityQueue holds a list of up to capacity elements. Each element is a fixed-length tuple of tensors whose dtypes are described by types, and whose shapes are optionally described by the shapes argument.

If the shapes argument is specified, each component of a queue element must have the respective fixed shape. If it is unspecified, different queue elements may have different shapes, but the use of dequeue_many is disallowed.

Enqueue and dequeue operations on a PriorityQueue must include an additional tuple entry at the beginning: the priority. The priority must be an int64 scalar (for enqueue) or an int64 vector (for enqueue_many).

Args:
  • capacity: An integer. The upper bound on the number of elements that may be stored in this queue.
  • types: A list of DType objects. The length of types must equal the number of tensors in each queue element, not counting the leading priority tensor. The first tensor in each element is the priority, which must be of type int64.
  • shapes: (Optional.) A list of fully-defined TensorShape objects, with the same length as types, or None.
  • names: (Optional.) A list of strings naming the components in the queue, with the same length as types, or None. If specified, the dequeue methods return a dictionary with the names as keys.
  • shared_name: (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions.
  • name: Optional name for the queue operation.
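
A sketch of the extra priority component (the string payloads are illustrative): each enqueue supplies an int64 scalar priority first, and dequeue yields the priority along with the element's other components, lowest priority value first:

  import tensorflow as tf

  q = tf.PriorityQueue(capacity=10, types=[tf.string], shapes=[[]])
  enqueue_second = q.enqueue((tf.constant(2, dtype=tf.int64), "second"))
  enqueue_first = q.enqueue((tf.constant(1, dtype=tf.int64), "first"))
  priority, value = q.dequeue()   # the int64 priority comes back first

  with tf.Session() as sess:
      sess.run([enqueue_second, enqueue_first])
      print(sess.run([priority, value]))  # [1, b'first']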