```python
tf.train.maybe_batch(
    tensors,
    keep_input,
    batch_size,
    num_threads=1,
    capacity=32,
    enqueue_many=False,
    shapes=None,
    dynamic_pad=False,
    allow_smaller_final_batch=False,
    shared_name=None,
    name=None
)
```
See the guide: Inputs and Readers > Input pipeline
Conditionally creates batches of tensors based on `keep_input`.

See the docstring in `batch` for more details.
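The filter-then-batch behavior can be pictured with a minimal plain-Python sketch (no TensorFlow; `maybe_batch_sim` is a hypothetical helper, not part of the API): examples whose keep flag is `False` never enter the queue, and the remainder are grouped into fixed-size batches.

```python
# Plain-Python sketch of maybe_batch semantics (hypothetical helper,
# not TensorFlow API): filter by keep flags, then batch the survivors.
def maybe_batch_sim(examples, keep_flags, batch_size):
    """Drop examples whose flag is False, then group into batches."""
    kept = [ex for ex, keep in zip(examples, keep_flags) if keep]
    return [kept[i:i + batch_size] for i in range(0, len(kept), batch_size)]

batches = maybe_batch_sim(
    [0, 1, 2, 3, 4, 5],
    [True, False, True, True, True, False],
    batch_size=2,
)
# kept examples are [0, 2, 3, 4] -> batches [[0, 2], [3, 4]]
```

The real op does this with a queue: `keep_input` gates the enqueue, and dequeues of `batch_size` elements produce the output batches.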
Args:

- `tensors`: The list or dictionary of tensors to enqueue.
- `keep_input`: A `bool` Tensor. This tensor controls whether the input is added to the queue or not. If it is a scalar and evaluates `True`, then `tensors` are all added to the queue. If it is a vector and `enqueue_many` is `True`, then each example is added to the queue only if the corresponding value in `keep_input` is `True`. This tensor essentially acts as a filtering mechanism.
- `batch_size`: The new batch size pulled from the queue.
- `num_threads`: The number of threads enqueuing `tensors`. The batching will be nondeterministic if `num_threads > 1`.
- `capacity`: An integer. The maximum number of elements in the queue.
- `enqueue_many`: Whether each tensor in `tensors` is a single example.
- `shapes`: (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
- `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
- `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
- `shared_name`: (Optional) If set, this queue will be shared under the given name across multiple sessions.
- `name`: (Optional) A name for the operations.
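The `dynamic_pad` behavior can be sketched in plain Python (no TensorFlow; `pad_batch` is a hypothetical helper, not part of the API): variable-length examples in the same batch are padded up to the longest shape at dequeue time.

```python
# Plain-Python sketch of dynamic_pad semantics (hypothetical helper,
# not TensorFlow API): pad every example in a batch to the longest length.
def pad_batch(batch, pad_value=0):
    """Right-pad each example so all examples share the max length."""
    max_len = max(len(ex) for ex in batch)
    return [ex + [pad_value] * (max_len - len(ex)) for ex in batch]

padded = pad_batch([[1, 2], [3], [4, 5, 6]])
# -> [[1, 2, 0], [3, 0, 0], [4, 5, 6]]
```

Without `dynamic_pad=True`, the real op instead requires every example to have a fully defined, identical shape (via `shapes` or shape inference).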
Returns:

A list or dictionary of tensors with the same types as `tensors`.
Raises:

- `ValueError`: If the `shapes` are not specified, and cannot be inferred from the elements of `tensors`.
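The effect of `allow_smaller_final_batch` can also be pictured with a small plain-Python sketch (`batch_examples` is a hypothetical helper, not TensorFlow API): when the flag is `False` (the default), a trailing partial batch is never emitted.

```python
# Plain-Python sketch of allow_smaller_final_batch semantics
# (hypothetical helper, not TensorFlow API).
def batch_examples(examples, batch_size, allow_smaller_final_batch=False):
    """Group examples into batches; optionally keep a short final batch."""
    batches = [examples[i:i + batch_size]
               for i in range(0, len(examples), batch_size)]
    if not allow_smaller_final_batch and batches and len(batches[-1]) < batch_size:
        batches.pop()  # drop the incomplete final batch
    return batches

batch_examples([1, 2, 3, 4, 5], 2)        # -> [[1, 2], [3, 4]]
batch_examples([1, 2, 3, 4, 5], 2, True)  # -> [[1, 2], [3, 4], [5]]
```

In the real pipeline the same choice shows up as a static batch dimension of `batch_size` versus a batch dimension of `None` when smaller final batches are allowed.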