Runs a list of tensors to fill a queue to create batches of examples. (deprecated)
```python
tf.compat.v1.train.batch_join(
    tensors_list,
    batch_size,
    capacity=32,
    enqueue_many=False,
    shapes=None,
    dynamic_pad=False,
    allow_smaller_final_batch=False,
    shared_name=None,
    name=None
)
```
The tensors_list argument is a list of tuples of tensors, or a list of dictionaries of tensors. Each element in the list is treated similarly to the tensors argument of tf.train.batch().
Enqueues a different list of tensors in different threads.
Implemented using a queue: a QueueRunner for the queue is added to the current Graph's QUEUE_RUNNER collection. len(tensors_list) threads will be started, with thread i enqueuing the tensors from tensors_list[i]. tensors_list[i1][j] must match tensors_list[i2][j] in type and shape, except in the first dimension if enqueue_many is true.
If enqueue_many is False, each tensors_list[i] is assumed to represent a single example. An input tensor x will be output as a tensor with shape [batch_size] + x.shape.
If enqueue_many is True, tensors_list[i] is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of tensors_list[i] should have the same size in the first dimension. The slices of any input tensor x are treated as examples, and the output tensors will have shape [batch_size] + x.shape[1:].
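For illustration, a minimal sketch of the enqueue_many=False case; the constant example tuples are hypothetical stand-ins for per-thread reader outputs:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Two hypothetical per-thread example sources; each tuple is one
# (feature, label) example because enqueue_many=False.
example_a = (tf.constant([1.0, 2.0]), tf.constant(0))
example_b = (tf.constant([3.0, 4.0]), tf.constant(1))

# len(tensors_list) == 2, so two enqueuing threads are started.
# A feature of shape [2] is batched to shape [batch_size, 2].
features, labels = tf.train.batch_join([example_a, example_b], batch_size=4)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    f, l = sess.run([features, labels])  # f.shape == (4, 2), l.shape == (4,)
    coord.request_stop()
    coord.join(threads)
```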
The capacity argument controls how far ahead the prefetching threads are allowed to grow the queue.
The returned operation is a dequeue operation and will throw tf.errors.OutOfRangeError if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread, you are responsible for catching this yourself.
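A sketch of such a main-thread loop, using tf.train.limit_epochs (an assumption made here simply to produce a finite input that eventually closes the queue):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# A finite input: two epochs of four scalars, after which the enqueue op
# raises OutOfRangeError and the queue runner closes the queue.
values = tf.train.limit_epochs(tf.constant([0, 1, 2, 3]), num_epochs=2)
batch = tf.train.batch_join([(values,)], batch_size=4, enqueue_many=True)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # limit_epochs keeps a local counter
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while True:
            print(sess.run(batch))  # prints two batches of four elements
    except tf.errors.OutOfRangeError:
        print("Input queue exhausted.")
    finally:
        coord.request_stop()
        coord.join(threads)
```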
If dynamic_pad is False, you must either pass the shapes argument or ensure that all of the tensors in tensors_list have fully-defined shapes; otherwise a ValueError is raised.
If dynamic_pad is True, it is sufficient that the rank of the tensors is known, but individual dimensions may have value None. In this case, for each enqueue the dimensions with value None may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See PaddingFIFOQueue for more info.
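A sketch of dynamic padding with a variable-length input; the random-length vector is a contrived stand-in for real variable-length data:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# A vector whose length varies per enqueue; only its rank is known
# statically, which is all dynamic_pad=True requires.
length = tf.random_uniform([], minval=1, maxval=5, dtype=tf.int32)
sequence = tf.ones([length], dtype=tf.int32)

# Each dequeued batch is padded on the right with zeros to the longest
# sequence in that minibatch.
(padded,) = tf.train.batch_join([(sequence,)], batch_size=3, dynamic_pad=True)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run(padded))  # e.g. [[1 1 1 0] [1 0 0 0] [1 1 1 1]]
    coord.request_stop()
    coord.join(threads)
```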
If allow_smaller_final_batch is True, a smaller batch value than batch_size is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the shape property, will have a first Dimension value of None, so operations that depend on a fixed batch_size will fail.
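A short sketch of that effect on static shapes:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = (tf.constant([1.0, 2.0, 3.0]),)
(fixed,) = tf.train.batch_join([x], batch_size=4)
(flexible,) = tf.train.batch_join([x], batch_size=4,
                                  allow_smaller_final_batch=True)

print(fixed.shape)     # (4, 3): the leading dimension is the batch size
print(flexible.shape)  # (None, 3): the final batch may be smaller
```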
| Argument | Description |
| --- | --- |
| tensors_list | A list of tuples or dictionaries of tensors to enqueue. |
| batch_size | An integer. The new batch size pulled from the queue. |
| capacity | An integer. The maximum number of elements in the queue. |
| enqueue_many | Whether each tensor in tensors_list is a single example. |
| shapes | (Optional) The shapes for each example. Defaults to the inferred shapes for tensors_list[i]. |
| dynamic_pad | Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes. |
| allow_smaller_final_batch | (Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queue. |
| shared_name | (Optional) If set, this queue will be shared under the given name across multiple sessions. |
| name | (Optional) A name for the operations. |
A list or dictionary of tensors with the same number and types as tensors_list[i].
Input pipelines based on Queues are not supported when eager execution is
enabled. Please use the
tf.data API to ingest data under eager execution.
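As a migration sketch, a rough tf.data equivalent interleaves several source datasets and batches the result; the per-source datasets here are hypothetical:

```python
import tensorflow as tf

# Hypothetical per-source datasets standing in for the per-thread
# elements of tensors_list.
def make_source(i):
    return tf.data.Dataset.range(4).map(lambda x: x + 10 * i)

dataset = (
    tf.data.Dataset.range(2)  # two sources
    .interleave(make_source, cycle_length=2,
                num_parallel_calls=tf.data.AUTOTUNE)
    .batch(4)  # use padded_batch(...) where dynamic_pad=True was needed
)

for batch in dataset:
    print(batch.numpy())  # [ 0 10  1 11] then [ 2 12  3 13]
```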