Batches the computation done by the decorated function.
```python
tf.nondifferentiable_batch_function(
    num_batch_threads,
    max_batch_size,
    batch_timeout_micros,
    allowed_batch_sizes=None,
    max_enqueued_batches=10,
    autograph=True,
    enable_large_batch_splitting=True
)
```
So, for example, in the following code

```python
@batch_function(1, 2, 3)
def layer(a):
    return tf.matmul(a, a)

b = layer(w)
```

if more than one `session.run` call is simultaneously trying to compute `b`, the values of `w` will be gathered, non-deterministically concatenated along the first axis, and only one thread will run the computation. See the documentation of the `Batch` op for more details.
Assumes that all arguments of the decorated function are Tensors which will be batched along their first dimension.
SparseTensor is not supported. The return value of the decorated function must be a Tensor or a list/tuple of Tensors.
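To make the gather-concatenate-compute behavior described above concrete, here is a minimal pure-Python sketch of the batching mechanism. This is not the real TensorFlow implementation (which is the `Batch` op with threads, timeouts, and queues); the `Batcher` class, its `compute` callback, and the list-based "tensors" are all illustrative assumptions. Each caller contributes rows along the first axis; the caller that fills the batch runs the computation once, and every caller gets back its own slice of the output:

```python
# Hypothetical sketch of the batching mechanism, NOT TensorFlow's actual
# implementation. Inputs are plain lists standing in for Tensors batched
# along their first dimension.
import threading


class Batcher:
    """Gathers inputs from concurrent callers, runs `compute` once on the
    concatenated batch, and returns each caller its slice of the result."""

    def __init__(self, compute, max_batch_size):
        self.compute = compute              # function applied to the whole batch
        self.max_batch_size = max_batch_size
        self.cond = threading.Condition()
        self.pending = []                   # inputs gathered so far
        self.batch_out = None               # output for the whole batch

    def __call__(self, rows):
        with self.cond:
            # This caller's rows start after everything already gathered.
            start = sum(len(r) for r in self.pending)
            self.pending.append(rows)
            if len(self.pending) == self.max_batch_size:
                # Last arrival concatenates along the first axis and is the
                # one thread that runs the computation.
                batch = [row for inp in self.pending for row in inp]
                self.batch_out = self.compute(batch)
                self.cond.notify_all()
            else:
                # Earlier arrivals wait for the batch to be computed.
                while self.batch_out is None:
                    self.cond.wait()
            # Each caller receives the unbatched slice matching its input.
            return self.batch_out[start:start + len(rows)]
```

A per-element computation such as `lambda batch: [x * 2 for x in batch]` then runs once per full batch, regardless of arrival order, and each thread's return value matches what it would have gotten unbatched. The real op additionally handles timeouts (`batch_timeout_micros`), padding to `allowed_batch_sizes`, and multiple concurrent batches, which this sketch omits.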
| Returns |
| --- |
| The decorated function will return the unbatched computation output Tensors. |