tf.map_fn(
    fn,
    elems,
    dtype=None,
    parallel_iterations=None,
    back_prop=True,
    swap_memory=False,
    infer_shape=True,
    name=None
)
map on the list of tensors unpacked from elems on dimension 0.

The simplest version of map_fn repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems. dtype is the data type of the return value of fn. Users must provide dtype if it is different from the data type of elems.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is [values.shape[0]] + fn(values[0]).shape.
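For instance, here is a minimal sketch of that shape relationship, assuming a hypothetical elems of shape [3, 2] and an fn that reduces each slice to a scalar:

import numpy as np
import tensorflow as tf

elems = np.array([[1., 2.], [3., 4.], [5., 6.]])  # shape [3, 2]
# Each call to fn receives one slice of shape [2]; tf.reduce_sum returns a scalar.
sums = tf.map_fn(tf.reduce_sum, elems)
# sums has shape [3], i.e. [elems.shape[0]] plus the scalar shape ().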
This method also allows multi-arity elems and output of fn. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of fn may match the structure of elems. That is, if elems is (t1, [t2, t3, [t4, t5]]), then an appropriate signature for fn is: fn = lambda (t1, [t2, t3, [t4, t5]]):.
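As a small sketch of multi-arity elems (the tensor names and values here are assumed, not from the docs), fn receives one slice from each tensor per step:

import tensorflow as tf

t1 = tf.constant([1, 2, 3])
t2 = tf.constant([10, 20, 30])
# fn is called with the tuple of slices (t1[i], t2[i]) at each step.
# dtype is required because the output structure (a single tensor)
# differs from the structure of elems (a tuple).
sums = tf.map_fn(lambda x: x[0] + x[1], (t1, t2), dtype=tf.int32)
# sums == [11, 22, 33]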
Furthermore, fn may emit a different structure than its input. For example, fn may look like: fn = lambda t1: (t1 + 1, t1 - 1). In this case, the dtype parameter is not optional: dtype must be a type or (possibly nested) tuple of types matching the output of fn.
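For example, a minimal sketch in which fn returns a pair per element, so dtype must spell out both output types (this mirrors the last example at the end of this page):

import tensorflow as tf

elems = tf.constant([1, 2, 3])
# fn returns a tuple, so dtype must be a matching tuple of types.
plus_minus = tf.map_fn(lambda t: (t + 1, t - 1), elems, dtype=(tf.int32, tf.int32))
# plus_minus[0] == [2, 3, 4]
# plus_minus[1] == [0, 1, 2]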
To apply a functional operation to the nonzero elements of a SparseTensor, one of the following methods is recommended. First, if the function is expressible as TensorFlow ops, use
result = SparseTensor(input.indices, fn(input.values), input.dense_shape)
If, however, the function is not expressible as a TensorFlow op, then use
result = SparseTensor(input.indices, map_fn(fn, input.values), input.dense_shape)
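A short runnable sketch of the second pattern, assuming a small hypothetical SparseTensor and a function applied via map_fn:

import tensorflow as tf

sp = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1.0, 2.0], dense_shape=[3, 4])
# Transform only the stored (nonzero) values; indices and shape are unchanged.
result = tf.SparseTensor(sp.indices, tf.map_fn(lambda v: v * v, sp.values), sp.dense_shape)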
When executing eagerly, map_fn does not execute in parallel even if
parallel_iterations is set to a value > 1. You can still get the
performance benefits of running a function in parallel by using the
tf.contrib.eager.defun decorator:
# Assume the function being used in map_fn is fn.
# To ensure map_fn calls fn in parallel, use the defun decorator.
@tf.contrib.eager.defun
def func(tensor):
  return tf.map_fn(fn, tensor)
Note that if you use the defun decorator, any non-TensorFlow Python code
that you may have written in your function won't get executed. See
tf.contrib.eager.defun for more details. The recommendation would be to
debug without defun but switch to defun to get performance benefits of
running map_fn in parallel.
Args:
fn: The callable to be performed. It accepts one argument, which will have the same (possibly nested) structure as elems. Its output must have the same structure as dtype if one is provided, otherwise it must have the same structure as elems.
elems: A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be applied to fn.
dtype: (optional) The output type(s) of fn. If fn returns a structure of Tensors differing from the structure of elems, then dtype is not optional and must have the same structure as the output of fn.
parallel_iterations: (optional) The number of iterations allowed to run in parallel. When graph building, the default value is 10. While executing eagerly, the default value is set to 1.
back_prop: (optional) True enables support for back propagation.
swap_memory: (optional) True enables GPU-CPU memory swapping.
infer_shape: (optional) False disables tests for consistent output shapes.
name: (optional) Name prefix for the returned tensors.
Returns:
A tensor or (possibly nested) sequence of tensors. Each tensor packs the
results of applying
fn to tensors unpacked from
elems along the first
dimension, from first to last.
Raises:
TypeError: if fn is not callable or the structure of the output of fn and dtype do not match, or if elems is a SparseTensor.
ValueError: if the lengths of the output of fn and dtype do not match.
Examples:

elems = np.array([1, 2, 3, 4, 5, 6])
squares = map_fn(lambda x: x * x, elems)
# squares == [1, 4, 9, 16, 25, 36]

elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
# alternate == [-1, 2, -3]

elems = np.array([1, 2, 3])
alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64))
# alternates[0] == [1, 2, 3]
# alternates[1] == [-1, -2, -3]