tf.TensorArray

Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays.

This class is meant to be used with dynamic iteration primitives such as while_loop and map_fn. It supports gradient back-propagation via special "flow" control flow dependencies.
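As an illustrative sketch (not one of the documented examples; the squares function and its values are assumptions for demonstration), a TensorArray can be threaded through tf.while_loop as a loop variable, with each iteration writing one element:

import tensorflow as tf

def squares(n):
  # One slot per iteration; the size is fixed up front here.
  ta = tf.TensorArray(tf.float32, size=n)

  def cond(i, ta):
    return i < n

  def body(i, ta):
    # write() returns a new TensorArray; re-bind it so the "flow"
    # dependency is carried through to the next iteration.
    return i + 1, ta.write(i, tf.cast(i, tf.float32) ** 2)

  _, ta = tf.while_loop(cond, body, [0, ta])
  return ta.stack()

squares(4)  # expected: [0., 1., 4., 9.]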

Example 1: Plain reading and writing.

ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True, clear_after_read=False)
ta = ta.write(0, 10)
ta = ta.write(1, 20)
ta = ta.write(2, 30)

ta.read(0)
<tf.Tensor: shape=(), dtype=float32, numpy=10.0>
ta.read(1)
<tf.Tensor: shape=(), dtype=float32, numpy=20.0>
ta.read(2)
<tf.Tensor: shape=(), dtype=float32, numpy=30.0>
ta.stack()
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([10., 20., 30.], dtype=float32)>

Example 2: A Fibonacci sequence algorithm that writes values in a loop and then returns the result.

@tf.function
def fibonacci(n):