

Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays.


This class is meant to be used with dynamic iteration primitives such as while_loop and map_fn. It supports gradient back-propagation via special "flow" control flow dependencies.
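The loop-variable threading this describes can be sketched with tf.while_loop directly. The example below is illustrative and not from this page; the function name squares is chosen here:

```python
import tensorflow as tf

def squares(x):
  # One TensorArray slot per element of x.
  n = tf.shape(x)[0]
  ta = tf.TensorArray(tf.float32, size=n)

  def body(i, ta):
    # write() returns a new TensorArray handle, which is threaded
    # through the loop variables.
    return i + 1, ta.write(i, x[i] * x[i])

  _, ta = tf.while_loop(lambda i, ta: i < n, body, [0, ta])
  return ta.stack()

print(squares(tf.constant([1.0, 2.0, 3.0])))
# tf.Tensor([1. 4. 9.], shape=(3,), dtype=float32)
```

Because write() returns a new handle rather than mutating in place, the array composes with while_loop's functional loop-variable model, and gradients flow through the "flow" dependency automatically.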

Example 1: Plain reading and writing.

ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True, clear_after_read=False)
ta = ta.write(0, 10)
ta = ta.write(1, 20)
ta = ta.write(2, 30)

ta.read(0)
<tf.Tensor: shape=(), dtype=float32, numpy=10.0>
ta.read(1)
<tf.Tensor: shape=(), dtype=float32, numpy=20.0>
ta.read(2)
<tf.Tensor: shape=(), dtype=float32, numpy=30.0>
ta.stack()
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([10., 20., 30.],
dtype=float32)>
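Beyond read() and stack(), elements can be fetched in batches. A small sketch (assumed usage, not taken from this page) using gather() and size():

```python
import tensorflow as tf

# clear_after_read=False lets each element be read more than once.
ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True,
                    clear_after_read=False)
for i, v in enumerate([10.0, 20.0, 30.0, 40.0]):
  ta = ta.write(i, v)

print(ta.size())          # current number of elements: 4
print(ta.gather([0, 2]))  # selected elements: [10. 30.]
```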

Example 2: Fibonacci sequence algorithm that writes in a loop then returns.

@tf.function
def fibonacci(n):
  ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
  ta = ta.unstack([0., 1.])

  for i in range(2, n):
    ta = ta.write(i, ta.read(i - 1) + ta.read(i - 2))

  return ta.stack()

fibonacci(7)
<tf.Tensor: shape=(7,), dtype=float32,
numpy=array([0., 1., 1., 2., 3., 5., 8.], dtype=float32)>
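The same write-in-a-loop pattern also works when the loop bound is a Tensor, where a plain Python list would not be traceable in graph mode. A hedged sketch (the name fibonacci_graph is chosen here, not from this page):

```python
import tensorflow as tf

@tf.function
def fibonacci_graph(n):
  # TensorArray stands in for a Python list inside the traced loop.
  ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
  ta = ta.unstack([0.0, 1.0])
  for i in tf.range(2, n):  # Tensor bound: AutoGraph emits a tf.while_loop
    ta = ta.write(i, ta.read(i - 1) + ta.read(i - 2))
  return ta.stack()

print(fibonacci_graph(tf.constant(7)))
# tf.Tensor([0. 1. 1. 2. 3. 5. 8.], shape=(7,), dtype=float32)
```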

Example 3: A simple loop interacting with a tf.Variable.

v = tf.Variable(1)
@tf.function
def f(x):
  ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)
  for i in tf.range(x):
    v.assign_add(i)
    ta = ta.write(i, v)
  return ta.stack()