
Module: tf.experimental.numpy

tf.experimental.numpy: NumPy API on TensorFlow.

This module provides a subset of the NumPy API, built on top of TensorFlow operations. The APIs are based on, and have been tested against, NumPy version 1.16.

The set of supported APIs may be expanded over time, and future releases may change the baseline NumPy version being supported. Some systematic differences from NumPy are described later in the "Differences with NumPy" section.

Getting Started

Please also see the TensorFlow NumPy Guide.

In the code snippets below, we assume that tf.experimental.numpy is imported as tnp and NumPy is imported as np.

print(tnp.ones([2,1]) + np.ones([1, 2]))


The module provides an ndarray class which wraps an immutable tf.Tensor. Additional functions are provided which accept array-like objects. Here, array-like objects include ndarrays as defined by this module, as well as tf.Tensor and other types accepted by NumPy.

A subset of NumPy dtypes are supported. Type promotion follows NumPy semantics.

print(tnp.ones([1, 2], dtype=tnp.int16) + tnp.ones([2, 1], dtype=tnp.uint8))
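Because promotion follows NumPy semantics, np.promote_types can be used to predict the result dtype of a mixed-dtype TF-NumPy operation. A minimal sketch using plain NumPy (the specific dtype pair mirrors the example above):

```python
import numpy as np

# TF-NumPy follows NumPy's promotion rules, so np.promote_types predicts
# the result dtype of the int16 + uint8 addition above: every uint8 value
# is representable as an int16, so the result is int16.
result = np.promote_types(np.int16, np.uint8)
print(result)  # int16
```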

Array Interface

The ndarray class implements the __array__ interface. This should allow these objects to be passed into contexts that expect a NumPy or array-like object (e.g. matplotlib).

np.sum(tnp.ones([1, 2]) + np.ones([2, 1]))

TF Interoperability

The TF-NumPy API calls can be interleaved with TensorFlow calls without incurring Tensor data copies. This is true even if the ndarray or tf.Tensor is placed on a non-CPU device.

In general, the expected behavior should be on par with that of code involving tf.Tensor and running stateless TensorFlow functions on them.

tnp.sum(tnp.ones([1, 2]) + tf.ones([2, 1]))

Note that __array_priority__ is currently chosen to be lower than that of tf.Tensor. Hence the + operator above returns a tf.Tensor.

Additional examples of interoperability include:

  • using with tf.GradientTape() scopes to compute gradients through TF-NumPy API calls;
  • using tf.distribute.Strategy scopes for distributed execution;
  • using tf.vectorized_map() to speed up code via auto-vectorization.
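As an illustration of the first point, gradients can flow through TF-NumPy calls recorded inside a tf.GradientTape scope. A minimal sketch (the input values are illustrative; tape.watch is needed because the input is not a tf.Variable):

```python
import numpy as np
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

x = tnp.asarray([1.0, 2.0])
with tf.GradientTape() as tape:
  tape.watch(x)           # non-Variable inputs must be watched explicitly
  y = tnp.sum(x * x)      # TF-NumPy ops are recorded like regular TF ops
grad = tape.gradient(y, x)  # d/dx sum(x^2) = 2x -> [2.0, 4.0]
```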

Device Support

Given that ndarray and the module's functions wrap TensorFlow constructs, the code has GPU and TPU support on par with TensorFlow. Device placement can be controlled with tf.device scopes. Note that these devices can be local or remote.

with tf.device("GPU:0"):
  x = tnp.ones([1, 2])
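Since a GPU may not be present at runtime, the device string can be chosen dynamically. A minimal sketch (the fallback guard is an illustrative pattern, not part of the API):

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

# Fall back to CPU when no GPU is available (illustrative guard).
device = "GPU:0" if tf.config.list_physical_devices("GPU") else "CPU:0"
with tf.device(device):
  x = tnp.ones([1, 2])
```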

Graph and Eager Modes

Eager-mode execution should typically match NumPy's semantics of executing op-by-op. However, the same code can be executed in graph mode by putting it inside a tf.function. The function body can contain NumPy code, and the inputs can be ndarrays as well.

@tf.function
def f(x, y):
  return tnp.sum(x + y)

f(tnp.ones([1, 2]), tf.ones([2, 1]))

Python control flow based on ndarray values is translated by AutoGraph into tf.cond and tf.while_loop constructs. The code can also be XLA-compiled for further optimization.
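As a sketch of this conversion, a data-dependent Python `if` inside a tf.function becomes a tf.cond (the function name is illustrative):

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

@tf.function
def abs_sum(x):
  s = tnp.sum(x)
  # AutoGraph converts this data-dependent `if` into a tf.cond.
  if s < 0:
    s = -s
  return s

print(abs_sum(tnp.asarray([-1.0, -2.0])))  # -> 3.0
```

In recent TensorFlow releases, passing jit_compile=True to tf.function requests XLA compilation of the same function.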

However, note that graph-mode execution can change the behavior of certain operations, since symbolic execution may not have information that is only computed at runtime. Some differences are:

  • Shapes can be incomplete or unknown in graph mode. This means that ndarray.shape, ndarray.size and ndarray.ndim can return ndarray values instead of Python integers.