tf.experimental.numpy: NumPy API on TensorFlow.
This module provides a subset of the NumPy API, built on top of TensorFlow operations. The APIs are based on, and have been tested with, NumPy version 1.16.
The set of supported APIs may be expanded over time. Also, future releases may change the baseline version of the NumPy API being supported. Some systematic differences with NumPy are listed later in the "Differences with NumPy" section.
Please also see the TensorFlow NumPy Guide.
In the code snippets below, we will assume that `tf.experimental.numpy` is imported as `tnp` and NumPy is imported as `np`:

```python
print(tnp.ones([2, 1]) + np.ones([1, 2]))
```
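For reference, a minimal import setup matching these conventions (a sketch; in TensorFlow 2.x the module can typically be imported directly):

```python
import numpy as np
import tensorflow.experimental.numpy as tnp
```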
The module provides an `ndarray` class which wraps an immutable `tf.Tensor`. Additional functions are provided which accept array-like objects. Here array-like objects include `ndarray` as defined by this module, as well as `tf.Tensor`, in addition to types accepted by NumPy.
A subset of NumPy dtypes are supported. Type promotion follows NumPy semantics.
```python
print(tnp.ones([1, 2], dtype=tnp.int16) + tnp.ones([2, 1], dtype=tnp.uint8))
```
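For example, NumPy's promotion rules resolve `int16` and `uint8` to `int16`, so the sum above should carry that dtype. A quick check (a sketch, assuming the imports above):

```python
import tensorflow.experimental.numpy as tnp

x = tnp.ones([1, 2], dtype=tnp.int16) + tnp.ones([2, 1], dtype=tnp.uint8)
print(x.shape)  # (2, 2), after broadcasting
print(x.dtype)  # int16, per NumPy type promotion of int16 and uint8
```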
The `ndarray` class implements the `__array__` interface. This should allow these objects to be passed into contexts that expect a NumPy or array-like object (e.g. matplotlib).
```python
np.sum(tnp.ones([1, 2]) + np.ones([2, 1]))
```
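As a sketch of the `__array__` protocol in action, `np.asarray` should be able to convert these objects to plain NumPy arrays (assuming the imports above):

```python
import numpy as np
import tensorflow.experimental.numpy as tnp

x = tnp.ones([1, 2])
# np.asarray invokes the __array__ protocol, yielding a plain NumPy array.
print(type(np.asarray(x)))  # <class 'numpy.ndarray'>
```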
The TF-NumPy API calls can be interleaved with TensorFlow calls without incurring Tensor data copies. This is true even if the `tf.Tensor` is placed on a non-CPU device.

In general, the expected behavior should be on par with that of code involving `tf.Tensor` and running stateless TensorFlow functions on them.
```python
tnp.sum(tnp.ones([1, 2]) + tf.ones([2, 1]))
```
Note that the `__array_priority__` of `ndarray` is currently chosen to be lower than that of `tf.Tensor`. Hence the `+` operator above returns a `tf.Tensor`.
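One way to observe this priority behavior, as a minimal sketch (assuming eager execution):

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

result = tnp.ones([1, 2]) + tf.ones([2, 1])
# Since tf.Tensor has the higher __array_priority__, mixed expressions
# resolve to tf.Tensor rather than to the tnp ndarray type.
print(isinstance(result, tf.Tensor))  # True
```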
Additional examples of interoperability include:

* using `with tf.GradientTape()` scope to compute gradients through the TF-NumPy API calls (see the sketch below)
* using `tf.distribute.Strategy` scope for distributed execution
* using `tf.vectorized_map()` for speeding up code using auto-vectorization
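As an illustration of the first item, a minimal sketch of differentiating through a TF-NumPy call (the variable shape and function here are arbitrary choices):

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

x = tf.Variable(tf.ones([2, 2]))
with tf.GradientTape() as tape:
  # TF-NumPy calls record onto the tape just like regular TensorFlow ops.
  y = tnp.sum(x * x)
print(tape.gradient(y, x))  # 2 * x, elementwise
```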
Given that `ndarray` and functions wrap TensorFlow constructs, the code will have GPU and TPU support on par with TensorFlow. Device placement can be controlled by using `with tf.device` scopes. Note that these devices could be local or remote.
with tf.device("GPU:0"): x = tnp.ones([1, 2]) print(tf.convert_to_tensor(x).device)
Graph and Eager Modes
Eager mode execution should typically match NumPy semantics of executing op-by-op. However, the same code can be executed in graph mode by putting it inside a `tf.function`. The function body can contain NumPy code, and the inputs can be `ndarray` as well.
```python
@tf.function
def f(x, y):
  return tnp.sum(x + y)

f(tnp.ones([1, 2]), tf.ones([2, 1]))
```
However, note that graph mode execution can change the behavior of certain operations, since symbolic execution may not have access to information that is only computed at runtime. Some differences are:
- Shapes can be incomplete or unknown in graph mode. This means that