RandomVariable
Supports random variable semantics for TFP distributions.
Inherits From: DeferredTensor
tfp.experimental.nn.util.RandomVariable(
    distribution,
    convert_to_tensor_fn=tfp.distributions.Distribution.sample,
    dtype=None, shape=None, name=None
)
The RandomVariable class memoizes concretizations of TFP distribution-like
objects so that random draws can be retriggered on demand, i.e., by calling
reset(). For more details, see help(tfp.util.DeferredTensor).
Examples
# In this example we see the memoization semantics in action.
tfd = tfp.distributions
tfn = tfp.experimental.nn
x = tfn.util.RandomVariable(tfd.Normal(0, 1))
x_ = tf.convert_to_tensor(x)
x_ + 1. == x + 1.
# ==> True; `x` always has the same value until reset.
x.reset()
tf.convert_to_tensor(x) == x_
# ==> False; `x` was reset which triggers a new sample.
# In this example we see how to concretize with different semantics.
tfd = tfp.distributions
tfn = tfp.experimental.nn
x = tfn.util.RandomVariable(
tfd.Bernoulli(probs=[[0.25], [0.5]]),
convert_to_tensor_fn=tfd.Distribution.mean,
dtype=tf.float32,
shape=[2, 1],
name='x')
tf.convert_to_tensor(x)
# ==> [[0.25], [0.5]]
x.shape
# ==> [2, 1]
x.dtype
# ==> tf.float32
x.name
# ==> 'x'
# In this example we see a common pitfall: accessing the memoized value from a
# different graph context.
tfd = tfp.distributions
tfn = tfp.experimental.nn
x = tfn.util.RandomVariable(tfd.Normal(0, 1))
@tf.function(autograph=False, experimental_compile=True)
def run():
return tf.convert_to_tensor(x)
first = run()
second = tf.convert_to_tensor(x)
# raises ValueError:
# "You are attempting to access a memoized value from a different
# graph context. Please call `this.reset()` before accessing a
# memoized value from a different graph context."
x.reset()
third = tf.convert_to_tensor(x)
# ==> No exception.
first == third
# ==> False
Args  

distribution

TFP distribution-like object which is passed into the
convert_to_tensor_fn whenever this object is evaluated in
Tensor-like contexts.

convert_to_tensor_fn

Python callable which takes one argument, the
distribution and returns a Tensor of type dtype and shape shape .
Default value: tfp.distributions.Distribution.sample .

dtype

TF dtype equivalent to what would otherwise be
convert_to_tensor_fn(distribution).dtype .
Default value: None (i.e., distribution.dtype ).

shape

tf.TensorShape-like object compatible with what would otherwise
be convert_to_tensor_fn(distribution).shape .
Default value: None (i.e., unspecified static shape).

name

Python str representing this object's name ; used only in graph
mode.
Default value: None (i.e., distribution.name).

Attributes  

convert_to_tensor_fn


distribution


dtype

Represents the type of the elements in a Tensor .

name

The string name of this object. 
name_scope

Returns a tf.name_scope instance for this class.

pretransformed_input

Input to transform_fn .

shape

Represents the shape of a Tensor .

submodules

Sequence of all submodules.
Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

trainable_variables

Sequence of trainable variables owned by this module and its submodules. 
transform_fn

Function which characterizes the Tensor-ization of this object.

variables

Sequence of variables owned by this module and its submodules. 
Methods
get_shape
get_shape()
Legacy means of getting Tensor shape, for compat with 2.0.0 LinOp.
is_unset
is_unset()
Returns True
if there is no memoized value and False
otherwise.
numpy
numpy()
Returns (copy of) deferred values as a NumPy array or scalar.
reset
reset()
Removes the memoized value, which triggers re-evaluation on subsequent reads.
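The memoize-and-reset behavior behind is_unset and reset can be sketched in plain Python; this is a simplified illustration of the pattern, not the actual DeferredTensor implementation:

```python
import random

class MemoizedDraw:
    """Minimal sketch of memoize/reset semantics (illustrative only)."""

    _UNSET = object()  # sentinel distinguishing "no value yet" from a drawn value

    def __init__(self, sampler):
        self._sampler = sampler
        self._value = self._UNSET

    def value(self):
        # First read triggers a draw; later reads return the memoized value.
        if self._value is self._UNSET:
            self._value = self._sampler()
        return self._value

    def is_unset(self):
        return self._value is self._UNSET

    def reset(self):
        # Drop the memoized value; the next read re-samples.
        self._value = self._UNSET

rv = MemoizedDraw(random.random)
assert rv.is_unset()
a = rv.value()
assert a == rv.value()  # memoized: same value until reset
rv.reset()
assert rv.is_unset()
```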
set_shape
set_shape(
shape
)
Updates the shape of this pretransformed_input.
This method can be called multiple times, and will merge the given shape
with the current shape of this object. It can be used to provide additional
information about the shape of this object that cannot be inferred from the
graph alone.
Args  

shape

A TensorShape representing the shape of this
pretransformed_input , a TensorShapeProto , a list, a tuple, or None.

Raises  

ValueError

If shape is not compatible with the current shape of this
pretransformed_input .
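The shape-merging rule described above can be sketched in plain Python, with None standing for an unknown dimension. merge_shapes is a hypothetical helper for illustration, not the TensorFlow implementation:

```python
def merge_shapes(current, new):
    """Merge two partial shapes; None means an unknown rank or dimension."""
    if current is None:
        return new
    if new is None:
        return current
    if len(current) != len(new):
        raise ValueError('Incompatible ranks: %r vs %r' % (current, new))
    merged = []
    for c, n in zip(current, new):
        if c is not None and n is not None and c != n:
            raise ValueError('Incompatible dims: %r vs %r' % (current, new))
        # Keep the known dimension; unknown dims are filled in by the other shape.
        merged.append(c if c is not None else n)
    return merged

# Unknown dims are filled in; known dims must agree.
print(merge_shapes([None, 1], [2, None]))  # [2, 1]
```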

with_name_scope
@classmethod
with_name_scope( method )
Decorator to automatically enter the module name scope.
class MyModule(tf.Module):
@tf.Module.with_name_scope
def __call__(self, x):
if not hasattr(self, 'w'):
self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
return tf.matmul(x, self.w)
Using the above module would produce tf.Variables and tf.Tensors whose
names include the module name:
mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=...>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=...>
Args  

method

The method to wrap. 
Returns  

The original method wrapped such that it enters the module's name scope. 
__abs__
__abs__(
x, name=None
)
Computes the absolute value of a tensor.
Given a tensor of integer or floating-point values, this operation returns a
tensor of the same type, where each element contains the absolute value of the
corresponding element in the input.
Given a tensor x of complex numbers, this operation returns a tensor of type
float32 or float64 that is the absolute value of each element in x. For
a complex number \(a + bj\), its absolute value is computed as
\(\sqrt{a^2 + b^2}\). For example:
x = tf.constant([[2.25 + 4.75j], [3.25 + 5.75j]])
tf.abs(x)
<tf.Tensor: shape=(2, 1), dtype=float64, numpy=
array([[5.25594901],
[6.60492241]])>
Args  

x

A Tensor or SparseTensor of type float16 , float32 , float64 ,
int32 , int64 , complex64 or complex128 .

name

A name for the operation (optional). 
Returns  

A Tensor or SparseTensor of the same size, type and sparsity as x ,
with absolute values. Note, for complex64 or complex128 input, the
returned Tensor will be of type float32 or float64 , respectively.
__add__
__add__(
x, y
)
Dispatches to add for strings and add_v2 for all other types.
__and__
__and__(
x, y
)
Logical AND function.
The operation works for the following input types:
- Two single elements of type bool.
- One tf.Tensor of type bool and one single bool, where the result will be
  calculated by applying logical AND with the single element to each element
  in the larger Tensor.
- Two tf.Tensor objects of type bool of the same shape. In this case, the
  result will be the element-wise logical AND of the two input tensors.
Usage:
a = tf.constant([True])
b = tf.constant([False])
tf.math.logical_and(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([False])>
c = tf.constant([True])
x = tf.constant([False, True, True, False])
tf.math.logical_and(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])>
y = tf.constant([False, False, True, True])
z = tf.constant([False, True, False, True])
tf.math.logical_and(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, False, False, True])>
Args  

x

A tf.Tensor of type bool.

y

A tf.Tensor of type bool.

name

A name for the operation (optional). 
Returns  

A tf.Tensor of type bool with the same size as that of x or y.

__bool__
__bool__()
Dummy method to prevent a tensor from being used as a Python bool
.
This overload raises a TypeError
when the user inadvertently
treats a Tensor
as a boolean (most commonly in an if
or while
statement), in code that was not converted by AutoGraph. For example:
if tf.constant(True): # Will raise.
# ...
if tf.constant(5) < tf.constant(7): # Will raise.
# ...
Raises  

TypeError .

__div__
__div__(
x, y
)
Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)
This function divides x
and y
, forcing Python 2 semantics. That is, if x
and y
are both integers then the result will be an integer. This is in
contrast to Python 3, where division with /
is always a float while division
with //
is always an integer.
Args  

x

Tensor numerator of real numeric type.

y

Tensor denominator of real numeric type.

name

A name for the operation (optional). 
Returns  

x / y returns the quotient of x and y.

__floordiv__
__floordiv__(
x, y
)
Divides x / y
elementwise, rounding toward the most negative integer.
The same as tf.compat.v1.div(x,y)
for integers, but uses
tf.floor(tf.compat.v1.div(x,y))
for
floating point arguments so that the result is always an integer (though
possibly an integer represented as floating point). This op is generated by
x // y
floor division in Python 3 and in Python 2.7 with
from __future__ import division
.
x
and y
must have the same type, and the result will have the same type
as well.
Args  

x

Tensor numerator of real numeric type.

y

Tensor denominator of real numeric type.

name

A name for the operation (optional). 
Returns  

x / y rounded down.

Raises  

TypeError

If the inputs are complex. 
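Plain Python's // operator follows the same rule of rounding toward the most negative integer, so the behavior described above is easy to check directly:

```python
# Floor division rounds toward negative infinity, not toward zero.
print(7 // 2)      # 3
print(-7 // 2)     # -4 (not -3)
# With float inputs the result is still an integer value,
# represented as floating point.
print(7.0 // 2.0)  # 3.0
```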
__ge__
__ge__(
x, y, name=None
)
Returns the truth value of (x >= y) elementwise.
Example:
x = tf.constant([5, 4, 6, 7])
y = tf.constant([5, 2, 5, 10])
tf.math.greater_equal(x, y) ==> [True, True, True, False]
x = tf.constant([5, 4, 6, 7])
y = tf.constant([5])
tf.math.greater_equal(x, y) ==> [True, False, True, True]
Args  

x

A Tensor . Must be one of the following types: float32 , float64 , int32 , uint8 , int16 , int8 , int64 , bfloat16 , uint16 , half , uint32 , uint64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor of type bool .

__getitem__
__getitem__(
tensor, slice_spec, var=None
)
Overload for Tensor.getitem.
This operation extracts the specified region from the tensor. The notation is similar to NumPy with the restriction that currently only basic indexing is supported. That means that using a non-scalar tensor as input is not currently allowed.
Some useful examples:
# Strip leading and trailing 2 elements
foo = tf.constant([1,2,3,4,5,6])
print(foo[2:-2].eval()) # => [3,4]
# Skip every other row and reverse the order of the columns
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[::2,::-1].eval()) # => [[3,2,1], [9,8,7]]
# Use scalar tensors as indices on both dimensions
print(foo[tf.constant(0), tf.constant(2)].eval()) # => 3
# Insert another dimension
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[tf.newaxis, :, :].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[:, tf.newaxis, :].eval()) # => [[[1,2,3]], [[4,5,6]], [[7,8,9]]]
print(foo[:, :, tf.newaxis].eval()) # => [[[1],[2],[3]], [[4],[5],[6]],
[[7],[8],[9]]]
# Ellipses (3 equivalent operations)
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[tf.newaxis, :, :].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[tf.newaxis, ...].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[tf.newaxis].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
# Masks
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[foo > 2].eval()) # => [3, 4, 5, 6, 7, 8, 9]
Notes:
- tf.newaxis is None, as in NumPy.
- An implicit ellipsis is placed at the end of the slice_spec.
- NumPy advanced indexing is currently not supported.
Args  

tensor

An ops.Tensor object. 
slice_spec

The arguments to Tensor.getitem. 
var

In the case of variable slice assignment, the Variable object to slice (i.e. tensor is the read-only view of this variable). 
Returns  

The appropriate slice of "tensor", based on "slice_spec". 
Raises  

ValueError

If a slice range is negative size. 
TypeError

If the slice indices aren't int, slice, ellipsis, tf.newaxis or scalar int32/int64 tensors. 
__gt__
__gt__(
x, y, name=None
)
Returns the truth value of (x > y) elementwise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]
Args  

x

A Tensor . Must be one of the following types: float32 , float64 , int32 , uint8 , int16 , int8 , int64 , bfloat16 , uint16 , half , uint32 , uint64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor of type bool .

__invert__
__invert__(
x, name=None
)
Returns the truth value of NOT x element-wise.
Example:
tf.math.logical_not(tf.constant([True, False]))
<tf.Tensor: shape=(2,), dtype=bool, numpy=array([False, True])>
Args  

x

A Tensor of type bool.

name

A name for the operation (optional). 
Returns  

A Tensor of type bool .

__iter__
__iter__()
__le__
__le__(
x, y, name=None
)
Returns the truth value of (x <= y) elementwise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less_equal(x, y) ==> [True, True, False]
x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 6])
tf.math.less_equal(x, y) ==> [True, True, True]
Args  

x

A Tensor . Must be one of the following types: float32 , float64 , int32 , uint8 , int16 , int8 , int64 , bfloat16 , uint16 , half , uint32 , uint64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor of type bool .

__lt__
__lt__(
x, y, name=None
)
Returns the truth value of (x < y) elementwise.
Example:
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less(x, y) ==> [False, True, False]
x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 7])
tf.math.less(x, y) ==> [False, True, True]
Args  

x

A Tensor . Must be one of the following types: float32 , float64 , int32 , uint8 , int16 , int8 , int64 , bfloat16 , uint16 , half , uint32 , uint64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor of type bool .

__matmul__
__matmul__(
x, y
)
Multiplies matrix a by matrix b, producing a * b.
The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size.
Both matrices must be of the same type. The supported types are:
float16
, float32
, float64
, int32
, complex64
, complex128
.
Either matrix can be transposed or adjointed (conjugated and transposed) on
the fly by setting one of the corresponding flags to True. These are False
by default.
If one or both of the matrices contain a lot of zeros, a more efficient
multiplication algorithm can be used by setting the corresponding
a_is_sparse
or b_is_sparse
flag to True
. These are False
by default.
This optimization is only available for plain matrices (rank-2 tensors) with
datatypes bfloat16 or float32.
A simple 2D tensor matrix multiplication:
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
a # 2D tensor
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
[4, 5, 6]], dtype=int32)>
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
b # 2D tensor
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 7, 8],
[ 9, 10],
[11, 12]], dtype=int32)>
c = tf.matmul(a, b)
c # `a` * `b`
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[ 58, 64],
[139, 154]], dtype=int32)>
A batch matrix multiplication with batch shape [2]:
a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])
a # 3D tensor
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1, 2, 3],
[ 4, 5, 6]],
[[ 7, 8, 9],
[10, 11, 12]]], dtype=int32)>
b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])
b # 3D tensor
<tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy=
array([[[13, 14],
[15, 16],
[17, 18]],
[[19, 20],
[21, 22],
[23, 24]]], dtype=int32)>
c = tf.matmul(a, b)
c # `a` * `b`
<tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy=
array([[[ 94, 100],
[229, 244]],
[[508, 532],
[697, 730]]], dtype=int32)>
Since Python >= 3.5 the @ operator is supported
(see PEP 465). In TensorFlow, it simply calls the tf.matmul()
function, so the following lines are equivalent:
d = a @ b @ [[10], [11]]
d = tf.matmul(tf.matmul(a, b), [[10], [11]])
Args  

a

tf.Tensor of type float16 , float32 , float64 , int32 ,
complex64 , complex128 and rank > 1.

b

tf.Tensor with same type and rank as a .

transpose_a

If True , a is transposed before multiplication.

transpose_b

If True , b is transposed before multiplication.

adjoint_a

If True , a is conjugated and transposed before
multiplication.

adjoint_b

If True , b is conjugated and transposed before
multiplication.

a_is_sparse

If True , a is treated as a sparse matrix. Notice, this
does not support tf.sparse.SparseTensor , it just makes optimizations
that assume most values in a are zero.
See tf.sparse.sparse_dense_matmul
for some support for tf.sparse.SparseTensor multiplication.

b_is_sparse

If True , b is treated as a sparse matrix. Notice, this
does not support tf.sparse.SparseTensor , it just makes optimizations
that assume most values in a are zero.
See tf.sparse.sparse_dense_matmul
for some support for tf.sparse.SparseTensor multiplication.

name

Name for the operation (optional). 
Returns  

A tf.Tensor of the same type as a and b where each innermost matrix
is the product of the corresponding matrices in a and b, e.g. if all
transpose or adjoint attributes are False:
output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]),
for all indices i, j.
Note: this is matrix product, not element-wise product.
Raises  

ValueError

If transpose_a and adjoint_a , or transpose_b and
adjoint_b are both set to True .

__mod__
__mod__(
x, y
)
Returns element-wise remainder of division. When x < 0 xor y < 0 is true,
this follows Python semantics in that the result here is consistent with
a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x.
Args  

x

A Tensor . Must be one of the following types: int32 , int64 , uint64 , bfloat16 , half , float32 , float64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor . Has the same type as x .
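Python's % operator shares the flooring semantics described above, so the identity floor(x / y) * y + mod(x, y) = x can be checked directly for mixed signs:

```python
import math

# Python's % follows flooring-divide semantics, so the identity
# floor(x / y) * y + (x % y) == x holds for all sign combinations.
for x, y in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    assert math.floor(x / y) * y + (x % y) == x

# The remainder takes the sign of the divisor.
print(-7 % 3, 7 % -3)  # 2 -2
```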

__mul__
__mul__(
x, y
)
Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
__neg__
__neg__(
x, name=None
)
Computes numerical negative value elementwise.
I.e., \(y = -x\).
Args  

x

A Tensor . Must be one of the following types: bfloat16 , half , float32 , float64 , int32 , int64 , complex64 , complex128 .

name

A name for the operation (optional). 
Returns  

A Tensor . Has the same type as x .
__nonzero__
__nonzero__()
Dummy method to prevent a tensor from being used as a Python bool
.
This is the Python 2.x counterpart to __bool__()
above.
Raises  

TypeError .

__or__
__or__(
x, y
)
Returns the truth value of x OR y elementwise.
Args  

x

A Tensor of type bool .

y

A Tensor of type bool .

name

A name for the operation (optional). 
Returns  

A Tensor of type bool .

__pow__
__pow__(
x, y
)
Computes the power of one value to another.
Given a tensor x
and a tensor y
, this operation computes \(x^y\) for
corresponding elements in x
and y
. For example:
x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y) # [[256, 65536], [9, 27]]
Args  

x

A Tensor of type float16 , float32 , float64 , int32 , int64 ,
complex64 , or complex128 .

y

A Tensor of type float16 , float32 , float64 , int32 , int64 ,
complex64 , or complex128 .

name

A name for the operation (optional). 
Returns  

A Tensor .

__radd__
__radd__(
y, x
)
Dispatches to add for strings and add_v2 for all other types.
__rand__
__rand__(
y, x
)
Logical AND function.
The operation works for the following input types:
- Two single elements of type bool.
- One tf.Tensor of type bool and one single bool, where the result will be
  calculated by applying logical AND with the single element to each element
  in the larger Tensor.
- Two tf.Tensor objects of type bool of the same shape. In this case, the
  result will be the element-wise logical AND of the two input tensors.
Usage:
a = tf.constant([True])
b = tf.constant([False])
tf.math.logical_and(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([False])>
c = tf.constant([True])
x = tf.constant([False, True, True, False])
tf.math.logical_and(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])>
y = tf.constant([False, False, True, True])
z = tf.constant([False, True, False, True])
tf.math.logical_and(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, False, False, True])>
Args  

x

A tf.Tensor of type bool.

y

A tf.Tensor of type bool.

name

A name for the operation (optional). 
Returns  

A tf.Tensor of type bool with the same size as that of x or y.

__rdiv__
__rdiv__(
y, x
)
Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)
This function divides x
and y
, forcing Python 2 semantics. That is, if x
and y
are both integers then the result will be an integer. This is in
contrast to Python 3, where division with /
is always a float while division
with //
is always an integer.
Args  

x

Tensor numerator of real numeric type.

y

Tensor denominator of real numeric type.

name

A name for the operation (optional). 
Returns  

x / y returns the quotient of x and y.

__rfloordiv__
__rfloordiv__(
y, x
)
Divides x / y
elementwise, rounding toward the most negative integer.
The same as tf.compat.v1.div(x,y)
for integers, but uses
tf.floor(tf.compat.v1.div(x,y))
for
floating point arguments so that the result is always an integer (though
possibly an integer represented as floating point). This op is generated by
x // y
floor division in Python 3 and in Python 2.7 with
from __future__ import division
.
x
and y
must have the same type, and the result will have the same type
as well.
Args  

x

Tensor numerator of real numeric type.

y

Tensor denominator of real numeric type.

name

A name for the operation (optional). 
Returns  

x / y rounded down.

Raises  

TypeError

If the inputs are complex. 
__rmatmul__
__rmatmul__(
y, x
)
Multiplies matrix a by matrix b, producing a * b.
The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size.
Both matrices must be of the same type. The supported types are:
float16
, float32
, float64
, int32
, complex64
, complex128
.
Either matrix can be transposed or adjointed (conjugated and transposed) on
the fly by setting one of the corresponding flags to True. These are False
by default.
If one or both of the matrices contain a lot of zeros, a more efficient
multiplication algorithm can be used by setting the corresponding
a_is_sparse
or b_is_sparse
flag to True
. These are False
by default.
This optimization is only available for plain matrices (rank-2 tensors) with
datatypes bfloat16 or float32.
A simple 2D tensor matrix multiplication:
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
a # 2D tensor
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
[4, 5, 6]], dtype=int32)>
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
b # 2D tensor
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 7, 8],
[ 9, 10],
[11, 12]], dtype=int32)>
c = tf.matmul(a, b)
c # `a` * `b`
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[ 58, 64],
[139, 154]], dtype=int32)>
A batch matrix multiplication with batch shape [2]:
a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])
a # 3D tensor
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1, 2, 3],
[ 4, 5, 6]],
[[ 7, 8, 9],
[10, 11, 12]]], dtype=int32)>
b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])
b # 3D tensor
<tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy=
array([[[13, 14],
[15, 16],
[17, 18]],
[[19, 20],
[21, 22],
[23, 24]]], dtype=int32)>
c = tf.matmul(a, b)
c # `a` * `b`
<tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy=
array([[[ 94, 100],
[229, 244]],
[[508, 532],
[697, 730]]], dtype=int32)>
Since Python >= 3.5 the @ operator is supported
(see PEP 465). In TensorFlow, it simply calls the tf.matmul()
function, so the following lines are equivalent:
d = a @ b @ [[10], [11]]
d = tf.matmul(tf.matmul(a, b), [[10], [11]])
Args  

a

tf.Tensor of type float16 , float32 , float64 , int32 ,
complex64 , complex128 and rank > 1.

b

tf.Tensor with same type and rank as a .

transpose_a

If True , a is transposed before multiplication.

transpose_b

If True , b is transposed before multiplication.

adjoint_a

If True , a is conjugated and transposed before
multiplication.

adjoint_b

If True , b is conjugated and transposed before
multiplication.

a_is_sparse

If True , a is treated as a sparse matrix. Notice, this
does not support tf.sparse.SparseTensor , it just makes optimizations
that assume most values in a are zero.
See tf.sparse.sparse_dense_matmul
for some support for tf.sparse.SparseTensor multiplication.

b_is_sparse

If True , b is treated as a sparse matrix. Notice, this
does not support tf.sparse.SparseTensor , it just makes optimizations
that assume most values in a are zero.
See tf.sparse.sparse_dense_matmul
for some support for tf.sparse.SparseTensor multiplication.

name

Name for the operation (optional). 
Returns  

A tf.Tensor of the same type as a and b where each innermost matrix
is the product of the corresponding matrices in a and b, e.g. if all
transpose or adjoint attributes are False:
output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]),
for all indices i, j.
Note: this is matrix product, not element-wise product.
Raises  

ValueError

If transpose_a and adjoint_a , or transpose_b and
adjoint_b are both set to True .

__rmod__
__rmod__(
y, x
)
Returns element-wise remainder of division. When x < 0 xor y < 0 is true,
this follows Python semantics in that the result here is consistent with
a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x.
Args  

x

A Tensor . Must be one of the following types: int32 , int64 , uint64 , bfloat16 , half , float32 , float64 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor . Has the same type as x .

__rmul__
__rmul__(
y, x
)
Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
__ror__
__ror__(
y, x
)
Returns the truth value of x OR y elementwise.
Args  

x

A Tensor of type bool .

y

A Tensor of type bool .

name

A name for the operation (optional). 
Returns  

A Tensor of type bool .

__rpow__
__rpow__(
y, x
)
Computes the power of one value to another.
Given a tensor x
and a tensor y
, this operation computes \(x^y\) for
corresponding elements in x
and y
. For example:
x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y) # [[256, 65536], [9, 27]]
Args  

x

A Tensor of type float16 , float32 , float64 , int32 , int64 ,
complex64 , or complex128 .

y

A Tensor of type float16 , float32 , float64 , int32 , int64 ,
complex64 , or complex128 .

name

A name for the operation (optional). 
Returns  

A Tensor .

__rsub__
__rsub__(
y, x
)
Returns x - y element-wise.
Args  

x

A Tensor . Must be one of the following types: bfloat16 , half , float32 , float64 , uint8 , int8 , uint16 , int16 , int32 , int64 , complex64 , complex128 , uint32 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor . Has the same type as x .

__rtruediv__
__rtruediv__(
y, x
)
Divides x / y elementwise (using Python 3 division operator semantics).
This function forces Python 3 division operator semantics where all integer
arguments are cast to floating types first. This op is generated by normal
x / y
division in Python 3 and in Python 2.7 with
from __future__ import division
. If you want integer division that rounds
down, use x // y
or tf.math.floordiv
.
x
and y
must have the same numeric type. If the inputs are floating
point, the output will have the same type. If the inputs are integral, the
inputs are cast to float32
for int8
and int16
and float64
for int32
and int64
(matching the behavior of Numpy).
Args  

x

Tensor numerator of numeric type.

y

Tensor denominator of numeric type.

name

A name for the operation (optional). 
Returns  

x / y evaluated in floating point.

Raises  

TypeError

If x and y have different dtypes.

__rxor__
__rxor__(
y, x
)
Logical XOR function.
x ^ y = (x | y) & ~(x & y)
The operation works for the following input types:
- Two single elements of type bool.
- One tf.Tensor of type bool and one single bool, where the result will be
  calculated by applying logical XOR with the single element to each element
  in the larger Tensor.
- Two tf.Tensor objects of type bool of the same shape. In this case, the
  result will be the element-wise logical XOR of the two input tensors.
Usage:
a = tf.constant([True])
b = tf.constant([False])
tf.math.logical_xor(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([ True])>
c = tf.constant([True])
x = tf.constant([False, True, True, False])
tf.math.logical_xor(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([ True, False, False, True])>
y = tf.constant([False, False, True, True])
z = tf.constant([False, True, False, True])
tf.math.logical_xor(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])>
Args  

x

A tf.Tensor of type bool.

y

A tf.Tensor of type bool.

name

A name for the operation (optional). 
Returns  

A tf.Tensor of type bool with the same size as that of x or y.

__sub__
__sub__(
x, y
)
Returns x - y element-wise.
Args  

x

A Tensor . Must be one of the following types: bfloat16 , half , float32 , float64 , uint8 , int8 , uint16 , int16 , int32 , int64 , complex64 , complex128 , uint32 .

y

A Tensor . Must have the same type as x .

name

A name for the operation (optional). 
Returns  

A Tensor . Has the same type as x .

__truediv__
__truediv__(
x, y
)
Divides x / y elementwise (using Python 3 division operator semantics).
This function forces Python 3 division operator semantics where all integer
arguments are cast to floating types first. This op is generated by normal
x / y
division in Python 3 and in Python 2.7 with
from __future__ import division
. If you want integer division that rounds
down, use x // y
or tf.math.floordiv
.
x
and y
must have the same numeric type. If the inputs are floating
point, the output will have the same type. If the inputs are integral, the
inputs are cast to float32
for int8
and int16
and float64
for int32
and int64
(matching the behavior of Numpy).
Args  

x

Tensor numerator of numeric type.

y

Tensor denominator of numeric type.

name

A name for the operation (optional). 
Returns  

x / y evaluated in floating point.

Raises  

TypeError

If x and y have different dtypes.
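Plain Python division shows the same split between true division and floor division described above: / always produces a float, while // keeps integer rounding.

```python
# True division always yields a float, even for integer inputs
# that divide evenly.
print(7 / 2)        # 3.5
print(type(7 / 7))  # <class 'float'>
# Floor division keeps integer semantics.
print(7 // 2)       # 3
```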

__xor__
__xor__(
x, y
)
Logical XOR function.
x ^ y = (x | y) & ~(x & y)
The operation works for the following input types:
- Two single elements of type bool.
- One tf.Tensor of type bool and one single bool, where the result will be
  calculated by applying logical XOR with the single element to each element
  in the larger Tensor.
- Two tf.Tensor objects of type bool of the same shape. In this case, the
  result will be the element-wise logical XOR of the two input tensors.
Usage:
a = tf.constant([True])
b = tf.constant([False])
tf.math.logical_xor(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([ True])>
c = tf.constant([True])
x = tf.constant([False, True, True, False])
tf.math.logical_xor(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([ True, False, False, True])>
y = tf.constant([False, False, True, True])
z = tf.constant([False, True, False, True])
tf.math.logical_xor(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, True, True, False])>
Args  

x

A tf.Tensor of type bool.

y

A tf.Tensor of type bool.

name

A name for the operation (optional). 
Returns  

A tf.Tensor of type bool with the same size as that of x or y.
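The identity x ^ y = (x | y) & ~(x & y) can be verified exhaustively over Python booleans:

```python
# Check the XOR identity over all four boolean combinations.
for x in (False, True):
    for y in (False, True):
        assert (x ^ y) == ((x or y) and not (x and y))
print('identity holds')
```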
