'tf' Dialect

The TensorFlow dialect.

This dialect maps to TensorFlow operations.

Invariants:

  • All values are of Tensor type (in particular, scalars are represented using zero-dimensional tensors);

TODO: Make invariants more structured so that we can reference them in ops.

Operations

tf._ArrayToList (TF::_ArrayToListOp)

Converts an array of tensors to a list of tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
N ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute
out_types ::mlir::Attribute derived attribute

Operands:

Operand Description
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf._EagerConst (TF::_EagerConstOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf._FusedBatchNormEx (TF::_FusedBatchNormExOp)

Internal FusedBatchNorm operation: reserved for internal use.

Do not invoke this operator directly in Python. A fusion optimization is expected to create these operators.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
epsilon ::mlir::FloatAttr 32-bit float attribute
exponential_avg_factor ::mlir::FloatAttr 32-bit float attribute
activation_mode ::mlir::StringAttr string attribute
data_format ::mlir::StringAttr 'NHWC' or 'NCHW' convnet data format
is_training ::mlir::BoolAttr bool attribute
num_side_inputs ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute
U ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of bfloat16 or 16-bit float or 32-bit float values
scale tensor of 32-bit float values
offset tensor of 32-bit float values
mean tensor of 32-bit float values
variance tensor of 32-bit float values
side_input variadic of tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
y tensor of bfloat16 or 16-bit float or 32-bit float values
batch_mean tensor of 32-bit float values
batch_variance tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values
reserve_space_3 tensor of 32-bit float values

tf._FusedConv2D (TF::_FusedConv2DOp)

Performs a convolution followed by a specified series of operations.

The inputs to the convolution are input and filter. The series of operations that follows is specified by the fused_ops attribute, which is a list of TF op names specified as strings (e.g. "Relu"). They are performed in order, where the (first) input to each op is the output of the preceding op. The first input and the output of each fused_op must be of type T.

Currently supported fused_op combinations are: [X] and [X,A], where X is one of {"BiasAdd","FusedBatchNorm"} and A is one of {"Elu","Relu","Relu6"}.

  • The first input to op X is the Conv2D result, and the additional input(s) to X are specified by args.
  • If there is an op A specified, the output of op X is the input to op A, and op A produces the _FusedConv2D output. Otherwise, op X produces the _FusedConv2D output.
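For orientation, the fused sequence for fused_ops = ["BiasAdd", "Relu"] is equivalent to the following unfused Python computation (an illustrative sketch with made-up shapes; the fused op itself is created by fusion passes rather than written by hand):

  import tensorflow as tf

  x = tf.random.normal([1, 8, 8, 3])     # input, NHWC layout
  w = tf.random.normal([3, 3, 3, 16])    # filter, HWIO layout
  bias = tf.zeros([16])                  # the extra arg consumed by BiasAdd

  conv = tf.nn.conv2d(x, w, strides=1, padding="SAME")
  y = tf.nn.relu(tf.nn.bias_add(conv, bias))   # what _FusedConv2D computes in a single kernel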

Traits: AlwaysSpeculatableImplTrait, AttrSizedOperandSegments

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
num_args ::mlir::IntegerAttr 64-bit signless integer attribute whose minimum value is 0
strides ::mlir::ArrayAttr 64-bit integer array attribute
padding ::mlir::StringAttr string attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings ::mlir::ArrayAttr 64-bit integer array attribute
data_format ::mlir::StringAttr string attribute whose value is NHWC, or NCHW, or NCHW_VECT_C
filter_format ::mlir::StringAttr string attribute whose value is HWIO, or OIHW, or OIHW_VECT_I
dilations ::mlir::ArrayAttr 64-bit integer array attribute
use_cudnn_on_gpu ::mlir::BoolAttr bool attribute
fused_ops ::mlir::ArrayAttr string array attribute
epsilon ::mlir::FloatAttr 32-bit float attribute
leakyrelu_alpha ::mlir::FloatAttr 32-bit float attribute
num_host_args ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute
TArgs ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float or 8-bit integer or 8-bit quantized integer values
filter tensor of 16-bit float or 32-bit float or 64-bit float or 8-bit integer or 8-bit quantized integer values
args variadic of tensor of tf.dtype values
host_args variadic of tensor of 32-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float or 8-bit integer or 8-bit quantized integer values

tf._FusedMatMul (TF::_FusedMatMulOp)

Performs a MatMul followed by a specified series of operations.

The inputs to the MatMul are specified by a and b. The series of operations that follows is specified by the fused_ops attribute, which is a list of TF op names specified as strings (e.g. "Relu"). They are performed in order, where the (first) input to each op is the output of the preceding op. The first input and the output of each fused_op must be of type T.

Currently supported fused_op combinations are: ["BiasAdd"] and ["BiasAdd",A], where A is one of {"Elu","Relu","Relu6"}.

  • The first input to BiasAdd is the MatMul result, and the additional BiasAdd input is specified by args.
  • If there is an op A specified, the output of the BiasAdd is the input to op A, and op A produces the _FusedMatMul output. Otherwise, the BiasAdd produces the _FusedMatMul output.

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
transpose_a ::mlir::BoolAttr bool attribute
transpose_b ::mlir::BoolAttr bool attribute
fused_ops ::mlir::ArrayAttr string array attribute
epsilon ::mlir::FloatAttr 32-bit float attribute
leakyrelu_alpha ::mlir::FloatAttr 32-bit float attribute
num_args ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
a tensor of bfloat16 or 16-bit float or 32-bit float values
b tensor of bfloat16 or 16-bit float or 32-bit float values
args variadic of tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
product tensor of bfloat16 or 16-bit float or 32-bit float values

tf._HostRecv (TF::_HostRecvOp)

Receives the named tensor from send_device on recv_device.

_HostRecv produces its output on host memory whereas _Recv produces its output on device memory.

Interfaces: GetResourceInstanceInterface, TF_RecvSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}

Attributes:

Attribute MLIR Type Description
tensor_name ::mlir::StringAttr string attribute
send_device ::mlir::StringAttr string attribute
send_device_incarnation ::mlir::IntegerAttr 64-bit signless integer attribute
recv_device ::mlir::StringAttr string attribute
client_terminated ::mlir::BoolAttr bool attribute
tensor_type ::mlir::Attribute derived attribute

Results:

Result Description
tensor tensor of tf.dtype values

tf._HostSend (TF::_HostSendOp)

Sends the named tensor from send_device to recv_device.

_HostSend requires its input on host memory whereas _Send requires its input on device memory.

Interfaces: GetResourceInstanceInterface, TF_SendSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}

Attributes:

Attribute MLIR Type Description
tensor_name ::mlir::StringAttr string attribute
send_device ::mlir::StringAttr string attribute
send_device_incarnation ::mlir::IntegerAttr 64-bit signless integer attribute
recv_device ::mlir::StringAttr string attribute
client_terminated ::mlir::BoolAttr bool attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values

tf._InternalTestMustExecuteTrait_ (TF::InternalTestMustExecuteTrait)

Internal op for testing only

Interfaces: TF_MustExecute (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

tf._InternalTestNonResourceValueSideEffects_ (TF::InternalTestNonResourceValueSideEffects)

Internal op for testing only

Operands:

Operand Description
key tensor of string values

tf._ListToArray (TF::_ListToArrayOp)

Converts a list of tensors to an array of tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
Tin ::mlir::Attribute derived attribute
N ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf._Recv (TF::_RecvOp)

Receives the named tensor from send_device on recv_device.

Interfaces: GetResourceInstanceInterface, TF_RecvSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}

Attributes:

Attribute MLIR Type Description
tensor_name ::mlir::StringAttr string attribute
send_device ::mlir::StringAttr string attribute
send_device_incarnation ::mlir::IntegerAttr 64-bit signless integer attribute
recv_device ::mlir::StringAttr string attribute
client_terminated ::mlir::BoolAttr bool attribute
tensor_type ::mlir::Attribute derived attribute

Results:

Result Description
tensor tensor of tf.dtype values

tf._Send (TF::_SendOp)

Sends the named tensor from send_device to recv_device.

Interfaces: GetResourceInstanceInterface, TF_SendSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}

Attributes:

Attribute MLIR Type Description
tensor_name ::mlir::StringAttr string attribute
send_device ::mlir::StringAttr string attribute
send_device_incarnation ::mlir::IntegerAttr 64-bit signless integer attribute
recv_device ::mlir::StringAttr string attribute
client_terminated ::mlir::BoolAttr bool attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values

tf._TPUCompileMlir (TF::_TPUCompileMlirOp)

Compiles a computation for execution on one or more TPU devices.

For the internal use of the distributed TPU compiler.

'mlir_module' is a serialized MLIR module with a main function that contains the target computation. 'dynamic_shapes' contains the dynamic shapes of arguments whose shapes were not known statically at TPUReplication rewrite time. 'metadata' is a serialized TPUCompileMetadataProto describing the shapes and types of the inputs to the computation, as well as a mapping onto the TPU pod topology. The 'program' output is a string key that is passed to the TPUExecute op and used to look up the program in the compilation cache.

Interfaces: TF_MustExecute (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

Attribute MLIR Type Description
mlir_module ::mlir::StringAttr string attribute
metadata ::mlir::StringAttr string attribute
NumDynamicShapes ::mlir::Attribute derived attribute
num_computations ::mlir::Attribute derived attribute

Operands:

Operand Description
dynamic_shapes variadic of tensor of 64-bit integer values

Results:

Result Description
compilation_status tensor of string values
program variadic of tensor of string values

tf._TPUDeviceOrdinalPlaceholder (TF::_TPUDeviceOrdinalPlaceholderOp)

Placeholder for a device ordinal that depends on its tf_device.replicate ancestor.

This op must have a tf_device.replicate ancestor. The ancestor's replica_id and the logical_core attribute together correspond to a TPU core. This op maps that TPU core to a device_ordinal, where the device ordinal is the index of the core relative to its host.

The replicate_to_island pass removes and flattens tf_device.replicate, so it converts this op to the constant index of the core relative to its host.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
logical_core ::mlir::IntegerAttr 64-bit signless integer attribute

Results:

Result Description
device_ordinal tensor of 64-bit integer values

tf._UnaryOpsComposition (TF::_UnaryOpsCompositionOp)

NOTE: Do not invoke this operator directly in Python. A graph rewrite pass is expected to create these operators.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
op_names ::mlir::ArrayAttr string array attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
y tensor of 16-bit float or 32-bit float or 64-bit float values

tf._XlaCompile (TF::_XlaCompileOp)

XLA Compile Op. For use by the XLA JIT only.

Compiles a TensorFlow function into an XLA LocalExecutable and returns a key that _XlaRun can use to look up the LocalExecutable and execute it.

Traits: AttrSizedOperandSegments

Attributes:

Attribute MLIR Type Description
must_compile ::mlir::BoolAttr bool attribute
function ::mlir::SymbolRefAttr symbol reference attribute
Nresources ::mlir::Attribute derived attribute
Targs ::mlir::Attribute derived attribute
Tconstants ::mlir::Attribute derived attribute

Operands:

Operand Description
constants variadic of tensor of tf.dtype values
args variadic of tensor of tf.dtype values
resources variadic of tensor of resource values

Results:

Result Description
key tensor of string values
compilation_successful tensor of bool values

tf._XlaCompileMlirPlaceholderProgramKey (TF::_XlaCompileMlirPlaceholderProgramKeyOp)

Placeholder program key (compilation cache key) of an XLA program.

This op can be used when certain rewrite passes materialize ops that require a program key but the _TPUCompileMlir or _XlaCompile op has not been added yet. Subsequent rewrite passes must replace this op with program output.

Interfaces: TF_MustExecute (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Results:

Result Description
program tensor of string values

tf._XlaHostComputeMlir (TF::_XlaHostComputeMlirOp)

A pseudo-op to represent host-side computation in an XLA program.

Interfaces: TF_RecvSideEffect (MemoryEffectOpInterface), TF_SendSideEffect (MemoryEffectOpInterface), TF_XlaHostComputeSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::XlaHostCompute}

Attributes:

Attribute MLIR Type Description
send_key ::mlir::StringAttr string attribute
recv_key ::mlir::StringAttr string attribute
host_mlir_module ::mlir::StringAttr string attribute
manual_sharding ::mlir::BoolAttr bool attribute
Tinputs ::mlir::Attribute derived attribute
Toutputs ::mlir::Attribute derived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf._XlaRecvAtHost (TF::_XlaRecvAtHostOp)

A placeholder op to receive values from a running XLA computation.

Interfaces: GetResourceInstanceInterface, TF_RecvSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}

Attributes:

Attribute MLIR Type Description
key ::mlir::StringAttr string attribute
device_ordinal ::mlir::IntegerAttr 64-bit signless integer attribute
device_type ::mlir::StringAttr string attribute
Toutputs ::mlir::Attribute derived attribute

Operands:

Operand Description
dynamic_key tensor of string values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf._XlaRecvAtHostV2 (TF::_XlaRecvAtHostV2Op)

A placeholder op to receive values from a running XLA computation with support for a runtime device ordinal.

Interfaces: GetResourceInstanceInterface, TF_RecvSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}

Attributes:

Attribute MLIR Type Description
key ::mlir::StringAttr string attribute
device_type ::mlir::StringAttr string attribute
Toutputs ::mlir::Attribute derived attribute

Operands:

Operand Description
dynamic_key tensor of string values
device_ordinal tensor of 64-bit integer values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf._XlaRun (TF::_XlaRunOp)

XLA Run Op. For use by the XLA JIT only.

Executes a TensorFlow function previously compiled into a LocalExecutable by an _XlaCompile op.

Interfaces: MemoryEffectOpInterface

Attributes:

Attribute MLIR Type Description
Targs ::mlir::Attribute derived attribute
Tresults ::mlir::Attribute derived attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values
key tensor of string values

Results:

Result Description
results variadic of tensor of tf.dtype values

tf._XlaSendFromHost (TF::_XlaSendFromHostOp)

A placeholder op to send values to a running XLA computation.

Interfaces: GetResourceInstanceInterface, TF_SendSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}

Attributes:

Attribute MLIR Type Description
key ::mlir::StringAttr string attribute
device_ordinal ::mlir::IntegerAttr 64-bit signless integer attribute
device_type ::mlir::StringAttr string attribute
Tinputs ::mlir::Attribute derived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values
dynamic_key tensor of string values

tf._XlaSendFromHostV2 (TF::_XlaSendFromHostV2Op)

A placeholder op to send values to a running XLA computation with support for a runtime device ordinal.

Interfaces: GetResourceInstanceInterface, TF_SendSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}

Attributes:

Attribute MLIR Type Description
key ::mlir::StringAttr string attribute
device_type ::mlir::StringAttr string attribute
Tinputs ::mlir::Attribute derived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values
dynamic_key tensor of string values
device_ordinal tensor of 64-bit integer values

tf.Abs (TF::AbsOp)

Computes the absolute value of a tensor.

Given a tensor x, this operation returns a tensor containing the absolute value of each element in x. For example, if x is an input element and y is an output element, this operation computes \(y = |x|\).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

Results:

Result Description
y tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

tf.Acos (TF::AcosOp)

Computes acos of x element-wise.

Provided an input tensor, the tf.math.acos operation returns the inverse cosine of each element of the tensor. If y = tf.math.cos(x), then x = tf.math.acos(y).

Input range is [-1, 1] and the output has a range of [0, pi].
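
For example (a short sketch in the style of the other examples in this document):

  import tensorflow as tf
  x = tf.constant([1.0, 0.5, -1.0])
  tf.math.acos(x) ==> [0. 1.0471976 3.1415927]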

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Acosh (TF::AcoshOp)

Computes inverse hyperbolic cosine of x element-wise.

Given an input tensor, the function computes inverse hyperbolic cosine of every element. Input range is [1, inf]. It returns nan if the input lies outside the range.

x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float("inf")])
tf.math.acosh(x) ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Add (TF::AddOp)

Returns x + y element-wise.

Given two input tensors, the tf.add operation computes the sum for every element in the tensor.

Both input and output have a range (-inf, inf).
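
For example, a small sketch (broadcasting follows the op's ResultsBroadcastableShape trait):

  import tensorflow as tf
  x = tf.constant([[1, 2], [3, 4]])
  y = tf.constant([10, 20])          # broadcast against the last dimension of x
  tf.math.add(x, y) ==> [[11, 22], [13, 24]]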

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_LayoutAgnostic, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or signed integer or complex or 8-bit unsigned integer or string values
y tensor of floating-point or signed integer or complex or 8-bit unsigned integer or string values

Results:

Result Description
z tensor of floating-point or signed integer or complex or 8-bit unsigned integer or string values

tf.AddN (TF::AddNOp)

Add all input tensors element-wise.

Inputs must be of the same size and shape.

  x = [9, 7, 10]
  tf.math.add_n(x) ==> 26

Traits: AlwaysSpeculatableImplTrait, Commutative

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
N ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
inputs variadic of tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer or variant values

Results:

Result Description
sum tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer or variant values

tf.AddV2 (TF::AddV2Op)

Returns x + y element-wise.

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape, TF_CwiseBinary, TF_LayoutAgnostic, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.AdjustContrastv2 (TF::AdjustContrastv2Op)

Adjust the contrast of one or more images.

images is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as [height, width, channels]. The other dimensions only represent a collection of images, such as [batch, height, width, channels].

Contrast is adjusted independently for each channel of each image.

For each channel, the Op first computes the mean of the image pixels in the channel and then adjusts each component of each pixel to (x - mean) * contrast_factor + mean.
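
As a worked example of that formula (a sketch using an illustrative single-channel 2x2 image whose mean is 2.5):

  import tensorflow as tf
  images = tf.constant([[[[1.0], [2.0]], [[3.0], [4.0]]]])   # shape [1, 2, 2, 1]
  tf.image.adjust_contrast(images, contrast_factor=2.0)
  # each pixel becomes (x - 2.5) * 2.0 + 2.5 ==> -0.5, 1.5, 3.5, 5.5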

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
images tensor of 16-bit float or 32-bit float values
contrast_factor tensor of 32-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float values

tf.AdjustHue (TF::AdjustHueOp)

Adjust the hue of one or more images.

images is a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three.

The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A delta is then applied to all the hue values, and the result is then remapped back to the RGB colorspace.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
images tensor of 16-bit float or 32-bit float values
delta tensor of 32-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float values

tf.AdjustSaturation (TF::AdjustSaturationOp)

Adjust the saturation of one or more images.

images is a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three.

The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A scale is then applied to all the saturation values, and the result is then remapped back to the RGB colorspace.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
images tensor of 16-bit float or 32-bit float values
scale tensor of 32-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float values

tf.All (TF::AllOp)

Computes the "logical and" of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
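
For example (a sketch using tf.reduce_all, the Python entry point that typically lowers to this op):

  import tensorflow as tf
  x = tf.constant([[True, True], [False, True]])
  tf.reduce_all(x) ==> False
  tf.reduce_all(x, axis=1) ==> [True, False]
  tf.reduce_all(x, axis=1, keepdims=True) ==> [[True], [False]]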

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
keep_dims ::mlir::BoolAttr bool attribute
Tidx ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bool values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of bool values

tf.AllToAll (TF::AllToAllOp)

An Op to exchange data across TPU replicas.

On each replica, the input is split into split_count blocks along split_dimension and sent to the other replicas given group_assignment. After receiving split_count - 1 blocks from the other replicas, we concatenate the blocks along concat_dimension as the output.

For example, suppose there are 2 TPU replicas:

  replica 0 receives input: [[A, B]]
  replica 1 receives input: [[C, D]]

  group_assignment = [[0, 1]]
  concat_dimension = 0
  split_dimension = 1
  split_count = 2

  replica 0's output: [[A], [C]]
  replica 1's output: [[B], [D]]

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
concat_dimension ::mlir::IntegerAttr 64-bit signless integer attribute
split_dimension ::mlir::IntegerAttr 64-bit signless integer attribute
split_count ::mlir::IntegerAttr 64-bit signless integer attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
group_assignment tensor of 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.Angle (TF::AngleOp)

Returns the argument of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of type float that is the argument of each element in input. All elements in input must be complex numbers of the form \(a + bj\), where a is the real part and b is the imaginary part.

The argument returned by this operation is of the form \(atan2(b, a)\).

For example:

# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.math.angle(input) ==> [2.0132, 1.056]

NumPy compatibility: equivalent to np.angle.

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute
Tout ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 32/64-bit float values

tf.AnonymousIterator (TF::AnonymousIteratorOp)

A container for an iterator resource.

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Attributes:

Attribute MLIR Type Description
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.AnonymousIteratorV2 (TF::AnonymousIteratorV2Op)

A container for an iterator resource.

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Attributes:

Attribute MLIR Type Description
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values
deleter tensor of variant values

tf.AnonymousIteratorV3 (TF::AnonymousIteratorV3Op)

A container for an iterator resource.

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Attributes:

Attribute MLIR Type Description
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.AnonymousMemoryCache (TF::AnonymousMemoryCacheOp)

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Results:

Result Description
handle tensor of resource values
deleter tensor of variant values

tf.AnonymousMultiDeviceIterator (TF::AnonymousMultiDeviceIteratorOp)

A container for a multi device iterator resource.

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Attributes:

Attribute MLIR Type Description
devices ::mlir::ArrayAttr string array attribute with at least 1 elements
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values
deleter tensor of variant values

tf.AnonymousMultiDeviceIteratorV3 (TF::AnonymousMultiDeviceIteratorV3Op)

A container for a multi device iterator resource.

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Attributes:

Attribute MLIR Type Description
devices ::mlir::ArrayAttr string array attribute with at least 1 elements
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.AnonymousRandomSeedGenerator (TF::AnonymousRandomSeedGeneratorOp)

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Operands:

Operand Description
seed tensor of 64-bit integer values
seed2 tensor of 64-bit integer values

Results:

Result Description
handle tensor of resource values
deleter tensor of variant values

tf.AnonymousSeedGenerator (TF::AnonymousSeedGeneratorOp)

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Operands:

Operand Description
seed tensor of 64-bit integer values
seed2 tensor of 64-bit integer values
reshuffle tensor of bool values

Results:

Result Description
handle tensor of resource values
deleter tensor of variant values

tf.Any (TF::AnyOp)

Computes the "logical or" of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
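
For example (a sketch using tf.reduce_any, mirroring the tf.All example above):

  import tensorflow as tf
  x = tf.constant([[True, False], [False, False]])
  tf.reduce_any(x) ==> True
  tf.reduce_any(x, axis=1) ==> [True, False]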

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
keep_dims ::mlir::BoolAttr bool attribute
Tidx ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bool values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of bool values

tf.ApproximateEqual (TF::ApproximateEqualOp)

Returns the truth value of abs(x-y) < tolerance element-wise.
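
For example (a short sketch with illustrative values):

  import tensorflow as tf
  x = tf.constant([1.0, 2.0, 3.0])
  y = tf.constant([1.000001, 2.1, 3.0])
  tf.math.approximate_equal(x, y, tolerance=1e-5) ==> [True, False, True]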

Traits: AlwaysSpeculatableImplTrait, Commutative

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
tolerance ::mlir::FloatAttr 32-bit float attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of number values
y tensor of number values

Results:

Result Description
z tensor of bool values

tf.ApproxTopK (TF::ApproxTopKOp)

Returns min/max k values and their indices of the input operand in an approximate manner.

See https://arxiv.org/abs/2206.14286 for the algorithm details. This op is only optimized on TPU currently.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
k ::mlir::IntegerAttr 64-bit signless integer attribute whose minimum value is 0
reduction_dimension ::mlir::IntegerAttr 64-bit signless integer attribute
recall_target ::mlir::FloatAttr 32-bit float attribute
is_max_k ::mlir::BoolAttr bool attribute
reduction_input_size_override ::mlir::IntegerAttr 64-bit signless integer attribute
aggregate_to_topk ::mlir::BoolAttr bool attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
values tensor of bfloat16 or 16-bit float or 32-bit float values
indices tensor of 32-bit integer values

tf.ArgMax (TF::ArgMaxOp)

Returns the index with the largest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Usage:

  import tensorflow as tf
  a = [1, 10, 26.9, 2.8, 166.32, 62.3]
  b = tf.math.argmax(input = a)
  c = tf.keras.backend.eval(b)
  # c = 4
  # here a[4] = 166.32 which is the largest element of a across axis 0

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute
Tidx ::mlir::Attribute derived attribute
output_type ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bfloat16 or bool or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
dimension tensor of 16-bit integer or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of 16-bit integer or 32-bit integer or 64-bit integer or 16-bit unsigned integer values

tf.ArgMin (TF::ArgMinOp)

Returns the index with the smallest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Usage:

  import tensorflow as tf
  a = [1, 10, 26.9, 2.8, 166.32, 62.3]
  b = tf.math.argmin(input = a)
  c = tf.keras.backend.eval(b)
  # c = 0
  # here a[0] = 1 which is the smallest element of a across axis 0

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute
Tidx ::mlir::Attribute derived attribute
output_type ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bfloat16 or bool or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
dimension tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.Asin (TF::AsinOp)

Computes the trigonometric inverse sine of x element-wise.

The tf.math.asin operation returns the inverse of tf.math.sin, such that if y = tf.math.sin(x), then x = tf.math.asin(y).

For example:

# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
x = tf.constant([1.047, 0.785])
y = tf.math.sin(x) # [0.8659266, 0.7068252]

tf.math.asin(y) # [1.047, 0.785] = x

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Asinh (TF::AsinhOp)

Computes inverse hyperbolic sine of x element-wise.

Given an input tensor, this function computes the inverse hyperbolic sine of every element in the tensor. Both input and output have a range of [-inf, inf].

  x = tf.constant([-float("inf"), -2, -0.5, 1, 1.2, 200, 10000, float("inf")])
  tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Assert (TF::AssertOp)

Asserts that the given condition is true.

If condition evaluates to false, print the list of tensors in data. summarize determines how many entries of the tensors to print.
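
For example (a sketch using the eager-mode Python wrapper; the condition here is true, so nothing is raised):

  import tensorflow as tf
  x = tf.constant([2.0, 3.0])
  # Raises InvalidArgumentError and prints up to `summarize` entries of `data` if the condition is false.
  tf.debugging.Assert(tf.reduce_all(x > 0), data=[x], summarize=2)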

Attributes:

Attribute MLIR Type Description
summarize ::mlir::IntegerAttr 64-bit signless integer attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
condition tensor of bool values
data variadic of tensor of tf.dtype values

tf.Assign (TF::AssignOp)

Update 'ref' by assigning 'value' to it.

This operation outputs "ref" after the assignment is done. This makes it easier to chain operations that need to use the reset value.

Attributes:

Attribute MLIR Type Description
validate_shape ::mlir::BoolAttr bool attribute
use_locking ::mlir::BoolAttr bool attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
ref tensor of tf.dtype values
value tensor of tf.dtype values

Results:

Result Description
output_ref tensor of tf.dtype values

tf.AssignAddVariableOp (TF::AssignAddVariableOp)

Adds a value to the current value of a variable.

Any ReadVariableOp with a control dependency on this op is guaranteed to see the incremented value or a subsequent newer one.
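
For example (a sketch; calling assign_add on a resource variable in Python is what generates this op):

  import tensorflow as tf
  v = tf.Variable(10.0)
  v.assign_add(5.0)
  v.read_value() ==> 15.0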

Attributes:

Attribute MLIR Type Description
dtype ::mlir::Attribute derived attribute

Operands:

Operand Description
resource tensor of resource values
value tensor of tf.dtype values

tf.AssignSubVariableOp (TF::AssignSubVariableOp)

Subtracts a value from the current value of a variable.

Any ReadVariableOp with a control dependency on this op is guaranteed to see the decremented value or a subsequent newer one.

Attributes:

Attribute MLIR Type Description
dtype ::mlir::Attribute derived attribute

Operands:

Operand Description
resource tensor of resource values
value tensor of tf.dtype values

tf.AssignVariableOp (TF::AssignVariableOp)

Assigns a new value to a variable.

Any ReadVariableOp with a control dependency on this op is guaranteed to return this value or a subsequent newer value of the variable.

Attributes:

Attribute MLIR Type Description
validate_shape ::mlir::BoolAttr bool attribute
dtype ::mlir::Attribute derived attribute

Operands:

Operand Description
resource tensor of resource values
value tensor of tf.dtype values

tf.AsString (TF::AsStringOp)

Converts each entry in the given tensor to strings.

Supports many numeric types and boolean.

For Unicode, see the https://www.tensorflow.org/tutorials/representation/unicode tutorial.

Examples:

tf.strings.as_string([3, 2])
tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy()
array([b'3.14', b'2.72'], dtype=object)

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
precision ::mlir::IntegerAttr 64-bit signless integer attribute
scientific ::mlir::BoolAttr bool attribute
shortest ::mlir::BoolAttr bool attribute
width ::mlir::IntegerAttr 64-bit signless integer attribute
fill ::mlir::StringAttr string attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or string or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer or variant values

Results:

Result Description
output tensor of string values

tf.Atan (TF::AtanOp)

Computes the trigonometric inverse tangent of x element-wise.

The tf.math.atan operation returns the inverse of tf.math.tan, such that if y = tf.math.tan(x), then x = tf.math.atan(y).

For example:

# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
x = tf.constant([1.047, 0.785])
y = tf.math.tan(x) # [1.731261, 0.99920404]

tf.math.atan(y) # [1.047, 0.785] = x

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Atan2 (TF::Atan2Op)

Computes arctangent of y/x element-wise, respecting signs of the arguments.

This is the angle \( \theta \in [-\pi, \pi] \) such that \[ x = r \cos(\theta) \] and \[ y = r \sin(\theta) \] where \(r = \sqrt{x^2 + y^2} \).

For example:

x = [1., 1.]
y = [1., -1.]
print((tf.math.atan2(y, x) * (180 / np.pi)).numpy())
[ 45. -45.]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
y tensor of floating-point values
x tensor of floating-point values

Results:

Result Description
z tensor of floating-point values

tf.Atanh (TF::AtanhOp)

Computes inverse hyperbolic tangent of x element-wise.

Given an input tensor, this function computes inverse hyperbolic tangent for every element in the tensor. Input range is [-1,1] and output range is [-inf, inf]. If input is -1, output will be -inf and if the input is 1, output will be inf. Values outside the range will have nan as output.

  x = tf.constant([-float("inf"), -1, -0.5, 1, 0, 0.5, 10, float("inf")])
  tf.math.atanh(x) ==> [nan -inf -0.54930615 inf  0. 0.54930615 nan nan]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.AvgPool (TF::AvgPoolOp)

Performs average pooling on the input.

Each entry in output is the mean of the corresponding size ksize window in value.
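
For example (a sketch with an illustrative 4x4 single-channel input in NHWC layout):

  import tensorflow as tf
  x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
  tf.nn.avg_pool2d(x, ksize=2, strides=2, padding="VALID")
  # each output entry is the mean of a 2x2 window ==> [[2.5, 4.5], [10.5, 12.5]]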

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
ksize ::mlir::ArrayAttr 64-bit integer array attribute with at least 4 elements
strides ::mlir::ArrayAttr 64-bit integer array attribute with at least 4 elements
padding ::mlir::StringAttr string attribute whose value is SAME, or VALID
data_format ::mlir::StringAttr 'NHWC' or 'NCHW' convnet data format
T ::mlir::Attribute derived attribute

Operands:

Operand Description
value tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.AvgPool3D (TF::AvgPool3DOp)

Performs 3D average pooling on the input.

Each entry in output is the mean of the corresponding size ksize window in value.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
ksize ::mlir::ArrayAttr 64-bit integer array attribute with at least 5 elements
strides ::mlir::ArrayAttr 64-bit integer array attribute with at least 5 elements
padding ::mlir::StringAttr string attribute whose value is SAME, or VALID
data_format ::mlir::StringAttr string attribute whose value is NDHWC, or NCDHW
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.AvgPool3DGrad (TF::AvgPool3DGradOp)

Computes gradients of average pooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
ksize ::mlir::ArrayAttr 64-bit integer array attribute with at least 5 elements
strides ::mlir::ArrayAttr 64-bit integer array attribute with at least 5 elements
padding ::mlir::StringAttr string attribute whose value is SAME, or VALID
data_format ::mlir::StringAttr string attribute whose value is NDHWC, or NCDHW
T ::mlir::Attribute derived attribute

Operands:

Operand Description
orig_input_shape tensor of 32-bit integer values
grad tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.AvgPoolGrad (TF::AvgPoolGradOp)

Computes gradients of the average pooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
ksize ::mlir::ArrayAttr 64-bit integer array attribute with at least 4 elements
strides ::mlir::ArrayAttr 64-bit integer array attribute with at least 4 elements
padding ::mlir::StringAttr string attribute whose value is SAME, or VALID
data_format ::mlir::StringAttr 'NHWC' or 'NCHW' convnet data format
T ::mlir::Attribute derived attribute

Operands:

Operand Description
orig_input_shape tensor of 32-bit integer values
grad tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.BatchDatasetV2 (TF::BatchDatasetV2Op)

Creates a dataset that batches batch_size elements from input_dataset.
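
For example (a sketch using the tf.data Python API that creates this dataset op):

  import tensorflow as tf
  ds = tf.data.Dataset.range(7).batch(3, drop_remainder=False)
  list(ds.as_numpy_iterator()) ==> [array([0, 1, 2]), array([3, 4, 5]), array([6])]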

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
parallel_copy ::mlir::BoolAttr bool attribute
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements
metadata ::mlir::StringAttr string attribute

Operands:

Operand Description
input_dataset tensor of variant values
batch_size tensor of 64-bit integer values
drop_remainder tensor of bool values

Results:

Result Description
handle tensor of variant values

tf.BatchFunction (TF::BatchFunctionOp)

Batches all the input tensors to the computation done by the function.

So, for example, in the following code


  # This input will be captured.
  y = tf.placeholder_with_default(1.0, shape=[])

  @tf.Defun(tf.float32)
  def computation(a):
    return tf.matmul(a, a) + y

  b = gen_batch_ops.batch_function(
          f=computation,
          in_tensors=[a],
          captured_tensors=computation.captured_inputs,
          Tout=[o.type for o in computation.definition.signature.output_arg],
          num_batch_threads=1,
          max_batch_size=10,
          batch_timeout_micros=100000,  # 100ms
          allowed_batch_sizes=[3, 10],
          batching_queue="")

If more than one session.run call is simultaneously trying to compute b, the values of a will be gathered, non-deterministically concatenated along the first axis, and only one thread will run the computation.

Assumes that all arguments of the function are Tensors which will be batched along their first dimension.

Arguments that are captured are not batched. The session.run call that does the concatenation will use the values of the captured tensors available to it. Therefore, typical uses of captured tensors should involve values which remain unchanged across session.run calls. Inference is a good example of this.

SparseTensor is not supported. The return value of the decorated function must be a Tensor or a list/tuple of Tensors.

Traits: AlwaysSpeculatableImplTrait, AttrSizedOperandSegments

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), SymbolUserOpInterface

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
f ::mlir::SymbolRefAttr symbol reference attribute
num_batch_threads ::mlir::IntegerAttr 64-bit signless integer attribute
max_batch_size ::mlir::IntegerAttr 64-bit signless integer attribute
batch_timeout_micros ::mlir::IntegerAttr 64-bit signless integer attribute
max_enqueued_batches ::mlir::IntegerAttr 64-bit signless integer attribute
allowed_batch_sizes ::mlir::ArrayAttr 64-bit integer array attribute
container ::mlir::StringAttr string attribute
shared_name ::mlir::StringAttr string attribute
batching_queue ::mlir::StringAttr string attribute
low_priority_max_batch_size ::mlir::IntegerAttr 64-bit signless integer attribute
low_priority_batch_timeout_micros ::mlir::IntegerAttr 64-bit signless integer attribute
low_priority_allowed_batch_sizes ::mlir::ArrayAttr 64-bit integer array attribute
low_priority_max_enqueued_batches ::mlir::IntegerAttr 64-bit signless integer attribute
mixed_priority_policy ::mlir::StringAttr string attribute whose value is low_priority_padding_with_max_batch_size, or low_priority_padding_with_next_allowed_batch_size, or priority_isolation
batch_padding_policy ::mlir::StringAttr string attribute whose value is PAD_UP, or BATCH_DOWN, or MINIMIZE_TPU_COST_PER_REQUEST
enable_large_batch_splitting ::mlir::BoolAttr bool attribute
Tcaptured ::mlir::Attribute derived attribute
Tin ::mlir::Attribute derived attribute
Tout ::mlir::Attribute derived attribute

Operands:

Operand Description
in_tensors variadic of tensor of tf.dtype values
captured_tensors variadic of tensor of tf.dtype values

Results:

Result Description
out_tensors variadic of tensor of tf.dtype values

tf.BatchMatMul (TF::BatchMatMulOp)

Multiplies slices of two tensors in batches.

Multiplies all slices of Tensor x and y (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the adj_x or adj_y flag to True, which are by default False.

The input tensors x and y are 2-D or higher with shape [..., r_x, c_x] and [..., r_y, c_y].

The output tensor is 2-D or higher with shape [..., r_o, c_o], where:

r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y

It is computed as:

output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])
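
For example, a short sketch of the shape rule (illustrative shapes; tf.linalg.matmul handles the batched case):

  import tensorflow as tf
  x = tf.random.normal([5, 2, 3])   # batch of 5 matrices, r_x = 2, c_x = 3
  y = tf.random.normal([5, 3, 4])   # batch of 5 matrices, r_y = 3, c_y = 4
  tf.linalg.matmul(x, y).shape ==> [5, 2, 4]   # r_o = r_x, c_o = c_y when adj_x = adj_y = False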

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
adj_x::mlir::BoolAttrbool attribute
adj_y::mlir::BoolAttrbool attribute
grad_x::mlir::BoolAttrbool attribute
grad_y::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.BatchMatMulV2 (TF::BatchMatMulV2Op)

Multiplies slices of two tensors in batches.

Multiplies all slices of Tensor x and y (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the adj_x or adj_y flag to True; these flags default to False.

The input tensors x and y are 2-D or higher with shape [..., r_x, c_x] and [..., r_y, c_y].

The output tensor is 2-D or higher with shape [..., r_o, c_o], where:

r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y

It is computed as:

output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
adj_x::mlir::BoolAttrbool attribute
adj_y::mlir::BoolAttrbool attribute
grad_x::mlir::BoolAttrbool attribute
grad_y::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
output tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.BatchMatMulV3 (TF::BatchMatMulV3Op)

Multiplies slices of two tensors in batches.

Multiplies all slices of Tensor x and y (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the adj_x or adj_y flag to True; these flags default to False.

The input tensors x and y are 2-D or higher with shape [..., r_x, c_x] and [..., r_y, c_y].

The output tensor is 2-D or higher with shape [..., r_o, c_o], where:

r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y

It is computed as:

output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
adj_x::mlir::BoolAttrbool attribute
adj_y::mlir::BoolAttrbool attribute
grad_x::mlir::BoolAttrbool attribute
grad_y::mlir::BoolAttrbool attribute
Ta::mlir::Attributederived attribute
Tb::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit unsigned integer values

Results:

Result Description
output tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer values

tf.BatchNormWithGlobalNormalization (TF::BatchNormWithGlobalNormalizationOp)

Batch normalization.

This op is deprecated. Prefer tf.nn.batch_normalization.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
variance_epsilon::mlir::FloatAttr32-bit float attribute
scale_after_normalization::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
m tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
v tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
beta tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
gamma tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
result tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.BatchToSpace (TF::BatchToSpaceOp)

BatchToSpace for 4-D tensors of type T.

This is a legacy version of the more general BatchToSpaceND.

Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the batch dimension are moved in spatial blocks to the height and width dimensions, followed by cropping along the height and width dimensions.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
block_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 2
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
crops tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.BatchToSpaceND (TF::BatchToSpaceNDOp)

BatchToSpace for N-D tensors of type T.

This operation reshapes the "batch" dimension 0 into M + 1 dimensions of shape block_shape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions [1, ..., M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to crops to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.
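
A minimal sketch of the rearrangement, using the public tf.batch_to_space wrapper with values chosen only for illustration:

```
import tensorflow as tf

# Four batch elements of shape [1, 1, 1] are interleaved into a 2x2 spatial grid.
x = tf.reshape(tf.range(4), [4, 1, 1, 1])
y = tf.batch_to_space(x, block_shape=[2, 2], crops=[[0, 0], [0, 0]])
print(y.shape)  # (1, 2, 2, 1)
```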

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tblock_shape::mlir::Attributederived attribute
Tcrops::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
block_shape tensor of 32/64-bit signed integer values
crops tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.BesselI0e (TF::BesselI0eOp)

Computes the Bessel i0e function of x element-wise.

Exponentially scaled modified Bessel function of order 0, defined as bessel_i0e(x) = exp(-abs(x)) * bessel_i0(x).

This function is faster and numerically more stable than bessel_i0(x).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.BesselI1e (TF::BesselI1eOp)

Computes the Bessel i1e function of x element-wise.

Exponentially scaled modified Bessel function of order 1, defined as bessel_i1e(x) = exp(-abs(x)) * bessel_i1(x).

This function is faster and numerically more stable than bessel_i1(x).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.Betainc (TF::BetaincOp)

Computes the regularized incomplete beta integral \(I_x(a, b)\).

The regularized incomplete beta integral is defined as:

\(I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\)

where

\(B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt\)

is the incomplete beta function and \(B(a, b)\) is the complete beta function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of 32/64-bit float values
b tensor of 32/64-bit float values
x tensor of 32/64-bit float values

Results:

Result Description
z tensor of 32/64-bit float values

tf.BiasAdd (TF::BiasAddOp)

Adds bias to value.

This is a special case of tf.add where bias is restricted to be 1-D. Broadcasting is supported, so value may have any number of dimensions.
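
A minimal sketch using the public tf.nn.bias_add wrapper for this op; the shapes are chosen only for illustration:

```
import tensorflow as tf

value = tf.zeros([2, 4, 3])          # e.g. [batch, width, channels]
bias = tf.constant([0.1, 0.2, 0.3])  # 1-D; length matches the channel dimension
out = tf.nn.bias_add(value, bias)    # broadcast along the last axis for NHWC
print(out.shape)  # (2, 4, 3)
```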

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
value tensor of number values
bias tensor of number values

Results:

Result Description
output tensor of number values

tf.BiasAddGrad (TF::BiasAddGradOp)

The backward operation for "BiasAdd" on the "bias" tensor.

It accumulates all the values from out_backprop into the feature dimension. For NHWC data format, the feature dimension is the last. For NCHW data format, the feature dimension is the third-to-last.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
out_backprop tensor of number values

Results:

Result Description
output tensor of number values

tf.BiasAddV1 (TF::BiasAddV1Op)

Adds bias to value.

This is a deprecated version of BiasAdd and will soon be removed.

This is a special case of tf.add where bias is restricted to be 1-D. Broadcasting is supported, so value may have any number of dimensions.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
value tensor of number values
bias tensor of number values

Results:

Result Description
output tensor of number values

tf.Bincount (TF::BincountOp)

Counts the number of occurrences of each value in an integer array.

Outputs a vector with length size and the same dtype as weights. If weights are empty, then index i stores the number of times the value i is counted in arr. If weights are non-empty, then index i stores the sum of the value in weights at each index where the corresponding value in arr is i.

Values in arr outside of the range [0, size) are ignored.
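
A minimal sketch of the weighted and unweighted behavior, using the public tf.math.bincount wrapper:

```
import tensorflow as tf

arr = tf.constant([1, 1, 2, 3, 3, 3])
print(tf.math.bincount(arr))                   # [0 2 1 3]

weights = tf.constant([0.5, 0.5, 1.0, 1.0, 1.0, 1.0])
print(tf.math.bincount(arr, weights=weights))  # [0. 1. 1. 3.]
```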

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
arr tensor of 32-bit integer values
size tensor of 32-bit integer values
weights tensor of 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
bins tensor of 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.Bitcast (TF::BitcastOp)

Bitcasts a tensor from one type to another without copying data.

Given a tensor input, this operation returns a tensor that has the same buffer data as input with datatype type.

If the input datatype T is larger than the output datatype type then the shape changes from [...] to [..., sizeof(T)/sizeof(type)].

If T is smaller than type, the operator requires that the rightmost dimension be equal to sizeof(type)/sizeof(T). The shape then goes from [..., sizeof(type)/sizeof(T)] to [...].

tf.bitcast() and tf.cast() work differently when a real dtype is cast to a complex dtype (e.g. tf.complex64 or tf.complex128): tf.cast() sets the imaginary part to 0, while tf.bitcast() raises an error. For example,

Example 1:

a = [1., 2., 3.]
equality_bitcast = tf.bitcast(a, tf.complex128)
Traceback (most recent call last):
...
InvalidArgumentError: Cannot bitcast from 1 to 18 [Op:Bitcast]
equality_cast = tf.cast(a, tf.complex128)
print(equality_cast)
tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128)

Example 2:

tf.bitcast(tf.constant(0xffffffff, dtype=tf.uint32), tf.uint8)

Example 3:

x = [1., 2., 3.]
y = [0., 2., 3.]
equality = tf.equal(x, y)
equality_cast = tf.cast(equality, tf.float32)
equality_bitcast = tf.bitcast(equality_cast, tf.uint8)
print(equality)
tf.Tensor([False  True  True], shape=(3,), dtype=bool)
print(equality_cast)
tf.Tensor([0. 1. 1.], shape=(3,), dtype=float32)
print(equality_bitcast)
tf.Tensor(
[[  0   0   0   0]
 [  0   0 128  63]
 [  0   0 128  63]], shape=(3, 4), dtype=uint8)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
type::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of number values

Results:

Result Description
output tensor of number values

tf.BitwiseAnd (TF::BitwiseAndOp)

Elementwise computes the bitwise AND of x and y.

The result will have those bits set that are set in both x and y. The computation is performed on the underlying representations of x and y.

For example:

import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
              tf.uint8, tf.uint16, tf.uint32, tf.uint64]

for dtype in dtype_list:
  lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
  exp = tf.constant([0, 0, 3, 10], dtype=tf.float32)

  res = bitwise_ops.bitwise_and(lhs, rhs)
  tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer values
y tensor of integer values

Results:

Result Description
z tensor of integer values

tf.BitwiseOr (TF::BitwiseOrOp)

Elementwise computes the bitwise OR of x and y.

The result will have those bits set that are set in x, y, or both. The computation is performed on the underlying representations of x and y.

For example:

import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
              tf.uint8, tf.uint16, tf.uint32, tf.uint64]

for dtype in dtype_list:
  lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
  exp = tf.constant([5, 5, 7, 15], dtype=tf.float32)

  res = bitwise_ops.bitwise_or(lhs, rhs)
  tf.assert_equal(tf.cast(res,  tf.float32), exp)  # TRUE

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer values
y tensor of integer values

Results:

Result Description
z tensor of integer values

tf.BitwiseXor (TF::BitwiseXorOp)

Elementwise computes the bitwise XOR of x and y.

The result will have those bits set that are different in x and y. The computation is performed on the underlying representations of x and y.

For example:

import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
              tf.uint8, tf.uint16, tf.uint32, tf.uint64]

for dtype in dtype_list:
  lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
  exp = tf.constant([5, 5, 4, 5],  dtype=tf.float32)

  res = bitwise_ops.bitwise_xor(lhs, rhs)
  tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer values
y tensor of integer values

Results:

Result Description
z tensor of integer values

tf.BoostedTreesBucketize (TF::BoostedTreesBucketizeOp)

Bucketize each feature based on bucket boundaries.

An op that returns a list of integer tensors, where each tensor represents the bucketized values for a single feature.

Traits: AlwaysSpeculatableImplTrait, SameVariadicOperandSize

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_features::mlir::Attributederived attribute

Operands:

Operand Description
float_values variadic of tensor of 32-bit float values
bucket_boundaries variadic of tensor of 32-bit float values

Results:

Result Description
buckets variadic of tensor of 32-bit integer values

tf.BroadcastArgs (TF::BroadcastArgsOp)

Return the shape of s0 op s1 with broadcast.

Given s0 and s1, tensors that represent shapes, compute r0, the broadcasted shape. s0, s1 and r0 are all integer vectors.
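
A minimal sketch using tf.broadcast_dynamic_shape, the usual public entry point for this op:

```
import tensorflow as tf

s0 = tf.constant([2, 1, 3])
s1 = tf.constant([5, 3])
print(tf.broadcast_dynamic_shape(s0, s1))  # [2 5 3]
```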

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
s0 tensor of 32/64-bit signed integer values
s1 tensor of 32/64-bit signed integer values

Results:

Result Description
r0 tensor of 32/64-bit signed integer values

tf.BroadcastGradientArgs (TF::BroadcastGradientArgsOp)

Return the reduction indices for computing gradients of s0 op s1 with broadcast.

This is typically used by gradient computations for a broadcasting operation.

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultElementType

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
s0 tensor of 32/64-bit signed integer values
s1 tensor of 32/64-bit signed integer values

Results:

Result Description
r0 tensor of 32/64-bit signed integer values
r1 tensor of 32/64-bit signed integer values

tf.BroadcastTo (TF::BroadcastToOp)

Broadcast an array for a compatible shape.

Broadcasting is the process of making arrays have compatible shapes for arithmetic operations. Two shapes are compatible if, for each dimension pair, they are either equal or one of them is one.

For example:

x = tf.constant([[1, 2, 3]])   # Shape (1, 3)
y = tf.broadcast_to(x, [2, 3])
print(y)
tf.Tensor(
[[1 2 3]
 [1 2 3]], shape=(2, 3), dtype=int32)

In the above example, the input Tensor with shape [1, 3] is broadcast to an output Tensor with shape [2, 3].

When broadcasting, if a tensor has fewer axes than necessary its shape is padded on the left with ones. So this gives the same result as the previous example:

x = tf.constant([1, 2, 3])   # Shape (3,)
y = tf.broadcast_to(x, [2, 3])

When doing broadcasted operations such as multiplying a tensor by a scalar, broadcasting (usually) confers some time or space benefit, as the broadcasted tensor is never materialized.

However, broadcast_to does not carry with it any such benefits. The newly created tensor takes the full memory of the broadcasted shape. (In a graph context, broadcast_to might be fused into a subsequent operation and then be optimized away, however.)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
shape tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.Bucketize (TF::BucketizeOp)

Bucketizes 'input' based on 'boundaries'.

For example, if the inputs are

boundaries = [0, 10, 100]
input = [[-5, 10000]
         [150, 10]
         [5, 100]]

then the output will be

output = [[0, 3]
          [3, 2]
          [1, 3]]

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
boundaries::mlir::ArrayAttr32-bit float array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of 32-bit integer values

tf.CacheDatasetV2 (TF::CacheDatasetV2Op)

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute

Operands:

Operand Description
input_dataset tensor of variant values
filename tensor of string values
cache tensor of resource values

Results:

Result Description
handle tensor of variant values

tf.Case (TF::CaseOp)

An n-way switch statement which calls a single branch function.

An n-way switch statement, implementing the following:

```
switch (branch_index) {
  case 0:
    output = branches[0](input);
    break;
  case 1:
    output = branches[1](input);
    break;
  ...
  case [[nbranches-1]]:
  default:
    output = branches[nbranches-1](input);
    break;
}
```
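
A minimal sketch of the same dispatch through the public tf.switch_case wrapper; the branch functions below are placeholders:

```
import tensorflow as tf

def f0(): return tf.constant(17)
def f1(): return tf.constant(31)

branch_index = tf.constant(1)
out = tf.switch_case(branch_index, branch_fns={0: f0, 1: f1}, default=f0)
print(out)  # 31
```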

Interfaces: SymbolUserOpInterface

Attributes:

AttributeMLIR TypeDescription
branches::mlir::ArrayAttrsymbol ref array attribute with at least 1 elements
is_stateless::mlir::BoolAttrbool attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute
output_shapes::mlir::Attributederived attribute

Operands:

Operand Description
branch_index tensor of 32-bit signless integer values
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.CaseRegion (TF::CaseRegionOp)

An n-way switch statement which calls a single branch function.

An n-way switch statement, implementing the following:

```
switch (branch_index) {
  case 0:
    output = branches[0](input);
    break;
  case 1:
    output = branches[1](input);
    break;
  ...
  case [[nbranches-1]]:
  default:
    output = branches[nbranches-1](input);
    break;
}
```

Traits: NoRegionArguments, SingleBlockImplicitTerminator<YieldOp>, SingleBlock

Attributes:

AttributeMLIR TypeDescription
is_stateless::mlir::BoolAttrbool attribute

Operands:

Operand Description
branch_index tensor of 32-bit signless integer values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.Cast (TF::CastOp)

Cast x of type SrcT to y of DstT.

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Truncate::mlir::BoolAttrbool attribute
SrcT::mlir::Attributederived attribute
DstT::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values

Results:

Result Description
y tensor of tf.dtype values

tf.Ceil (TF::CeilOp)

Returns element-wise smallest integer not less than x.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.CheckNumerics (TF::CheckNumericsOp)

Checks a tensor for NaN and Inf values.

When run, reports an InvalidArgument error if tensor has any values that are not a number (NaN) or infinity (Inf). Otherwise, returns the input tensor.

Example usage:

import tensorflow as tf
import numpy as np

a = tf.Variable(1.0)
tf.debugging.check_numerics(a, message='')

b = tf.Variable(np.nan)
try:
  tf.debugging.check_numerics(b, message='Checking b')
except Exception as e:
  assert "Checking b : Tensor had NaN values" in e.message

c = tf.Variable(np.inf)
try:
  tf.debugging.check_numerics(c, message='Checking c')
except Exception as e:
  assert "Checking c : Tensor had Inf values" in e.message

Traits: InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_NoConstantFold

Interfaces: InferShapedTypeOpInterface, InferTypeOpInterface, TF_MustExecute (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
message::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
tensor tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Cholesky (TF::CholeskyOp)

Computes the Cholesky decomposition of one or more square matrices.

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices.

The input has to be symmetric and positive definite. Only the lower-triangular part of the input will be used for this operation. The upper-triangular part will not be read.

The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices [..., :, :].
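
A minimal sketch using the public tf.linalg.cholesky wrapper; the 2x2 matrix below is an arbitrary symmetric positive-definite example:

```
import tensorflow as tf

a = tf.constant([[4.0, 2.0],
                 [2.0, 3.0]])
l = tf.linalg.cholesky(a)                      # lower-triangular factor
print(tf.linalg.matmul(l, l, adjoint_b=True))  # reconstructs a
```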

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

tf.ClipByValue (TF::ClipByValueOp)

Clips tensor values to a specified min and max.

Given a tensor x, this operation returns a tensor of the same type and shape as x with its values clipped to clip_value_min and clip_value_max. Any values less than clip_value_min are set to clip_value_min. Any values greater than clip_value_max are set to clip_value_max.
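
A minimal sketch using the public tf.clip_by_value wrapper for this op:

```
import tensorflow as tf

x = tf.constant([-2.0, 0.5, 3.0])
print(tf.clip_by_value(x, clip_value_min=0.0, clip_value_max=1.0))
# [0.  0.5 1. ]
```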

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
clip_value_min tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
clip_value_max tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
output tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.CloseSummaryWriter (TF::CloseSummaryWriterOp)

Flushes and closes the summary writer.

Also removes it from the resource manager. To reopen, use another CreateSummaryFileWriter op.

writer: A handle to the summary writer resource.

Operands:

Operand Description
writer tensor of resource values

tf.CollateTPUEmbeddingMemory (TF::CollateTPUEmbeddingMemoryOp)

An op that merges the string-encoded memory config protos from all hosts.

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute

Operands:

Operand Description
memory_configs variadic of tensor of string values

Results:

Result Description
merged_memory_config tensor of string values

tf.CollectiveAllToAllV2 (TF::CollectiveAllToAllV2Op)

Mutually exchanges multiple tensors of identical type and shape.

is_stateless means each op does not need control dependencies to other collective ops. In this case, keys that are unique at runtime (e.g. instance_key) should be used to distinguish collective groups.

Interfaces: GetResourceInstanceInterface, TF_CollectiveReduceOrderingEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::CollectiveReduceOrdering}

Attributes:

AttributeMLIR TypeDescription
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
is_stateless::mlir::BoolAttrbool attribute
Nordering_token::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point or 32/64-bit signed integer values
group_size tensor of 32-bit integer values
group_key tensor of 32-bit integer values
instance_key tensor of 32-bit integer values
ordering_token variadic of tensor of resource values

Results:

Result Description
data tensor of floating-point or 32/64-bit signed integer values

tf.CollectiveAssignGroupV2 (TF::CollectiveAssignGroupV2Op)

Assign group keys based on group assignment.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
group_assignment tensor of 32-bit integer values
device_index tensor of 32-bit integer values
base_key tensor of 32-bit integer values

Results:

Result Description
group_size tensor of 32-bit integer values
group_key tensor of 32-bit integer values

tf.CollectiveBcastRecv (TF::CollectiveBcastRecvOp)

Receives a tensor value broadcast from another device.

Attributes:

AttributeMLIR TypeDescription
group_size::mlir::IntegerAttr64-bit signless integer attribute
group_key::mlir::IntegerAttr64-bit signless integer attribute
instance_key::mlir::IntegerAttr64-bit signless integer attribute
shape::mlir::AttributeTensorFlow shape attribute
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Results:

Result Description
data tensor of bool or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.CollectiveBcastSend (TF::CollectiveBcastSendOp)

Broadcasts a tensor value to one or more other devices.

Attributes:

AttributeMLIR TypeDescription
group_size::mlir::IntegerAttr64-bit signless integer attribute
group_key::mlir::IntegerAttr64-bit signless integer attribute
instance_key::mlir::IntegerAttr64-bit signless integer attribute
shape::mlir::AttributeTensorFlow shape attribute
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bool or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
data tensor of bool or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.CollectiveGather (TF::CollectiveGatherOp)

Mutually accumulates multiple tensors of identical type and shape.

Attributes:

AttributeMLIR TypeDescription
group_size::mlir::IntegerAttr64-bit signless integer attribute
group_key::mlir::IntegerAttr64-bit signless integer attribute
instance_key::mlir::IntegerAttr64-bit signless integer attribute
shape::mlir::AttributeTensorFlow shape attribute
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
data tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.CollectiveGatherV2 (TF::CollectiveGatherV2Op)

Mutually accumulates multiple tensors of identical type and shape.

is_stateless means each op does not need control dependencies to other collective ops. In this case, keys that are unique at runtime (e.g. instance_key) should be used to distinguish collective groups.

Interfaces: GetResourceInstanceInterface, TF_CollectiveReduceOrderingEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::CollectiveReduceOrdering}

Attributes:

AttributeMLIR TypeDescription
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
is_stateless::mlir::BoolAttrbool attribute
Nordering_token::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values
group_size tensor of 32-bit integer values
group_key tensor of 32-bit integer values
instance_key tensor of 32-bit integer values
ordering_token variadic of tensor of resource values

Results:

Result Description
data tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.CollectivePermute (TF::CollectivePermuteOp)

An Op to permute tensors across replicated TPU instances.

Each instance supplies its own input.

For example, suppose there are 4 TPU instances: [A, B, C, D]. Passing source_target_pairs=[[0,1],[1,2],[2,3],[3,0]] gets the outputs: [D, A, B, C].

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of number values
source_target_pairs tensor of 32-bit integer values

Results:

Result Description
output tensor of number values

tf.CollectiveReduce (TF::CollectiveReduceOp)

Mutually reduces multiple tensors of identical type and shape.

Traits: InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: InferShapedTypeOpInterface, InferTypeOpInterface

Attributes:

AttributeMLIR TypeDescription
group_size::mlir::IntegerAttr64-bit signless integer attribute
group_key::mlir::IntegerAttr64-bit signless integer attribute
instance_key::mlir::IntegerAttr64-bit signless integer attribute
merge_op::mlir::StringAttrstring attribute whose value is Min, or Max, or Mul, or Add
final_op::mlir::StringAttrstring attribute whose value is Id, or Div
subdiv_offsets::mlir::ArrayAttr64-bit integer array attribute
wait_for::mlir::ArrayAttr64-bit integer array attribute
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point or 32/64-bit signed integer values

Results:

Result Description
data tensor of floating-point or 32/64-bit signed integer values

tf.CollectiveReduceScatterV2 (TF::CollectiveReduceScatterV2Op)

Mutually reduces multiple tensors of identical type and shape and scatters the result.

is_stateless means each op does not need control dependencies to other collective ops. In this case, keys that are unique at runtime (e.g. instance_key) should be used to distinguish collective groups.

Interfaces: GetResourceInstanceInterface, TF_CollectiveReduceOrderingEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::CollectiveReduceOrdering}

Attributes:

AttributeMLIR TypeDescription
merge_op::mlir::StringAttrstring attribute whose value is Min, or Max, or Mul, or Add
final_op::mlir::StringAttrstring attribute whose value is Id, or Div
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
is_stateless::mlir::BoolAttrbool attribute
max_subdivs_per_device::mlir::IntegerAttr64-bit signless integer attribute
Nordering_token::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point or 32/64-bit signed integer values
group_size tensor of 32-bit integer values
group_key tensor of 32-bit integer values
instance_key tensor of 32-bit integer values
ordering_token variadic of tensor of resource values

Results:

Result Description
data tensor of floating-point or 32/64-bit signed integer values

tf.CollectiveReduceV2 (TF::CollectiveReduceV2Op)

Mutually reduces multiple tensors of identical type and shape.

is_stateless means each op does not need control dependencies to other collective ops. In this case, keys that are unique at runtime (e.g. instance_key) should be used to distinguish collective groups.

Interfaces: GetResourceInstanceInterface, TF_CollectiveReduceOrderingEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::CollectiveReduceOrdering}

Attributes:

AttributeMLIR TypeDescription
merge_op::mlir::StringAttrstring attribute whose value is Min, or Max, or Mul, or Add
final_op::mlir::StringAttrstring attribute whose value is Id, or Div
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
is_stateless::mlir::BoolAttrbool attribute
max_subdivs_per_device::mlir::IntegerAttr64-bit signless integer attribute
Nordering_token::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point or 32/64-bit signed integer values
group_size tensor of 32-bit integer values
group_key tensor of 32-bit integer values
instance_key tensor of 32-bit integer values
ordering_token variadic of tensor of resource values

Results:

Result Description
data tensor of floating-point or 32/64-bit signed integer values

tf.Complex (TF::ComplexOp)

Converts two real numbers to a complex number.

Given a tensor real representing the real part of a complex number, and a tensor imag representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form \(a + bj\), where a represents the real part and b represents the imaginary part.

The input tensors real and imag must have the same shape.

For example:

# tensor 'real' is [2.25, 3.25]
# tensor `imag` is [4.75, 5.75]
tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
real tensor of 32/64-bit float values
imag tensor of 32/64-bit float values

Results:

Result Description
out tensor of 128-bit complex or 64-bit complex values

tf.ComplexAbs (TF::ComplexAbsOp)

Computes the complex absolute value of a tensor.

Given a tensor x of complex numbers, this operation returns a tensor of type float or double that is the absolute value of each element in x. All elements in x must be complex numbers of the form \(a + bj\). The absolute value is computed as \( \sqrt{a^2 + b^2}\).

For example:

x = tf.complex(3.0, 4.0)
print((tf.raw_ops.ComplexAbs(x=x, Tout=tf.dtypes.float32, name=None)).numpy())
5.0

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
y tensor of 32/64-bit float values

tf.Concat (TF::ConcatOp)

Concatenates tensors along one dimension.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
concat_dim tensor of 32-bit integer values
values variadic of tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.ConcatOffset (TF::ConcatOffsetOp)

Computes offsets of concat inputs within its output.

For example:

x = [2, 2, 7]
y = [2, 3, 7]
z = [2, 9, 7]
offsets = concat_offset(1, [x, y, z])
[[a.item() for a in list(off.numpy())] for off in offsets]
[[0, 0, 0], [0, 2, 0], [0, 5, 0]]

This is typically used by gradient computations for a concat operation.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute
shape_type::mlir::Attributederived attribute

Operands:

Operand Description
concat_dim tensor of 32-bit integer values
shape variadic of tensor of 32/64-bit signed integer values

Results:

Result Description
offset variadic of tensor of 32/64-bit signed integer values

tf.ConcatV2 (TF::ConcatV2Op)

Concatenates tensors along one dimension.
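
A minimal sketch using the public tf.concat wrapper, which is typically lowered to this op:

```
import tensorflow as tf

t1 = tf.constant([[1, 2], [3, 4]])
t2 = tf.constant([[5, 6], [7, 8]])
print(tf.concat([t1, t2], axis=0).shape)  # (4, 2)
print(tf.concat([t1, t2], axis=1).shape)  # (2, 4)
```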

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
values variadic of tensor of tf.dtype values
axis tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.ConfigureAndInitializeGlobalTPU (TF::ConfigureAndInitializeGlobalTPUOp)

An op that initializes the TPU system in a multi-client setup.

Initializes the global TPU system for multi-client execution.

This op does the work of both ConfigureDistributedTpuOp and InitializeHostForDistributedTpuOp, and outputs the latter's result.

Results:

Result Description
output tensor of 32-bit integer values

tf.ConfigureDistributedTPU (TF::ConfigureDistributedTPUOp)

Sets up the centralized structures for a distributed TPU system.

Attributes:

AttributeMLIR TypeDescription
embedding_config::mlir::StringAttrstring attribute
tpu_embedding_config::mlir::StringAttrstring attribute
is_global_init::mlir::BoolAttrbool attribute
enable_whole_mesh_compilations::mlir::BoolAttrbool attribute
compilation_failure_closes_chips::mlir::BoolAttrbool attribute
tpu_cancellation_closes_chips::mlir::IntegerAttr64-bit signless integer attribute

Results:

Result Description
topology tensor of string values

tf.ConfigureTPUEmbedding (TF::ConfigureTPUEmbeddingOp)

Sets up TPUEmbedding in a distributed TPU system.

Attributes:

AttributeMLIR TypeDescription
config::mlir::StringAttrstring attribute

tf.ConfigureTPUEmbeddingHost (TF::ConfigureTPUEmbeddingHostOp)

An op that configures the TPUEmbedding software on a host.

Attributes:

AttributeMLIR TypeDescription
config::mlir::StringAttrstring attribute

Operands:

Operand Description
common_config tensor of string values
memory_config tensor of string values

Results:

Result Description
network_config tensor of string values

tf.ConfigureTPUEmbeddingMemory (TF::ConfigureTPUEmbeddingMemoryOp)

An op that configures the TPUEmbedding software on a host.

Operands:

Operand Description
common_config tensor of string values

Results:

Result Description
memory_config tensor of string values

tf.Conj (TF::ConjOp)

Returns the complex conjugate of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in input. The complex numbers in input must be of the form \(a + bj\), where a is the real part and b is the imaginary part.

The complex conjugate returned by this operation is of the form \(a - bj\).

For example:

# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Involution

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex or variant values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex or variant values

tf.ConjugateTranspose (TF::ConjugateTransposeOp)

Shuffle dimensions of x according to a permutation and conjugate the result.

The output y has the same rank as x. The shapes of x and y satisfy:

y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1]
y[i,j,k,...,s,t,u] == conj(x[perm[i], perm[j], perm[k],...,perm[s], perm[t], perm[u]])
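
A minimal sketch calling the op directly through tf.raw_ops.ConjugateTranspose:

```
import tensorflow as tf

x = tf.constant([[1 + 1j, 2 + 2j],
                 [3 + 3j, 4 + 4j]], dtype=tf.complex64)
y = tf.raw_ops.ConjugateTranspose(x=x, perm=[1, 0])
print(y)  # [[1-1j, 3-3j], [2-2j, 4-4j]]
```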

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tperm::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
perm tensor of 32/64-bit signed integer values

Results:

Result Description
y tensor of tf.dtype values

tf.ConnectTPUEmbeddingHosts (TF::ConnectTPUEmbeddingHostsOp)

An op that sets up communication between TPUEmbedding host software instances

after ConfigureTPUEmbeddingHost has been called on each host.

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute

Operands:

Operand Description
network_configs variadic of tensor of string values

tf.Const (TF::ConstOp)

Constant tensor op

Traits: AlwaysSpeculatableImplTrait, ConstantLike

Interfaces: ConditionallySpeculatable, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface), OpAsmOpInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
value::mlir::ElementsAttrconstant vector/tensor attribute
dtype::mlir::Attributederived attribute

Results:

Result Description
output tensor of tf.dtype values

tf.Conv (TF::ConvOp)

Computes an N-D convolution given (N+1+batch_dims)-D input and (N+2)-D filter tensors.

General function for computing an N-D convolution. It is required that 1 <= N <= 3.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttrstring attribute whose value is CHANNELS_FIRST, or CHANNELS_LAST
dilations::mlir::ArrayAttr64-bit integer array attribute
batch_dims::mlir::IntegerAttr64-bit signless integer attribute
groups::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values
filter tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

tf.Conv2D (TF::Conv2DOp)

Computes a 2-D convolution given 4-D input and filter tensors.

Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels], this op performs the following:

  1. Flattens the filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels].
  2. Extracts image patches from the input tensor to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels].
  3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] =
    sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *
                    filter[di, dj, q, k]

Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].
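
A minimal sketch using the public tf.nn.conv2d wrapper with NHWC data; the shapes are chosen only for illustration:

```
import tensorflow as tf

x = tf.random.normal([1, 28, 28, 3])  # [batch, in_height, in_width, in_channels]
w = tf.random.normal([5, 5, 3, 8])    # [filter_height, filter_width, in_channels, out_channels]
y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
print(y.shape)  # (1, 28, 28, 8)
```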

Traits: AlwaysSpeculatableImplTrait, InferTensorType

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface), TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
use_cudnn_on_gpu::mlir::BoolAttrbool attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values
filter tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

tf.Conv2DBackpropFilter (TF::Conv2DBackpropFilterOp)

Computes the gradients of convolution with respect to the filter.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
use_cudnn_on_gpu::mlir::BoolAttrbool attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter_sizes tensor of 32-bit integer values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Conv2DBackpropFilterV2 (TF::Conv2DBackpropFilterV2Op)

Computes the gradients of convolution with respect to the filter.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
use_cudnn_on_gpu::mlir::BoolAttrbool attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter tensor of floating-point values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Conv2DBackpropInput (TF::Conv2DBackpropInputOp)

Computes the gradients of convolution with respect to the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
use_cudnn_on_gpu::mlir::BoolAttrbool attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input_sizes tensor of 32-bit integer values
filter tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values
out_backprop tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

tf.Conv2DBackpropInputV2 (TF::Conv2DBackpropInputV2Op)

Computes the gradients of convolution with respect to the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
use_cudnn_on_gpu::mlir::BoolAttrbool attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values
filter tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values
out_backprop tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

tf.Conv3D (TF::Conv3DOp)

Computes a 3-D convolution given 5-D input and filter tensors.

In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product.

Our Conv3D implements a form of cross-correlation.
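
As a hedged usage sketch, the high-level wrapper tf.nn.conv3d (which lowers to this op) takes 5-D NDHWC input and a 5-D filter; the sizes below are hypothetical:

  import tensorflow as tf

  # Hypothetical 5-D input: batch=1, depth=8, height=8, width=8, channels=1 (NDHWC).
  x = tf.random.normal([1, 8, 8, 8, 1])
  # 3x3x3 filter mapping 1 input channel to 4 output channels.
  w = tf.random.normal([3, 3, 3, 1, 4])

  y = tf.nn.conv3d(x, w, strides=[1, 1, 1, 1, 1], padding="SAME")
  print(y.shape)  # (1, 8, 8, 8, 4)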

Traits: AlwaysSpeculatableImplTrait, InferTensorType

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Conv3DBackpropFilter (TF::Conv3DBackpropFilterOp)

Computes the gradients of 3-D convolution with respect to the filter.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float values
filter tensor of 16-bit float or 32-bit float or 64-bit float values
out_backprop tensor of 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float values

tf.Conv3DBackpropFilterV2 (TF::Conv3DBackpropFilterV2Op)

Computes the gradients of 3-D convolution with respect to the filter.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter_sizes tensor of 32-bit integer values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Conv3DBackpropInput (TF::Conv3DBackpropInputOp)

Computes the gradients of 3-D convolution with respect to the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float values
filter tensor of 16-bit float or 32-bit float or 64-bit float values
out_backprop tensor of 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float values

tf.Conv3DBackpropInputV2 (TF::Conv3DBackpropInputV2Op)

Computes the gradients of 3-D convolution with respect to the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute
Tshape::mlir::Attributederived attribute

Operands:

Operand Description
input_sizes tensor of 32/64-bit signed integer values
filter tensor of floating-point values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.ConvertToCooTensor (TF::ConvertToCooTensorOp)

Op that converts tensors into COO format.

This op converts dense, sparse, or ragged tensors into the standard COO tensor format, which consists of three 1-D tensors.
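
As a rough illustration of the COO layout, here is a pure-Python sketch that does not call the op; the sample values and the defaulted per-lookup weight of 1.0 are assumptions:

  # Two samples with two embedding lookups each, in dense form.
  dense_ids = [[3, 7], [5, 2]]

  row_ids, col_ids, gains = [], [], []
  for sample, ids in enumerate(dense_ids):
      for vocab_id in ids:
          row_ids.append(sample)    # which sample the lookup belongs to
          col_ids.append(vocab_id)  # which embedding row is looked up
          gains.append(1.0)         # per-lookup weight, before any combiner scaling

  print(row_ids)  # [0, 0, 1, 1]
  print(col_ids)  # [3, 7, 5, 2]
  print(gains)    # [1.0, 1.0, 1.0, 1.0]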

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sample_count::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
combiner::mlir::StringAttrstring attribute

Operands:

Operand Description
indices_or_row_splits tensor of 32-bit integer values
values tensor of 32-bit integer values
weights tensor of 32-bit float values

Results:

Result Description
row_ids tensor of 32-bit integer values
col_ids tensor of 32-bit integer values
gains tensor of 32-bit float values

tf.ConvertToListOfSparseCoreCooTensors (TF::ConvertToListOfSparseCoreCooTensorsOp)

An op which converts a sparse/ragged/dense tensor into a list of COO tensors, one for each SparseCore.

Traits: AlwaysSpeculatableImplTrait, SameVariadicOperandSize, SameVariadicResultSize

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sample_count::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
row_offset::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 0
col_offset::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 0
col_shift::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 0
num_sc_shards::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
stacked_table_sample_count::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
combiner::mlir::StringAttrstring attribute
num_sc_per_chip::mlir::Attributederived attribute

Operands:

Operand Description
indices_or_row_splits tensor of 32-bit integer values
values tensor of 32-bit integer values
weights tensor of 32-bit float values

Results:

Result Description
row_ids_list variadic of tensor of 32-bit integer values
col_ids_list variadic of tensor of 32-bit integer values
gains_list variadic of tensor of 32-bit float values

tf.ConvertToSparseCoreCsrWrappedCooTensorOp (TF::ConvertToSparseCoreCsrWrappedCooTensorOp)

An op which converts the sorted COO tensor into the SparseCore CSR-wrapped COO format.

Traits: AlwaysSpeculatableImplTrait, SameVariadicOperandSize

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sample_count_per_sc::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_replica::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
max_minibatches_per_sc::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
max_ids_per_chip_per_sample::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_vocab_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
feature_width::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_name::mlir::StringAttrstring attribute
allow_id_dropping::mlir::BoolAttrbool attribute
num_sc_per_chip::mlir::Attributederived attribute

Operands:

Operand Description
sorted_row_ids_list variadic of tensor of 32-bit integer values
sorted_col_ids_list variadic of tensor of 32-bit integer values
sorted_gains_list variadic of tensor of 32-bit float values
id_counts_list variadic of tensor of 32-bit integer values
splits tensor of 64-bit integer values

Results:

Result Description
row_pointers tensor of 32-bit integer values
sorted_sample_ids tensor of 32-bit integer values
sorted_token_ids tensor of 32-bit integer values
sorted_gains tensor of 32-bit float values
row_pointers_unpadded_size tensor of 32-bit integer values
ids_unpadded_size tensor of 32-bit integer values
num_minibatches_per_sc tensor of 32-bit integer values

tf.Cos (TF::CosOp)

Computes cos of x element-wise.

Given an input tensor, this function computes cosine of every element in the tensor. Input range is (-inf, inf) and output range is [-1,1]. If input lies outside the boundary, nan is returned.

  x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")])
  tf.math.cos(x) ==> [nan -0.91113025 0.87758255 0.5403023 0.36235774 0.48718765 -0.95215535 nan]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Cosh (TF::CoshOp)

Computes hyperbolic cosine of x element-wise.

Given an input tensor, this function computes hyperbolic cosine of every element in the tensor. Input range is [-inf, inf] and output range is [1, inf].

  x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")])
  tf.math.cosh(x) ==> [inf 4.0515420e+03 1.1276259e+00 1.5430807e+00 1.8106556e+00 3.7621956e+00 1.1013233e+04 inf]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.CreateSummaryDbWriter (TF::CreateSummaryDbWriterOp)

Creates summary database writer accessible by given resource handle.

This can be used to write tensors from the execution graph directly to a database. Only SQLite is supported right now. This function will create the schema if it doesn't exist. Entries in the Users, Experiments, and Runs tables will be created automatically if they don't already exist.

  • writer: Handle to SummaryWriter resource to overwrite.
  • db_uri: For example "file:/tmp/foo.sqlite".
  • experiment_name: Can't contain ASCII control characters or <>. Case sensitive. If empty, then the Run will not be associated with any Experiment.
  • run_name: Can't contain ASCII control characters or <>. Case sensitive. If empty, then each Tag will not be associated with any Run.
  • user_name: Must be valid as both a DNS label and Linux username. If empty, then the Experiment will not be associated with any User.

Operands:

Operand Description
writer tensor of resource values
db_uri tensor of string values
experiment_name tensor of string values
run_name tensor of string values
user_name tensor of string values

tf.CreateSummaryFileWriter (TF::CreateSummaryFileWriterOp)

Creates a summary file writer accessible by the given resource handle.

  • writer: A handle to the summary writer resource.
  • logdir: Directory where the event file will be written.
  • max_queue: Size of the queue of pending events and summaries.
  • flush_millis: How often, in milliseconds, to flush the pending events and summaries to disk.
  • filename_suffix: Every event file's name is suffixed with this suffix.
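
A minimal usage sketch via the high-level tf.summary API, which wraps this writer; the log directory below is hypothetical:

  import tensorflow as tf

  writer = tf.summary.create_file_writer(
      "/tmp/logs", max_queue=10, flush_millis=1000, filename_suffix=".v2")
  with writer.as_default():
      tf.summary.scalar("loss", 0.5, step=1)
  writer.flush()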

Operands:

Operand Description
writer tensor of resource values
logdir tensor of string values
max_queue tensor of 32-bit integer values
flush_millis tensor of 32-bit integer values
filename_suffix tensor of string values

tf.Cross (TF::CrossOp)

Compute the pairwise cross product.

a and b must be the same shape; they can either be simple 3-element vectors, or any shape where the innermost dimension is 3. In the latter case, each pair of corresponding 3-element vectors is cross-multiplied independently.
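
For example, via the high-level wrapper tf.linalg.cross (which lowers to this op):

  import tensorflow as tf

  a = tf.constant([[1., 0., 0.], [0., 1., 0.]])
  b = tf.constant([[0., 1., 0.], [0., 0., 1.]])
  # Each corresponding pair of 3-element vectors is crossed independently.
  print(tf.linalg.cross(a, b).numpy())  # [[0. 0. 1.]
                                        #  [1. 0. 0.]]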

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of integer or floating-point values
b tensor of integer or floating-point values

Results:

Result Description
product tensor of integer or floating-point values

tf.CrossReplicaSum (TF::CrossReplicaSumOp)

An Op to sum inputs across replicated TPU instances.

Each instance supplies its own input.

For example, suppose there are 8 TPU instances: [A, B, C, D, E, F, G, H]. Passing group_assignment=[[0,2,4,6],[1,3,5,7]] sets A, C, E, G as group 0, and B, D, F, H as group 1. Thus we get the outputs: [A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H].

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 32-bit unsigned integer values
group_assignment tensor of 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 32-bit unsigned integer values

tf.Cumprod (TF::CumprodOp)

Compute the cumulative product of the tensor x along axis.

By default, this op performs an inclusive cumprod, which means that the first element of the input is identical to the first element of the output:

tf.cumprod([a, b, c])  # => [a, a * b, a * b * c]

By setting the exclusive kwarg to True, an exclusive cumprod is performed instead:

tf.cumprod([a, b, c], exclusive=True)  # => [1, a, a * b]

By setting the reverse kwarg to True, the cumprod is performed in the opposite direction:

tf.cumprod([a, b, c], reverse=True)  # => [a * b * c, b * c, c]

This is more efficient than using separate tf.reverse ops.

The reverse and exclusive kwargs can also be combined:

tf.cumprod([a, b, c], exclusive=True, reverse=True)  # => [b * c, c, 1]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
exclusive::mlir::BoolAttrbool attribute
reverse::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of number values
axis tensor of 32/64-bit signed integer values

Results:

Result Description
out tensor of number values

tf.Cumsum (TF::CumsumOp)

Compute the cumulative sum of the tensor x along axis.

By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output:

tf.cumsum([a, b, c])  # => [a, a + b, a + b + c]

By setting the exclusive kwarg to True, an exclusive cumsum is performed instead:

tf.cumsum([a, b, c], exclusive=True)  # => [0, a, a + b]

By setting the reverse kwarg to True, the cumsum is performed in the opposite direction:

tf.cumsum([a, b, c], reverse=True)  # => [a + b + c, b + c, c]

This is more efficient than using separate tf.reverse ops.

The reverse and exclusive kwargs can also be combined:

tf.cumsum([a, b, c], exclusive=True, reverse=True)  # => [b + c, c, 0]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
exclusive::mlir::BoolAttrbool attribute
reverse::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of number values
axis tensor of 32/64-bit signed integer values

Results:

Result Description
out tensor of number values

tf.CumulativeLogsumexp (TF::CumulativeLogsumexpOp)

Compute the cumulative log-sum-exp of the tensor x along axis.

By default, this op performs an inclusive cumulative log-sum-exp, which means that the first element of the input is identical to the first element of the output:

tf.math.cumulative_logsumexp([a, b, c])  # => [a, log(exp(a) + exp(b)), log(exp(a) + exp(b) + exp(c))]

By setting the exclusive kwarg to True, an exclusive cumulative log-sum-exp is performed instead:

tf.math.cumulative_logsumexp([a, b, c], exclusive=True)  # => [-inf, a, log(exp(a) + exp(b))]

Note that the neutral element of the log-sum-exp operation is -inf, however, for performance reasons, the minimal value representable by the floating point type is used instead.

By setting the reverse kwarg to True, the cumulative log-sum-exp is performed in the opposite direction.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
exclusive::mlir::BoolAttrbool attribute
reverse::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values
axis tensor of 32/64-bit signed integer values

Results:

Result Description
out tensor of floating-point values

tf.DataFormatDimMap (TF::DataFormatDimMapOp)

Returns the dimension index in the destination data format given the one in the source data format.
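
For example, mapping axis indices from NHWC to NCHW via the raw op binding (a minimal sketch):

  import tensorflow as tf

  # H is axis 1 and C is axis 3 in NHWC; in NCHW they become axes 2 and 1.
  x = tf.constant([1, 3])
  y = tf.raw_ops.DataFormatDimMap(x=x, src_format="NHWC", dst_format="NCHW")
  print(y.numpy())  # [2 1]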

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
src_format::mlir::StringAttrstring attribute
dst_format::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of 32/64-bit signed integer values

Results:

Result Description
y tensor of 32/64-bit signed integer values

tf.DataFormatVecPermute (TF::DataFormatVecPermuteOp)

Permute input tensor from src_format to dst_format.

Given source and destination format strings of length n=4 or 5, the input tensor must be a vector of size n or n-2, or a 2D tensor of shape (n, 2) or (n-2, 2).

If the first dimension of the input tensor is n-2, it is assumed that the non-spatial dimensions (i.e., N and C) are omitted.

For example, with src_format of NHWC, dst_format of NCHW, and input:

[1, 2, 3, 4]

the output will be:

[1, 4, 2, 3]

With src_format of NDHWC, dst_format of NCDHW, and input:

[[1, 6], [2, 7], [3, 8], [4, 9], [5, 10]]

the output will be:

[[1, 6], [5, 10], [2, 7], [3, 8], [4, 9]]

With src_format of NHWC, dst_format of NCHW, and input:

[1, 2]

the output will be:

[1, 2]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
src_format::mlir::StringAttrstring attribute
dst_format::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of 32/64-bit signed integer values

Results:

Result Description
y tensor of 32/64-bit signed integer values

tf.DebugIdentityV2 (TF::DebugIdentityV2Op)

Debug Identity V2 Op.

Provides an identity mapping from input to output, while writing the content of the input tensor by calling DebugEventsWriter.

The semantics of the input tensor depends on tensor_debug_mode. In typical usage, the input tensor comes directly from the user computation only when graph_debug_mode is FULL_TENSOR (see protobuf/debug_event.proto for a list of all the possible values of graph_debug_mode). For the other debug modes, the input tensor should be produced by an additional op or subgraph that computes summary information about one or more tensors.

Attributes:

AttributeMLIR TypeDescription
tfdbg_context_id::mlir::StringAttrstring attribute
op_name::mlir::StringAttrstring attribute
output_slot::mlir::IntegerAttr64-bit signless integer attribute
tensor_debug_mode::mlir::IntegerAttr64-bit signless integer attribute
debug_urls::mlir::ArrayAttrstring array attribute
circular_buffer_size::mlir::IntegerAttr64-bit signless integer attribute
tfdbg_run_id::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.DecodeAndCropJpeg (TF::DecodeAndCropJpegOp)

Decode and Crop a JPEG-encoded image to a uint8 tensor.

The attr channels indicates the desired number of color channels for the decoded image.

Accepted values are:

  • 0: Use the number of channels in the JPEG-encoded image.
  • 1: output a grayscale image.
  • 3: output an RGB image.

If needed, the JPEG-encoded image is transformed to match the requested number of color channels.

The attr ratio allows downscaling the image by an integer factor during decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than downscaling the image later.

It is equivalent to a combination of decode and crop, but is much faster because it only decodes the part of the JPEG image within the crop window.
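
A hedged usage sketch via tf.io.decode_and_crop_jpeg; the file path and crop window below are hypothetical:

  import tensorflow as tf

  contents = tf.io.read_file("image.jpg")
  # crop_window is [crop_y, crop_x, crop_height, crop_width].
  crop_window = tf.constant([10, 20, 100, 150], dtype=tf.int32)
  patch = tf.io.decode_and_crop_jpeg(contents, crop_window, channels=3)
  print(patch.shape)  # (100, 150, 3)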

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
channels::mlir::IntegerAttr64-bit signless integer attribute
ratio::mlir::IntegerAttr64-bit signless integer attribute
fancy_upscaling::mlir::BoolAttrbool attribute
try_recover_truncated::mlir::BoolAttrbool attribute
acceptable_fraction::mlir::FloatAttr32-bit float attribute
dct_method::mlir::StringAttrstring attribute

Operands:

Operand Description
contents tensor of string values
crop_window tensor of 32-bit integer values

Results:

Result Description
image tensor of 8-bit unsigned integer values

tf.DecodeGif (TF::DecodeGifOp)

Decode the frame(s) of a GIF-encoded image to a uint8 tensor.

GIF images with frame or transparency compression are not supported. On Linux and MacOS systems, convert animated GIFs from compressed to uncompressed by running:

convert $src.gif -coalesce $dst.gif

This op also supports decoding JPEGs and PNGs, though it is cleaner to use tf.io.decode_image.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
contents tensor of string values

Results:

Result Description
image tensor of 8-bit unsigned integer values

tf.DecodeJpeg (TF::DecodeJpegOp)

Decode a JPEG-encoded image to a uint8 tensor.

The attr channels indicates the desired number of color channels for the decoded image.

Accepted values are:

  • 0: Use the number of channels in the JPEG-encoded image.
  • 1: output a grayscale image.
  • 3: output an RGB image.

If needed, the JPEG-encoded image is transformed to match the requested number of color channels.

The attr ratio allows downscaling the image by an integer factor during decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than downscaling the image later.

This op also supports decoding PNGs and non-animated GIFs since the interface is the same, though it is cleaner to use tf.io.decode_image.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
channels::mlir::IntegerAttr64-bit signless integer attribute
ratio::mlir::IntegerAttr64-bit signless integer attribute
fancy_upscaling::mlir::BoolAttrbool attribute
try_recover_truncated::mlir::BoolAttrbool attribute
acceptable_fraction::mlir::FloatAttr32-bit float attribute
dct_method::mlir::StringAttrstring attribute

Operands:

Operand Description
contents tensor of string values

Results:

Result Description
image tensor of 8-bit unsigned integer values

tf.DecodePaddedRaw (TF::DecodePaddedRawOp)

Reinterpret the bytes of a string as a vector of numbers.
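
As a sketch, the fixed_length form of tf.io.decode_raw (which, to the best of my understanding, maps to this op) pads or truncates each string to a fixed number of bytes:

  import tensorflow as tf

  s = tf.constant(["AB", "ABCD"])
  # fixed_length is in bytes; shorter strings are zero-padded, longer ones truncated.
  v = tf.io.decode_raw(s, tf.uint8, fixed_length=4)
  print(v.numpy())  # [[65 66  0  0]
                    #  [65 66 67 68]]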

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
little_endian::mlir::BoolAttrbool attribute
out_type::mlir::Attributederived attribute

Operands:

Operand Description
input_bytes tensor of string values
fixed_length tensor of 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 8-bit unsigned integer values

tf.DecodePng (TF::DecodePngOp)

Decode a PNG-encoded image to a uint8 or uint16 tensor.

The attr channels indicates the desired number of color channels for the decoded image.

Accepted values are:

  • 0: Use the number of channels in the PNG-encoded image.
  • 1: output a grayscale image.
  • 3: output an RGB image.
  • 4: output an RGBA image.

If needed, the PNG-encoded image is transformed to match the requested number of color channels.

This op also supports decoding JPEGs and non-animated GIFs since the interface is the same, though it is cleaner to use tf.io.decode_image.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
channels::mlir::IntegerAttr64-bit signless integer attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
contents tensor of string values

Results:

Result Description
image tensor of 16-bit unsigned integer or 8-bit unsigned integer values

tf.DeleteIterator (TF::DeleteIteratorOp)

A container for an iterator resource.

Operands:

Operand Description
handle tensor of resource values
deleter tensor of variant values

tf.DeleteMemoryCache (TF::DeleteMemoryCacheOp)

Operands:

Operand Description
handle tensor of resource values
deleter tensor of variant values

tf.DeleteMultiDeviceIterator (TF::DeleteMultiDeviceIteratorOp)

A container for an iterator resource.

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute

Operands:

Operand Description
multi_device_iterator tensor of resource values
iterators variadic of tensor of resource values
deleter tensor of variant values

tf.DeleteRandomSeedGenerator (TF::DeleteRandomSeedGeneratorOp)

Operands:

Operand Description
handle tensor of resource values
deleter tensor of variant values

tf.DeleteSeedGenerator (TF::DeleteSeedGeneratorOp)

Operands:

Operand Description
handle tensor of resource values
deleter tensor of variant values

tf.DepthToSpace (TF::DepthToSpaceOp)

DepthToSpace for tensors of type T.

Rearranges data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. The attr block_size indicates the input block size and how the data is moved.

  • Chunks of data of size block_size * block_size from depth are rearranged into non-overlapping blocks of size block_size x block_size
  • The width of the output tensor is input_width * block_size, whereas the height is input_height * block_size.
  • The Y, X coordinates within each block of the output image are determined by the high order component of the input channel index.
  • The depth of the input tensor must be divisible by block_size * block_size.

The data_format attr specifies the layout of the input and output tensors with the following options: "NHWC": [ batch, height, width, channels ] "NCHW": [ batch, channels, height, width ] "NCHW_VECT_C": qint8 [ batch, channels / 4, height, width, 4 ]

It is useful to consider the operation as transforming a 6-D Tensor. e.g. for data_format = NHWC, Each element in the input tensor can be specified via 6 coordinates, ordered by decreasing memory layout significance as: n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates within the input image, bX, bY means coordinates within the output block, oC means output channels). The output would be the input transposed to the following layout: n,iY,bY,iX,bX,oC

This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.

For example, given an input of shape [1, 1, 1, 4], data_format = "NHWC" and block_size = 2:

x = [[[[1, 2, 3, 4]]]]

This operation will output a tensor of shape [1, 2, 2, 1]:

   [[[[1], [2]],
     [[3], [4]]]]

Here, the input has a batch of 1 and each batch element has shape [1, 1, 4], the corresponding output will have 2x2 elements and will have a depth of 1 channel (1 = 4 / (block_size * block_size)). The output element shape is [2, 2, 1].

For an input tensor with larger depth, here of shape [1, 1, 1, 12], e.g.

x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]

This operation, for block size of 2, will return the following tensor of shape [1, 2, 2, 3]

   [[[[1, 2, 3], [4, 5, 6]],
     [[7, 8, 9], [10, 11, 12]]]]

Similarly, for the following input of shape [1 2 2 4], and a block size of 2:

x =  [[[[1, 2, 3, 4],
       [5, 6, 7, 8]],
      [[9, 10, 11, 12],
       [13, 14, 15, 16]]]]

the operator will return the following tensor of shape [1 4 4 1]:

x = [[[ [1],   [2],  [5],  [6]],
      [ [3],   [4],  [7],  [8]],
      [ [9],  [10], [13],  [14]],
      [ [11], [12], [15],  [16]]]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
block_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 2
data_format::mlir::StringAttrstring attribute whose value is NHWC, or NCHW, or NCHW_VECT_C
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.DepthwiseConv2dNative (TF::DepthwiseConv2dNativeOp)

Computes a 2-D depthwise convolution given 4-D input and filter tensors.

Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, channel_multiplier], containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. Thus, the output has in_channels * channel_multiplier channels.

for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                        filter[di, dj, k, q]

Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].
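
A brief illustrative sketch via the high-level wrapper tf.nn.depthwise_conv2d (which lowers to this op); the sizes below are hypothetical:

  import tensorflow as tf

  # 3 input channels with channel_multiplier=2 give 6 output channels.
  x = tf.random.normal([1, 8, 8, 3])
  w = tf.random.normal([3, 3, 3, 2])

  y = tf.nn.depthwise_conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
  print(y.shape)  # (1, 8, 8, 6)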

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.DepthwiseConv2dNativeBackpropFilter (TF::DepthwiseConv2dNativeBackpropFilterOp)

Computes the gradients of depthwise convolution with respect to the filter.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter_sizes tensor of 32-bit integer values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.DepthwiseConv2dNativeBackpropInput (TF::DepthwiseConv2dNativeBackpropInputOp)

Computes the gradients of depthwise convolution with respect to the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input_sizes tensor of 32-bit integer values
filter tensor of floating-point values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Dequantize (TF::DequantizeOp)

Dequantize the 'input' tensor into a float or bfloat16 Tensor.

[min_range, max_range] are scalar floats that specify the range for the output. The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents.

In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:

if T == qint8: in[i] += (range(T) + 1)/ 2.0
out[i] = min_range + (in[i]* (max_range - min_range) / range(T))

here range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()

MIN_COMBINED Mode Example

If the input comes from a QuantizedRelu6, the output type is quint8 (range of 0-255) but the possible range of QuantizedRelu6 is 0-6. The min_range and max_range values are therefore 0.0 and 6.0. Dequantize on quint8 will take each value, cast to float, and multiply by 6 / 255. Note that if the quantized type is qint8, the operation will additionally add 128 to each value prior to casting.

If the mode is 'MIN_FIRST', then this approach is used:

num_discrete_values = 1 << (# of bits in T)
range_adjust = num_discrete_values / (num_discrete_values - 1)
range = (range_max - range_min) * range_adjust
range_scale = range / num_discrete_values
const double offset_input = static_cast<double>(input) - lowest_quantized;
result = range_min + ((input - numeric_limits<T>::min()) * range_scale)

If the mode is SCALED, dequantization is performed by multiplying each input value by a scaling_factor. (Thus an input of 0 always maps to 0.0).

The scaling_factor is determined from min_range, max_range, and narrow_range in a way that is compatible with QuantizeAndDequantize{V2|V3} and QuantizeV2, using the following algorithm:


  const int min_expected_T = std::numeric_limits<T>::min() +
    (narrow_range ? 1 : 0);
  const int max_expected_T = std::numeric_limits<T>::max();

  const float scale_factor =
    (std::numeric_limits<T>::min() == 0) ? (max_range / max_expected_T)
                                         : std::max(min_range / min_expected_T,
                                                    max_range / max_expected_T);
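
For concreteness, a minimal pure-Python sketch of the SCALED-mode computation above, assuming a qint8 input (range -128..127) and narrow_range = false; it mirrors the pseudo-code rather than calling the op:

  def dequantize_scaled(values, min_range, max_range, t_min=-128, t_max=127):
      min_expected_t = t_min        # would be t_min + 1 if narrow_range were true
      max_expected_t = t_max
      if t_min == 0:                # unsigned quantized types
          scale_factor = max_range / max_expected_t
      else:
          scale_factor = max(min_range / min_expected_t, max_range / max_expected_t)
      return [v * scale_factor for v in values]

  print(dequantize_scaled([-128, 0, 127], min_range=-1.0, max_range=1.0))
  # approximately [-1.008, 0.0, 1.0]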

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
mode::mlir::StringAttrstring attribute whose value is MIN_COMBINED, or MIN_FIRST, or SCALED
narrow_range::mlir::BoolAttrbool attribute
axis::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer values
min_range tensor of 32-bit float values
max_range tensor of 32-bit float values

Results:

Result Description
output tensor of bfloat16 or 32-bit float values

tf.DeserializeIterator (TF::DeserializeIteratorOp)

Converts the given variant tensor to an iterator and stores it in the given resource.

Operands:

Operand Description
resource_handle tensor of resource values
serialized tensor of variant values

tf.DeserializeSparse (TF::DeserializeSparseOp)

Deserialize SparseTensor objects.

The input serialized_sparse must have the shape [?, ?, ..., ?, 3] where the last dimension stores serialized SparseTensor objects and the other N dimensions (N >= 0) correspond to a batch. The ranks of the original SparseTensor objects must all match. When the final SparseTensor is created, its rank is the rank of the incoming SparseTensor objects plus N; the sparse tensors have been concatenated along new dimensions, one for each batch.

The output SparseTensor object's shape values for the original dimensions are the max across the input SparseTensor objects' shape values for the corresponding dimensions. The new dimensions match the size of the batch.

The input SparseTensor objects' indices are assumed ordered in standard lexicographic order. If this is not the case, after this step run SparseReorder to restore index ordering.

For example, if the serialized input is a [2 x 3] matrix representing two original SparseTensor objects:

index = [ 0]
        [10]
        [20]
values = [1, 2, 3]
shape = [50]

and

index = [ 2]
        [10]
values = [4, 5]
shape = [30]

then the final deserialized SparseTensor will be:

index = [0  0]
        [0 10]
        [0 20]
        [1  2]
        [1 10]
values = [1, 2, 3, 4, 5]
shape = [2 50]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tserialized::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
serialized_sparse tensor of string or variant values

Results:

Result Description
sparse_indices tensor of 64-bit integer values
sparse_values tensor of tf.dtype values
sparse_shape tensor of 64-bit integer values

tf.DestroyResourceOp (TF::DestroyResourceOp)

Deletes the resource specified by the handle.

All subsequent operations using the resource will result in a NotFound error status.

Attributes:

AttributeMLIR TypeDescription
ignore_lookup_error::mlir::BoolAttrbool attribute

Operands:

Operand Description
resource tensor of resource values

tf.DeviceIndex (TF::DeviceIndexOp)

Return the index of device the op runs.

Given a list of device names, this operation returns the index of the device this op runs on. The length of the list is returned in two cases: (1) the device does not exist in the given device list, or (2) the op is being compiled for XLA.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
device_names::mlir::ArrayAttrstring array attribute

Results:

Result Description
index tensor of 32-bit integer values

tf.Diag (TF::DiagOp)

Returns a diagonal tensor with given diagonal values.

Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:

Assume diagonal has dimensions [D1,..., Dk], then the output is a tensor of rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:

output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik] and 0 everywhere else.

For example:

# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
                       [0, 2, 0, 0]
                       [0, 0, 3, 0]
                       [0, 0, 0, 4]]

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
diagonal tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.DiagPart (TF::DiagPartOp)

Returns the diagonal part of the tensor.

This operation returns a tensor with the diagonal part of the input. The diagonal part is computed as follows:

Assume input has dimensions [D1,..., Dk, D1,..., Dk], then the output is a tensor of rank k with dimensions [D1,..., Dk] where:

diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik].

For example:

# 'input' is [[1, 0, 0, 0]
              [0, 2, 0, 0]
              [0, 0, 3, 0]
              [0, 0, 0, 4]]

tf.diag_part(input) ==> [1, 2, 3, 4]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
diagonal tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.Digamma (TF::DigammaOp)

Computes Psi, the derivative of Lgamma (the log of the absolute value of Gamma(x)), element-wise.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.DisableCopyOnRead (TF::DisableCopyOnReadOp)

Turns off the copy-on-read mode.

Turns off the copy-on-read mode of a resource variable. If the variable is not in copy-on-read mode, this op has no effect.

Operands:

Operand Description
resource tensor of resource values

tf.Div (TF::DivOp)

Returns x / y element-wise.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.DivNoNan (TF::DivNoNanOp)

Returns 0 if the denominator is zero.
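
For example, via the high-level wrapper tf.math.divide_no_nan (which lowers to this op):

  import tensorflow as tf

  x = tf.constant([3.0, 1.0, 0.0])
  y = tf.constant([2.0, 0.0, 0.0])
  # Division by zero yields 0 instead of inf or nan.
  print(tf.math.divide_no_nan(x, y).numpy())  # [1.5 0.  0. ]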

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values
y tensor of floating-point or complex values

Results:

Result Description
z tensor of floating-point or complex values

tf.DummyMemoryCache (TF::DummyMemoryCacheOp)

Results:

Result Description
handle tensor of resource values

tf.DummySeedGenerator (TF::DummySeedGeneratorOp)

Results:

Result Description
handle tensor of resource values

tf.DynamicEnqueueTPUEmbeddingArbitraryTensorBatch (TF::DynamicEnqueueTPUEmbeddingArbitraryTensorBatchOp)

Eases the porting of code that uses tf.nn.embedding_lookup_sparse().

embedding_indices[i] and aggregation_weights[i] correspond to the ith feature.

The tensors at corresponding positions in the three input lists (sample_indices, embedding_indices and aggregation_weights) must have the same shape, i.e. rank 1 with dim_size() equal to the total number of lookups into the table described by the corresponding feature.

Traits: SameVariadicOperandSize

Interfaces: TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
combiners::mlir::ArrayAttrstring array attribute
N::mlir::Attributederived attribute
T1::mlir::Attributederived attribute
T2::mlir::Attributederived attribute
T3::mlir::Attributederived attribute

Operands:

Operand Description
sample_indices_or_row_splits variadic of tensor of 32/64-bit signed integer values
embedding_indices variadic of tensor of 32/64-bit signed integer values
aggregation_weights variadic of tensor of 32/64-bit float values
mode_override tensor of string values
device_ordinal tensor of 32-bit integer values

tf.DynamicPartition (TF::DynamicPartitionOp)

Partitions data into num_partitions tensors using indices from partitions.

For each index tuple js of size partitions.ndim, the slice data[js, ...] becomes part of outputs[partitions[js]]. The slices with partitions[js] = i are placed in outputs[i] in lexicographic order of js, and the first dimension of outputs[i] is the number of entries in partitions equal to i. In detail,

    outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]

    outputs[i] = pack([data[js, ...] for js if partitions[js] == i])

data.shape must start with partitions.shape.

For example:

    # Scalar partitions.
    partitions = 1
    num_partitions = 2
    data = [10, 20]
    outputs[0] = []  # Empty with shape [0, 2]
    outputs[1] = [[10, 20]]

    # Vector partitions.
    partitions = [0, 0, 1, 1, 0]
    num_partitions = 2
    data = [10, 20, 30, 40, 50]
    outputs[0] = [10, 20, 50]
    outputs[1] = [30, 40]

See dynamic_stitch for an example on how to merge partitions back.

Raises:

  • InvalidArgumentError in the following cases:
    • If partitions is not in range [0, num_partitions)
    • If partitions.shape does not match a prefix of data.shape.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
num_partitions::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of tf.dtype values
partitions tensor of 32-bit integer values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf.DynamicStitch (TF::DynamicStitchOp)

Interleave the values from the data tensors into a single tensor.

Builds a merged tensor such that

    merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]

For example, if each indices[m] is scalar or vector, we have

    # Scalar indices:
    merged[indices[m], ...] = data[m][...]

    # Vector indices:
    merged[indices[m][i], ...] = data[m][i, ...]

Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is

merged.shape = [max(indices) + 1] + constant

Values are merged in order, so if an index appears in both indices[m][i] and indices[n][j] for (m,i) < (n,j) the slice data[n][j] will appear in the merged result. If you do not need this guarantee, ParallelDynamicStitch might perform better on some devices.

For example:

    indices[0] = 6
    indices[1] = [4, 1]
    indices[2] = [[5, 2], [0, 3]]
    data[0] = [61, 62]
    data[1] = [[41, 42], [11, 12]]
    data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
    merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
              [51, 52], [61, 62]]

This method can be used to merge partitions created by dynamic_partition as illustrated on the following example:

    # Apply function (increments x_i) on elements for which a certain condition
    # apply (x_i != -1 in this example).
    x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
    condition_mask=tf.not_equal(x,tf.constant(-1.))
    partitioned_data = tf.dynamic_partition(
        x, tf.cast(condition_mask, tf.int32) , 2)
    partitioned_data[1] = partitioned_data[1] + 1.0
    condition_indices = tf.dynamic_partition(
        tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2)
    x = tf.dynamic_stitch(condition_indices, partitioned_data)
    # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain
    # unchanged.

Traits: AlwaysSpeculatableImplTrait, SameVariadicOperandSize

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
indices variadic of tensor of 32-bit integer values
data variadic of tensor of tf.dtype values

Results:

Result Description
merged tensor of tf.dtype values

tf.Einsum (TF::EinsumOp)

Tensor contraction according to Einstein summation convention.

Implements generalized Tensor contraction and reduction. Each input Tensor must have a corresponding input subscript appearing in the comma-separated left-hand side of the equation. The right-hand side of the equation consists of the output subscript. The input subscripts and the output subscript should consist of zero or more named axis labels and at most one ellipsis (...).

The named axis labels may be any single character other than those having special meaning, namely ,.->. The behavior of this Op is undefined if it receives an ill-formatted equation; since the validation is done at graph-building time, we omit format validation checks at runtime.

Operations are applied to the input(s) according to the following rules:

(a) Generalized Diagonals: For input dimensions corresponding to axis labels appearing more than once in the same input subscript, we take the generalized (k-dimensional) diagonal. For example, in the equation iii->i with input shape [3, 3, 3], the generalized diagonal would consist of 3 elements at indices (0, 0, 0), (1, 1, 1) and (2, 2, 2) to create a Tensor of shape [3].

(b) Reduction: Axes corresponding to labels appearing only in one input subscript but not in the output subscript are summed over prior to Tensor contraction. For example, in the equation ab,bc->b, the axis labels a and c are the reduction axis labels.

(c) Batch Dimensions: Axes corresponding to labels appearing in each of the input subscripts and also in the output subscript make up the batch dimensions in Tensor contraction. Unnamed axis labels corresponding to ellipsis (...) also correspond to batch dimensions. For example, for the equation denoting batch matrix multiplication, bij,bjk->bik, the axis label b corresponds to a batch dimension.

(d) Contraction: In case of binary einsum, axes corresponding to labels appearing in two different inputs (and not in the output) are contracted against each other. Considering the batch matrix multiplication equation again (bij,bjk->bik), the contracted axis label is j.

(e) Expand Diagonal: If the output subscripts contain repeated (explicit) axis labels, the opposite operation of (a) is applied. For example, in the equation i->iii with input shape [3], the output of shape [3, 3, 3] is all zeros, except for the (generalized) diagonal, which is populated with values from the input. Note: This operation is not supported by np.einsum or tf.einsum; it is provided to enable computing the symbolic gradient of tf.einsum.

The output subscripts must contain only labels appearing in at least one of the input subscripts. Furthermore, all dimensions mapping to the same axis label must be equal.

Each of the input and output subscripts may contain at most a single ellipsis (...). These ellipses are mapped against dimensions not corresponding to any named axis label. If two inputs contain an ellipsis, then they are broadcasted according to standard NumPy broadcasting rules.

The broadcasted dimensions are placed in the corresponding location of the ellipsis in the output subscript. If the broadcasted dimensions are non-empty and the output subscripts do not contain ellipsis, then an InvalidArgument error is raised.
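
A minimal runnable sketch of rules (a)-(d) using the Python tf.einsum wrapper; the shapes and values below are arbitrary placeholders, not part of the op definition:

    import tensorflow as tf

    x = tf.reshape(tf.range(27, dtype=tf.float32), [3, 3, 3])
    diag = tf.einsum('iii->i', x)            # (a) generalized diagonal, shape [3]

    a = tf.random.normal([2, 3])
    b = tf.random.normal([3, 4])
    red = tf.einsum('ab,bc->b', a, b)        # (b) a and c are reduction labels, shape [3]

    p = tf.random.normal([5, 2, 3])
    q = tf.random.normal([5, 3, 4])
    bmm = tf.einsum('bij,bjk->bik', p, q)    # (c)/(d) b is a batch dim, j is contracted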

@compatibility(numpy) Similar to numpy.einsum.

Comparison with numpy.einsum:

  • This Op only supports unary and binary forms of numpy.einsum.
  • This Op does not support implicit form. (i.e. equations without ->).
  • This Op also supports repeated indices in the output subscript, which is not supported by numpy.einsum. @end_compatibility

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
equation::mlir::StringAttrstring attribute
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.Elu (TF::EluOp)

Computes the exponential linear function.

The ELU function is defined as:

  • \( e ^ x - 1 \) if \( x < 0 \)
  • \( x \) if \( x >= 0 \)

Examples:

    tf.nn.elu(1.0)      # 1.0
    tf.nn.elu(0.0)      # 0.0
    tf.nn.elu(-1000.0)  # -1.0 (saturates to -1 for large negative inputs)

See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
features tensor of floating-point values

Results:

Result Description
activations tensor of floating-point values

tf.EluGrad (TF::EluGradOp)

Computes gradients for the exponential linear (Elu) operation.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
gradients tensor of floating-point values
outputs tensor of floating-point values

Results:

Result Description
backprops tensor of floating-point values

tf.Empty (TF::EmptyOp)

Creates a tensor with the given shape.

This operation creates a tensor of the given shape and dtype.

Attributes:

AttributeMLIR TypeDescription
init::mlir::BoolAttrbool attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.EmptyTensorList (TF::EmptyTensorListOp)

Creates and returns an empty tensor list.

All list elements must be tensors of dtype element_dtype and shape compatible with element_shape.

handle: an empty tensor list. element_dtype: the type of elements in the list. element_shape: a shape compatible with that of elements in the list.
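
For illustration, a hedged sketch via the tf.raw_ops bindings; the element shape, dtype, and the use of -1 as "no fixed capacity" are assumptions for this example only:

    import tensorflow as tf

    # Sketch only: -1 is assumed here to mean "no fixed capacity".
    handle = tf.raw_ops.EmptyTensorList(
        element_shape=tf.constant([2], dtype=tf.int32),
        max_num_elements=tf.constant(-1, dtype=tf.int32),
        element_dtype=tf.float32)
    handle = tf.raw_ops.TensorListPushBack(
        input_handle=handle, tensor=tf.constant([1.0, 2.0]))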

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
shape_type::mlir::Attributederived attribute
element_dtype::mlir::Attributederived attribute

Operands:

Operand Description
element_shape tensor of 32/64-bit signed integer values
max_num_elements tensor of 32-bit integer values

Results:

Result Description
handle tensor of variant values

tf.EncodePng (TF::EncodePngOp)

PNG-encode an image.

image is a 3-D uint8 or uint16 Tensor of shape [height, width, channels] where channels is:

  • 1: for grayscale.
  • 2: for grayscale + alpha.
  • 3: for RGB.
  • 4: for RGBA.

The ZLIB compression level, compression, can be -1 for the PNG-encoder default or a value from 0 to 9. 9 is the highest compression level, generating the smallest output, but is slower.
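
A minimal sketch using the Python tf.io.encode_png wrapper; the all-zeros image is just a placeholder:

    import tensorflow as tf

    image = tf.zeros([64, 64, 3], dtype=tf.uint8)         # [height, width, channels]
    png_bytes = tf.io.encode_png(image, compression=-1)   # -1 = encoder default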

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
compression::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
image tensor of 16-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
contents tensor of string values

tf.EnqueueTPUEmbeddingArbitraryTensorBatch (TF::EnqueueTPUEmbeddingArbitraryTensorBatchOp)

Eases the porting of code that uses tf.nn.embedding_lookup_sparse().

embedding_indices[i] and aggregation_weights[i] correspond to the ith feature.

The tensors at corresponding positions in the three input lists (sample_indices, embedding_indices and aggregation_weights) must have the same shape, i.e. rank 1 with dim_size() equal to the total number of lookups into the table described by the corresponding feature.

Traits: SameVariadicOperandSize

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
combiners::mlir::ArrayAttrstring array attribute
N::mlir::Attributederived attribute
T1::mlir::Attributederived attribute
T2::mlir::Attributederived attribute
T3::mlir::Attributederived attribute

Operands:

Operand Description
sample_indices_or_row_splits variadic of tensor of 32/64-bit signed integer values
embedding_indices variadic of tensor of 32/64-bit signed integer values
aggregation_weights variadic of tensor of 32/64-bit float values
mode_override tensor of string values

tf.EnqueueTPUEmbeddingBatch (TF::EnqueueTPUEmbeddingBatchOp)

An op that enqueues a list of input batch tensors to TPUEmbedding.

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
combiners::mlir::ArrayAttrstring array attribute
N::mlir::Attributederived attribute

Operands:

Operand Description
batch variadic of tensor of string values
mode_override tensor of string values

tf.EnqueueTPUEmbeddingIntegerBatch (TF::EnqueueTPUEmbeddingIntegerBatchOp)

An op that enqueues a list of input batch tensors to TPUEmbedding.

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
N::mlir::Attributederived attribute

Operands:

Operand Description
batch variadic of tensor of 32-bit integer values
mode_override tensor of string values

tf.EnqueueTPUEmbeddingRaggedTensorBatch (TF::EnqueueTPUEmbeddingRaggedTensorBatchOp)

Eases the porting of code that uses tf.nn.embedding_lookup().

sample_splits[i], embedding_indices[i] and aggregation_weights[i] correspond to the ith feature. table_ids[i] indicates which embedding table to look up the ith feature in.

The tensors at corresponding positions in two of the input lists, embedding_indices and aggregation_weights, must have the same shape, i.e. rank 1 with dim_size() equal to the total number of lookups into the table described by the corresponding feature.

Traits: SameVariadicOperandSize

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
combiners::mlir::ArrayAttrstring array attribute
table_ids::mlir::ArrayAttr64-bit integer array attribute
max_sequence_lengths::mlir::ArrayAttr64-bit integer array attribute
num_features::mlir::ArrayAttr64-bit integer array attribute
N::mlir::Attributederived attribute
T1::mlir::Attributederived attribute
T2::mlir::Attributederived attribute
T3::mlir::Attributederived attribute

Operands:

Operand Description
sample_splits variadic of tensor of 32/64-bit signed integer values
embedding_indices variadic of tensor of 32/64-bit signed integer values
aggregation_weights variadic of tensor of 32/64-bit float values
mode_override tensor of string values

tf.EnqueueTPUEmbeddingSparseBatch (TF::EnqueueTPUEmbeddingSparseBatchOp)

An op that enqueues TPUEmbedding input indices from a SparseTensor.

This Op eases the porting of code that uses embedding_lookup_sparse(), although some Python preprocessing of the SparseTensor arguments to embedding_lookup_sparse() is required to produce the arguments to this Op, since only a single EnqueueTPUEmbeddingSparseBatch Op is allowed per training step.

The tensors at corresponding positions in the three input lists must have the same shape, i.e. rank 1 with dim_size() equal to the total number of lookups into the table described by the corresponding table_id.

Traits: SameVariadicOperandSize

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
combiners::mlir::ArrayAttrstring array attribute
N::mlir::Attributederived attribute
T1::mlir::Attributederived attribute
T2::mlir::Attributederived attribute
T3::mlir::Attributederived attribute

Operands:

Operand Description
sample_indices variadic of tensor of 32/64-bit signed integer values
embedding_indices variadic of tensor of 32/64-bit signed integer values
aggregation_weights variadic of tensor of 32/64-bit float values
mode_override tensor of string values

tf.EnqueueTPUEmbeddingSparseTensorBatch (TF::EnqueueTPUEmbeddingSparseTensorBatchOp)

Eases the porting of code that uses tf.nn.embedding_lookup_sparse().

sample_indices[i], embedding_indices[i] and aggregation_weights[i] correspond to the ith feature. table_ids[i] indicates which embedding table to look up the ith feature in.

The tensors at corresponding positions in the three input lists (sample_indices, embedding_indices and aggregation_weights) must have the same shape, i.e. rank 1 with dim_size() equal to the total number of lookups into the table described by the corresponding feature.

Traits: SameVariadicOperandSize

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
combiners::mlir::ArrayAttrstring array attribute
table_ids::mlir::ArrayAttr64-bit integer array attribute
max_sequence_lengths::mlir::ArrayAttr64-bit integer array attribute
num_features::mlir::ArrayAttr64-bit integer array attribute
N::mlir::Attributederived attribute
T1::mlir::Attributederived attribute
T2::mlir::Attributederived attribute
T3::mlir::Attributederived attribute

Operands:

Operand Description
sample_indices variadic of tensor of 32/64-bit signed integer values
embedding_indices variadic of tensor of 32/64-bit signed integer values
aggregation_weights variadic of tensor of 32/64-bit float values
mode_override tensor of string values

tf.EnsureShape (TF::EnsureShapeOp)

Ensures that the tensor's shape matches the expected shape.

Raises an error if the input tensor's shape does not match the specified shape. Returns the input tensor otherwise.
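
For example, via the Python tf.ensure_shape wrapper (values chosen arbitrarily):

    import tensorflow as tf

    x = tf.constant([[1., 2., 3.], [4., 5., 6.]])
    y = tf.ensure_shape(x, [None, 3])   # compatible: returns x unchanged
    # tf.ensure_shape(x, [5, 3])        # incompatible: would raise an error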

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
shape::mlir::AttributeTensorFlow shape attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.Equal (TF::EqualOp)

Returns the truth value of (x == y) element-wise.

x = tf.constant([2, 4])
y = tf.constant(2)
tf.math.equal(x, y) ==> array([True, False])

x = tf.constant([2, 4])
y = tf.constant([2, 4])
tf.math.equal(x, y) ==> array([True,  True])

Traits: AlwaysSpeculatableImplTrait, Commutative

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
incompatible_shape_error::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
y tensor of tf.dtype values

Results:

Result Description
z tensor of bool values

tf.Erf (TF::ErfOp)

Computes the Gauss error function of x element-wise. In statistics, for non-negative values of \(x\), the error function has the following interpretation: for a random variable \(Y\) that is normally distributed with mean 0 and variance \(1/2\), \(erf(x)\) is the probability that \(Y\) falls in the range \([-x, x]\).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.Erfc (TF::ErfcOp)

Computes the complementary error function of x element-wise.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.Erfinv (TF::ErfinvOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.ExecuteTPUEmbeddingPartitioner (TF::ExecuteTPUEmbeddingPartitionerOp)

An op that executes the TPUEmbedding partitioner on the central configuration device and computes the HBM size (in bytes) required for TPUEmbedding operation.

Attributes:

AttributeMLIR TypeDescription
config::mlir::StringAttrstring attribute

Results:

Result Description
common_config tensor of string values

tf.Exp (TF::ExpOp)

Computes exponential of x element-wise. \(y = e^x\).

This function computes the exponential of every element in the input tensor. i.e. exp(x) or e^(x), where x is the input tensor. e denotes Euler's number and is approximately equal to 2.718281. Output is positive for any real input.

  x = tf.constant(2.0)
  tf.math.exp(x) ==> 7.389056

  x = tf.constant([2.0, 8.0])
  tf.math.exp(x) ==> array([7.389056, 2980.958], dtype=float32)

For complex numbers, the exponential value is calculated as follows:

  e^(x+iy) = e^x * e^iy = e^x * (cos y + i sin y)

Let's consider complex number 1+1j as an example. e^1 * (cos 1 + i sin 1) = 2.7182818284590 * (0.54030230586+0.8414709848j)

  x = tf.constant(1 + 1j)
  tf.math.exp(x) ==> 1.4686939399158851+2.2873552871788423j

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.ExpandDims (TF::ExpandDimsOp)

Inserts a dimension of 1 into a tensor's shape.

Given a tensor input, this operation inserts a dimension of 1 at the dimension index axis of input's shape. The dimension index axis starts at zero; if you specify a negative number for axis it is counted backward from the end.

This operation is useful if you want to add a batch dimension to a single element. For example, if you have a single image of shape [height, width, channels], you can make it a batch of 1 image with expand_dims(image, 0), which will make the shape [1, height, width, channels].

Other examples:

# 't' is a tensor of shape [2]
shape(expand_dims(t, 0)) ==> [1, 2]
shape(expand_dims(t, 1)) ==> [2, 1]
shape(expand_dims(t, -1)) ==> [2, 1]

# 't2' is a tensor of shape [2, 3, 5]
shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]
shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]

This operation requires that:

-1-input.dims() <= dim <= input.dims()

This operation is related to squeeze(), which removes dimensions of size 1.
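
A runnable counterpart of the shape examples above, using the Python tf.expand_dims wrapper; the image tensor is just a placeholder:

    import tensorflow as tf

    image = tf.zeros([28, 28, 3])
    batched = tf.expand_dims(image, 0)   # shape [1, 28, 28, 3]
    last = tf.expand_dims(image, -1)     # shape [28, 28, 3, 1]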

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tdim::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
dim tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.Expm1 (TF::Expm1Op)

Computes exp(x) - 1 element-wise.

i.e. exp(x) - 1 or e^(x) - 1, where x is the input tensor. e denotes Euler's number and is approximately equal to 2.718281.

  x = tf.constant(2.0)
  tf.math.expm1(x) ==> 6.389056

  x = tf.constant([2.0, 8.0])
  tf.math.expm1(x) ==> array([6.389056, 2979.958], dtype=float32)

  x = tf.constant(1 + 1j)
  tf.math.expm1(x) ==> (0.46869393991588515+2.2873552871788423j)

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.ExtractImagePatches (TF::ExtractImagePatchesOp)

Extract patches from images and put them in the "depth" output dimension.
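
For illustration, a small sketch using the Python tf.image.extract_patches wrapper: a 4x4 single-channel image split into non-overlapping 2x2 patches (values chosen arbitrarily):

    import tensorflow as tf

    images = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
    patches = tf.image.extract_patches(images,
                                       sizes=[1, 2, 2, 1],
                                       strides=[1, 2, 2, 1],
                                       rates=[1, 1, 1, 1],
                                       padding='VALID')
    # patches.shape == [1, 2, 2, 4]; each 2x2 patch is flattened into the depth dimension.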

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksizes::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
rates::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
T::mlir::Attributederived attribute

Operands:

Operand Description
images tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
patches tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.FakeParam (TF::FakeParamOp)

This op is used as a placeholder in If branch functions. It doesn't provide a valid output when run, so must either be removed (e.g. replaced with a function input) or guaranteed not to be used (e.g. if mirroring an intermediate output needed for the gradient computation of the other branch).

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
shape::mlir::AttributeTensorFlow shape attribute
dtype::mlir::Attributederived attribute

Results:

Result Description
output tensor of tf.dtype values

tf.FakeQuantWithMinMaxArgs (TF::FakeQuantWithMinMaxArgsOp)

Fake-quantize the 'inputs' tensor of type float to an 'outputs' tensor of the same shape and type.

Quantization is called fake since the output is still in floating point. The API converts inputs into values within the range [min, max] and returns them as output.

Attributes

  • [min; max] define the clamping range for the inputs data.
  • inputs values are quantized into the quantization range ( [0; 2^num_bits - 1] when narrow_range is false and [1; 2^num_bits - 1] when it is true) and then de-quantized and output as floats in [min; max] interval.
  • num_bits is the bitwidth of the quantization; between 2 and 16, inclusive.

Before quantization, min and max values are adjusted with the following logic. It is suggested to have min <= 0 <= max. If 0 is not in the range of values, the behavior can be unexpected:

  • If 0 < min < max: min_adj = 0 and max_adj = max - min.
  • If min < max < 0: min_adj = min - max and max_adj = 0.
  • If min <= 0 <= max: scale = (max - min) / (2^num_bits - 1), min_adj = scale * round(min / scale) and max_adj = max + min_adj - min.

Examples


inp = tf.constant([10.03, -10.23, 3])
out = tf.quantization.fake_quant_with_min_max_args(inp, min=-5, max=5,
                                                   num_bits=16)
print(out)

#  Output:
#  tf.Tensor([ 4.9999237 -5.0000763  3.0000763], shape=(3,), dtype=float32)

Raises:

  • InvalidArgumentError:
    • If num_bits is outside of the range [2, 16].
    • If min >= max.
  • ValueError: If inputs are of any other type than float32.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
min::mlir::FloatAttr32-bit float attribute
max::mlir::FloatAttr32-bit float attribute
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
inputs tensor of 32-bit float values

Results:

Result Description
outputs tensor of 32-bit float values

tf.FakeQuantWithMinMaxArgsGradient (TF::FakeQuantWithMinMaxArgsGradientOp)

Compute gradients for a FakeQuantWithMinMaxArgs operation.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
min::mlir::FloatAttr32-bit float attribute
max::mlir::FloatAttr32-bit float attribute
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
gradients tensor of 32-bit float values
inputs tensor of 32-bit float values

Results:

Result Description
backprops tensor of 32-bit float values

tf.FakeQuantWithMinMaxVars (TF::FakeQuantWithMinMaxVarsOp)

Fake-quantize the 'inputs' tensor of type float via global float scalars

Fake-quantize the inputs tensor of type float via global float scalars min and max to outputs tensor of same shape as inputs.

Attributes

  • [min; max] define the clamping range for the inputs data.
  • inputs values are quantized into the quantization range ( [0; 2^num_bits - 1] when narrow_range is false and [1; 2^num_bits - 1] when it is true) and then de-quantized and output as floats in [min; max] interval.
  • num_bits is the bitwidth of the quantization; between 2 and 16, inclusive.

Before quantization, min and max values are adjusted with the following logic. It is suggested to have min <= 0 <= max. If 0 is not in the range of values, the behavior can be unexpected:

  • If 0 < min < max: min_adj = 0 and max_adj = max - min.
  • If min < max < 0: min_adj = min - max and max_adj = 0.
  • If min <= 0 <= max: scale = (max - min) / (2^num_bits - 1), min_adj = scale * round(min / scale) and max_adj = max + min_adj - min.

This operation has a gradient and thus allows for training min and max values.

constant_input = tf.constant([[1.2, -0.3, 0.7], [2.1, 0.5, -1.0]], dtype=tf.float32)

min_val = -0.5
max_val = 0.8
num_bits = 8
narrow_range = False  # False: use the quantization range [0; 2^num_bits - 1]

quantized_data = tf.quantization.fake_quant_with_min_max_vars(
    inputs=constant_input, min=min_val, max=max_val,
    num_bits=num_bits, narrow_range=narrow_range)

print("Input:\n", constant_input.numpy())
#  Input:
#  [[ 1.2 -0.3  0.7]
#   [ 2.1  0.5 -1. ]]
print("Output:\n", quantized_data.numpy())
#  Output:
#  [[ 0.8003921 -0.3007843  0.6984313]
#   [ 0.8003921  0.4996078 -0.4996078]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
inputs tensor of 32-bit float values
min tensor of 32-bit float values
max tensor of 32-bit float values

Results:

Result Description
outputs tensor of 32-bit float values

tf.FakeQuantWithMinMaxVarsGradient (TF::FakeQuantWithMinMaxVarsGradientOp)

Compute gradients for a FakeQuantWithMinMaxVars operation.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
gradients tensor of 32-bit float values
inputs tensor of 32-bit float values
min tensor of 32-bit float values
max tensor of 32-bit float values

Results:

Result Description
backprops_wrt_input tensor of 32-bit float values
backprop_wrt_min tensor of 32-bit float values
backprop_wrt_max tensor of 32-bit float values

tf.FakeQuantWithMinMaxVarsPerChannel (TF::FakeQuantWithMinMaxVarsPerChannelOp)

Fake-quantize the 'inputs' tensor of type float via per-channel floats

Fake-quantize the inputs tensor of type float per-channel and one of the shapes [d], [b, d] or [b, h, w, d] via per-channel floats min and max of shape [d] to an outputs tensor of the same shape as inputs.

Attributes

  • [min; max] define the clamping range for the inputs data.
  • inputs values are quantized into the quantization range ( [0; 2^num_bits - 1] when narrow_range is false and [1; 2^num_bits - 1] when it is true) and then de-quantized and output as floats in [min; max] interval.
  • num_bits is the bitwidth of the quantization; between 2 and 16, inclusive.

Before quantization, min and max values are adjusted with the following logic. It is suggested to have min <= 0 <= max. If 0 is not in the range of values, the behavior can be unexpected:

  • If 0 < min < max: min_adj = 0 and max_adj = max - min.
  • If min < max < 0: min_adj = min - max and max_adj = 0.
  • If min <= 0 <= max: scale = (max - min) / (2^num_bits - 1), min_adj = scale * round(min / scale) and max_adj = max + min_adj - min.

This operation has a gradient and thus allows for training min and max values.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
inputs tensor of 32-bit float values
min tensor of 32-bit float values
max tensor of 32-bit float values

Results:

Result Description
outputs tensor of 32-bit float values

tf.FakeQuantWithMinMaxVarsPerChannelGradient (TF::FakeQuantWithMinMaxVarsPerChannelGradientOp)

Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
gradients tensor of 32-bit float values
inputs tensor of 32-bit float values
min tensor of 32-bit float values
max tensor of 32-bit float values

Results:

Result Description
backprops_wrt_input tensor of 32-bit float values
backprop_wrt_min tensor of 32-bit float values
backprop_wrt_max tensor of 32-bit float values

tf.FFT (TF::FFTOp)

Fast Fourier transform.

Computes the 1-dimensional discrete Fourier transform over the inner-most dimension of input.
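
For example, with the Python tf.signal.fft wrapper (input values chosen arbitrarily):

    import tensorflow as tf

    x = tf.cast(tf.constant([1.0, 2.0, 3.0, 4.0]), tf.complex64)
    y = tf.signal.fft(x)   # 1-D DFT over the inner-most (here, the only) dimension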

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values

tf.FFT2D (TF::FFT2DOp)

2D fast Fourier transform.

Computes the 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values

tf.FFT3D (TF::FFT3DOp)

3D fast Fourier transform.

Computes the 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values

tf.Fill (TF::FillOp)

Creates a tensor filled with a scalar value.

This operation creates a tensor of shape dims and fills it with value.

For example:

# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9]
                     [9, 9, 9]]

tf.fill differs from tf.constant in a few ways:

  • tf.fill only supports scalar contents, whereas tf.constant supports Tensor values.
  • tf.fill creates an Op in the computation graph that constructs the actual Tensor value at runtime. This is in contrast to tf.constant which embeds the entire Tensor into the graph with a Const node.
  • Because tf.fill evaluates at graph runtime, it supports dynamic shapes based on other runtime Tensors, unlike tf.constant (see the sketch below).
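
A hedged sketch of that dynamic-shape behavior; the images tensor is a hypothetical runtime input used only for illustration:

    import tensorflow as tf

    images = tf.random.normal([8, 32, 32, 3])    # stand-in for a runtime batch
    batch = tf.shape(images)[0]                  # scalar known only at run time
    zeros = tf.fill(tf.stack([batch, 10]), 0.0)  # output shape [batch, 10] at run time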

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
index_type::mlir::Attributederived attribute

Operands:

Operand Description
dims tensor of 32/64-bit signed integer values
value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.FinalizeDataset (TF::FinalizeDatasetOp)

Creates a dataset by applying tf.data.Options to input_dataset.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
has_captured_ref::mlir::BoolAttrbool attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Operands:

Operand Description
input_dataset tensor of variant values

Results:

Result Description
handle tensor of variant values

tf.FinalizeTPUEmbedding (TF::FinalizeTPUEmbeddingOp)

An op that finalizes the TPUEmbedding configuration.

Operands:

Operand Description
common_config tensor of string values
memory_config tensor of string values

tf.FlatMapDataset (TF::FlatMapDatasetOp)

Creates a dataset that applies f to the outputs of input_dataset.

Unlike MapDataset, the f in FlatMapDataset is expected to return a Dataset variant, and FlatMapDataset will flatten successive results into a single Dataset.
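
For example, using the Python tf.data.Dataset.flat_map wrapper (values chosen arbitrarily):

    import tensorflow as tf

    ds = tf.data.Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6]])
    flat = ds.flat_map(lambda x: tf.data.Dataset.from_tensor_slices(x))
    print(list(flat.as_numpy_iterator()))   # [1, 2, 3, 4, 5, 6]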

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute
Targuments::mlir::Attributederived attribute

Operands:

Operand Description
input_dataset tensor of variant values
other_arguments variadic of tensor of tf.dtype values

Results:

Result Description
handle tensor of variant values

tf.Floor (TF::FloorOp)

Returns element-wise largest integer not greater than x.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.FloorDiv (TF::FloorDivOp)

Returns x // y element-wise.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.FloorMod (TF::FloorModOp)

Returns element-wise remainder of division.

This follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + floormod(x, y) = x, regardless of the signs of x and y.
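
A worked example of that identity, with values chosen to exercise both signs:

    import tensorflow as tf

    x = tf.constant([ 7, -7,  7, -7])
    y = tf.constant([ 3,  3, -3, -3])
    tf.math.floormod(x, y)                               # [ 1,  2, -2, -1]
    tf.math.floordiv(x, y) * y + tf.math.floormod(x, y)  # [ 7, -7,  7, -7] == x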

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of integer or floating-point values

tf.FlushSummaryWriter (TF::FlushSummaryWriterOp)

Flushes the writer's unwritten events.

writer: A handle to the summary writer resource.

Operands:

Operand Description
writer tensor of resource values

tf.FusedBatchNorm (TF::FusedBatchNormOp)

Batch normalization.

Note that the size of 4D Tensors are defined by either "NHWC" or "NCHW". The size of 1D Tensors matches the dimension C of the 4D Tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
exponential_avg_factor::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of 32-bit float values
scale tensor of 32-bit float values
offset tensor of 32-bit float values
mean tensor of 32-bit float values
variance tensor of 32-bit float values

Results:

Result Description
y tensor of 32-bit float values
batch_mean tensor of 32-bit float values
batch_variance tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values

tf.FusedBatchNormGrad (TF::FusedBatchNormGradOp)

Gradient for batch normalization.

Note that the size of 4D Tensors are defined by either "NHWC" or "NCHW". The size of 1D Tensors matches the dimension C of the 4D Tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
y_backprop tensor of 32-bit float values
x tensor of 32-bit float values
scale tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values

Results:

Result Description
x_backprop tensor of 32-bit float values
scale_backprop tensor of 32-bit float values
offset_backprop tensor of 32-bit float values
reserve_space_3 tensor of 32-bit float values
reserve_space_4 tensor of 32-bit float values

tf.FusedBatchNormGradV2 (TF::FusedBatchNormGradV2Op)

Gradient for batch normalization.

Note that the size of 4D Tensors are defined by either "NHWC" or "NCHW". The size of 1D Tensors matches the dimension C of the 4D Tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
U::mlir::Attributederived attribute

Operands:

Operand Description
y_backprop tensor of bfloat16 or 16-bit float or 32-bit float values
x tensor of bfloat16 or 16-bit float or 32-bit float values
scale tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values

Results:

Result Description
x_backprop tensor of bfloat16 or 16-bit float or 32-bit float values
scale_backprop tensor of 32-bit float values
offset_backprop tensor of 32-bit float values
reserve_space_3 tensor of 32-bit float values
reserve_space_4 tensor of 32-bit float values

tf.FusedBatchNormGradV3 (TF::FusedBatchNormGradV3Op)

Gradient for batch normalization.

Note that the size of 4D Tensors are defined by either "NHWC" or "NCHW". The size of 1D Tensors matches the dimension C of the 4D Tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttrstring attribute whose value is NHWC, or NCHW, or NDHWC, or NCDHW
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
U::mlir::Attributederived attribute

Operands:

Operand Description
y_backprop tensor of bfloat16 or 16-bit float or 32-bit float values
x tensor of bfloat16 or 16-bit float or 32-bit float values
scale tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values
reserve_space_3 tensor of 32-bit float values

Results:

Result Description
x_backprop tensor of bfloat16 or 16-bit float or 32-bit float values
scale_backprop tensor of 32-bit float values
offset_backprop tensor of 32-bit float values
reserve_space_4 tensor of 32-bit float values
reserve_space_5 tensor of 32-bit float values

tf.FusedBatchNormV2 (TF::FusedBatchNormV2Op)

Batch normalization.

Note that the size of 4D Tensors are defined by either "NHWC" or "NCHW". The size of 1D Tensors matches the dimension C of the 4D Tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_FoldOperandsTransposeInterface, TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
exponential_avg_factor::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
U::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 16-bit float or 32-bit float values
scale tensor of 32-bit float values
offset tensor of 32-bit float values
mean tensor of 32-bit float values
variance tensor of 32-bit float values

Results:

Result Description
y tensor of bfloat16 or 16-bit float or 32-bit float values
batch_mean tensor of 32-bit float values
batch_variance tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values

tf.FusedBatchNormV3 (TF::FusedBatchNormV3Op)

Batch normalization.

Note that the size of 4D Tensors are defined by either "NHWC" or "NCHW". The size of 1D Tensors matches the dimension C of the 4D Tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_FoldOperandsTransposeInterface, TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
exponential_avg_factor::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttrstring attribute whose value is NHWC, or NCHW, or NDHWC, or NCDHW
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
U::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 16-bit float or 32-bit float values
scale tensor of bfloat16 or 32-bit float values
offset tensor of bfloat16 or 32-bit float values
mean tensor of bfloat16 or 32-bit float values
variance tensor of bfloat16 or 32-bit float values

Results:

Result Description
y tensor of bfloat16 or 16-bit float or 32-bit float values
batch_mean tensor of bfloat16 or 32-bit float values
batch_variance tensor of bfloat16 or 32-bit float values
reserve_space_1 tensor of bfloat16 or 32-bit float values
reserve_space_2 tensor of bfloat16 or 32-bit float values
reserve_space_3 tensor of bfloat16 or 32-bit float values

tf.Gather (TF::GatherOp)

Gather slices from params according to indices.

indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape indices.shape + params.shape[1:] where:

    # Scalar indices
    output[:, ..., :] = params[indices, :, ... :]

    # Vector indices
    output[i, :, ..., :] = params[indices[i], :, ... :]

    # Higher rank indices
    output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]

If indices is a permutation and len(indices) == params.shape[0] then this operation will permute params accordingly.

validate_indices: DEPRECATED. If this operation is assigned to CPU, values in indices are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
validate_indices::mlir::BoolAttrbool attribute
Tindices::mlir::Attributederived attribute
Tparams::mlir::Attributederived attribute

Operands:

Operand Description
params tensor of tf.dtype values
indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.GatherNd (TF::GatherNdOp)

Gather slices from params into a Tensor with shape specified by indices.

indices is a K-dimensional integer tensor, best thought of as a (K-1)-dimensional tensor of indices into params, where each element defines a slice of params:

output[\(i_0, ..., i_{K-2}\)] = params[indices[\(i_0, ..., i_{K-2}\)]]

Whereas in tf.gather indices defines slices into the axis dimension of params, in tf.gather_nd, indices defines slices into the first N dimensions of params, where N = indices.shape[-1].

The last dimension of indices can be at most the rank of params:

indices.shape[-1] <= params.rank

The last dimension of indices corresponds to elements (if indices.shape[-1] == params.rank) or slices (if indices.shape[-1] < params.rank) along dimension indices.shape[-1] of params. The output tensor has shape

indices.shape[:-1] + params.shape[indices.shape[-1]:]

If indices contains any out-of-bound indices, depending on bad_indices_policy, the op will either return an error or ignore the out-of-bound indices. bad_indices_policy can be one of the following values:

  1. "" or "DEFAULT": raises on CPU and ignore on GPU. This is because historically on CPU and GPU we handle errors in different ways, and for backward compatibility we keep the default behavior.
  2. "ERROR": raises error; GPU does not support this value.
  3. "IGNORE": ignore error and set the corresponding output to 0; supported on both CPU and GPU.

Some examples below.

Simple indexing into a matrix:

    indices = [[0, 0], [1, 1]]
    params = [['a', 'b'], ['c', 'd']]
    output = ['a', 'd']

Slice indexing into a matrix:

    indices = [[1], [0]]
    params = [['a', 'b'], ['c', 'd']]
    output = [['c', 'd'], ['a', 'b']]

Indexing into a 3-tensor:

    indices = [[1]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = [[['a1', 'b1'], ['c1', 'd1']]]


    indices = [[0, 1], [1, 0]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = [['c0', 'd0'], ['a1', 'b1']]


    indices = [[0, 0, 1], [1, 0, 1]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = ['b0', 'b1']

Batched indexing into a matrix:

    indices = [[[0, 0]], [[0, 1]]]
    params = [['a', 'b'], ['c', 'd']]
    output = [['a'], ['b']]

Batched slice indexing into a matrix:

    indices = [[[1]], [[0]]]
    params = [['a', 'b'], ['c', 'd']]
    output = [[['c', 'd']], [['a', 'b']]]

Batched indexing into a 3-tensor:

    indices = [[[1]], [[0]]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = [[[['a1', 'b1'], ['c1', 'd1']]],
              [[['a0', 'b0'], ['c0', 'd0']]]]

    indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = [[['c0', 'd0'], ['a1', 'b1']],
              [['a0', 'b0'], ['c1', 'd1']]]


    indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = [['b0', 'b1'], ['d0', 'c1']]

See also tf.gather and tf.batch_gather.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
bad_indices_policy::mlir::StringAttrstring attribute
Tindices::mlir::Attributederived attribute
Tparams::mlir::Attributederived attribute

Operands:

Operand Description
params tensor of tf.dtype values
indices tensor of 16-bit integer or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.GatherV2 (TF::GatherV2Op)

Gather slices from params axis axis according to indices.

indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape params.shape[:axis] + indices.shape[batch_dims:] + params.shape[axis + 1:] where:

    # Scalar indices (output is rank(params) - 1).
    output[a_0, ..., a_n, b_0, ..., b_n] =
      params[a_0, ..., a_n, indices, b_0, ..., b_n]

    # Vector indices (output is rank(params)).
    output[a_0, ..., a_n, i, b_0, ..., b_n] =
      params[a_0, ..., a_n, indices[i], b_0, ..., b_n]

    # Higher rank indices (output is rank(params) + rank(indices) - 1).
    output[a_0, ..., a_n, i, ..., j, b_0, ... b_n] =
      params[a_0, ..., a_n, indices[i, ..., j], b_0, ..., b_n]

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value.

Note that on TPU, if any dimension of params is of size 0 then the output will be the expected shape filled with zeros. On CPU and GPU an error will be returned.

See also tf.batch_gather and tf.gather_nd.
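
A runnable counterpart via the Python tf.gather wrapper; the small params matrix below is chosen purely for illustration:

    import tensorflow as tf

    params = tf.constant([[ 0,  1,  2],
                          [10, 11, 12]])
    tf.gather(params, [2, 0], axis=1)            # [[ 2,  0], [12, 10]]
    tf.gather(params, [[1], [0]], batch_dims=1)  # [[ 1], [10]]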

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
batch_dims::mlir::IntegerAttr64-bit signless integer attribute
Taxis::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
Tparams::mlir::Attributederived attribute

Operands:

Operand Description
params tensor of tf.dtype values
indices tensor of 16-bit integer or 32-bit integer or 64-bit integer values
axis tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.GeneratorDataset (TF::GeneratorDatasetOp)

Creates a dataset that invokes a function to generate elements.

Traits: AttrSizedOperandSegments

Interfaces: TF_GeneratorOpSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::GeneratorOp}

Attributes:

AttributeMLIR TypeDescription
init_func::mlir::SymbolRefAttrsymbol reference attribute
next_func::mlir::SymbolRefAttrsymbol reference attribute
finalize_func::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute
Tfinalize_func_args::mlir::Attributederived attribute
Tinit_func_args::mlir::Attributederived attribute
Tnext_func_args::mlir::Attributederived attribute

Operands:

Operand Description
init_func_other_args variadic of tensor of tf.dtype values
next_func_other_args variadic of tensor of tf.dtype values
finalize_func_other_args variadic of tensor of tf.dtype values

Results:

Result Description
handle tensor of variant values

tf.GeneratorDatasetRegion (TF::GeneratorDatasetRegionOp)

Regional version of GeneratorDataset

Creates a dataset that invokes its 'next' region to generate elements. Conceptually, within MLIR, we treat this op as if it fills a buffer with all the results right away, and those results are then passed (through the variant tensor result) to MakeIterator / IteratorGetNext. Note that the actual TF implementation differs: It generates the next element just in time, during IteratorGetNext.

init_extra_args: Additional arguments to pass to 'init'. next_extra_args: Additional arguments to pass to 'next'. (Passed after the normal arguments which are from the return values of 'init'.) finalize_extra_args: Additional arguments to pass to 'finalize'. (Passed after the normal arguments which are from the return values of 'init'.)

Traits: AttrSizedOperandSegments, SingleBlockImplicitTerminator<YieldOp>, SingleBlock

Interfaces: RegionBranchOpInterface, TF_GeneratorOpSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::GeneratorOp}

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute
Tinit_func_args::mlir::Attributederived attribute
Tnext_func_args::mlir::Attributederived attribute
Tfinalize_func_args::mlir::Attributederived attribute

Operands:

Operand Description
init_func_other_args variadic of tensor of tf.dtype values
next_func_other_args variadic of tensor of tf.dtype values
finalize_func_other_args variadic of tensor of tf.dtype values

Results:

Result Description
handle tensor of variant values

tf.GetMinibatchesInCsrWithPhysicalReplica (TF::GetMinibatchesInCsrWithPhysicalReplicaOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sample_count::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_replica::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
max_minibatches_per_sc::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
max_ids_per_chip_per_sample::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_vocab_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
feature_width::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_sc_per_chip::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_name::mlir::StringAttrstring attribute
mini_batch_in_csr::mlir::StringAttrstring attribute

Operands:

Operand Description
program_key tensor of string values
row_ids tensor of 32-bit integer values
col_ids tensor of 32-bit integer values
gains tensor of 32-bit float values
splits tensor of 64-bit integer values
id_counts tensor of 32-bit integer values

Results:

Result Description
row_pointers tensor of 32-bit integer values
sorted_sample_ids tensor of 32-bit integer values
sorted_token_ids tensor of 32-bit integer values
sorted_gains tensor of 32-bit float values
row_pointers_unpadded_size tensor of 32-bit integer values
ids_unpadded_size tensor of 32-bit integer values
num_minibatches_per_physical_sparse_core tensor of 32-bit integer values

tf.GetMinibatchSplitsWithPhysicalReplica (TF::GetMinibatchSplitsWithPhysicalReplicaOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sample_count::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_replica::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_vocab_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
feature_width::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_sc_per_chip::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_name::mlir::StringAttrstring attribute
mini_batch_splits::mlir::StringAttrstring attribute

Operands:

Operand Description
program_key tensor of string values
row_ids tensor of 32-bit integer values
col_ids tensor of 32-bit integer values
gains tensor of 32-bit float values

Results:

Result Description
sorted_row_ids tensor of 32-bit integer values
sorted_col_ids tensor of 32-bit integer values
sorted_gains tensor of 32-bit float values
splits tensor of 64-bit integer values
id_counts tensor of 32-bit integer values
max_ids tensor of 32-bit integer values
max_uniques tensor of 32-bit integer values

tf.GetStatsFromListOfSparseCoreCooTensors (TF::GetStatsFromListOfSparseCoreCooTensorsOp)

An op which computes the maxids/uniques for a given table.

Traits: AlwaysSpeculatableImplTrait, SameVariadicOperandSize

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sample_count_list::mlir::ArrayAttr64-bit integer array attribute
col_offset_list::mlir::ArrayAttr64-bit integer array attribute
num_replica::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_vocab_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
feature_width::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_sc_per_chip::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_name::mlir::StringAttrstring attribute
N::mlir::Attributederived attribute

Operands:

Operand Description
row_ids_list variadic of tensor of 32-bit integer values
col_ids_list variadic of tensor of 32-bit integer values
gains_list variadic of tensor of 32-bit float values

Results:

Result Description
max_ids_per_sparse_core tensor of 32-bit integer values
max_unique_ids_per_sparse_core tensor of 32-bit integer values

tf.GlobalIterId (TF::GlobalIterIdOp)

Op that gets the global step id.

This op gets the step id for each loop iteration.

Interfaces: GetResourceInstanceInterface, TF_GlobalIterIdEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::GlobalIterId}

Results:

Result Description
iter_id tensor of 64-bit integer values

tf.Greater (TF::GreaterOp)

Returns the truth value of (x > y) element-wise.

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of bool values

tf.GreaterEqual (TF::GreaterEqualOp)

Returns the truth value of (x >= y) element-wise.

Example:

x = tf.constant([5, 4, 6, 7])
y = tf.constant([5, 2, 5, 10])
tf.math.greater_equal(x, y) ==> [True, True, True, False]

x = tf.constant([5, 4, 6, 7])
y = tf.constant([5])
tf.math.greater_equal(x, y) ==> [True, False, True, True]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of bool values

tf.HashTable (TF::HashTableOp)

Creates a non-initialized hash table.

This op creates a hash table, specifying the type of its keys and values. Before using the table you will have to initialize it. After initialization the table will be immutable.

Attributes:

AttributeMLIR TypeDescription
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
use_node_name_sharing::mlir::BoolAttrbool attribute
key_dtype::mlir::TypeAttrany type attribute
value_dtype::mlir::TypeAttrany type attribute

Results:

Result Description
table_handle tensor of string values

tf.HashTableV2 (TF::HashTableV2Op)

Creates a non-initialized hash table.

This op creates a hash table, specifying the type of its keys and values. Before using the table you will have to initialize it. After initialization the table will be immutable.
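For reference, a minimal sketch of the usual create / initialize / lookup flow through the Python tf.lookup wrappers, which manage a hash-table resource like this one (the keys and values below are illustrative):

import tensorflow as tf

keys = tf.constant(["apple", "banana"])
values = tf.constant([1, 2], dtype=tf.int64)
# Initialize the table from in-memory tensors; it is immutable afterwards.
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1)
print(table.lookup(tf.constant(["banana", "cherry"])))  # ==> [2, -1]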

Attributes:

AttributeMLIR TypeDescription
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
use_node_name_sharing::mlir::BoolAttrbool attribute
key_dtype::mlir::TypeAttrany type attribute
value_dtype::mlir::TypeAttrany type attribute

Results:

Result Description
table_handle tensor of resource values

tf.HSVToRGB (TF::HSVToRGBOp)

Convert one or more images from HSV to RGB.

Outputs a tensor of the same shape as the images tensor, containing the RGB value of the pixels. The output is only well defined if the value in images are in [0,1].

See rgb_to_hsv for a description of the HSV encoding.
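A small illustrative example through the public tf.image wrapper; each pixel is an (h, s, v) triple in [0, 1], and the values below are made up:

import tensorflow as tf

# A single 1x2 image in HSV: pure red, then an unsaturated mid-gray.
hsv = tf.constant([[[0.0, 1.0, 1.0],
                    [0.0, 0.0, 0.5]]])
rgb = tf.image.hsv_to_rgb(hsv)
print(rgb)  # ==> [[[1.0, 0.0, 0.0], [0.5, 0.5, 0.5]]]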

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
images tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Identity (TF::IdentityOp)

Return a tensor with the same shape and contents as the input tensor or value.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold, TF_OperandsSameAsResultsTypeOrRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.IdentityN (TF::IdentityNOp)

Returns a list of tensors with the same shapes and contents as the input tensors.

This op can be used to override the gradient for complicated functions. For example, suppose y = f(x) and we wish to apply a custom function g for backprop such that dx = g(dy). In Python,

with tf.get_default_graph().gradient_override_map(
    {'IdentityN': 'OverrideGradientWithG'}):
  y, _ = identity_n([f(x), x])

@tf.RegisterGradient('OverrideGradientWithG')
def ApplyG(op, dy, _):
  return [None, g(dy)]  # Do not backprop to f(x).

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.If (TF::IfOp)

output = cond ? then_branch(input) : else_branch(input)

output = cond ? then_branch(input) : else_branch(input)

cond: A Tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, being empty means False and being non-empty means True. input: A list of input tensors. then_branch: A function that takes 'inputs' and returns a list of tensors, whose types are the same as what else_branch returns. else_branch: A function that takes 'inputs' and returns a list of tensors, whose types are the same as what then_branch returns.
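At the Python level this functional form is usually produced by tf.cond; a minimal sketch of the equivalent behavior (the branch functions here are illustrative):

import tensorflow as tf

@tf.function
def branch_on(cond, x):
  # Inside a tf.function, tf.cond typically lowers to an If/StatelessIf node.
  return tf.cond(cond, lambda: x + 1.0, lambda: x * 2.0)

print(branch_on(tf.constant(True), tf.constant(3.0)))   # ==> 4.0
print(branch_on(tf.constant(False), tf.constant(3.0)))  # ==> 6.0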

Interfaces: SymbolUserOpInterface

Attributes:

AttributeMLIR TypeDescription
then_branch::mlir::FlatSymbolRefAttrflat symbol reference attribute
else_branch::mlir::FlatSymbolRefAttrflat symbol reference attribute
is_stateless::mlir::BoolAttrbool attribute
Tcond::mlir::Attributederived attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute
output_shapes::mlir::Attributederived attribute

Operands:

Operand Description
cond tensor of tf.dtype values
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.IFFT (TF::IFFTOp)

Inverse fast Fourier transform.

Computes the inverse 1-dimensional discrete Fourier transform over the inner-most dimension of input.
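An illustrative round trip through the public wrappers; tf.signal.fft / tf.signal.ifft operate on complex tensors over the innermost dimension, and the input values are arbitrary:

import tensorflow as tf

x = tf.complex(tf.constant([1.0, 2.0, 3.0, 4.0]), tf.zeros([4]))  # complex64
spectrum = tf.signal.fft(x)
recovered = tf.signal.ifft(spectrum)
print(recovered)  # ==> approximately [1+0j, 2+0j, 3+0j, 4+0j]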

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values

tf.IFFT2D (TF::IFFT2DOp)

Inverse 2D fast Fourier transform.

Computes the inverse 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values

tf.IFFT3D (TF::IFFT3DOp)

Inverse 3D fast Fourier transform.

Computes the inverse 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values

tf.IfRegion (TF::IfRegionOp)

output = cond ? then_branch output : else_branch output

"output = cond ? then_branch output : else_branch output"

cond: A Tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, being empty means False and being non-empty means True. then_branch: A region that computes the outputs of the op if cond = true. It returns a list of tensors using tf.yield (as the terminator). The types of these returned tensors are the same as those of the else_branch. else_branch: A region that computes the outputs of the op if cond = false. It returns a list of tensors using tf.yield (as the terminator). The types of these returned tensors are the same as those of the then_branch.

Traits: NoRegionArguments, SingleBlockImplicitTerminator<YieldOp>, SingleBlock

Interfaces: RegionBranchOpInterface

Attributes:

AttributeMLIR TypeDescription
is_stateless::mlir::BoolAttrbool attribute
_then_func_name::mlir::StringAttrstring attribute
_else_func_name::mlir::StringAttrstring attribute

Operands:

Operand Description
cond 0D tensor of 1-bit signless integer values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.Igamma (TF::IgammaOp)

Compute the lower regularized incomplete Gamma function P(a, x).

The lower regularized incomplete Gamma function is defined as:

\(P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\)

where

\(gamma(a, x) = \int_{0}^{x} t^{a-1} exp(-t) dt\)

is the lower incomplete Gamma function.

Note, above Q(a, x) (Igammac) is the upper regularized incomplete Gamma function.
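A quick numeric check via the public wrappers (the values are illustrative); P(a, x) and Q(a, x) should sum to 1:

import tensorflow as tf

a = tf.constant([1.0, 2.0])
x = tf.constant([1.0, 3.0])
p = tf.math.igamma(a, x)   # lower regularized incomplete Gamma, P(a, x)
q = tf.math.igammac(a, x)  # upper regularized incomplete Gamma, Q(a, x)
print(p + q)  # ==> [1.0, 1.0] up to floating-point error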

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of floating-point values
x tensor of floating-point values

Results:

Result Description
z tensor of floating-point values

tf.Igammac (TF::IgammacOp)

Compute the upper regularized incomplete Gamma function Q(a, x).

The upper regularized incomplete Gamma function is defined as:

\(Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\)

where

\(Gamma(a, x) = \int_{x}^{\infty} t^{a-1} exp(-t) dt\)

is the upper incomplete Gamma function.

Note, above P(a, x) (Igamma) is the lower regularized incomplete Gamma function.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of floating-point values
x tensor of floating-point values

Results:

Result Description
z tensor of floating-point values

tf.IgammaGradA (TF::IgammaGradAOp)

Computes the gradient of igamma(a, x) wrt a.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of 32/64-bit float values
x tensor of 32/64-bit float values

Results:

Result Description
z tensor of 32/64-bit float values

tf.Imag (TF::ImagOp)

Returns the imaginary part of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of type float that is the imaginary part of each element in input. All elements in input must be complex numbers of the form \(a + bj\), where a is the real part and b is the imaginary part returned by this operation.

For example:

# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.imag(input) ==> [4.75, 5.75]

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 32/64-bit float values

tf.ImportEvent (TF::ImportEventOp)

Outputs a tf.Event protocol buffer.

When CreateSummaryDbWriter is being used, this op can be useful for importing data from event logs.

writer: A handle to a summary writer. event: A string containing a binary-encoded tf.Event proto.

Operands:

Operand Description
writer tensor of resource values
event tensor of string values

tf.InfeedDequeue (TF::InfeedDequeueOp)

A placeholder op for a value that will be fed into the computation.

Attributes:

AttributeMLIR TypeDescription
shape::mlir::AttributeTensorFlow shape attribute
dtype::mlir::Attributederived attribute

Results:

Result Description
output tensor of tf.dtype values

tf.InfeedDequeueTuple (TF::InfeedDequeueTupleOp)

Fetches multiple values from infeed as an XLA tuple.

Attributes:

AttributeMLIR TypeDescription
_XlaSharding::mlir::StringAttrstring attribute
layouts::mlir::ArrayAttrarray attribute
shapes::mlir::Attributederived attribute
dtypes::mlir::Attributederived attribute

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf.InfeedEnqueueTuple (TF::InfeedEnqueueTupleOp)

Feeds multiple Tensor values into the computation as an XLA tuple.

Attributes:

AttributeMLIR TypeDescription
dtypes::mlir::ArrayAttrtype array attribute with at least 1 elements
shapes::mlir::ArrayAttrtensorflow shape attribute array
layouts::mlir::ArrayAttr64-bit integer array attribute
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

tf.InitializeTable (TF::InitializeTableOp)

Table initializer that takes two tensors for keys and values respectively.

Attributes:

AttributeMLIR TypeDescription
Tkey::mlir::Attributederived attribute
Tval::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of string values
keys tensor of tf.dtype values
values tensor of tf.dtype values

tf.InitializeTableFromDataset (TF::InitializeTableFromDatasetOp)

Operands:

Operand Description
table_handle tensor of resource values
dataset tensor of variant values

tf.InitializeTableFromTextFile (TF::InitializeTableFromTextFileOp)

Initializes a table from a text file.

It inserts one key-value pair into the table for each line of the file. The key and value are extracted from the whole line content, from elements of the line split on delimiter, or from the line number (starting from zero). Where to extract the key and value from a line is specified by key_index and value_index.

  • A value of -1 means use the line number (starting from zero), expects int64.
  • A value of -2 means use the whole line content, expects string.
  • A value >= 0 means use the index (starting at zero) of the split line based on delimiter.

Attributes:

AttributeMLIR TypeDescription
key_index::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -2
value_index::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -2
vocab_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -1
delimiter::mlir::StringAttrstring attribute
offset::mlir::IntegerAttr64-bit signless integer attribute

Operands:

Operand Description
table_handle tensor of string values
filename tensor of string values

tf.InitializeTableFromTextFileV2 (TF::InitializeTableFromTextFileV2Op)

Initializes a table from a text file.

It inserts one key-value pair into the table for each line of the file. The key and value are extracted from the whole line content, from elements of the line split on delimiter, or from the line number (starting from zero). Where to extract the key and value from a line is specified by key_index and value_index; a minimal usage sketch via the Python lookup wrappers follows the list below.

  • A value of -1 means use the line number (starting from zero), expects int64.
  • A value of -2 means use the whole line content, expects string.
  • A value >= 0 means use the index (starting at zero) of the split line based on delimiter.
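A minimal sketch using the Python tf.lookup wrappers, which create and initialize a table from a text file; the file name and its contents here are hypothetical:

import tensorflow as tf

# Hypothetical vocabulary file "vocab.txt" with one token per line; map each
# whole line (key_index = -2) to its line number (value_index = -1).
init = tf.lookup.TextFileInitializer(
    "vocab.txt",
    key_dtype=tf.string, key_index=tf.lookup.TextFileIndex.WHOLE_LINE,
    value_dtype=tf.int64, value_index=tf.lookup.TextFileIndex.LINE_NUMBER)
table = tf.lookup.StaticHashTable(init, default_value=-1)
print(table.lookup(tf.constant(["some", "tokens"])))  # -1 for out-of-vocab tokens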

Attributes:

AttributeMLIR TypeDescription
key_index::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -2
value_index::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -2
vocab_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -1
delimiter::mlir::StringAttrstring attribute
offset::mlir::IntegerAttr64-bit signless integer attribute

Operands:

Operand Description
table_handle tensor of resource values
filename tensor of string values

tf.InitializeTableV2 (TF::InitializeTableV2Op)

Table initializer that takes two tensors for keys and values respectively.

Attributes:

AttributeMLIR TypeDescription
Tkey::mlir::Attributederived attribute
Tval::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values
keys tensor of tf.dtype values
values tensor of tf.dtype values

tf.InplaceAdd (TF::InplaceAddOp)

Adds v into specified rows of x.

Computes y = x; y[i, :] += v; return y.
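A functional sketch of the same computation using a scatter-add; this is an equivalent formulation for illustration, not how the op is implemented:

import tensorflow as tf

x = tf.zeros([4, 3])
i = tf.constant([0, 2])   # rows to update
v = tf.ones([2, 3])       # values added to those rows
# y = x; y[i, :] += v
y = tf.tensor_scatter_nd_add(x, tf.expand_dims(i, axis=1), v)
print(y)  # rows 0 and 2 become ones; the other rows stay zero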

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
i tensor of 32-bit integer values
v tensor of tf.dtype values

Results:

Result Description
y tensor of tf.dtype values

tf.InplaceUpdate (TF::InplaceUpdateOp)

Updates specified rows 'i' with values 'v'.

Computes x[i, :] = v; return x.

Originally this function is mutative; however, for compilation we make this operation create/operate on a copy of x.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
i tensor of 32-bit integer values
v tensor of tf.dtype values

Results:

Result Description
y tensor of tf.dtype values

tf.InTopKV2 (TF::InTopKV2Op)

Says whether the targets are in the top K predictions.

This outputs a batch_size bool array, an entry out[i] is true if the prediction for the target class is among the top k predictions among all predictions for example i. Note that the behavior of InTopK differs from the TopK op in its handling of ties; if multiple classes have the same prediction value and straddle the top-k boundary, all of those classes are considered to be in the top k.

More formally, let

\(predictions_i\) be the predictions for all classes for example i, \(targets_i\) be the target class for example i, \(out_i\) be the output for example i,

\[out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)\]
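A short example via the public wrapper (the scores are made up); note how the tie in the second row keeps the target inside the top 2:

import tensorflow as tf

predictions = tf.constant([[0.1, 0.8, 0.1],    # example 0: class 1 scores highest
                           [0.3, 0.3, 0.4]])   # example 1: classes 0 and 1 tie
targets = tf.constant([1, 0])
print(tf.math.in_top_k(targets, predictions, k=1))  # ==> [True, False]
print(tf.math.in_top_k(targets, predictions, k=2))  # ==> [True, True]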

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
predictions tensor of 32-bit float values
targets tensor of 32/64-bit signed integer values
k tensor of 32/64-bit signed integer values

Results:

Result Description
precision tensor of bool values

tf.Inv (TF::InvOp)

Computes the reciprocal of x element-wise.

I.e., \(y = 1 / x\).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

Results:

Result Description
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

tf.Invert (TF::InvertOp)

Invert (flip) each bit of supported types; for example, type uint8 value 01010101 becomes 10101010.

Flip each bit of supported types. For example, type int8 (decimal 2) binary 00000010 becomes (decimal -3) binary 11111101. This operation is performed on each element of the tensor argument x.

Example:

import tensorflow as tf
from tensorflow.python.ops import bitwise_ops

# flip 2 (00000010) to -3 (11111101)
tf.assert_equal(-3, bitwise_ops.invert(2))

dtype_list = [dtypes.int8, dtypes.int16, dtypes.int32, dtypes.int64,
              dtypes.uint8, dtypes.uint16, dtypes.uint32, dtypes.uint64]

inputs = [0, 5, 3, 14]
for dtype in dtype_list:
  # Because of issues with negative numbers, let's test this indirectly.
  # 1. invert(a) and a = 0
  # 2. invert(a) or a = invert(0)
  input_tensor = tf.constant([0, 5, 3, 14], dtype=dtype)
  not_a_and_a, not_a_or_a, not_0 = [bitwise_ops.bitwise_and(
                                      input_tensor, bitwise_ops.invert(input_tensor)),
                                    bitwise_ops.bitwise_or(
                                      input_tensor, bitwise_ops.invert(input_tensor)),
                                    bitwise_ops.invert(
                                      tf.constant(0, dtype=dtype))]

  expected = tf.constant([0, 0, 0, 0], dtype=tf.float32)
  tf.assert_equal(tf.cast(not_a_and_a, tf.float32), expected)

  expected = tf.cast([not_0] * 4, tf.float32)
  tf.assert_equal(tf.cast(not_a_or_a, tf.float32), expected)

  # For unsigned dtypes let's also check the result directly.
  if dtype.is_unsigned:
    inverted = bitwise_ops.invert(input_tensor)
    expected = tf.constant([dtype.max - x for x in inputs], dtype=tf.float32)
    tf.assert_equal(tf.cast(inverted, tf.float32), tf.cast(expected, tf.float32))

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Involution

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer values

Results:

Result Description
y tensor of integer values

tf.InvertPermutation (TF::InvertPermutationOp)

Computes the inverse permutation of a tensor.

This operation computes the inverse of an index permutation. It takes a 1-D integer tensor x, which represents the indices of a zero-based array, and swaps each value with its index position. In other words, for an output tensor y and an input tensor x, this operation computes the following:

y[x[i]] = i for i in [0, 1, ..., len(x) - 1]

The values must include 0. There can be no duplicate values or negative values.

For example:

# tensor `x` is [3, 4, 0, 2, 1]
invert_permutation(x) ==> [2, 4, 3, 0, 1]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of 32/64-bit signed integer values

Results:

Result Description
y tensor of 32/64-bit signed integer values

tf.IRFFT (TF::IRFFTOp)

Inverse real-valued fast Fourier transform.

Computes the inverse 1-dimensional discrete Fourier transform of a real-valued signal over the inner-most dimension of input.

The inner-most dimension of input is assumed to be the result of RFFT: the fft_length / 2 + 1 unique components of the DFT of a real-valued signal. If fft_length is not provided, it is computed from the size of the inner-most dimension of input (fft_length = 2 * (inner - 1)). If the FFT length used to compute input is odd, it should be provided since it cannot be inferred properly.

Along the axis IRFFT is computed on, if fft_length / 2 + 1 is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.
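An illustrative round trip through the public wrappers; because the example signal has odd length, fft_length is passed explicitly to tf.signal.irfft, as discussed above:

import tensorflow as tf

signal = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0])  # length 5 (odd)
spectrum = tf.signal.rfft(signal)                # 5 // 2 + 1 = 3 complex bins
# Without fft_length, irfft would assume 2 * (3 - 1) = 4 samples.
recovered = tf.signal.irfft(spectrum, fft_length=[5])
print(recovered)  # ==> approximately [1.0, 2.0, 3.0, 4.0, 5.0]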

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute
Treal::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values
fft_length tensor of 32-bit integer values

Results:

Result Description
output tensor of 32/64-bit float values

tf.IRFFT2D (TF::IRFFT2DOp)

Inverse 2D real-valued fast Fourier transform.

Computes the inverse 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of input.

The inner-most 2 dimensions of input are assumed to be the result of RFFT2D: The inner-most dimension contains the fft_length / 2 + 1 unique components of the DFT of a real-valued signal. If fft_length is not provided, it is computed from the size of the inner-most 2 dimensions of input. If the FFT length used to compute input is odd, it should be provided since it cannot be inferred properly.

Along each axis IRFFT2D is computed on, if fft_length (or fft_length / 2 + 1 for the inner-most dimension) is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute
Treal::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values
fft_length tensor of 32-bit integer values

Results:

Result Description
output tensor of 32/64-bit float values

tf.IRFFT3D (TF::IRFFT3DOp)

Inverse 3D real-valued fast Fourier transform.

Computes the inverse 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of input.

The inner-most 3 dimensions of input are assumed to be the result of RFFT3D: The inner-most dimension contains the fft_length / 2 + 1 unique components of the DFT of a real-valued signal. If fft_length is not provided, it is computed from the size of the inner-most 3 dimensions of input. If the FFT length used to compute input is odd, it should be provided since it cannot be inferred properly.

Along each axis IRFFT3D is computed on, if fft_length (or fft_length / 2 + 1 for the inner-most dimension) is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute
Treal::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values
fft_length tensor of 32-bit integer values

Results:

Result Description
output tensor of 32/64-bit float values

tf.IsFinite (TF::IsFiniteOp)

Returns which elements of x are finite.

NumPy compatibility: equivalent to np.isfinite.

Example:

x = tf.constant([5.0, 4.8, 6.8, np.inf, np.nan])
tf.math.is_finite(x) ==> [True, True, True, False, False]

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of bool values

tf.IsInf (TF::IsInfOp)

Returns which elements of x are Inf.

NumPy compatibility: equivalent to np.isinf.

Example:

x = tf.constant([5.0, np.inf, 6.8, np.inf])
tf.math.is_inf(x) ==> [False, True, False, True]

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of bool values

tf.IsNan (TF::IsNanOp)

Returns which elements of x are NaN.

NumPy compatibility: equivalent to np.isnan.

Example:

x = tf.constant([5.0, np.nan, 6.8, np.nan, np.inf])
tf.math.is_nan(x) ==> [False, True, False, True, False]

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of bool values

tf.Iterator (TF::IteratorOp)

A container for an iterator resource.

Attributes:

AttributeMLIR TypeDescription
shared_name::mlir::StringAttrstring attribute
container::mlir::StringAttrstring attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.IteratorFromStringHandle (TF::IteratorFromStringHandleOp)

Converts the given string representing a handle to an iterator to a resource.

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute
output_shapes::mlir::ArrayAttrtensorflow shape attribute array

Operands:

Operand Description
string_handle tensor of string values

Results:

Result Description
resource_handle tensor of resource values

tf.IteratorFromStringHandleV2 (TF::IteratorFromStringHandleV2Op)

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute
output_shapes::mlir::ArrayAttrtensorflow shape attribute array

Operands:

Operand Description
string_handle tensor of string values

Results:

Result Description
resource_handle tensor of resource values

tf.IteratorGetNext (TF::IteratorGetNextOp)

Gets the next output from the given iterator.
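In Python, iterating a tf.data dataset produces this iterator-resource / get-next pairing; a minimal sketch (the correspondence noted in the comments is approximate):

import tensorflow as tf

dataset = tf.data.Dataset.range(3)
iterator = iter(dataset)   # creates the underlying iterator resource
print(next(iterator))      # each step roughly corresponds to an IteratorGetNext  ==> 0
print(next(iterator))      # ==> 1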

Attributes:

AttributeMLIR TypeDescription
output_shapes::mlir::Attributederived attribute
output_types::mlir::Attributederived attribute

Operands:

Operand Description
iterator tensor of resource values

Results:

Result Description
components variadic of tensor of tf.dtype values

tf.IteratorGetNextAsOptional (TF::IteratorGetNextAsOptionalOp)

Gets the next output from the given iterator as an Optional variant.

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Operands:

Operand Description
iterator tensor of resource values

Results:

Result Description
optional tensor of variant values

tf.IteratorGetNextSync (TF::IteratorGetNextSyncOp)

Gets the next output from the given iterator.

This operation is a synchronous version of IteratorGetNext. It should only be used in situations where the iterator does not block the calling thread, or where the calling thread is not a member of the thread pool used to execute parallel operations (e.g. in eager mode).

Attributes:

AttributeMLIR TypeDescription
output_shapes::mlir::Attributederived attribute
output_types::mlir::Attributederived attribute

Operands:

Operand Description
iterator tensor of resource values

Results:

Result Description
components variadic of tensor of tf.dtype values

tf.IteratorToStringHandle (TF::IteratorToStringHandleOp)

Converts the given resource_handle representing an iterator to a string.

Operands:

Operand Description
resource_handle tensor of resource values

Results:

Result Description
string_handle tensor of string values

tf.IteratorV2 (TF::IteratorV2Op)

Attributes:

AttributeMLIR TypeDescription
shared_name::mlir::StringAttrstring attribute
container::mlir::StringAttrstring attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.KthOrderStatistic (TF::KthOrderStatisticOp)

Computes the Kth order statistic of a data set.

The current implementation uses a binary search requiring exactly 32 passes over the input data. The running time is linear with respect to input size. The median-of-medians algorithm is probably faster, but is difficult to implement efficiently in XLA. The implementation imposes a total ordering on floats. The ordering is consistent with the usual partial order. Positive NaNs are greater than positive infinity. Negative NaNs are less than negative infinity. NaNs with distinct payloads are treated as distinct. Subnormal numbers are preserved (not flushed to zero). Positive infinity is greater than all numbers. Negative infinity is less than all numbers. Positive zero is greater than negative zero. There are fewer than k values greater than the kth order statistic. There are at least k values greater than or equal to the Kth order statistic. The semantics are not the same as top_k_unique.
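Conceptually this selects the kth-largest element under the ordering above; a rough NumPy sketch of that idea (plain floats only; the NaN and signed-zero total-ordering rules are not modeled here, and the 1-indexed k below is an assumption for illustration):

import numpy as np

def kth_order_statistic(values, k):
  # k-th largest element of a 1-D array; ignores the float total-ordering
  # subtleties described above.
  return np.sort(values)[::-1][k - 1]

x = np.array([0.3, 1.5, -2.0, 1.5], dtype=np.float32)
print(kth_order_statistic(x, 1))  # ==> 1.5 (the maximum)
print(kth_order_statistic(x, 3))  # ==> 0.3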

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
k::mlir::IntegerAttr64-bit signless integer attribute

Operands:

Operand Description
input tensor of 32-bit float values

Results:

Result Description
output tensor of 32-bit float values

tf.L2Loss (TF::L2LossOp)

L2 Loss.

Computes half the L2 norm of a tensor without the sqrt:

output = sum(t ** 2) / 2
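A quick numeric check via the public wrapper:

import tensorflow as tf

t = tf.constant([1.0, 2.0, 3.0])
# sum(t ** 2) / 2 = (1 + 4 + 9) / 2 = 7.0
print(tf.nn.l2_loss(t))  # ==> 7.0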

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.LeakyRelu (TF::LeakyReluOp)

Computes rectified linear: max(features, features * alpha).
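A short example through the tf.nn wrapper, passing alpha explicitly:

import tensorflow as tf

x = tf.constant([-2.0, -1.0, 0.0, 3.0])
# max(x, x * alpha) with alpha = 0.1
print(tf.nn.leaky_relu(x, alpha=0.1))  # ==> [-0.2, -0.1, 0.0, 3.0]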

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
alpha::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
features tensor of floating-point values

Results:

Result Description
activations tensor of floating-point values

tf.LeakyReluGrad (TF::LeakyReluGradOp)

Computes rectified linear gradients for a LeakyRelu operation.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
alpha::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
gradients tensor of floating-point values
features tensor of floating-point values

Results:

Result Description
backprops tensor of floating-point values

tf.LeftShift (TF::LeftShiftOp)

Elementwise computes the bitwise left-shift of x and y.

If y is negative, or greater than or equal to the width of x in bits, the result is implementation defined.

Example:

import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
import numpy as np
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]

for dtype in dtype_list:
  lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)

  left_shift_result = bitwise_ops.left_shift(lhs, rhs)

  print(left_shift_result)

# This will print:
# tf.Tensor([ -32   -5 -128    0], shape=(4,), dtype=int8)
# tf.Tensor([   -32     -5   -384 -28672], shape=(4,), dtype=int16)
# tf.Tensor([   -32     -5   -384 -28672], shape=(4,), dtype=int32)
# tf.Tensor([   -32     -5   -384 -28672], shape=(4,), dtype=int64)

lhs = np.array([-2, 64, 101, 32], dtype=np.int8)
rhs = np.array([-1, -5, -3, -14], dtype=np.int8)
bitwise_ops.left_shift(lhs, rhs)
# <tf.Tensor: shape=(4,), dtype=int8, numpy=array([ -2,  64, 101,  32], dtype=int8)>

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer values
y tensor of integer values

Results:

Result Description
z tensor of integer values

tf.LegacyCall (TF::LegacyCallOp)

Returns f(inputs), where f is a function.

The LegacyCall operation represents a direct call to a function that is within the same symbol scope as the call and is mapped to a GraphDef node with the function name as the op name. Unlike a PartitionedCall which represents asynchronously executing a function across multiple devices, a LegacyCall ignores specification for ops in the attached function and instead executes it on the device assigned to this op.

Traits: AlwaysSpeculatableImplTrait

Interfaces: CallOpInterface, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), SymbolUserOpInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::FlatSymbolRefAttrflat symbol reference attribute
_disable_call_shape_inference::mlir::BoolAttrbool attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.Less (TF::LessOp)

Returns the truth value of (x < y) element-wise.

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less(x, y) ==> [False, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 7])
tf.math.less(x, y) ==> [False, True, True]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of bool values

tf.LessEqual (TF::LessEqualOp)

Returns the truth value of (x <= y) element-wise.

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less_equal(x, y) ==> [True, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 6])
tf.math.less_equal(x, y) ==> [True, True, True]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of bool values

tf.Lgamma (TF::LgammaOp)

Computes the log of the absolute value of Gamma(x) element-wise.

For positive numbers, this function computes log((input - 1)!) for every element in the tensor. lgamma(5) = log((5-1)!) = log(4!) = log(24) = 3.1780539

Example:

x = tf.constant([0, 0.5, 1, 4.5, -4, -5.6])
tf.math.lgamma(x) ==> [inf, 0.5723649, 0., 2.4537368, inf, -4.6477685]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.LinSpace (TF::LinSpaceOp)

Generates values in an interval.

A sequence of num evenly-spaced values is generated beginning at start. If num > 1, the values in the sequence increase by (stop - start) / (num - 1), so that the last one is exactly stop.

For example:

tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0  11.0  12.0]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
start tensor of floating-point values
stop tensor of floating-point values
num tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.ListDiff (TF::ListDiffOp)

Computes the difference between two lists of numbers or strings.

Given a list x and a list y, this operation returns a list out that represents all values that are in x but not in y. The returned list out is sorted in the same order that the numbers appear in x (duplicates are preserved). This operation also returns a list idx that represents the position of each out element in x. In other words:

out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]

For example, given this input:

x = [1, 2, 3, 4, 5, 6]
y = [1, 3, 5]

This operation would return:

out ==> [2, 4, 6]
idx ==> [1, 3, 5]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
out_idx::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
y tensor of tf.dtype values

Results:

Result Description
out tensor of tf.dtype values
idx tensor of 32/64-bit signed integer values

tf.LoadTPUEmbeddingAdadeltaParameters (TF::LoadTPUEmbeddingAdadeltaParametersOp)

Load Adadelta embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
updates tensor of 32-bit float values

tf.LoadTPUEmbeddingAdadeltaParametersGradAccumDebug (TF::LoadTPUEmbeddingAdadeltaParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
updates tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingAdagradParameters (TF::LoadTPUEmbeddingAdagradParametersOp)

Load Adagrad embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingAdagradParametersGradAccumDebug (TF::LoadTPUEmbeddingAdagradParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingADAMParameters (TF::LoadTPUEmbeddingADAMParametersOp)

Load ADAM embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
momenta tensor of 32-bit float values
velocities tensor of 32-bit float values

tf.LoadTPUEmbeddingADAMParametersGradAccumDebug (TF::LoadTPUEmbeddingADAMParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
momenta tensor of 32-bit float values
velocities tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingCenteredRMSPropParameters (TF::LoadTPUEmbeddingCenteredRMSPropParametersOp)

Load centered RMSProp embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
ms tensor of 32-bit float values
mom tensor of 32-bit float values
mg tensor of 32-bit float values

tf.LoadTPUEmbeddingFTRLParameters (TF::LoadTPUEmbeddingFTRLParametersOp)

Load FTRL embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
linears tensor of 32-bit float values

tf.LoadTPUEmbeddingFTRLParametersGradAccumDebug (TF::LoadTPUEmbeddingFTRLParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
linears tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingMDLAdagradLightParameters (TF::LoadTPUEmbeddingMDLAdagradLightParametersOp)

Load MDL Adagrad Light embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
weights tensor of 32-bit float values
benefits tensor of 32-bit float values

tf.LoadTPUEmbeddingMomentumParameters (TF::LoadTPUEmbeddingMomentumParametersOp)

Load Momentum embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
momenta tensor of 32-bit float values

tf.LoadTPUEmbeddingMomentumParametersGradAccumDebug (TF::LoadTPUEmbeddingMomentumParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
momenta tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingProximalAdagradParameters (TF::LoadTPUEmbeddingProximalAdagradParametersOp)

Load proximal Adagrad embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug (TF::LoadTPUEmbeddingProximalAdagradParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingProximalYogiParameters (TF::LoadTPUEmbeddingProximalYogiParametersOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
v tensor of 32-bit float values
m tensor of 32-bit float values

tf.LoadTPUEmbeddingProximalYogiParametersGradAccumDebug (TF::LoadTPUEmbeddingProximalYogiParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
v tensor of 32-bit float values
m tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingRMSPropParameters (TF::LoadTPUEmbeddingRMSPropParametersOp)

Load RMSProp embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
ms tensor of 32-bit float values
mom tensor of 32-bit float values

tf.LoadTPUEmbeddingRMSPropParametersGradAccumDebug (TF::LoadTPUEmbeddingRMSPropParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
ms tensor of 32-bit float values
mom tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingStochasticGradientDescentParameters (TF::LoadTPUEmbeddingStochasticGradientDescentParametersOp)

Load SGD embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values

tf.LoadTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug (TF::LoadTPUEmbeddingStochasticGradientDescentParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.Log (TF::LogOp)

Computes natural logarithm of x element-wise.

I.e., \(y = \log_e x\).

Example:

x = tf.constant([0, 0.5, 1, 5])
tf.math.log(x) ==> [-inf, -0.6931472,  0. ,  1.609438]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Log1p (TF::Log1pOp)

Computes natural logarithm of (1 + x) element-wise.

I.e., \(y = \log_e (1 + x)\).

Example:

x = tf.constant([0, 0.5, 1, 5])
tf.math.log1p(x) ==> [0., 0.4054651, 0.6931472, 1.7917595]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_CwiseUnary

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.LogicalAnd (TF::LogicalAndOp)

Returns the truth value of x AND y element-wise.
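
For illustration, a minimal example using the tf.logical_and Python wrapper; note that the operands broadcast against each other:

x = tf.constant([True, False, True, False])
y = tf.constant([True, True, False, False])
tf.logical_and(x, y)  # ==> [True, False, False, False]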

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
x tensor of bool values
y tensor of bool values

Results:

Result Description
z tensor of bool values

tf.LogicalNot (TF::LogicalNotOp)

Returns the truth value of NOT x element-wise.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Involution

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
x tensor of bool values

Results:

Result Description
y tensor of bool values

tf.LogicalOr (TF::LogicalOrOp)

Returns the truth value of x OR y element-wise.

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
x tensor of bool values
y tensor of bool values

Results:

Result Description
z tensor of bool values

tf.LogSoftmax (TF::LogSoftmaxOp)

Computes log softmax activations.

For each batch i and class j we have

logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))
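
For illustration, a minimal example using the tf.nn.log_softmax Python wrapper (values shown are approximate):

logits = tf.constant([[1.0, 2.0, 3.0]])
tf.nn.log_softmax(logits)  # ==> approximately [[-2.4076, -1.4076, -0.4076]]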

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
logits tensor of floating-point values

Results:

Result Description
logsoftmax tensor of floating-point values

tf.LookupTableExportV2 (TF::LookupTableExportV2Op)

Outputs all keys and values in the table.

Attributes:

AttributeMLIR TypeDescription
Tkeys::mlir::Attributederived attribute
Tvalues::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values

Results:

Result Description
keys tensor of tf.dtype values
values tensor of tf.dtype values

tf.LookupTableFind (TF::LookupTableFindOp)

Looks up keys in a table, outputs the corresponding values.

The tensor keys must be of the same type as the keys of the table. The output values is of the type of the table values.

The scalar default_value is the value output for keys not present in the table. It must also be of the same type as the table values.

Attributes:

AttributeMLIR TypeDescription
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of string values
keys tensor of tf.dtype values
default_value tensor of tf.dtype values

Results:

Result Description
values tensor of tf.dtype values

tf.LookupTableFindV2 (TF::LookupTableFindV2Op)

Looks up keys in a table, outputs the corresponding values.

The tensor keys must be of the same type as the keys of the table. The output values is of the type of the table values.

The scalar default_value is the value output for keys not present in the table. It must also be of the same type as the table values.
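
For illustration, a minimal sketch using the public tf.lookup API, which typically lowers to the V2 lookup-table ops; the keys and values here are illustrative only:

keys = tf.constant(["apple", "banana"])
values = tf.constant([1, 2], dtype=tf.int64)
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1)
table.lookup(tf.constant(["apple", "cherry"]))  # ==> [1, -1]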

Attributes:

AttributeMLIR TypeDescription
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values
keys tensor of tf.dtype values
default_value tensor of tf.dtype values

Results:

Result Description
values tensor of tf.dtype values

tf.LookupTableImportV2 (TF::LookupTableImportV2Op)

Replaces the contents of the table with the specified keys and values.

The tensor keys must be of the same type as the keys of the table. The tensor values must be of the type of the table values.

Attributes:

AttributeMLIR TypeDescription
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values
keys tensor of tf.dtype values
values tensor of tf.dtype values

tf.LookupTableInsertV2 (TF::LookupTableInsertV2Op)

Updates the table to associate keys with values.

The tensor keys must be of the same type as the keys of the table. The tensor values must be of the type of the table values.
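
For illustration, a minimal sketch using tf.lookup.experimental.MutableHashTable, whose insert/lookup/remove methods are typically backed by these lookup-table ops; the keys and values here are illustrative only:

table = tf.lookup.experimental.MutableHashTable(
    key_dtype=tf.string, value_dtype=tf.int64, default_value=-1)
table.insert(tf.constant(["a", "b"]), tf.constant([10, 20], dtype=tf.int64))
table.lookup(tf.constant(["a", "c"]))  # ==> [10, -1]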

Attributes:

AttributeMLIR TypeDescription
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values
keys tensor of tf.dtype values
values tensor of tf.dtype values

tf.LookupTableRemoveV2 (TF::LookupTableRemoveV2Op)

Removes keys and their associated values from a table.

The tensor keys must be of the same type as the keys of the table. Keys not already in the table are silently ignored.

Attributes:

AttributeMLIR TypeDescription
Tin::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values
keys tensor of tf.dtype values

tf.LookupTableSize (TF::LookupTableSizeOp)

Computes the number of elements in the given table.

Operands:

Operand Description
table_handle tensor of string values

Results:

Result Description
size tensor of 64-bit integer values

tf.LookupTableSizeV2 (TF::LookupTableSizeV2Op)

Computes the number of elements in the given table.

Operands:

Operand Description
table_handle tensor of resource values

Results:

Result Description
size tensor of 64-bit integer values

tf.LowerBound (TF::LowerBoundOp)

Applies lower_bound(sorted_search_values, values) along each row.

Each set of rows with the same index in (sorted_inputs, values) is treated independently. The resulting row is the equivalent of calling np.searchsorted(sorted_inputs, values, side='left').

The result is not a global index to the entire Tensor, but rather just the index in the last dimension.

A 2-D example: sorted_sequence = [[0, 3, 9, 9, 10], [1, 2, 3, 4, 5]] values = [[2, 4, 9], [0, 2, 6]]

result = LowerBound(sorted_sequence, values)

result == [[1, 2, 2], [0, 1, 5]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
out_type::mlir::Attributederived attribute

Operands:

Operand Description
sorted_inputs tensor of tf.dtype values
values tensor of tf.dtype values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.LRN (TF::LRNOp)

Local Response Normalization.

The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius. In detail,

sqr_sum[a, b, c, d] =
    sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012).
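
For illustration, a minimal sketch using the tf.nn.local_response_normalization wrapper; the parameter values are illustrative only:

x = tf.random.normal([1, 4, 4, 8])  # NHWC input
y = tf.nn.local_response_normalization(
    x, depth_radius=2, bias=1.0, alpha=1e-4, beta=0.75)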

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
depth_radius::mlir::IntegerAttr64-bit signless integer attribute
bias::mlir::FloatAttr32-bit float attribute
alpha::mlir::FloatAttr32-bit float attribute
beta::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float values

tf.LRNGrad (TF::LRNGradOp)

Gradients for Local Response Normalization.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
depth_radius::mlir::IntegerAttr64-bit signless integer attribute
bias::mlir::FloatAttr32-bit float attribute
alpha::mlir::FloatAttr32-bit float attribute
beta::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input_grads tensor of bfloat16 or 16-bit float or 32-bit float values
input_image tensor of bfloat16 or 16-bit float or 32-bit float values
output_image tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float values

tf.MakeIterator (TF::MakeIteratorOp)

Makes a new iterator from the given dataset and stores it in iterator.

This operation may be executed multiple times. Each execution will reset the iterator in iterator to the first element of dataset.

Operands:

Operand Description
dataset tensor of variant values
iterator tensor of resource values

tf.MakeUnique (TF::MakeUniqueOp)

Make all elements in the non-Batch dimension unique, but "close" to their initial value.

Never returns a sub-normal number. Never returns zero. The sign of each input element is always identical to the sign of the corresponding output element. Behavior for infinite elements is undefined. Behavior for subnormal elements is undefined.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
input tensor of 32-bit float values

Results:

Result Description
output tensor of 32-bit float values

tf.MapAndBatchDataset (TF::MapAndBatchDatasetOp)

Creates a dataset that fuses mapping with batching.

Creates a dataset that applies f to the outputs of input_dataset and then batches batch_size of them.

Unlike a "MapDataset", which applies f sequentially, this dataset invokes up to batch_size * num_parallel_batches copies of f in parallel.
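
In the public tf.data API this op is typically produced by the map_and_batch_fusion graph optimization rather than constructed directly; a hedged sketch of the pattern it fuses:

ds = tf.data.Dataset.range(8)
ds = ds.map(lambda x: x * 2, num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.batch(4, drop_remainder=True)
# The map_and_batch_fusion tf.data optimization may rewrite the map+batch pair
# above into a single fused MapAndBatchDataset.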

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
preserve_cardinality::mlir::BoolAttrbool attribute
metadata::mlir::StringAttrstring attribute
Targuments::mlir::Attributederived attribute

Operands:

Operand Description
input_dataset tensor of variant values
other_arguments variadic of tensor of tf.dtype values
batch_size tensor of 64-bit integer values
num_parallel_calls tensor of 64-bit integer values
drop_remainder tensor of bool values

Results:

Result Description
handle tensor of variant values

tf.MapDataset (TF::MapDatasetOp)

Creates a dataset that applies f to the outputs of input_dataset.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
use_inter_op_parallelism::mlir::BoolAttrbool attribute
preserve_cardinality::mlir::BoolAttrbool attribute
force_synchronous::mlir::BoolAttrbool attribute
metadata::mlir::StringAttrstring attribute
Targuments::mlir::Attributederived attribute

Operands:

Operand Description
input_dataset tensor of variant values
other_arguments variadic of tensor of tf.dtype values

Results:

Result Description
handle tensor of variant values

tf.MatMul (TF::MatMulOp)

Multiply the matrix "a" by the matrix "b".

The inputs must be two-dimensional matrices and the inner dimension of "a" (after being transposed if transpose_a is true) must match the outer dimension of "b" (after being transposed if transpose_b is true).
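
For illustration, a minimal example using the tf.linalg.matmul wrapper:

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[5., 6.], [7., 8.]])
tf.linalg.matmul(a, b)                    # ==> [[19., 22.], [43., 50.]]
tf.linalg.matmul(a, b, transpose_b=True)  # ==> [[17., 23.], [39., 53.]]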

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
transpose_a::mlir::BoolAttrbool attribute
transpose_b::mlir::BoolAttrbool attribute
grad_a::mlir::BoolAttrbool attribute
grad_b::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
b tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
product tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.MatrixBandPart (TF::MatrixBandPartOp)

Copy a tensor setting everything outside a central band in each innermost matrix to zero.

The band part is computed as follows: Assume input has k dimensions [I, J, K, ..., M, N], then the output is a tensor with the same shape where

band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n].

The indicator function

in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper).

For example:

# if 'input' is [[ 0,  1,  2, 3]
#                [-1,  0,  1, 2]
#                [-2, -1,  0, 1]
#                [-3, -2, -1, 0]],

tf.linalg.band_part(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                       [-1,  0,  1, 2]
                                       [ 0, -1,  0, 1]
                                       [ 0,  0, -1, 0]],

tf.linalg.band_part(input, 2, 1) ==> [[ 0,  1,  0, 0]
                                      [-1,  0,  1, 0]
                                      [-2, -1,  0, 1]
                                      [ 0, -2, -1, 0]]

Useful special cases:

 tf.linalg.band_part(input, 0, -1) ==> Upper triangular part.
 tf.linalg.band_part(input, -1, 0) ==> Lower triangular part.
 tf.linalg.band_part(input, 0, 0) ==> Diagonal.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
num_lower tensor of 32/64-bit signed integer values
num_upper tensor of 32/64-bit signed integer values

Results:

Result Description
band tensor of tf.dtype values

tf.MatrixDiag (TF::MatrixDiagOp)

Returns a batched diagonal tensor with given batched diagonal values.

Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:

Assume diagonal has k dimensions [I, J, K, ..., N], then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., N, N] where:

output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n].

For example:

# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]

and diagonal.shape = (2, 4)

tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
                                     [0, 2, 0, 0]
                                     [0, 0, 3, 0]
                                     [0, 0, 0, 4]],
                                    [[5, 0, 0, 0]
                                     [0, 6, 0, 0]
                                     [0, 0, 7, 0]
                                     [0, 0, 0, 8]]]

which has shape (2, 4, 4)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
diagonal tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.MatrixDiagPartV3 (TF::MatrixDiagPartV3Op)

Returns the batched diagonal part of a batched tensor.

Returns a tensor with the k[0]-th to k[1]-th diagonals of the batched input.

Assume input has r dimensions [I, J, ..., L, M, N]. Let max_diag_len be the maximum length among all diagonals to be extracted, max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0)). Let num_diags be the number of diagonals to extract, num_diags = k[1] - k[0] + 1.

If num_diags == 1, the output tensor is of rank r - 1 with shape [I, J, ..., L, max_diag_len] and values:

diagonal[i, j, ..., l, n]
  = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,
    padding_value                 ; otherwise.

where y = max(-k[1], 0), x = max(k[1], 0).

Otherwise, the output tensor has rank r with dimensions [I, J, ..., L, num_diags, max_diag_len] with values:

diagonal[i, j, ..., l, m, n]
  = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,
    padding_value                 ; otherwise.

where d = k[1] - m, y = max(-d, 0) - offset, and x = max(d, 0) - offset.

offset is zero except when the alignment of the diagonal is to the right.

offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}
                                           and `d >= 0`) or
                                         (`align` in {LEFT_RIGHT, RIGHT_RIGHT}
                                           and `d <= 0`)
         0                          ; otherwise

where diag_len(d) = min(cols - max(d, 0), rows + min(d, 0)).

The input must be at least a matrix.

For example:

input = np.array([[[1, 2, 3, 4],  # Input shape: (2, 3, 4)
                   [5, 6, 7, 8],
                   [9, 8, 7, 6]],
                  [[5, 4, 3, 2],
                   [1, 2, 3, 4],
                   [5, 6, 7, 8]]])

# A main diagonal from each batch.
tf.matrix_diag_part(input) ==> [[1, 6, 7],  # Output shape: (2, 3)
                                [5, 2, 7]]

# A superdiagonal from each batch.
tf.matrix_diag_part(input, k = 1)
  ==> [[2, 7, 6],  # Output shape: (2, 3)
       [4, 3, 8]]

# A band from each batch.
tf.matrix_diag_part(input, k = (-1, 2))
  ==> [[[0, 3, 8],  # Output shape: (2, 4, 3)
        [2, 7, 6],
        [1, 6, 7],
        [5, 8, 0]],
       [[0, 3, 4],
        [4, 3, 8],
        [5, 2, 7],
        [1, 6, 0]]]

# LEFT_RIGHT alignment.
tf.matrix_diag_part(input, k = (-1, 2), align="LEFT_RIGHT")
  ==> [[[3, 8, 0],  # Output shape: (2, 4, 3)
        [2, 7, 6],
        [1, 6, 7],
        [0, 5, 8]],
       [[3, 4, 0],
        [4, 3, 8],
        [5, 2, 7],
        [0, 1, 6]]]

# max_diag_len can be shorter than the main diagonal.
tf.matrix_diag_part(input, k = (-2, -1))
  ==> [[[5, 8],
        [9, 0]],
       [[1, 6],
        [5, 0]]]

# padding_value = 9
tf.matrix_diag_part(input, k = (1, 3), padding_value = 9)
  ==> [[[9, 9, 4],  # Output shape: (2, 3, 3)
        [9, 3, 8],
        [2, 7, 6]],
       [[9, 9, 2],
        [9, 3, 4],
        [4, 3, 8]]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
align::mlir::StringAttrstring attribute whose value is LEFT_RIGHT, or RIGHT_LEFT, or LEFT_LEFT, or RIGHT_RIGHT
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
k tensor of 32-bit integer values
padding_value tensor of tf.dtype values

Results:

Result Description
diagonal tensor of tf.dtype values

tf.MatrixDiagV2 (TF::MatrixDiagV2Op)

Returns a batched diagonal tensor with given batched diagonal values.

Returns a tensor with the contents in diagonal as k[0]-th to k[1]-th diagonals of a matrix, with everything else padded with padding. num_rows and num_cols specify the dimension of the innermost matrix of the output. If both are not specified, the op assumes the innermost matrix is square and infers its size from k and the innermost dimension of diagonal. If only one of them is specified, the op assumes the unspecified value is the smallest possible based on other criteria.

Let diagonal have r dimensions [I, J, ..., L, M, N]. The output tensor has rank r+1 with shape [I, J, ..., L, M, num_rows, num_cols] when only one diagonal is given (k is an integer or k[0] == k[1]). Otherwise, it has rank r with shape [I, J, ..., L, num_rows, num_cols].

The second innermost dimension of diagonal has double meaning. When k is scalar or k[0] == k[1], M is part of the batch size [I, J, ..., M], and the output tensor is:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper
    padding_value                             ; otherwise

Otherwise, M is treated as the number of diagonals for the matrix in the same batch (M = k[1]-k[0]+1), and the output tensor is:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]
    padding_value                                     ; otherwise

where d = n - m, diag_index = k[1] - d, and index_in_diag = n - max(d, 0).

For example:

# The main diagonal.
diagonal = np.array([[1, 2, 3, 4],            # Input shape: (2, 4)
                     [5, 6, 7, 8]])
tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0],  # Output shape: (2, 4, 4)
                               [0, 2, 0, 0],
                               [0, 0, 3, 0],
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0],
                               [0, 6, 0, 0],
                               [0, 0, 7, 0],
                               [0, 0, 0, 8]]]

# A superdiagonal (per batch).
diagonal = np.array([[1, 2, 3],  # Input shape: (2, 3)
                     [4, 5, 6]])
tf.matrix_diag(diagonal, k = 1)
  ==> [[[0, 1, 0, 0],  # Output shape: (2, 4, 4)
        [0, 0, 2, 0],
        [0, 0, 0, 3],
        [0, 0, 0, 0]],
       [[0, 4, 0, 0],
        [0, 0, 5, 0],
        [0, 0, 0, 6],
        [0, 0, 0, 0]]]

# A band of diagonals.
diagonals = np.array([[[1, 2, 3],  # Input shape: (2, 2, 3)
                       [4, 5, 0]],
                      [[6, 7, 9],
                       [9, 1, 0]]])
tf.matrix_diag(diagonals, k = (-1, 0))
  ==> [[[1, 0, 0],  # Output shape: (2, 3, 3)
        [4, 2, 0],
        [0, 5, 3]],
       [[6, 0, 0],
        [9, 7, 0],
        [0, 1, 9]]]

# Rectangular matrix.
diagonal = np.array([1, 2])  # Input shape: (2)
tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4)
  ==> [[0, 0, 0, 0],  # Output shape: (3, 4)
       [1, 0, 0, 0],
       [0, 2, 0, 0]]

# Rectangular matrix with inferred num_cols and padding_value = 9.
tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9)
  ==> [[9, 9],  # Output shape: (3, 2)
       [1, 9],
       [9, 2]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
diagonal tensor of tf.dtype values
k tensor of 32-bit integer values
num_rows tensor of 32-bit integer values
num_cols tensor of 32-bit integer values
padding_value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.MatrixDiagV3 (TF::MatrixDiagV3Op)

Returns a batched diagonal tensor with given batched diagonal values.

Returns a tensor with the contents in diagonal as k[0]-th to k[1]-th diagonals of a matrix, with everything else padded with padding. num_rows and num_cols specify the dimension of the innermost matrix of the output. If both are not specified, the op assumes the innermost matrix is square and infers its size from k and the innermost dimension of diagonal. If only one of them is specified, the op assumes the unspecified value is the smallest possible based on other criteria.

Let diagonal have r dimensions [I, J, ..., L, M, N]. The output tensor has rank r+1 with shape [I, J, ..., L, M, num_rows, num_cols] when only one diagonal is given (k is an integer or k[0] == k[1]). Otherwise, it has rank r with shape [I, J, ..., L, num_rows, num_cols].

The second innermost dimension of diagonal has double meaning. When k is scalar or k[0] == k[1], M is part of the batch size [I, J, ..., M], and the output tensor is:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper
    padding_value                             ; otherwise

Otherwise, M is treated as the number of diagonals for the matrix in the same batch (M = k[1]-k[0]+1), and the output tensor is:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]
    padding_value                                     ; otherwise

where d = n - m, diag_index = k[1] - d, and index_in_diag = n - max(d, 0) + offset.

offset is zero except when the alignment of the diagonal is to the right.

offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}
                                           and `d >= 0`) or
                                         (`align` in {LEFT_RIGHT, RIGHT_RIGHT}
                                           and `d <= 0`)
         0                          ; otherwise

where diag_len(d) = min(cols - max(d, 0), rows + min(d, 0)).

For example:

# The main diagonal.
diagonal = np.array([[1, 2, 3, 4],            # Input shape: (2, 4)
                     [5, 6, 7, 8]])
tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0],  # Output shape: (2, 4, 4)
                               [0, 2, 0, 0],
                               [0, 0, 3, 0],
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0],
                               [0, 6, 0, 0],
                               [0, 0, 7, 0],
                               [0, 0, 0, 8]]]

# A superdiagonal (per batch).
diagonal = np.array([[1, 2, 3],  # Input shape: (2, 3)
                     [4, 5, 6]])
tf.matrix_diag(diagonal, k = 1)
  ==> [[[0, 1, 0, 0],  # Output shape: (2, 4, 4)
        [0, 0, 2, 0],
        [0, 0, 0, 3],
        [0, 0, 0, 0]],
       [[0, 4, 0, 0],
        [0, 0, 5, 0],
        [0, 0, 0, 6],
        [0, 0, 0, 0]]]

# A tridiagonal band (per batch).
diagonals = np.array([[[0, 8, 9],  # Input shape: (2, 2, 3)
                       [1, 2, 3],
                       [4, 5, 0]],
                      [[0, 2, 3],
                       [6, 7, 9],
                       [9, 1, 0]]])
tf.matrix_diag(diagonals, k = (-1, 1))
  ==> [[[1, 8, 0],  # Output shape: (2, 3, 3)
        [4, 2, 9],
        [0, 5, 3]],
       [[6, 2, 0],
        [9, 7, 3],
        [0, 1, 9]]]

# LEFT_RIGHT alignment.
diagonals = np.array([[[8, 9, 0],  # Input shape: (2, 2, 3)
                       [1, 2, 3],
                       [0, 4, 5]],
                      [[2, 3, 0],
                       [6, 7, 9],
                       [0, 9, 1]]])
tf.matrix_diag(diagonals, k = (-1, 1), align="LEFT_RIGHT")
  ==> [[[1, 8, 0],  # Output shape: (2, 3, 3)
        [4, 2, 9],
        [0, 5, 3]],
       [[6, 2, 0],
        [9, 7, 3],
        [0, 1, 9]]]

# Rectangular matrix.
diagonal = np.array([1, 2])  # Input shape: (2)
tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4)
  ==> [[0, 0, 0, 0],  # Output shape: (3, 4)
       [1, 0, 0, 0],
       [0, 2, 0, 0]]

# Rectangular matrix with inferred num_cols and padding_value = 9.
tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9)
  ==> [[9, 9],  # Output shape: (3, 2)
       [1, 9],
       [9, 2]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
align::mlir::StringAttrstring attribute whose value is LEFT_RIGHT, or RIGHT_LEFT, or LEFT_LEFT, or RIGHT_RIGHT
T::mlir::Attributederived attribute

Operands:

Operand Description
diagonal tensor of tf.dtype values
k tensor of 32-bit integer values
num_rows tensor of 32-bit integer values
num_cols tensor of 32-bit integer values
padding_value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.MatrixInverse (TF::MatrixInverseOp)

Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices [..., :, :].

The op uses LU decomposition with partial pivoting to compute the inverses.

If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.
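
For illustration, a minimal example using the tf.linalg.inv wrapper (values shown are approximate):

x = tf.constant([[4., 7.],
                 [2., 6.]])
tf.linalg.inv(x)  # ==> approximately [[ 0.6, -0.7],
                  #                    [-0.2,  0.4]]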

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
adjoint::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

tf.MatrixSetDiag (TF::MatrixSetDiagOp)

Returns a batched matrix tensor with new batched diagonal values.

Given input and diagonal, this operation returns a tensor with the same shape and values as input, except for the main diagonal of the innermost matrices. These will be overwritten by the values in diagonal.

The output is computed as follows:

Assume input has k+1 dimensions [I, J, K, ..., M, N] and diagonal has k dimensions [I, J, K, ..., min(M, N)]. Then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., M, N] where:

  • output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n] for m == n.
  • output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n] for m != n.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
diagonal tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.MatrixSetDiagV2 (TF::MatrixSetDiagV2Op)

Returns a batched matrix tensor with new batched diagonal values.

Given input and diagonal, this operation returns a tensor with the same shape and values as input, except for the specified diagonals of the innermost matrices. These will be overwritten by the values in diagonal.

input has r+1 dimensions [I, J, ..., L, M, N]. When k is scalar or k[0] == k[1], diagonal has r dimensions [I, J, ..., L, max_diag_len]. Otherwise, it has r+1 dimensions [I, J, ..., L, num_diags, max_diag_len]. num_diags is the number of diagonals, num_diags = k[1] - k[0] + 1. max_diag_len is the longest diagonal in the range [k[0], k[1]], max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))

The output is a tensor of rank r+1 with dimensions [I, J, ..., L, M, N]. If k is scalar or k[0] == k[1]:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1]
    input[i, j, ..., l, m, n]              ; otherwise

Otherwise,

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]
    input[i, j, ..., l, m, n]                         ; otherwise

where d = n - m, diag_index = k[1] - d, and index_in_diag = n - max(d, 0).

For example:

# The main diagonal.
input = np.array([[[7, 7, 7, 7],              # Input shape: (2, 3, 4)
                   [7, 7, 7, 7],
                   [7, 7, 7, 7]],
                  [[7, 7, 7, 7],
                   [7, 7, 7, 7],
                   [7, 7, 7, 7]]])
diagonal = np.array([[1, 2, 3],               # Diagonal shape: (2, 3)
                     [4, 5, 6]])
tf.matrix_set_diag(input, diagonal)
  ==> [[[1, 7, 7, 7],  # Output shape: (2, 3, 4)
        [7, 2, 7, 7],
        [7, 7, 3, 7]],
       [[4, 7, 7, 7],
        [7, 5, 7, 7],
        [7, 7, 6, 7]]]

# A superdiagonal (per batch).
tf.matrix_set_diag(input, diagonal, k = 1)
  ==> [[[7, 1, 7, 7],  # Output shape: (2, 3, 4)
        [7, 7, 2, 7],
        [7, 7, 7, 3]],
       [[7, 4, 7, 7],
        [7, 7, 5, 7],
        [7, 7, 7, 6]]]

# A band of diagonals.
diagonals = np.array([[[1, 2, 3],  # Diagonal shape: (2, 2, 3)
                       [4, 5, 0]],
                      [[6, 1, 2],
                       [3, 4, 0]]])
tf.matrix_set_diag(input, diagonals, k = (-1, 0))
  ==> [[[1, 7, 7, 7],  # Output shape: (2, 3, 4)
        [4, 2, 7, 7],
        [7, 5, 3, 7]],
       [[6, 7, 7, 7],
        [3, 1, 7, 7],
        [7, 4, 2, 7]]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
diagonal tensor of tf.dtype values
k tensor of 32-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.MatrixSetDiagV3 (TF::MatrixSetDiagV3Op)

Returns a batched matrix tensor with new batched diagonal values.

Given input and diagonal, this operation returns a tensor with the same shape and values as input, except for the specified diagonals of the innermost matrices. These will be overwritten by the values in diagonal.

input has r+1 dimensions [I, J, ..., L, M, N]. When k is scalar or k[0] == k[1], diagonal has r dimensions [I, J, ..., L, max_diag_len]. Otherwise, it has r+1 dimensions [I, J, ..., L, num_diags, max_diag_len]. num_diags is the number of diagonals, num_diags = k[1] - k[0] + 1. max_diag_len is the longest diagonal in the range [k[0], k[1]], max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))

The output is a tensor of rank r+1 with dimensions [I, J, ..., L, M, N]. If k is scalar or k[0] == k[1]:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1]
    input[i, j, ..., l, m, n]              ; otherwise

Otherwise,

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]
    input[i, j, ..., l, m, n]                         ; otherwise

where d = n - m, diag_index = k[1] - d, and index_in_diag = n - max(d, 0) + offset.

offset is zero except when the alignment of the diagonal is to the right.

offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}
                                           and `d >= 0`) or
                                         (`align` in {LEFT_RIGHT, RIGHT_RIGHT}
                                           and `d <= 0`)
         0                          ; otherwise

where diag_len(d) = min(cols - max(d, 0), rows + min(d, 0)).

For example:

# The main diagonal.
input = np.array([[[7, 7, 7, 7],              # Input shape: (2, 3, 4)
                   [7, 7, 7, 7],
                   [7, 7, 7, 7]],
                  [[7, 7, 7, 7],
                   [7, 7, 7, 7],
                   [7, 7, 7, 7]]])
diagonal = np.array([[1, 2, 3],               # Diagonal shape: (2, 3)
                     [4, 5, 6]])
tf.matrix_set_diag(input, diagonal)
  ==> [[[1, 7, 7, 7],  # Output shape: (2, 3, 4)
        [7, 2, 7, 7],
        [7, 7, 3, 7]],
       [[4, 7, 7, 7],
        [7, 5, 7, 7],
        [7, 7, 6, 7]]]

# A superdiagonal (per batch).
tf.matrix_set_diag(input, diagonal, k = 1)
  ==> [[[7, 1, 7, 7],  # Output shape: (2, 3, 4)
        [7, 7, 2, 7],
        [7, 7, 7, 3]],
       [[7, 4, 7, 7],
        [7, 7, 5, 7],
        [7, 7, 7, 6]]]

# A band of diagonals.
diagonals = np.array([[[0, 9, 1],  # Diagonal shape: (2, 4, 3)
                       [6, 5, 8],
                       [1, 2, 3],
                       [4, 5, 0]],
                      [[0, 1, 2],
                       [5, 6, 4],
                       [6, 1, 2],
                       [3, 4, 0]]])
tf.matrix_set_diag(input, diagonals, k = (-1, 2))
  ==> [[[1, 6, 9, 7],  # Output shape: (2, 3, 4)
        [4, 2, 5, 1],
        [7, 5, 3, 8]],
       [[6, 5, 1, 7],
        [3, 1, 6, 2],
        [7, 4, 2, 4]]]

# LEFT_RIGHT alignment.
diagonals = np.array([[[9, 1, 0],  # Diagonal shape: (2, 4, 3)
                       [6, 5, 8],
                       [1, 2, 3],
                       [0, 4, 5]],
                      [[1, 2, 0],
                       [5, 6, 4],
                       [6, 1, 2],
                       [0, 3, 4]]])
tf.matrix_set_diag(input, diagonals, k = (-1, 2), align="LEFT_RIGHT")
  ==> [[[1, 6, 9, 7],  # Output shape: (2, 3, 4)
        [4, 2, 5, 1],
        [7, 5, 3, 8]],
       [[6, 5, 1, 7],
        [3, 1, 6, 2],
        [7, 4, 2, 4]]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
align::mlir::StringAttrstring attribute whose value is LEFT_RIGHT, or RIGHT_LEFT, or LEFT_LEFT, or RIGHT_RIGHT
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
diagonal tensor of tf.dtype values
k tensor of 32-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.MatrixSolve (TF::MatrixSolveOp)

Solves systems of linear equations.

Matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. Rhs is a tensor of shape [..., M, K]. The output is a tensor of shape [..., M, K]. If adjoint is False then each output matrix satisfies matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then each output matrix satisfies adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :].
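
For illustration, a minimal example using the tf.linalg.solve wrapper:

matrix = tf.constant([[2., 0.],
                      [0., 4.]])
rhs = tf.constant([[2.],
                   [8.]])
tf.linalg.solve(matrix, rhs)  # ==> [[1.], [2.]]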

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
adjoint::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
matrix tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values
rhs tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

tf.MatrixTriangularSolve (TF::MatrixTriangularSolveOp)

Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.

matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. If lower is True then the strictly upper triangular part of each inner-most matrix is assumed to be zero and not accessed. If lower is False then the strictly lower triangular part of each inner-most matrix is assumed to be zero and not accessed. rhs is a tensor of shape [..., M, N].

The output is a tensor of shape [..., M, N]. If adjoint is False then the innermost matrices in output satisfy the matrix equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then the innermost matrices in output satisfy the matrix equations adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j].

Note, the batch shapes for the inputs only need to broadcast.

Example:


a = tf.constant([[3,  0,  0,  0],
                 [2,  1,  0,  0],
                 [1,  0,  1,  0],
                 [1,  1,  1,  1]], dtype=tf.float32)

b = tf.constant([[4],
                 [2],
                 [4],
                 [2]], dtype=tf.float32)

x = tf.linalg.triangular_solve(a, b, lower=True)
x
# <tf.Tensor: shape=(4, 1), dtype=float32, numpy=
# array([[ 1.3333334 ],
#        [-0.66666675],
#        [ 2.6666665 ],
#        [-1.3333331 ]], dtype=float32)>

# in python3 one can use `a@x`
tf.matmul(a, x)
# <tf.Tensor: shape=(4, 1), dtype=float32, numpy=
# array([[4.       ],
#        [2.       ],
#        [4.       ],
#        [1.9999999]], dtype=float32)>

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
lower::mlir::BoolAttrbool attribute
adjoint::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
matrix tensor of floating-point or complex values
rhs tensor of floating-point or complex values

Results:

Result Description
output tensor of floating-point or complex values

tf.Max (TF::MaxOp)

Computes the maximum of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
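
For illustration, a minimal example using the tf.reduce_max wrapper:

x = tf.constant([[1, 5],
                 [3, 2]])
tf.reduce_max(x)                         # ==> 5
tf.reduce_max(x, axis=1)                 # ==> [5, 3]
tf.reduce_max(x, axis=1, keepdims=True)  # ==> [[5], [3]]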

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
keep_dims::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.Maximum (TF::MaximumOp)

Returns the max of x and y (i.e. x > y ? x : y) element-wise.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of integer or floating-point values

tf.MaxPool (TF::MaxPoolOp)

Performs max pooling on the input.
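
For illustration, a minimal example using the tf.nn.max_pool2d wrapper on an NHWC input:

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
tf.nn.max_pool2d(x, ksize=2, strides=2, padding="VALID")
# ==> shape (1, 2, 2, 1), values [[ 5.,  7.],
#                                 [13., 15.]]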

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_FoldOperandsTransposeInterface, TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttrstring attribute whose value is NHWC, or NCHW, or NCHW_VECT_C
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit quantized integer or 16-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit quantized integer or 16-bit unsigned integer or 8-bit unsigned integer values

tf.MaxPool3D (TF::MaxPool3DOp)

Performs 3D max pooling on the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float values

tf.MaxPool3DGrad (TF::MaxPool3DGradOp)

Computes gradients of 3D max pooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
T::mlir::Attributederived attribute
TInput::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of bfloat16 or 16-bit float or 32-bit float values
orig_output tensor of bfloat16 or 16-bit float or 32-bit float values
grad tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float values

tf.MaxPool3DGradGrad (TF::MaxPool3DGradGradOp)

Computes second-order gradients of the maxpooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
T::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of integer or floating-point values
orig_output tensor of integer or floating-point values
grad tensor of integer or floating-point values

Results:

Result Description
output tensor of integer or floating-point values

tf.MaxPoolGrad (TF::MaxPoolGradOp)

Computes gradients of the maxpooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of integer or floating-point values
orig_output tensor of integer or floating-point values
grad tensor of integer or floating-point values

Results:

Result Description
output tensor of integer or floating-point values

tf.MaxPoolGradGrad (TF::MaxPoolGradGradOp)

Computes second-order gradients of the maxpooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of integer or floating-point values
orig_output tensor of integer or floating-point values
grad tensor of integer or floating-point values

Results:

Result Description
output tensor of integer or floating-point values

tf.MaxPoolGradGradV2 (TF::MaxPoolGradGradV2Op)

Computes second-order gradients of the maxpooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of integer or floating-point values
orig_output tensor of integer or floating-point values
grad tensor of integer or floating-point values
ksize tensor of 32-bit integer values
strides tensor of 32-bit integer values

Results:

Result Description
output tensor of integer or floating-point values

tf.MaxPoolGradV2 (TF::MaxPoolGradV2Op)

Computes gradients of the maxpooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of integer or floating-point values
orig_output tensor of integer or floating-point values
grad tensor of integer or floating-point values
ksize tensor of 32-bit integer values
strides tensor of 32-bit integer values

Results:

Result Description
output tensor of integer or floating-point values

tf.MaxPoolV2 (TF::MaxPoolV2Op)

Performs max pooling on the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NHWC, or NCHW, or NCHW_VECT_C
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit quantized integer or 16-bit unsigned integer or 8-bit unsigned integer values
ksize tensor of 32-bit integer values
strides tensor of 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit quantized integer or 16-bit unsigned integer or 8-bit unsigned integer values

tf.Mean (TF::MeanOp)

Computes the mean of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
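
A minimal sketch of the keep_dims behavior, assuming the tf.reduce_mean Python wrapper that normally lowers to this op:

x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_mean(x, axis=1)                 # ==> [1.5, 3.5]      (rank reduced)
tf.reduce_mean(x, axis=1, keepdims=True)  # ==> [[1.5], [3.5]]  (length-1 dim kept)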

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_FoldOperandsTransposeInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
keep_dims::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of number values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.MergeSummary (TF::MergeSummaryOp)

Merges summaries.

This op creates a Summary protocol buffer that contains the union of all the values in the input summaries.

When the Op is run, it reports an InvalidArgument error if multiple values in the summaries to merge use the same tag.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of string values

Results:

Result Description
summary tensor of string values

tf.MergeV2Checkpoints (TF::MergeV2CheckpointsOp)

V2 format specific: merges the metadata files of sharded checkpoints. The

result is one logical checkpoint, with one physical metadata file and renamed data files.

Intended for "grouping" multiple checkpoints in a sharded checkpoint setup.

If delete_old_dirs is true, attempts to recursively delete the dirname of each path in the input checkpoint_prefixes. This is useful when those paths are non-user-facing temporary locations.

If allow_missing_files is true, merges the checkpoint prefixes as long as at least one file exists. Otherwise, if no files exist, an error will be thrown. The default value for allow_missing_files is false.

Attributes:

AttributeMLIR TypeDescription
delete_old_dirs::mlir::BoolAttrbool attribute
allow_missing_files::mlir::BoolAttrbool attribute

Operands:

Operand Description
checkpoint_prefixes tensor of string values
destination_prefix tensor of string values

tf.Min (TF::MinOp)

Computes the minimum of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
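
For illustration (assuming tf.reduce_min, the usual Python entry point for this op):

x = tf.constant([[1., 5.], [3., 2.]])
tf.reduce_min(x, axis=0)                  # ==> [1., 2.]
tf.reduce_min(x, axis=0, keepdims=True)   # ==> [[1., 2.]]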

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
keep_dims::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.Minimum (TF::MinimumOp)

Returns the min of x and y (i.e. x < y ? x : y) element-wise.
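
For illustration (assuming tf.minimum, which typically lowers to this op):

x = tf.constant([1., 5., 3.])
y = tf.constant([4., 2., 3.])
tf.minimum(x, y)   # ==> [1., 2., 3.]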

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of integer or floating-point values

tf.MirrorPad (TF::MirrorPadOp)

Pads a tensor with mirrored values.

This operation pads an input with mirrored values according to the paddings you specify. paddings is an integer tensor with shape [n, 2], where n is the rank of input. For each dimension D of input, paddings[D, 0] indicates how many values to add before the contents of input in that dimension, and paddings[D, 1] indicates how many values to add after the contents of input in that dimension. Both paddings[D, 0] and paddings[D, 1] must be no greater than input.dim_size(D) if copy_border is true, or input.dim_size(D) - 1 if it is false.

The padded size of each dimension D of the output is:

paddings(D, 0) + input.dim_size(D) + paddings(D, 1)

For example:

# 't' is [[1, 2, 3], [4, 5, 6]].
# 'paddings' is [[1, 1], [2, 2]].
# 'mode' is SYMMETRIC.
# rank of 't' is 2.
pad(t, paddings) ==> [[2, 1, 1, 2, 3, 3, 2]
                      [2, 1, 1, 2, 3, 3, 2]
                      [5, 4, 4, 5, 6, 6, 5]
                      [5, 4, 4, 5, 6, 6, 5]]
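
The same padding can be reproduced through the Python wrapper (a sketch, assuming tf.pad with a mirror mode):

t = tf.constant([[1, 2, 3], [4, 5, 6]])
paddings = tf.constant([[1, 1], [2, 2]])
tf.pad(t, paddings, mode="SYMMETRIC")  # matches the example above (border values copied)
tf.pad(t, paddings, mode="REFLECT")    # excludes the border values instead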

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
mode::mlir::StringAttrstring attribute whose value is REFLECT, or SYMMETRIC
T::mlir::Attributederived attribute
Tpaddings::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
paddings tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.MirrorPadGrad (TF::MirrorPadGradOp)

Gradient op for MirrorPad op. This op folds a mirror-padded tensor.

This operation folds the padded areas of input by MirrorPad according to the paddings you specify. paddings must be the same as paddings argument given to the corresponding MirrorPad op.

The folded size of each dimension D of the output is:

input.dim_size(D) - paddings(D, 0) - paddings(D, 1)

For example:

# 't' is [[1, 2, 3], [4, 5, 6], [7, 8, 9]].
# 'paddings' is [[0, 1], [0, 1]].
# 'mode' is SYMMETRIC.
# rank of 't' is 2.
pad(t, paddings) ==> [[ 1,  5]
                      [11, 28]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
mode::mlir::StringAttrstring attribute whose value is REFLECT, or SYMMETRIC
T::mlir::Attributederived attribute
Tpaddings::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
paddings tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.MlirLocalVarOp (TF::MlirLocalVarOp)

Creates a handle to an in-scope variable.

Used by internal passes as a temporary representation of local state, which will eventually be removed.

Results:

Result Description
resource tensor of resource values

tf.MlirPassthroughOp (TF::MlirPassthroughOp)

Wraps an arbitrary MLIR computation expressed as a module with a main() function.

This operation does not have an associated kernel and is not intended to be executed in a regular TensorFlow session. Instead, it is intended for testing or for special cases where a user wants to pass a custom MLIR computation through a TensorFlow graph, with the intent of having custom tooling process it downstream (when targeting a different environment, like TensorFlow Lite for example). The MLIR module is expected to have a main() function that will be used as an entry point. The inputs to the operation are passed as arguments to the main() function, and the values returned by the main() function are mapped to the outputs. Example usage:

import tensorflow as tf
from tensorflow.compiler.mlir.tensorflow.gen_mlir_passthrough_op import mlir_passthrough_op

mlir_module = '''
func @main(%arg0 : tensor<10xf32>, %arg1 : tensor<10xf32>) -> tensor<10x10xf32> {
   %add = "magic.op"(%arg0, %arg1) : (tensor<10xf32>, tensor<10xf32>) -> tensor<10x10xf32>
   return %add : tensor<10x10xf32>
}
'''

@tf.function
def foo(x, y):
  return mlir_passthrough_op([x, y], mlir_module, Toutputs=[tf.float32])

graph_def = foo.get_concrete_function(tf.TensorSpec([10], tf.float32), tf.TensorSpec([10], tf.float32)).graph.as_graph_def()

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
mlir_module::mlir::StringAttrstring attribute
Tinputs::mlir::Attributederived attribute
Toutputs::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf.Mod (TF::ModOp)

Returns element-wise remainder of division. This emulates C semantics in that

the result here is consistent with a truncating divide. E.g. tf.truncatediv(x, y) * y + truncate_mod(x, y) = x.
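
For example, with x = -7 and y = 3, a truncating divide gives trunc(-7 / 3) = -2, so truncate_mod(-7, 3) = -7 - (-2 * 3) = -1 (a flooring divide would instead give a remainder of 2), and indeed tf.truncatediv(-7, 3) * 3 + truncate_mod(-7, 3) = -6 + -1 = -7.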

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or 32/64-bit signed integer values
y tensor of floating-point or 32/64-bit signed integer values

Results:

Result Description
z tensor of floating-point or 32/64-bit signed integer values

tf.ModelDataset (TF::ModelDatasetOp)

Identity transformation that models performance.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
algorithm::mlir::IntegerAttr64-bit signless integer attribute
cpu_budget::mlir::IntegerAttr64-bit signless integer attribute
ram_budget::mlir::IntegerAttr64-bit signless integer attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Operands:

Operand Description
input_dataset tensor of variant values

Results:

Result Description
handle tensor of variant values

tf.Mul (TF::MulOp)

Returns x * y element-wise.

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape, TF_CwiseBinary, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.MulNoNan (TF::MulNoNanOp)

Returns x * y element-wise. Returns zero if y is zero, even if x is infinite or NaN.
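
For illustration (assuming tf.math.multiply_no_nan, the usual Python entry point for this op):

x = tf.constant([float("inf"), float("nan"), 2.])
y = tf.constant([0., 0., 3.])
tf.math.multiply_no_nan(x, y)   # ==> [0., 0., 6.]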

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values
y tensor of floating-point or complex values

Results:

Result Description
z tensor of floating-point or complex values

tf.MultiDeviceIterator (TF::MultiDeviceIteratorOp)

Creates a MultiDeviceIterator resource.

Attributes:

AttributeMLIR TypeDescription
devices::mlir::ArrayAttrstring array attribute with at least 1 elements
shared_name::mlir::StringAttrstring attribute
container::mlir::StringAttrstring attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.MultiDeviceIteratorFromStringHandle (TF::MultiDeviceIteratorFromStringHandleOp)

Generates a MultiDeviceIterator resource from its provided string handle.

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute
output_shapes::mlir::ArrayAttrtensorflow shape attribute array

Operands:

Operand Description
string_handle tensor of string values

Results:

Result Description
multi_device_iterator tensor of resource values

tf.MultiDeviceIteratorGetNextFromShard (TF::MultiDeviceIteratorGetNextFromShardOp)

Gets next element for the provided shard number.

Attributes:

AttributeMLIR TypeDescription
output_shapes::mlir::Attributederived attribute
output_types::mlir::Attributederived attribute

Operands:

Operand Description
multi_device_iterator tensor of resource values
shard_num tensor of 32-bit integer values
incarnation_id tensor of 64-bit integer values

Results:

Result Description
components variadic of tensor of tf.dtype values

tf.MultiDeviceIteratorInit (TF::MultiDeviceIteratorInitOp)

Initializes the multi device iterator with the given dataset.

Operands:

Operand Description
dataset tensor of variant values
multi_device_iterator tensor of resource values
max_buffer_size tensor of 64-bit integer values

Results:

Result Description
incarnation_id tensor of 64-bit integer values

tf.MultiDeviceIteratorToStringHandle (TF::MultiDeviceIteratorToStringHandleOp)

Produces a string handle for the given MultiDeviceIterator.

Operands:

Operand Description
multi_device_iterator tensor of resource values

Results:

Result Description
string_handle tensor of string values

tf.Multinomial (TF::MultinomialOp)

Draws samples from a multinomial distribution.

Traits: TF_CannotDuplicate

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
output_dtype::mlir::Attributederived attribute

Operands:

Operand Description
logits tensor of integer or floating-point values
num_samples tensor of 32-bit integer values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.MutableDenseHashTableV2 (TF::MutableDenseHashTableV2Op)

Creates an empty hash table that uses tensors as the backing store.

It uses "open addressing" with quadratic reprobing to resolve collisions.

This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation.

Attributes:

AttributeMLIR TypeDescription
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
use_node_name_sharing::mlir::BoolAttrbool attribute
value_dtype::mlir::TypeAttrany type attribute
value_shape::mlir::AttributeTensorFlow shape attribute
initial_num_buckets::mlir::IntegerAttr64-bit signless integer attribute
max_load_factor::mlir::FloatAttr32-bit float attribute
key_dtype::mlir::Attributederived attribute

Operands:

Operand Description
empty_key tensor of tf.dtype values
deleted_key tensor of tf.dtype values

Results:

Result Description
table_handle tensor of resource values

tf.MutableHashTableOfTensorsV2 (TF::MutableHashTableOfTensorsV2Op)

Creates an empty hash table.

This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a vector. Data can be inserted into the table using the insert operations. It does not support the initialization operation.

Attributes:

AttributeMLIR TypeDescription
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
use_node_name_sharing::mlir::BoolAttrbool attribute
key_dtype::mlir::TypeAttrany type attribute
value_dtype::mlir::TypeAttrany type attribute
value_shape::mlir::AttributeTensorFlow shape attribute

Results:

Result Description
table_handle tensor of resource values

tf.MutableHashTableV2 (TF::MutableHashTableV2Op)

Creates an empty hash table.

This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation.

Attributes:

AttributeMLIR TypeDescription
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
use_node_name_sharing::mlir::BoolAttrbool attribute
key_dtype::mlir::TypeAttrany type attribute
value_dtype::mlir::TypeAttrany type attribute

Results:

Result Description
table_handle tensor of resource values

tf.NcclAllReduce (TF::NcclAllReduceOp)

Outputs a tensor containing the reduction across all input tensors.

Outputs a tensor containing the reduction across all input tensors passed to ops within the same shared_name.

The graph should be constructed so that if one op runs with shared_name value c, then num_devices ops will run with shared_name value c. Failure to do so will cause the graph execution to fail to complete.

input: the input to the reduction.
data: the value of the reduction across all num_devices devices.
reduction: the reduction operation to perform.
num_devices: The number of devices participating in this reduction.
shared_name: Identifier that is shared between ops of the same reduction.

Traits: InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: GetResourceInstanceInterface, InferShapedTypeOpInterface, InferTypeOpInterface, TF_NcclAllReduceOrderingEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::NcclAllReduceOrdering}

Attributes:

AttributeMLIR TypeDescription
reduction::mlir::StringAttrstring attribute whose value is min, or max, or prod, or sum
num_devices::mlir::IntegerAttr64-bit signless integer attribute
shared_name::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
data tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.Ndtri (TF::NdtriOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.Neg (TF::NegOp)

Computes numerical negative value element-wise.

I.e., \(y = -x\).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_CwiseUnary, TF_Involution

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

Results:

Result Description
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

tf.NextAfter (TF::NextAfterOp)

Returns the next representable value of x1 in the direction of x2, element-wise.

This operation returns the same result as the C++ std::nextafter function.

It can also return a subnormal number.

@compatibility(cpp) Equivalent to C++ std::nextafter function. @end_compatibility
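
A minimal sketch, assuming tf.math.nextafter:

x1 = tf.constant([1.0, 1.0], dtype=tf.float32)
x2 = tf.constant([2.0, 0.0], dtype=tf.float32)
tf.math.nextafter(x1, x2)
# ==> the representable float32 just above 1.0, and the one just below 1.0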

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x1 tensor of 32/64-bit float values
x2 tensor of 32/64-bit float values

Results:

Result Description
output tensor of 32/64-bit float values

tf.NonMaxSuppressionV3 (TF::NonMaxSuppressionV3Op)

Greedily selects a subset of bounding boxes in descending order of score,

pruning away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes with score less than score_threshold are removed. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute.

Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of the coordinate system; thus translating or reflecting the coordinate system results in the same boxes being selected by the algorithm.

The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. For example:

selected_indices = tf.image.non_max_suppression_v2(
    boxes, scores, max_output_size, iou_threshold, score_threshold)
selected_boxes = tf.gather(boxes, selected_indices)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
T_threshold::mlir::Attributederived attribute

Operands:

Operand Description
boxes tensor of 16-bit float or 32-bit float values
scores tensor of 16-bit float or 32-bit float values
max_output_size tensor of 32-bit integer values
iou_threshold tensor of 16-bit float or 32-bit float values
score_threshold tensor of 16-bit float or 32-bit float values

Results:

Result Description
selected_indices tensor of 32-bit integer values

tf.NonMaxSuppressionV4 (TF::NonMaxSuppressionV4Op)

Greedily selects a subset of bounding boxes in descending order of score,

pruning away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes with score less than score_threshold are removed. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute.

Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of the coordinate system; thus translating or reflecting the coordinate system results in the same boxes being selected by the algorithm.

The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. For example:

selected_indices = tf.image.non_max_suppression_v2(
    boxes, scores, max_output_size, iou_threshold, score_threshold)
selected_boxes = tf.gather(boxes, selected_indices)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
pad_to_max_output_size::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
T_threshold::mlir::Attributederived attribute

Operands:

Operand Description
boxes tensor of 16-bit float or 32-bit float values
scores tensor of 16-bit float or 32-bit float values
max_output_size tensor of 32-bit integer values
iou_threshold tensor of 16-bit float or 32-bit float values
score_threshold tensor of 16-bit float or 32-bit float values

Results:

Result Description
selected_indices tensor of 32-bit integer values
valid_outputs tensor of 32-bit integer values

tf.NonMaxSuppressionV5 (TF::NonMaxSuppressionV5Op)

Greedily selects a subset of bounding boxes in descending order of score,

pruning away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes with score less than score_threshold are removed. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners, and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute.

Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of the coordinate system; thus translating or reflecting the coordinate system results in the same boxes being selected by the algorithm.

The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. For example:

selected_indices = tf.image.non_max_suppression_v2(
    boxes, scores, max_output_size, iou_threshold, score_threshold)
selected_boxes = tf.gather(boxes, selected_indices)

This op also supports a Soft-NMS (with Gaussian weighting) mode (c.f. Bodla et al, https://arxiv.org/abs/1704.04503) where boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. To enable this Soft-NMS mode, set the soft_nms_sigma parameter to be larger than 0.
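
An illustrative sketch, assuming tf.image.non_max_suppression_with_scores, the Python wrapper that exposes the Soft-NMS sigma:

boxes  = tf.constant([[0., 0., 1., 1.], [0., 0.1, 1., 1.1], [0., 2., 1., 3.]])
scores = tf.constant([0.9, 0.8, 0.7])
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size=3,
    iou_threshold=0.5, score_threshold=0.0, soft_nms_sigma=0.5)
# With soft_nms_sigma > 0, overlapping boxes have their scores decayed
# rather than being dropped outright.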

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
pad_to_max_output_size::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
boxes tensor of 16-bit float or 32-bit float values
scores tensor of 16-bit float or 32-bit float values
max_output_size tensor of 32-bit integer values
iou_threshold tensor of 16-bit float or 32-bit float values
score_threshold tensor of 16-bit float or 32-bit float values
soft_nms_sigma tensor of 16-bit float or 32-bit float values

Results:

Result Description
selected_indices tensor of 32-bit integer values
selected_scores tensor of 16-bit float or 32-bit float values
valid_outputs tensor of 32-bit integer values

tf.NoOp (TF::NoOp)

Does nothing. Only useful as a placeholder for control edges.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

tf.NotEqual (TF::NotEqualOp)

Returns the truth value of (x != y) element-wise.

Traits: AlwaysSpeculatableImplTrait, Commutative

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
incompatible_shape_error::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
y tensor of tf.dtype values

Results:

Result Description
z tensor of bool values

tf.OneHot (TF::OneHotOp)

Returns a one-hot tensor.

The locations represented by indices in indices take value on_value, while all other locations take value off_value.

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis (default: the new axis is appended at the end).

If indices is a scalar the output shape will be a vector of length depth.

If indices is a vector of length features, the output shape will be:

  features x depth if axis == -1
  depth x features if axis == 0

If indices is a matrix (batch) with shape [batch, features], the output shape will be:

  batch x features x depth if axis == -1
  batch x depth x features if axis == 1
  depth x batch x features if axis == 0

Examples

Suppose that

  indices = [0, 2, -1, 1]
  depth = 3
  on_value = 5.0
  off_value = 0.0
  axis = -1

Then output is [4 x 3]:

output =
  [5.0 0.0 0.0]  // one_hot(0)
  [0.0 0.0 5.0]  // one_hot(2)
  [0.0 0.0 0.0]  // one_hot(-1)
  [0.0 5.0 0.0]  // one_hot(1)

Suppose that

  indices = [0, 2, -1, 1]
  depth = 3
  on_value = 0.0
  off_value = 3.0
  axis = 0

Then output is [3 x 4]:

output =
  [0.0 3.0 3.0 3.0]
  [3.0 3.0 3.0 0.0]
  [3.0 3.0 3.0 3.0]
  [3.0 0.0 3.0 3.0]
//  ^                one_hot(0)
//      ^            one_hot(2)
//          ^        one_hot(-1)
//              ^    one_hot(1)

Suppose that

  indices = [[0, 2], [1, -1]]
  depth = 3
  on_value = 1.0
  off_value = 0.0
  axis = -1

Then output is [2 x 2 x 3]:

output =
  [
    [1.0, 0.0, 0.0]  // one_hot(0)
    [0.0, 0.0, 1.0]  // one_hot(2)
  ][
    [0.0, 1.0, 0.0]  // one_hot(1)
    [0.0, 0.0, 0.0]  // one_hot(-1)
  ]
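
The same behavior expressed through the usual Python wrapper (a sketch, assuming tf.one_hot); this reproduces the first example above:

indices = tf.constant([0, 2, -1, 1])
tf.one_hot(indices, depth=3, on_value=5.0, off_value=0.0, axis=-1)
# ==> [[5., 0., 0.],
#      [0., 0., 5.],
#      [0., 0., 0.],
#      [0., 5., 0.]]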

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
axis::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
TI::mlir::Attributederived attribute

Operands:

Operand Description
indices tensor of 32-bit integer or 64-bit integer or 8-bit integer or 8-bit unsigned integer values
depth tensor of 32-bit integer values
on_value tensor of tf.dtype values
off_value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.OneShotIterator (TF::OneShotIteratorOp)

Makes a "one-shot" iterator that can be iterated only once.

A one-shot iterator bundles the logic for defining the dataset and the state of the iterator in a single op, which allows simple input pipelines to be defined without an additional initialization ("MakeIterator") step.

One-shot iterators have the following limitations:

  • They do not support parameterization: all logic for creating the underlying dataset must be bundled in the dataset_factory function.
  • They are not resettable. Once a one-shot iterator reaches the end of its underlying dataset, subsequent "IteratorGetNext" operations on that iterator will always produce an OutOfRange error.

For greater flexibility, use "Iterator" and "MakeIterator" to define an iterator using an arbitrary subgraph, which may capture tensors (including fed values) as parameters, and which may be reset multiple times by rerunning "MakeIterator".

Attributes:

AttributeMLIR TypeDescription
dataset_factory::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute

Results:

Result Description
handle tensor of resource values

tf.OnesLike (TF::OnesLikeOp)

Returns a tensor of ones with the same shape and type as x.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
y tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.OptimizeDatasetV2 (TF::OptimizeDatasetV2Op)

Creates a dataset by applying related optimizations to input_dataset.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
optimization_configs::mlir::ArrayAttrstring array attribute

Operands:

Operand Description
input_dataset tensor of variant values
optimizations_enabled tensor of string values
optimizations_disabled tensor of string values
optimizations_default tensor of string values

Results:

Result Description
handle tensor of variant values

tf.OptionalFromValue (TF::OptionalFromValueOp)

Constructs an Optional variant from a tuple of tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Toutput_types::mlir::Attributederived attribute

Operands:

Operand Description
components variadic of tensor of tf.dtype values

Results:

Result Description
optional tensor of variant values

tf.OptionalGetValue (TF::OptionalGetValueOp)

Returns the value stored in an Optional variant or raises an error if none exists.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
output_shapes::mlir::Attributederived attribute
output_types::mlir::Attributederived attribute

Operands:

Operand Description
optional tensor of variant values

Results:

Result Description
components variadic of tensor of tf.dtype values

tf.OptionalHasValue (TF::OptionalHasValueOp)

Returns true if and only if the given Optional variant has a value.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
optional tensor of variant values

Results:

Result Description
has_value tensor of bool values

tf.OptionalNone (TF::OptionalNoneOp)

Creates an Optional variant with no value.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Results:

Result Description
optional tensor of variant values

tf.OutfeedEnqueue (TF::OutfeedEnqueueOp)

Enqueue a Tensor on the computation outfeed.

Attributes:

AttributeMLIR TypeDescription
dtype::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

tf.OutfeedEnqueueTuple (TF::OutfeedEnqueueTupleOp)

Enqueue multiple Tensor values on the computation outfeed.

Attributes:

AttributeMLIR TypeDescription
dtypes::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

tf.Pack (TF::PackOp)

Packs a list of N rank-R tensors into one rank-(R+1) tensor.

Packs the N tensors in values into a tensor with rank one higher than each tensor in values, by packing them along the axis dimension. Given a list of tensors of shape (A, B, C):

If axis == 0 then the output tensor will have the shape (N, A, B, C). If axis == 1 then the output tensor will have the shape (A, N, B, C). Etc.

For example:

# 'x' is [1, 4]
# 'y' is [2, 5]
# 'z' is [3, 6]
pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]  # Pack along first dim.
pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]

This is the opposite of unpack.
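
In the Python API this op is typically produced by tf.stack; a minimal sketch mirroring the example above:

x = tf.constant([1, 4])
y = tf.constant([2, 5])
z = tf.constant([3, 6])
tf.stack([x, y, z])          # ==> [[1, 4], [2, 5], [3, 6]]
tf.stack([x, y, z], axis=1)  # ==> [[1, 2, 3], [4, 5, 6]]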

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
axis::mlir::IntegerAttr64-bit signless integer attribute
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
values variadic of tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.Pad (TF::PadOp)

Pads a tensor with zeros.

This operation pads an input with zeros according to the paddings you specify. paddings is an integer tensor with shape [Dn, 2], where n is the rank of input. For each dimension D of input, paddings[D, 0] indicates how many zeros to add before the contents of input in that dimension, and paddings[D, 1] indicates how many zeros to add after the contents of input in that dimension.

The padded size of each dimension D of the output is:

paddings(D, 0) + input.dim_size(D) + paddings(D, 1)

For example:

# 't' is [[1, 1], [2, 2]]
# 'paddings' is [[1, 1], [2, 2]]
# rank of 't' is 2
pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0]
                      [0, 0, 1, 1, 0, 0]
                      [0, 0, 2, 2, 0, 0]
                      [0, 0, 0, 0, 0, 0]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_FoldOperandsTransposeInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tpaddings::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
paddings tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.PadV2 (TF::PadV2Op)

Pads a tensor.

This operation pads input according to the paddings and constant_values you specify. paddings is an integer tensor with shape [Dn, 2], where n is the rank of input. For each dimension D of input, paddings[D, 0] indicates how many padding values to add before the contents of input in that dimension, and paddings[D, 1] indicates how many padding values to add after the contents of input in that dimension. constant_values is a scalar tensor of the same type as input that indicates the value to use for padding input.

The padded size of each dimension D of the output is:

paddings(D, 0) + input.dim_size(D) + paddings(D, 1)

For example:

# 't' is [[1, 1], [2, 2]]
# 'paddings' is [[1, 1], [2, 2]]
# 'constant_values' is 0
# rank of 't' is 2
pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0]
                      [0, 0, 1, 1, 0, 0]
                      [0, 0, 2, 2, 0, 0]
                      [0, 0, 0, 0, 0, 0]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tpaddings::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
paddings tensor of 32/64-bit signed integer values
constant_values tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.ParallelDynamicStitch (TF::ParallelDynamicStitchOp)

Interleave the values from the data tensors into a single tensor.

Builds a merged tensor such that

    merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]

For example, if each indices[m] is scalar or vector, we have

    # Scalar indices:
    merged[indices[m], ...] = data[m][...]

    # Vector indices:
    merged[indices[m][i], ...] = data[m][i, ...]

Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is

merged.shape = [max(indices)] + constant

Values may be merged in parallel, so if an index appears in both indices[m][i] and indices[n][j], the result may be invalid. This differs from the normal DynamicStitch operator that defines the behavior in that case.

For example:

    indices[0] = 6
    indices[1] = [4, 1]
    indices[2] = [[5, 2], [0, 3]]
    data[0] = [61, 62]
    data[1] = [[41, 42], [11, 12]]
    data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
    merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
              [51, 52], [61, 62]]

This method can be used to merge partitions created by dynamic_partition, as illustrated in the following example:

    # Apply function (increments x_i) on elements for which a certain condition
    # apply (x_i != -1 in this example).
    x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
    condition_mask=tf.not_equal(x,tf.constant(-1.))
    partitioned_data = tf.dynamic_partition(
        x, tf.cast(condition_mask, tf.int32) , 2)
    partitioned_data[1] = partitioned_data[1] + 1.0
    condition_indices = tf.dynamic_partition(
        tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2)
    x = tf.dynamic_stitch(condition_indices, partitioned_data)
    # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain
    # unchanged.

Traits: AlwaysSpeculatableImplTrait, SameVariadicOperandSize

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
indices variadic of tensor of 32-bit integer values
data variadic of tensor of tf.dtype values

Results:

Result Description
merged tensor of tf.dtype values

tf.ParallelMapDataset (TF::ParallelMapDatasetOp)

Creates a dataset that applies f to the outputs of input_dataset.

Unlike a "MapDataset", which applies f sequentially, this dataset invokes up to num_parallel_calls copies of f in parallel.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
use_inter_op_parallelism::mlir::BoolAttrbool attribute
sloppy::mlir::BoolAttrbool attribute
preserve_cardinality::mlir::BoolAttrbool attribute
metadata::mlir::StringAttrstring attribute
Targuments::mlir::Attributederived attribute

Operands:

Operand Description
input_dataset tensor of variant values
other_arguments variadic of tensor of tf.dtype values
num_parallel_calls tensor of 32-bit integer values

Results:

Result Description
handle tensor of variant values

tf.ParallelMapDatasetV2 (TF::ParallelMapDatasetV2Op)

Creates a dataset that applies f to the outputs of input_dataset.

Unlike a "MapDataset", which applies f sequentially, this dataset invokes up to num_parallel_calls copies of f in parallel.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
use_inter_op_parallelism::mlir::BoolAttrbool attribute
deterministic::mlir::StringAttrstring attribute
preserve_cardinality::mlir::BoolAttrbool attribute
use_unbounded_threadpool::mlir::BoolAttrbool attribute
metadata::mlir::StringAttrstring attribute
Targuments::mlir::Attributederived attribute

Operands:

Operand Description
input_dataset tensor of variant values
other_arguments variadic of tensor of tf.dtype values
num_parallel_calls tensor of 64-bit integer values

Results:

Result Description
handle tensor of variant values

tf.ParameterizedTruncatedNormal (TF::ParameterizedTruncatedNormalOp)

Outputs random values from a normal distribution. The parameters may each be a

scalar which applies to the entire output, or a vector of length shape[0] which stores the parameters for each batch.

Traits: TF_CannotDuplicate

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
means tensor of floating-point values
stdevs tensor of floating-point values
minvals tensor of floating-point values
maxvals tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.ParseExample (TF::ParseExampleOp)

Transforms a vector of tf.Example protos (as strings) into typed tensors.

Traits: AlwaysSpeculatableImplTrait, AttrSizedOperandSegments, AttrSizedResultSegments

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
dense_shapes::mlir::ArrayAttrtensorflow shape attribute array
Nsparse::mlir::Attributederived attribute
Ndense::mlir::Attributederived attribute
Tdense::mlir::Attributederived attribute
sparse_types::mlir::Attributederived attribute

Operands:

Operand Description
serialized tensor of string values
names tensor of string values
sparse_keys variadic of tensor of string values
dense_keys variadic of tensor of string values
dense_defaults variadic of tensor of 32-bit float or 64-bit integer or string values

Results:

Result Description
sparse_indices variadic of tensor of 64-bit integer values
sparse_values variadic of tensor of 32-bit float or 64-bit integer or string values
sparse_shapes variadic of tensor of 64-bit integer values
dense_values variadic of tensor of 32-bit float or 64-bit integer or string values

tf.ParseExampleV2 (TF::ParseExampleV2Op)

Transforms a vector of tf.Example protos (as strings) into typed tensors.

Traits: AlwaysSpeculatableImplTrait, AttrSizedResultSegments

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_sparse::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 0
dense_shapes::mlir::ArrayAttrtensorflow shape attribute array
Tdense::mlir::Attributederived attribute
sparse_types::mlir::Attributederived attribute
ragged_value_types::mlir::Attributederived attribute
ragged_split_types::mlir::Attributederived attribute

Operands:

Operand Description
serialized tensor of string values
names tensor of string values
sparse_keys tensor of string values
dense_keys tensor of string values
ragged_key