'tf' Dialect

The TensorFlow dialect.

This dialect maps to TensorFlow operations.

Invariants:

  • All values are of Tensor type (in particular, scalars are represented using zero-dimensional tensors).

TODO: Make invariants more structured so that we can reference them in ops.

Operations

tf._ArrayToList (TF::_ArrayToListOp)

Converts an array of tensors to a list of tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
N ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute
out_types ::mlir::Attribute derived attribute

Operands:

Operand Description
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf._EagerConst (TF::_EagerConstOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf._FusedBatchNormEx (TF::_FusedBatchNormExOp)

Internal FusedBatchNorm operation: reserved for internal use.

Do not invoke this operator directly in Python. A fusion optimization is expected to create these operators.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
epsilon ::mlir::FloatAttr 32-bit float attribute
exponential_avg_factor ::mlir::FloatAttr 32-bit float attribute
activation_mode ::mlir::StringAttr string attribute
data_format ::mlir::StringAttr 'NHWC' or 'NCHW' convnet data format
is_training ::mlir::BoolAttr bool attribute
num_side_inputs ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute
U ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of bfloat16 or 16-bit float or 32-bit float values
scale tensor of 32-bit float values
offset tensor of 32-bit float values
mean tensor of 32-bit float values
variance tensor of 32-bit float values
side_input variadic of tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
y tensor of bfloat16 or 16-bit float or 32-bit float values
batch_mean tensor of 32-bit float values
batch_variance tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values
reserve_space_3 tensor of 32-bit float values

tf._FusedConv2D (TF::_FusedConv2DOp)

Performs a convolution followed by a specified series of operations.

The inputs to the convolution are input and filter. The series of operations that follows is specified by the fused_ops attribute, which is a list of TF op names specified as strings (e.g. "Relu"). They are performed in order, where the (first) input to each op is the output of the preceding op. The first input and the output of each fused_op must be of type T.

Currently supported fused_op combinations are: [X] and [X,A], where X is one of {"BiasAdd","FusedBatchNorm"} and A is one of {"Elu","Relu","Relu6"}.

  • The first input to op X is the Conv2D result, and the additional input(s) to X are specified by args.
  • If there is an op A specified, the output of op X is the input to op A, and op A produces the _FusedConv2D output. Otherwise, op X produces the _FusedConv2D output.
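The ordering rule above can be sketched in plain Python. This is a hypothetical illustration of how the fused_ops list is applied (the function name and the flat-list representation are invented for the example; the real op fuses these steps into a single kernel):

```python
# Hypothetical sketch of the fused_ops ordering rule: the Conv2D result is
# threaded through each fused op in sequence, so [X] or [X, A] means
# "apply X to the conv output, then (optionally) apply activation A".
def apply_fused_ops(conv_result, fused_ops, bias=None):
    out = list(conv_result)  # stand-in for the (flattened) Conv2D output
    for name in fused_ops:
        if name == "BiasAdd":  # the additional input comes from `args`
            out = [v + b for v, b in zip(out, bias)]
        elif name == "Relu":
            out = [max(v, 0.0) for v in out]
        elif name == "Relu6":
            out = [min(max(v, 0.0), 6.0) for v in out]
        else:
            raise ValueError(f"unsupported fused op: {name}")
    return out

# BiasAdd runs first, then Relu6 clamps the result to [0, 6].
print(apply_fused_ops([-2.0, 1.0, 8.0], ["BiasAdd", "Relu6"], bias=[1.0, 1.0, 1.0]))
```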

Traits: AlwaysSpeculatableImplTrait, AttrSizedOperandSegments

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
num_args ::mlir::IntegerAttr 64-bit signless integer attribute whose minimum value is 0
strides ::mlir::ArrayAttr 64-bit integer array attribute
padding ::mlir::StringAttr string attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings ::mlir::ArrayAttr 64-bit integer array attribute
data_format ::mlir::StringAttr string attribute whose value is NHWC, or NCHW, or NCHW_VECT_C
filter_format ::mlir::StringAttr string attribute whose value is HWIO, or OIHW, or OIHW_VECT_I
dilations ::mlir::ArrayAttr 64-bit integer array attribute
use_cudnn_on_gpu ::mlir::BoolAttr bool attribute
fused_ops ::mlir::ArrayAttr string array attribute
epsilon ::mlir::FloatAttr 32-bit float attribute
leakyrelu_alpha ::mlir::FloatAttr 32-bit float attribute
num_host_args ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute
TArgs ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float or 8-bit integer or 8-bit quantized integer values
filter tensor of 16-bit float or 32-bit float or 64-bit float or 8-bit integer or 8-bit quantized integer values
args variadic of tensor of tf.dtype values
host_args variadic of tensor of 32-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float or 8-bit integer or 8-bit quantized integer values

tf._FusedMatMul (TF::_FusedMatMulOp)

Performs a MatMul followed by a specified series of operations.

The inputs to the MatMul are specified by a and b. The series of operations that follows is specified by the fused_ops attribute, which is a list of TF op names specified as strings (e.g. "Relu"). They are performed in order, where the (first) input to each op is the output of the preceding op. The first input and the output of each fused_op must be of type T.

Currently supported fused_op combinations are: ["BiasAdd"] and ["BiasAdd",A], where A is one of {"Elu","Relu","Relu6"}.

  • The first input to BiasAdd is the MatMul result, and the additional BiasAdd input is specified by args.
  • If there is an op A specified, the output of the BiasAdd is the input to op A, and op A produces the _FusedMatMul output. Otherwise, the BiasAdd produces the _FusedMatMul output.
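As a sketch, the ["BiasAdd", "Relu"] combination is equivalent to a plain matmul followed by a bias add and a ReLU (pure-Python illustration; the fused_matmul name is invented for this example):

```python
# Hypothetical sketch of _FusedMatMul with fused_ops=["BiasAdd", "Relu"]:
# matmul, then bias addition, then the optional activation, conceptually
# executed as one op.
def fused_matmul(a, b, bias, activation=None):
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[sum(a[i][k] * b[k][j] for k in range(inner)) + bias[j]
            for j in range(cols)] for i in range(rows)]
    if activation == "Relu":
        out = [[max(v, 0.0) for v in row] for row in out]
    return out

print(fused_matmul([[1.0, 2.0]], [[1.0, -1.0], [0.0, 2.0]], [0.5, -4.0], "Relu"))
```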

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
transpose_a ::mlir::BoolAttr bool attribute
transpose_b ::mlir::BoolAttr bool attribute
fused_ops ::mlir::ArrayAttr string array attribute
epsilon ::mlir::FloatAttr 32-bit float attribute
leakyrelu_alpha ::mlir::FloatAttr 32-bit float attribute
num_args ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
a tensor of bfloat16 or 16-bit float or 32-bit float values
b tensor of bfloat16 or 16-bit float or 32-bit float values
args variadic of tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
product tensor of bfloat16 or 16-bit float or 32-bit float values

tf._HostRecv (TF::_HostRecvOp)

Receives the named tensor from send_device on recv_device.

_HostRecv produces its output on host memory whereas _Recv produces its output on device memory.

Interfaces: GetResourceInstanceInterface, TF_RecvSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}

Attributes:

Attribute MLIR Type Description
tensor_name ::mlir::StringAttr string attribute
send_device ::mlir::StringAttr string attribute
send_device_incarnation ::mlir::IntegerAttr 64-bit signless integer attribute
recv_device ::mlir::StringAttr string attribute
client_terminated ::mlir::BoolAttr bool attribute
tensor_type ::mlir::Attribute derived attribute

Results:

Result Description
tensor tensor of tf.dtype values

tf._HostSend (TF::_HostSendOp)

Sends the named tensor from send_device to recv_device.

_HostSend requires its input on host memory whereas _Send requires its input on device memory.

Interfaces: GetResourceInstanceInterface, TF_SendSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}

Attributes:

Attribute MLIR Type Description
tensor_name ::mlir::StringAttr string attribute
send_device ::mlir::StringAttr string attribute
send_device_incarnation ::mlir::IntegerAttr 64-bit signless integer attribute
recv_device ::mlir::StringAttr string attribute
client_terminated ::mlir::BoolAttr bool attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values

tf._InternalTestMustExecuteTrait_ (TF::InternalTestMustExecuteTrait)

Internal op for testing only

Interfaces: TF_MustExecute (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

tf._InternalTestNonResourceValueSideEffects_ (TF::InternalTestNonResourceValueSideEffects)

Internal op for testing only

Operands:

Operand Description
key tensor of string values

tf._ListToArray (TF::_ListToArrayOp)

Converts a list of tensors to an array of tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
Tin ::mlir::Attribute derived attribute
N ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf._Recv (TF::_RecvOp)

Receives the named tensor from send_device on recv_device.

Interfaces: GetResourceInstanceInterface, TF_RecvSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}

Attributes:

Attribute MLIR Type Description
tensor_name ::mlir::StringAttr string attribute
send_device ::mlir::StringAttr string attribute
send_device_incarnation ::mlir::IntegerAttr 64-bit signless integer attribute
recv_device ::mlir::StringAttr string attribute
client_terminated ::mlir::BoolAttr bool attribute
tensor_type ::mlir::Attribute derived attribute

Results:

Result Description
tensor tensor of tf.dtype values

tf._Send (TF::_SendOp)

Sends the named tensor from send_device to recv_device.

Interfaces: GetResourceInstanceInterface, TF_SendSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}

Attributes:

Attribute MLIR Type Description
tensor_name ::mlir::StringAttr string attribute
send_device ::mlir::StringAttr string attribute
send_device_incarnation ::mlir::IntegerAttr 64-bit signless integer attribute
recv_device ::mlir::StringAttr string attribute
client_terminated ::mlir::BoolAttr bool attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values

tf._TPUCompileMlir (TF::_TPUCompileMlirOp)

Compiles computations for execution on one or more TPU devices.

For the internal use of the distributed TPU compiler.

'mlir_module' is a serialized MLIR module with a main function that contains target computation. 'dynamic_shapes' contains dynamic shapes of arguments whose shapes were not known statically at TPUReplication rewrite time. 'metadata' is a serialized TPUCompileMetadataProto describing the shapes and types of the inputs to the computation, as well as a mapping onto the TPU pod topology. 'program' output is a string key that is passed to the TPUExecute op and used to look up the program in the compilation cache.

Interfaces: TF_MustExecute (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

Attribute MLIR Type Description
mlir_module ::mlir::StringAttr string attribute
metadata ::mlir::StringAttr string attribute
NumDynamicShapes ::mlir::Attribute derived attribute
num_computations ::mlir::Attribute derived attribute

Operands:

Operand Description
dynamic_shapes variadic of tensor of 64-bit integer values

Results:

Result Description
compilation_status tensor of string values
program variadic of tensor of string values

tf._TPUDeviceOrdinalPlaceholder (TF::_TPUDeviceOrdinalPlaceholderOp)

Placeholder for a device ordinal that depends on its tf_device.replicate ancestor.

This op must have a tf_device.replicate ancestor. The ancestor's replica_id, together with the logical_core attribute, corresponds to a TPU core. This op maps that TPU core to a device_ordinal, where the device ordinal is the index of the core relative to its host.

The replicate_to_island pass removes and flattens tf_device.replicate, so it converts this op to the constant index of the core relative to its host.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
logical_core ::mlir::IntegerAttr 64-bit signless integer attribute

Results:

Result Description
device_ordinal tensor of 64-bit integer values

tf._UnaryOpsComposition (TF::_UnaryOpsCompositionOp)

NOTE: Do not invoke this operator directly in Python. A graph rewrite pass is expected to create these operators.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
op_names ::mlir::ArrayAttr string array attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
y tensor of 16-bit float or 32-bit float or 64-bit float values

tf._XlaCompile (TF::_XlaCompileOp)

XLA Compile Op. For use by the XLA JIT only.

Compiles a TensorFlow function into an XLA LocalExecutable and returns a key that _XlaRun can use to look up the LocalExecutable and execute it.

Traits: AttrSizedOperandSegments

Attributes:

Attribute MLIR Type Description
must_compile ::mlir::BoolAttr bool attribute
function ::mlir::SymbolRefAttr symbol reference attribute
Nresources ::mlir::Attribute derived attribute
Targs ::mlir::Attribute derived attribute
Tconstants ::mlir::Attribute derived attribute

Operands:

Operand Description
constants variadic of tensor of tf.dtype values
args variadic of tensor of tf.dtype values
resources variadic of tensor of resource values

Results:

Result Description
key tensor of string values
compilation_successful tensor of bool values

tf._XlaCompileMlirPlaceholderProgramKey (TF::_XlaCompileMlirPlaceholderProgramKeyOp)

Placeholder program key (compilation cache key) of an XLA program.

This op can be used when certain rewrite passes materialize ops that require a program key but the _TPUCompileMlir or _XlaCompile op has not been added yet. Subsequent rewrite passes must replace this op with the program output.

Interfaces: TF_MustExecute (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Results:

Result Description
program tensor of string values

tf._XlaHostComputeMlir (TF::_XlaHostComputeMlirOp)

A pseudo-op to represent host-side computation in an XLA program.

Interfaces: TF_RecvSideEffect (MemoryEffectOpInterface), TF_SendSideEffect (MemoryEffectOpInterface), TF_XlaHostComputeSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::XlaHostCompute}

Attributes:

Attribute MLIR Type Description
send_key ::mlir::StringAttr string attribute
recv_key ::mlir::StringAttr string attribute
host_mlir_module ::mlir::StringAttr string attribute
manual_sharding ::mlir::BoolAttr bool attribute
Tinputs ::mlir::Attribute derived attribute
Toutputs ::mlir::Attribute derived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf._XlaRecvAtHost (TF::_XlaRecvAtHostOp)

A placeholder op to receive values from a running XLA computation.

Interfaces: GetResourceInstanceInterface, TF_RecvSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}

Attributes:

Attribute MLIR Type Description
key ::mlir::StringAttr string attribute
device_ordinal ::mlir::IntegerAttr 64-bit signless integer attribute
device_type ::mlir::StringAttr string attribute
Toutputs ::mlir::Attribute derived attribute

Operands:

Operand Description
dynamic_key tensor of string values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf._XlaRecvAtHostV2 (TF::_XlaRecvAtHostV2Op)

A placeholder op to receive values from a running XLA computation with support for a runtime device ordinal.

Interfaces: GetResourceInstanceInterface, TF_RecvSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}

Attributes:

Attribute MLIR Type Description
key ::mlir::StringAttr string attribute
device_type ::mlir::StringAttr string attribute
Toutputs ::mlir::Attribute derived attribute

Operands:

Operand Description
dynamic_key tensor of string values
device_ordinal tensor of 64-bit integer values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf._XlaRun (TF::_XlaRunOp)

XLA Run Op. For use by the XLA JIT only.

Executes a TensorFlow function previously compiled into a LocalExecutable by an _XlaCompile op.

Interfaces: MemoryEffectOpInterface

Attributes:

Attribute MLIR Type Description
Targs ::mlir::Attribute derived attribute
Tresults ::mlir::Attribute derived attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values
key tensor of string values

Results:

Result Description
results variadic of tensor of tf.dtype values

tf._XlaSendFromHost (TF::_XlaSendFromHostOp)

A placeholder op to send values to a running XLA computation.

Interfaces: GetResourceInstanceInterface, TF_SendSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}

Attributes:

Attribute MLIR Type Description
key ::mlir::StringAttr string attribute
device_ordinal ::mlir::IntegerAttr 64-bit signless integer attribute
device_type ::mlir::StringAttr string attribute
Tinputs ::mlir::Attribute derived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values
dynamic_key tensor of string values

tf._XlaSendFromHostV2 (TF::_XlaSendFromHostV2Op)

A placeholder op to send values to a running XLA computation with support for a runtime device ordinal.

Interfaces: GetResourceInstanceInterface, TF_SendSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}

Attributes:

Attribute MLIR Type Description
key ::mlir::StringAttr string attribute
device_type ::mlir::StringAttr string attribute
Tinputs ::mlir::Attribute derived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values
dynamic_key tensor of string values
device_ordinal tensor of 64-bit integer values

tf.Abs (TF::AbsOp)

Computes the absolute value of a tensor.

Given a tensor x, this operation returns a tensor containing the absolute value of each element in x. For example, if x is an input element and y is an output element, this operation computes \(y = |x|\).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

Results:

Result Description
y tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

tf.Acos (TF::AcosOp)

Computes acos of x element-wise.

Provided an input tensor, the tf.math.acos operation returns the inverse cosine of each element of the tensor. If y = tf.math.cos(x), then x = tf.math.acos(y).

Input range is [-1, 1] and the output has a range of [0, pi].
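The inverse relationship and the output range can be checked with the standard-library math module (used here instead of TensorFlow to keep the example dependency-free):

```python
import math

# If y = cos(x), then x = acos(y), for x in the principal range [0, pi].
x = 1.0
y = math.cos(x)
recovered = math.acos(y)
assert abs(recovered - x) < 1e-12

# The endpoints of the input range [-1, 1] map to the output range [0, pi].
print(math.acos(1.0), math.acos(-1.0))  # 0.0 and pi
```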

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Acosh (TF::AcoshOp)

Computes inverse hyperbolic cosine of x element-wise.

Given an input tensor, the function computes inverse hyperbolic cosine of every element. Input range is [1, inf]. It returns nan if the input lies outside the range.

x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float("inf")])
tf.math.acosh(x) ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Add (TF::AddOp)

Returns x + y element-wise.

Given two input tensors, the tf.add operation computes the sum for every element in the tensor.

Both input and output have a range (-inf, inf).

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_LayoutAgnostic, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or signed integer or complex or 8-bit unsigned integer or string values
y tensor of floating-point or signed integer or complex or 8-bit unsigned integer or string values

Results:

Result Description
z tensor of floating-point or signed integer or complex or 8-bit unsigned integer or string values

tf.AddN (TF::AddNOp)

Add all input tensors element-wise.

Inputs must be of the same size and shape.

  x = [9, 7, 10]
  tf.math.add_n(x) ==> 26

Traits: AlwaysSpeculatableImplTrait, Commutative

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
N ::mlir::Attribute derived attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
inputs variadic of tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer or variant values

Results:

Result Description
sum tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer or variant values

tf.AddV2 (TF::AddV2Op)

Returns x + y element-wise.

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape, TF_CwiseBinary, TF_LayoutAgnostic, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.AdjustContrastv2 (TF::AdjustContrastv2Op)

Adjust the contrast of one or more images.

images is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as [height, width, channels]. The other dimensions only represent a collection of images, such as [batch, height, width, channels].

Contrast is adjusted independently for each channel of each image.

For each channel, the Op first computes the mean of the image pixels in the channel and then adjusts each component of each pixel to (x - mean) * contrast_factor + mean.
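The per-channel formula can be sketched for a single channel in plain Python (the function name and flat-list channel representation are invented for this example):

```python
# Contrast adjustment for one channel: (x - mean) * contrast_factor + mean,
# where mean is the mean of that channel's pixel values.
def adjust_contrast(channel, contrast_factor):
    mean = sum(channel) / len(channel)
    return [(x - mean) * contrast_factor + mean for x in channel]

# Doubling the contrast spreads values away from the channel mean (0.5 here).
print(adjust_contrast([0.0, 0.5, 1.0], 2.0))  # [-0.5, 0.5, 1.5]
```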

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
images tensor of 16-bit float or 32-bit float values
contrast_factor tensor of 32-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float values

tf.AdjustHue (TF::AdjustHueOp)

Adjust the hue of one or more images.

images is a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three.

The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A delta is then applied to all the hue values, and the colors are then mapped back to RGB colorspace.
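For a single pixel, the RGB -> HSV -> RGB round trip can be illustrated with the standard-library colorsys module (a stand-in for the op's internal conversion; note colorsys represents hue in [0, 1) rather than degrees):

```python
import colorsys

# Shift the hue of one RGB pixel by `delta` (in colorsys units, where the
# full hue circle is 1.0), leaving saturation and value untouched.
def adjust_hue(rgb, delta):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + delta) % 1.0, s, v)

# Rotating pure red by a third of the hue circle lands (up to float error)
# on pure green.
print(adjust_hue((1.0, 0.0, 0.0), 1.0 / 3.0))
```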

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
images tensor of 16-bit float or 32-bit float values
delta tensor of 32-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float values

tf.AdjustSaturation (TF::AdjustSaturationOp)

Adjust the saturation of one or more images.

images is a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three.

The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A scale is then applied to all the saturation values, and the colors are then mapped back to RGB colorspace.
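The analogous single-pixel sketch with colorsys (clamping the scaled saturation to [0, 1] is an assumption of this illustration):

```python
import colorsys

# Scale the saturation of one RGB pixel, clamping the result to [0, 1].
def adjust_saturation(rgb, scale):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, min(max(s * scale, 0.0), 1.0), v)

# A scale of 0 removes all saturation, leaving every channel at the value v.
print(adjust_saturation((1.0, 0.5, 0.5), 0.0))  # (1.0, 1.0, 1.0)
```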

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
images tensor of 16-bit float or 32-bit float values
scale tensor of 32-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float values

tf.All (TF::AllOp)

Computes the "logical and" of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
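The reduction and keep_dims behavior can be sketched for a 2-D boolean input in plain Python (nested lists stand in for tensors; the function name is invented for the example):

```python
# "Logical and" reduction of a 2-D boolean input along one axis; with
# keep_dims the reduced dimension is retained with length 1.
def reduce_all(matrix, axis, keep_dims=False):
    if axis == 0:
        out = [all(row[j] for row in matrix) for j in range(len(matrix[0]))]
        return [out] if keep_dims else out
    out = [all(row) for row in matrix]
    return [[v] for v in out] if keep_dims else out

data = [[True, True], [True, False]]
print(reduce_all(data, axis=1))                  # [True, False]
print(reduce_all(data, axis=1, keep_dims=True))  # [[True], [False]]
```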

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
keep_dims ::mlir::BoolAttr bool attribute
Tidx ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bool values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of bool values

tf.AllToAll (TF::AllToAllOp)

An Op to exchange data across TPU replicas.

On each replica, the input is split into split_count blocks along split_dimension and sent to the other replicas given group_assignment. After receiving split_count - 1 blocks from the other replicas, the op concatenates the blocks along concat_dimension to produce the output.

For example, suppose there are 2 TPU replicas:

replica 0 receives input: [[A, B]]
replica 1 receives input: [[C, D]]

with group_assignment=[[0, 1]], concat_dimension=0, split_dimension=1, split_count=2:

replica 0's output: [[A], [C]]
replica 1's output: [[B], [D]]
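The 2-replica example above can be simulated in plain Python (the all_to_all helper is invented for this sketch and hard-codes split_dimension=1 and concat_dimension=0 for one-row inputs):

```python
# Simulate AllToAll for n replicas holding one-row inputs: each input row is
# split into n single-column blocks (split_dimension=1), and replica r
# concatenates block r from every replica along rows (concat_dimension=0).
def all_to_all(inputs):
    n = len(inputs)
    split = [[[[row[i]] for row in inputs[r]] for i in range(n)]
             for r in range(n)]
    return [sum((split[src][r] for src in range(n)), []) for r in range(n)]

# replica 0 holds [[A, B]], replica 1 holds [[C, D]]
print(all_to_all([[["A", "B"]], [["C", "D"]]]))  # [[['A'], ['C']], [['B'], ['D']]]
```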

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
concat_dimension ::mlir::IntegerAttr 64-bit signless integer attribute
split_dimension ::mlir::IntegerAttr 64-bit signless integer attribute
split_count ::mlir::IntegerAttr 64-bit signless integer attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
group_assignment tensor of 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.Angle (TF::AngleOp)

Returns the argument of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of type float that is the argument of each element in input. All elements in input must be complex numbers of the form \(a + bj\), where a is the real part and b is the imaginary part.

The argument returned by this operation is of the form \(atan2(b, a)\).

For example:

# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.math.angle(input) ==> [2.0132, 1.056]

Numpy compatibility: equivalent to np.angle.
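The atan2 form can be checked with the standard library, where cmath.phase computes exactly this argument:

```python
import cmath
import math

# The argument of a + bj is atan2(b, a); cmath.phase gives the same value.
z = -2.25 + 4.75j
print(cmath.phase(z))              # ~2.0132, matching the example above
print(math.atan2(z.imag, z.real))
```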

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute
Tout ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 32/64-bit float values

tf.AnonymousIterator (TF::AnonymousIteratorOp)

A container for an iterator resource.

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Attributes:

Attribute MLIR Type Description
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.AnonymousIteratorV2 (TF::AnonymousIteratorV2Op)

A container for an iterator resource.

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Attributes:

Attribute MLIR Type Description
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values
deleter tensor of variant values

tf.AnonymousIteratorV3 (TF::AnonymousIteratorV3Op)

A container for an iterator resource.

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Attributes:

Attribute MLIR Type Description
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.AnonymousMemoryCache (TF::AnonymousMemoryCacheOp)

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Results:

Result Description
handle tensor of resource values
deleter tensor of variant values

tf.AnonymousMultiDeviceIterator (TF::AnonymousMultiDeviceIteratorOp)

A container for a multi device iterator resource.

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Attributes:

Attribute MLIR Type Description
devices ::mlir::ArrayAttr string array attribute with at least 1 elements
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values
deleter tensor of variant values

tf.AnonymousMultiDeviceIteratorV3 (TF::AnonymousMultiDeviceIteratorV3Op)

A container for a multi device iterator resource.

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Attributes:

Attribute MLIR Type Description
devices ::mlir::ArrayAttr string array attribute with at least 1 elements
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.AnonymousRandomSeedGenerator (TF::AnonymousRandomSeedGeneratorOp)

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Operands:

Operand Description
seed tensor of 64-bit integer values
seed2 tensor of 64-bit integer values

Results:

Result Description
handle tensor of resource values
deleter tensor of variant values

tf.AnonymousSeedGenerator (TF::AnonymousSeedGeneratorOp)

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Operands:

Operand Description
seed tensor of 64-bit integer values
seed2 tensor of 64-bit integer values
reshuffle tensor of bool values

Results:

Result Description
handle tensor of resource values
deleter tensor of variant values

tf.Any (TF::AnyOp)

Computes the "logical or" of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
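The reduction described above can be sketched in NumPy (a simplified sketch of the semantics, not the TF kernel):

```python
import numpy as np

# Sketch of tf.Any: logical-or reduction along reduction_indices,
# optionally keeping the reduced dimensions with length 1.
def any_op(x, reduction_indices, keep_dims=False):
    return np.any(x, axis=tuple(reduction_indices), keepdims=keep_dims)

x = np.array([[True, False], [False, False]])
print(any_op(x, [1]))                  # reduces rank: shape (2,)
print(any_op(x, [1], keep_dims=True))  # keeps the reduced dim: shape (2, 1)
```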

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
keep_dims ::mlir::BoolAttr bool attribute
Tidx ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bool values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of bool values

tf.ApproximateEqual (TF::ApproximateEqualOp)

Returns the truth value of abs(x-y) < tolerance element-wise.
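The element-wise comparison can be sketched in NumPy (an illustrative sketch; the default tolerance value here is an assumption for the example):

```python
import numpy as np

# Sketch of tf.ApproximateEqual: element-wise abs(x - y) < tolerance.
def approximate_equal(x, y, tolerance=1e-5):
    return np.abs(np.asarray(x) - np.asarray(y)) < tolerance

print(approximate_equal([1.0, 2.0], [1.000001, 2.1]))  # [ True False]
```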

Traits: AlwaysSpeculatableImplTrait, Commutative

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
tolerance ::mlir::FloatAttr 32-bit float attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of number values
y tensor of number values

Results:

Result Description
z tensor of bool values

tf.ApproxTopK (TF::ApproxTopKOp)

Returns the min or max k values of the input operand, together with their indices, in an approximate manner.

See https://arxiv.org/abs/2206.14286 for the algorithm details. This op is only optimized on TPU currently.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
k ::mlir::IntegerAttr 64-bit signless integer attribute whose minimum value is 0
reduction_dimension ::mlir::IntegerAttr 64-bit signless integer attribute
recall_target ::mlir::FloatAttr 32-bit float attribute
is_max_k ::mlir::BoolAttr bool attribute
reduction_input_size_override ::mlir::IntegerAttr 64-bit signless integer attribute
aggregate_to_topk ::mlir::BoolAttr bool attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
values tensor of bfloat16 or 16-bit float or 32-bit float values
indices tensor of 32-bit integer values

tf.ArgMax (TF::ArgMaxOp)

Returns the index with the largest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Usage:

  import tensorflow as tf
  a = [1, 10, 26.9, 2.8, 166.32, 62.3]
  b = tf.math.argmax(input = a)
  c = tf.keras.backend.eval(b)
  # c = 4
  # here a[4] = 166.32 which is the largest element of a across axis 0

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute
Tidx ::mlir::Attribute derived attribute
output_type ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bfloat16 or bool or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
dimension tensor of 16-bit integer or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of 16-bit integer or 32-bit integer or 64-bit integer or 16-bit unsigned integer values

tf.ArgMin (TF::ArgMinOp)

Returns the index with the smallest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Usage:

  import tensorflow as tf
  a = [1, 10, 26.9, 2.8, 166.32, 62.3]
  b = tf.math.argmin(input = a)
  c = tf.keras.backend.eval(b)
  # c = 0
  # here a[0] = 1 which is the smallest element of a across axis 0

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute
Tidx ::mlir::Attribute derived attribute
output_type ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bfloat16 or bool or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
dimension tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.Asin (TF::AsinOp)

Computes the trigonometric inverse sine of x element-wise.

The tf.math.asin operation returns the inverse of tf.math.sin, such that if y = tf.math.sin(x) then x = tf.math.asin(y).

For example:

# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
x = tf.constant([1.047, 0.785])
y = tf.math.sin(x) # [0.8659266, 0.7068252]

tf.math.asin(y) # [1.047, 0.785] = x

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Asinh (TF::AsinhOp)

Computes inverse hyperbolic sine of x element-wise.

Given an input tensor, this function computes inverse hyperbolic sine for every element in the tensor. Both input and output have a range of [-inf, inf].

  x = tf.constant([-float("inf"), -2, -0.5, 1, 1.2, 200, 10000, float("inf")])
  tf.math.asinh(x) ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Assert (TF::AssertOp)

Asserts that the given condition is true.

If condition evaluates to false, print the list of tensors in data. summarize determines how many entries of the tensors to print.

Attributes:

Attribute MLIR Type Description
summarize ::mlir::IntegerAttr 64-bit signless integer attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
condition tensor of bool values
data variadic of tensor of tf.dtype values

tf.Assign (TF::AssignOp)

Update 'ref' by assigning 'value' to it.

This operation outputs "ref" after the assignment is done. This makes it easier to chain operations that need to use the reset value.

Attributes:

Attribute MLIR Type Description
validate_shape ::mlir::BoolAttr bool attribute
use_locking ::mlir::BoolAttr bool attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
ref tensor of tf.dtype values
value tensor of tf.dtype values

Results:

Result Description
output_ref tensor of tf.dtype values

tf.AssignAddVariableOp (TF::AssignAddVariableOp)

Adds a value to the current value of a variable.

Any ReadVariableOp with a control dependency on this op is guaranteed to see the incremented value or a subsequent newer one.

Attributes:

Attribute MLIR Type Description
dtype ::mlir::Attribute derived attribute

Operands:

Operand Description
resource tensor of resource values
value tensor of tf.dtype values

tf.AssignSubVariableOp (TF::AssignSubVariableOp)

Subtracts a value from the current value of a variable.

Any ReadVariableOp with a control dependency on this op is guaranteed to see the decremented value or a subsequent newer one.

Attributes:

Attribute MLIR Type Description
dtype ::mlir::Attribute derived attribute

Operands:

Operand Description
resource tensor of resource values
value tensor of tf.dtype values

tf.AssignVariableOp (TF::AssignVariableOp)

Assigns a new value to a variable.

Any ReadVariableOp with a control dependency on this op is guaranteed to return this value or a subsequent newer value of the variable.

Attributes:

Attribute MLIR Type Description
validate_shape ::mlir::BoolAttr bool attribute
dtype ::mlir::Attribute derived attribute

Operands:

Operand Description
resource tensor of resource values
value tensor of tf.dtype values

tf.AsString (TF::AsStringOp)

Converts each entry in the given tensor to strings.

Supports many numeric types and boolean.

For Unicode, see the https://www.tensorflow.org/tutorials/representation/unicode tutorial.

Examples:

tf.strings.as_string([3, 2])
tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy()
# array([b'3.14', b'2.72'], dtype=object)

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
precision ::mlir::IntegerAttr 64-bit signless integer attribute
scientific ::mlir::BoolAttr bool attribute
shortest ::mlir::BoolAttr bool attribute
width ::mlir::IntegerAttr 64-bit signless integer attribute
fill ::mlir::StringAttr string attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or string or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer or variant values

Results:

Result Description
output tensor of string values

tf.Atan (TF::AtanOp)

Computes the trigonometric inverse tangent of x element-wise.

The tf.math.atan operation returns the inverse of tf.math.tan, such that if y = tf.math.tan(x) then x = tf.math.atan(y).

For example:

# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
x = tf.constant([1.047, 0.785])
y = tf.math.tan(x) # [1.731261, 0.99920404]

tf.math.atan(y) # [1.047, 0.785] = x

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Atan2 (TF::Atan2Op)

Computes arctangent of y/x element-wise, respecting signs of the arguments.

This is the angle \( \theta \in [-\pi, \pi] \) such that \[ x = r \cos(\theta) \] and \[ y = r \sin(\theta) \] where \(r = \sqrt{x^2 + y^2} \).

For example:

x = [1., 1.]
y = [1., -1.]
print((tf.math.atan2(y, x) * (180 / np.pi)).numpy())
# [ 45. -45.]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
y tensor of floating-point values
x tensor of floating-point values

Results:

Result Description
z tensor of floating-point values

tf.Atanh (TF::AtanhOp)

Computes inverse hyperbolic tangent of x element-wise.

Given an input tensor, this function computes inverse hyperbolic tangent for every element in the tensor. Input range is [-1,1] and output range is [-inf, inf]. If input is -1, output will be -inf and if the input is 1, output will be inf. Values outside the range will have nan as output.

  x = tf.constant([-float("inf"), -1, -0.5, 1, 0, 0.5, 10, float("inf")])
  tf.math.atanh(x) ==> [nan -inf -0.54930615 inf  0. 0.54930615 nan nan]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.AvgPool (TF::AvgPoolOp)

Performs average pooling on the input.

Each entry in output is the mean of the corresponding size ksize window in value.
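The windowed mean can be sketched in NumPy (a simplified sketch assuming VALID padding and NHWC layout; the real op also supports SAME padding and NCHW):

```python
import numpy as np

# Sketch of tf.AvgPool with VALID padding, NHWC layout.
# ksize and strides follow the [1, kh, kw, 1] / [1, sh, sw, 1] convention.
def avg_pool_valid(x, ksize, strides):
    _, kh, kw, _ = ksize
    _, sh, sw, _ = strides
    n, h, w, c = x.shape
    out_h = (h - kh) // sh + 1
    out_w = (w - kw) // sw + 1
    out = np.empty((n, out_h, out_w, c), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            window = x[:, i * sh:i * sh + kh, j * sw:j * sw + kw, :]
            out[:, i, j, :] = window.mean(axis=(1, 2))
    return out

x = np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1)
print(avg_pool_valid(x, [1, 2, 2, 1], [1, 2, 2, 1])[0, :, :, 0])
# 2x2 map of window means: 2.5, 4.5, 10.5, 12.5
```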

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
ksize ::mlir::ArrayAttr 64-bit integer array attribute with at least 4 elements
strides ::mlir::ArrayAttr 64-bit integer array attribute with at least 4 elements
padding ::mlir::StringAttr string attribute whose value is SAME, or VALID
data_format ::mlir::StringAttr 'NHWC' or 'NCHW' convnet data format
T ::mlir::Attribute derived attribute

Operands:

Operand Description
value tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.AvgPool3D (TF::AvgPool3DOp)

Performs 3D average pooling on the input.

Each entry in output is the mean of the corresponding size ksize window in value.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
ksize ::mlir::ArrayAttr 64-bit integer array attribute with at least 5 elements
strides ::mlir::ArrayAttr 64-bit integer array attribute with at least 5 elements
padding ::mlir::StringAttr string attribute whose value is SAME, or VALID
data_format ::mlir::StringAttr string attribute whose value is NDHWC, or NCDHW
T ::mlir::Attribute derived attribute

Operands:

Operand Description
input tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.AvgPool3DGrad (TF::AvgPool3DGradOp)

Computes gradients of average pooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
ksize ::mlir::ArrayAttr 64-bit integer array attribute with at least 5 elements
strides ::mlir::ArrayAttr 64-bit integer array attribute with at least 5 elements
padding ::mlir::StringAttr string attribute whose value is SAME, or VALID
data_format ::mlir::StringAttr string attribute whose value is NDHWC, or NCDHW
T ::mlir::Attribute derived attribute

Operands:

Operand Description
orig_input_shape tensor of 32-bit integer values
grad tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.AvgPoolGrad (TF::AvgPoolGradOp)

Computes gradients of the average pooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
ksize ::mlir::ArrayAttr 64-bit integer array attribute with at least 4 elements
strides ::mlir::ArrayAttr 64-bit integer array attribute with at least 4 elements
padding ::mlir::StringAttr string attribute whose value is SAME, or VALID
data_format ::mlir::StringAttr 'NHWC' or 'NCHW' convnet data format
T ::mlir::Attribute derived attribute

Operands:

Operand Description
orig_input_shape tensor of 32-bit integer values
grad tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.BatchDatasetV2 (TF::BatchDatasetV2Op)

Creates a dataset that batches batch_size elements from input_dataset.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
parallel_copy ::mlir::BoolAttr bool attribute
output_types ::mlir::ArrayAttr type array attribute with at least 1 elements
output_shapes ::mlir::ArrayAttr tensorflow shape attribute array with at least 1 elements
metadata ::mlir::StringAttr string attribute

Operands:

Operand Description
input_dataset tensor of variant values
batch_size tensor of 64-bit integer values
drop_remainder tensor of bool values

Results:

Result Description
handle tensor of variant values

tf.BatchFunction (TF::BatchFunctionOp)

Batches all the input tensors to the computation done by the function.

So, for example, in the following code


  # This input will be captured.
  y = tf.placeholder_with_default(1.0, shape=[])

  @tf.Defun(tf.float32)
  def computation(a):
    return tf.matmul(a, a) + y

  b = gen_batch_ops.batch_function(
          f=computation,
          in_tensors=[a],
          captured_tensors=computation.captured_inputs,
          Tout=[o.type for o in computation.definition.signature.output_arg],
          num_batch_threads=1,
          max_batch_size=10,
          batch_timeout_micros=100000,  # 100ms
          allowed_batch_sizes=[3, 10],
          batching_queue="")

If more than one session.run call is simultaneously trying to compute b, the values of a will be gathered, non-deterministically concatenated along the first axis, and only one thread will run the computation.

Assumes that all arguments of the function are Tensors which will be batched along their first dimension.

Arguments that are captured are not batched. The session.run call which does the concatenation will use the values of the captured tensors available to it. Therefore, typical uses of captured tensors should involve values which remain unchanged across session.run calls. Inference is a good example of this.

SparseTensor is not supported. The return value of the decorated function must be a Tensor or a list/tuple of Tensors.

Traits: AlwaysSpeculatableImplTrait, AttrSizedOperandSegments

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), SymbolUserOpInterface

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
f ::mlir::SymbolRefAttr symbol reference attribute
num_batch_threads ::mlir::IntegerAttr 64-bit signless integer attribute
max_batch_size ::mlir::IntegerAttr 64-bit signless integer attribute
batch_timeout_micros ::mlir::IntegerAttr 64-bit signless integer attribute
max_enqueued_batches ::mlir::IntegerAttr 64-bit signless integer attribute
allowed_batch_sizes ::mlir::ArrayAttr 64-bit integer array attribute
container ::mlir::StringAttr string attribute
shared_name ::mlir::StringAttr string attribute
batching_queue ::mlir::StringAttr string attribute
low_priority_max_batch_size ::mlir::IntegerAttr 64-bit signless integer attribute
low_priority_batch_timeout_micros ::mlir::IntegerAttr 64-bit signless integer attribute
low_priority_allowed_batch_sizes ::mlir::ArrayAttr 64-bit integer array attribute
low_priority_max_enqueued_batches ::mlir::IntegerAttr 64-bit signless integer attribute
enable_large_batch_splitting ::mlir::BoolAttr bool attribute
Tcaptured ::mlir::Attribute derived attribute
Tin ::mlir::Attribute derived attribute
Tout ::mlir::Attribute derived attribute

Operands:

Operand Description
in_tensors variadic of tensor of tf.dtype values
captured_tensors variadic of tensor of tf.dtype values

Results:

Result Description
out_tensors variadic of tensor of tf.dtype values

tf.BatchMatMul (TF::BatchMatMulOp)

Multiplies slices of two tensors in batches.

Multiplies all slices of Tensor x and y (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the adj_x or adj_y flag to True; both default to False.

The input tensors x and y are 2-D or higher with shape [..., r_x, c_x] and [..., r_y, c_y].

The output tensor is 2-D or higher with shape [..., r_o, c_o], where:

r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y

It is computed as:

output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])
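The computation above can be sketched in NumPy (an illustrative sketch of the batched-slice semantics, not the TF kernel; it relies on np.matmul treating the leading axes as a batch):

```python
import numpy as np

# Sketch of tf.BatchMatMul: matmul over the last two axes, with an
# optional adjoint (conjugate transpose) of either operand first.
def batch_matmul(x, y, adj_x=False, adj_y=False):
    if adj_x:
        x = np.conj(np.swapaxes(x, -1, -2))
    if adj_y:
        y = np.conj(np.swapaxes(y, -1, -2))
    return np.matmul(x, y)

x = np.ones((2, 3, 4))   # [..., r_x, c_x]
y = np.ones((2, 4, 5))   # [..., r_y, c_y]
print(batch_matmul(x, y).shape)              # (2, 3, 5)
print(batch_matmul(x, x, adj_y=True).shape)  # (2, 3, 3)
```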

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
adj_x ::mlir::BoolAttr bool attribute
adj_y ::mlir::BoolAttr bool attribute
grad_x ::mlir::BoolAttr bool attribute
grad_y ::mlir::BoolAttr bool attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.BatchMatMulV2 (TF::BatchMatMulV2Op)

Multiplies slices of two tensors in batches.

Multiplies all slices of Tensor x and y (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the adj_x or adj_y flag to True; both default to False.

The input tensors x and y are 2-D or higher with shape [..., r_x, c_x] and [..., r_y, c_y].

The output tensor is 2-D or higher with shape [..., r_o, c_o], where:

r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y

It is computed as:

output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
adj_x ::mlir::BoolAttr bool attribute
adj_y ::mlir::BoolAttr bool attribute
grad_x ::mlir::BoolAttr bool attribute
grad_y ::mlir::BoolAttr bool attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
output tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.BatchMatMulV3 (TF::BatchMatMulV3Op)

Multiplies slices of two tensors in batches.

Multiplies all slices of Tensor x and y (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the adj_x or adj_y flag to True; both default to False.

The input tensors x and y are 2-D or higher with shape [..., r_x, c_x] and [..., r_y, c_y].

The output tensor is 2-D or higher with shape [..., r_o, c_o], where:

r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y

It is computed as:

output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
adj_x ::mlir::BoolAttr bool attribute
adj_y ::mlir::BoolAttr bool attribute
grad_x ::mlir::BoolAttr bool attribute
grad_y ::mlir::BoolAttr bool attribute
Ta ::mlir::Attribute derived attribute
Tb ::mlir::Attribute derived attribute
Tout ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit unsigned integer values

Results:

Result Description
output tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer values

tf.BatchNormWithGlobalNormalization (TF::BatchNormWithGlobalNormalizationOp)

Batch normalization.

This op is deprecated. Prefer tf.nn.batch_normalization.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute MLIR Type Description
variance_epsilon ::mlir::FloatAttr 32-bit float attribute
scale_after_normalization ::mlir::BoolAttr bool attribute
T ::mlir::Attribute derived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
m tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
v tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
beta tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
gamma tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
result tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.BatchToSpace (TF::BatchToSpaceOp)

BatchToSpace for 4-D tensors of type T.

This is a legacy version of the more general BatchToSpaceND.

Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the batch dimension are moved in spatial blocks to the height and width dimensions, followed by cropping along the height and width dimensions.
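The reshape-transpose-crop sequence can be sketched in NumPy; `batch_to_space` below is a hypothetical helper written for illustration, not part of the dialect:

```python
import numpy as np

def batch_to_space(x, block_size, crops):
    # Move block_size*block_size slices of the batch dimension into the
    # height and width dimensions, then crop.
    # crops = [[top, bottom], [left, right]].
    batch, h, w, c = x.shape
    b = block_size
    y = x.reshape(b, b, batch // (b * b), h, w, c)
    y = y.transpose(2, 3, 0, 4, 1, 5)                  # [batch/b^2, h, b, w, b, c]
    y = y.reshape(batch // (b * b), h * b, w * b, c)
    (top, bottom), (left, right) = crops
    return y[:, top:h * b - bottom, left:w * b - right, :]

x = np.array([1, 2, 3, 4]).reshape(4, 1, 1, 1)         # four 1x1x1 batch entries
out = batch_to_space(x, 2, [[0, 0], [0, 0]])
print(out[..., 0])    # [[[1 2]
                      #   [3 4]]]
```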

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
block_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 2
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
crops tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.BatchToSpaceND (TF::BatchToSpaceNDOp)

BatchToSpace for N-D tensors of type T.

This operation reshapes the "batch" dimension 0 into M + 1 dimensions of shape block_shape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions [1, ..., M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to crops to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tblock_shape::mlir::Attributederived attribute
Tcrops::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
block_shape tensor of 32/64-bit signed integer values
crops tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.BesselI0e (TF::BesselI0eOp)

Computes the Bessel i0e function of x element-wise.

Exponentially scaled modified Bessel function of order 0, defined as bessel_i0e(x) = exp(-abs(x)) * bessel_i0(x).

This function is faster and more numerically stable than bessel_i0(x).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.BesselI1e (TF::BesselI1eOp)

Computes the Bessel i1e function of x element-wise.

Exponentially scaled modified Bessel function of order 1, defined as bessel_i1e(x) = exp(-abs(x)) * bessel_i1(x).

This function is faster and more numerically stable than bessel_i1(x).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.Betainc (TF::BetaincOp)

Compute the regularized incomplete beta integral \(I_x(a, b)\).

The regularized incomplete beta integral is defined as:

\(I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\)

where

\(B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt\)

is the incomplete beta function and \(B(a, b)\) is the complete beta function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of 32/64-bit float values
b tensor of 32/64-bit float values
x tensor of 32/64-bit float values

Results:

Result Description
z tensor of 32/64-bit float values

tf.BiasAdd (TF::BiasAddOp)

Adds bias to value.

This is a special case of tf.add where bias is restricted to be 1-D. Broadcasting is supported, so value may have any number of dimensions.
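A minimal NumPy sketch of the NHWC broadcast, with illustrative shapes and values not taken from the source:

```python
import numpy as np

# BiasAdd in NHWC format: the 1-D bias broadcasts along the trailing
# channel dimension of value, whatever the rank of value.
value = np.zeros((2, 3, 3, 4), dtype=np.float32)          # [batch, H, W, C]
bias = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)   # [C]
output = value + bias                                      # broadcasts over leading dims
print(output[0, 0, 0])   # [1. 2. 3. 4.]
```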

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
value tensor of number values
bias tensor of number values

Results:

Result Description
output tensor of number values

tf.BiasAddGrad (TF::BiasAddGradOp)

The backward operation for "BiasAdd" on the "bias" tensor.

It accumulates all the values from out_backprop into the feature dimension. For NHWC data format, the feature dimension is the last. For NCHW data format, the feature dimension is the third-to-last.
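The accumulation can be sketched with NumPy (hypothetical shapes, chosen for illustration):

```python
import numpy as np

# BiasAddGrad sums out_backprop over every dimension except the feature
# dimension: the last for NHWC, the third-to-last for NCHW.
out_backprop = np.ones((2, 3, 3, 4))
grad_nhwc = out_backprop.sum(axis=(0, 1, 2))   # feature dim = last axis, shape (4,)
grad_nchw = out_backprop.sum(axis=(0, 2, 3))   # feature dim = axis 1, shape (3,)
print(grad_nhwc)   # [18. 18. 18. 18.]
```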

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
out_backprop tensor of number values

Results:

Result Description
output tensor of number values

tf.BiasAddV1 (TF::BiasAddV1Op)

Adds bias to value.

This is a deprecated version of BiasAdd and will soon be removed.

This is a special case of tf.add where bias is restricted to be 1-D. Broadcasting is supported, so value may have any number of dimensions.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
value tensor of number values
bias tensor of number values

Results:

Result Description
output tensor of number values

tf.Bincount (TF::BincountOp)

Counts the number of occurrences of each value in an integer array.

Outputs a vector with length size and the same dtype as weights. If weights are empty, then index i stores the number of times the value i is counted in arr. If weights are non-empty, then index i stores the sum of the value in weights at each index where the corresponding value in arr is i.

Values in arr outside of the range [0, size) are ignored.
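The counting behavior, including the dropped out-of-range values, can be sketched with NumPy:

```python
import numpy as np

# Bincount sketch: filter values outside [0, size), then count occurrences
# per value (np.bincount with minlength=size matches the unweighted case).
arr = np.array([1, 1, 2, 7])
size = 4
keep = (arr >= 0) & (arr < size)            # 7 falls outside [0, size)
bins = np.bincount(arr[keep], minlength=size)
print(bins)   # [0 2 1 0]
```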

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
arr tensor of 32-bit integer values
size tensor of 32-bit integer values
weights tensor of 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
bins tensor of 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.Bitcast (TF::BitcastOp)

Bitcasts a tensor from one type to another without copying data.

Given a tensor input, this operation returns a tensor that has the same buffer data as input with datatype type.

If the input datatype T is larger than the output datatype type then the shape changes from [...] to [..., sizeof(T)/sizeof(type)].

If T is smaller than type, the operator requires that the rightmost dimension be equal to sizeof(type)/sizeof(T). The shape then goes from [..., sizeof(type)/sizeof(T)] to [...].
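The two shape rules can be sketched with NumPy views, which likewise reinterpret the buffer without copying (assuming matching host endianness):

```python
import numpy as np

# uint32 -> uint8: sizeof(T)/sizeof(type) == 4, so shape [3] becomes [3, 4].
x = np.array([0xffffffff, 0, 1], dtype=np.uint32)
small = x.view(np.uint8).reshape(x.shape + (4,))
print(small.shape)   # (3, 4)

# uint8 -> uint32: the rightmost dimension of size 4 is consumed.
back = small.reshape(-1).view(np.uint32)
print(back.shape)    # (3,)
```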

tf.bitcast() and tf.cast() work differently when a real dtype is cast to a complex dtype (e.g. tf.complex64 or tf.complex128): tf.cast() sets the imaginary part to 0, while tf.bitcast() raises an error. For example,

Example 1:

a = [1., 2., 3.]
equality_bitcast = tf.bitcast(a, tf.complex128)
# Traceback (most recent call last):
# ...
# InvalidArgumentError: Cannot bitcast from 1 to 18 [Op:Bitcast]
equality_cast = tf.cast(a, tf.complex128)
print(equality_cast)
# tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128)

Example 2:

tf.bitcast(tf.constant(0xffffffff, dtype=tf.uint32), tf.uint8)

Example 3:

x = [1., 2., 3.]
y = [0., 2., 3.]
equality = tf.equal(x, y)
equality_cast = tf.cast(equality, tf.float32)
equality_bitcast = tf.bitcast(equality_cast, tf.uint8)
print(equality)
# tf.Tensor([False  True  True], shape=(3,), dtype=bool)
print(equality_cast)
# tf.Tensor([0. 1. 1.], shape=(3,), dtype=float32)
print(equality_bitcast)
# tf.Tensor(
# [[  0   0   0   0]
#  [  0   0 128  63]
#  [  0   0 128  63]], shape=(3, 4), dtype=uint8)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
type::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of number values

Results:

Result Description
output tensor of number values

tf.BitwiseAnd (TF::BitwiseAndOp)

Elementwise computes the bitwise AND of x and y.

The result will have those bits set that are set in both x and y. The computation is performed on the underlying representations of x and y.

For example:

import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
              tf.uint8, tf.uint16, tf.uint32, tf.uint64]

for dtype in dtype_list:
  lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
  exp = tf.constant([0, 0, 3, 10], dtype=tf.float32)

  res = bitwise_ops.bitwise_and(lhs, rhs)
  tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer values
y tensor of integer values

Results:

Result Description
z tensor of integer values

tf.BitwiseOr (TF::BitwiseOrOp)

Elementwise computes the bitwise OR of x and y.

The result will have those bits set that are set in x, y, or both. The computation is performed on the underlying representations of x and y.

For example:

import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
              tf.uint8, tf.uint16, tf.uint32, tf.uint64]

for dtype in dtype_list:
  lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
  exp = tf.constant([5, 5, 7, 15], dtype=tf.float32)

  res = bitwise_ops.bitwise_or(lhs, rhs)
  tf.assert_equal(tf.cast(res,  tf.float32), exp)  # TRUE

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer values
y tensor of integer values

Results:

Result Description
z tensor of integer values

tf.BitwiseXor (TF::BitwiseXorOp)

Elementwise computes the bitwise XOR of x and y.

The result will have those bits set that differ between x and y. The computation is performed on the underlying representations of x and y.

For example:

import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
              tf.uint8, tf.uint16, tf.uint32, tf.uint64]

for dtype in dtype_list:
  lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
  exp = tf.constant([5, 5, 4, 5],  dtype=tf.float32)

  res = bitwise_ops.bitwise_xor(lhs, rhs)
  tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer values
y tensor of integer values

Results:

Result Description
z tensor of integer values

tf.BoostedTreesBucketize (TF::BoostedTreesBucketizeOp)

Bucketize each feature based on bucket boundaries.

An op that returns a list of int32 tensors, where each tensor represents the bucketized values for a single feature.

Traits: AlwaysSpeculatableImplTrait, SameVariadicOperandSize

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_features::mlir::Attributederived attribute

Operands:

Operand Description
float_values variadic of tensor of 32-bit float values
bucket_boundaries variadic of tensor of 32-bit float values

Results:

Result Description
buckets variadic of tensor of 32-bit integer values

tf.BroadcastArgs (TF::BroadcastArgsOp)

Return the shape of s0 op s1 with broadcast.

Given s0 and s1, tensors that represent shapes, compute r0, the broadcasted shape. s0, s1 and r0 are all integer vectors.
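The result follows standard broadcasting rules; a sketch using NumPy's equivalent helper (np.broadcast_shapes, available in NumPy 1.20+):

```python
import numpy as np

# BroadcastArgs computes the broadcast of two shape vectors; NumPy's
# broadcast_shapes follows the same rules (dims equal, or one of them is 1).
s0, s1 = (1, 3), (2, 3)
r0 = np.broadcast_shapes(s0, s1)
print(r0)   # (2, 3)
```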

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
s0 tensor of 32/64-bit signed integer values
s1 tensor of 32/64-bit signed integer values

Results:

Result Description
r0 tensor of 32/64-bit signed integer values

tf.BroadcastGradientArgs (TF::BroadcastGradientArgsOp)

Return the reduction indices for computing gradients of s0 op s1 with broadcast.

This is typically used by gradient computations for a broadcasting operation.
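A hypothetical sketch of the reduction indices: for each input shape, the axes of the broadcast result over which the gradient must be summed (the helper name and exact tie-breaking are illustrative assumptions, not the op's implementation):

```python
# For each input, an axis needs reduction when that input was broadcast
# along it, i.e. its (left-padded) dim is 1 while the other input's is > 1.
def broadcast_gradient_args(s0, s1):
    rank = max(len(s0), len(s1))
    p0 = (1,) * (rank - len(s0)) + tuple(s0)   # left-pad with ones
    p1 = (1,) * (rank - len(s1)) + tuple(s1)
    r0 = [i for i in range(rank) if p0[i] == 1 and p1[i] > 1]
    r1 = [i for i in range(rank) if p1[i] == 1 and p0[i] > 1]
    return r0, r1

print(broadcast_gradient_args((1, 3), (2, 3)))   # ([0], [])
```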

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultElementType

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
s0 tensor of 32/64-bit signed integer values
s1 tensor of 32/64-bit signed integer values

Results:

Result Description
r0 tensor of 32/64-bit signed integer values
r1 tensor of 32/64-bit signed integer values

tf.BroadcastTo (TF::BroadcastToOp)

Broadcast an array for a compatible shape.

Broadcasting is the process of making arrays have compatible shapes for arithmetic operations. Two shapes are compatible if, for each dimension pair, they are either equal or one of them is 1.

For example:

x = tf.constant([[1, 2, 3]])   # Shape (1, 3,)
y = tf.broadcast_to(x, [2, 3])
print(y)
# tf.Tensor(
# [[1 2 3]
#  [1 2 3]], shape=(2, 3), dtype=int32)

In the above example, the input Tensor with the shape of [1, 3] is broadcast to the output Tensor with a shape of [2, 3].

When broadcasting, if a tensor has fewer axes than necessary its shape is padded on the left with ones. So this gives the same result as the previous example:

x = tf.constant([1, 2, 3])   # Shape (3,)
y = tf.broadcast_to(x, [2, 3])

When doing broadcasted operations such as multiplying a tensor by a scalar, broadcasting (usually) confers some time or space benefit, as the broadcasted tensor is never materialized.

However, broadcast_to does not carry with it any such benefits. The newly-created tensor takes the full memory of the broadcasted shape. (In a graph context, broadcast_to might be fused into a subsequent operation and then be optimized away, however.)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
shape tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.Bucketize (TF::BucketizeOp)

Bucketizes 'input' based on 'boundaries'.

For example, if the inputs are

boundaries = [0, 10, 100]
input = [[-5, 10000]
         [150, 10]
         [5, 100]]

then the output will be

output = [[0, 3]
          [3, 2]
          [1, 3]]
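The mapping can be sketched with NumPy's searchsorted, which reproduces the example above:

```python
import numpy as np

# Bucketize sketch: each value maps to the number of boundaries <= it
# (side='right' counts a boundary-equal value into the upper bucket).
boundaries = [0, 10, 100]
inp = np.array([[-5, 10000], [150, 10], [5, 100]])
out = np.searchsorted(boundaries, inp, side='right')
print(out)
# [[0 3]
#  [3 2]
#  [1 3]]
```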

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
boundaries::mlir::ArrayAttr32-bit float array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of 32-bit integer values

tf.CacheDatasetV2 (TF::CacheDatasetV2Op)

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute

Operands:

Operand Description
input_dataset tensor of variant values
filename tensor of string values
cache tensor of resource values

Results:

Result Description
handle tensor of variant values

tf.Case (TF::CaseOp)

An n-way switch statement which calls a single branch function.

An n-way switch statement, implementing the following:

```
switch (branch_index) {
  case 0:
    output = branches[0](input);
    break;
  case 1:
    output = branches[1](input);
    break;
  ...
  case [[nbranches-1]]:
  default:
    output = branches[nbranches-1](input);
    break;
}
```

Interfaces: SymbolUserOpInterface

Attributes:

AttributeMLIR TypeDescription
branches::mlir::ArrayAttrsymbol ref array attribute with at least 1 elements
is_stateless::mlir::BoolAttrbool attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute
output_shapes::mlir::Attributederived attribute

Operands:

Operand Description
branch_index tensor of 32-bit signless integer values
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.CaseRegion (TF::CaseRegionOp)

An n-way switch statement which calls a single branch function.

An n-way switch statement, implementing the following:

```
switch (branch_index) {
  case 0:
    output = branches[0](input);
    break;
  case 1:
    output = branches[1](input);
    break;
  ...
  case [[nbranches-1]]:
  default:
    output = branches[nbranches-1](input);
    break;
}
```

Traits: NoRegionArguments, SingleBlockImplicitTerminator<YieldOp>, SingleBlock

Attributes:

AttributeMLIR TypeDescription
is_stateless::mlir::BoolAttrbool attribute

Operands:

Operand Description
branch_index tensor of 32-bit signless integer values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.Cast (TF::CastOp)

Cast x of type SrcT to y of DstT.

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Truncate::mlir::BoolAttrbool attribute
SrcT::mlir::Attributederived attribute
DstT::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values

Results:

Result Description
y tensor of tf.dtype values

tf.Ceil (TF::CeilOp)

Returns element-wise smallest integer not less than x.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.CheckNumerics (TF::CheckNumericsOp)

Checks a tensor for NaN and Inf values.

When run, reports an InvalidArgument error if tensor has any values that are not a number (NaN) or infinity (Inf). Otherwise, returns the input tensor.

Example usage:

import numpy as np
import tensorflow as tf

a = tf.Variable(1.0)
tf.debugging.check_numerics(a, message='')

b = tf.Variable(np.nan)
try:
  tf.debugging.check_numerics(b, message='Checking b')
except Exception as e:
  assert "Checking b : Tensor had NaN values" in e.message

c = tf.Variable(np.inf)
try:
  tf.debugging.check_numerics(c, message='Checking c')
except Exception as e:
  assert "Checking c : Tensor had Inf values" in e.message

Traits: InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_NoConstantFold

Interfaces: InferShapedTypeOpInterface, InferTypeOpInterface, TF_MustExecute (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
message::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
tensor tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Cholesky (TF::CholeskyOp)

Computes the Cholesky decomposition of one or more square matrices.

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices.

The input has to be symmetric and positive definite. Only the lower-triangular part of the input will be used for this operation. The upper-triangular part will not be read.

The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices [..., :, :].
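The batched semantics can be sketched with np.linalg.cholesky, which likewise factors the innermost 2 dimensions of a [..., M, M] tensor (illustrative matrices, not from the source):

```python
import numpy as np

# Each innermost 2-D slice must be symmetric positive definite; the result
# holds the lower-triangular Cholesky factor of each slice.
a = np.array([[4.0, 2.0], [2.0, 3.0]])
batch = np.stack([a, 2.0 * a])            # shape (2, 2, 2)
l = np.linalg.cholesky(batch)             # lower-triangular, same shape
print(np.allclose(l @ l.transpose(0, 2, 1), batch))   # True
```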

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

tf.ClipByValue (TF::ClipByValueOp)

Clips tensor values to a specified min and max.

Given a tensor x, this operation returns a tensor of the same type and shape as x with its values clipped to clip_value_min and clip_value_max. Any values less than clip_value_min are set to clip_value_min. Any values greater than clip_value_max are set to clip_value_max.
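Element-wise, this matches NumPy's clip (illustrative values):

```python
import numpy as np

# Values below clip_value_min become clip_value_min; values above
# clip_value_max become clip_value_max; everything else passes through.
x = np.array([-1.0, 0.5, 2.0])
out = np.clip(x, 0.0, 1.0)
print(out)   # [0.  0.5 1. ]
```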

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
clip_value_min tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
clip_value_max tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
output tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.CloseSummaryWriter (TF::CloseSummaryWriterOp)

Flushes and closes the summary writer.

Also removes it from the resource manager. To reopen, use another CreateSummaryFileWriter op.

writer: A handle to the summary writer resource.

Operands:

Operand Description
writer tensor of resource values

tf.CollateTPUEmbeddingMemory (TF::CollateTPUEmbeddingMemoryOp)

An op that merges the string-encoded memory config protos from all hosts.

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute

Operands:

Operand Description
memory_configs variadic of tensor of string values

Results:

Result Description
merged_memory_config tensor of string values

tf.CollectiveAllToAllV2 (TF::CollectiveAllToAllV2Op)

Mutually exchanges multiple tensors of identical type and shape.

is_stateless means each op does not need control dependencies to other collective ops. In this case, keys that are unique at runtime (e.g. instance_key) should be used to distinguish collective groups.

Interfaces: GetResourceInstanceInterface, TF_CollectiveReduceOrderingEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::CollectiveReduceOrdering}

Attributes:

AttributeMLIR TypeDescription
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
is_stateless::mlir::BoolAttrbool attribute
Nordering_token::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point or 32/64-bit signed integer values
group_size tensor of 32-bit integer values
group_key tensor of 32-bit integer values
instance_key tensor of 32-bit integer values
ordering_token variadic of tensor of resource values

Results:

Result Description
data tensor of floating-point or 32/64-bit signed integer values

tf.CollectiveAssignGroupV2 (TF::CollectiveAssignGroupV2Op)

Assign group keys based on group assignment.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
group_assignment tensor of 32-bit integer values
device_index tensor of 32-bit integer values
base_key tensor of 32-bit integer values

Results:

Result Description
group_size tensor of 32-bit integer values
group_key tensor of 32-bit integer values

tf.CollectiveBcastRecv (TF::CollectiveBcastRecvOp)

Receives a tensor value broadcast from another device.

Attributes:

AttributeMLIR TypeDescription
group_size::mlir::IntegerAttr64-bit signless integer attribute
group_key::mlir::IntegerAttr64-bit signless integer attribute
instance_key::mlir::IntegerAttr64-bit signless integer attribute
shape::mlir::AttributeTensorFlow shape attribute
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Results:

Result Description
data tensor of bool or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.CollectiveBcastSend (TF::CollectiveBcastSendOp)

Broadcasts a tensor value to one or more other devices.

Attributes:

AttributeMLIR TypeDescription
group_size::mlir::IntegerAttr64-bit signless integer attribute
group_key::mlir::IntegerAttr64-bit signless integer attribute
instance_key::mlir::IntegerAttr64-bit signless integer attribute
shape::mlir::AttributeTensorFlow shape attribute
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bool or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
data tensor of bool or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.CollectiveGather (TF::CollectiveGatherOp)

Mutually accumulates multiple tensors of identical type and shape.

Attributes:

AttributeMLIR TypeDescription
group_size::mlir::IntegerAttr64-bit signless integer attribute
group_key::mlir::IntegerAttr64-bit signless integer attribute
instance_key::mlir::IntegerAttr64-bit signless integer attribute
shape::mlir::AttributeTensorFlow shape attribute
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
data tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.CollectiveGatherV2 (TF::CollectiveGatherV2Op)

Mutually accumulates multiple tensors of identical type and shape.

is_stateless means each op does not need control dependencies to other collective ops. In this case, keys that are unique at runtime (e.g. instance_key) should be used to distinguish collective groups.

Interfaces: GetResourceInstanceInterface, TF_CollectiveReduceOrderingEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::CollectiveReduceOrdering}

Attributes:

AttributeMLIR TypeDescription
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
is_stateless::mlir::BoolAttrbool attribute
Nordering_token::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values
group_size tensor of 32-bit integer values
group_key tensor of 32-bit integer values
instance_key tensor of 32-bit integer values
ordering_token variadic of tensor of resource values

Results:

Result Description
data tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.CollectivePermute (TF::CollectivePermuteOp)

An Op to permute tensors across replicated TPU instances.

Each instance supplies its own input.

For example, suppose there are 4 TPU instances: [A, B, C, D]. Passing source_target_pairs=[[0,1],[1,2],[2,3],[3,0]] gets the outputs: [D, A, B, C].
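The example permutation can be sketched in plain Python, with per-replica outputs gathered into a list for illustration:

```python
# Each [source, target] pair routes source's input to target's output.
inputs = ['A', 'B', 'C', 'D']
source_target_pairs = [[0, 1], [1, 2], [2, 3], [3, 0]]
outputs = [None] * len(inputs)
for src, dst in source_target_pairs:
    outputs[dst] = inputs[src]
print(outputs)   # ['D', 'A', 'B', 'C']
```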

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of number values
source_target_pairs tensor of 32-bit integer values

Results:

Result Description
output tensor of number values

tf.CollectiveReduce (TF::CollectiveReduceOp)

Mutually reduces multiple tensors of identical type and shape.

Traits: InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: InferShapedTypeOpInterface, InferTypeOpInterface

Attributes:

AttributeMLIR TypeDescription
group_size::mlir::IntegerAttr64-bit signless integer attribute
group_key::mlir::IntegerAttr64-bit signless integer attribute
instance_key::mlir::IntegerAttr64-bit signless integer attribute
merge_op::mlir::StringAttrstring attribute whose value is Min, or Max, or Mul, or Add
final_op::mlir::StringAttrstring attribute whose value is Id, or Div
subdiv_offsets::mlir::ArrayAttr64-bit integer array attribute
wait_for::mlir::ArrayAttr64-bit integer array attribute
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point or 32/64-bit signed integer values

Results:

Result Description
data tensor of floating-point or 32/64-bit signed integer values

tf.CollectiveReduceScatterV2 (TF::CollectiveReduceScatterV2Op)

Mutually reduces multiple tensors of identical type and shape and scatters the result.

is_stateless means each op does not need control dependencies to other collective ops. In this case, keys that are unique at runtime (e.g. instance_key) should be used to distinguish collective groups.

Interfaces: GetResourceInstanceInterface, TF_CollectiveReduceOrderingEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::CollectiveReduceOrdering}

Attributes:

AttributeMLIR TypeDescription
merge_op::mlir::StringAttrstring attribute whose value is Min, or Max, or Mul, or Add
final_op::mlir::StringAttrstring attribute whose value is Id, or Div
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
is_stateless::mlir::BoolAttrbool attribute
max_subdivs_per_device::mlir::IntegerAttr64-bit signless integer attribute
Nordering_token::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point or 32/64-bit signed integer values
group_size tensor of 32-bit integer values
group_key tensor of 32-bit integer values
instance_key tensor of 32-bit integer values
ordering_token variadic of tensor of resource values

Results:

Result Description
data tensor of floating-point or 32/64-bit signed integer values

tf.CollectiveReduceV2 (TF::CollectiveReduceV2Op)

Mutually reduces multiple tensors of identical type and shape.

is_stateless means each op does not need control dependencies to other collective ops. In this case, keys that are unique at runtime (e.g. instance_key) should be used to distinguish collective groups.

Interfaces: GetResourceInstanceInterface, TF_CollectiveReduceOrderingEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::CollectiveReduceOrdering}

Attributes:

AttributeMLIR TypeDescription
merge_op::mlir::StringAttrstring attribute whose value is Min, or Max, or Mul, or Add
final_op::mlir::StringAttrstring attribute whose value is Id, or Div
communication_hint::mlir::StringAttrstring attribute
timeout_seconds::mlir::FloatAttr32-bit float attribute
is_stateless::mlir::BoolAttrbool attribute
max_subdivs_per_device::mlir::IntegerAttr64-bit signless integer attribute
Nordering_token::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point or 32/64-bit signed integer values
group_size tensor of 32-bit integer values
group_key tensor of 32-bit integer values
instance_key tensor of 32-bit integer values
ordering_token variadic of tensor of resource values

Results:

Result Description
data tensor of floating-point or 32/64-bit signed integer values

tf.Complex (TF::ComplexOp)

Converts two real numbers to a complex number.

Given a tensor real representing the real part of a complex number, and a tensor imag representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form \(a + bj\), where a represents the real part and b represents the imaginary part.

The input tensors real and imag must have the same shape.

For example:

# tensor 'real' is [2.25, 3.25]
# tensor `imag` is [4.75, 5.75]
tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
real tensor of 32/64-bit float values
imag tensor of 32/64-bit float values

Results:

Result Description
out tensor of 128-bit complex or 64-bit complex values

tf.ComplexAbs (TF::ComplexAbsOp)

Computes the complex absolute value of a tensor.

Given a tensor x of complex numbers, this operation returns a tensor of type float or double that is the absolute value of each element in x. All elements in x must be complex numbers of the form \(a + bj\). The absolute value is computed as \( \sqrt{a^2 + b^2}\).

For example:

x = tf.complex(3.0, 4.0)
print((tf.raw_ops.ComplexAbs(x=x, Tout=tf.dtypes.float32, name=None)).numpy())
5.0
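The computation itself is just the element-wise magnitude; a minimal pure-Python sketch on a scalar complex value (an assumed helper, not the TF kernel):

```python
import math

# |a + bj| = sqrt(a^2 + b^2), applied here to one Python complex value.
def complex_abs(z):
    return math.sqrt(z.real ** 2 + z.imag ** 2)

print(complex_abs(complex(3.0, 4.0)))  # 5.0
```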

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
y tensor of 32/64-bit float values

tf.Concat (TF::ConcatOp)

Concatenates tensors along one dimension.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
concat_dim tensor of 32-bit integer values
values variadic of tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.ConcatOffset (TF::ConcatOffsetOp)

Computes offsets of concat inputs within its output.

For example:

x = [2, 2, 7]
y = [2, 3, 7]
z = [2, 9, 7]
offsets = concat_offset(1, [x, y, z])
[list(off.numpy()) for off in offsets]
[[0, 0, 0], [0, 2, 0], [0, 5, 0]]

This is typically used by gradient computations for a concat operation.
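The offset computation can be sketched on plain Python shape lists (a hypothetical helper, not the TF implementation): each input's offset is zero everywhere except along concat_dim, where it is the running sum of the preceding inputs' sizes on that dimension.

```python
def concat_offset(concat_dim, shapes):
    offsets, running = [], 0
    for shape in shapes:
        off = [0] * len(shape)
        off[concat_dim] = running
        offsets.append(off)
        running += shape[concat_dim]
    return offsets

print(concat_offset(1, [[2, 2, 7], [2, 3, 7], [2, 9, 7]]))
# [[0, 0, 0], [0, 2, 0], [0, 5, 0]]
```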

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute
shape_type::mlir::Attributederived attribute

Operands:

Operand Description
concat_dim tensor of 32-bit integer values
shape variadic of tensor of 32/64-bit signed integer values

Results:

Result Description
offset variadic of tensor of 32/64-bit signed integer values

tf.ConcatV2 (TF::ConcatV2Op)

Concatenates tensors along one dimension.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
values variadic of tensor of tf.dtype values
axis tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.ConfigureAndInitializeGlobalTPU (TF::ConfigureAndInitializeGlobalTPUOp)

An op that initializes the TPU system in a multi-client setup.

Initializes the global TPU system for multi-client execution.

This op does the work of both ConfigureDistributedTpuOp and InitializeHostForDistributedTpuOp, and outputs the latter's result.

Results:

Result Description
output tensor of 32-bit integer values

tf.ConfigureDistributedTPU (TF::ConfigureDistributedTPUOp)

Sets up the centralized structures for a distributed TPU system.

Attributes:

AttributeMLIR TypeDescription
embedding_config::mlir::StringAttrstring attribute
tpu_embedding_config::mlir::StringAttrstring attribute
is_global_init::mlir::BoolAttrbool attribute
enable_whole_mesh_compilations::mlir::BoolAttrbool attribute
compilation_failure_closes_chips::mlir::BoolAttrbool attribute
tpu_cancellation_closes_chips::mlir::IntegerAttr64-bit signless integer attribute

Results:

Result Description
topology tensor of string values

tf.ConfigureTPUEmbedding (TF::ConfigureTPUEmbeddingOp)

Sets up TPUEmbedding in a distributed TPU system.

Attributes:

AttributeMLIR TypeDescription
config::mlir::StringAttrstring attribute

tf.ConfigureTPUEmbeddingHost (TF::ConfigureTPUEmbeddingHostOp)

An op that configures the TPUEmbedding software on a host.

Attributes:

AttributeMLIR TypeDescription
config::mlir::StringAttrstring attribute

Operands:

Operand Description
common_config tensor of string values
memory_config tensor of string values

Results:

Result Description
network_config tensor of string values

tf.ConfigureTPUEmbeddingMemory (TF::ConfigureTPUEmbeddingMemoryOp)

An op that configures the TPUEmbedding software on a host.

Operands:

Operand Description
common_config tensor of string values

Results:

Result Description
memory_config tensor of string values

tf.Conj (TF::ConjOp)

Returns the complex conjugate of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in input. The complex numbers in input must be of the form \(a + bj\), where a is the real part and b is the imaginary part.

The complex conjugate returned by this operation is of the form \(a - bj\).

For example:

# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Involution

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex or variant values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex or variant values

tf.ConjugateTranspose (TF::ConjugateTransposeOp)

Shuffle dimensions of x according to a permutation and conjugate the result.

The output y has the same rank as x. The shapes of x and y satisfy:

y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1]
y[i,j,k,...,s,t,u] == conj(x[perm[i], perm[j], perm[k],...,perm[s], perm[t], perm[u]])
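For the 2-D case with perm = [1, 0], the relation above reduces to the conjugate of the ordinary transpose. An illustrative pure-Python sketch (assumed helper, not the TF kernel):

```python
# result[c][r] == conj(x[r][c]) for a matrix stored as nested lists.
def conjugate_transpose_2d(x):
    rows, cols = len(x), len(x[0])
    return [[x[r][c].conjugate() for r in range(rows)] for c in range(cols)]

x = [[complex(1, 2), complex(3, 4)],
     [complex(5, 6), complex(7, 8)]]
print(conjugate_transpose_2d(x))
# [[(1-2j), (5-6j)], [(3-4j), (7-8j)]]
```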

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tperm::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
perm tensor of 32/64-bit signed integer values

Results:

Result Description
y tensor of tf.dtype values

tf.ConnectTPUEmbeddingHosts (TF::ConnectTPUEmbeddingHostsOp)

An op that sets up communication between TPUEmbedding host software instances

after ConfigureTPUEmbeddingHost has been called on each host.

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute

Operands:

Operand Description
network_configs variadic of tensor of string values

tf.Const (TF::ConstOp)

Constant tensor op

Traits: AlwaysSpeculatableImplTrait, ConstantLike

Interfaces: ConditionallySpeculatable, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface), OpAsmOpInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
value::mlir::ElementsAttrconstant vector/tensor attribute
dtype::mlir::Attributederived attribute

Results:

Result Description
output tensor of tf.dtype values

tf.Conv (TF::ConvOp)

Computes an N-D convolution given (N+1+batch_dims)-D input and (N+2)-D filter tensors.

General function for computing an N-D convolution. It is required that 1 <= N <= 3.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttrstring attribute whose value is CHANNELS_FIRST, or CHANNELS_LAST
dilations::mlir::ArrayAttr64-bit integer array attribute
batch_dims::mlir::IntegerAttr64-bit signless integer attribute
groups::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values
filter tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

tf.Conv2D (TF::Conv2DOp)

Computes a 2-D convolution given 4-D input and filter tensors.

Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, out_channels], this op performs the following:

  1. Flattens the filter to a 2-D matrix with shape [filter_height * filter_width * in_channels, output_channels].
  2. Extracts image patches from the input tensor to form a virtual tensor of shape [batch, out_height, out_width, filter_height * filter_width * in_channels].
  3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

output[b, i, j, k] =
    sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *
                    filter[di, dj, q, k]

Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].
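The NHWC formula above can be sketched naively in pure Python for one batch element with VALID padding; nested lists stand in for tensors, and the helper name is an assumption (this is not the TF kernel).

```python
def conv2d_valid(inp, filt, strides):
    # inp: [in_height][in_width][in_channels]
    # filt: [filter_height][filter_width][in_channels][out_channels]
    # strides: [1, stride_h, stride_w, 1], matching the op's attribute layout
    fh, fw = len(filt), len(filt[0])
    c, k = len(filt[0][0]), len(filt[0][0][0])
    sh, sw = strides[1], strides[2]
    out_h = (len(inp) - fh) // sh + 1
    out_w = (len(inp[0]) - fw) // sw + 1
    # output[i][j][kk] = sum_{di, dj, q} inp[sh*i+di][sw*j+dj][q] * filt[di][dj][q][kk]
    return [[[sum(inp[sh * i + di][sw * j + dj][q] * filt[di][dj][q][kk]
                  for di in range(fh) for dj in range(fw) for q in range(c))
              for kk in range(k)]
             for j in range(out_w)]
            for i in range(out_h)]

# 2x2 single-channel input, 2x2 all-ones filter: the output is 1+2+3+4.
inp = [[[1.0], [2.0]], [[3.0], [4.0]]]
filt = [[[[1.0]], [[1.0]]], [[[1.0]], [[1.0]]]]
print(conv2d_valid(inp, filt, [1, 1, 1, 1]))  # [[[10.0]]]
```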

Traits: AlwaysSpeculatableImplTrait, InferTensorType

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface), TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
use_cudnn_on_gpu::mlir::BoolAttrbool attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values
filter tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

tf.Conv2DBackpropFilter (TF::Conv2DBackpropFilterOp)

Computes the gradients of convolution with respect to the filter.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
use_cudnn_on_gpu::mlir::BoolAttrbool attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter_sizes tensor of 32-bit integer values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Conv2DBackpropFilterV2 (TF::Conv2DBackpropFilterV2Op)

Computes the gradients of convolution with respect to the filter.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
use_cudnn_on_gpu::mlir::BoolAttrbool attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter tensor of floating-point values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Conv2DBackpropInput (TF::Conv2DBackpropInputOp)

Computes the gradients of convolution with respect to the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
use_cudnn_on_gpu::mlir::BoolAttrbool attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input_sizes tensor of 32-bit integer values
filter tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values
out_backprop tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

tf.Conv2DBackpropInputV2 (TF::Conv2DBackpropInputV2Op)

Computes the gradients of convolution with respect to the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
use_cudnn_on_gpu::mlir::BoolAttrbool attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values
filter tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values
out_backprop tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer values

tf.Conv3D (TF::Conv3DOp)

Computes a 3-D convolution given 5-D input and filter tensors.

In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product.

Our Conv3D implements a form of cross-correlation.

Traits: AlwaysSpeculatableImplTrait, InferTensorType

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Conv3DBackpropFilter (TF::Conv3DBackpropFilterOp)

Computes the gradients of 3-D convolution with respect to the filter.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float values
filter tensor of 16-bit float or 32-bit float or 64-bit float values
out_backprop tensor of 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float values

tf.Conv3DBackpropFilterV2 (TF::Conv3DBackpropFilterV2Op)

Computes the gradients of 3-D convolution with respect to the filter.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter_sizes tensor of 32-bit integer values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Conv3DBackpropInput (TF::Conv3DBackpropInputOp)

Computes the gradients of 3-D convolution with respect to the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float values
filter tensor of 16-bit float or 32-bit float or 64-bit float values
out_backprop tensor of 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float values

tf.Conv3DBackpropInputV2 (TF::Conv3DBackpropInputV2Op)

Computes the gradients of 3-D convolution with respect to the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute
Tshape::mlir::Attributederived attribute

Operands:

Operand Description
input_sizes tensor of 32/64-bit signed integer values
filter tensor of floating-point values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.ConvertToCooTensor (TF::ConvertToCooTensorOp)

Op that converts tensors into COO format.

This op converts dense, sparse, and ragged tensors into the standard COO tensor format, which consists of three 1-D tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sample_count::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
combiner::mlir::StringAttrstring attribute

Operands:

Operand Description
indices_or_row_splits tensor of 32-bit integer values
values tensor of 32-bit integer values
weights tensor of 32-bit float values

Results:

Result Description
row_ids tensor of 32-bit integer values
col_ids tensor of 32-bit integer values
gains tensor of 32-bit float values

tf.Cos (TF::CosOp)

Computes cos of x element-wise.

Given an input tensor, this function computes cosine of every element in the tensor. Input range is (-inf, inf) and output range is [-1,1]. If input lies outside the boundary, nan is returned.

  x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")])
  tf.math.cos(x) ==> [nan -0.91113025 0.87758255 0.5403023 0.36235774 0.48718765 -0.95215535 nan]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Cosh (TF::CoshOp)

Computes hyperbolic cosine of x element-wise.

Given an input tensor, this function computes hyperbolic cosine of every element in the tensor. Input range is [-inf, inf] and output range is [1, inf].

  x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")])
  tf.math.cosh(x) ==> [inf 4.0515420e+03 1.1276259e+00 1.5430807e+00 1.8106556e+00 3.7621956e+00 1.1013233e+04 inf]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.CreateSummaryDbWriter (TF::CreateSummaryDbWriterOp)

Creates summary database writer accessible by given resource handle.

This can be used to write tensors from the execution graph directly to a database. Only SQLite is supported right now. This function will create the schema if it doesn't exist. Entries in the Users, Experiments, and Runs tables will be created automatically if they don't already exist.

writer: Handle to SummaryWriter resource to overwrite.
db_uri: For example "file:/tmp/foo.sqlite".
experiment_name: Can't contain ASCII control characters or <>. Case sensitive. If empty, then the Run will not be associated with any Experiment.
run_name: Can't contain ASCII control characters or <>. Case sensitive. If empty, then each Tag will not be associated with any Run.
user_name: Must be valid as both a DNS label and Linux username. If empty, then the Experiment will not be associated with any User.

Operands:

Operand Description
writer tensor of resource values
db_uri tensor of string values
experiment_name tensor of string values
run_name tensor of string values
user_name tensor of string values

tf.CreateSummaryFileWriter (TF::CreateSummaryFileWriterOp)

Creates a summary file writer accessible by the given resource handle.

writer: A handle to the summary writer resource.
logdir: Directory where the event file will be written.
max_queue: Size of the queue of pending events and summaries.
flush_millis: How often, in milliseconds, to flush the pending events and summaries to disk.
filename_suffix: Every event file's name is suffixed with this suffix.

Operands:

Operand Description
writer tensor of resource values
logdir tensor of string values
max_queue tensor of 32-bit integer values
flush_millis tensor of 32-bit integer values
filename_suffix tensor of string values

tf.Cross (TF::CrossOp)

Compute the pairwise cross product.

a and b must be the same shape; they can either be simple 3-element vectors, or any shape where the innermost dimension is 3. In the latter case, each pair of corresponding 3-element vectors is cross-multiplied independently.
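The 3-element cross product applied to each innermost vector pair can be sketched in pure Python (an assumed helper, not the TF kernel):

```python
# Standard right-handed cross product of two 3-element vectors.
def cross3(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

print(cross3([1, 0, 0], [0, 1, 0]))  # [0, 0, 1]
```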

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of integer or floating-point values
b tensor of integer or floating-point values

Results:

Result Description
product tensor of integer or floating-point values

tf.CrossReplicaSum (TF::CrossReplicaSumOp)

An Op to sum inputs across replicated TPU instances.

Each instance supplies its own input.

For example, suppose there are 8 TPU instances: [A, B, C, D, E, F, G, H]. Passing group_assignment=[[0,2,4,6],[1,3,5,7]] sets A, C, E, G as group 0, and B, D, F, H as group 1. Thus we get the outputs: [A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H, A+C+E+G, B+D+F+H].
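The group-sum semantics of the example above can be sketched in pure Python (hypothetical helper, not the TPU implementation): every replica in a group receives the sum of that group's inputs.

```python
def cross_replica_sum(inputs, group_assignment):
    outputs = [None] * len(inputs)
    for group in group_assignment:
        total = sum(inputs[r] for r in group)
        for r in group:
            outputs[r] = total
    return outputs

# Eight replicas, two interleaved groups of four:
print(cross_replica_sum([1, 2, 3, 4, 5, 6, 7, 8],
                        [[0, 2, 4, 6], [1, 3, 5, 7]]))
# [16, 20, 16, 20, 16, 20, 16, 20]
```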

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 32-bit unsigned integer values
group_assignment tensor of 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 32-bit unsigned integer values

tf.Cumprod (TF::CumprodOp)

Compute the cumulative product of the tensor x along axis.

By default, this op performs an inclusive cumprod, which means that the first element of the input is identical to the first element of the output:

tf.cumprod([a, b, c])  # => [a, a * b, a * b * c]

By setting the exclusive kwarg to True, an exclusive cumprod is performed instead:

tf.cumprod([a, b, c], exclusive=True)  # => [1, a, a * b]

By setting the reverse kwarg to True, the cumprod is performed in the opposite direction:

tf.cumprod([a, b, c], reverse=True)  # => [a * b * c, b * c, c]

This is more efficient than using separate tf.reverse ops.

The reverse and exclusive kwargs can also be combined:

tf.cumprod([a, b, c], exclusive=True, reverse=True)  # => [b * c, c, 1]
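The four variants above can be sketched together on a plain Python list (an assumed helper, not the TF kernel):

```python
def cumprod(xs, exclusive=False, reverse=False):
    # reverse is handled by flipping the input, scanning, and flipping back.
    if reverse:
        return list(reversed(cumprod(list(reversed(xs)), exclusive=exclusive)))
    out, acc = [], 1
    for x in xs:
        if exclusive:
            out.append(acc)  # emit the running product *before* including x
            acc *= x
        else:
            acc *= x
            out.append(acc)
    return out

print(cumprod([2, 3, 4]))                                # [2, 6, 24]
print(cumprod([2, 3, 4], exclusive=True))                # [1, 2, 6]
print(cumprod([2, 3, 4], reverse=True))                  # [24, 12, 4]
print(cumprod([2, 3, 4], exclusive=True, reverse=True))  # [12, 4, 1]
```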

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
exclusive::mlir::BoolAttrbool attribute
reverse::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of number values
axis tensor of 32/64-bit signed integer values

Results:

Result Description
out tensor of number values

tf.Cumsum (TF::CumsumOp)

Compute the cumulative sum of the tensor x along axis.

By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output:

tf.cumsum([a, b, c])  # => [a, a + b, a + b + c]

By setting the exclusive kwarg to True, an exclusive cumsum is performed instead:

tf.cumsum([a, b, c], exclusive=True)  # => [0, a, a + b]

By setting the reverse kwarg to True, the cumsum is performed in the opposite direction:

tf.cumsum([a, b, c], reverse=True)  # => [a + b + c, b + c, c]

This is more efficient than using separate tf.reverse ops.

The reverse and exclusive kwargs can also be combined:

tf.cumsum([a, b, c], exclusive=True, reverse=True)  # => [b + c, c, 0]
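The same pattern applies with addition; a NumPy sketch for 1-D inputs (the actual op works along an arbitrary axis):

```python
import numpy as np

def cumsum_1d(x, exclusive=False, reverse=False):
    x = np.asarray(x)
    if reverse:
        x = x[::-1]
    if exclusive:
        # shift the input right and prepend the additive identity
        x = np.concatenate(([0], x[:-1]))
    out = np.cumsum(x)
    return out[::-1] if reverse else out
```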

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
exclusive::mlir::BoolAttrbool attribute
reverse::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of number values
axis tensor of 32/64-bit signed integer values

Results:

Result Description
out tensor of number values

tf.CumulativeLogsumexp (TF::CumulativeLogsumexpOp)

Compute the cumulative log-sum-exp of the tensor x along axis.

By default, this op performs an inclusive cumulative log-sum-exp, which means that the first element of the input is identical to the first element of the output:

tf.math.cumulative_logsumexp([a, b, c])  # => [a, log(exp(a) + exp(b)), log(exp(a) + exp(b) + exp(c))]

By setting the exclusive kwarg to True, an exclusive cumulative log-sum-exp is performed instead:

tf.math.cumulative_logsumexp([a, b, c], exclusive=True)  # => [-inf, a, log(exp(a) + exp(b))]

Note that the neutral element of the log-sum-exp operation is -inf, however, for performance reasons, the minimal value representable by the floating point type is used instead.

By setting the reverse kwarg to True, the cumulative log-sum-exp is performed in the opposite direction.
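A naive NumPy sketch of the inclusive form (the real op computes this in a numerically stable way rather than exponentiating directly):

```python
import numpy as np

def cumulative_logsumexp_1d(x, reverse=False):
    x = np.asarray(x, dtype=np.float64)
    if reverse:
        x = x[::-1]
    # naive: log of the running sum of exponentials
    out = np.log(np.cumsum(np.exp(x)))
    return out[::-1] if reverse else out
```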

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
exclusive::mlir::BoolAttrbool attribute
reverse::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values
axis tensor of 32/64-bit signed integer values

Results:

Result Description
out tensor of floating-point values

tf.DataFormatDimMap (TF::DataFormatDimMapOp)

Returns the dimension index in the destination data format given the one in the source data format.
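A minimal Python sketch of the mapping (hypothetical helper; the op applies this element-wise over the input tensor):

```python
def data_format_dim_map(x, src_format="NHWC", dst_format="NCHW"):
    # A dimension index in src_format maps to the position of the same
    # dimension letter in dst_format; negative indices wrap around.
    n = len(src_format)
    return dst_format.index(src_format[x % n])
```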

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
src_format::mlir::StringAttrstring attribute
dst_format::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of 32/64-bit signed integer values

Results:

Result Description
y tensor of 32/64-bit signed integer values

tf.DataFormatVecPermute (TF::DataFormatVecPermuteOp)

Permute input tensor from src_format to dst_format.

Given source and destination format strings of length n=4 or 5, the input tensor must be a vector of size n or n-2, or a 2D tensor of shape (n, 2) or (n-2, 2).

If the first dimension of the input tensor is n-2, the non-spatial dimensions (i.e., N and C) are assumed to be omitted.

For example, with src_format of NHWC, dst_format of NCHW, and input:

[1, 2, 3, 4]

the output will be:

[1, 4, 2, 3]

With src_format of NDHWC, dst_format of NCDHW, and input:

[[1, 6], [2, 7], [3, 8], [4, 9], [5, 10]]

the output will be:

[[1, 6], [5, 10], [2, 7], [3, 8], [4, 9]]

With src_format of NHWC, dst_format of NCHW, and input:

[1, 2]

the output will be:

[1, 2]
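The three examples above follow one rule: each entry moves to the position its dimension letter occupies in dst_format. A Python sketch (hypothetical helper covering the vector and (n, 2) cases):

```python
def data_format_vec_permute(x, src_format="NHWC", dst_format="NCHW"):
    src, dst = src_format, dst_format
    if len(x) == len(src_format) - 2:
        # spatial-only input: drop the non-spatial N and C dimensions
        src = [d for d in src_format if d not in "NC"]
        dst = [d for d in dst_format if d not in "NC"]
    # move each entry to the slot its dimension letter has in dst
    return [x[list(src).index(d)] for d in dst]
```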

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
src_format::mlir::StringAttrstring attribute
dst_format::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of 32/64-bit signed integer values

Results:

Result Description
y tensor of 32/64-bit signed integer values

tf.DebugIdentityV2 (TF::DebugIdentityV2Op)

Debug Identity V2 Op.

Provides an identity mapping from input to output, while writing the content of the input tensor by calling DebugEventsWriter.

The semantics of the input tensor depends on tensor_debug_mode. In typical usage, the input tensor comes directly from the user computation only when graph_debug_mode is FULL_TENSOR (see protobuf/debug_event.proto for a list of all the possible values of graph_debug_mode). For the other debug modes, the input tensor should be produced by an additional op or subgraph that computes summary information about one or more tensors.

Attributes:

AttributeMLIR TypeDescription
tfdbg_context_id::mlir::StringAttrstring attribute
op_name::mlir::StringAttrstring attribute
output_slot::mlir::IntegerAttr64-bit signless integer attribute
tensor_debug_mode::mlir::IntegerAttr64-bit signless integer attribute
debug_urls::mlir::ArrayAttrstring array attribute
circular_buffer_size::mlir::IntegerAttr64-bit signless integer attribute
tfdbg_run_id::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.DecodeAndCropJpeg (TF::DecodeAndCropJpegOp)

Decode and Crop a JPEG-encoded image to a uint8 tensor.

The attr channels indicates the desired number of color channels for the decoded image.

Accepted values are:

  • 0: Use the number of channels in the JPEG-encoded image.
  • 1: output a grayscale image.
  • 3: output an RGB image.

If needed, the JPEG-encoded image is transformed to match the requested number of color channels.

The attr ratio allows downscaling the image by an integer factor during decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than downscaling the image later.

It is equivalent to decoding followed by cropping, but much faster because only the needed portion of the JPEG image is decoded.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
channels::mlir::IntegerAttr64-bit signless integer attribute
ratio::mlir::IntegerAttr64-bit signless integer attribute
fancy_upscaling::mlir::BoolAttrbool attribute
try_recover_truncated::mlir::BoolAttrbool attribute
acceptable_fraction::mlir::FloatAttr32-bit float attribute
dct_method::mlir::StringAttrstring attribute

Operands:

Operand Description
contents tensor of string values
crop_window tensor of 32-bit integer values

Results:

Result Description
image tensor of 8-bit unsigned integer values

tf.DecodeGif (TF::DecodeGifOp)

Decode the frame(s) of a GIF-encoded image to a uint8 tensor.

GIF images with frame or transparency compression are not supported. On Linux and macOS systems, convert animated GIFs from compressed to uncompressed by running:

convert $src.gif -coalesce $dst.gif

This op also supports decoding JPEGs and PNGs, though it is cleaner to use tf.io.decode_image.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
contents tensor of string values

Results:

Result Description
image tensor of 8-bit unsigned integer values

tf.DecodeJpeg (TF::DecodeJpegOp)

Decode a JPEG-encoded image to a uint8 tensor.

The attr channels indicates the desired number of color channels for the decoded image.

Accepted values are:

  • 0: Use the number of channels in the JPEG-encoded image.
  • 1: output a grayscale image.
  • 3: output an RGB image.

If needed, the JPEG-encoded image is transformed to match the requested number of color channels.

The attr ratio allows downscaling the image by an integer factor during decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than downscaling the image later.

This op also supports decoding PNGs and non-animated GIFs since the interface is the same, though it is cleaner to use tf.io.decode_image.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
channels::mlir::IntegerAttr64-bit signless integer attribute
ratio::mlir::IntegerAttr64-bit signless integer attribute
fancy_upscaling::mlir::BoolAttrbool attribute
try_recover_truncated::mlir::BoolAttrbool attribute
acceptable_fraction::mlir::FloatAttr32-bit float attribute
dct_method::mlir::StringAttrstring attribute

Operands:

Operand Description
contents tensor of string values

Results:

Result Description
image tensor of 8-bit unsigned integer values

tf.DecodePaddedRaw (TF::DecodePaddedRawOp)

Reinterpret the bytes of a string as a vector of numbers.
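A NumPy sketch of the reinterpretation for a single string (hypothetical helper; the real op applies this to every element of input_bytes, using fixed_length to pad or truncate):

```python
import numpy as np

def decode_padded_raw(raw, fixed_length, dtype=np.int16, little_endian=True):
    # truncate or zero-pad the byte string to fixed_length, then reinterpret
    buf = raw[:fixed_length].ljust(fixed_length, b"\x00")
    dt = np.dtype(dtype).newbyteorder("<" if little_endian else ">")
    return np.frombuffer(buf, dtype=dt)
```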

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
little_endian::mlir::BoolAttrbool attribute
out_type::mlir::Attributederived attribute

Operands:

Operand Description
input_bytes tensor of string values
fixed_length tensor of 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 8-bit unsigned integer values

tf.DecodePng (TF::DecodePngOp)

Decode a PNG-encoded image to a uint8 or uint16 tensor.

The attr channels indicates the desired number of color channels for the decoded image.

Accepted values are:

  • 0: Use the number of channels in the PNG-encoded image.
  • 1: output a grayscale image.
  • 3: output an RGB image.
  • 4: output an RGBA image.

If needed, the PNG-encoded image is transformed to match the requested number of color channels.

This op also supports decoding JPEGs and non-animated GIFs since the interface is the same, though it is cleaner to use tf.io.decode_image.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
channels::mlir::IntegerAttr64-bit signless integer attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
contents tensor of string values

Results:

Result Description
image tensor of 16-bit unsigned integer or 8-bit unsigned integer values

tf.DeleteIterator (TF::DeleteIteratorOp)

A container for an iterator resource.

Operands:

Operand Description
handle tensor of resource values
deleter tensor of variant values

tf.DeleteMemoryCache (TF::DeleteMemoryCacheOp)

Operands:

Operand Description
handle tensor of resource values
deleter tensor of variant values

tf.DeleteMultiDeviceIterator (TF::DeleteMultiDeviceIteratorOp)

A container for an iterator resource.

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute

Operands:

Operand Description
multi_device_iterator tensor of resource values
iterators variadic of tensor of resource values
deleter tensor of variant values

tf.DeleteRandomSeedGenerator (TF::DeleteRandomSeedGeneratorOp)

Operands:

Operand Description
handle tensor of resource values
deleter tensor of variant values

tf.DeleteSeedGenerator (TF::DeleteSeedGeneratorOp)

Operands:

Operand Description
handle tensor of resource values
deleter tensor of variant values

tf.DepthToSpace (TF::DepthToSpaceOp)

DepthToSpace for tensors of type T.

Rearranges data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the depth dimension are moved in spatial blocks to the height and width dimensions. The attr block_size indicates the input block size and how the data is moved.

  • Chunks of data of size block_size * block_size from depth are rearranged into non-overlapping blocks of size block_size x block_size
  • The width of the output tensor is input_width * block_size, whereas the height is input_height * block_size.
  • The Y, X coordinates within each block of the output image are determined by the high order component of the input channel index.
  • The depth of the input tensor must be divisible by block_size * block_size.

The data_format attr specifies the layout of the input and output tensors with the following options:

  • "NHWC": [ batch, height, width, channels ]
  • "NCHW": [ batch, channels, height, width ]
  • "NCHW_VECT_C": qint8 [ batch, channels / 4, height, width, 4 ]

It is useful to consider the operation as transforming a 6-D Tensor. e.g. for data_format = NHWC, Each element in the input tensor can be specified via 6 coordinates, ordered by decreasing memory layout significance as: n,iY,iX,bY,bX,oC (where n=batch index, iX, iY means X or Y coordinates within the input image, bX, bY means coordinates within the output block, oC means output channels). The output would be the input transposed to the following layout: n,iY,bY,iX,bX,oC

This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.

For example, given an input of shape [1, 1, 1, 4], data_format = "NHWC" and block_size = 2:

x = [[[[1, 2, 3, 4]]]]

This operation will output a tensor of shape [1, 2, 2, 1]:

   [[[[1], [2]],
     [[3], [4]]]]

Here, the input has a batch of 1 and each batch element has shape [1, 1, 4], the corresponding output will have 2x2 elements and will have a depth of 1 channel (1 = 4 / (block_size * block_size)). The output element shape is [2, 2, 1].

For an input tensor with larger depth, here of shape [1, 1, 1, 12], e.g.

x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]

This operation, for block size of 2, will return the following tensor of shape [1, 2, 2, 3]

   [[[[1, 2, 3], [4, 5, 6]],
     [[7, 8, 9], [10, 11, 12]]]]

Similarly, for the following input of shape [1, 2, 2, 4], and a block size of 2:

x =  [[[[1, 2, 3, 4],
       [5, 6, 7, 8]],
      [[9, 10, 11, 12],
       [13, 14, 15, 16]]]]

the operator will return the following tensor of shape [1, 4, 4, 1]:

x = [[[ [1],   [2],  [5],  [6]],
      [ [3],   [4],  [7],  [8]],
      [ [9],  [10], [13],  [14]],
      [ [11], [12], [15],  [16]]]]
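For NHWC input, the 6-D transpose described above (n,iY,bY,iX,bX,oC) can be written directly in NumPy (a sketch of the semantics, not the op implementation):

```python
import numpy as np

def depth_to_space_nhwc(x, block_size):
    b, h, w, c = x.shape
    oc = c // (block_size * block_size)
    # view the channels as (bY, bX, oC), then interleave the block
    # coordinates into the spatial dimensions: n, iY, bY, iX, bX, oC
    x = x.reshape(b, h, w, block_size, block_size, oc)
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(b, h * block_size, w * block_size, oc)
```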

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
block_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 2
data_format::mlir::StringAttrstring attribute whose value is NHWC, or NCHW, or NCHW_VECT_C
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.DepthwiseConv2dNative (TF::DepthwiseConv2dNativeOp)

Computes a 2-D depthwise convolution given 4-D input and filter tensors.

Given an input tensor of shape [batch, in_height, in_width, in_channels] and a filter / kernel tensor of shape [filter_height, filter_width, in_channels, channel_multiplier], containing in_channels convolutional filters of depth 1, depthwise_conv2d applies a different filter to each input channel (expanding from 1 channel to channel_multiplier channels for each), then concatenates the results together. Thus, the output has in_channels * channel_multiplier channels.

for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                        filter[di, dj, k, q]

Must have strides[0] = strides[3] = 1. For the most common case of the same horizontal and vertical strides, strides = [1, stride, stride, 1].
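The summation above can be sketched with explicit loops over output positions (NumPy, VALID padding, equal horizontal and vertical strides; illustrative only):

```python
import numpy as np

def depthwise_conv2d_valid(x, f, stride=1):
    # x: [batch, H, W, C], f: [kH, kW, C, M] -> output [batch, oH, oW, C*M]
    b, h, w, c = x.shape
    kh, kw, _, m = f.shape
    oh, ow = (h - kh) // stride + 1, (w - kw) // stride + 1
    out = np.zeros((b, oh, ow, c * m))
    for i in range(oh):
        for j in range(ow):
            patch = x[:, i*stride:i*stride+kh, j*stride:j*stride+kw, :]
            # each input channel k is convolved with its own M filters;
            # flattening (C, M) makes output channel k * M + q
            prod = patch[..., None] * f[None, ...]
            out[:, i, j, :] = prod.sum(axis=(1, 2)).reshape(b, c * m)
    return out
```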

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.DepthwiseConv2dNativeBackpropFilter (TF::DepthwiseConv2dNativeBackpropFilterOp)

Computes the gradients of depthwise convolution with respect to the filter.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
filter_sizes tensor of 32-bit integer values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.DepthwiseConv2dNativeBackpropInput (TF::DepthwiseConv2dNativeBackpropInputOp)

Computes the gradients of depthwise convolution with respect to the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strides::mlir::ArrayAttr64-bit integer array attribute
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
dilations::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input_sizes tensor of 32-bit integer values
filter tensor of floating-point values
out_backprop tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.Dequantize (TF::DequantizeOp)

Dequantize the 'input' tensor into a float or bfloat16 Tensor.

[min_range, max_range] are scalar floats that specify the range for the output. The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents.

In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:

if T == qint8: in[i] += (range(T) + 1)/ 2.0
out[i] = min_range + (in[i]* (max_range - min_range) / range(T))

here range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()

MIN_COMBINED Mode Example

If the input comes from a QuantizedRelu6, the output type is quint8 (range of 0-255) but the possible range of QuantizedRelu6 is 0-6. The min_range and max_range values are therefore 0.0 and 6.0. Dequantize on quint8 will take each value, cast to float, and multiply by 6 / 255. Note that if the quantized type is qint8, the operation will additionally add 128 to each value before casting.
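The MIN_COMBINED formula can be sketched with NumPy (a hypothetical helper using plain uint8/int8 in place of the quantized types):

```python
import numpy as np

def dequantize_min_combined(q, min_range, max_range):
    info = np.iinfo(q.dtype)
    t_range = float(info.max) - float(info.min)       # range(T)
    x = q.astype(np.float64)
    if info.min < 0:
        x += (t_range + 1) / 2.0                      # shift signed input (e.g. qint8)
    return (min_range + x * (max_range - min_range) / t_range).astype(np.float32)
```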

If the mode is 'MIN_FIRST', then this approach is used:

num_discrete_values = 1 << (# of bits in T)
range_adjust = num_discrete_values / (num_discrete_values - 1)
range = (range_max - range_min) * range_adjust
range_scale = range / num_discrete_values
const double offset_input = static_cast<double>(input) - lowest_quantized;
result = range_min + ((input - numeric_limits<T>::min()) * range_scale)

If the mode is SCALED, dequantization is performed by multiplying each input value by a scaling_factor. (Thus an input of 0 always maps to 0.0).

The scaling_factor is determined from min_range, max_range, and narrow_range in a way that is compatible with QuantizeAndDequantize{V2|V3} and QuantizeV2, using the following algorithm:


  const int min_expected_T = std::numeric_limits<T>::min() +
    (narrow_range ? 1 : 0);
  const int max_expected_T = std::numeric_limits<T>::max();

  const float scale_factor =
    (std::numeric_limits<T>::min() == 0) ? (max_range / max_expected_T)
                                         : std::max(min_range / min_expected_T,
                                                    max_range / max_expected_T);

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
mode::mlir::StringAttrstring attribute whose value is MIN_COMBINED, or MIN_FIRST, or SCALED
narrow_range::mlir::BoolAttrbool attribute
axis::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer values
min_range tensor of 32-bit float values
max_range tensor of 32-bit float values

Results:

Result Description
output tensor of bfloat16 or 32-bit float values

tf.DeserializeIterator (TF::DeserializeIteratorOp)

Converts the given variant tensor to an iterator and stores it in the given resource.

Operands:

Operand Description
resource_handle tensor of resource values
serialized tensor of variant values

tf.DeserializeSparse (TF::DeserializeSparseOp)

Deserialize SparseTensor objects.

The input serialized_sparse must have the shape [?, ?, ..., ?, 3] where the last dimension stores serialized SparseTensor objects and the other N dimensions (N >= 0) correspond to a batch. The ranks of the original SparseTensor objects must all match. When the final SparseTensor is created, its rank is the rank of the incoming SparseTensor objects plus N; the sparse tensors have been concatenated along new dimensions, one for each batch.

The output SparseTensor object's shape values for the original dimensions are the max across the input SparseTensor objects' shape values for the corresponding dimensions. The new dimensions match the size of the batch.

The input SparseTensor objects' indices are assumed ordered in standard lexicographic order. If this is not the case, after this step run SparseReorder to restore index ordering.

For example, if the serialized input is a [2 x 3] matrix representing two original SparseTensor objects:

index = [ 0]
        [10]
        [20]
values = [1, 2, 3]
shape = [50]

and

index = [ 2]
        [10]
values = [4, 5]
shape = [30]

then the final deserialized SparseTensor will be:

index = [0  0]
        [0 10]
        [0 20]
        [1  2]
        [1 10]
values = [1, 2, 3, 4, 5]
shape = [2 50]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tserialized::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
serialized_sparse tensor of string or variant values

Results:

Result Description
sparse_indices tensor of 64-bit integer values
sparse_values tensor of tf.dtype values
sparse_shape tensor of 64-bit integer values

tf.DestroyResourceOp (TF::DestroyResourceOp)

Deletes the resource specified by the handle.

All subsequent operations using the resource will result in a NotFound error status.

Attributes:

AttributeMLIR TypeDescription
ignore_lookup_error::mlir::BoolAttrbool attribute

Operands:

Operand Description
resource tensor of resource values

tf.DeviceIndex (TF::DeviceIndexOp)

Returns the index of the device on which the op runs.

Given a list of device names, this operation returns the index of the device this op runs on. The length of the list is returned in two cases: (1) the device does not exist in the given device list; (2) the op is in XLA compilation.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
device_names::mlir::ArrayAttrstring array attribute

Results:

Result Description
index tensor of 32-bit integer values

tf.Diag (TF::DiagOp)

Returns a diagonal tensor with given diagonal values.

Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:

Assume diagonal has dimensions [D1,..., Dk], then the output is a tensor of rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:

output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik] and 0 everywhere else.

For example:

# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
                       [0, 2, 0, 0]
                       [0, 0, 3, 0]
                       [0, 0, 0, 4]]

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
diagonal tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.DiagPart (TF::DiagPartOp)

Returns the diagonal part of the tensor.

This operation returns a tensor with the diagonal part of the input. The diagonal part is computed as follows:

Assume input has dimensions [D1,..., Dk, D1,..., Dk], then the output is a tensor of rank k with dimensions [D1,..., Dk] where:

diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik].

For example:

# 'input' is [[1, 0, 0, 0]
              [0, 2, 0, 0]
              [0, 0, 3, 0]
              [0, 0, 0, 4]]

tf.diag_part(input) ==> [1, 2, 3, 4]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
diagonal tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.Digamma (TF::DigammaOp)

Computes Psi, the derivative of Lgamma (the log of the absolute value of Gamma(x)), element-wise.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.DisableCopyOnRead (TF::DisableCopyOnReadOp)

Turns off the copy-on-read mode.

Turns off the copy-on-read mode of a resource variable. If the variable is not in copy-on-read mode, this op has no effect.

Operands:

Operand Description
resource tensor of resource values

tf.Div (TF::DivOp)

Returns x / y element-wise.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.DivNoNan (TF::DivNoNanOp)

Returns 0 if the denominator is zero.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values
y tensor of floating-point or complex values

Results:

Result Description
z tensor of floating-point or complex values

tf.DummyMemoryCache (TF::DummyMemoryCacheOp)

Results:

Result Description
handle tensor of resource values

tf.DummySeedGenerator (TF::DummySeedGeneratorOp)

Results:

Result Description
handle tensor of resource values

tf.DynamicEnqueueTPUEmbeddingArbitraryTensorBatch (TF::DynamicEnqueueTPUEmbeddingArbitraryTensorBatchOp)

Eases the porting of code that uses tf.nn.embedding_lookup_sparse().

embedding_indices[i] and aggregation_weights[i] correspond to the ith feature.

The tensors at corresponding positions in the three input lists (sample_indices, embedding_indices and aggregation_weights) must have the same shape, i.e. rank 1 with dim_size() equal to the total number of lookups into the table described by the corresponding feature.

Traits: SameVariadicOperandSize

Interfaces: TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
combiners::mlir::ArrayAttrstring array attribute
N::mlir::Attributederived attribute
T1::mlir::Attributederived attribute
T2::mlir::Attributederived attribute
T3::mlir::Attributederived attribute

Operands:

Operand Description
sample_indices_or_row_splits variadic of tensor of 32/64-bit signed integer values
embedding_indices variadic of tensor of 32/64-bit signed integer values
aggregation_weights variadic of tensor of 32/64-bit float values
mode_override tensor of string values
device_ordinal tensor of 32-bit integer values

tf.DynamicPartition (TF::DynamicPartitionOp)

Partitions data into num_partitions tensors using indices from partitions.

For each index tuple js of size partitions.ndim, the slice data[js, ...] becomes part of outputs[partitions[js]]. The slices with partitions[js] = i are placed in outputs[i] in lexicographic order of js, and the first dimension of outputs[i] is the number of entries in partitions equal to i. In detail,

    outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]

    outputs[i] = pack([data[js, ...] for js if partitions[js] == i])

data.shape must start with partitions.shape.

For example:

    # Scalar partitions.
    partitions = 1
    num_partitions = 2
    data = [10, 20]
    outputs[0] = []  # Empty with shape [0, 2]
    outputs[1] = [[10, 20]]

    # Vector partitions.
    partitions = [0, 0, 1, 1, 0]
    num_partitions = 2
    data = [10, 20, 30, 40, 50]
    outputs[0] = [10, 20, 50]
    outputs[1] = [30, 40]

See dynamic_stitch for an example on how to merge partitions back.

Raises:

  • InvalidArgumentError in the following cases:
    • If partitions is not in the range [0, num_partitions)
    • If partitions.shape does not match a prefix of data.shape.
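For vector partitions, the semantics can be sketched in NumPy (this reproduces the vector example above; it is an illustration, not the kernel's implementation):

```python
import numpy as np

def dynamic_partition(data, partitions, num_partitions):
    # NumPy sketch for 1-D `partitions`: entries of `data` whose
    # partition index equals i go to outputs[i], preserving order.
    data = np.asarray(data)
    partitions = np.asarray(partitions)
    return [data[partitions == i] for i in range(num_partitions)]

outs = dynamic_partition([10, 20, 30, 40, 50], [0, 0, 1, 1, 0], 2)
# outs[0] -> [10, 20, 50], outs[1] -> [30, 40]
```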

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
num_partitions::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of tf.dtype values
partitions tensor of 32-bit integer values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf.DynamicStitch (TF::DynamicStitchOp)

Interleave the values from the data tensors into a single tensor.

Builds a merged tensor such that

    merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]

For example, if each indices[m] is scalar or vector, we have

    # Scalar indices:
    merged[indices[m], ...] = data[m][...]

    # Vector indices:
    merged[indices[m][i], ...] = data[m][i, ...]

Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is

    merged.shape = [max(indices) + 1] + constant

Values are merged in order, so if an index appears in both indices[m][i] and indices[n][j] for (m,i) < (n,j) the slice data[n][j] will appear in the merged result. If you do not need this guarantee, ParallelDynamicStitch might perform better on some devices.

For example:

    indices[0] = 6
    indices[1] = [4, 1]
    indices[2] = [[5, 2], [0, 3]]
    data[0] = [61, 62]
    data[1] = [[41, 42], [11, 12]]
    data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
    merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
              [51, 52], [61, 62]]

This method can be used to merge partitions created by dynamic_partition, as illustrated in the following example:

    # Apply function (increments x_i) on elements for which a certain condition
    # apply (x_i != -1 in this example).
    x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
    condition_mask=tf.not_equal(x,tf.constant(-1.))
    partitioned_data = tf.dynamic_partition(
        x, tf.cast(condition_mask, tf.int32) , 2)
    partitioned_data[1] = partitioned_data[1] + 1.0
    condition_indices = tf.dynamic_partition(
        tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2)
    x = tf.dynamic_stitch(condition_indices, partitioned_data)
    # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain
    # unchanged.
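A NumPy sketch of the merge, reproducing the worked example above (later assignments overwrite earlier ones, matching the (m,i) < (n,j) ordering guarantee):

```python
import numpy as np

def dynamic_stitch(indices, data):
    # NumPy sketch: each index selects a slice of the corresponding
    # data tensor; when an index repeats, later assignments win.
    pairs = []
    for idx, d in zip(indices, data):
        idx, d = np.asarray(idx), np.asarray(d)
        if idx.ndim == 0:
            pairs.append((int(idx), d))
        else:
            for pos in np.ndindex(idx.shape):
                pairs.append((int(idx[pos]), d[pos]))
    out = [None] * (max(i for i, _ in pairs) + 1)
    for i, v in pairs:  # in order, so repeats are overwritten
        out[i] = v
    return np.stack(out)

merged = dynamic_stitch(
    [6, [4, 1], [[5, 2], [0, 3]]],
    [[61, 62], [[41, 42], [11, 12]],
     [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]],
)
# merged.tolist() matches the example output above
```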

Traits: AlwaysSpeculatableImplTrait, SameVariadicOperandSize

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
indices variadic of tensor of 32-bit integer values
data variadic of tensor of tf.dtype values

Results:

Result Description
merged tensor of tf.dtype values

tf.Einsum (TF::EinsumOp)

Tensor contraction according to Einstein summation convention.

Implements generalized Tensor contraction and reduction. Each input Tensor must have a corresponding input subscript appearing in the comma-separated left-hand side of the equation. The right-hand side of the equation consists of the output subscript. The input subscripts and the output subscript should consist of zero or more named axis labels and at most one ellipsis (...).

The named axis labels may be any single character other than those having special meaning, namely ,.->. The behavior of this Op is undefined if it receives an ill-formatted equation; since the validation is done at graph-building time, we omit format validation checks at runtime.

Operations are applied to the input(s) according to the following rules:

(a) Generalized Diagonals: For input dimensions corresponding to axis labels appearing more than once in the same input subscript, we take the generalized (k-dimensional) diagonal. For example, in the equation iii->i with input shape [3, 3, 3], the generalized diagonal would consist of 3 elements at indices (0, 0, 0), (1, 1, 1) and (2, 2, 2) to create a Tensor of shape [3].

(b) Reduction: Axes corresponding to labels appearing only in one input subscript but not in the output subscript are summed over prior to Tensor contraction. For example, in the equation ab,bc->b, the axis labels a and c are the reduction axis labels.

(c) Batch Dimensions: Axes corresponding to labels appearing in each of the input subscripts and also in the output subscript make up the batch dimensions in Tensor contraction. Unnamed axis labels corresponding to ellipsis (...) also correspond to batch dimensions. For example, for the equation denoting batch matrix multiplication, bij,bjk->bik, the axis label b corresponds to a batch dimension.

(d) Contraction: In case of binary einsum, axes corresponding to labels appearing in two different inputs (and not in the output) are contracted against each other. Considering the batch matrix multiplication equation again (bij,bjk->bik), the contracted axis label is j.

(e) Expand Diagonal: If the output subscripts contain repeated (explicit) axis labels, the opposite operation of (a) is applied. For example, in the equation i->iii, and input shape [3], the output of shape [3, 3, 3] are all zeros, except for the (generalized) diagonal which is populated with values from the input. Note: This operation is not supported by np.einsum or tf.einsum; it is provided to enable computing the symbolic gradient of tf.einsum.

The output subscripts must contain only labels appearing in at least one of the input subscripts. Furthermore, all dimensions mapping to the same axis label must be equal.

Any of the input and output subscripts may contain at most a single ellipsis (...). These ellipses are mapped against dimensions not corresponding to any named axis label. If two inputs contain an ellipsis, then they are broadcasted according to standard NumPy broadcasting rules.

The broadcasted dimensions are placed in the corresponding location of the ellipsis in the output subscript. If the broadcasted dimensions are non-empty and the output subscripts do not contain ellipsis, then an InvalidArgument error is raised.

@compatibility(numpy) Similar to numpy.einsum.

Comparison with numpy.einsum:

  • This Op only supports unary and binary forms of numpy.einsum.
  • This Op does not support the implicit form (i.e. equations without ->).
  • This Op also supports repeated indices in the output subscript, which is not supported by numpy.einsum. @end_compatibility
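Since this Op mirrors numpy.einsum for the supported forms, rules (a), (b) and (d) above can be illustrated directly in NumPy:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# (d) Contraction: the shared label j is summed over (matrix multiply).
c = np.einsum("ij,jk->ik", a, b)
assert np.array_equal(c, a @ b)

# (a) Generalized diagonal: a label repeated within one input subscript.
m = np.arange(9).reshape(3, 3)
diag = np.einsum("ii->i", m)      # [m[0,0], m[1,1], m[2,2]]

# (b) Reduction: label j appears in an input but not in the output.
row_sums = np.einsum("ij->i", a)  # sum over axis 1
```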

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
equation::mlir::StringAttrstring attribute
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.Elu (TF::EluOp)

Computes the exponential linear function.

The ELU function is defined as:

  • \( e ^ x - 1 \) if \( x < 0 \)
  • \( x \) if \( x >= 0 \)

Examples:

    tf.nn.elu(1.0)
    tf.nn.elu(0.0)
    tf.nn.elu(-1000.0)
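The definition above can be written directly (a minimal Python sketch, not the op's implementation):

```python
import math

def elu(x):
    # ELU as defined above: e^x - 1 for x < 0, identity for x >= 0.
    return x if x >= 0 else math.exp(x) - 1.0

# elu(1.0) -> 1.0; elu(0.0) -> 0.0; elu(-1000.0) saturates near -1.0
```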

See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
features tensor of floating-point values

Results:

Result Description
activations tensor of floating-point values

tf.EluGrad (TF::EluGradOp)

Computes gradients for the exponential linear (Elu) operation.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
gradients tensor of floating-point values
outputs tensor of floating-point values

Results:

Result Description
backprops tensor of floating-point values

tf.Empty (TF::EmptyOp)

Creates a tensor with the given shape.

This operation creates a tensor of shape and dtype.

Attributes:

AttributeMLIR TypeDescription
init::mlir::BoolAttrbool attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.EmptyTensorList (TF::EmptyTensorListOp)

Creates and returns an empty tensor list.

All list elements must be tensors of dtype element_dtype and shape compatible with element_shape.

handle: an empty tensor list. element_dtype: the type of elements in the list. element_shape: a shape compatible with that of elements in the list.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
shape_type::mlir::Attributederived attribute
element_dtype::mlir::Attributederived attribute

Operands:

Operand Description
element_shape tensor of 32/64-bit signed integer values
max_num_elements tensor of 32-bit integer values

Results:

Result Description
handle tensor of variant values

tf.EncodePng (TF::EncodePngOp)

PNG-encode an image.

image is a 3-D uint8 or uint16 Tensor of shape [height, width, channels] where channels is:

  • 1: for grayscale.
  • 2: for grayscale + alpha.
  • 3: for RGB.
  • 4: for RGBA.

The ZLIB compression level, compression, can be -1 for the PNG-encoder default or a value from 0 to 9. 9 is the highest compression level, generating the smallest output, but is slower.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
compression::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
image tensor of 16-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
contents tensor of string values

tf.EnqueueTPUEmbeddingArbitraryTensorBatch (TF::EnqueueTPUEmbeddingArbitraryTensorBatchOp)

Eases the porting of code that uses tf.nn.embedding_lookup_sparse().

embedding_indices[i] and aggregation_weights[i] correspond to the ith feature.

The tensors at corresponding positions in the three input lists (sample_indices, embedding_indices and aggregation_weights) must have the same shape, i.e. rank 1 with dim_size() equal to the total number of lookups into the table described by the corresponding feature.

Traits: SameVariadicOperandSize

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
combiners::mlir::ArrayAttrstring array attribute
N::mlir::Attributederived attribute
T1::mlir::Attributederived attribute
T2::mlir::Attributederived attribute
T3::mlir::Attributederived attribute

Operands:

Operand Description
sample_indices_or_row_splits variadic of tensor of 32/64-bit signed integer values
embedding_indices variadic of tensor of 32/64-bit signed integer values
aggregation_weights variadic of tensor of 32/64-bit float values
mode_override tensor of string values

tf.EnqueueTPUEmbeddingBatch (TF::EnqueueTPUEmbeddingBatchOp)

An op that enqueues a list of input batch tensors to TPUEmbedding.

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
combiners::mlir::ArrayAttrstring array attribute
N::mlir::Attributederived attribute

Operands:

Operand Description
batch variadic of tensor of string values
mode_override tensor of string values

tf.EnqueueTPUEmbeddingIntegerBatch (TF::EnqueueTPUEmbeddingIntegerBatchOp)

An op that enqueues a list of input batch tensors to TPUEmbedding.

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
N::mlir::Attributederived attribute

Operands:

Operand Description
batch variadic of tensor of 32-bit integer values
mode_override tensor of string values

tf.EnqueueTPUEmbeddingRaggedTensorBatch (TF::EnqueueTPUEmbeddingRaggedTensorBatchOp)

Eases the porting of code that uses tf.nn.embedding_lookup().

sample_splits[i], embedding_indices[i] and aggregation_weights[i] correspond to the ith feature. table_ids[i] indicates which embedding table to look up for the ith feature.

The tensors at corresponding positions in two of the input lists, embedding_indices and aggregation_weights, must have the same shape, i.e. rank 1 with dim_size() equal to the total number of lookups into the table described by the corresponding feature.

Traits: SameVariadicOperandSize

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
combiners::mlir::ArrayAttrstring array attribute
table_ids::mlir::ArrayAttr64-bit integer array attribute
max_sequence_lengths::mlir::ArrayAttr64-bit integer array attribute
num_features::mlir::ArrayAttr64-bit integer array attribute
N::mlir::Attributederived attribute
T1::mlir::Attributederived attribute
T2::mlir::Attributederived attribute
T3::mlir::Attributederived attribute

Operands:

Operand Description
sample_splits variadic of tensor of 32/64-bit signed integer values
embedding_indices variadic of tensor of 32/64-bit signed integer values
aggregation_weights variadic of tensor of 32/64-bit float values
mode_override tensor of string values

tf.EnqueueTPUEmbeddingSparseBatch (TF::EnqueueTPUEmbeddingSparseBatchOp)

An op that enqueues TPUEmbedding input indices from a SparseTensor.

This Op eases the porting of code that uses embedding_lookup_sparse(), although some Python preprocessing of the SparseTensor arguments to embedding_lookup_sparse() is required to produce the arguments to this Op, since only a single EnqueueTPUEmbeddingSparseBatch Op is allowed per training step.

The tensors at corresponding positions in the three input lists must have the same shape, i.e. rank 1 with dim_size() equal to the total number of lookups into the table described by the corresponding table_id.

Traits: SameVariadicOperandSize

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
combiners::mlir::ArrayAttrstring array attribute
N::mlir::Attributederived attribute
T1::mlir::Attributederived attribute
T2::mlir::Attributederived attribute
T3::mlir::Attributederived attribute

Operands:

Operand Description
sample_indices variadic of tensor of 32/64-bit signed integer values
embedding_indices variadic of tensor of 32/64-bit signed integer values
aggregation_weights variadic of tensor of 32/64-bit float values
mode_override tensor of string values

tf.EnqueueTPUEmbeddingSparseTensorBatch (TF::EnqueueTPUEmbeddingSparseTensorBatchOp)

Eases the porting of code that uses tf.nn.embedding_lookup_sparse().

sample_indices[i], embedding_indices[i] and aggregation_weights[i] correspond to the ith feature. table_ids[i] indicates which embedding table to look up for the ith feature.

The tensors at corresponding positions in the three input lists (sample_indices, embedding_indices and aggregation_weights) must have the same shape, i.e. rank 1 with dim_size() equal to the total number of lookups into the table described by the corresponding feature.

Traits: SameVariadicOperandSize

Interfaces: GetResourceInstanceInterface, TF_TPUEmbeddingWriteEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::TPUEmbedding}

Attributes:

AttributeMLIR TypeDescription
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute
combiners::mlir::ArrayAttrstring array attribute
table_ids::mlir::ArrayAttr64-bit integer array attribute
max_sequence_lengths::mlir::ArrayAttr64-bit integer array attribute
num_features::mlir::ArrayAttr64-bit integer array attribute
N::mlir::Attributederived attribute
T1::mlir::Attributederived attribute
T2::mlir::Attributederived attribute
T3::mlir::Attributederived attribute

Operands:

Operand Description
sample_indices variadic of tensor of 32/64-bit signed integer values
embedding_indices variadic of tensor of 32/64-bit signed integer values
aggregation_weights variadic of tensor of 32/64-bit float values
mode_override tensor of string values

tf.EnsureShape (TF::EnsureShapeOp)

Ensures that the tensor's shape matches the expected shape.

Raises an error if the input tensor's shape does not match the specified shape. Returns the input tensor otherwise.
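A NumPy sketch of the check (ensure_shape here is an illustrative helper mirroring tf.ensure_shape semantics, where None matches any dimension size):

```python
import numpy as np

def ensure_shape(x, expected):
    # Sketch: raise on shape mismatch, otherwise pass the input through
    # unchanged. `None` in `expected` matches any size in that dimension.
    x = np.asarray(x)
    ok = len(x.shape) == len(expected) and all(
        e is None or e == d for e, d in zip(expected, x.shape)
    )
    if not ok:
        raise ValueError(f"shape {x.shape} is incompatible with {expected}")
    return x
```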

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
shape::mlir::AttributeTensorFlow shape attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.Equal (TF::EqualOp)

Returns the truth value of (x == y) element-wise.

    x = tf.constant([2, 4])
    y = tf.constant(2)
    tf.math.equal(x, y) ==> array([True, False])

    x = tf.constant([2, 4])
    y = tf.constant([2, 4])
    tf.math.equal(x, y) ==> array([True,  True])

Traits: AlwaysSpeculatableImplTrait, Commutative

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
incompatible_shape_error::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
y tensor of tf.dtype values

Results:

Result Description
z tensor of bool values

tf.Erf (TF::ErfOp)

Computes the Gauss error function of x element-wise.

In statistics, for non-negative values of \(x\), the error function has the following interpretation: for a random variable \(Y\) that is normally distributed with mean 0 and variance \(1/2\), \(erf(x)\) is the probability that \(Y\) falls in the range \([-x, x]\).
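This probabilistic interpretation can be checked numerically with Python's math.erf (here \(Y\) has variance 1/2, i.e. standard deviation \(1/\sqrt{2}\)):

```python
import math

def normal_cdf(z, sigma):
    # CDF of a zero-mean normal with standard deviation sigma.
    return 0.5 * (1.0 + math.erf(z / (sigma * math.sqrt(2.0))))

x = 0.7
sigma = math.sqrt(0.5)  # variance 1/2
prob = normal_cdf(x, sigma) - normal_cdf(-x, sigma)  # P(-x <= Y <= x)
assert abs(prob - math.erf(x)) < 1e-12
```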

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.Erfc (TF::ErfcOp)

Computes the complementary error function of x element-wise.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.Erfinv (TF::ErfinvOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.ExecuteTPUEmbeddingPartitioner (TF::ExecuteTPUEmbeddingPartitionerOp)

An op that executes the TPUEmbedding partitioner on the central configuration

device and computes the HBM size (in bytes) required for TPUEmbedding operation.

Attributes:

AttributeMLIR TypeDescription
config::mlir::StringAttrstring attribute

Results:

Result Description
common_config tensor of string values

tf.Exp (TF::ExpOp)

Computes exponential of x element-wise. \(y = e^x\).

This function computes the exponential of every element in the input tensor, i.e. exp(x) or e^(x), where x is the input tensor. e denotes Euler's number, approximately equal to 2.718281. The output is positive for any real input.

  x = tf.constant(2.0)
  tf.math.exp(x) ==> 7.389056

  x = tf.constant([2.0, 8.0])
  tf.math.exp(x) ==> array([7.389056, 2980.958], dtype=float32)

For complex numbers, the exponential value is calculated as follows:

  e^(x+iy) = e^x * e^iy = e^x * (cos y + i sin y)

Let's consider complex number 1+1j as an example. e^1 * (cos 1 + i sin 1) = 2.7182818284590 * (0.54030230586+0.8414709848j)

  x = tf.constant(1 + 1j)
  tf.math.exp(x) ==> 1.4686939399158851+2.2873552871788423j

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.ExpandDims (TF::ExpandDimsOp)

Inserts a dimension of 1 into a tensor's shape.

Given a tensor input, this operation inserts a dimension of 1 at the dimension index axis of input's shape. The dimension index axis starts at zero; if you specify a negative number for axis it is counted backward from the end.

This operation is useful if you want to add a batch dimension to a single element. For example, if you have a single image of shape [height, width, channels], you can make it a batch of 1 image with expand_dims(image, 0), which will make the shape [1, height, width, channels].

Other examples:

    # 't' is a tensor of shape [2]
    shape(expand_dims(t, 0)) ==> [1, 2]
    shape(expand_dims(t, 1)) ==> [2, 1]
    shape(expand_dims(t, -1)) ==> [2, 1]

    # 't2' is a tensor of shape [2, 3, 5]
    shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]
    shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
    shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]

This operation requires that:

-1-input.dims() <= dim <= input.dims()

This operation is related to squeeze(), which removes dimensions of size 1.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tdim::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
dim tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.Expm1 (TF::Expm1Op)

Computes exp(x) - 1 element-wise.

That is, it computes exp(x) - 1 or e^(x) - 1, where x is the input tensor. e denotes Euler's number, approximately equal to 2.718281.

  x = tf.constant(2.0)
  tf.math.expm1(x) ==> 6.389056

  x = tf.constant([2.0, 8.0])
  tf.math.expm1(x) ==> array([6.389056, 2979.958], dtype=float32)

  x = tf.constant(1 + 1j)
  tf.math.expm1(x) ==> (0.46869393991588515+2.2873552871788423j)
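The reason to prefer this op over composing Exp and subtraction is numerical: for x near 0, computing exp(x) - 1 suffers catastrophic cancellation, while expm1 stays accurate. A NumPy sketch:

```python
import numpy as np

x = 1e-10
naive = np.exp(x) - 1.0   # cancellation: most significant digits are lost
accurate = np.expm1(x)    # remains accurate for x near 0
```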

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.ExtractImagePatches (TF::ExtractImagePatchesOp)

Extract patches from images and put them in the "depth" output dimension.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksizes::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
rates::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
T::mlir::Attributederived attribute

Operands:

Operand Description
images tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
patches tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.FakeParam (TF::FakeParamOp)

This op is used as a placeholder in If branch functions. It doesn't provide a valid output when run, so must either be removed (e.g. replaced with a function input) or guaranteed not to be used (e.g. if mirroring an intermediate output needed for the gradient computation of the other branch).

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
shape::mlir::AttributeTensorFlow shape attribute
dtype::mlir::Attributederived attribute

Results:

Result Description
output tensor of tf.dtype values

tf.FakeQuantWithMinMaxArgs (TF::FakeQuantWithMinMaxArgsOp)

Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same shape and type.

Quantization is called fake since the output is still in floating point. The API converts inputs into values within the range [min; max] and returns them as output.

Attributes

  • [min; max] define the clamping range for the inputs data.
  • inputs values are quantized into the quantization range ( [0; 2^num_bits - 1] when narrow_range is false and [1; 2^num_bits - 1] when it is true) and then de-quantized and output as floats in [min; max] interval.
  • num_bits is the bitwidth of the quantization; between 2 and 16, inclusive.

Before quantization, min and max values are adjusted with the following logic. It is suggested to have min <= 0 <= max. If 0 is not in the range of values, the behavior can be unexpected:

  • If 0 < min < max: min_adj = 0 and max_adj = max - min.
  • If min < max < 0: min_adj = min - max and max_adj = 0.
  • If min <= 0 <= max: scale = (max - min) / (2^num_bits - 1), min_adj = scale * round(min / scale) and max_adj = max + min_adj - min.
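The adjustment rules above can be sketched in Python (adjust_range is an illustrative name, not part of the op's API; note Python's round uses banker's rounding, so exact halves may round differently than the kernel):

```python
def adjust_range(mn, mx, num_bits):
    # Nudge [mn, mx] so that 0 is exactly representable, following the
    # three cases listed above.
    if 0 < mn < mx:            # entirely positive range
        return 0.0, mx - mn
    if mn < mx < 0:            # entirely negative range
        return mn - mx, 0.0
    scale = (mx - mn) / (2 ** num_bits - 1)
    mn_adj = scale * round(mn / scale)
    return mn_adj, mx + mn_adj - mn
```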

Examples


    inp = tf.constant([10.03, -10.23, 3])
    out = tf.quantization.fake_quant_with_min_max_args(inp, min=-5, max=5,
                                                       num_bits=16)
    print(out)

    #  Output:
    #  tf.Tensor([ 4.9999237 -5.0000763  3.0000763], shape=(3,), dtype=float32)

Raises:

  • InvalidArgumentError:
    • If num_bits is outside of the range [2, 16].
    • If min >= max.
  • ValueError: If inputs are of any other type than float32.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
min::mlir::FloatAttr32-bit float attribute
max::mlir::FloatAttr32-bit float attribute
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
inputs tensor of 32-bit float values

Results:

Result Description
outputs tensor of 32-bit float values

tf.FakeQuantWithMinMaxArgsGradient (TF::FakeQuantWithMinMaxArgsGradientOp)

Compute gradients for a FakeQuantWithMinMaxArgs operation.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
min::mlir::FloatAttr32-bit float attribute
max::mlir::FloatAttr32-bit float attribute
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
gradients tensor of 32-bit float values
inputs tensor of 32-bit float values

Results:

Result Description
backprops tensor of 32-bit float values

tf.FakeQuantWithMinMaxVars (TF::FakeQuantWithMinMaxVarsOp)

Fake-quantize the 'inputs' tensor of type float via global float scalars

Fake-quantize the inputs tensor of type float via global float scalars min and max to outputs tensor of same shape as inputs.

Attributes

  • [min; max] define the clamping range for the inputs data.
  • inputs values are quantized into the quantization range ( [0; 2^num_bits - 1] when narrow_range is false and [1; 2^num_bits - 1] when it is true) and then de-quantized and output as floats in [min; max] interval.
  • num_bits is the bitwidth of the quantization; between 2 and 16, inclusive.

Before quantization, min and max values are adjusted with the following logic. It is suggested to have min <= 0 <= max. If 0 is not in the range of values, the behavior can be unexpected:

  • If 0 < min < max: min_adj = 0 and max_adj = max - min.
  • If min < max < 0: min_adj = min - max and max_adj = 0.
  • If min <= 0 <= max: scale = (max - min) / (2^num_bits - 1), min_adj = scale * round(min / scale) and max_adj = max + min_adj - min.

This operation has a gradient and thus allows for training min and max values.
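
The adjustment ("nudging") rules above can be sketched in NumPy. This is an illustrative reimplementation of the documented formulas, not TensorFlow's kernel; the helper names `nudge_range` and `fake_quant` are invented here, and the exact rounding mode may differ from the real implementation:

```python
import numpy as np

def nudge_range(min_val, max_val, num_bits):
    # The three documented cases; zero becomes exactly representable.
    if 0 < min_val < max_val:         # whole range positive
        return 0.0, max_val - min_val
    if min_val < max_val < 0:         # whole range negative
        return min_val - max_val, 0.0
    scale = (max_val - min_val) / (2 ** num_bits - 1)
    min_adj = scale * round(min_val / scale)
    return min_adj, max_val + min_adj - min_val

def fake_quant(x, min_val, max_val, num_bits=8):
    # Clamp to the adjusted range, quantize to integer steps, de-quantize.
    min_adj, max_adj = nudge_range(min_val, max_val, num_bits)
    scale = (max_adj - min_adj) / (2 ** num_bits - 1)
    x = np.clip(x, min_adj, max_adj)
    return np.round((x - min_adj) / scale) * scale + min_adj
```

After nudging, 0.0 falls exactly on a quantization step, which is the point of the adjustment.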

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
inputs tensor of 32-bit float values
min tensor of 32-bit float values
max tensor of 32-bit float values

Results:

Result Description
outputs tensor of 32-bit float values

tf.FakeQuantWithMinMaxVarsGradient (TF::FakeQuantWithMinMaxVarsGradientOp)

Compute gradients for a FakeQuantWithMinMaxVars operation.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
gradients tensor of 32-bit float values
inputs tensor of 32-bit float values
min tensor of 32-bit float values
max tensor of 32-bit float values

Results:

Result Description
backprops_wrt_input tensor of 32-bit float values
backprop_wrt_min tensor of 32-bit float values
backprop_wrt_max tensor of 32-bit float values

tf.FakeQuantWithMinMaxVarsPerChannel (TF::FakeQuantWithMinMaxVarsPerChannelOp)

Fake-quantize the 'inputs' tensor of type float via per-channel floats

Fake-quantize the inputs tensor of type float, with one of the shapes [d], [b, d], or [b, h, w, d], via per-channel floats min and max of shape [d], producing an outputs tensor of the same shape as inputs.

Attributes

  • [min; max] define the clamping range for the inputs data.
  • inputs values are quantized into the quantization range ( [0; 2^num_bits - 1] when narrow_range is false and [1; 2^num_bits - 1] when it is true) and then de-quantized and output as floats in [min; max] interval.
  • num_bits is the bitwidth of the quantization; between 2 and 16, inclusive.

Before quantization, min and max values are adjusted with the following logic. It is suggested to have min <= 0 <= max. If 0 is not in the range of values, the behavior can be unexpected:

  • If 0 < min < max: min_adj = 0 and max_adj = max - min.
  • If min < max < 0: min_adj = min - max and max_adj = 0.
  • If min <= 0 <= max: scale = (max - min) / (2^num_bits - 1), min_adj = scale * round(min / scale) and max_adj = max + min_adj - min.

This operation has a gradient and thus allows for training min and max values.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
inputs tensor of 32-bit float values
min tensor of 32-bit float values
max tensor of 32-bit float values

Results:

Result Description
outputs tensor of 32-bit float values

tf.FakeQuantWithMinMaxVarsPerChannelGradient (TF::FakeQuantWithMinMaxVarsPerChannelGradientOp)

Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_bits::mlir::IntegerAttr64-bit signless integer attribute
narrow_range::mlir::BoolAttrbool attribute

Operands:

Operand Description
gradients tensor of 32-bit float values
inputs tensor of 32-bit float values
min tensor of 32-bit float values
max tensor of 32-bit float values

Results:

Result Description
backprops_wrt_input tensor of 32-bit float values
backprop_wrt_min tensor of 32-bit float values
backprop_wrt_max tensor of 32-bit float values

tf.FFT (TF::FFTOp)

Fast Fourier transform.

Computes the 1-dimensional discrete Fourier transform over the inner-most dimension of input.
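
The "inner-most dimension" semantics match NumPy's default FFT axis, which gives a quick way to illustrate them (a sketch, not the TF kernel):

```python
import numpy as np

# tf.FFT transforms the innermost dimension only; np.fft.fft with its
# default axis=-1 has the same behavior and preserves the shape.
x = (np.arange(8) % 3).astype(np.complex64).reshape(2, 4)
y = np.fft.fft(x, axis=-1)   # shape stays (2, 4)
```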

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values

tf.FFT2D (TF::FFT2DOp)

2D fast Fourier transform.

Computes the 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values

tf.FFT3D (TF::FFT3DOp)

3D fast Fourier transform.

Computes the 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values

tf.Fill (TF::FillOp)

Creates a tensor filled with a scalar value.

This operation creates a tensor of shape dims and fills it with value.

For example:

# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9]
                     [9, 9, 9]]

tf.fill differs from tf.constant in a few ways:

  • tf.fill only supports scalar contents, whereas tf.constant supports Tensor values.
  • tf.fill creates an Op in the computation graph that constructs the actual Tensor value at runtime. This is in contrast to tf.constant which embeds the entire Tensor into the graph with a Const node.
  • Because tf.fill evaluates at graph runtime, it supports dynamic shapes based on other runtime Tensors, unlike tf.constant.
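
The runtime semantics can be mirrored in NumPy, ignoring the graph-construction differences listed above (illustrative only):

```python
import numpy as np

# tf.Fill(dims, value) builds a tensor of shape `dims` where every
# element is the scalar `value`; np.full has the same contract.
dims = [2, 3]
value = 9
output = np.full(dims, value)
# [[9 9 9]
#  [9 9 9]]
```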

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
index_type::mlir::Attributederived attribute

Operands:

Operand Description
dims tensor of 32/64-bit signed integer values
value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.FinalizeDataset (TF::FinalizeDatasetOp)

Creates a dataset by applying tf.data.Options to input_dataset.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
has_captured_ref::mlir::BoolAttrbool attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Operands:

Operand Description
input_dataset tensor of variant values

Results:

Result Description
handle tensor of variant values

tf.FinalizeTPUEmbedding (TF::FinalizeTPUEmbeddingOp)

An op that finalizes the TPUEmbedding configuration.

Operands:

Operand Description
common_config tensor of string values
memory_config tensor of string values

tf.FlatMapDataset (TF::FlatMapDatasetOp)

Creates a dataset that applies f to the outputs of input_dataset.

Unlike MapDataset, the f in FlatMapDataset is expected to return a Dataset variant, and FlatMapDataset will flatten successive results into a single Dataset.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute
Targuments::mlir::Attributederived attribute

Operands:

Operand Description
input_dataset tensor of variant values
other_arguments variadic of tensor of tf.dtype values

Results:

Result Description
handle tensor of variant values

tf.Floor (TF::FloorOp)

Returns element-wise largest integer not greater than x.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.FloorDiv (TF::FloorDivOp)

Returns x // y element-wise.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.FloorMod (TF::FloorModOp)

Returns element-wise remainder of division.

This follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + floormod(x, y) = x, regardless of the signs of x and y.
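
Since Python's % operator already uses flooring semantics, the identity above can be checked directly (a sketch of the scalar case):

```python
import math

def floormod(x, y):
    # Remainder consistent with a flooring divide, as documented.
    return x - math.floor(x / y) * y

# floor(x / y) * y + floormod(x, y) == x for every sign combination,
# and Python's % agrees with this definition.
print(floormod(-7, 3))   # 2, whereas a truncating remainder gives -1
```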

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of integer or floating-point values

tf.FlushSummaryWriter (TF::FlushSummaryWriterOp)

Flushes the writer's unwritten events.

writer: A handle to the summary writer resource.

Operands:

Operand Description
writer tensor of resource values

tf.FusedBatchNorm (TF::FusedBatchNormOp)

Batch normalization.

Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". The size of the 1D Tensors matches dimension C of the 4D Tensors.
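
As a reference for the math (not the fused kernel), training-mode batch normalization over an NHWC input can be sketched in NumPy. `fused_batch_norm` is a hypothetical helper; the reserve-space outputs and the running-mean update via exponential_avg_factor are omitted:

```python
import numpy as np

def fused_batch_norm(x, scale, offset, epsilon=1e-3):
    # Per-channel statistics: reduce over N, H, W, keeping C.
    batch_mean = x.mean(axis=(0, 1, 2))
    batch_var = x.var(axis=(0, 1, 2))
    y = scale * (x - batch_mean) / np.sqrt(batch_var + epsilon) + offset
    return y, batch_mean, batch_var
```

With scale = 1 and offset = 0, each output channel is normalized to zero mean and (up to epsilon) unit variance.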

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
exponential_avg_factor::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of 32-bit float values
scale tensor of 32-bit float values
offset tensor of 32-bit float values
mean tensor of 32-bit float values
variance tensor of 32-bit float values

Results:

Result Description
y tensor of 32-bit float values
batch_mean tensor of 32-bit float values
batch_variance tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values

tf.FusedBatchNormGrad (TF::FusedBatchNormGradOp)

Gradient for batch normalization.

Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". The size of the 1D Tensors matches dimension C of the 4D Tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
y_backprop tensor of 32-bit float values
x tensor of 32-bit float values
scale tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values

Results:

Result Description
x_backprop tensor of 32-bit float values
scale_backprop tensor of 32-bit float values
offset_backprop tensor of 32-bit float values
reserve_space_3 tensor of 32-bit float values
reserve_space_4 tensor of 32-bit float values

tf.FusedBatchNormGradV2 (TF::FusedBatchNormGradV2Op)

Gradient for batch normalization.

Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". The size of the 1D Tensors matches dimension C of the 4D Tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
U::mlir::Attributederived attribute

Operands:

Operand Description
y_backprop tensor of bfloat16 or 16-bit float or 32-bit float values
x tensor of bfloat16 or 16-bit float or 32-bit float values
scale tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values

Results:

Result Description
x_backprop tensor of bfloat16 or 16-bit float or 32-bit float values
scale_backprop tensor of 32-bit float values
offset_backprop tensor of 32-bit float values
reserve_space_3 tensor of 32-bit float values
reserve_space_4 tensor of 32-bit float values

tf.FusedBatchNormGradV3 (TF::FusedBatchNormGradV3Op)

Gradient for batch normalization.

Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". The size of the 1D Tensors matches dimension C of the 4D Tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttrstring attribute whose value is NHWC, or NCHW, or NDHWC, or NCDHW
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
U::mlir::Attributederived attribute

Operands:

Operand Description
y_backprop tensor of bfloat16 or 16-bit float or 32-bit float values
x tensor of bfloat16 or 16-bit float or 32-bit float values
scale tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values
reserve_space_3 tensor of 32-bit float values

Results:

Result Description
x_backprop tensor of bfloat16 or 16-bit float or 32-bit float values
scale_backprop tensor of 32-bit float values
offset_backprop tensor of 32-bit float values
reserve_space_4 tensor of 32-bit float values
reserve_space_5 tensor of 32-bit float values

tf.FusedBatchNormV2 (TF::FusedBatchNormV2Op)

Batch normalization.

Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". The size of the 1D Tensors matches dimension C of the 4D Tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_FoldOperandsTransposeInterface, TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
exponential_avg_factor::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
U::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 16-bit float or 32-bit float values
scale tensor of 32-bit float values
offset tensor of 32-bit float values
mean tensor of 32-bit float values
variance tensor of 32-bit float values

Results:

Result Description
y tensor of bfloat16 or 16-bit float or 32-bit float values
batch_mean tensor of 32-bit float values
batch_variance tensor of 32-bit float values
reserve_space_1 tensor of 32-bit float values
reserve_space_2 tensor of 32-bit float values

tf.FusedBatchNormV3 (TF::FusedBatchNormV3Op)

Batch normalization.

Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". The size of the 1D Tensors matches dimension C of the 4D Tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_FoldOperandsTransposeInterface, TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
epsilon::mlir::FloatAttr32-bit float attribute
exponential_avg_factor::mlir::FloatAttr32-bit float attribute
data_format::mlir::StringAttrstring attribute whose value is NHWC, or NCHW, or NDHWC, or NCDHW
is_training::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
U::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 16-bit float or 32-bit float values
scale tensor of bfloat16 or 32-bit float values
offset tensor of bfloat16 or 32-bit float values
mean tensor of bfloat16 or 32-bit float values
variance tensor of bfloat16 or 32-bit float values

Results:

Result Description
y tensor of bfloat16 or 16-bit float or 32-bit float values
batch_mean tensor of bfloat16 or 32-bit float values
batch_variance tensor of bfloat16 or 32-bit float values
reserve_space_1 tensor of bfloat16 or 32-bit float values
reserve_space_2 tensor of bfloat16 or 32-bit float values
reserve_space_3 tensor of bfloat16 or 32-bit float values

tf.Gather (TF::GatherOp)

Gather slices from params according to indices.

indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape indices.shape + params.shape[1:] where:

    # Scalar indices
    output[:, ..., :] = params[indices, :, ..., :]

    # Vector indices
    output[i, :, ..., :] = params[indices[i], :, ..., :]

    # Higher rank indices
    output[i, ..., j, :, ..., :] = params[indices[i, ..., j], :, ..., :]

If indices is a permutation and len(indices) == params.shape[0] then this operation will permute params accordingly.

validate_indices: DEPRECATED. If this operation is assigned to CPU, values in indices are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.
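
Because this op indexes only the first dimension of params, plain NumPy integer indexing reproduces it (illustrative sketch):

```python
import numpy as np

# output shape is indices.shape + params.shape[1:].
params = np.arange(12).reshape(4, 3)    # shape [4, 3]
indices = np.array([[3, 1], [2, 0]])    # shape [2, 2]
output = params[indices]                # shape [2, 2, 3]
```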

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
validate_indices::mlir::BoolAttrbool attribute
Tindices::mlir::Attributederived attribute
Tparams::mlir::Attributederived attribute

Operands:

Operand Description
params tensor of tf.dtype values
indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.GatherNd (TF::GatherNdOp)

Gather slices from params into a Tensor with shape specified by indices.

indices is a K-dimensional integer tensor, best thought of as a (K-1)-dimensional tensor of indices into params, where each element defines a slice of params:

output[i_0, ..., i_{K-2}] = params[indices[i_0, ..., i_{K-2}]]

Whereas in tf.gather indices defines slices into the axis dimension of params, in tf.gather_nd, indices defines slices into the first N dimensions of params, where N = indices.shape[-1].

The last dimension of indices can be at most the rank of params:

indices.shape[-1] <= params.rank

The last dimension of indices corresponds to elements (if indices.shape[-1] == params.rank) or slices (if indices.shape[-1] < params.rank) along dimension indices.shape[-1] of params. The output tensor has shape

indices.shape[:-1] + params.shape[indices.shape[-1]:]

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value.

Some examples follow.

Simple indexing into a matrix:

    indices = [[0, 0], [1, 1]]
    params = [['a', 'b'], ['c', 'd']]
    output = ['a', 'd']

Slice indexing into a matrix:

    indices = [[1], [0]]
    params = [['a', 'b'], ['c', 'd']]
    output = [['c', 'd'], ['a', 'b']]

Indexing into a 3-tensor:

    indices = [[1]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = [[['a1', 'b1'], ['c1', 'd1']]]


    indices = [[0, 1], [1, 0]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = [['c0', 'd0'], ['a1', 'b1']]


    indices = [[0, 0, 1], [1, 0, 1]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = ['b0', 'b1']

Batched indexing into a matrix:

    indices = [[[0, 0]], [[0, 1]]]
    params = [['a', 'b'], ['c', 'd']]
    output = [['a'], ['b']]

Batched slice indexing into a matrix:

    indices = [[[1]], [[0]]]
    params = [['a', 'b'], ['c', 'd']]
    output = [[['c', 'd']], [['a', 'b']]]

Batched indexing into a 3-tensor:

    indices = [[[1]], [[0]]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = [[[['a1', 'b1'], ['c1', 'd1']]],
              [[['a0', 'b0'], ['c0', 'd0']]]]

    indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = [[['c0', 'd0'], ['a1', 'b1']],
              [['a0', 'b0'], ['c1', 'd1']]]


    indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
    params = [[['a0', 'b0'], ['c0', 'd0']],
              [['a1', 'b1'], ['c1', 'd1']]]
    output = [['b0', 'b1'], ['d0', 'c1']]

See also tf.gather and tf.batch_gather.
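
The examples above can all be reproduced with NumPy advanced indexing; `gather_nd` below is an illustrative helper, not a TensorFlow API:

```python
import numpy as np

def gather_nd(params, indices):
    # The last axis of `indices` holds coordinates into the leading
    # dimensions of `params`; split it into a tuple for advanced indexing.
    indices = np.asarray(indices)
    return params[tuple(np.moveaxis(indices, -1, 0))]

params = np.array([['a', 'b'], ['c', 'd']])
print(gather_nd(params, [[0, 0], [1, 1]]))  # element indexing
print(gather_nd(params, [[1], [0]]))        # slice indexing
```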

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tindices::mlir::Attributederived attribute
Tparams::mlir::Attributederived attribute

Operands:

Operand Description
params tensor of tf.dtype values
indices tensor of 16-bit integer or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.GatherV2 (TF::GatherV2Op)

Gather slices from params, along the dimension given by axis, according to indices.

indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape params.shape[:axis] + indices.shape[batch_dims:] + params.shape[axis + 1:] where:

    # Scalar indices (output is rank(params) - 1).
    output[a_0, ..., a_n, b_0, ..., b_n] =
      params[a_0, ..., a_n, indices, b_0, ..., b_n]

    # Vector indices (output is rank(params)).
    output[a_0, ..., a_n, i, b_0, ..., b_n] =
      params[a_0, ..., a_n, indices[i], b_0, ..., b_n]

    # Higher rank indices (output is rank(params) + rank(indices) - 1).
    output[a_0, ..., a_n, i, ..., j, b_0, ..., b_n] =
      params[a_0, ..., a_n, indices[i, ..., j], b_0, ..., b_n]

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value.

See also tf.batch_gather and tf.gather_nd.
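
For the common batch_dims = 0 case, np.take along the chosen axis illustrates the same shape rule, params.shape[:axis] + indices.shape + params.shape[axis + 1:] (sketch only):

```python
import numpy as np

params = np.arange(24).reshape(2, 3, 4)
indices = np.array([2, 0])
output = np.take(params, indices, axis=1)   # shape [2, 2, 4]
```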

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
batch_dims::mlir::IntegerAttr64-bit signless integer attribute
Taxis::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
Tparams::mlir::Attributederived attribute

Operands:

Operand Description
params tensor of tf.dtype values
indices tensor of 16-bit integer or 32-bit integer or 64-bit integer values
axis tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.GeneratorDataset (TF::GeneratorDatasetOp)

Creates a dataset that invokes a function to generate elements.

Traits: AttrSizedOperandSegments

Interfaces: TF_GeneratorOpSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::GeneratorOp}

Attributes:

AttributeMLIR TypeDescription
init_func::mlir::SymbolRefAttrsymbol reference attribute
next_func::mlir::SymbolRefAttrsymbol reference attribute
finalize_func::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute
Tfinalize_func_args::mlir::Attributederived attribute
Tinit_func_args::mlir::Attributederived attribute
Tnext_func_args::mlir::Attributederived attribute

Operands:

Operand Description
init_func_other_args variadic of tensor of tf.dtype values
next_func_other_args variadic of tensor of tf.dtype values
finalize_func_other_args variadic of tensor of tf.dtype values

Results:

Result Description
handle tensor of variant values

tf.GeneratorDatasetRegion (TF::GeneratorDatasetRegionOp)

Regional version of GeneratorDataset

Creates a dataset that invokes its 'next' region to generate elements. Conceptually, within MLIR, we treat this op as if it fills a buffer with all the results right away, and those results are then passed (through the variant tensor result) to MakeIterator / IteratorGetNext. Note that the actual TF implementation differs: It generates the next element just in time, during IteratorGetNext.

  • init_extra_args: Additional arguments to pass to 'init'.
  • next_extra_args: Additional arguments to pass to 'next'. (Passed after the normal arguments, which come from the return values of 'init'.)
  • finalize_extra_args: Additional arguments to pass to 'finalize'. (Passed after the normal arguments, which come from the return values of 'init'.)

Traits: AttrSizedOperandSegments, SingleBlockImplicitTerminator<YieldOp>, SingleBlock

Interfaces: RegionBranchOpInterface, TF_GeneratorOpSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::GeneratorOp}

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute
Tinit_func_args::mlir::Attributederived attribute
Tnext_func_args::mlir::Attributederived attribute
Tfinalize_func_args::mlir::Attributederived attribute

Operands:

Operand Description
init_func_other_args variadic of tensor of tf.dtype values
next_func_other_args variadic of tensor of tf.dtype values
finalize_func_other_args variadic of tensor of tf.dtype values

Results:

Result Description
handle tensor of variant values

tf.GetMinibatchesInCsrWithPhysicalReplica (TF::GetMinibatchesInCsrWithPhysicalReplicaOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sample_count::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_replica::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
max_minibatches_per_sc::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
max_ids_per_chip_per_sample::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_vocab_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
feature_width::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_sc_per_chip::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_name::mlir::StringAttrstring attribute
mini_batch_in_csr::mlir::StringAttrstring attribute

Operands:

Operand Description
program_key tensor of string values
row_ids tensor of 32-bit integer values
col_ids tensor of 32-bit integer values
gains tensor of 32-bit float values
splits tensor of 64-bit integer values
id_counts tensor of 32-bit integer values

Results:

Result Description
row_pointers tensor of 32-bit integer values
sorted_sample_ids tensor of 32-bit integer values
sorted_token_ids tensor of 32-bit integer values
sorted_gains tensor of 32-bit float values
row_pointers_unpadded_size tensor of 32-bit integer values
ids_unpadded_size tensor of 32-bit integer values
num_minibatches_per_physical_sparse_core tensor of 32-bit integer values

tf.GetMinibatchSplitsWithPhysicalReplica (TF::GetMinibatchSplitsWithPhysicalReplicaOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sample_count::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_replica::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_vocab_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
feature_width::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_sc_per_chip::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_name::mlir::StringAttrstring attribute
mini_batch_splits::mlir::StringAttrstring attribute

Operands:

Operand Description
program_key tensor of string values
row_ids tensor of 32-bit integer values
col_ids tensor of 32-bit integer values
gains tensor of 32-bit float values

Results:

Result Description
sorted_row_ids tensor of 32-bit integer values
sorted_col_ids tensor of 32-bit integer values
sorted_gains tensor of 32-bit float values
splits tensor of 64-bit integer values
id_counts tensor of 32-bit integer values
max_ids tensor of 32-bit integer values
max_uniques tensor of 32-bit integer values

tf.GlobalIterId (TF::GlobalIterIdOp)

An op that gets the global step id.

This op returns the step id for each loop iteration.

Interfaces: GetResourceInstanceInterface, TF_GlobalIterIdEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::GlobalIterId}

Results:

Result Description
iter_id tensor of 64-bit integer values

tf.Greater (TF::GreaterOp)

Returns the truth value of (x > y) element-wise.

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of bool values

tf.GreaterEqual (TF::GreaterEqualOp)

Returns the truth value of (x >= y) element-wise.

Example:

x = tf.constant([5, 4, 6, 7])
y = tf.constant([5, 2, 5, 10])
tf.math.greater_equal(x, y) ==> [True, True, True, False]

x = tf.constant([5, 4, 6, 7])
y = tf.constant([5])
tf.math.greater_equal(x, y) ==> [True, False, True, True]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of bool values

tf.HashTable (TF::HashTableOp)

Creates a non-initialized hash table.

This op creates a hash table, specifying the type of its keys and values. Before using the table you will have to initialize it. After initialization the table will be immutable.

Attributes:

AttributeMLIR TypeDescription
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
use_node_name_sharing::mlir::BoolAttrbool attribute
key_dtype::mlir::TypeAttrany type attribute
value_dtype::mlir::TypeAttrany type attribute

Results:

Result Description
table_handle tensor of string values

tf.HashTableV2 (TF::HashTableV2Op)

Creates a non-initialized hash table.

This op creates a hash table, specifying the type of its keys and values. Before using the table you will have to initialize it. After initialization the table will be immutable.

Attributes:

AttributeMLIR TypeDescription
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
use_node_name_sharing::mlir::BoolAttrbool attribute
key_dtype::mlir::TypeAttrany type attribute
value_dtype::mlir::TypeAttrany type attribute

Results:

Result Description
table_handle tensor of resource values
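
The create/initialize/lookup lifecycle can be sketched with raw ops. This is a hedged sketch: most code reaches HashTableV2 through tf.lookup.StaticHashTable rather than calling the op directly, and the shared_name "demo_table" here is an arbitrary example name.

```python
import tensorflow as tf

# Create an uninitialized table handle (HashTableV2), then initialize it
# once with key/value tensors (InitializeTableV2). After that the table
# is immutable and can be queried with LookupTableFindV2.
handle = tf.raw_ops.HashTableV2(
    key_dtype=tf.string, value_dtype=tf.int64,
    container="", shared_name="demo_table", use_node_name_sharing=False)
tf.raw_ops.InitializeTableV2(
    table_handle=handle,
    keys=tf.constant(["a", "b"]),
    values=tf.constant([1, 2], dtype=tf.int64))
found = tf.raw_ops.LookupTableFindV2(
    table_handle=handle,
    keys=tf.constant(["b", "x"]),
    default_value=tf.constant(-1, dtype=tf.int64))
# found → [2, -1]; "x" is absent, so the default value is returned
```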

tf.HSVToRGB (TF::HSVToRGBOp)

Convert one or more images from HSV to RGB.

Outputs a tensor of the same shape as the images tensor, containing the RGB value of the pixels. The output is only well defined if the value in images are in [0,1].

See rgb_to_hsv for a description of the HSV encoding.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
images tensor of floating-point values

Results:

Result Description
output tensor of floating-point values
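
A minimal sketch of the conversion through the public tf.image API: hue 0 at full saturation and value maps to pure red.

```python
import tensorflow as tf

# hue=0.0, saturation=1.0, value=1.0 → pure red in RGB.
# Input values must lie in [0, 1] for the output to be well defined.
hsv = tf.constant([[[0.0, 1.0, 1.0]]])
rgb = tf.image.hsv_to_rgb(hsv)
# rgb → [[[1.0, 0.0, 0.0]]]
```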

tf.Identity (TF::IdentityOp)

Return a tensor with the same shape and contents as the input tensor or value.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold, TF_OperandsSameAsResultsTypeOrRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.IdentityN (TF::IdentityNOp)

Returns a list of tensors with the same shapes and contents as the input

tensors.

This op can be used to override the gradient for complicated functions. For example, suppose y = f(x) and we wish to apply a custom function g for backprop such that dx = g(dy). In Python,

with tf.get_default_graph().gradient_override_map(
    {'IdentityN': 'OverrideGradientWithG'}):
  y, _ = identity_n([f(x), x])

@tf.RegisterGradient('OverrideGradientWithG')
def ApplyG(op, dy, _):
  return [None, g(dy)]  # Do not backprop to f(x).

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.If (TF::IfOp)

output = cond ? then_branch(input) : else_branch(input)

  • cond: A Tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, being empty means False and being non-empty means True.
  • input: A list of input tensors.
  • then_branch: A function that takes 'inputs' and returns a list of tensors, whose types are the same as what else_branch returns.
  • else_branch: A function that takes 'inputs' and returns a list of tensors, whose types are the same as what then_branch returns.

Interfaces: SymbolUserOpInterface

Attributes:

AttributeMLIR TypeDescription
then_branch::mlir::FlatSymbolRefAttrflat symbol reference attribute
else_branch::mlir::FlatSymbolRefAttrflat symbol reference attribute
is_stateless::mlir::BoolAttrbool attribute
Tcond::mlir::Attributederived attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute
output_shapes::mlir::Attributederived attribute

Operands:

Operand Description
cond tensor of tf.dtype values
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values
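
From Python, this op is usually not built by hand: tf.cond inside a tf.function typically lowers to a tf.If (or StatelessIf) node in the traced graph. A hedged sketch (the function name branchy is illustrative only):

```python
import tensorflow as tf

@tf.function
def branchy(x):
    # tf.cond with a tensor predicate traces both branches and emits
    # a conditional op; each branch returns tensors of matching types.
    return tf.cond(x > 0, lambda: x * 2, lambda: x - 1)

result = branchy(tf.constant(3))
# result → 6 (then_branch taken)
```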

tf.IFFT (TF::IFFTOp)

Inverse fast Fourier transform.

Computes the inverse 1-dimensional discrete Fourier transform over the inner-most dimension of input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values
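
A round-trip through the public tf.signal API illustrates the inverse relationship with FFT:

```python
import tensorflow as tf

x = tf.constant([1 + 0j, 2 + 0j, 3 + 0j, 4 + 0j], dtype=tf.complex64)
spectrum = tf.signal.fft(x)       # forward 1-D DFT over the last dimension
roundtrip = tf.signal.ifft(spectrum)  # inverse recovers the input (up to rounding)
```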

tf.IFFT2D (TF::IFFT2DOp)

Inverse 2D fast Fourier transform.

Computes the inverse 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values

tf.IFFT3D (TF::IFFT3DOp)

Inverse 3D fast Fourier transform.

Computes the inverse 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex values

tf.IfRegion (TF::IfRegionOp)

output = cond ? then_branch output : else_branch output

  • cond: A Tensor. If the tensor is a scalar of non-boolean type, the scalar is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, being empty means False and being non-empty means True.
  • then_branch: A region that computes the outputs of the op if cond = true. It returns a list of tensors using tf.yield (as the terminator). The types of these returned tensors are the same as those of the else_branch.
  • else_branch: A region that computes the outputs of the op if cond = false. It returns a list of tensors using tf.yield (as the terminator). The types of these returned tensors are the same as those of the then_branch.

Traits: NoRegionArguments, SingleBlockImplicitTerminator<YieldOp>, SingleBlock

Interfaces: RegionBranchOpInterface

Attributes:

AttributeMLIR TypeDescription
is_stateless::mlir::BoolAttrbool attribute
_then_func_name::mlir::StringAttrstring attribute
_else_func_name::mlir::StringAttrstring attribute

Operands:

Operand Description
cond 0D tensor of 1-bit signless integer values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.Igamma (TF::IgammaOp)

Compute the lower regularized incomplete Gamma function P(a, x).

The lower regularized incomplete Gamma function is defined as:

\(P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\)

where

\(gamma(a, x) = \int_{0}^{x} t^{a-1} exp(-t) dt\)

is the lower incomplete Gamma function.

Note, above Q(a, x) (Igammac) is the upper regularized incomplete Gamma function.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of floating-point values
x tensor of floating-point values

Results:

Result Description
z tensor of floating-point values
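
For a = 1 the definition reduces to a closed form, P(1, x) = 1 - exp(-x), which makes a convenient sanity check; the complementary relation P + Q = 1 ties this op to Igammac below.

```python
import math
import tensorflow as tf

a = tf.constant(1.0)
x = tf.constant(1.0)
p = tf.math.igamma(a, x)   # P(1, 1) = 1 - exp(-1) ≈ 0.63212
q = tf.math.igammac(a, x)  # complementary function: P(a, x) + Q(a, x) = 1
```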

tf.Igammac (TF::IgammacOp)

Compute the upper regularized incomplete Gamma function Q(a, x).

The upper regularized incomplete Gamma function is defined as:

\(Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\)

where

\(Gamma(a, x) = \int_{x}^{\infty} t^{a-1} exp(-t) dt\)

is the upper incomplete Gamma function.

Note, above P(a, x) (Igamma) is the lower regularized incomplete Gamma function.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of floating-point values
x tensor of floating-point values

Results:

Result Description
z tensor of floating-point values

tf.IgammaGradA (TF::IgammaGradAOp)

Computes the gradient of igamma(a, x) wrt a.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of 32/64-bit float values
x tensor of 32/64-bit float values

Results:

Result Description
z tensor of 32/64-bit float values

tf.Imag (TF::ImagOp)

Returns the imaginary part of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of type float that is the imaginary part of each element in input. All elements in input must be complex numbers of the form \(a + bj\), where a is the real part and b is the imaginary part returned by this operation.

For example:

# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.imag(input) ==> [4.75, 5.75]

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 32/64-bit float values

tf.ImportEvent (TF::ImportEventOp)

Outputs a tf.Event protocol buffer.

When CreateSummaryDbWriter is being used, this op can be useful for importing data from event logs.

writer: A handle to a summary writer. event: A string containing a binary-encoded tf.Event proto.

Operands:

Operand Description
writer tensor of resource values
event tensor of string values

tf.InfeedDequeue (TF::InfeedDequeueOp)

A placeholder op for a value that will be fed into the computation.

Attributes:

AttributeMLIR TypeDescription
shape::mlir::AttributeTensorFlow shape attribute
dtype::mlir::Attributederived attribute

Results:

Result Description
output tensor of tf.dtype values

tf.InfeedDequeueTuple (TF::InfeedDequeueTupleOp)

Fetches multiple values from infeed as an XLA tuple.

Attributes:

AttributeMLIR TypeDescription
_XlaSharding::mlir::StringAttrstring attribute
layouts::mlir::ArrayAttrarray attribute
shapes::mlir::Attributederived attribute
dtypes::mlir::Attributederived attribute

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf.InfeedEnqueueTuple (TF::InfeedEnqueueTupleOp)

Feeds multiple Tensor values into the computation as an XLA tuple.

Attributes:

AttributeMLIR TypeDescription
dtypes::mlir::ArrayAttrtype array attribute with at least 1 elements
shapes::mlir::ArrayAttrtensorflow shape attribute array
layouts::mlir::ArrayAttr64-bit integer array attribute
device_ordinal::mlir::IntegerAttr64-bit signless integer attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

tf.InitializeTable (TF::InitializeTableOp)

Table initializer that takes two tensors for keys and values respectively.

Attributes:

AttributeMLIR TypeDescription
Tkey::mlir::Attributederived attribute
Tval::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of string values
keys tensor of tf.dtype values
values tensor of tf.dtype values

tf.InitializeTableFromDataset (TF::InitializeTableFromDatasetOp)

Operands:

Operand Description
table_handle tensor of resource values
dataset tensor of variant values

tf.InitializeTableFromTextFile (TF::InitializeTableFromTextFileOp)

Initializes a table from a text file.

It inserts one key-value pair into the table for each line of the file. The key and value are extracted from the whole line content, from elements of the line split on delimiter, or from the line number (starting from zero). Where to extract the key and value from a line is specified by key_index and value_index.

  • A value of -1 means use the line number (starting from zero); expects int64.
  • A value of -2 means use the whole line content; expects string.
  • A value >= 0 means use the index (starting at zero) of the line split on delimiter.

Attributes:

AttributeMLIR TypeDescription
key_index::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -2
value_index::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -2
vocab_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -1
delimiter::mlir::StringAttrstring attribute
offset::mlir::IntegerAttr64-bit signless integer attribute

Operands:

Operand Description
table_handle tensor of string values
filename tensor of string values

tf.InitializeTableFromTextFileV2 (TF::InitializeTableFromTextFileV2Op)

Initializes a table from a text file.

It inserts one key-value pair into the table for each line of the file. The key and value are extracted from the whole line content, from elements of the line split on delimiter, or from the line number (starting from zero). Where to extract the key and value from a line is specified by key_index and value_index.

  • A value of -1 means use the line number (starting from zero); expects int64.
  • A value of -2 means use the whole line content; expects string.
  • A value >= 0 means use the index (starting at zero) of the line split on delimiter.

Attributes:

AttributeMLIR TypeDescription
key_index::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -2
value_index::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -2
vocab_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is -1
delimiter::mlir::StringAttrstring attribute
offset::mlir::IntegerAttr64-bit signless integer attribute

Operands:

Operand Description
table_handle tensor of resource values
filename tensor of string values
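
The key_index/value_index semantics above can be exercised through the higher-level tf.lookup API, which emits InitializeTableFromTextFileV2 when the table is initialized. A hedged sketch (the vocab file contents and path are illustrative):

```python
import os
import tempfile
import tensorflow as tf

# Write a small vocab file: one token per line.
path = os.path.join(tempfile.mkdtemp(), "vocab.txt")
with open(path, "w") as f:
    f.write("apple\nbanana\ncherry\n")

init = tf.lookup.TextFileInitializer(
    path,
    key_dtype=tf.string,
    key_index=tf.lookup.TextFileIndex.WHOLE_LINE,     # -2: whole line as key
    value_dtype=tf.int64,
    value_index=tf.lookup.TextFileIndex.LINE_NUMBER)  # -1: line number as value
table = tf.lookup.StaticHashTable(init, default_value=-1)
found = table.lookup(tf.constant(["banana", "kiwi"]))
# found → [1, -1]; "kiwi" is not in the file, so the default is returned
```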

tf.InitializeTableV2 (TF::InitializeTableV2Op)

Table initializer that takes two tensors for keys and values respectively.

Attributes:

AttributeMLIR TypeDescription
Tkey::mlir::Attributederived attribute
Tval::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values
keys tensor of tf.dtype values
values tensor of tf.dtype values

tf.InplaceAdd (TF::InplaceAddOp)

Adds v into specified rows of x.

Computes y = x; y[i, :] += v; return y.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
i tensor of 32-bit integer values
v tensor of tf.dtype values

Results:

Result Description
y tensor of tf.dtype values
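
A minimal sketch via tf.raw_ops: row index 1 of a zero matrix receives the added values, all other rows are untouched.

```python
import tensorflow as tf

x = tf.zeros([3, 2])
i = tf.constant([1], dtype=tf.int32)   # rows to update
v = tf.constant([[5.0, 6.0]])          # values added to those rows
y = tf.raw_ops.InplaceAdd(x=x, i=i, v=v)
# y → [[0, 0], [5, 6], [0, 0]]
```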

tf.InplaceUpdate (TF::InplaceUpdateOp)

Updates specified rows 'i' with values 'v'.

Computes x[i, :] = v; return x.

Originally this function was mutative; for compilation, however, this operation creates and operates on a copy of x.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
i tensor of 32-bit integer values
v tensor of tf.dtype values

Results:

Result Description
y tensor of tf.dtype values

tf.InTopKV2 (TF::InTopKV2Op)

Says whether the targets are in the top K predictions.

This outputs a batch_size bool array, an entry out[i] is true if the prediction for the target class is among the top k predictions among all predictions for example i. Note that the behavior of InTopK differs from the TopK op in its handling of ties; if multiple classes have the same prediction value and straddle the top-k boundary, all of those classes are considered to be in the top k.

More formally, let

\(predictions_i\) be the predictions for all classes for example i, \(targets_i\) be the target class for example i, \(out_i\) be the output for example i,

\[out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)\]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
predictions tensor of 32-bit float values
targets tensor of 32/64-bit signed integer values
k tensor of 32/64-bit signed integer values

Results:

Result Description
precision tensor of bool values
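
A small sketch through tf.math.in_top_k: for the first example the target class 1 holds the largest prediction, for the second the target class 0 does not.

```python
import tensorflow as tf

predictions = tf.constant([[0.1, 0.9, 0.0],
                           [0.3, 0.3, 0.4]])
targets = tf.constant([1, 0])
hits = tf.math.in_top_k(targets, predictions, k=1)
# hits → [True, False]
```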

tf.Inv (TF::InvOp)

Computes the reciprocal of x element-wise.

I.e., \(y = 1 / x\).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

Results:

Result Description
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

tf.Invert (TF::InvertOp)

Invert (flip) each bit of supported types; for example, type uint8 value 01010101 becomes 10101010.

Flip each bit of supported types. For example, type int8 (decimal 2) binary 00000010 becomes (decimal -3) binary 11111101. This operation is performed on each element of the tensor argument x.

Example:

import tensorflow as tf
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import bitwise_ops

# flip 2 (00000010) to -3 (11111101)
tf.assert_equal(-3, bitwise_ops.invert(2))

dtype_list = [dtypes.int8, dtypes.int16, dtypes.int32, dtypes.int64,
              dtypes.uint8, dtypes.uint16, dtypes.uint32, dtypes.uint64]

inputs = [0, 5, 3, 14]
for dtype in dtype_list:
  # Because of issues with negative numbers, let's test this indirectly.
  # 1. invert(a) and a = 0
  # 2. invert(a) or a = invert(0)
  input_tensor = tf.constant([0, 5, 3, 14], dtype=dtype)
  not_a_and_a, not_a_or_a, not_0 = [bitwise_ops.bitwise_and(
                                      input_tensor, bitwise_ops.invert(input_tensor)),
                                    bitwise_ops.bitwise_or(
                                      input_tensor, bitwise_ops.invert(input_tensor)),
                                    bitwise_ops.invert(
                                      tf.constant(0, dtype=dtype))]

  expected = tf.constant([0, 0, 0, 0], dtype=tf.float32)
  tf.assert_equal(tf.cast(not_a_and_a, tf.float32), expected)

  expected = tf.cast([not_0] * 4, tf.float32)
  tf.assert_equal(tf.cast(not_a_or_a, tf.float32), expected)

  # For unsigned dtypes let's also check the result directly.
  if dtype.is_unsigned:
    inverted = bitwise_ops.invert(input_tensor)
    expected = tf.constant([dtype.max - x for x in inputs], dtype=tf.float32)
    tf.assert_equal(tf.cast(inverted, tf.float32), tf.cast(expected, tf.float32))

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Involution

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer values

Results:

Result Description
y tensor of integer values

tf.InvertPermutation (TF::InvertPermutationOp)

Computes the inverse permutation of a tensor.

This operation computes the inverse of an index permutation. It takes a 1-D integer tensor x, which represents the indices of a zero-based array, and swaps each value with its index position. In other words, for an output tensor y and an input tensor x, this operation computes the following:

y[x[i]] = i for i in [0, 1, ..., len(x) - 1]

The values must include 0. There can be no duplicate values or negative values.

For example:

# tensor `x` is [3, 4, 0, 2, 1]
invert_permutation(x) ==> [2, 4, 3, 0, 1]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of 32/64-bit signed integer values

Results:

Result Description
y tensor of 32/64-bit signed integer values

tf.IRFFT (TF::IRFFTOp)

Inverse real-valued fast Fourier transform.

Computes the inverse 1-dimensional discrete Fourier transform of a real-valued signal over the inner-most dimension of input.

The inner-most dimension of input is assumed to be the result of RFFT: the fft_length / 2 + 1 unique components of the DFT of a real-valued signal. If fft_length is not provided, it is computed from the size of the inner-most dimension of input (fft_length = 2 * (inner - 1)). If the FFT length used to compute input is odd, it should be provided since it cannot be inferred properly.

Along the axis IRFFT is computed on, if fft_length / 2 + 1 is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute
Treal::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values
fft_length tensor of 32-bit integer values

Results:

Result Description
output tensor of 32/64-bit float values
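
The fft_length inference described above can be seen in a round trip: a length-4 real signal yields 4/2 + 1 = 3 unique spectral components, from which irfft infers fft_length = 2 * (3 - 1) = 4.

```python
import tensorflow as tf

signal = tf.constant([1.0, 2.0, 3.0, 4.0])
spectrum = tf.signal.rfft(signal)      # 3 unique complex components
recovered = tf.signal.irfft(spectrum)  # fft_length inferred as 4
# recovered ≈ [1.0, 2.0, 3.0, 4.0]
```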

tf.IRFFT2D (TF::IRFFT2DOp)

Inverse 2D real-valued fast Fourier transform.

Computes the inverse 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of input.

The inner-most 2 dimensions of input are assumed to be the result of RFFT2D: The inner-most dimension contains the fft_length / 2 + 1 unique components of the DFT of a real-valued signal. If fft_length is not provided, it is computed from the size of the inner-most 2 dimensions of input. If the FFT length used to compute input is odd, it should be provided since it cannot be inferred properly.

Along each axis IRFFT2D is computed on, if fft_length (or fft_length / 2 + 1 for the inner-most dimension) is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute
Treal::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values
fft_length tensor of 32-bit integer values

Results:

Result Description
output tensor of 32/64-bit float values

tf.IRFFT3D (TF::IRFFT3DOp)

Inverse 3D real-valued fast Fourier transform.

Computes the inverse 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of input.

The inner-most 3 dimensions of input are assumed to be the result of RFFT3D: The inner-most dimension contains the fft_length / 2 + 1 unique components of the DFT of a real-valued signal. If fft_length is not provided, it is computed from the size of the inner-most 3 dimensions of input. If the FFT length used to compute input is odd, it should be provided since it cannot be inferred properly.

Along each axis IRFFT3D is computed on, if fft_length (or fft_length / 2 + 1 for the inner-most dimension) is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tcomplex::mlir::Attributederived attribute
Treal::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values
fft_length tensor of 32-bit integer values

Results:

Result Description
output tensor of 32/64-bit float values

tf.IsFinite (TF::IsFiniteOp)

Returns which elements of x are finite.

@compatibility(numpy) Equivalent to np.isfinite @end_compatibility

Example:

x = tf.constant([5.0, 4.8, 6.8, np.inf, np.nan])
tf.math.is_finite(x) ==> [True, True, True, False, False]

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of bool values

tf.IsInf (TF::IsInfOp)

Returns which elements of x are Inf.

@compatibility(numpy) Equivalent to np.isinf @end_compatibility

Example:

x = tf.constant([5.0, np.inf, 6.8, np.inf])
tf.math.is_inf(x) ==> [False, True, False, True]

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of bool values

tf.IsNan (TF::IsNanOp)

Returns which elements of x are NaN.

@compatibility(numpy) Equivalent to np.isnan @end_compatibility

Example:

x = tf.constant([5.0, np.nan, 6.8, np.nan, np.inf])
tf.math.is_nan(x) ==> [False, True, False, True, False]

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of bool values

tf.Iterator (TF::IteratorOp)

A container for an iterator resource.

Attributes:

AttributeMLIR TypeDescription
shared_name::mlir::StringAttrstring attribute
container::mlir::StringAttrstring attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.IteratorFromStringHandle (TF::IteratorFromStringHandleOp)

Converts the given string representing a handle to an iterator to a resource.

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute
output_shapes::mlir::ArrayAttrtensorflow shape attribute array

Operands:

Operand Description
string_handle tensor of string values

Results:

Result Description
resource_handle tensor of resource values

tf.IteratorFromStringHandleV2 (TF::IteratorFromStringHandleV2Op)

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute
output_shapes::mlir::ArrayAttrtensorflow shape attribute array

Operands:

Operand Description
string_handle tensor of string values

Results:

Result Description
resource_handle tensor of resource values

tf.IteratorGetNext (TF::IteratorGetNextOp)

Gets the next output from the given iterator.

Attributes:

AttributeMLIR TypeDescription
output_shapes::mlir::Attributederived attribute
output_types::mlir::Attributederived attribute

Operands:

Operand Description
iterator tensor of resource values

Results:

Result Description
components variadic of tensor of tf.dtype values
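
In eager Python this op is driven implicitly by iterating a tf.data dataset; each call to next() on the iterator issues an IteratorGetNext against the underlying iterator resource.

```python
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices([10, 20, 30])
it = iter(ds)       # creates the iterator resource
first = next(it)    # each next() performs an IteratorGetNext
second = next(it)
# first → 10, second → 20
```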

tf.IteratorGetNextAsOptional (TF::IteratorGetNextAsOptionalOp)

Gets the next output from the given iterator as an Optional variant.

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Operands:

Operand Description
iterator tensor of resource values

Results:

Result Description
optional tensor of variant values

tf.IteratorGetNextSync (TF::IteratorGetNextSyncOp)

Gets the next output from the given iterator.

This operation is a synchronous version of IteratorGetNext. It should only be used in situations where the iterator does not block the calling thread, or where the calling thread is not a member of the thread pool used to execute parallel operations (e.g. in eager mode).

Attributes:

AttributeMLIR TypeDescription
output_shapes::mlir::Attributederived attribute
output_types::mlir::Attributederived attribute

Operands:

Operand Description
iterator tensor of resource values

Results:

Result Description
components variadic of tensor of tf.dtype values

tf.IteratorToStringHandle (TF::IteratorToStringHandleOp)

Converts the given resource_handle representing an iterator to a string.

Operands:

Operand Description
resource_handle tensor of resource values

Results:

Result Description
string_handle tensor of string values

tf.IteratorV2 (TF::IteratorV2Op)

Attributes:

AttributeMLIR TypeDescription
shared_name::mlir::StringAttrstring attribute
container::mlir::StringAttrstring attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.KthOrderStatistic (TF::KthOrderStatisticOp)

Computes the Kth order statistic of a data set.

The current implementation uses a binary search requiring exactly 32 passes over the input data, so the running time is linear in the input size. The median-of-medians algorithm is probably faster, but is difficult to implement efficiently in XLA.

The implementation imposes a total ordering on floats that is consistent with the usual partial order: positive NaNs are greater than positive infinity; negative NaNs are less than negative infinity; NaNs with distinct payloads are treated as distinct; subnormal numbers are preserved (not flushed to zero); positive infinity is greater than all numbers; negative infinity is less than all numbers; positive zero is greater than negative zero.

There are fewer than k values greater than the Kth order statistic, and at least k values greater than or equal to it. The semantics are not the same as top_k_unique.
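A total order on floats with these properties can be obtained from the raw bit pattern of each value. The sketch below is an illustrative assumption (it is not the op's XLA implementation, and it assumes a 1-based k, i.e. k = 1 selects the largest value):

```python
import struct

def total_order_key(x):
    # Reinterpret the float32 bit pattern as an unsigned integer.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    # Flip all bits for negative values, and only the sign bit for
    # non-negative values, giving a key that increases monotonically
    # with the total order (NaN > inf, +0.0 > -0.0).
    return bits ^ 0xFFFFFFFF if bits & 0x80000000 else bits | 0x80000000

def kth_order_statistic(data, k):
    # k-th largest element under the total order (k is 1-based here).
    return sorted(data, key=total_order_key, reverse=True)[k - 1]

assert kth_order_statistic([1.0, 3.0, 2.0], 1) == 3.0
assert total_order_key(0.0) > total_order_key(-0.0)
assert total_order_key(float('nan')) > total_order_key(float('inf'))
```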

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
k::mlir::IntegerAttr64-bit signless integer attribute

Operands:

Operand Description
input tensor of 32-bit float values

Results:

Result Description
output tensor of 32-bit float values

tf.L2Loss (TF::L2LossOp)

L2 Loss.

Computes half the L2 norm of a tensor without the sqrt:

output = sum(t ** 2) / 2
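The formula above can be sketched directly on a plain Python list (the op itself operates on tensors; the helper name is illustrative):

```python
# Minimal sketch of the L2Loss formula: output = sum(t ** 2) / 2.
def l2_loss(t):
    return sum(x * x for x in t) / 2

assert l2_loss([3.0, 4.0]) == 12.5   # (9 + 16) / 2
```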

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.LeakyRelu (TF::LeakyReluOp)

Computes rectified linear: max(features, features * alpha).
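The element-wise rule can be sketched in plain Python (the helper name and default alpha are illustrative):

```python
# Element-wise sketch of LeakyRelu: max(x, alpha * x).
# For alpha in [0, 1] this keeps positive inputs unchanged and
# scales negative inputs by alpha.
def leaky_relu(features, alpha=0.2):
    return [max(x, alpha * x) for x in features]

assert leaky_relu([-1.0, 0.0, 2.0]) == [-0.2, 0.0, 2.0]
```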

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
alpha::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
features tensor of floating-point values

Results:

Result Description
activations tensor of floating-point values

tf.LeakyReluGrad (TF::LeakyReluGradOp)

Computes rectified linear gradients for a LeakyRelu operation.
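The gradient of max(x, alpha * x) passes the incoming gradient through where the forward input was positive and scales it by alpha elsewhere. A pure-Python sketch of that rule (the helper name is illustrative, and treating exactly zero inputs as the alpha branch is an assumption about the tie-breaking convention):

```python
def leaky_relu_grad(gradients, features, alpha=0.2):
    # Pass the incoming gradient through where the forward input was
    # positive; otherwise scale it by alpha (the slope of the negative
    # side of max(x, alpha * x)).
    return [g if f > 0 else alpha * g for g, f in zip(gradients, features)]

assert leaky_relu_grad([1.0, 1.0, 1.0], [-1.0, 0.0, 2.0]) == [0.2, 0.2, 1.0]
```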

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
alpha::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
gradients tensor of floating-point values
features tensor of floating-point values

Results:

Result Description
backprops tensor of floating-point values

tf.LeftShift (TF::LeftShiftOp)

Elementwise computes the bitwise left-shift of x and y.

If y is negative, or greater than or equal to the width of x in bits, the result is implementation-defined.

Example:

import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
import numpy as np
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]

for dtype in dtype_list:
  lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)

  left_shift_result = bitwise_ops.left_shift(lhs, rhs)

  print(left_shift_result)

# This will print:
# tf.Tensor([ -32   -5 -128    0], shape=(4,), dtype=int8)
# tf.Tensor([   -32     -5   -384 -28672], shape=(4,), dtype=int16)
# tf.Tensor([   -32     -5   -384 -28672], shape=(4,), dtype=int32)
# tf.Tensor([   -32     -5   -384 -28672], shape=(4,), dtype=int64)

lhs = np.array([-2, 64, 101, 32], dtype=np.int8)
rhs = np.array([-1, -5, -3, -14], dtype=np.int8)
bitwise_ops.left_shift(lhs, rhs)
# <tf.Tensor: shape=(4,), dtype=int8, numpy=array([ -2,  64, 101,  32], dtype=int8)>

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer values
y tensor of integer values

Results:

Result Description
z tensor of integer values

tf.LegacyCall (TF::LegacyCallOp)

Returns f(inputs), where f is a function.

The LegacyCall operation represents a direct call to a function that is within the same symbol scope as the call, and is mapped to a GraphDef node with the function name as the op name. Unlike a PartitionedCall, which represents asynchronously executing a function across multiple devices, a LegacyCall ignores the device specification for ops in the attached function and instead executes it on the device assigned to this op.

Traits: AlwaysSpeculatableImplTrait

Interfaces: CallOpInterface, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), SymbolUserOpInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::FlatSymbolRefAttrflat symbol reference attribute
_disable_call_shape_inference::mlir::BoolAttrbool attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.Less (TF::LessOp)

Returns the truth value of (x < y) element-wise.

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less(x, y) ==> [False, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 7])
tf.math.less(x, y) ==> [False, True, True]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of bool values

tf.LessEqual (TF::LessEqualOp)

Returns the truth value of (x <= y) element-wise.

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less_equal(x, y) ==> [True, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 6])
tf.math.less_equal(x, y) ==> [True, True, True]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of bool values

tf.Lgamma (TF::LgammaOp)

Computes the log of the absolute value of Gamma(x) element-wise.

For positive numbers, this function computes log((input - 1)!) for every element in the tensor. For example, lgamma(5) = log((5-1)!) = log(4!) = log(24) = 3.1780539.

Example:

x = tf.constant([0, 0.5, 1, 4.5, -4, -5.6])
tf.math.lgamma(x) ==> [inf, 0.5723649, 0., 2.4537368, inf, -4.6477685]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.LinSpace (TF::LinSpaceOp)

Generates values in an interval.

A sequence of num evenly-spaced values is generated, beginning at start. If num > 1, the values in the sequence increase by (stop - start) / (num - 1), so that the last one is exactly stop.

For example:

tf.linspace(10.0, 12.0, 3, name="linspace") => [ 10.0  11.0  12.0]
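The stepping rule above can be sketched in plain Python (the helper name is illustrative):

```python
# Sketch of the LinSpace rule: num evenly spaced values starting at
# `start`, stepping by (stop - start) / (num - 1) so that the last
# value is exactly `stop`.
def lin_space(start, stop, num):
    if num == 1:
        return [start]
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]

assert lin_space(10.0, 12.0, 3) == [10.0, 11.0, 12.0]
```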

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
start tensor of floating-point values
stop tensor of floating-point values
num tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.ListDiff (TF::ListDiffOp)

Computes the difference between two lists of numbers or strings.

Given a list x and a list y, this operation returns a list out that represents all values that are in x but not in y. The returned list out is sorted in the same order that the numbers appear in x (duplicates are preserved). This operation also returns a list idx that represents the position of each out element in x. In other words:

out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]

For example, given this input:

x = [1, 2, 3, 4, 5, 6]
y = [1, 3, 5]

This operation would return:

out ==> [2, 4, 6]
idx ==> [1, 3, 5]
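The semantics above can be sketched in plain Python (the helper name is illustrative):

```python
# Pure-Python sketch of ListDiff: keep every element of x that is not
# in y, preserving order and duplicates, and also return the position
# of each kept element in x (so out[i] == x[idx[i]]).
def list_diff(x, y):
    exclude = set(y)
    out, idx = [], []
    for i, v in enumerate(x):
        if v not in exclude:
            out.append(v)
            idx.append(i)
    return out, idx

assert list_diff([1, 2, 3, 4, 5, 6], [1, 3, 5]) == ([2, 4, 6], [1, 3, 5])
```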

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
out_idx::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
y tensor of tf.dtype values

Results:

Result Description
out tensor of tf.dtype values
idx tensor of 32/64-bit signed integer values

tf.LoadTPUEmbeddingAdadeltaParameters (TF::LoadTPUEmbeddingAdadeltaParametersOp)

Load Adadelta embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
updates tensor of 32-bit float values

tf.LoadTPUEmbeddingAdadeltaParametersGradAccumDebug (TF::LoadTPUEmbeddingAdadeltaParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
updates tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingAdagradParameters (TF::LoadTPUEmbeddingAdagradParametersOp)

Load Adagrad embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingAdagradParametersGradAccumDebug (TF::LoadTPUEmbeddingAdagradParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingADAMParameters (TF::LoadTPUEmbeddingADAMParametersOp)

Load ADAM embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
momenta tensor of 32-bit float values
velocities tensor of 32-bit float values

tf.LoadTPUEmbeddingADAMParametersGradAccumDebug (TF::LoadTPUEmbeddingADAMParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
momenta tensor of 32-bit float values
velocities tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingCenteredRMSPropParameters (TF::LoadTPUEmbeddingCenteredRMSPropParametersOp)

Load centered RMSProp embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
ms tensor of 32-bit float values
mom tensor of 32-bit float values
mg tensor of 32-bit float values

tf.LoadTPUEmbeddingFTRLParameters (TF::LoadTPUEmbeddingFTRLParametersOp)

Load FTRL embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
linears tensor of 32-bit float values

tf.LoadTPUEmbeddingFTRLParametersGradAccumDebug (TF::LoadTPUEmbeddingFTRLParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
linears tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingMDLAdagradLightParameters (TF::LoadTPUEmbeddingMDLAdagradLightParametersOp)

Load MDL Adagrad Light embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
weights tensor of 32-bit float values
benefits tensor of 32-bit float values

tf.LoadTPUEmbeddingMomentumParameters (TF::LoadTPUEmbeddingMomentumParametersOp)

Load Momentum embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
momenta tensor of 32-bit float values

tf.LoadTPUEmbeddingMomentumParametersGradAccumDebug (TF::LoadTPUEmbeddingMomentumParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
momenta tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingProximalAdagradParameters (TF::LoadTPUEmbeddingProximalAdagradParametersOp)

Load proximal Adagrad embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingProximalAdagradParametersGradAccumDebug (TF::LoadTPUEmbeddingProximalAdagradParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingProximalYogiParameters (TF::LoadTPUEmbeddingProximalYogiParametersOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
v tensor of 32-bit float values
m tensor of 32-bit float values

tf.LoadTPUEmbeddingProximalYogiParametersGradAccumDebug (TF::LoadTPUEmbeddingProximalYogiParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
v tensor of 32-bit float values
m tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingRMSPropParameters (TF::LoadTPUEmbeddingRMSPropParametersOp)

Load RMSProp embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
ms tensor of 32-bit float values
mom tensor of 32-bit float values

tf.LoadTPUEmbeddingRMSPropParametersGradAccumDebug (TF::LoadTPUEmbeddingRMSPropParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
ms tensor of 32-bit float values
mom tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.LoadTPUEmbeddingStochasticGradientDescentParameters (TF::LoadTPUEmbeddingStochasticGradientDescentParametersOp)

Load SGD embedding parameters.

An op that loads optimization parameters into HBM for embedding. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to install parameters that are loaded from a checkpoint before a training loop is executed.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values

tf.LoadTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug (TF::LoadTPUEmbeddingStochasticGradientDescentParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Operands:

Operand Description
parameters tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.Log (TF::LogOp)

Computes natural logarithm of x element-wise.

I.e., \(y = \log_e x\).

Example:

x = tf.constant([0, 0.5, 1, 5])
tf.math.log(x) ==> [-inf, -0.6931472,  0. ,  1.609438]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Log1p (TF::Log1pOp)

Computes natural logarithm of (1 + x) element-wise.

I.e., \(y = \log_e (1 + x)\).

Example:

x = tf.constant([0, 0.5, 1, 5])
tf.math.log1p(x) ==> [0., 0.4054651, 0.6931472, 1.7917595]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_CwiseUnary

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.LogicalAnd (TF::LogicalAndOp)

Returns the truth value of x AND y element-wise.

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
x tensor of bool values
y tensor of bool values

Results:

Result Description
z tensor of bool values
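Because the op carries the ResultsBroadcastableShape trait, operands of different shapes are broadcast together before the element-wise AND. A NumPy sketch of the same behavior (np.logical_and follows the same broadcasting rules; this is an illustration, not the op's kernel):

```python
import numpy as np

# Element-wise AND with broadcasting, as in tf.LogicalAnd.
x = np.array([True, False, True])    # shape (3,)
y = np.array([[True], [False]])      # shape (2, 1), broadcasts to (2, 3)
z = np.logical_and(x, y)
# z == [[True, False, True], [False, False, False]]
```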

tf.LogicalNot (TF::LogicalNotOp)

Returns the truth value of NOT x element-wise.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Involution

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
x tensor of bool values

Results:

Result Description
y tensor of bool values

tf.LogicalOr (TF::LogicalOrOp)

Returns the truth value of x OR y element-wise.

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
x tensor of bool values
y tensor of bool values

Results:

Result Description
z tensor of bool values

tf.LogSoftmax (TF::LogSoftmaxOp)

Computes log softmax activations.

For each batch i and class j we have

logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
logits tensor of floating-point values

Results:

Result Description
logsoftmax tensor of floating-point values
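The formula above translates directly into NumPy; the max-subtraction below is a standard numerical-stability step, not part of the op's definition:

```python
import numpy as np

def log_softmax(logits):
    # Subtracting the row max does not change the result, since
    # log(sum(exp(x - c))) = log(sum(exp(x))) - c; it only avoids overflow.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

logits = np.array([[1.0, 2.0, 3.0]])
out = log_softmax(logits)
# exp(out) sums to 1 along each row, as expected for log-probabilities.
```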

tf.LookupTableExportV2 (TF::LookupTableExportV2Op)

Outputs all keys and values in the table.

Attributes:

AttributeMLIR TypeDescription
Tkeys::mlir::Attributederived attribute
Tvalues::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values

Results:

Result Description
keys tensor of tf.dtype values
values tensor of tf.dtype values

tf.LookupTableFind (TF::LookupTableFindOp)

Looks up keys in a table, outputs the corresponding values.

The tensor keys must be of the same type as the keys of the table. The output values tensor is of the type of the table values.

The scalar default_value is the value output for keys not present in the table. It must also be of the same type as the table values.

Attributes:

AttributeMLIR TypeDescription
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of string values
keys tensor of tf.dtype values
default_value tensor of tf.dtype values

Results:

Result Description
values tensor of tf.dtype values

tf.LookupTableFindV2 (TF::LookupTableFindV2Op)

Looks up keys in a table, outputs the corresponding values.

The tensor keys must be of the same type as the keys of the table. The output values tensor is of the type of the table values.

The scalar default_value is the value output for keys not present in the table. It must also be of the same type as the table values.

Attributes:

AttributeMLIR TypeDescription
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values
keys tensor of tf.dtype values
default_value tensor of tf.dtype values

Results:

Result Description
values tensor of tf.dtype values
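The lookup-with-default semantics can be sketched with a plain Python dict (the table contents here are hypothetical, chosen only to illustrate the default_value behavior):

```python
# Hypothetical table mapping string keys to int values.
table = {"a": 1, "b": 2}
default_value = -1

keys = ["a", "x", "b"]
values = [table.get(k, default_value) for k in keys]
# values == [1, -1, 2]: "x" is absent, so default_value is returned for it.
```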

tf.LookupTableImportV2 (TF::LookupTableImportV2Op)

Replaces the contents of the table with the specified keys and values.

The tensor keys must be of the same type as the keys of the table. The tensor values must be of the type of the table values.

Attributes:

AttributeMLIR TypeDescription
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values
keys tensor of tf.dtype values
values tensor of tf.dtype values

tf.LookupTableInsertV2 (TF::LookupTableInsertV2Op)

Updates the table to associate keys with values.

The tensor keys must be of the same type as the keys of the table. The tensor values must be of the type of the table values.

Attributes:

AttributeMLIR TypeDescription
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values
keys tensor of tf.dtype values
values tensor of tf.dtype values

tf.LookupTableRemoveV2 (TF::LookupTableRemoveV2Op)

Removes keys and their associated values from a table.

The tensor keys must be of the same type as the keys of the table. Keys not already in the table are silently ignored.

Attributes:

AttributeMLIR TypeDescription
Tin::mlir::Attributederived attribute

Operands:

Operand Description
table_handle tensor of resource values
keys tensor of tf.dtype values

tf.LookupTableSize (TF::LookupTableSizeOp)

Computes the number of elements in the given table.

Operands:

Operand Description
table_handle tensor of string values

Results:

Result Description
size tensor of 64-bit integer values

tf.LookupTableSizeV2 (TF::LookupTableSizeV2Op)

Computes the number of elements in the given table.

Operands:

Operand Description
table_handle tensor of resource values

Results:

Result Description
size tensor of 64-bit integer values

tf.LowerBound (TF::LowerBoundOp)

Applies lower_bound(sorted_search_values, values) along each row.

Each set of rows with the same index in (sorted_inputs, values) is treated independently. The resulting row is the equivalent of calling np.searchsorted(sorted_inputs, values, side='left').

The result is not a global index to the entire Tensor, but rather just the index in the last dimension.

A 2-D example:

sorted_sequence = [[0, 3, 9, 9, 10], [1, 2, 3, 4, 5]]
values = [[2, 4, 9], [0, 2, 6]]

result = LowerBound(sorted_sequence, values)

result == [[1, 2, 2], [0, 1, 5]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
out_type::mlir::Attributederived attribute

Operands:

Operand Description
sorted_inputs tensor of tf.dtype values
values tensor of tf.dtype values

Results:

Result Description
output tensor of 32/64-bit signed integer values
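The documented equivalence with np.searchsorted(..., side='left') can be checked row by row; this sketch reproduces the 2-D example above:

```python
import numpy as np

sorted_inputs = np.array([[0, 3, 9, 9, 10], [1, 2, 3, 4, 5]])
values = np.array([[2, 4, 9], [0, 2, 6]])

# Each row is treated independently, as in the op.
result = np.stack([np.searchsorted(row, vals, side='left')
                   for row, vals in zip(sorted_inputs, values)])
# result == [[1, 2, 2], [0, 1, 5]]
```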

tf.LRN (TF::LRNOp)

Local Response Normalization.

The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius. In detail,

sqr_sum[a, b, c, d] =
    sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012).

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
depth_radius::mlir::IntegerAttr64-bit signless integer attribute
bias::mlir::FloatAttr32-bit float attribute
alpha::mlir::FloatAttr32-bit float attribute
beta::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float values
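The sqr_sum and output formulas above translate into a (slow, loop-based) NumPy sketch over the depth dimension; this illustrates the math, not the op's actual kernel:

```python
import numpy as np

def lrn(x, depth_radius, bias, alpha, beta):
    # x has NHWC layout; each 1-D vector along the last (depth) axis is
    # normalized independently, per the formula in the text.
    out = np.empty_like(x)
    depth = x.shape[-1]
    for d in range(depth):
        lo, hi = max(0, d - depth_radius), min(depth, d + depth_radius + 1)
        sqr_sum = (x[..., lo:hi] ** 2).sum(axis=-1)
        out[..., d] = x[..., d] / (bias + alpha * sqr_sum) ** beta
    return out

x = np.ones((1, 1, 1, 4))
y = lrn(x, depth_radius=1, bias=1.0, alpha=1.0, beta=1.0)
# Edge positions see a window of 2 ones, interior positions a window of 3.
```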

tf.LRNGrad (TF::LRNGradOp)

Gradients for Local Response Normalization.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
depth_radius::mlir::IntegerAttr64-bit signless integer attribute
bias::mlir::FloatAttr32-bit float attribute
alpha::mlir::FloatAttr32-bit float attribute
beta::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input_grads tensor of bfloat16 or 16-bit float or 32-bit float values
input_image tensor of bfloat16 or 16-bit float or 32-bit float values
output_image tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float values

tf.MakeIterator (TF::MakeIteratorOp)

Makes a new iterator from the given dataset and stores it in iterator.

This operation may be executed multiple times. Each execution will reset the iterator in iterator to the first element of dataset.

Operands:

Operand Description
dataset tensor of variant values
iterator tensor of resource values

tf.MakeUnique (TF::MakeUniqueOp)

Make all elements in the non-Batch dimension unique, but "close" to

their initial value. Never returns a sub-normal number. Never returns zero. The sign of each input element is always identical to the sign of the corresponding output element. Behavior for infinite elements is undefined. Behavior for subnormal elements is undefined.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
input tensor of 32-bit float values

Results:

Result Description
output tensor of 32-bit float values

tf.MapAndBatchDataset (TF::MapAndBatchDatasetOp)

Creates a dataset that fuses mapping with batching.

Creates a dataset that applies f to the outputs of input_dataset and then batches batch_size of them.

Unlike a "MapDataset", which applies f sequentially, this dataset invokes up to batch_size * num_parallel_batches copies of f in parallel.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
preserve_cardinality::mlir::BoolAttrbool attribute
metadata::mlir::StringAttrstring attribute
Targuments::mlir::Attributederived attribute

Operands:

Operand Description
input_dataset tensor of variant values
other_arguments variadic of tensor of tf.dtype values
batch_size tensor of 64-bit integer values
num_parallel_calls tensor of 64-bit integer values
drop_remainder tensor of bool values

Results:

Result Description
handle tensor of variant values

tf.MapDataset (TF::MapDatasetOp)

Creates a dataset that applies f to the outputs of input_dataset.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
use_inter_op_parallelism::mlir::BoolAttrbool attribute
preserve_cardinality::mlir::BoolAttrbool attribute
metadata::mlir::StringAttrstring attribute
Targuments::mlir::Attributederived attribute

Operands:

Operand Description
input_dataset tensor of variant values
other_arguments variadic of tensor of tf.dtype values

Results:

Result Description
handle tensor of variant values

tf.MatMul (TF::MatMulOp)

Multiply the matrix "a" by the matrix "b".

The inputs must be two-dimensional matrices and the inner dimension of "a" (after being transposed if transpose_a is true) must match the outer dimension of "b" (after being transposed if transpose_b is true).

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
transpose_a::mlir::BoolAttrbool attribute
transpose_b::mlir::BoolAttrbool attribute
grad_a::mlir::BoolAttrbool attribute
grad_b::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
b tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
product tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
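The transpose_a/transpose_b flags apply before the product, after which the inner dimensions must agree; a NumPy sketch:

```python
import numpy as np

def matmul(a, b, transpose_a=False, transpose_b=False):
    # Transposition happens first, then an ordinary 2-D matrix product.
    if transpose_a:
        a = a.T
    if transpose_b:
        b = b.T
    return a @ b

a = np.array([[1.0, 2.0, 3.0]])   # shape (1, 3)
b = np.array([[4.0, 5.0, 6.0]])   # shape (1, 3)
product = matmul(a, b, transpose_b=True)   # (1, 3) @ (3, 1) -> (1, 1)
# product == [[32.0]]
```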

tf.MatrixBandPart (TF::MatrixBandPartOp)

Copy a tensor setting everything outside a central band in each innermost matrix to zero.

The band part is computed as follows: Assume input has k dimensions [I, J, K, ..., M, N], then the output is a tensor with the same shape where

band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n].

The indicator function

in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper).

For example:

# if 'input' is [[ 0,  1,  2, 3]
#                [-1,  0,  1, 2]
#                [-2, -1,  0, 1]
#                [-3, -2, -1, 0]],

tf.linalg.band_part(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                       [-1,  0,  1, 2]
                                       [ 0, -1,  0, 1]
                                       [ 0,  0, -1, 0]],

tf.linalg.band_part(input, 2, 1) ==> [[ 0,  1,  0, 0]
                                      [-1,  0,  1, 0]
                                      [-2, -1,  0, 1]
                                      [ 0, -2, -1, 0]]

Useful special cases:

 tf.linalg.band_part(input, 0, -1) ==> Upper triangular part.
 tf.linalg.band_part(input, -1, 0) ==> Lower triangular part.
 tf.linalg.band_part(input, 0, 0) ==> Diagonal.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindex::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
num_lower tensor of 32/64-bit signed integer values
num_upper tensor of 32/64-bit signed integer values

Results:

Result Description
band tensor of tf.dtype values
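The in_band indicator above can be written out in NumPy for a single matrix (the batched op applies this to each innermost matrix):

```python
import numpy as np

def band_part(x, num_lower, num_upper):
    m, n = x.shape[-2:]
    rows = np.arange(m)[:, None]
    cols = np.arange(n)[None, :]
    # Negative num_lower/num_upper mean "keep the entire triangle".
    in_band = ((num_lower < 0) | ((rows - cols) <= num_lower)) & \
              ((num_upper < 0) | ((cols - rows) <= num_upper))
    return x * in_band

x = np.arange(16).reshape(4, 4)
# band_part(x, 0, 0) keeps only the diagonal; band_part(x, -1, 0) keeps the
# lower triangle; band_part(x, 0, -1) keeps the upper triangle, matching the
# special cases listed above.
```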

tf.MatrixDiag (TF::MatrixDiagOp)

Returns a batched diagonal tensor with a given batched diagonal values.

Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:

Assume diagonal has k dimensions [I, J, K, ..., N], then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., N, N] where:

output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n].

For example:

# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]

and diagonal.shape = (2, 4)

tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
                                     [0, 2, 0, 0]
                                     [0, 0, 3, 0]
                                     [0, 0, 0, 4]],
                                    [[5, 0, 0, 0]
                                     [0, 6, 0, 0]
                                     [0, 0, 7, 0]
                                     [0, 0, 0, 8]]]

which has shape (2, 4, 4)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
diagonal tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.MatrixDiagPartV3 (TF::MatrixDiagPartV3Op)

Returns the batched diagonal part of a batched tensor.

Returns a tensor with the k[0]-th to k[1]-th diagonals of the batched input.

Assume input has r dimensions [I, J, ..., L, M, N]. Let max_diag_len be the maximum length among all diagonals to be extracted, max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0)). Let num_diags be the number of diagonals to extract, num_diags = k[1] - k[0] + 1.

If num_diags == 1, the output tensor is of rank r - 1 with shape [I, J, ..., L, max_diag_len] and values:

diagonal[i, j, ..., l, n]
  = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,
    padding_value                 ; otherwise.

where y = max(-k[1], 0), x = max(k[1], 0).

Otherwise, the output tensor has rank r with dimensions [I, J, ..., L, num_diags, max_diag_len] with values:

diagonal[i, j, ..., l, m, n]
  = input[i, j, ..., l, n+y, n+x] ; if 0 <= n+y < M and 0 <= n+x < N,
    padding_value                 ; otherwise.

where d = k[1] - m, y = max(-d, 0) - offset, and x = max(d, 0) - offset.

offset is zero except when the alignment of the diagonal is to the right.

offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}
                                           and `d >= 0`) or
                                         (`align` in {LEFT_RIGHT, RIGHT_RIGHT}
                                           and `d <= 0`)
         0                          ; otherwise

where diag_len(d) = min(cols - max(d, 0), rows + min(d, 0)).

The input must be at least a matrix.

For example:

input = np.array([[[1, 2, 3, 4],  # Input shape: (2, 3, 4)
                   [5, 6, 7, 8],
                   [9, 8, 7, 6]],
                  [[5, 4, 3, 2],
                   [1, 2, 3, 4],
                   [5, 6, 7, 8]]])

# A main diagonal from each batch.
tf.matrix_diag_part(input) ==> [[1, 6, 7],  # Output shape: (2, 3)
                                [5, 2, 7]]

# A superdiagonal from each batch.
tf.matrix_diag_part(input, k = 1)
  ==> [[2, 7, 6],  # Output shape: (2, 3)
       [4, 3, 8]]

# A band from each batch.
tf.matrix_diag_part(input, k = (-1, 2))
  ==> [[[0, 3, 8],  # Output shape: (2, 4, 3)
        [2, 7, 6],
        [1, 6, 7],
        [5, 8, 0]],
       [[0, 3, 4],
        [4, 3, 8],
        [5, 2, 7],
        [1, 6, 0]]]

# LEFT_RIGHT alignment.
tf.matrix_diag_part(input, k = (-1, 2), align="LEFT_RIGHT")
  ==> [[[3, 8, 0],  # Output shape: (2, 4, 3)
        [2, 7, 6],
        [1, 6, 7],
        [0, 5, 8]],
       [[3, 4, 0],
        [4, 3, 8],
        [5, 2, 7],
        [0, 1, 6]]]

# max_diag_len can be shorter than the main diagonal.
tf.matrix_diag_part(input, k = (-2, -1))
  ==> [[[5, 8],
        [9, 0]],
       [[1, 6],
        [5, 0]]]

# padding_value = 9
tf.matrix_diag_part(input, k = (1, 3), padding_value = 9)
  ==> [[[9, 9, 4],  # Output shape: (2, 3, 3)
        [9, 3, 8],
        [2, 7, 6]],
       [[9, 9, 2],
        [9, 3, 4],
        [4, 3, 8]]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
align::mlir::StringAttrstring attribute whose value is LEFT_RIGHT, or RIGHT_LEFT, or LEFT_LEFT, or RIGHT_RIGHT
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
k tensor of 32-bit integer values
padding_value tensor of tf.dtype values

Results:

Result Description
diagonal tensor of tf.dtype values

tf.MatrixDiagV2 (TF::MatrixDiagV2Op)

Returns a batched diagonal tensor with given batched diagonal values.

Returns a tensor with the contents in diagonal as k[0]-th to k[1]-th diagonals of a matrix, with everything else padded with padding_value. num_rows and num_cols specify the dimension of the innermost matrix of the output. If neither is specified, the op assumes the innermost matrix is square and infers its size from k and the innermost dimension of diagonal. If only one of them is specified, the op assumes the unspecified value is the smallest possible based on other criteria.

Let diagonal have r dimensions [I, J, ..., L, M, N]. The output tensor has rank r+1 with shape [I, J, ..., L, M, num_rows, num_cols] when only one diagonal is given (k is an integer or k[0] == k[1]). Otherwise, it has rank r with shape [I, J, ..., L, num_rows, num_cols].

The second innermost dimension of diagonal has double meaning. When k is scalar or k[0] == k[1], M is part of the batch size [I, J, ..., M], and the output tensor is:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper
    padding_value                             ; otherwise

where d_upper = k[1].

Otherwise, M is treated as the number of diagonals for the matrix in the same batch (M = k[1]-k[0]+1), and the output tensor is:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]
    padding_value                                     ; otherwise

where d = n - m, diag_index = k[1] - d, and index_in_diag = n - max(d, 0).

For example:

# The main diagonal.
diagonal = np.array([[1, 2, 3, 4],            # Input shape: (2, 4)
                     [5, 6, 7, 8]])
tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0],  # Output shape: (2, 4, 4)
                               [0, 2, 0, 0],
                               [0, 0, 3, 0],
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0],
                               [0, 6, 0, 0],
                               [0, 0, 7, 0],
                               [0, 0, 0, 8]]]

# A superdiagonal (per batch).
diagonal = np.array([[1, 2, 3],  # Input shape: (2, 3)
                     [4, 5, 6]])
tf.matrix_diag(diagonal, k = 1)
  ==> [[[0, 1, 0, 0],  # Output shape: (2, 4, 4)
        [0, 0, 2, 0],
        [0, 0, 0, 3],
        [0, 0, 0, 0]],
       [[0, 4, 0, 0],
        [0, 0, 5, 0],
        [0, 0, 0, 6],
        [0, 0, 0, 0]]]

# A band of diagonals.
diagonals = np.array([[[1, 2, 3],  # Input shape: (2, 2, 3)
                       [4, 5, 0]],
                      [[6, 7, 9],
                       [9, 1, 0]]])
tf.matrix_diag(diagonals, k = (-1, 0))
  ==> [[[1, 0, 0],  # Output shape: (2, 3, 3)
        [4, 2, 0],
        [0, 5, 3]],
       [[6, 0, 0],
        [9, 7, 0],
        [0, 1, 9]]]

# Rectangular matrix.
diagonal = np.array([1, 2])  # Input shape: (2)
tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4)
  ==> [[0, 0, 0, 0],  # Output shape: (3, 4)
       [1, 0, 0, 0],
       [0, 2, 0, 0]]

# Rectangular matrix with inferred num_cols and padding_value = 9.
tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9)
  ==> [[9, 9],  # Output shape: (3, 2)
       [1, 9],
       [9, 2]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
diagonal tensor of tf.dtype values
k tensor of 32-bit integer values
num_rows tensor of 32-bit integer values
num_cols tensor of 32-bit integer values
padding_value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.MatrixDiagV3 (TF::MatrixDiagV3Op)

Returns a batched diagonal tensor with given batched diagonal values.

Returns a tensor with the contents in diagonal as k[0]-th to k[1]-th diagonals of a matrix, with everything else padded with padding_value. num_rows and num_cols specify the dimension of the innermost matrix of the output. If neither is specified, the op assumes the innermost matrix is square and infers its size from k and the innermost dimension of diagonal. If only one of them is specified, the op assumes the unspecified value is the smallest possible based on other criteria.

Let diagonal have r dimensions [I, J, ..., L, M, N]. The output tensor has rank r+1 with shape [I, J, ..., L, M, num_rows, num_cols] when only one diagonal is given (k is an integer or k[0] == k[1]). Otherwise, it has rank r with shape [I, J, ..., L, num_rows, num_cols].

The second innermost dimension of diagonal has double meaning. When k is scalar or k[0] == k[1], M is part of the batch size [I, J, ..., M], and the output tensor is:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, n-max(d_upper, 0)] ; if n - m == d_upper
    padding_value                             ; otherwise

where d_upper = k[1].

Otherwise, M is treated as the number of diagonals for the matrix in the same batch (M = k[1]-k[0]+1), and the output tensor is:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]
    padding_value                                     ; otherwise

where d = n - m, diag_index = k[1] - d, and index_in_diag = n - max(d, 0) + offset.

offset is zero except when the alignment of the diagonal is to the right.

offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}
                                           and `d >= 0`) or
                                         (`align` in {LEFT_RIGHT, RIGHT_RIGHT}
                                           and `d <= 0`)
         0                          ; otherwise

where diag_len(d) = min(cols - max(d, 0), rows + min(d, 0)).

For example:

# The main diagonal.
diagonal = np.array([[1, 2, 3, 4],            # Input shape: (2, 4)
                     [5, 6, 7, 8]])
tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0],  # Output shape: (2, 4, 4)
                               [0, 2, 0, 0],
                               [0, 0, 3, 0],
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0],
                               [0, 6, 0, 0],
                               [0, 0, 7, 0],
                               [0, 0, 0, 8]]]

# A superdiagonal (per batch).
diagonal = np.array([[1, 2, 3],  # Input shape: (2, 3)
                     [4, 5, 6]])
tf.matrix_diag(diagonal, k = 1)
  ==> [[[0, 1, 0, 0],  # Output shape: (2, 4, 4)
        [0, 0, 2, 0],
        [0, 0, 0, 3],
        [0, 0, 0, 0]],
       [[0, 4, 0, 0],
        [0, 0, 5, 0],
        [0, 0, 0, 6],
        [0, 0, 0, 0]]]

# A tridiagonal band (per batch).
diagonals = np.array([[[0, 8, 9],  # Input shape: (2, 2, 3)
                       [1, 2, 3],
                       [4, 5, 0]],
                      [[0, 2, 3],
                       [6, 7, 9],
                       [9, 1, 0]]])
tf.matrix_diag(diagonals, k = (-1, 1))
  ==> [[[1, 8, 0],  # Output shape: (2, 3, 3)
        [4, 2, 9],
        [0, 5, 3]],
       [[6, 2, 0],
        [9, 7, 3],
        [0, 1, 9]]]

# LEFT_RIGHT alignment.
diagonals = np.array([[[8, 9, 0],  # Input shape: (2, 2, 3)
                       [1, 2, 3],
                       [0, 4, 5]],
                      [[2, 3, 0],
                       [6, 7, 9],
                       [0, 9, 1]]])
tf.matrix_diag(diagonals, k = (-1, 1), align="LEFT_RIGHT")
  ==> [[[1, 8, 0],  # Output shape: (2, 3, 3)
        [4, 2, 9],
        [0, 5, 3]],
       [[6, 2, 0],
        [9, 7, 3],
        [0, 1, 9]]]

# Rectangular matrix.
diagonal = np.array([1, 2])  # Input shape: (2)
tf.matrix_diag(diagonal, k = -1, num_rows = 3, num_cols = 4)
  ==> [[0, 0, 0, 0],  # Output shape: (3, 4)
       [1, 0, 0, 0],
       [0, 2, 0, 0]]

# Rectangular matrix with inferred num_cols and padding_value = 9.
tf.matrix_diag(diagonal, k = -1, num_rows = 3, padding_value = 9)
  ==> [[9, 9],  # Output shape: (3, 2)
       [1, 9],
       [9, 2]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
align::mlir::StringAttrstring attribute whose value is LEFT_RIGHT, or RIGHT_LEFT, or LEFT_LEFT, or RIGHT_RIGHT
T::mlir::Attributederived attribute

Operands:

Operand Description
diagonal tensor of tf.dtype values
k tensor of 32-bit integer values
num_rows tensor of 32-bit integer values
num_cols tensor of 32-bit integer values
padding_value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.MatrixInverse (TF::MatrixInverseOp)

Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).

The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices [..., :, :].

The op uses LU decomposition with partial pivoting to compute the inverses.

If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
adjoint::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values
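For a well-conditioned input, the result satisfies input @ output ≈ I per batch matrix. A NumPy sketch of the adjoint=False and adjoint=True cases for a single matrix:

```python
import numpy as np

a = np.array([[4.0, 7.0],
              [2.0, 6.0]])

inv = np.linalg.inv(a)                # adjoint=False: invert a itself
adj_inv = np.linalg.inv(a.conj().T)   # adjoint=True: invert the conjugate transpose
# a @ inv is numerically the 2x2 identity.
```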

tf.MatrixSetDiag (TF::MatrixSetDiagOp)

Returns a batched matrix tensor with new batched diagonal values.

Given input and diagonal, this operation returns a tensor with the same shape and values as input, except for the main diagonal of the innermost matrices. These will be overwritten by the values in diagonal.

The output is computed as follows:

Assume input has k+1 dimensions [I, J, K, ..., M, N] and diagonal has k dimensions [I, J, K, ..., min(M, N)]. Then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., M, N] where:

  • output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n] for m == n.
  • output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n] for m != n.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
diagonal tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values
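The two bullet rules above amount to overwriting only the main diagonal of each innermost matrix; a NumPy sketch:

```python
import numpy as np

def set_diag(x, diagonal):
    # Copy the input, then overwrite positions where m == n in each
    # innermost matrix with the corresponding diagonal entries.
    out = x.copy()
    k = min(x.shape[-2], x.shape[-1])
    idx = np.arange(k)
    out[..., idx, idx] = diagonal
    return out

x = np.full((2, 3, 4), 7)
diagonal = np.array([[1, 2, 3],
                     [4, 5, 6]])
out = set_diag(x, diagonal)
# Batch 0 becomes [[1,7,7,7],[7,2,7,7],[7,7,3,7]]; off-diagonals keep 7.
```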

tf.MatrixSetDiagV2 (TF::MatrixSetDiagV2Op)

Returns a batched matrix tensor with new batched diagonal values.

Given input and diagonal, this operation returns a tensor with the same shape and values as input, except for the specified diagonals of the innermost matrices. These will be overwritten by the values in diagonal.

input has r+1 dimensions [I, J, ..., L, M, N]. When k is scalar or k[0] == k[1], diagonal has r dimensions [I, J, ..., L, max_diag_len]. Otherwise, it has r+1 dimensions [I, J, ..., L, num_diags, max_diag_len]. num_diags is the number of diagonals, num_diags = k[1] - k[0] + 1. max_diag_len is the longest diagonal in the range [k[0], k[1]], max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0))

The output is a tensor of rank r+1 with dimensions [I, J, ..., L, M, N]. If k is scalar or k[0] == k[1]:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1]
    input[i, j, ..., l, m, n]              ; otherwise

Otherwise,

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]
    input[i, j, ..., l, m, n]                         ; otherwise

where d = n - m, diag_index = k[1] - d, and index_in_diag = n - max(d, 0).

For example:

# The main diagonal.
input = np.array([[[7, 7, 7, 7],              # Input shape: (2, 3, 4)
                   [7, 7, 7, 7],
                   [7, 7, 7, 7]],
                  [[7, 7, 7, 7],
                   [7, 7, 7, 7],
                   [7, 7, 7, 7]]])
diagonal = np.array([[1, 2, 3],               # Diagonal shape: (2, 3)
                     [4, 5, 6]])
tf.matrix_set_diag(input, diagonal)
  ==> [[[1, 7, 7, 7],  # Output shape: (2, 3, 4)
        [7, 2, 7, 7],
        [7, 7, 3, 7]],
       [[4, 7, 7, 7],
        [7, 5, 7, 7],
        [7, 7, 6, 7]]]

# A superdiagonal (per batch).
tf.matrix_set_diag(input, diagonal, k = 1)
  ==> [[[7, 1, 7, 7],  # Output shape: (2, 3, 4)
        [7, 7, 2, 7],
        [7, 7, 7, 3]],
       [[7, 4, 7, 7],
        [7, 7, 5, 7],
        [7, 7, 7, 6]]]

# A band of diagonals.
diagonals = np.array([[[1, 2, 3],  # Diagonal shape: (2, 2, 3)
                       [4, 5, 0]],
                      [[6, 1, 2],
                       [3, 4, 0]]])
tf.matrix_set_diag(input, diagonals, k = (-1, 0))
  ==> [[[1, 7, 7, 7],  # Output shape: (2, 3, 4)
        [4, 2, 7, 7],
        [7, 5, 3, 7]],
       [[6, 7, 7, 7],
        [3, 1, 7, 7],
        [7, 4, 2, 7]]]
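The band formula above can be transcribed directly into NumPy (a hypothetical sketch assuming the default right-left alignment, not the TF kernel; `set_diag_band` is an illustrative name):

```python
import numpy as np

def set_diag_band(input, diagonals, k):
    # d = n - m selects the diagonal; diag_index = k[1] - d and
    # index_in_diag = n - max(d, 0) index into `diagonals`, whose innermost
    # dimensions are [num_diags, max_diag_len].
    k0, k1 = k
    output = np.array(input, copy=True)
    rows, cols = output.shape[-2], output.shape[-1]
    for m in range(rows):
        for n in range(cols):
            d = n - m
            if k0 <= d <= k1:
                output[..., m, n] = diagonals[..., k1 - d, n - max(d, 0)]
    return output

input = np.full((2, 3, 4), 7)
diagonals = np.array([[[1, 2, 3], [4, 5, 0]],
                      [[6, 1, 2], [3, 4, 0]]])
print(set_diag_band(input, diagonals, k=(-1, 0))[0])
```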

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
diagonal tensor of tf.dtype values
k tensor of 32-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.MatrixSetDiagV3 (TF::MatrixSetDiagV3Op)

Returns a batched matrix tensor with new batched diagonal values.

Given input and diagonal, this operation returns a tensor with the same shape and values as input, except for the specified diagonals of the innermost matrices. These will be overwritten by the values in diagonal.

input has r+1 dimensions [I, J, ..., L, M, N]. When k is scalar or k[0] == k[1], diagonal has r dimensions [I, J, ..., L, max_diag_len]. Otherwise, it has r+1 dimensions [I, J, ..., L, num_diags, max_diag_len]. num_diags is the number of diagonals: num_diags = k[1] - k[0] + 1. max_diag_len is the length of the longest diagonal in the range [k[0], k[1]]: max_diag_len = min(M + min(k[1], 0), N + min(-k[0], 0)).

The output is a tensor of rank r+1 with dimensions [I, J, ..., L, M, N]. If k is scalar or k[0] == k[1]:

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, n-max(k[1], 0)] ; if n - m == k[1]
    input[i, j, ..., l, m, n]              ; otherwise

Otherwise,

output[i, j, ..., l, m, n]
  = diagonal[i, j, ..., l, diag_index, index_in_diag] ; if k[0] <= d <= k[1]
    input[i, j, ..., l, m, n]                         ; otherwise

where d = n - m, diag_index = k[1] - d, and index_in_diag = n - max(d, 0) + offset.

offset is zero except when the alignment of the diagonal is to the right.

offset = max_diag_len - diag_len(d) ; if (`align` in {RIGHT_LEFT, RIGHT_RIGHT}
                                           and `d >= 0`) or
                                         (`align` in {LEFT_RIGHT, RIGHT_RIGHT}
                                           and `d <= 0`)
         0                          ; otherwise

where diag_len(d) = min(cols - max(d, 0), rows + min(d, 0)).
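The offset rule can be written out as a small Python sketch (a hypothetical transcription of the formula above, for illustration only):

```python
def diag_len(d, rows, cols):
    # Length of diagonal d of a rows x cols matrix.
    return min(cols - max(d, 0), rows + min(d, 0))

def offset(d, align, max_diag_len, rows, cols):
    # A diagonal is right-aligned (non-zero offset) when `align` requests it
    # for its side of the band: superdiagonals for RIGHT_*, subdiagonals for
    # *_RIGHT.
    right_aligned = ((align in ("RIGHT_LEFT", "RIGHT_RIGHT") and d >= 0) or
                     (align in ("LEFT_RIGHT", "RIGHT_RIGHT") and d <= 0))
    return max_diag_len - diag_len(d, rows, cols) if right_aligned else 0

# 3x4 matrices, band k = (-1, 2), so max_diag_len = 3.
for d in (-1, 0, 1, 2):
    print(d, offset(d, "RIGHT_LEFT", 3, 3, 4), offset(d, "LEFT_RIGHT", 3, 3, 4))
```

With the default RIGHT_LEFT alignment only the short superdiagonal (d = 2) is shifted; with LEFT_RIGHT only the short subdiagonal (d = -1) is, which is why its row in the example below is padded in front ([0, 4, 5]).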

For example:

# The main diagonal.
input = np.array([[[7, 7, 7, 7],              # Input shape: (2, 3, 4)
                   [7, 7, 7, 7],
                   [7, 7, 7, 7]],
                  [[7, 7, 7, 7],
                   [7, 7, 7, 7],
                   [7, 7, 7, 7]]])
diagonal = np.array([[1, 2, 3],               # Diagonal shape: (2, 3)
                     [4, 5, 6]])
tf.matrix_set_diag(input, diagonal)
  ==> [[[1, 7, 7, 7],  # Output shape: (2, 3, 4)
        [7, 2, 7, 7],
        [7, 7, 3, 7]],
       [[4, 7, 7, 7],
        [7, 5, 7, 7],
        [7, 7, 6, 7]]]

# A superdiagonal (per batch).
tf.matrix_set_diag(input, diagonal, k = 1)
  ==> [[[7, 1, 7, 7],  # Output shape: (2, 3, 4)
        [7, 7, 2, 7],
        [7, 7, 7, 3]],
       [[7, 4, 7, 7],
        [7, 7, 5, 7],
        [7, 7, 7, 6]]]

# A band of diagonals.
diagonals = np.array([[[0, 9, 1],  # Diagonal shape: (2, 4, 3)
                       [6, 5, 8],
                       [1, 2, 3],
                       [4, 5, 0]],
                      [[0, 1, 2],
                       [5, 6, 4],
                       [6, 1, 2],
                       [3, 4, 0]]])
tf.matrix_set_diag(input, diagonals, k = (-1, 2))
  ==> [[[1, 6, 9, 7],  # Output shape: (2, 3, 4)
        [4, 2, 5, 1],
        [7, 5, 3, 8]],
       [[6, 5, 1, 7],
        [3, 1, 6, 2],
        [7, 4, 2, 4]]]

# LEFT_RIGHT alignment.
diagonals = np.array([[[9, 1, 0],  # Diagonal shape: (2, 4, 3)
                       [6, 5, 8],
                       [1, 2, 3],
                       [0, 4, 5]],
                      [[1, 2, 0],
                       [5, 6, 4],
                       [6, 1, 2],
                       [0, 3, 4]]])
tf.matrix_set_diag(input, diagonals, k = (-1, 2), align="LEFT_RIGHT")
  ==> [[[1, 6, 9, 7],  # Output shape: (2, 3, 4)
        [4, 2, 5, 1],
        [7, 5, 3, 8]],
       [[6, 5, 1, 7],
        [3, 1, 6, 2],
        [7, 4, 2, 4]]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
align::mlir::StringAttrstring attribute whose value is LEFT_RIGHT, or RIGHT_LEFT, or LEFT_LEFT, or RIGHT_RIGHT
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
diagonal tensor of tf.dtype values
k tensor of 32-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.MatrixSolve (TF::MatrixSolveOp)

Solves systems of linear equations.

matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. rhs is a tensor of shape [..., M, K]. The output is a tensor of shape [..., M, K]. If adjoint is False then each output matrix satisfies matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then each output matrix satisfies adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :].
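NumPy's batched solver follows the same convention (a sketch of the semantics, not the TF kernel):

```python
import numpy as np

# For each batch index: matrix @ output = rhs (the adjoint=False case).
matrix = np.array([[[2.0, 0.0],
                    [0.0, 4.0]]])        # shape [1, 2, 2]
rhs = np.array([[[6.0],
                 [8.0]]])                # shape [1, 2, 1]
out = np.linalg.solve(matrix, rhs)
print(out)                               # [[[3.] [2.]]]

# adjoint=True corresponds to solving with the conjugate transpose:
out_adj = np.linalg.solve(np.conj(np.swapaxes(matrix, -1, -2)), rhs)
```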

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
adjoint::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
matrix tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values
rhs tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

tf.MatrixTriangularSolve (TF::MatrixTriangularSolveOp)

Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.

matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. If lower is True then the strictly upper triangular part of each inner-most matrix is assumed to be zero and not accessed. If lower is False then the strictly lower triangular part of each inner-most matrix is assumed to be zero and not accessed. rhs is a tensor of shape [..., M, N].

The output is a tensor of shape [..., M, N]. If adjoint is False then the innermost matrices in output satisfy matrix equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then the innermost matrices in output satisfy matrix equations adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :].

Note that the batch shapes of the inputs only need to be broadcast-compatible.

Example:


a = tf.constant([[3,  0,  0,  0],
                 [2,  1,  0,  0],
                 [1,  0,  1,  0],
                 [1,  1,  1,  1]], dtype=tf.float32)

b = tf.constant([[4],
                 [2],
                 [4],
                 [2]], dtype=tf.float32)

x = tf.linalg.triangular_solve(a, b, lower=True)
x
# <tf.Tensor: shape=(4, 1), dtype=float32, numpy=
# array([[ 1.3333334 ],
#        [-0.66666675],
#        [ 2.6666665 ],
#        [-1.3333331 ]], dtype=float32)>

# in python3 one can use `a@x`
tf.matmul(a, x)
# <tf.Tensor: shape=(4, 1), dtype=float32, numpy=
# array([[4.       ],
#        [2.       ],
#        [4.       ],
#        [1.9999999]], dtype=float32)>

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
lower::mlir::BoolAttrbool attribute
adjoint::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
matrix tensor of floating-point or complex values
rhs tensor of floating-point or complex values

Results:

Result Description
output tensor of floating-point or complex values

tf.Max (TF::MaxOp)

Computes the maximum of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
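The reduction semantics can be illustrated with NumPy, which behaves the same way (an analogue for illustration, not the TF kernel):

```python
import numpy as np

x = np.array([[1, 5, 3],
              [4, 2, 6]])
print(np.max(x, axis=1))                 # [5 6]; rank reduced by 1
print(np.max(x, axis=1, keepdims=True))  # [[5] [6]]; keep_dims retains length-1 dims
```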

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
keep_dims::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.Maximum (TF::MaximumOp)

Returns the max of x and y (i.e. x > y ? x : y) element-wise.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of integer or floating-point values

tf.MaxPool (TF::MaxPoolOp)

Performs max pooling on the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_FoldOperandsTransposeInterface, TF_LayoutSensitiveInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttrstring attribute whose value is NHWC, or NCHW, or NCHW_VECT_C
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit quantized integer or 16-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit quantized integer or 16-bit unsigned integer or 8-bit unsigned integer values

tf.MaxPool3D (TF::MaxPool3DOp)

Performs 3D max pooling on the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float values

tf.MaxPool3DGrad (TF::MaxPool3DGradOp)

Computes gradients of 3D max pooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
T::mlir::Attributederived attribute
TInput::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of bfloat16 or 16-bit float or 32-bit float values
orig_output tensor of bfloat16 or 16-bit float or 32-bit float values
grad tensor of bfloat16 or 16-bit float or 32-bit float values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float values

tf.MaxPool3DGradGrad (TF::MaxPool3DGradGradOp)

Computes second-order gradients of the maxpooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 5 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NDHWC, or NCDHW
T::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of integer or floating-point values
orig_output tensor of integer or floating-point values
grad tensor of integer or floating-point values

Results:

Result Description
output tensor of integer or floating-point values

tf.MaxPoolGrad (TF::MaxPoolGradOp)

Computes gradients of the maxpooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID, or EXPLICIT
explicit_paddings::mlir::ArrayAttr64-bit integer array attribute
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of integer or floating-point values
orig_output tensor of integer or floating-point values
grad tensor of integer or floating-point values

Results:

Result Description
output tensor of integer or floating-point values

tf.MaxPoolGradGrad (TF::MaxPoolGradGradOp)

Computes second-order gradients of the maxpooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
ksize::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
strides::mlir::ArrayAttr64-bit integer array attribute with at least 4 elements
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of integer or floating-point values
orig_output tensor of integer or floating-point values
grad tensor of integer or floating-point values

Results:

Result Description
output tensor of integer or floating-point values

tf.MaxPoolGradGradV2 (TF::MaxPoolGradGradV2Op)

Computes second-order gradients of the maxpooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of integer or floating-point values
orig_output tensor of integer or floating-point values
grad tensor of integer or floating-point values
ksize tensor of 32-bit integer values
strides tensor of 32-bit integer values

Results:

Result Description
output tensor of integer or floating-point values

tf.MaxPoolGradV2 (TF::MaxPoolGradV2Op)

Computes gradients of the maxpooling function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttr'NHWC' or 'NCHW' convnet data format
T::mlir::Attributederived attribute

Operands:

Operand Description
orig_input tensor of integer or floating-point values
orig_output tensor of integer or floating-point values
grad tensor of integer or floating-point values
ksize tensor of 32-bit integer values
strides tensor of 32-bit integer values

Results:

Result Description
output tensor of integer or floating-point values

tf.MaxPoolV2 (TF::MaxPoolV2Op)

Performs max pooling on the input.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
padding::mlir::StringAttrstring attribute whose value is SAME, or VALID
data_format::mlir::StringAttrstring attribute whose value is NHWC, or NCHW, or NCHW_VECT_C
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit quantized integer or 16-bit unsigned integer or 8-bit unsigned integer values
ksize tensor of 32-bit integer values
strides tensor of 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit quantized integer or 16-bit unsigned integer or 8-bit unsigned integer values

tf.Mean (TF::MeanOp)

Computes the mean of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_FoldOperandsTransposeInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
keep_dims::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of number values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.MergeSummary (TF::MergeSummaryOp)

Merges summaries.

This op creates a Summary protocol buffer that contains the union of all the values in the input summaries.

When the Op is run, it reports an InvalidArgument error if multiple values in the summaries to merge use the same tag.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of string values

Results:

Result Description
summary tensor of string values

tf.MergeV2Checkpoints (TF::MergeV2CheckpointsOp)

V2 format specific: merges the metadata files of sharded checkpoints. The

result is one logical checkpoint, with one physical metadata file and renamed data files.

Intended for "grouping" multiple checkpoints in a sharded checkpoint setup.

If delete_old_dirs is true, attempts to recursively delete the dirname of each path in the input checkpoint_prefixes. This is useful when those paths are non-user-facing temporary locations.

If allow_missing_files is true, merges the checkpoint prefixes as long as at least one file exists. Otherwise, if no files exist, an error will be thrown. The default value for allow_missing_files is false.

Attributes:

AttributeMLIR TypeDescription
delete_old_dirs::mlir::BoolAttrbool attribute
allow_missing_files::mlir::BoolAttrbool attribute

Operands:

Operand Description
checkpoint_prefixes tensor of string values
destination_prefix tensor of string values

tf.Min (TF::MinOp)

Computes the minimum of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
keep_dims::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.Minimum (TF::MinimumOp)

Returns the min of x and y (i.e. x < y ? x : y) element-wise.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer or floating-point values
y tensor of integer or floating-point values

Results:

Result Description
z tensor of integer or floating-point values

tf.MirrorPad (TF::MirrorPadOp)

Pads a tensor with mirrored values.

This operation pads input with mirrored values according to the paddings you specify. paddings is an integer tensor with shape [n, 2], where n is the rank of input. For each dimension D of input, paddings[D, 0] indicates how many values to add before the contents of input in that dimension, and paddings[D, 1] indicates how many values to add after the contents of input in that dimension. Both paddings[D, 0] and paddings[D, 1] must be no greater than input.dim_size(D) if mode is SYMMETRIC (the border is copied), or input.dim_size(D) - 1 if mode is REFLECT (the border is not copied).

The padded size of each dimension D of the output is:

paddings(D, 0) + input.dim_size(D) + paddings(D, 1)

For example:

# 't' is [[1, 2, 3], [4, 5, 6]].
# 'paddings' is [[1, 1], [2, 2]].
# 'mode' is SYMMETRIC.
# rank of 't' is 2.
pad(t, paddings) ==> [[2, 1, 1, 2, 3, 3, 2]
                      [2, 1, 1, 2, 3, 3, 2]
                      [5, 4, 4, 5, 6, 6, 5]
                      [5, 4, 4, 5, 6, 6, 5]]
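NumPy's np.pad implements the same two padding modes ('symmetric' corresponds to SYMMETRIC and 'reflect' to REFLECT), so the example can be checked directly:

```python
import numpy as np

t = np.array([[1, 2, 3],
              [4, 5, 6]])
print(np.pad(t, [[1, 1], [2, 2]], mode="symmetric"))
# [[2 1 1 2 3 3 2]
#  [2 1 1 2 3 3 2]
#  [5 4 4 5 6 6 5]
#  [5 4 4 5 6 6 5]]

# REFLECT mirrors without repeating the border values:
print(np.pad(t, [[1, 1], [2, 2]], mode="reflect"))
```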

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
mode::mlir::StringAttrstring attribute whose value is REFLECT, or SYMMETRIC
T::mlir::Attributederived attribute
Tpaddings::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
paddings tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.MirrorPadGrad (TF::MirrorPadGradOp)

Gradient op for MirrorPad op. This op folds a mirror-padded tensor.

This operation folds the padded areas of input by MirrorPad according to the paddings you specify. paddings must be the same as paddings argument given to the corresponding MirrorPad op.

The folded size of each dimension D of the output is:

input.dim_size(D) - paddings(D, 0) - paddings(D, 1)

For example:

# 't' is [[1, 2, 3], [4, 5, 6], [7, 8, 9]].
# 'paddings' is [[0, 1], [0, 1]].
# 'mode' is SYMMETRIC.
# rank of 't' is 2.
pad(t, paddings) ==> [[ 1,  5]
                      [11, 28]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
mode::mlir::StringAttrstring attribute whose value is REFLECT, or SYMMETRIC
T::mlir::Attributederived attribute
Tpaddings::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
paddings tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.MlirLocalVarOp (TF::MlirLocalVarOp)

Creates a handle to an in-scope variable.

Used by internal passes for temporary representation of local state, which will be eventually removed.

Results:

Result Description
resource tensor of resource values

tf.MlirPassthroughOp (TF::MlirPassthroughOp)

Wraps an arbitrary MLIR computation expressed as a module with a main() function.

This operation does not have an associated kernel and is not intended to be executed in a regular TensorFlow session. Instead, it is intended for testing or for special cases where a user wants to pass a custom MLIR computation through a TensorFlow graph, with the intent of having custom tooling process it downstream (when targeting a different environment, like TensorFlow Lite for example). The MLIR module is expected to have a main() function that will be used as an entry point. The inputs to the operation are passed as arguments to the main() function, and the values returned by the main() function are mapped to the outputs. Example usage:

import tensorflow as tf
from tensorflow.compiler.mlir.tensorflow.gen_mlir_passthrough_op import mlir_passthrough_op

mlir_module = '''
func @main(%arg0 : tensor<10xf32>, %arg1 : tensor<10xf32>) -> tensor<10x10xf32> {
   %add = "magic.op"(%arg0, %arg1) : (tensor<10xf32>, tensor<10xf32>) -> tensor<10x10xf32>
   return %add : tensor<10x10xf32>
}
'''

@tf.function
def foo(x, y):
  return mlir_passthrough_op([x, y], mlir_module, Toutputs=[tf.float32])

graph_def = foo.get_concrete_function(tf.TensorSpec([10], tf.float32), tf.TensorSpec([10], tf.float32)).graph.as_graph_def()

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
mlir_module::mlir::StringAttrstring attribute
Tinputs::mlir::Attributederived attribute
Toutputs::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf.Mod (TF::ModOp)

Returns element-wise remainder of division. This emulates C semantics in that

the result here is consistent with a truncating divide. E.g. tf.truncatediv(x, y) * y + truncate_mod(x, y) = x.
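NumPy's fmod also implements C-style truncated remainder, so the identity can be checked directly (a sketch of the semantics, not the TF kernel):

```python
import numpy as np

x = np.array([7, -7, 7, -7])
y = np.array([3, 3, -3, -3])
trunc_div = np.trunc(x / y).astype(int)  # truncating divide, rounds toward zero
rem = np.fmod(x, y)                      # remainder takes the sign of the dividend
print(rem)                               # [ 1 -1  1 -1]
print(trunc_div * y + rem)               # [ 7 -7  7 -7], i.e. x
```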

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or 32/64-bit signed integer values
y tensor of floating-point or 32/64-bit signed integer values

Results:

Result Description
z tensor of floating-point or 32/64-bit signed integer values

tf.ModelDataset (TF::ModelDatasetOp)

Identity transformation that models performance.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
algorithm::mlir::IntegerAttr64-bit signless integer attribute
cpu_budget::mlir::IntegerAttr64-bit signless integer attribute
ram_budget::mlir::IntegerAttr64-bit signless integer attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Operands:

Operand Description
input_dataset tensor of variant values

Results:

Result Description
handle tensor of variant values

tf.Mul (TF::MulOp)

Returns x * y element-wise.

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape, TF_CwiseBinary, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.MulNoNan (TF::MulNoNanOp)

Returns x * y element-wise. Returns zero if y is zero, even if x is infinite or NaN.
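The zero-preserving semantics can be sketched in plain Python (`mul_no_nan` here is a hypothetical scalar helper, not the TF kernel, which operates element-wise on tensors):

```python
import math

def mul_no_nan(x, y):
    # Return 0 when y is 0, even if x is inf or NaN;
    # otherwise fall back to ordinary multiplication.
    return 0.0 if y == 0.0 else x * y

print(mul_no_nan(math.inf, 0.0))  # 0.0, where plain inf * 0.0 would be nan
print(mul_no_nan(2.0, 3.0))       # 6.0
```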

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values
y tensor of floating-point or complex values

Results:

Result Description
z tensor of floating-point or complex values

tf.MultiDeviceIterator (TF::MultiDeviceIteratorOp)

Creates a MultiDeviceIterator resource.

Attributes:

AttributeMLIR TypeDescription
devices::mlir::ArrayAttrstring array attribute with at least 1 elements
shared_name::mlir::StringAttrstring attribute
container::mlir::StringAttrstring attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements

Results:

Result Description
handle tensor of resource values

tf.MultiDeviceIteratorFromStringHandle (TF::MultiDeviceIteratorFromStringHandleOp)

Generates a MultiDeviceIterator resource from its provided string handle.

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute
output_shapes::mlir::ArrayAttrtensorflow shape attribute array

Operands:

Operand Description
string_handle tensor of string values

Results:

Result Description
multi_device_iterator tensor of resource values

tf.MultiDeviceIteratorGetNextFromShard (TF::MultiDeviceIteratorGetNextFromShardOp)

Gets next element for the provided shard number.

Attributes:

AttributeMLIR TypeDescription
output_shapes::mlir::Attributederived attribute
output_types::mlir::Attributederived attribute

Operands:

Operand Description
multi_device_iterator tensor of resource values
shard_num tensor of 32-bit integer values
incarnation_id tensor of 64-bit integer values

Results:

Result Description
components variadic of tensor of tf.dtype values

tf.MultiDeviceIteratorInit (TF::MultiDeviceIteratorInitOp)

Initializes the multi device iterator with the given dataset.

Operands:

Operand Description
dataset tensor of variant values
multi_device_iterator tensor of resource values
max_buffer_size tensor of 64-bit integer values

Results:

Result Description
incarnation_id tensor of 64-bit integer values

tf.MultiDeviceIteratorToStringHandle (TF::MultiDeviceIteratorToStringHandleOp)

Produces a string handle for the given MultiDeviceIterator.

Operands:

Operand Description
multi_device_iterator tensor of resource values

Results:

Result Description
string_handle tensor of string values

tf.Multinomial (TF::MultinomialOp)

Draws samples from a multinomial distribution.
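A rough sketch of the sampling semantics for a single row of logits (`multinomial_sample` is a hypothetical helper; the real op takes a `[batch_size, num_classes]` logits tensor and uses the `seed`/`seed2` attributes):

```python
import math
import random

def multinomial_sample(logits, num_samples, seed=0):
    # Convert unnormalized log-probabilities to class probabilities
    # with a numerically stable softmax.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw num_samples class indices with those probabilities.
    rng = random.Random(seed)
    return rng.choices(range(len(logits)), weights=probs, k=num_samples)

samples = multinomial_sample([1.0, 1.0, 5.0], 1000)
# Class 2 dominates because its logit is much larger.
```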

Traits: TF_CannotDuplicate

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
output_dtype::mlir::Attributederived attribute

Operands:

Operand Description
logits tensor of integer or floating-point values
num_samples tensor of 32-bit integer values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.MutableDenseHashTableV2 (TF::MutableDenseHashTableV2Op)

Creates an empty hash table that uses tensors as the backing store.

It uses "open addressing" with quadratic reprobing to resolve collisions.

This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation.
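The open-addressing scheme can be illustrated with a toy Python class (`DenseHashTable` is a sketch only; the real op additionally reserves the `empty_key`/`deleted_key` sentinels and resizes based on `max_load_factor`):

```python
class DenseHashTable:
    """Toy open-addressing hash table with quadratic probing."""

    def __init__(self, num_buckets=8):
        # Power-of-two bucket count so the triangular-number probe
        # sequence visits every bucket.
        self.keys = [None] * num_buckets
        self.vals = [None] * num_buckets

    def _probe(self, key):
        n = len(self.keys)
        h = hash(key) % n
        for i in range(n):
            # Quadratic probe sequence: h, h+1, h+3, h+6, ...
            yield (h + i * (i + 1) // 2) % n

    def insert(self, key, value):
        for idx in self._probe(key):
            if self.keys[idx] is None or self.keys[idx] == key:
                self.keys[idx], self.vals[idx] = key, value
                return
        raise RuntimeError("table full")

    def lookup(self, key, default=None):
        for idx in self._probe(key):
            if self.keys[idx] is None:
                return default
            if self.keys[idx] == key:
                return self.vals[idx]
        return default

t = DenseHashTable()
t.insert("a", 1.0)
t.insert("b", 2.0)
```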

Attributes:

AttributeMLIR TypeDescription
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
use_node_name_sharing::mlir::BoolAttrbool attribute
value_dtype::mlir::TypeAttrany type attribute
value_shape::mlir::AttributeTensorFlow shape attribute
initial_num_buckets::mlir::IntegerAttr64-bit signless integer attribute
max_load_factor::mlir::FloatAttr32-bit float attribute
key_dtype::mlir::Attributederived attribute

Operands:

Operand Description
empty_key tensor of tf.dtype values
deleted_key tensor of tf.dtype values

Results:

Result Description
table_handle tensor of resource values

tf.MutableHashTableOfTensorsV2 (TF::MutableHashTableOfTensorsV2Op)

Creates an empty hash table.

This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a vector. Data can be inserted into the table using the insert operations. It does not support the initialization operation.

Attributes:

AttributeMLIR TypeDescription
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
use_node_name_sharing::mlir::BoolAttrbool attribute
key_dtype::mlir::TypeAttrany type attribute
value_dtype::mlir::TypeAttrany type attribute
value_shape::mlir::AttributeTensorFlow shape attribute

Results:

Result Description
table_handle tensor of resource values

tf.MutableHashTableV2 (TF::MutableHashTableV2Op)

Creates an empty hash table.

This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation.

Attributes:

AttributeMLIR TypeDescription
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
use_node_name_sharing::mlir::BoolAttrbool attribute
key_dtype::mlir::TypeAttrany type attribute
value_dtype::mlir::TypeAttrany type attribute

Results:

Result Description
table_handle tensor of resource values

tf.NcclAllReduce (TF::NcclAllReduceOp)

Outputs a tensor containing the reduction across all input tensors.

Outputs a tensor containing the reduction across all input tensors passed to ops within the same shared_name.

The graph should be constructed so if one op runs with shared_name value c, then num_devices ops will run with shared_name value c. Failure to do so will cause the graph execution to fail to complete.

  • input: the input to the reduction.
  • data: the value of the reduction across all num_devices devices.
  • reduction: the reduction operation to perform.
  • num_devices: the number of devices participating in this reduction.
  • shared_name: identifier that is shared between ops of the same reduction.
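Ignoring the actual NCCL communication, the reduction semantics can be sketched as follows (`all_reduce` is a hypothetical helper; each inner list stands in for one device's input tensor):

```python
def all_reduce(device_tensors, reduction="sum"):
    # Each participating "device" contributes one flat tensor;
    # every device receives the same element-wise reduction.
    ops = {"sum": sum, "min": min, "max": max}
    reduce_fn = ops[reduction]
    reduced = [reduce_fn(vals) for vals in zip(*device_tensors)]
    # In the real collective, each device gets its own copy of the result.
    return [list(reduced) for _ in device_tensors]

outputs = all_reduce([[1, 2], [3, 4], [5, 6]], reduction="sum")
# every device sees [9, 12]
```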

Traits: InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: GetResourceInstanceInterface, InferShapedTypeOpInterface, InferTypeOpInterface, TF_NcclAllReduceOrderingEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::NcclAllReduceOrdering}

Attributes:

AttributeMLIR TypeDescription
reduction::mlir::StringAttrstring attribute whose value is min, or max, or prod, or sum
num_devices::mlir::IntegerAttr64-bit signless integer attribute
shared_name::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
data tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.Ndtri (TF::NdtriOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point values

Results:

Result Description
y tensor of floating-point values

tf.Neg (TF::NegOp)

Computes numerical negative value element-wise.

I.e., \(y = -x\).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_CwiseUnary, TF_Involution

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

Results:

Result Description
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

tf.NextAfter (TF::NextAfterOp)

Returns the next representable value of x1 in the direction of x2, element-wise.

This operation returns the same result as the C++ std::nextafter function.

It can also return a subnormal number.
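Python 3.9+'s math.nextafter exposes the same std::nextafter semantics, which makes the element-wise behavior easy to check (the op applies this per element of x1 and x2):

```python
import math
import sys

# The next representable double after 1.0, toward 2.0, is 1.0 + ulp(1.0).
assert math.nextafter(1.0, 2.0) == 1.0 + sys.float_info.epsilon

# Moving toward a smaller value steps downward instead.
assert math.nextafter(1.0, 0.0) < 1.0

# Stepping from 0 toward 1 yields the smallest positive subnormal.
assert math.nextafter(0.0, 1.0) == 5e-324
```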

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x1 tensor of 32/64-bit float values
x2 tensor of 32/64-bit float values

Results:

Result Description
output tensor of 32/64-bit float values

tf.NonMaxSuppressionV3 (TF::NonMaxSuppressionV3Op)

Greedily selects a subset of bounding boxes in descending order of score,

pruning away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes with score less than score_threshold are removed. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and more generally is invariant to orthogonal transformations and translations of the coordinate system; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm.

The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. For example:

    selected_indices = tf.image.non_max_suppression_v2(
        boxes, scores, max_output_size, iou_threshold, score_threshold)
    selected_boxes = tf.gather(boxes, selected_indices)
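The greedy selection loop can be sketched in plain Python (`iou` and `non_max_suppression` are hypothetical helpers, not the TF kernels):

```python
def iou(a, b):
    # Boxes are [y1, x1, y2, x2] with y1 <= y2 and x1 <= x2.
    ay1, ax1, ay2, ax2 = a
    by1, bx1, by2, bx2 = b
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter = ih * iw
    union = (ay2 - ay1) * (ax2 - ax1) + (by2 - by1) * (bx2 - bx1) - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, max_output_size,
                        iou_threshold, score_threshold):
    # Visit boxes in descending score order, keeping a box only if it
    # does not overlap an already-kept box above iou_threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    selected = []
    for i in order:
        if scores[i] < score_threshold:
            break
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in selected):
            selected.append(i)
        if len(selected) == max_output_size:
            break
    return selected

boxes = [[0, 0, 1, 1], [0, 0.1, 1, 1.1], [0, 2, 1, 3]]
scores = [0.9, 0.8, 0.7]
keep = non_max_suppression(boxes, scores, 3, 0.5, 0.0)
# box 1 is suppressed by box 0; the disjoint box 2 survives
```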

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
T_threshold::mlir::Attributederived attribute

Operands:

Operand Description
boxes tensor of 16-bit float or 32-bit float values
scores tensor of 16-bit float or 32-bit float values
max_output_size tensor of 32-bit integer values
iou_threshold tensor of 16-bit float or 32-bit float values
score_threshold tensor of 16-bit float or 32-bit float values

Results:

Result Description
selected_indices tensor of 32-bit integer values

tf.NonMaxSuppressionV4 (TF::NonMaxSuppressionV4Op)

Greedily selects a subset of bounding boxes in descending order of score,

pruning away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes with score less than score_threshold are removed. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and more generally is invariant to orthogonal transformations and translations of the coordinate system; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm.

The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. For example:

    selected_indices = tf.image.non_max_suppression_v2(
        boxes, scores, max_output_size, iou_threshold, score_threshold)
    selected_boxes = tf.gather(boxes, selected_indices)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
pad_to_max_output_size::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
T_threshold::mlir::Attributederived attribute

Operands:

Operand Description
boxes tensor of 16-bit float or 32-bit float values
scores tensor of 16-bit float or 32-bit float values
max_output_size tensor of 32-bit integer values
iou_threshold tensor of 16-bit float or 32-bit float values
score_threshold tensor of 16-bit float or 32-bit float values

Results:

Result Description
selected_indices tensor of 32-bit integer values
valid_outputs tensor of 32-bit integer values

tf.NonMaxSuppressionV5 (TF::NonMaxSuppressionV5Op)

Greedily selects a subset of bounding boxes in descending order of score,

pruning away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes with score less than score_threshold are removed. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and more generally is invariant to orthogonal transformations and translations of the coordinate system; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm.

The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the tf.gather operation. For example:

    selected_indices = tf.image.non_max_suppression_v2(
        boxes, scores, max_output_size, iou_threshold, score_threshold)
    selected_boxes = tf.gather(boxes, selected_indices)

This op also supports a Soft-NMS (with Gaussian weighting) mode (c.f. Bodla et al, https://arxiv.org/abs/1704.04503) where boxes reduce the score of other overlapping boxes instead of directly causing them to be pruned. To enable this Soft-NMS mode, set the soft_nms_sigma parameter to be larger than 0.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
pad_to_max_output_size::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
boxes tensor of 16-bit float or 32-bit float values
scores tensor of 16-bit float or 32-bit float values
max_output_size tensor of 32-bit integer values
iou_threshold tensor of 16-bit float or 32-bit float values
score_threshold tensor of 16-bit float or 32-bit float values
soft_nms_sigma tensor of 16-bit float or 32-bit float values

Results:

Result Description
selected_indices tensor of 32-bit integer values
selected_scores tensor of 16-bit float or 32-bit float values
valid_outputs tensor of 32-bit integer values

tf.NoOp (TF::NoOp)

Does nothing. Only useful as a placeholder for control edges.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

tf.NotEqual (TF::NotEqualOp)

Returns the truth value of (x != y) element-wise.

Traits: AlwaysSpeculatableImplTrait, Commutative

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
incompatible_shape_error::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
y tensor of tf.dtype values

Results:

Result Description
z tensor of bool values

tf.OneHot (TF::OneHotOp)

Returns a one-hot tensor.

The locations represented by indices in indices take value on_value, while all other locations take value off_value.

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis (default: the new axis is appended at the end).

If indices is a scalar the output shape will be a vector of length depth.

If indices is a vector of length features, the output shape will be:

  features x depth if axis == -1
  depth x features if axis == 0

If indices is a matrix (batch) with shape [batch, features], the output shape will be:

  batch x features x depth if axis == -1
  batch x depth x features if axis == 1
  depth x batch x features if axis == 0

Examples

Suppose that

  indices = [0, 2, -1, 1]
  depth = 3
  on_value = 5.0
  off_value = 0.0
  axis = -1

Then output is [4 x 3]:

output =
  [5.0 0.0 0.0]  // one_hot(0)
  [0.0 0.0 5.0]  // one_hot(2)
  [0.0 0.0 0.0]  // one_hot(-1)
  [0.0 5.0 0.0]  // one_hot(1)

Suppose that

  indices = [0, 2, -1, 1]
  depth = 3
  on_value = 0.0
  off_value = 3.0
  axis = 0

Then output is [3 x 4]:

output =
  [0.0 3.0 3.0 3.0]
  [3.0 3.0 3.0 0.0]
  [3.0 0.0 3.0 3.0]
//  ^                one_hot(0)
//      ^            one_hot(2)
//          ^        one_hot(-1)
//              ^    one_hot(1)

Suppose that

  indices = [[0, 2], [1, -1]]
  depth = 3
  on_value = 1.0
  off_value = 0.0
  axis = -1

Then output is [2 x 2 x 3]:

output =
  [
    [1.0, 0.0, 0.0]  // one_hot(0)
    [0.0, 0.0, 1.0]  // one_hot(2)
  ][
    [0.0, 1.0, 0.0]  // one_hot(1)
    [0.0, 0.0, 0.0]  // one_hot(-1)
  ]
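The vector examples above can be reproduced with a small pure-Python sketch (`one_hot` here is a hypothetical helper restricted to rank-1 indices, not the TF kernel):

```python
def one_hot(indices, depth, on_value=1.0, off_value=0.0, axis=-1):
    # Out-of-range indices (e.g. -1) match no position,
    # producing an all-off row/column, as in the op.
    rows = [[on_value if j == idx else off_value for j in range(depth)]
            for idx in indices]
    if axis == 0:
        # depth x features layout (transpose of the default).
        return [list(col) for col in zip(*rows)]
    return rows  # features x depth (axis == -1)

out = one_hot([0, 2, -1, 1], 3, on_value=5.0, off_value=0.0)
# [[5.0, 0.0, 0.0], [0.0, 0.0, 5.0], [0.0, 0.0, 0.0], [0.0, 5.0, 0.0]]
```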

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
axis::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
TI::mlir::Attributederived attribute

Operands:

Operand Description
indices tensor of 32-bit integer or 64-bit integer or 8-bit integer or 8-bit unsigned integer values
depth tensor of 32-bit integer values
on_value tensor of tf.dtype values
off_value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.OneShotIterator (TF::OneShotIteratorOp)

Makes a "one-shot" iterator that can be iterated only once.

A one-shot iterator bundles the logic for defining the dataset and the state of the iterator in a single op, which allows simple input pipelines to be defined without an additional initialization ("MakeIterator") step.

One-shot iterators have the following limitations:

  • They do not support parameterization: all logic for creating the underlying dataset must be bundled in the dataset_factory function.
  • They are not resettable. Once a one-shot iterator reaches the end of its underlying dataset, subsequent "IteratorGetNext" operations on that iterator will always produce an OutOfRange error.

For greater flexibility, use "Iterator" and "MakeIterator" to define an iterator using an arbitrary subgraph, which may capture tensors (including fed values) as parameters, and which may be reset multiple times by rerunning "MakeIterator".

Attributes:

AttributeMLIR TypeDescription
dataset_factory::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute

Results:

Result Description
handle tensor of resource values

tf.OnesLike (TF::OnesLikeOp)

Returns a tensor of ones with the same shape and type as x.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
y tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.OptimizeDatasetV2 (TF::OptimizeDatasetV2Op)

Creates a dataset by applying related optimizations to input_dataset.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
optimization_configs::mlir::ArrayAttrstring array attribute

Operands:

Operand Description
input_dataset tensor of variant values
optimizations_enabled tensor of string values
optimizations_disabled tensor of string values
optimizations_default tensor of string values

Results:

Result Description
handle tensor of variant values

tf.OptionalFromValue (TF::OptionalFromValueOp)

Constructs an Optional variant from a tuple of tensors.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Toutput_types::mlir::Attributederived attribute

Operands:

Operand Description
components variadic of tensor of tf.dtype values

Results:

Result Description
optional tensor of variant values

tf.OptionalGetValue (TF::OptionalGetValueOp)

Returns the value stored in an Optional variant or raises an error if none exists.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
output_shapes::mlir::Attributederived attribute
output_types::mlir::Attributederived attribute

Operands:

Operand Description
optional tensor of variant values

Results:

Result Description
components variadic of tensor of tf.dtype values

tf.OptionalHasValue (TF::OptionalHasValueOp)

Returns true if and only if the given Optional variant has a value.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
optional tensor of variant values

Results:

Result Description
has_value tensor of bool values

tf.OptionalNone (TF::OptionalNoneOp)

Creates an Optional variant with no value.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Results:

Result Description
optional tensor of variant values

tf.OutfeedEnqueue (TF::OutfeedEnqueueOp)

Enqueue a Tensor on the computation outfeed.

Attributes:

AttributeMLIR TypeDescription
dtype::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

tf.OutfeedEnqueueTuple (TF::OutfeedEnqueueTupleOp)

Enqueue multiple Tensor values on the computation outfeed.

Attributes:

AttributeMLIR TypeDescription
dtypes::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

tf.Pack (TF::PackOp)

Packs a list of N rank-R tensors into one rank-(R+1) tensor.

Packs the N tensors in values into a tensor with rank one higher than each tensor in values, by packing them along the axis dimension. Given a list of tensors of shape (A, B, C);

if axis == 0 then the output tensor will have the shape (N, A, B, C). if axis == 1 then the output tensor will have the shape (A, N, B, C). Etc.

For example:

# 'x' is [1, 4]
# 'y' is [2, 5]
# 'z' is [3, 6]
pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]  # Pack along first dim.
pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]

This is the opposite of unpack.
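The rank-1 example above can be sketched in plain Python (`pack` is a hypothetical helper handling only lists of rank-1 tensors, not the TF kernel):

```python
def pack(values, axis=0):
    # Stack N rank-1 tensors (lists) into one rank-2 result.
    if axis == 0:
        return [list(v) for v in values]     # shape (N, A)
    # axis == 1: interleave element-wise, the transpose of axis == 0.
    return [list(col) for col in zip(*values)]  # shape (A, N)

x, y, z = [1, 4], [2, 5], [3, 6]
a = pack([x, y, z])          # [[1, 4], [2, 5], [3, 6]]
b = pack([x, y, z], axis=1)  # [[1, 2, 3], [4, 5, 6]]
```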

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
axis::mlir::IntegerAttr64-bit signless integer attribute
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
values variadic of tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.Pad (TF::PadOp)

Pads a tensor with zeros.

This operation pads an input with zeros according to the paddings you specify. paddings is an integer tensor with shape [Dn, 2], where n is the rank of input. For each dimension D of input, paddings[D, 0] indicates how many zeros to add before the contents of input in that dimension, and paddings[D, 1] indicates how many zeros to add after the contents of input in that dimension.

The padded size of each dimension D of the output is:

paddings(D, 0) + input.dim_size(D) + paddings(D, 1)

For example:

# 't' is [[1, 1], [2, 2]]
# 'paddings' is [[1, 1], [2, 2]]
# rank of 't' is 2
pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0]
                      [0, 0, 1, 1, 0, 0]
                      [0, 0, 2, 2, 0, 0]
                      [0, 0, 0, 0, 0, 0]]
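The example above can be reproduced with a small pure-Python sketch (`pad` is a hypothetical helper restricted to rank-2 inputs, not the TF kernel; passing a nonzero `constant` gives the PadV2 behavior):

```python
def pad(t, paddings, constant=0):
    # paddings[D] = [before, after] for each dimension D of the
    # rank-2 input t (a list of lists).
    (top, bottom), (left, right) = paddings
    width = left + len(t[0]) + right
    out = [[constant] * width for _ in range(top)]
    for row in t:
        out.append([constant] * left + list(row) + [constant] * right)
    out += [[constant] * width for _ in range(bottom)]
    return out

t = [[1, 1], [2, 2]]
result = pad(t, [[1, 1], [2, 2]])
# [[0, 0, 0, 0, 0, 0],
#  [0, 0, 1, 1, 0, 0],
#  [0, 0, 2, 2, 0, 0],
#  [0, 0, 0, 0, 0, 0]]
```

Each output dimension has size paddings(D, 0) + input.dim_size(D) + paddings(D, 1), matching the formula above.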

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), TF_FoldOperandsTransposeInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tpaddings::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
paddings tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.PadV2 (TF::PadV2Op)

Pads a tensor.

This operation pads input according to the paddings and constant_values you specify. paddings is an integer tensor with shape [Dn, 2], where n is the rank of input. For each dimension D of input, paddings[D, 0] indicates how many padding values to add before the contents of input in that dimension, and paddings[D, 1] indicates how many padding values to add after the contents of input in that dimension. constant_values is a scalar tensor of the same type as input that indicates the value to use for padding input.

The padded size of each dimension D of the output is:

paddings(D, 0) + input.dim_size(D) + paddings(D, 1)

For example:

# 't' is [[1, 1], [2, 2]]
# 'paddings' is [[1, 1], [2, 2]]
# 'constant_values' is 0
# rank of 't' is 2
pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0]
                      [0, 0, 1, 1, 0, 0]
                      [0, 0, 2, 2, 0, 0]
                      [0, 0, 0, 0, 0, 0]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tpaddings::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
paddings tensor of 32/64-bit signed integer values
constant_values tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.ParallelDynamicStitch (TF::ParallelDynamicStitchOp)

Interleave the values from the data tensors into a single tensor.

Builds a merged tensor such that

    merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]

For example, if each indices[m] is scalar or vector, we have

    # Scalar indices:
    merged[indices[m], ...] = data[m][...]

    # Vector indices:
    merged[indices[m][i], ...] = data[m][i, ...]

Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is

merged.shape = [max(indices)] + constant

Values may be merged in parallel, so if an index appears in both indices[m][i] and indices[n][j], the result may be invalid. This differs from the normal DynamicStitch operator that defines the behavior in that case.

For example:

    indices[0] = 6
    indices[1] = [4, 1]
    indices[2] = [[5, 2], [0, 3]]
    data[0] = [61, 62]
    data[1] = [[41, 42], [11, 12]]
    data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
    merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
              [51, 52], [61, 62]]

This method can be used to merge partitions created by dynamic_partition as illustrated on the following example:

    # Apply function (increments x_i) on elements for which a certain condition
    # apply (x_i != -1 in this example).
    x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
    condition_mask=tf.not_equal(x,tf.constant(-1.))
    partitioned_data = tf.dynamic_partition(
        x, tf.cast(condition_mask, tf.int32) , 2)
    partitioned_data[1] = partitioned_data[1] + 1.0
    condition_indices = tf.dynamic_partition(
        tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2)
    x = tf.dynamic_stitch(condition_indices, partitioned_data)
    # Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain
    # unchanged.
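The scatter semantics can be sketched in plain Python (`dynamic_stitch` is a hypothetical helper covering only scalar and vector index tensors, not the TF kernel, which also accepts higher-rank indices):

```python
def dynamic_stitch(indices, data):
    # Scatter each data[m] into the output at positions indices[m];
    # merged has max(indices) + 1 slots.
    size = max(max(idx) if isinstance(idx, list) else idx
               for idx in indices) + 1
    merged = [None] * size
    for idx, dat in zip(indices, data):
        if isinstance(idx, list):
            for i, row in zip(idx, dat):
                merged[i] = row
        else:
            merged[idx] = dat
    return merged

indices = [6, [4, 1]]
data = [[61, 62], [[41, 42], [11, 12]]]
merged = dynamic_stitch(indices, data)
# merged[6] == [61, 62], merged[4] == [41, 42], merged[1] == [11, 12];
# unreferenced slots stay unset (None here).
```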

Traits: AlwaysSpeculatableImplTrait, SameVariadicOperandSize

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
indices variadic of tensor of 32-bit integer values
data variadic of tensor of tf.dtype values

Results:

Result Description
merged tensor of tf.dtype values

tf.ParallelMapDataset (TF::ParallelMapDatasetOp)

Creates a dataset that applies f to the outputs of input_dataset.

Unlike a "MapDataset", which applies f sequentially, this dataset invokes up to num_parallel_calls copies of f in parallel.
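The parallelism contract can be sketched with Python's thread pool (`parallel_map` is a hypothetical analogue; the real op runs inside the tf.data runtime):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(f, elements, num_parallel_calls):
    # Apply f with up to num_parallel_calls concurrent workers.
    with ThreadPoolExecutor(max_workers=num_parallel_calls) as pool:
        # Executor.map preserves input order, matching the
        # deterministic (non-sloppy) behavior of the dataset.
        return list(pool.map(f, elements))

out = parallel_map(lambda x: x * x, range(8), num_parallel_calls=4)
# [0, 1, 4, 9, 16, 25, 36, 49]
```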

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
use_inter_op_parallelism::mlir::BoolAttrbool attribute
sloppy::mlir::BoolAttrbool attribute
preserve_cardinality::mlir::BoolAttrbool attribute
metadata::mlir::StringAttrstring attribute
Targuments::mlir::Attributederived attribute

Operands:

Operand Description
input_dataset tensor of variant values
other_arguments variadic of tensor of tf.dtype values
num_parallel_calls tensor of 32-bit integer values

Results:

Result Description
handle tensor of variant values

tf.ParallelMapDatasetV2 (TF::ParallelMapDatasetV2Op)

Creates a dataset that applies f to the outputs of input_dataset.

Unlike a "MapDataset", which applies f sequentially, this dataset invokes up to num_parallel_calls copies of f in parallel.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
use_inter_op_parallelism::mlir::BoolAttrbool attribute
deterministic::mlir::StringAttrstring attribute
preserve_cardinality::mlir::BoolAttrbool attribute
metadata::mlir::StringAttrstring attribute
Targuments::mlir::Attributederived attribute

Operands:

Operand Description
input_dataset tensor of variant values
other_arguments variadic of tensor of tf.dtype values
num_parallel_calls tensor of 64-bit integer values

Results:

Result Description
handle tensor of variant values

tf.ParameterizedTruncatedNormal (TF::ParameterizedTruncatedNormalOp)

Outputs random values from a normal distribution. The parameters may each be a

scalar which applies to the entire output, or a vector of length shape[0] which stores the parameters for each batch.
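
One way to picture the op's semantics is rejection sampling: draw from the normal distribution and discard samples outside [minval, maxval]. The pure-Python sketch below illustrates this (the actual kernel uses a more efficient sampler; the helper name is hypothetical):

```python
import random

def truncated_normal(mean, stddev, minval, maxval, n, rng):
    """Draw n samples from N(mean, stddev) restricted to [minval, maxval]
    by simple rejection. Illustrative only; inefficient for narrow ranges."""
    out = []
    while len(out) < n:
        s = rng.gauss(mean, stddev)
        if minval <= s <= maxval:
            out.append(s)
    return out

rng = random.Random(0)
samples = truncated_normal(0.0, 1.0, -2.0, 2.0, 1000, rng)
```

Per the description above, each of mean, stddev, minval, and maxval could instead vary per batch (a vector of length shape[0]).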

Traits: TF_CannotDuplicate

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
means tensor of floating-point values
stdevs tensor of floating-point values
minvals tensor of floating-point values
maxvals tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.ParseExample (TF::ParseExampleOp)

Transforms a vector of tf.Example protos (as strings) into typed tensors.

Traits: AlwaysSpeculatableImplTrait, AttrSizedOperandSegments, AttrSizedResultSegments

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
dense_shapes::mlir::ArrayAttrtensorflow shape attribute array
Nsparse::mlir::Attributederived attribute
Ndense::mlir::Attributederived attribute
Tdense::mlir::Attributederived attribute
sparse_types::mlir::Attributederived attribute

Operands:

Operand Description
serialized tensor of string values
names tensor of string values
sparse_keys variadic of tensor of string values
dense_keys variadic of tensor of string values
dense_defaults variadic of tensor of 32-bit float or 64-bit integer or string values

Results:

Result Description
sparse_indices variadic of tensor of 64-bit integer values
sparse_values variadic of tensor of 32-bit float or 64-bit integer or string values
sparse_shapes variadic of tensor of 64-bit integer values
dense_values variadic of tensor of 32-bit float or 64-bit integer or string values

tf.ParseExampleV2 (TF::ParseExampleV2Op)

Transforms a vector of tf.Example protos (as strings) into typed tensors.

Traits: AlwaysSpeculatableImplTrait, AttrSizedResultSegments

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_sparse::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 0
dense_shapes::mlir::ArrayAttrtensorflow shape attribute array
Tdense::mlir::Attributederived attribute
sparse_types::mlir::Attributederived attribute
ragged_value_types::mlir::Attributederived attribute
ragged_split_types::mlir::Attributederived attribute

Operands:

Operand Description
serialized tensor of string values
names tensor of string values
sparse_keys tensor of string values
dense_keys tensor of string values
ragged_keys tensor of string values
dense_defaults variadic of tensor of 32-bit float or 64-bit integer or string values

Results:

Result Description
sparse_indices variadic of tensor of 64-bit integer values
sparse_values variadic of tensor of 32-bit float or 64-bit integer or string values
sparse_shapes variadic of tensor of 64-bit integer values
dense_values variadic of tensor of 32-bit float or 64-bit integer or string values
ragged_values variadic of tensor of 32-bit float or 64-bit integer or string values
ragged_row_splits variadic of tensor of 32-bit integer or 64-bit integer values

tf.PartitionedCall (TF::PartitionedCallOp)

Returns f(inputs), where f's body is placed and partitioned.

Asynchronously executes a function, potentially across multiple devices but within a single process. The kernel places and partitions a given function's underlying graph, and executes each of the partitioned subgraphs as a function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: CallOpInterface, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), SymbolUserOpInterface

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
config::mlir::StringAttrstring attribute
config_proto::mlir::StringAttrstring attribute
executor_type::mlir::StringAttrstring attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.Placeholder (TF::PlaceholderOp)

Placeholder op

Inserts a placeholder for a tensor that will be always fed.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
dtype::mlir::Attributederived attribute

Results:

Result Description
output tensor of tf.dtype values

tf.PlaceholderWithDefault (TF::PlaceholderWithDefaultOp)

Placeholder op

A placeholder op that passes through input when its output is not fed.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
dtype::mlir::Attributederived attribute
shape::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.Polygamma (TF::PolygammaOp)

Compute the polygamma function \(\psi^{(n)}(x)\).

The polygamma function is defined as:

\(\psi^{(a)}(x) = \frac{d^a}{dx^a} \psi(x)\)

where \(\psi(x)\) is the digamma function. The polygamma function is defined only for non-negative integer orders \(a\).
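
For illustration, the order-1 case (the trigamma function) has the series \(\psi^{(1)}(x) = \sum_{k \ge 0} 1/(x+k)^2\). A direct pure-Python evaluation of that series (a numeric sketch, not the op's implementation):

```python
import math

def trigamma(x, terms=200000):
    """psi^(1)(x) as the truncated series sum_{k>=0} 1/(x+k)^2, plus a
    1/(x+terms) integral-tail estimate for the dropped terms."""
    partial = sum(1.0 / (x + k) ** 2 for k in range(terms))
    return partial + 1.0 / (x + terms)

# Known value: psi^(1)(1) = pi^2 / 6 ~= 1.644934
val = trigamma(1.0)
```

The tail correction makes the truncation error O(1/terms^2), so a few hundred thousand terms already give ~10 correct digits at x = 1.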

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of 32/64-bit float values
x tensor of 32/64-bit float values

Results:

Result Description
z tensor of 32/64-bit float values

tf.PopulationCount (TF::PopulationCountOp)

Computes element-wise population count (a.k.a. popcount, bitsum, bitcount).

For each entry in x, calculates the number of 1 (on) bits in the binary representation of that entry.
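
The per-element rule is a plain bit count. A small Python sketch of the semantics (the two's-complement handling for negative inputs assumes 8-bit values here, purely for illustration):

```python
def population_count(xs):
    """Element-wise count of set bits. Negative inputs are interpreted by
    their fixed-width two's-complement representation; this sketch assumes
    8-bit inputs for that case."""
    return [bin(x & 0xFF).count("1") if x < 0 else bin(x).count("1")
            for x in xs]

counts = population_count([0, 1, 7, 255, -1])
```

For example, 7 = 0b111 has three set bits, and -1 as int8 is 0b11111111, giving eight.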

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of integer values

Results:

Result Description
y tensor of 8-bit unsigned integer values

tf.Pow (TF::PowOp)

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:

# tensor 'x' is [[2, 2], [3, 3]]
# tensor 'y' is [[8, 16], [2, 3]]
tf.pow(x, y) ==> [[256, 65536], [9, 27]]

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

tf.PrefetchDataset (TF::PrefetchDatasetOp)

Creates a dataset that asynchronously prefetches elements from input_dataset.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
slack_period::mlir::IntegerAttr64-bit signless integer attribute
legacy_autotune::mlir::BoolAttrbool attribute
buffer_size_min::mlir::IntegerAttr64-bit signless integer attribute
metadata::mlir::StringAttrstring attribute

Operands:

Operand Description
input_dataset tensor of variant values
buffer_size tensor of 64-bit integer values

Results:

Result Description
handle tensor of variant values

tf.PreventGradient (TF::PreventGradientOp)

An identity op that triggers an error if a gradient is requested.

When executed in a graph, this op outputs its input tensor as-is.

When building ops to compute gradients, the TensorFlow gradient system will return an error when trying to look up the gradient of this op, because no gradient must ever be registered for it. This op exists to prevent subtle bugs from silently returning unimplemented gradients in some corner cases.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
message::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.Print (TF::PrintOp)

Prints a list of tensors.

Passes input through to output and prints data when evaluating.

Attributes:

AttributeMLIR TypeDescription
message::mlir::StringAttrstring attribute
first_n::mlir::IntegerAttr64-bit signless integer attribute
summarize::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
U::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
data variadic of tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.PrintV2 (TF::PrintV2Op)

Prints a string scalar.

Prints a string scalar to the desired output_stream.

Attributes:

AttributeMLIR TypeDescription
output_stream::mlir::StringAttrstring attribute
end::mlir::StringAttrstring attribute

Operands:

Operand Description
input tensor of string values

tf.Prod (TF::ProdOp)

Computes the product of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
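
For a 2-D input, this reduction rule can be sketched in plain Python (axis-0 only, illustrative; the op accepts arbitrary axes and keep_dims):

```python
import math

def prod_axis0(matrix, keep_dims=False):
    """Product across axis 0 of a 2-D list of numbers. The rank drops by
    one unless keep_dims retains the reduced dimension with length 1."""
    reduced = [math.prod(row[j] for row in matrix)
               for j in range(len(matrix[0]))]
    return [reduced] if keep_dims else reduced

m = [[1, 2, 3],
     [4, 5, 6]]
```

Here prod_axis0(m) gives the column products [4, 10, 18]; with keep_dims=True the result keeps rank 2 as [[4, 10, 18]].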

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
keep_dims::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of number values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.Qr (TF::QrOp)

Computes the QR decompositions of one or more matrices.

Computes the QR decomposition of each inner matrix in tensor such that tensor[..., :, :] = q[..., :, :] * r[..., :, :].

Currently, the gradient for the QR decomposition is well-defined only when the first P columns of the inner matrix are linearly independent, where P is the minimum of M and N, the 2 inner-most dimensions of tensor.

# a is a tensor.
# q is a tensor of orthonormal matrices.
# r is a tensor of upper triangular matrices.
q, r = qr(a)
q_full, r_full = qr(a, full_matrices=True)
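
To illustrate what the q and r factors satisfy, here is a classical Gram-Schmidt QR for a single 2x2 matrix in pure Python. This is a didactic sketch only; the op batches over inner matrices and uses a numerically more robust algorithm:

```python
import math

def qr_2x2(a):
    """Classical Gram-Schmidt QR of a 2x2 matrix (rows as lists):
    a = q @ r with q orthonormal and r upper triangular."""
    c0 = [a[0][0], a[1][0]]                 # first column
    c1 = [a[0][1], a[1][1]]                 # second column
    r00 = math.hypot(*c0)
    q0 = [c0[0] / r00, c0[1] / r00]         # normalized first column
    r01 = q0[0] * c1[0] + q0[1] * c1[1]     # projection of c1 on q0
    v = [c1[0] - r01 * q0[0], c1[1] - r01 * q0[1]]
    r11 = math.hypot(*v)
    q1 = [v[0] / r11, v[1] / r11]           # orthogonal complement
    q = [[q0[0], q1[0]], [q0[1], q1[1]]]
    r = [[r00, r01], [0.0, r11]]
    return q, r

a = [[1.0, 2.0], [3.0, 4.0]]
q, r = qr_2x2(a)
```

Multiplying q by r reconstructs a, and the columns of q are orthonormal, which is exactly the contract tensor[..., :, :] = q[..., :, :] * r[..., :, :] describes per inner matrix.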

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
full_matrices::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
q tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values
r tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

tf.QuantizeAndDequantize (TF::QuantizeAndDequantizeOp)

Use QuantizeAndDequantizeV2 instead.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
signed_input::mlir::BoolAttrbool attribute
num_bits::mlir::IntegerAttr64-bit signless integer attribute
range_given::mlir::BoolAttrbool attribute
input_min::mlir::FloatAttr32-bit float attribute
input_max::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.QuantizeAndDequantizeV2 (TF::QuantizeAndDequantizeV2Op)

Quantizes then dequantizes a tensor.

This op simulates the precision loss from the quantized forward pass by:

  1. Quantizing the tensor to fixed point numbers, which should match the target quantization method when it is used in inference.
  2. Dequantizing it back to floating point numbers for the following ops, most likely matmul.

There are different ways to quantize. This version uses only scaling, so 0.0 maps to 0.

From the specified 'num_bits' in the quantized output type, it determines minimum and maximum representable quantized values.

e.g.

  • [-128, 127] for signed, num_bits = 8, or
  • [0, 255] for unsigned, num_bits = 8.

If range_given == False, the initial input_min, input_max will be determined automatically as the minimum and maximum values in the input tensor, otherwise the specified values of input_min, input_max are used.

This op determines the maximum scale_factor that would map the initial [input_min, input_max] range to a range that lies within the representable quantized range.

It determines the scale from one of input_min and input_max, then updates the other one to maximize the representable range.

e.g.

  • if the output is signed, num_bits = 8, and [input_min, input_max] = [-10.0, 5.0]: it would use a scale_factor of -128 / -10.0 = 12.8. In this case, it would update input_max to be 127 / 12.8 = 9.921875.
  • if the output is signed, num_bits = 8, and [input_min, input_max] = [-10.0, 10.0]: it would use a scale_factor of 127 / 10.0 = 12.7. In this case, it would update input_min to be -128.0 / 12.7 = -10.07874.
  • if the output is unsigned, input_min is forced to be 0, and only the specified input_max is used.

After determining the scale_factor and updating the input range, it applies the following to each value in the 'input' tensor.

output = round(clamp(value, input_min, input_max) * scale_factor) / scale_factor.

The above round function rounds the value based on the given round_mode.
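
The range adjustment and quantize-then-dequantize formula can be sketched numerically in pure Python for the signed, non-narrow-range case. Python's built-in round conveniently rounds half to even, matching the HALF_TO_EVEN round_mode; the function name is illustrative:

```python
def qdq_v2(values, input_min, input_max, num_bits=8):
    """Signed quantize-then-dequantize sketch: choose the smaller (safe)
    scale_factor so both endpoints stay representable, widen the other
    endpoint, then apply round(clamp(v) * s) / s per value."""
    min_q = -(1 << (num_bits - 1))        # e.g. -128 for 8 bits
    max_q = (1 << (num_bits - 1)) - 1     # e.g.  127
    scale = min(min_q / input_min, max_q / input_max)  # assumes min < 0 < max
    input_min, input_max = min_q / scale, max_q / scale
    out = [round(min(max(v, input_min), input_max) * scale) / scale
           for v in values]
    return out, input_min, input_max, scale

out, new_min, new_max, scale = qdq_v2([4.0], -10.0, 5.0)
```

With [input_min, input_max] = [-10.0, 5.0] this reproduces the first bullet above: scale_factor 12.8, input_max widened to 9.921875, and 4.0 snapping to 51 / 12.8 = 3.984375.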

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
signed_input::mlir::BoolAttrbool attribute
num_bits::mlir::IntegerAttr64-bit signless integer attribute
range_given::mlir::BoolAttrbool attribute
round_mode::mlir::StringAttrstring attribute whose value is HALF_TO_EVEN, or HALF_UP
narrow_range::mlir::BoolAttrbool attribute
axis::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
input_min tensor of floating-point values
input_max tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.QuantizeAndDequantizeV3 (TF::QuantizeAndDequantizeV3Op)

Quantizes then dequantizes a tensor.

This is almost identical to QuantizeAndDequantizeV2, except that num_bits is a tensor, so its value can change during training.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
signed_input::mlir::BoolAttrbool attribute
range_given::mlir::BoolAttrbool attribute
narrow_range::mlir::BoolAttrbool attribute
axis::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
input_min tensor of floating-point values
input_max tensor of floating-point values
num_bits tensor of 32-bit integer values

Results:

Result Description
output tensor of floating-point values

tf.QuantizeAndDequantizeV4 (TF::QuantizeAndDequantizeV4Op)

Quantizes then dequantizes a tensor.

This is almost identical to QuantizeAndDequantizeV2, except that it returns a gradient of 1 for inputs that are within the quantization range, or 0 otherwise.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
signed_input::mlir::BoolAttrbool attribute
num_bits::mlir::IntegerAttr64-bit signless integer attribute
range_given::mlir::BoolAttrbool attribute
round_mode::mlir::StringAttrstring attribute whose value is HALF_TO_EVEN, or HALF_UP
narrow_range::mlir::BoolAttrbool attribute
axis::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of floating-point values
input_min tensor of floating-point values
input_max tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.QuantizeV2 (TF::QuantizeV2Op)

Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.

[min_range, max_range] are scalar floats that specify the range for the 'input' data. The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents. The 'round_mode' attribute controls which rounding tie-breaking algorithm is used when rounding float values to their quantized equivalents.

In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:

out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
if T == qint8: out[i] -= (range(T) + 1) / 2.0

here range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()

MIN_COMBINED Mode Example

Assume the input is type float and has a possible range of [0.0, 6.0] and the output type is quint8 ([0, 255]). The min_range and max_range values should be specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each value of the input by 255/6 and cast to quint8.

If the output type was qint8 ([-128, 127]), the operation will additionally subtract each value by 128 prior to casting, so that the range of values aligns with the range of qint8.
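
The MIN_COMBINED arithmetic in the quint8 example above can be sketched in pure Python. The truncating int() here stands in for the "cast" in the description and is illustrative only:

```python
def quantize_min_combined(values, min_range, max_range, bits=8, signed=False):
    """out = (in - min_range) * range(T) / (max_range - min_range),
    truncated ("cast") to an integer; signed types additionally subtract
    (range(T) + 1) / 2 so values center on zero, as for qint8."""
    range_t = (1 << bits) - 1                       # 255 for 8 bits
    scale = range_t / (max_range - min_range)       # 255 / 6 in the example
    out = [int((v - min_range) * scale) for v in values]
    if signed:
        out = [v - (range_t + 1) // 2 for v in out]
    return out

q = quantize_min_combined([0.0, 3.0, 6.0], 0.0, 6.0)
```

For the [0.0, 6.0] input range, 0.0 maps to 0, 6.0 to 255, and the midpoint 3.0 to 127 after truncation; passing signed=True would shift all three down by 128.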

If the mode is 'MIN_FIRST', then this approach is used:

num_discrete_values = 1 << (# of bits in T)
range_adjust = num_discrete_values / (num_discrete_values - 1)
range = (range_max - range_min) * range_adjust
range_scale = num_discrete_values / range
quantized = round(input * range_scale) - round(range_min * range_scale) +
  numeric_limits<T>::min()
quantized = max(quantized, numeric_limits<T>::min())
quantized = min(quantized, numeric_limits<T>::max())

The biggest difference between this and MIN_COMBINED is that the minimum range is rounded first, before it's subtracted from the rounded value. With MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing and dequantizing will introduce a larger and larger error.

SCALED mode Example

SCALED mode matches the quantization approach used in QuantizeAndDequantize{V2|V3}.

If the mode is SCALED, the quantization is performed by multiplying each input value by a scaling_factor. The scaling_factor is determined from min_range and max_range to be as large as possible such that the range from min_range to max_range is representable within values of type T.


  const int min_T = std::numeric_limits<T>::min();
  const int max_T = std::numeric_limits<T>::max();
  const float max_float = std::numeric_limits<float>::max();

  const float scale_factor_from_min_side =
      (min_T * min_range > 0) ? min_T / min_range : max_float;
  const float scale_factor_from_max_side =
      (max_T * max_range > 0) ? max_T / max_range : max_float;

  const float scale_factor = std::min(scale_factor_from_min_side,
                                      scale_factor_from_max_side);

We next use the scale_factor to adjust min_range and max_range as follows:

      min_range = min_T / scale_factor;
      max_range = max_T / scale_factor;

e.g. if T = qint8, and initially min_range = -10 and max_range = 9, we would compare -128/-10.0 = 12.8 to 127/9.0 = 14.11 and set scaling_factor = 12.8. In this case, min_range would remain -10, but max_range would be adjusted to 127 / 12.8 = 9.921875.

So we will quantize input values in the range (-10, 9.921875) to (-128, 127).

The input tensor can now be quantized by clipping values to the range min_range to max_range, then multiplying by scale_factor as follows:

result = round(min(max_range, max(min_range, input)) * scale_factor)

The adjusted min_range and max_range are returned as outputs 2 and 3 of this operation. These outputs should be used as the range for any further calculations.

narrow_range (bool) attribute

If true, we do not use the minimum quantized value; i.e., for an int8 quantized output, values would be restricted to the range -127..127 instead of the full -128..127 range. This is provided for compatibility with certain inference backends. (Only applies to SCALED mode.)

axis (int) attribute

An optional axis attribute can specify a dimension index of the input tensor, such that quantization ranges will be calculated and applied separately for each slice of the tensor along that dimension. This is useful for per-channel quantization.

If axis is specified, min_range and max_range must be vectors whose size matches the axis dimension of the input and output tensors.

If axis=None, per-tensor quantization is performed as normal.

ensure_minimum_range (float) attribute

Ensures the minimum quantization range is at least this value. The legacy default value for this is 0.01, but it is strongly suggested to set it to 0 for new uses.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
mode::mlir::StringAttrstring attribute whose value is MIN_COMBINED, or MIN_FIRST, or SCALED
round_mode::mlir::StringAttrstring attribute whose value is HALF_AWAY_FROM_ZERO, or HALF_TO_EVEN
narrow_range::mlir::BoolAttrbool attribute
axis::mlir::IntegerAttr64-bit signless integer attribute
ensure_minimum_range::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 32-bit float values
min_range tensor of 32-bit float values
max_range tensor of 32-bit float values

Results:

Result Description
output tensor of 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer values
output_min tensor of 32-bit float values
output_max tensor of 32-bit float values

tf.QueueDequeueV2 (TF::QueueDequeueV2Op)

Dequeues a tuple of one or more tensors from the given queue.

This operation has k outputs, where k is the number of components in the tuples stored in the given queue, and output i is the ith component of the dequeued tuple.

N.B. If the queue is empty, this operation will block until an element has been dequeued (or 'timeout_ms' elapses, if specified).

Attributes:

AttributeMLIR TypeDescription
timeout_ms::mlir::IntegerAttr64-bit signless integer attribute
component_types::mlir::Attributederived attribute

Operands:

Operand Description
handle tensor of resource values

Results:

Result Description
components variadic of tensor of tf.dtype values

tf.RaggedGather (TF::RaggedGatherOp)

Gather ragged slices from params axis 0 according to indices.

Outputs a RaggedTensor output composed from output_dense_values and output_nested_splits, such that:

output.shape = indices.shape + params.shape[1:]
output.ragged_rank = indices.shape.ndims + params.ragged_rank
output[i...j, d0...dn] = params[indices[i...j], d0...dn]

where

  • params = ragged.from_nested_row_splits(params_dense_values, params_nested_splits) provides the values that should be gathered.
  • indices is a dense tensor with dtype int32 or int64, indicating which values should be gathered.
  • output = ragged.from_nested_row_splits(output_dense_values, output_nested_splits) is the output tensor.

(Note: This C++ op is used to implement the higher-level Python tf.ragged.gather op, which also supports ragged indices.)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
PARAMS_RAGGED_RANK::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
Tsplits::mlir::Attributederived attribute
Tvalues::mlir::Attributederived attribute
OUTPUT_RAGGED_RANK::mlir::Attributederived attribute

Operands:

Operand Description
params_nested_splits variadic of tensor of 32/64-bit signed integer values
params_dense_values tensor of tf.dtype values
indices tensor of 32/64-bit signed integer values

Results:

Result Description
output_nested_splits variadic of tensor of 32/64-bit signed integer values
output_dense_values tensor of tf.dtype values

tf.RaggedRange (TF::RaggedRangeOp)

Returns a RaggedTensor containing the specified sequences of numbers.

Returns a RaggedTensor result composed from rt_dense_values and rt_nested_splits, such that result[i] = range(starts[i], limits[i], deltas[i]).

(rt_nested_splits, rt_dense_values) = ragged_range(
      starts=[2, 5, 8], limits=[3, 5, 12], deltas=1)
result = tf.ragged.from_row_splits(rt_dense_values, rt_nested_splits)
print(result)
<tf.RaggedTensor [[2], [], [8, 9, 10, 11]]>

The input tensors starts, limits, and deltas may be scalars or vectors. The vector inputs must all have the same size. Scalar inputs are broadcast to match the size of the vector inputs.
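
The row_splits/dense_values encoding in the example can be reproduced with a small pure-Python emulation (vector inputs only; scalar broadcasting is omitted for brevity):

```python
def ragged_range(starts, limits, deltas):
    """Build (rt_nested_splits, rt_dense_values) such that row i of the
    result is range(starts[i], limits[i], deltas[i])."""
    splits, values = [0], []
    for start, limit, delta in zip(starts, limits, deltas):
        values.extend(range(start, limit, delta))
        splits.append(len(values))  # each split marks where a row ends
    return splits, values

splits, values = ragged_range([2, 5, 8], [3, 5, 12], [1, 1, 1])
```

This yields splits [0, 1, 1, 5] and values [2, 8, 9, 10, 11], i.e. the rows [2], [], and [8, 9, 10, 11] from the example (the empty middle row shows up as a repeated split).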

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tsplits::mlir::Attributederived attribute

Operands:

Operand Description
starts tensor of bfloat16 or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values
limits tensor of bfloat16 or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values
deltas tensor of bfloat16 or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
rt_nested_splits tensor of 32/64-bit signed integer values
rt_dense_values tensor of bfloat16 or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.RandomGamma (TF::RandomGammaOp)

Outputs random values from the Gamma distribution(s) described by alpha.

This op uses the algorithm by Marsaglia et al. to acquire samples via transformation-rejection from pairs of uniform and normal random variables. See http://dl.acm.org/citation.cfm?id=358414
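
The Marsaglia-Tsang transformation-rejection method referenced above can be sketched in pure Python for shape parameters alpha >= 1 (unit rate; a didactic sketch, not the kernel):

```python
import math
import random

def gamma_marsaglia(alpha, rng):
    """One Gamma(alpha, 1) sample via the Marsaglia-Tsang method:
    transform a normal draw, then accept/reject with a uniform draw.
    Valid for alpha >= 1."""
    d = alpha - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = rng.gauss(0.0, 1.0)
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue  # transformed value out of support; redraw
        u = rng.random()
        if math.log(u) < 0.5 * x * x + d - d * v + d * math.log(v):
            return d * v

rng = random.Random(42)
samples = [gamma_marsaglia(4.0, rng) for _ in range(2000)]
```

Each sample pairs one normal and one uniform variable, as the description says; the acceptance rate of this method is close to 1 for moderate alpha.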

Traits: TF_CannotDuplicate

Interfaces: TF_RandomGeneratorSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::RandomGenerator}

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
S::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
alpha tensor of 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float values

tf.RandomGammaGrad (TF::RandomGammaGradOp)

Computes the derivative of a Gamma random sample w.r.t. alpha.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
alpha tensor of 32/64-bit float values
sample tensor of 32/64-bit float values

Results:

Result Description
output tensor of 32/64-bit float values

tf.RandomPoisson (TF::RandomPoissonOp)

Use RandomPoissonV2 instead.

Traits: TF_CannotDuplicate

Interfaces: TF_RandomGeneratorSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::RandomGenerator}

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
S::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
rate tensor of 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float values

tf.RandomPoissonV2 (TF::RandomPoissonV2Op)

Outputs random values from the Poisson distribution(s) described by rate.

This op uses two algorithms, depending on rate. If rate >= 10, then the algorithm by Hormann is used to acquire samples via transformation-rejection. See http://www.sciencedirect.com/science/article/pii/0167668793909974

Otherwise, Knuth's algorithm is used to acquire samples via multiplying uniform random variables. See Donald E. Knuth (1969). Seminumerical Algorithms. The Art of Computer Programming, Volume 2. Addison Wesley
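
Knuth's multiplication algorithm for the small-rate branch can be sketched as follows (pure Python, unit illustration only; it degrades for large rates, which is why the op switches methods at rate >= 10):

```python
import math
import random

def poisson_knuth(rate, rng):
    """One Poisson(rate) sample by multiplying uniforms until the running
    product drops below exp(-rate); the count of draws (minus one) is the
    sample. Suitable only for small rates."""
    threshold = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(7)
samples = [poisson_knuth(3.0, rng) for _ in range(3000)]
```

The expected number of uniform draws per sample is rate + 1, so the cost grows linearly with the rate, unlike the Hormann method used for rate >= 10.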

Traits: TF_CannotDuplicate

Interfaces: TF_RandomGeneratorSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::RandomGenerator}

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
R::mlir::Attributederived attribute
S::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
rate tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.RandomShuffle (TF::RandomShuffleOp)

Randomly shuffles a tensor along its first dimension.

The tensor is shuffled along dimension 0, such that each value[j] is mapped to one and only one output[i]. For example, a mapping that might occur for a 3x2 tensor is:

[[1, 2],       [[5, 6],
 [3, 4],  ==>   [1, 2],
 [5, 6]]        [3, 4]]

Traits: InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_CannotDuplicate

Interfaces: InferShapedTypeOpInterface, InferTypeOpInterface, TF_RandomGeneratorSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::RandomGenerator}

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.RandomStandardNormal (TF::RandomStandardNormalOp)

Outputs random values from a normal distribution.

The generated values will have mean 0 and standard deviation 1.

Traits: TF_CannotDuplicate

Interfaces: TF_RandomGeneratorSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::RandomGenerator}

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.RandomUniform (TF::RandomUniformOp)

Outputs random values from a uniform distribution.

The generated values follow a uniform distribution in the range [0, 1). The lower bound 0 is included in the range, while the upper bound 1 is excluded.

Traits: TF_CannotDuplicate

Interfaces: GetResourceInstanceInterface, TF_RandomGeneratorSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::RandomGenerator}

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.RandomUniformInt (TF::RandomUniformIntOp)

Outputs random integers from a uniform distribution.

The generated values are uniform integers in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.

The random integers are slightly biased unless maxval - minval is an exact power of two. The bias is small for values of maxval - minval significantly smaller than the range of the output (either 2^32 or 2^64).
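
The source of this bias can be seen with a toy generator. The following Python sketch (illustrative only, not the kernel's actual algorithm) maps every possible random word into a range by modulo and counts how often each output value occurs:

```python
# Illustration of modulo bias: map every possible random "word" into
# [minval, maxval) and count how often each output value occurs.
def bucket_counts(word_bits, minval, maxval):
    span = maxval - minval
    counts = {v: 0 for v in range(minval, maxval)}
    for w in range(2 ** word_bits):  # enumerate all 2^word_bits words
        counts[minval + w % span] += 1
    return counts

# With a 4-bit word (16 states) and a range of 3, buckets are uneven:
print(bucket_counts(4, 0, 3))  # {0: 6, 1: 5, 2: 5}
# With a power-of-two range, buckets are exactly equal:
print(bucket_counts(4, 0, 4))  # {0: 4, 1: 4, 2: 4, 3: 4}
```

The relative bias shrinks as the word range (2^32 or 2^64 in the real kernel) grows far beyond maxval - minval.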

Traits: TF_CannotDuplicate

Interfaces: TF_RandomGeneratorSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::RandomGenerator}

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
minval tensor of 32/64-bit signed integer values
maxval tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.Range (TF::RangeOp)

Creates a sequence of numbers.

This operation creates a sequence of numbers that begins at start and extends by increments of delta up to but not including limit.

For example:

# 'start' is 3
# 'limit' is 18
# 'delta' is 3
tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
start tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer values
limit tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer values
delta tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer values

tf.RangeDataset (TF::RangeDatasetOp)

Creates a dataset with a range of values. Corresponds to Python's xrange.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute
replicate_on_split::mlir::BoolAttrbool attribute

Operands:

Operand Description
start tensor of 64-bit integer values
stop tensor of 64-bit integer values
step tensor of 64-bit integer values

Results:

Result Description
handle tensor of variant values

tf.Rank (TF::RankOp)

Returns the rank of a tensor.

This operation returns an integer representing the rank of input.

For example:

# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
# shape of tensor 't' is [2, 2, 3]
rank(t) ==> 3

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of 32-bit integer values

tf.ReadFile (TF::ReadFileOp)

Reads and outputs the entire contents of the input filename.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
filename tensor of string values

Results:

Result Description
contents tensor of string values

tf.ReadVariableOp (TF::ReadVariableOp)

Reads the value of a variable.

The tensor returned by this operation is immutable.

The value returned by this operation is guaranteed to be influenced by all the writes on which this operation depends directly or indirectly, and to not be influenced by any of the writes which depend directly or indirectly on this operation.

Attributes:

AttributeMLIR TypeDescription
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values

Results:

Result Description
value tensor of tf.dtype values

tf.Real (TF::RealOp)

Returns the real part of a complex number.

Given a tensor input of complex numbers, this operation returns a tensor of type float that is the real part of each element in input. All elements in input must be complex numbers of the form \(a + bj\), where a is the real part returned by this operation and b is the imaginary part.

For example:

# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.real(input) ==> [-2.25, 3.25]

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex values

Results:

Result Description
output tensor of 32/64-bit float values

tf.RealDiv (TF::RealDivOp)

Returns x / y element-wise for real types.

If x and y are reals, this will return the floating-point division.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_CwiseBinary

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.Reciprocal (TF::ReciprocalOp)

Computes the reciprocal of x element-wise.

I.e., \(y = 1 / x\).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Involution

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

Results:

Result Description
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

tf.ReciprocalGrad (TF::ReciprocalGradOp)

Computes the gradient for the inverse of x w.r.t. its input.

Specifically, grad = -dy * y * y, where y = 1/x, and dy is the corresponding input gradient.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
y tensor of floating-point or complex values
dy tensor of floating-point or complex values

Results:

Result Description
z tensor of floating-point or complex values

tf.Recv (TF::RecvOp)

Receives the named tensor from send_device on recv_device.

Interfaces: TF_RecvSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}

Attributes:

AttributeMLIR TypeDescription
tensor_name::mlir::StringAttrstring attribute
send_device::mlir::StringAttrstring attribute
send_device_incarnation::mlir::IntegerAttr64-bit signless integer attribute
recv_device::mlir::StringAttrstring attribute
client_terminated::mlir::BoolAttrbool attribute
tensor_type::mlir::Attributederived attribute

Results:

Result Description
tensor tensor of tf.dtype values

tf.RecvTPUEmbeddingActivations (TF::RecvTPUEmbeddingActivationsOp)

An op that receives embedding activations on the TPU.

The TPU system performs the embedding lookups and aggregations specified by the arguments to TPUEmbeddingEnqueue(Integer/Sparse/SparseTensor)Batch. The results of these aggregations are visible to the TensorFlow graph as the outputs of a RecvTPUEmbeddingActivations op. This op returns a list containing one Tensor of activations per table specified in the model. There can be at most one RecvTPUEmbeddingActivations op in the TPU graph.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
config::mlir::StringAttrstring attribute
num_outputs::mlir::Attributederived attribute

Results:

Result Description
outputs variadic of tensor of 32-bit float values

tf.ReduceDataset (TF::ReduceDatasetOp)

Reduces the input dataset to a singleton using a reduce function.

Traits: SameVariadicOperandSize

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
Tstate::mlir::ArrayAttrtype array attribute with at least 1 elements
Targuments::mlir::ArrayAttrtype array attribute with at least 0 elements
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
use_inter_op_parallelism::mlir::BoolAttrbool attribute

Operands:

Operand Description
input_dataset tensor of variant values
initial_state variadic of tensor of tf.dtype values
other_arguments variadic of tensor of tf.dtype values

Results:

Result Description
components variadic of tensor of tf.dtype values

tf.ReduceJoin (TF::ReduceJoinOp)

Joins a string Tensor across the given dimensions.

Computes the string join across dimensions in the given string Tensor of shape [\(d_0, d_1, ..., d_{n-1}\)]. Returns a new Tensor created by joining the input strings with the given separator (default: empty string). Negative indices are counted backwards from the end, with -1 being equivalent to n - 1. If indices are not specified, joins across all dimensions beginning from n - 1 through 0.

For example:

# tensor `a` is [["a", "b"], ["c", "d"]]
tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]]
tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]]
tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"]
tf.reduce_join(a, [0, 1]) ==> "acbd"
tf.reduce_join(a, [1, 0]) ==> "abcd"
tf.reduce_join(a, []) ==> [["a", "b"], ["c", "d"]]
tf.reduce_join(a) = tf.reduce_join(a, [1, 0]) ==> "abcd"

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
keep_dims::mlir::BoolAttrbool attribute
separator::mlir::StringAttrstring attribute

Operands:

Operand Description
inputs tensor of string values
reduction_indices tensor of 32-bit integer values

Results:

Result Description
output tensor of string values

tf.Relu (TF::ReluOp)

Computes rectified linear: max(features, 0).

See: https://en.wikipedia.org/wiki/Rectifier_(neural_networks)

Example usage:

tf.nn.relu([-2., 0., 3.]).numpy()
array([0., 0., 3.], dtype=float32)

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent, TF_LayoutAgnostic

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
features tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit quantized integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
activations tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 8-bit quantized integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.Relu6 (TF::Relu6Op)

Computes rectified linear 6: min(max(features, 0), 6).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
features tensor of integer or floating-point values

Results:

Result Description
activations tensor of integer or floating-point values

tf.Relu6Grad (TF::Relu6GradOp)

Computes rectified linear 6 gradients for a Relu6 operation.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
gradients tensor of integer or floating-point values
features tensor of integer or floating-point values

Results:

Result Description
backprops tensor of integer or floating-point values

tf.ReluGrad (TF::ReluGradOp)

Computes rectified linear gradients for a Relu operation.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
gradients tensor of integer or floating-point values
features tensor of integer or floating-point values

Results:

Result Description
backprops tensor of integer or floating-point values

tf.RemoteCall (TF::RemoteCallOp)

Runs function f on a remote device indicated by target.

Interfaces: CallOpInterface

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
target tensor of string values
args variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.RepeatDataset (TF::RepeatDatasetOp)

Creates a dataset that emits the outputs of input_dataset count times.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute

Operands:

Operand Description
input_dataset tensor of variant values
count tensor of 64-bit integer values

Results:

Result Description
handle tensor of variant values

tf.Reshape (TF::ReshapeOp)

Reshapes a tensor.

Given tensor, this operation returns a tensor that has the same values as tensor with shape shape.

If one component of 1-D tensor shape is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens into 1-D. At most one component of shape may be unknown.

The shape must be 1-D and the operation returns a tensor with shape shape filled with the values of tensor. In this case, the number of elements implied by shape must be the same as the number of elements in tensor.

It is an error if shape is not 1-D.

For example:

# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
# tensor 't' has shape [9]
reshape(t, [3, 3]) ==> [[1, 2, 3],
                        [4, 5, 6],
                        [7, 8, 9]]

# tensor 't' is [[[1, 1], [2, 2]],
#                [[3, 3], [4, 4]]]
# tensor 't' has shape [2, 2, 2]
reshape(t, [2, 4]) ==> [[1, 1, 2, 2],
                        [3, 3, 4, 4]]

# tensor 't' is [[[1, 1, 1],
#                 [2, 2, 2]],
#                [[3, 3, 3],
#                 [4, 4, 4]],
#                [[5, 5, 5],
#                 [6, 6, 6]]]
# tensor 't' has shape [3, 2, 3]
# pass '[-1]' to flatten 't'
reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]

# -1 can also be used to infer the shape

# -1 is inferred to be 9:
reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                         [4, 4, 4, 5, 5, 5, 6, 6, 6]]
# -1 is inferred to be 2:
reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                         [4, 4, 4, 5, 5, 5, 6, 6, 6]]
# -1 is inferred to be 3:
reshape(t, [ 2, -1, 3]) ==> [[[1, 1, 1],
                              [2, 2, 2],
                              [3, 3, 3]],
                             [[4, 4, 4],
                              [5, 5, 5],
                              [6, 6, 6]]]

# tensor 't' is [7]
# shape `[]` reshapes to a scalar
reshape(t, []) ==> 7

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tshape::mlir::Attributederived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values
shape tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.ResizeBilinear (TF::ResizeBilinearOp)

Resize images to size using bilinear interpolation.

Input images can be of different types but output images are always float.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
align_corners::mlir::BoolAttrbool attribute
half_pixel_centers::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
images tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 8-bit unsigned integer values
size tensor of 32-bit integer values

Results:

Result Description
resized_images tensor of 32-bit float values

tf.ResizeBilinearGrad (TF::ResizeBilinearGradOp)

Computes the gradient of bilinear interpolation.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
align_corners::mlir::BoolAttrbool attribute
half_pixel_centers::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
grads tensor of 32-bit float values
original_image tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.ResizeNearestNeighbor (TF::ResizeNearestNeighborOp)

Resize images to size using nearest neighbor interpolation.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
align_corners::mlir::BoolAttrbool attribute
half_pixel_centers::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
images tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 8-bit unsigned integer values
size tensor of 32-bit integer values

Results:

Result Description
resized_images tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 8-bit unsigned integer values

tf.ResizeNearestNeighborGrad (TF::ResizeNearestNeighborGradOp)

Computes the gradient of nearest neighbor interpolation.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
align_corners::mlir::BoolAttrbool attribute
half_pixel_centers::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
grads tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 8-bit integer or 8-bit unsigned integer values
size tensor of 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 8-bit integer or 8-bit unsigned integer values

tf.ResourceApplyAdadelta (TF::ResourceApplyAdadeltaOp)

Update '*var' according to the adadelta scheme.

accum = rho() * accum + (1 - rho()) * grad.square();
update = (update_accum + epsilon).sqrt() * (accum + epsilon()).rsqrt() * grad;
update_accum = rho() * update_accum + (1 - rho()) * update.square();
var -= update;
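
The rules above can be sketched as one scalar Python step (a hypothetical helper, not the TF kernel; scaling the final update by the lr operand is an assumption based on the op's operand list, since the summary formula omits it):

```python
import math

# Hypothetical scalar sketch of one Adadelta step; accum and accum_update
# are decayed averages of squared gradients and squared updates.
def adadelta_step(var, accum, accum_update, grad, lr=1.0, rho=0.95, eps=1e-6):
    accum = rho * accum + (1 - rho) * grad * grad
    update = math.sqrt(accum_update + eps) / math.sqrt(accum + eps) * grad
    accum_update = rho * accum_update + (1 - rho) * update * update
    var -= lr * update  # assumption: lr scales the step, as in the kernel
    return var, accum, accum_update
```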

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
accum tensor of resource values
accum_update tensor of resource values
lr tensor of number values
rho tensor of number values
epsilon tensor of number values
grad tensor of number values

tf.ResourceApplyAdagrad (TF::ResourceApplyAdagradOp)

Update '*var' according to the adagrad scheme.

accum += grad * grad
var -= lr * grad * (1 / sqrt(accum))
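
As a scalar Python sketch of the two rules above (a hypothetical helper; names mirror the op's operands):

```python
import math

# One scalar Adagrad step: accumulate squared gradients, then scale
# the step by 1 / sqrt(accum).
def adagrad_step(var, accum, lr, grad):
    accum += grad * grad
    var -= lr * grad / math.sqrt(accum)
    return var, accum

var, accum = adagrad_step(1.0, 0.0, 0.1, 2.0)  # accum -> 4.0, step is 0.1 * 2 / 2
```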

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
update_slots::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
accum tensor of resource values
lr tensor of number values
grad tensor of number values

tf.ResourceApplyAdagradDA (TF::ResourceApplyAdagradDAOp)

Update '*var' according to the proximal adagrad scheme.

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
gradient_accumulator tensor of resource values
gradient_squared_accumulator tensor of resource values
grad tensor of number values
lr tensor of number values
l1 tensor of number values
l2 tensor of number values
global_step tensor of 64-bit integer values

tf.ResourceApplyAdagradV2 (TF::ResourceApplyAdagradV2Op)

Update '*var' according to the adagrad scheme.

accum += grad * grad
var -= lr * grad * (1 / (sqrt(accum) + epsilon))

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
update_slots::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
accum tensor of resource values
lr tensor of number values
epsilon tensor of number values
grad tensor of number values

tf.ResourceApplyAdam (TF::ResourceApplyAdamOp)

Update '*var' according to the Adam algorithm.

\[\text{lr}_t := \mathrm{lr} \cdot \frac{\sqrt{1 - \beta_2^t} }{1 - \beta_1^t}\]

\[m_t := \beta_1 \cdot m_{t-1} + (1 - \beta_1) \cdot g\]

\[v_t := \beta_2 \cdot v_{t-1} + (1 - \beta_2) \cdot g^2\]

\[\text{var} := \begin{cases} \text{var} - (m_t \beta_1 + g \cdot (1 - \beta_1))\cdot\text{lr}_t/(\sqrt{v_t} + \epsilon), &\text{if use_nesterov}\\\\ \text{var} - m_t \cdot \text{lr}_t /(\sqrt{v_t} + \epsilon), &\text{otherwise} \end{cases}\]
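
The four equations above translate directly into a scalar Python sketch (a hypothetical helper, without the use_nesterov branch; beta1_power and beta2_power are beta1^t and beta2^t, as in the op's operands):

```python
import math

# One scalar Adam step following the equations above (use_nesterov = false).
def adam_step(var, m, v, grad, lr, beta1, beta2, eps, beta1_power, beta2_power):
    lr_t = lr * math.sqrt(1 - beta2_power) / (1 - beta1_power)
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    var -= lr_t * m / (math.sqrt(v) + eps)
    return var, m, v
```

At t = 1 the bias-corrected step for a unit gradient comes out close to lr itself.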

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
use_nesterov::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
m tensor of resource values
v tensor of resource values
beta1_power tensor of number values
beta2_power tensor of number values
lr tensor of number values
beta1 tensor of number values
beta2 tensor of number values
epsilon tensor of number values
grad tensor of number values

tf.ResourceApplyAdaMax (TF::ResourceApplyAdaMaxOp)

Update '*var' according to the AdaMax algorithm.

m_t <- beta1 * m_{t-1} + (1 - beta1) * g
v_t <- max(beta2 * v_{t-1}, abs(g))
variable <- variable - learning_rate / (1 - beta1^t) * m_t / (v_t + epsilon)
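
A scalar Python sketch of one AdaMax step (a hypothetical helper; the 1 / (1 - beta1^t) factor is the bias correction on the first moment):

```python
# One scalar AdaMax step; v tracks the exponentially weighted infinity norm.
def adamax_step(var, m, v, grad, lr, beta1, beta2, eps, beta1_power):
    m = beta1 * m + (1 - beta1) * grad
    v = max(beta2 * v, abs(grad))
    var -= lr / (1 - beta1_power) * m / (v + eps)
    return var, m, v
```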

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
m tensor of resource values
v tensor of resource values
beta1_power tensor of number values
lr tensor of number values
beta1 tensor of number values
beta2 tensor of number values
epsilon tensor of number values
grad tensor of number values

tf.ResourceApplyAddSign (TF::ResourceApplyAddSignOp)

Update '*var' according to the AddSign update.

m_t <- beta1 * m_{t-1} + (1 - beta1) * g
update <- (alpha + sign_decay * sign(g) * sign(m)) * g
variable <- variable - lr_t * update

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
m tensor of resource values
lr tensor of number values
alpha tensor of number values
sign_decay tensor of number values
beta tensor of number values
grad tensor of number values

tf.ResourceApplyCenteredRMSProp (TF::ResourceApplyCenteredRMSPropOp)

Update '*var' according to the centered RMSProp algorithm.

The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory.

Note that in the dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero.

mean_square = decay * mean_square + (1 - decay) * gradient ** 2
mean_grad = decay * mean_grad + (1 - decay) * gradient

Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)

mg <- rho * mg_{t-1} + (1 - rho) * grad
ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon)
var <- var - mom
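
The recurrences above can be sketched as one scalar Python step (a hypothetical helper, dense case):

```python
import math

# One scalar centered-RMSProp step: mg tracks the mean gradient, ms the
# mean squared gradient; ms - mg * mg estimates the centered second moment.
def centered_rmsprop_step(var, mg, ms, mom, grad, lr, rho, momentum, eps):
    mg = rho * mg + (1 - rho) * grad
    ms = rho * ms + (1 - rho) * grad * grad
    mom = momentum * mom + lr * grad / math.sqrt(ms - mg * mg + eps)
    var -= mom
    return var, mg, ms, mom
```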

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
mg tensor of resource values
ms tensor of resource values
mom tensor of resource values
lr tensor of number values
rho tensor of number values
momentum tensor of number values
epsilon tensor of number values
grad tensor of number values
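
The dense update above can be checked with a direct NumPy transcription (a sketch of the per-step math with a hypothetical helper name, not the TF kernel):

```python
import numpy as np

def apply_centered_rms_prop(var, mg, ms, mom, lr, rho, momentum, epsilon, grad):
    # mg <- rho * mg_{t-1} + (1 - rho) * grad
    mg = rho * mg + (1.0 - rho) * grad
    # ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
    ms = rho * ms + (1.0 - rho) * grad * grad
    # mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon)
    mom = momentum * mom + lr * grad / np.sqrt(ms - mg * mg + epsilon)
    # var <- var - mom
    var = var - mom
    return var, mg, ms, mom
```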

tf.ResourceApplyFtrl (TF::ResourceApplyFtrlOp)

Update '*var' according to the Ftrl-proximal scheme.

    accum_new = accum + grad * grad
    linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
    quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
    var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
    accum = accum_new

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
multiply_linear_by_lr::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
accum tensor of resource values
linear tensor of resource values
grad tensor of number values
lr tensor of number values
l1 tensor of number values
l2 tensor of number values
lr_power tensor of number values
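
The FTRL-proximal scheme above can be transcribed line by line; the following NumPy sketch (hypothetical helper, not the TF kernel, and ignoring the multiply_linear_by_lr variant) follows the formulas as written:

```python
import numpy as np

def apply_ftrl(var, accum, linear, grad, lr, l1, l2, lr_power):
    # accum_new = accum + grad * grad
    accum_new = accum + grad * grad
    # linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
    linear = linear + grad - (accum_new ** -lr_power - accum ** -lr_power) / lr * var
    # quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
    quadratic = 1.0 / (accum_new ** lr_power * lr) + 2.0 * l2
    # var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
    var = np.where(np.abs(linear) > l1,
                   (np.sign(linear) * l1 - linear) / quadratic, 0.0)
    return var, accum_new, linear
```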

tf.ResourceApplyFtrlV2 (TF::ResourceApplyFtrlV2Op)

Update '*var' according to the Ftrl-proximal scheme.

    accum_new = accum + grad * grad
    grad_with_shrinkage = grad + 2 * l2_shrinkage * var
    linear += grad_with_shrinkage + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
    quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
    var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
    accum = accum_new

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
multiply_linear_by_lr::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
accum tensor of resource values
linear tensor of resource values
grad tensor of number values
lr tensor of number values
l1 tensor of number values
l2 tensor of number values
l2_shrinkage tensor of number values
lr_power tensor of number values

tf.ResourceApplyGradientDescent (TF::ResourceApplyGradientDescentOp)

Update '*var' by subtracting 'alpha' * 'delta' from it.

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
alpha tensor of number values
delta tensor of number values

tf.ResourceApplyKerasMomentum (TF::ResourceApplyKerasMomentumOp)

Update '*var' according to the momentum scheme.

Set use_nesterov = True if you want to use Nesterov momentum.

    accum = accum * momentum - lr * grad
    var += accum

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
use_nesterov::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
accum tensor of resource values
lr tensor of number values
grad tensor of number values
momentum tensor of number values

tf.ResourceApplyMomentum (TF::ResourceApplyMomentumOp)

Update '*var' according to the momentum scheme.

Set use_nesterov = True if you want to use Nesterov momentum.

    accum = accum * momentum + grad
    var -= lr * accum

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
use_nesterov::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
accum tensor of resource values
lr tensor of number values
grad tensor of number values
momentum tensor of number values
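
The momentum scheme above is a two-line update; this NumPy sketch (hypothetical helper, not the TF kernel) also includes a Nesterov branch in the look-ahead form commonly used, which is an assumption here since the summary above does not spell it out:

```python
import numpy as np

def apply_momentum(var, accum, lr, grad, momentum, use_nesterov=False):
    # accum = accum * momentum + grad
    accum = accum * momentum + grad
    if use_nesterov:
        # assumed Nesterov form: step along grad plus momentum * accum
        var = var - lr * (grad + momentum * accum)
    else:
        # var -= lr * accum
        var = var - lr * accum
    return var, accum
```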

tf.ResourceApplyPowerSign (TF::ResourceApplyPowerSignOp)

Update '*var' according to the PowerSign update.

    m_t <- beta1 * m_{t-1} + (1 - beta1) * g
    update <- exp(logbase * sign_decay * sign(g) * sign(m_t)) * g
    variable <- variable - lr_t * update

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
m tensor of resource values
lr tensor of number values
logbase tensor of number values
sign_decay tensor of number values
beta tensor of number values
grad tensor of number values
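
The PowerSign update above differs from AddSign only in the multiplier, which is exponential rather than linear in the sign agreement. A NumPy sketch (hypothetical helper, not the TF kernel), with `beta` playing the role of beta1:

```python
import numpy as np

def apply_power_sign(var, m, lr, logbase, sign_decay, beta, grad):
    # m_t <- beta1 * m_{t-1} + (1 - beta1) * g
    m = beta * m + (1.0 - beta) * grad
    # update <- exp(logbase * sign_decay * sign(g) * sign(m_t)) * g
    update = np.exp(logbase * sign_decay * np.sign(grad) * np.sign(m)) * grad
    # variable <- variable - lr_t * update
    var = var - lr * update
    return var, m
```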

tf.ResourceApplyProximalAdagrad (TF::ResourceApplyProximalAdagradOp)

Update 'var' and 'accum' according to FOBOS with Adagrad learning rate.

    accum += grad * grad
    prox_v = var - lr * grad * (1 / sqrt(accum))
    var = sign(prox_v) / (1 + lr * l2) * max{|prox_v| - lr * l1, 0}

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
accum tensor of resource values
lr tensor of number values
l1 tensor of number values
l2 tensor of number values
grad tensor of number values
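
The FOBOS-with-Adagrad update above combines an Adagrad step with a soft-threshold projection. A NumPy sketch (hypothetical helper, not the TF kernel):

```python
import numpy as np

def apply_proximal_adagrad(var, accum, lr, l1, l2, grad):
    # accum += grad * grad
    accum = accum + grad * grad
    # prox_v = var - lr * grad * (1 / sqrt(accum))
    prox_v = var - lr * grad / np.sqrt(accum)
    # var = sign(prox_v)/(1+lr*l2) * max{|prox_v|-lr*l1, 0}
    var = np.sign(prox_v) / (1.0 + lr * l2) * np.maximum(np.abs(prox_v) - lr * l1, 0.0)
    return var, accum
```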

tf.ResourceApplyProximalGradientDescent (TF::ResourceApplyProximalGradientDescentOp)

Update '*var' according to the FOBOS algorithm with a fixed learning rate.

    prox_v = var - alpha * delta
    var = sign(prox_v) / (1 + alpha * l2) * max{|prox_v| - alpha * l1, 0}

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
alpha tensor of number values
l1 tensor of number values
l2 tensor of number values
delta tensor of number values

tf.ResourceApplyRMSProp (TF::ResourceApplyRMSPropOp)

Update '*var' according to the RMSProp algorithm.

Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in the sparse implementation, ms and mom will not update in iterations during which the grad is zero.

    mean_square = decay * mean_square + (1 - decay) * gradient ** 2
    Delta = learning_rate * gradient / sqrt(mean_square + epsilon)

    ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
    mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
    var <- var - mom

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
ms tensor of resource values
mom tensor of resource values
lr tensor of number values
rho tensor of number values
momentum tensor of number values
epsilon tensor of number values
grad tensor of number values
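
The dense RMSProp update above, as a NumPy sketch (hypothetical helper, not the TF kernel):

```python
import numpy as np

def apply_rms_prop(var, ms, mom, lr, rho, momentum, epsilon, grad):
    # ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
    ms = rho * ms + (1.0 - rho) * grad * grad
    # mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
    mom = momentum * mom + lr * grad / np.sqrt(ms + epsilon)
    # var <- var - mom
    var = var - mom
    return var, ms, mom
```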

tf.ResourceGather (TF::ResourceGatherOp)

Gather slices from the variable pointed to by resource according to indices.

indices must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape indices.shape + params.shape[1:] where:

    # Scalar indices
    output[:, ..., :] = params[indices, :, ... :]

    # Vector indices
    output[i, :, ..., :] = params[indices[i], :, ... :]

    # Higher rank indices
    output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]

Attributes:

AttributeMLIR TypeDescription
batch_dims::mlir::IntegerAttr64-bit signless integer attribute
validate_indices::mlir::BoolAttrbool attribute
Tindices::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values
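
The shape rule indices.shape + params.shape[1:] matches NumPy integer-array indexing along axis 0, which makes the semantics easy to check. Here `params` is a hypothetical stand-in for the variable's value:

```python
import numpy as np

params = np.array([[1, 2], [3, 4], [5, 6]])   # stand-in for the variable's value
indices = np.array([[2, 0], [1, 1]])          # rank-2 integer indices

# output[i, j, :] = params[indices[i, j], :]
out = params[indices]

# out.shape == indices.shape + params.shape[1:] == (2, 2, 2)
```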

tf.ResourceGatherNd (TF::ResourceGatherNdOp)

GatherNd on a resource.

This op reads the variable referenced by the first argument, and then performs a GatherNd operation on it.

Attributes:

AttributeMLIR TypeDescription
Tindices::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.ResourceScatterAdd (TF::ResourceScatterAddOp)

Adds sparse updates to the variable referenced by resource.

This operation computes

# Scalar indices
ref[indices, ...] += updates[...]

# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add.

Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].

Attributes:

AttributeMLIR TypeDescription
Tindices::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
indices tensor of 32/64-bit signed integer values
updates tensor of number values
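
The duplicate-index behavior described above can be reproduced with NumPy's unbuffered np.add.at (a sketch of the semantics, not the kernel):

```python
import numpy as np

ref = np.zeros(4)                     # stand-in for the resource variable
indices = np.array([1, 1, 3])         # index 1 appears twice
updates = np.array([10.0, 5.0, 2.0])

# np.add.at is unbuffered, so both contributions to index 1 accumulate,
# matching ref[indices[i], ...] += updates[i, ...] above.
np.add.at(ref, indices, updates)
# ref is now [0., 15., 0., 2.]
```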

tf.ResourceScatterDiv (TF::ResourceScatterDivOp)

Divides sparse updates into the variable referenced by resource.

This operation computes

# Scalar indices
ref[indices, ...] /= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] /= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] /= updates[i, ..., j, ...]

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply.

Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].

Attributes:

AttributeMLIR TypeDescription
Tindices::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
indices tensor of 32/64-bit signed integer values
updates tensor of number values

tf.ResourceScatterMax (TF::ResourceScatterMaxOp)

Reduces sparse updates into the variable referenced by resource using the max operation.

This operation computes

# Scalar indices
ref[indices, ...] = max(ref[indices, ...], updates[...])

# Vector indices (for each i)
ref[indices[i], ...] = max(ref[indices[i], ...], updates[i, ...])

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = max(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions are combined.

Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].

Attributes:

AttributeMLIR TypeDescription
Tindices::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
indices tensor of 32/64-bit signed integer values
updates tensor of number values

tf.ResourceScatterMin (TF::ResourceScatterMinOp)

Reduces sparse updates into the variable referenced by resource using the min operation.

This operation computes

# Scalar indices
ref[indices, ...] = min(ref[indices, ...], updates[...])

# Vector indices (for each i)
ref[indices[i], ...] = min(ref[indices[i], ...], updates[i, ...])

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = min(ref[indices[i, ..., j], ...], updates[i, ..., j, ...])

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions are combined.

Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].

Attributes:

AttributeMLIR TypeDescription
Tindices::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
indices tensor of 32/64-bit signed integer values
updates tensor of number values

tf.ResourceScatterMul (TF::ResourceScatterMulOp)

Multiplies sparse updates into the variable referenced by resource.

This operation computes

# Scalar indices
ref[indices, ...] *= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] *= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] *= updates[i, ..., j, ...]

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions multiply.

Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].

Attributes:

AttributeMLIR TypeDescription
Tindices::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
indices tensor of 32/64-bit signed integer values
updates tensor of number values

tf.ResourceScatterNdAdd (TF::ResourceScatterNdAddOp)

Applies sparse addition to individual values or slices in a Variable.

ref is a Tensor with rank P and indices is a Tensor of rank Q.

indices must be an integer tensor containing indices into ref. It must have shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.

The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.

updates is a Tensor of rank Q-1+P-K with shape:

[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]

For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that addition would look like this:

ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], use_resource=True)
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = tf.scatter_nd_add(ref, indices, updates)
with tf.Session() as sess:
  print(sess.run(add))

The resulting update to ref would look like this:

[1, 13, 3, 14, 14, 6, 7, 20]

See tf.scatter_nd for more details about how to make updates to slices.

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
ref tensor of resource values
indices tensor of 32/64-bit signed integer values
updates tensor of tf.dtype values
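
The documented example can be checked with NumPy's np.add.at; since K = 1 here, the last dimension of indices selects along the first axis of ref (a sketch of the semantics, not the kernel):

```python
import numpy as np

ref = np.array([1, 2, 3, 4, 5, 6, 7, 8])
indices = np.array([[4], [3], [1], [7]])
updates = np.array([9, 10, 11, 12])

# Each indices row [k] addresses element ref[k]; np.add.at accumulates.
np.add.at(ref, indices[:, 0], updates)
# ref is now [1, 13, 3, 14, 14, 6, 7, 20], matching the result shown above.
```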

tf.ResourceScatterNdSub (TF::ResourceScatterNdSubOp)

Applies sparse subtraction to individual values or slices in a Variable.

ref is a Tensor with rank P and indices is a Tensor of rank Q.

indices must be an integer tensor containing indices into ref. It must have shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.

The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.

updates is a Tensor of rank Q-1+P-K with shape:

[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]]

For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that subtraction would look like this:

ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8], use_resource=True)
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
sub = tf.scatter_nd_sub(ref, indices, updates)
with tf.Session() as sess:
  print(sess.run(sub))

The resulting update to ref would look like this:

[1, -9, 3, -6, -4, 6, 7, -4]

See tf.scatter_nd for more details about how to make updates to slices.

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
ref tensor of resource values
indices tensor of 32/64-bit signed integer values
updates tensor of tf.dtype values

tf.ResourceScatterNdUpdate (TF::ResourceScatterNdUpdateOp)

Applies sparse updates to individual values or slices within a given variable according to indices.

ref is a Tensor with rank P and indices is a Tensor of rank Q.

indices must be an integer tensor containing indices into ref. It must have shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.

The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.

updates is a Tensor of rank Q-1+P-K with shape:

[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].

For example, say we want to update 4 scattered elements in a rank-1 tensor with 8 elements. In Python, that update would look like this:

    ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
    indices = tf.constant([[4], [3], [1] ,[7]])
    updates = tf.constant([9, 10, 11, 12])
    update = tf.scatter_nd_update(ref, indices, updates)
    with tf.Session() as sess:
      print(sess.run(update))

The resulting update to ref would look like this:

[1, 11, 3, 10, 9, 6, 7, 12]

See tf.scatter_nd for more details about how to make updates to slices.

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
ref tensor of resource values
indices tensor of 32/64-bit signed integer values
updates tensor of tf.dtype values

tf.ResourceScatterSub (TF::ResourceScatterSubOp)

Subtracts sparse updates from the variable referenced by resource.

This operation computes

# Scalar indices
ref[indices, ...] -= updates[...]

# Vector indices (for each i)
ref[indices[i], ...] -= updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] -= updates[i, ..., j, ...]

Duplicate entries are handled correctly: if multiple indices reference the same location, their contributions add.

Requires updates.shape = indices.shape + ref.shape[1:] or updates.shape = [].

Attributes:

AttributeMLIR TypeDescription
Tindices::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
indices tensor of 32/64-bit signed integer values
updates tensor of number values

tf.ResourceScatterUpdate (TF::ResourceScatterUpdateOp)

Assigns sparse updates to the variable referenced by resource.

This operation computes

# Scalar indices
ref[indices, ...] = updates[...]

# Vector indices (for each i)
ref[indices[i], ...] = updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] = updates[i, ..., j, ...]

Attributes:

AttributeMLIR TypeDescription
Tindices::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
indices tensor of 32/64-bit signed integer values
updates tensor of tf.dtype values

tf.ResourceSparseApplyAdagrad (TF::ResourceSparseApplyAdagradOp)

Update relevant entries in 'var' and 'accum' according to the adagrad scheme.

That is, for rows for which we have grad, we update var and accum as follows:

    accum += grad * grad
    var -= lr * grad * (1 / sqrt(accum))

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
update_slots::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
accum tensor of resource values
lr tensor of number values
grad tensor of number values
indices tensor of 32/64-bit signed integer values
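
A NumPy sketch of the row-sparse update above (hypothetical helper, not the TF kernel; for simplicity it assumes the indices are unique, since fancy-index assignment, unlike the kernel, does not accumulate duplicates):

```python
import numpy as np

def sparse_apply_adagrad(var, accum, lr, grad, indices):
    # Only the rows listed in `indices` are read and written.
    # accum += grad * grad
    accum[indices] += grad * grad
    # var -= lr * grad * (1 / sqrt(accum))
    var[indices] -= lr * grad / np.sqrt(accum[indices])
    return var, accum
```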

tf.ResourceSparseApplyAdagradV2 (TF::ResourceSparseApplyAdagradV2Op)

Update relevant entries in 'var' and 'accum' according to the adagrad scheme.

That is, for rows for which we have grad, we update var and accum as follows:

    accum += grad * grad
    var -= lr * grad * (1 / (sqrt(accum) + epsilon))

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
update_slots::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
accum tensor of resource values
lr tensor of number values
epsilon tensor of number values
grad tensor of number values
indices tensor of 32/64-bit signed integer values

tf.ResourceSparseApplyFtrl (TF::ResourceSparseApplyFtrlOp)

Update relevant entries in '*var' according to the Ftrl-proximal scheme.

That is, for rows for which we have grad, we update var, accum, and linear as follows:

    accum_new = accum + grad * grad
    linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
    quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
    var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
    accum = accum_new

Attributes:

AttributeMLIR TypeDescription
use_locking::mlir::BoolAttrbool attribute
multiply_linear_by_lr::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
var tensor of resource values
accum tensor of resource values
linear tensor of resource values
grad tensor of number values
indices tensor of 32/64-bit signed integer values
lr tensor of number values
l1 tensor of number values
l2 tensor of number values
lr_power tensor of number values

tf.ResourceStridedSliceAssign (TF::ResourceStridedSliceAssignOp)

Assign value to the sliced l-value reference of ref.

The values of value are assigned to the positions in the variable ref that are selected by the slice parameters. The slice parameters begin, end, strides, etc. work exactly as in StridedSlice.

NOTE: this op currently does not support broadcasting, so value's shape must exactly match the shape produced by the slice of ref.

Attributes:

AttributeMLIR TypeDescription
begin_mask::mlir::IntegerAttr64-bit signless integer attribute
end_mask::mlir::IntegerAttr64-bit signless integer attribute
ellipsis_mask::mlir::IntegerAttr64-bit signless integer attribute
new_axis_mask::mlir::IntegerAttr64-bit signless integer attribute
shrink_axis_mask::mlir::IntegerAttr64-bit signless integer attribute
Index::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
ref tensor of resource values
begin tensor of 32/64-bit signed integer values
end tensor of 32/64-bit signed integer values
strides tensor of 32/64-bit signed integer values
value tensor of tf.dtype values

tf.Restore (TF::RestoreOp)

Restores a tensor from checkpoint files.

Reads a tensor stored in one or several files. If there are several files (for instance because a tensor was saved as slices), file_pattern may contain wildcard symbols (* and ?) in the filename portion only, not in the directory portion.

If a file_pattern matches several files, preferred_shard can be used to hint in which file the requested tensor is likely to be found. This op will first open the file at index preferred_shard in the list of matching files and try to restore tensors from that file. Only if some tensors or tensor slices are not found in that first file, then the Op opens all the files. Setting preferred_shard to match the value passed as the shard input of a matching Save Op may speed up Restore. This attribute only affects performance, not correctness. The default value -1 means files are processed in order.

See also RestoreSlice.

Attributes:

AttributeMLIR TypeDescription
preferred_shard::mlir::IntegerAttr64-bit signless integer attribute
dt::mlir::Attributederived attribute

Operands:

Operand Description
file_pattern tensor of string values
tensor_name tensor of string values

Results:

Result Description
tensor tensor of tf.dtype values

tf.RestoreV2 (TF::RestoreV2Op)

Restores tensors from a V2 checkpoint.

For backward compatibility with the V1 format, this Op currently allows restoring from a V1 checkpoint as well:

  • This Op first attempts to find the V2 index file pointed to by "prefix", and if found proceed to read it as a V2 checkpoint;
  • Otherwise the V1 read path is invoked. Relying on this behavior is not recommended, as the ability to fall back to read V1 might be deprecated and eventually removed.

By default, restores the named tensors in full. If the caller wishes to restore specific slices of stored tensors, "shape_and_slices" should be non-empty strings and correspondingly well-formed.

Callers must ensure all the named tensors are indeed stored in the checkpoint.

Attributes:

AttributeMLIR TypeDescription
dtypes::mlir::Attributederived attribute

Operands:

Operand Description
prefix tensor of string values
tensor_names tensor of string values
shape_and_slices tensor of string values

Results:

Result Description
tensors variadic of tensor of tf.dtype values

tf.RetrieveTPUEmbeddingAdadeltaParameters (TF::RetrieveTPUEmbeddingAdadeltaParametersOp)

Retrieve Adadelta embedding parameters.

An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Results:

Result Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
updates tensor of 32-bit float values

tf.RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebug (TF::RetrieveTPUEmbeddingAdadeltaParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Results:

Result Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
updates tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.RetrieveTPUEmbeddingAdagradParameters (TF::RetrieveTPUEmbeddingAdagradParametersOp)

Retrieve Adagrad embedding parameters.

An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Results:

Result Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values

tf.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug (TF::RetrieveTPUEmbeddingAdagradParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Results:

Result Description
parameters tensor of 32-bit float values
accumulators tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.RetrieveTPUEmbeddingADAMParameters (TF::RetrieveTPUEmbeddingADAMParametersOp)

Retrieve ADAM embedding parameters.

An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Results:

Result Description
parameters tensor of 32-bit float values
momenta tensor of 32-bit float values
velocities tensor of 32-bit float values

tf.RetrieveTPUEmbeddingADAMParametersGradAccumDebug (TF::RetrieveTPUEmbeddingADAMParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Results:

Result Description
parameters tensor of 32-bit float values
momenta tensor of 32-bit float values
velocities tensor of 32-bit float values
gradient_accumulators tensor of 32-bit float values

tf.RetrieveTPUEmbeddingCenteredRMSPropParameters (TF::RetrieveTPUEmbeddingCenteredRMSPropParametersOp)

Retrieve centered RMSProp embedding parameters.

An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
table_id::mlir::IntegerAttr64-bit signless integer attribute
table_name::mlir::StringAttrstring attribute
num_shards::mlir::IntegerAttr64-bit signless integer attribute
shard_id::mlir::IntegerAttr64-bit signless integer attribute
config::mlir::StringAttrstring attribute

Results:

Result Description
parameters tensor of 32-bit float values
ms tensor of 32-bit float values
mom tensor of 32-bit float values
mg tensor of 32-bit float values

tf.RetrieveTPUEmbeddingFTRLParameters (TF::RetrieveTPUEmbeddingFTRLParametersOp)

Retrieve FTRL embedding parameters.

An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| accumulators | tensor of 32-bit float values |
| linears | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingFTRLParametersGradAccumDebug (TF::RetrieveTPUEmbeddingFTRLParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| accumulators | tensor of 32-bit float values |
| linears | tensor of 32-bit float values |
| gradient_accumulators | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingMDLAdagradLightParameters (TF::RetrieveTPUEmbeddingMDLAdagradLightParametersOp)

Retrieve MDL Adagrad Light embedding parameters.

An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| accumulators | tensor of 32-bit float values |
| weights | tensor of 32-bit float values |
| benefits | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingMomentumParameters (TF::RetrieveTPUEmbeddingMomentumParametersOp)

Retrieve Momentum embedding parameters.

An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| momenta | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingMomentumParametersGradAccumDebug (TF::RetrieveTPUEmbeddingMomentumParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| momenta | tensor of 32-bit float values |
| gradient_accumulators | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingProximalAdagradParameters (TF::RetrieveTPUEmbeddingProximalAdagradParametersOp)

Retrieve proximal Adagrad embedding parameters.

An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| accumulators | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebug (TF::RetrieveTPUEmbeddingProximalAdagradParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| accumulators | tensor of 32-bit float values |
| gradient_accumulators | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingProximalYogiParameters (TF::RetrieveTPUEmbeddingProximalYogiParametersOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| v | tensor of 32-bit float values |
| m | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingProximalYogiParametersGradAccumDebug (TF::RetrieveTPUEmbeddingProximalYogiParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| v | tensor of 32-bit float values |
| m | tensor of 32-bit float values |
| gradient_accumulators | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingRMSPropParameters (TF::RetrieveTPUEmbeddingRMSPropParametersOp)

Retrieve RMSProp embedding parameters.

An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| ms | tensor of 32-bit float values |
| mom | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingRMSPropParametersGradAccumDebug (TF::RetrieveTPUEmbeddingRMSPropParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| ms | tensor of 32-bit float values |
| mom | tensor of 32-bit float values |
| gradient_accumulators | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingStochasticGradientDescentParameters (TF::RetrieveTPUEmbeddingStochasticGradientDescentParametersOp)

Retrieve SGD embedding parameters.

An op that retrieves optimization parameters from embedding to host memory. Must be preceded by a ConfigureTPUEmbeddingHost op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |

tf.RetrieveTPUEmbeddingStochasticGradientDescentParametersGradAccumDebug (TF::RetrieveTPUEmbeddingStochasticGradientDescentParametersGradAccumDebugOp)

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| table_name | ::mlir::StringAttr | string attribute |
| num_shards | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| shard_id | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| config | ::mlir::StringAttr | string attribute |

Results:

| Result | Description |
| --- | --- |
| parameters | tensor of 32-bit float values |
| gradient_accumulators | tensor of 32-bit float values |

tf.Reverse (TF::ReverseOp)

Reverses specific dimensions of a tensor.

Given a tensor, and a bool tensor dims representing the dimensions of tensor, this operation reverses each dimension i of tensor where dims[i] is True.

tensor can have up to 8 dimensions. The number of dimensions of tensor must equal the number of elements in dims. In other words:

rank(tensor) = size(dims)

For example:

# tensor 't' is [[[[ 0,  1,  2,  3],
#                  [ 4,  5,  6,  7],
#                  [ 8,  9, 10, 11]],
#                 [[12, 13, 14, 15],
#                  [16, 17, 18, 19],
#                  [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]

# 'dims' is [False, False, False, True]
reverse(t, dims) ==> [[[[ 3,  2,  1,  0],
                        [ 7,  6,  5,  4],
                        [11, 10,  9,  8]],
                       [[15, 14, 13, 12],
                        [19, 18, 17, 16],
                        [23, 22, 21, 20]]]]

# 'dims' is [False, True, False, False]
reverse(t, dims) ==> [[[[12, 13, 14, 15],
                        [16, 17, 18, 19],
                        [20, 21, 22, 23]],
                       [[ 0,  1,  2,  3],
                        [ 4,  5,  6,  7],
                        [ 8,  9, 10, 11]]]]

# 'dims' is [False, False, True, False]
reverse(t, dims) ==> [[[[ 8,  9, 10, 11],
                        [ 4,  5,  6,  7],
                        [ 0,  1,  2,  3]],
                       [[20, 21, 22, 23],
                        [16, 17, 18, 19],
                        [12, 13, 14, 15]]]]
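The examples above can be checked with a small NumPy sketch; np.flip stands in for the op here purely to illustrate the result, with the boolean dims vector converted to a set of axis indices:

```python
import numpy as np

# The tensor 't' from the example above, shape [1, 2, 3, 4].
t = np.arange(24).reshape(1, 2, 3, 4)

# dims = [False, False, False, True] reverses only the innermost dimension.
dims = np.array([False, False, False, True])
r = np.flip(t, axis=tuple(np.where(dims)[0]))
assert (r[0, 0, 0] == [3, 2, 1, 0]).all()
```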

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| T | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| tensor | tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or string or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values |
| dims | tensor of bool values |

Results:

| Result | Description |
| --- | --- |
| output | tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or string or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values |

tf.ReverseSequence (TF::ReverseSequenceOp)

Reverses variable length slices.

This op first slices input along the dimension batch_dim, and for each slice i, reverses the first seq_lengths[i] elements along the dimension seq_dim.

The elements of seq_lengths must obey seq_lengths[i] <= input.dims[seq_dim], and seq_lengths must be a vector of length input.dims[batch_dim].

The output slice i along dimension batch_dim is then given by input slice i, with the first seq_lengths[i] slices along dimension seq_dim reversed.

For example:

# Given this:
batch_dim = 0
seq_dim = 1
input.dims = (4, 8, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]
output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]
output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]
output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]

# while entries past seq_lens are copied through:
output[0, 7:, :, ...] = input[0, 7:, :, ...]
output[1, 2:, :, ...] = input[1, 2:, :, ...]
output[2, 3:, :, ...] = input[2, 3:, :, ...]
output[3, 5:, :, ...] = input[3, 5:, :, ...]

In contrast, if:

# Given this:
batch_dim = 2
seq_dim = 0
input.dims = (8, ?, 4, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...]
output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...]
output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...]
output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...]

# while entries past seq_lens are copied through:
output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...]
output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...]
output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...]
output[5:, :, 3, :, ...] = input[5:, :, 3, :, ...]
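The first configuration (batch_dim = 0, seq_dim = 1) can be sketched in NumPy; this is an illustrative re-implementation of the semantics, not the op itself:

```python
import numpy as np

def reverse_sequence(x, seq_lengths):
    # Batch along axis 0, sequence along axis 1 (batch_dim=0, seq_dim=1).
    out = x.copy()
    for i, n in enumerate(seq_lengths):
        out[i, :n] = x[i, :n][::-1]   # reverse the first seq_lengths[i] entries
    return out                        # entries past seq_lengths[i] are copied

x = np.arange(8).reshape(2, 4)        # [[0 1 2 3], [4 5 6 7]]
y = reverse_sequence(x, [3, 2])
assert (y == [[2, 1, 0, 3], [5, 4, 6, 7]]).all()
```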

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| seq_dim | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| batch_dim | ::mlir::IntegerAttr | 64-bit signless integer attribute |
| T | ::mlir::Attribute | derived attribute |
| Tlen | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| input | tensor of tf.dtype values |
| seq_lengths | tensor of 32/64-bit signed integer values |

Results:

| Result | Description |
| --- | --- |
| output | tensor of tf.dtype values |

tf.ReverseV2 (TF::ReverseV2Op)

Reverses specific dimensions of a tensor.

Given a tensor and an int32 tensor axis representing the set of dimensions of tensor to reverse, this operation reverses each dimension i for which there exists j such that axis[j] == i.

tensor can have up to 8 dimensions. axis may contain zero or more entries. If an index is specified more than once, an InvalidArgument error is raised.

For example:

# tensor 't' is [[[[ 0,  1,  2,  3],
#                  [ 4,  5,  6,  7],
#                  [ 8,  9, 10, 11]],
#                 [[12, 13, 14, 15],
#                  [16, 17, 18, 19],
#                  [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]

# 'dims' is [3] or 'dims' is [-1]
reverse(t, dims) ==> [[[[ 3,  2,  1,  0],
                        [ 7,  6,  5,  4],
                        [11, 10,  9,  8]],
                       [[15, 14, 13, 12],
                        [19, 18, 17, 16],
                        [23, 22, 21, 20]]]]

# 'dims' is '[1]' (or 'dims' is '[-3]')
reverse(t, dims) ==> [[[[12, 13, 14, 15],
                        [16, 17, 18, 19],
                        [20, 21, 22, 23]],
                       [[ 0,  1,  2,  3],
                        [ 4,  5,  6,  7],
                        [ 8,  9, 10, 11]]]]

# 'dims' is '[2]' (or 'dims' is '[-2]')
reverse(t, dims) ==> [[[[ 8,  9, 10, 11],
                        [ 4,  5,  6,  7],
                        [ 0,  1,  2,  3]],
                       [[20, 21, 22, 23],
                        [16, 17, 18, 19],
                        [12, 13, 14, 15]]]]
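The same results can be reproduced with NumPy's np.flip, which takes axis indices directly; this sketch only illustrates the semantics above:

```python
import numpy as np

t = np.arange(24).reshape(1, 2, 3, 4)

# axis = [3] (or [-1]) reverses the innermost dimension, as in the first example.
r = np.flip(t, axis=3)
assert (r[0, 0, 0] == [3, 2, 1, 0]).all()

# axis = [1] (or [-3]) swaps the two inner 3x4 blocks, as in the second example.
r = np.flip(t, axis=1)
assert (r[0, 0, 0] == [12, 13, 14, 15]).all()
```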

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| T | ::mlir::Attribute | derived attribute |
| Tidx | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| tensor | tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or string or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values |
| axis | tensor of 32/64-bit signed integer values |

Results:

| Result | Description |
| --- | --- |
| output | tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or string or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values |

tf.RFFT (TF::RFFTOp)

Real-valued fast Fourier transform.

Computes the 1-dimensional discrete Fourier transform of a real-valued signal over the inner-most dimension of input.

Since the DFT of a real signal is Hermitian-symmetric, RFFT only returns the fft_length / 2 + 1 unique components of the FFT: the zero-frequency term, followed by the fft_length / 2 positive-frequency terms.

Along the axis RFFT is computed on, if fft_length is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.
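The output-size and crop/pad behavior can be seen with NumPy's np.fft.rfft, which follows the same conventions; this is an illustration, not the op itself:

```python
import numpy as np

x = np.arange(8.0)

# fft_length == len(x): output has fft_length // 2 + 1 = 5 unique components.
assert np.fft.rfft(x, n=8).shape == (5,)

# fft_length smaller than the input: the signal is cropped to 4 samples first.
assert np.allclose(np.fft.rfft(x, n=4), np.fft.rfft(x[:4]))

# fft_length larger than the input: the signal is zero-padded to 16 samples.
assert np.allclose(np.fft.rfft(x, n=16), np.fft.rfft(np.pad(x, (0, 8))))
```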

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| Treal | ::mlir::Attribute | derived attribute |
| Tcomplex | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| input | tensor of 32/64-bit float values |
| fft_length | tensor of 32-bit integer values |

Results:

| Result | Description |
| --- | --- |
| output | tensor of 128-bit complex or 64-bit complex values |

tf.RFFT2D (TF::RFFT2DOp)

2D real-valued fast Fourier transform.

Computes the 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of input.

Since the DFT of a real signal is Hermitian-symmetric, RFFT2D only returns the fft_length / 2 + 1 unique components of the FFT for the inner-most dimension of output: the zero-frequency term, followed by the fft_length / 2 positive-frequency terms.

Along each axis RFFT2D is computed on, if fft_length is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| Treal | ::mlir::Attribute | derived attribute |
| Tcomplex | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| input | tensor of 32/64-bit float values |
| fft_length | tensor of 32-bit integer values |

Results:

| Result | Description |
| --- | --- |
| output | tensor of 128-bit complex or 64-bit complex values |

tf.RFFT3D (TF::RFFT3DOp)

3D real-valued fast Fourier transform.

Computes the 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of input.

Since the DFT of a real signal is Hermitian-symmetric, RFFT3D only returns the fft_length / 2 + 1 unique components of the FFT for the inner-most dimension of output: the zero-frequency term, followed by the fft_length / 2 positive-frequency terms.

Along each axis RFFT3D is computed on, if fft_length is smaller than the corresponding dimension of input, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| Treal | ::mlir::Attribute | derived attribute |
| Tcomplex | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| input | tensor of 32/64-bit float values |
| fft_length | tensor of 32-bit integer values |

Results:

| Result | Description |
| --- | --- |
| output | tensor of 128-bit complex or 64-bit complex values |

tf.RGBToHSV (TF::RGBToHSVOp)

Converts one or more images from RGB to HSV.

Outputs a tensor of the same shape as the images tensor, containing the HSV value of the pixels. The output is only well defined if the values in images are in [0,1].

output[..., 0] contains hue, output[..., 1] contains saturation, and output[..., 2] contains value. All HSV values are in [0,1]. A hue of 0 corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue.

Usage Example:

>>> blue_image = tf.stack([
...     tf.zeros([5, 5]),
...     tf.zeros([5, 5]),
...     tf.ones([5, 5])],
...     axis=-1)
>>> blue_hsv_image = tf.image.rgb_to_hsv(blue_image)
>>> blue_hsv_image[0, 0].numpy()
array([0.6666667, 1.       , 1.       ], dtype=float32)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| T | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| images | tensor of floating-point values |

Results:

| Result | Description |
| --- | --- |
| output | tensor of floating-point values |

tf.RightShift (TF::RightShiftOp)

Elementwise computes the bitwise right-shift of x and y.

Performs a logical shift for unsigned integer types, and an arithmetic shift for signed integer types.

If y is negative, or greater than or equal to the width of x in bits, the result is implementation defined.

Example:

import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
import numpy as np
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]

for dtype in dtype_list:
  lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)
  rhs = tf.constant([5, 0, 7, 11], dtype=dtype)

  right_shift_result = bitwise_ops.right_shift(lhs, rhs)

  print(right_shift_result)

# This will print:
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int8)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int16)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int32)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int64)

lhs = np.array([-2, 64, 101, 32], dtype=np.int8)
rhs = np.array([-1, -5, -3, -14], dtype=np.int8)
bitwise_ops.right_shift(lhs, rhs)
# <tf.Tensor: shape=(4,), dtype=int8, numpy=array([ -2,  64, 101,  32], dtype=int8)>

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| T | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| x | tensor of integer values |
| y | tensor of integer values |

Results:

| Result | Description |
| --- | --- |
| z | tensor of integer values |

tf.Rint (TF::RintOp)

Returns element-wise integer closest to x.

If the result is midway between two representable values, the even representable value is chosen. For example:

rint(-1.5) ==> -2.0
rint(0.5000001) ==> 1.0
rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]
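NumPy's np.rint uses the same round-to-nearest, ties-to-even rule, so the examples above can be checked directly (illustrative only):

```python
import numpy as np

# np.rint rounds to the nearest integer, with ties going to the even value.
assert np.rint(-1.5) == -2.0
assert np.rint(0.5000001) == 1.0
vals = np.rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
assert np.allclose(vals, [-2.0, -2.0, -0.0, 0.0, 2.0, 2.0, 2.0])
```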

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| T | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| x | tensor of floating-point values |

Results:

| Result | Description |
| --- | --- |
| y | tensor of floating-point values |

tf.RiscAdd (TF::RiscAddOp)

Returns x + y element-wise.

Given two input tensors, the tf.risc_add operation computes the sum for every element in the tensor.

Both input and output have a range (-inf, inf).

Traits: AlwaysSpeculatableImplTrait, Commutative

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| T | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| x | tensor of floating-point values |
| y | tensor of floating-point values |

Results:

| Result | Description |
| --- | --- |
| z | tensor of floating-point values |

tf.RiscDot (TF::RiscDotOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| transpose_a | ::mlir::BoolAttr | bool attribute |
| transpose_b | ::mlir::BoolAttr | bool attribute |
| T | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| a | tensor of floating-point values |
| b | tensor of floating-point values |

Results:

| Result | Description |
| --- | --- |
| product | tensor of floating-point values |

tf.RngReadAndSkip (TF::RngReadAndSkipOp)

Advance the counter of a counter-based RNG.

The state of the RNG after rng_read_and_skip(n) will be the same as that after uniform([n]) (or any other distribution). The actual increment added to the counter is an unspecified implementation choice.

In the case that the input algorithm is RNG_ALG_AUTO_SELECT, the counter in the state needs to be of size int64[2], the current maximal counter size among algorithms. In this case, this op will manage the counter as if it is a 128-bit integer with layout [lower_64bits, higher_64bits]. If an algorithm needs less than 128 bits for the counter, it should use the left portion of the int64[2]. In this way, the int64[2] is compatible with all current RNG algorithms (Philox, ThreeFry and xla::RandomAlgorithm::RNG_DEFAULT). Downstream RNG ops can thus use this counter with any RNG algorithm.
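The [lower_64bits, higher_64bits] layout described above amounts to packing the int64[2] state words into one 128-bit counter; a minimal sketch of that packing (not the op itself), using Python's arbitrary-precision integers:

```python
# Illustrative sketch of the counter layout: two 64-bit words combined
# into a single 128-bit value, with carries flowing into the high word.
MASK64 = (1 << 64) - 1

def to_u128(lo, hi):
    return (hi << 64) | (lo & MASK64)

def from_u128(c):
    return c & MASK64, (c >> 64) & MASK64

# Advancing the counter past 2**64 - 1 carries into the high word.
lo, hi = MASK64, 0
c = to_u128(lo, hi) + 1
assert from_u128(c) == (0, 1)
```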

Operands:

| Operand | Description |
| --- | --- |
| resource | tensor of resource values |
| alg | tensor of 32-bit integer values |
| delta | tensor of 64-bit unsigned integer values |

Results:

| Result | Description |
| --- | --- |
| value | tensor of 64-bit integer values |

tf.Roll (TF::RollOp)

Rolls the elements of a tensor along an axis.

The elements are shifted positively (towards larger indices) by the offset of shift along the dimension of axis. Negative shift values shift elements in the opposite direction. Elements that roll past the last position wrap around to the first, and vice versa. Multiple shifts along multiple axes may be specified.

For example:

# 't' is [0, 1, 2, 3, 4]
roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2]

# shifting along multiple dimensions
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]]

# shifting along the same axis multiple times
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]
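NumPy's np.roll has the same wrap-around semantics, so the examples above can be reproduced directly (illustrative only):

```python
import numpy as np

t = np.array([0, 1, 2, 3, 4])
assert (np.roll(t, 2, axis=0) == [3, 4, 0, 1, 2]).all()

t2 = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])
# Multiple shifts along multiple axes, as in the second example above.
r = np.roll(t2, shift=(1, -2), axis=(0, 1))
assert (r == [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]]).all()
```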

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| T | ::mlir::Attribute | derived attribute |
| Taxis | ::mlir::Attribute | derived attribute |
| Tshift | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| input | tensor of tf.dtype values |
| shift | tensor of 32/64-bit signed integer values |
| axis | tensor of 32/64-bit signed integer values |

Results:

| Result | Description |
| --- | --- |
| output | tensor of tf.dtype values |

tf.Round (TF::RoundOp)

Rounds the values of a tensor to the nearest integer, element-wise.

Rounds half to even, also known as banker's rounding. If you want to round according to the current system rounding mode, use std::rint.
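NumPy's np.round also rounds exact halves to the nearest even integer, which makes the behavior easy to see (illustrative only):

```python
import numpy as np

# Exact halves go to the even neighbor: 0.5 -> 0, 1.5 -> 2, 2.5 -> 2.
x = np.array([0.5, 1.5, 2.5, -0.5, -1.5])
y = np.round(x)
assert np.allclose(y, [0.0, 2.0, 2.0, -0.0, -2.0])
```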

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| T | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| x | tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values |

Results:

| Result | Description |
| --- | --- |
| y | tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values |

tf.Rsqrt (TF::RsqrtOp)

Computes reciprocal of square root of x element-wise.

I.e., \(y = 1 / \sqrt{x}\).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| T | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| x | tensor of floating-point or complex values |

Results:

| Result | Description |
| --- | --- |
| y | tensor of floating-point or complex values |

tf.RsqrtGrad (TF::RsqrtGradOp)

Computes the gradient for the rsqrt of x wrt its input.

Specifically, grad = dy * -0.5 * y^3, where y = rsqrt(x), and dy is the corresponding input gradient.
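The formula grad = dy * -0.5 * y^3 follows from d/dx x^(-1/2) = -0.5 * x^(-3/2) = -0.5 * y^3; a quick numeric cross-check against a finite difference (illustrative only):

```python
import numpy as np

x, dy = 4.0, 1.0
y = 1.0 / np.sqrt(x)           # rsqrt(4) = 0.5
grad = dy * -0.5 * y ** 3      # analytic gradient: -0.5 * 0.125 = -0.0625

# Cross-check against a central finite difference of rsqrt at x.
eps = 1e-6
fd = (1.0 / np.sqrt(x + eps) - 1.0 / np.sqrt(x - eps)) / (2.0 * eps)
assert abs(grad - dy * fd) < 1e-6
```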

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --- | --- | --- |
| T | ::mlir::Attribute | derived attribute |

Operands:

| Operand | Description |
| --- | --- |
| y | tensor of floating-point or complex values |
| dy | tensor of floating-point or complex values |

Results:

| Result | Description |
| --- | --- |
| z | tensor of floating-point or complex values |

tf.Save (TF::SaveOp)

Saves the input tensors to disk.

The size of tensor_names must match the number of tensors in data. data[i] is written to filename with name tensor_names[i].

See also SaveSlices.

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
filename tensor of string values
tensor_names tensor of string values
data variadic of tensor of tf.dtype values

tf.SaveSlices (TF::SaveSlicesOp)

Saves input tensor slices to disk.

This is like Save except that tensors can be listed in the saved file as being a slice of a larger tensor. shapes_and_slices specifies the shape of the larger tensor and the slice that this tensor covers. shapes_and_slices must have as many elements as tensor_names.

Elements of the shapes_and_slices input must either be:

  • The empty string, in which case the corresponding tensor is saved normally.
  • A string of the form dim0 dim1 ... dimN-1 slice-spec where the dimI are the dimensions of the larger tensor and slice-spec specifies what part is covered by the tensor to save.

slice-spec itself is a :-separated list: slice0:slice1:...:sliceN-1 where each sliceI is either:

  • The string - meaning that the slice covers all indices of this dimension
  • start,length where start and length are integers. In that case the slice covers length indices starting at start.
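The grammar above can be sketched as a small parser (a hedged illustration; the function name and return shape are hypothetical, not part of the op):

```python
# Hedged sketch: parsing one shapes_and_slices entry per the grammar above.
# Returns (full_shape, per-dimension slices), where each slice is either
# None ('-' covers the whole dimension) or a (start, length) pair.
def parse_shape_and_slice(spec):
    if not spec:
        return None, None                  # empty string: save the tensor whole
    *dims, slice_spec = spec.split(" ")
    shape = [int(d) for d in dims]
    slices = []
    for s in slice_spec.split(":"):
        if s == "-":
            slices.append(None)            # slice covers all indices of this dim
        else:
            start, length = map(int, s.split(","))
            slices.append((start, length))
    return shape, slices

parse_shape_and_slice("4 5 0,2:-")
# → ([4, 5], [(0, 2), None])
```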

See also Save.

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
filename tensor of string values
tensor_names tensor of string values
shapes_and_slices tensor of string values
data variadic of tensor of tf.dtype values

tf.SaveV2 (TF::SaveV2Op)

Saves tensors in V2 checkpoint format.

By default, saves the named tensors in full. If the caller wishes to save specific slices of full tensors, "shape_and_slices" should be non-empty strings and correspondingly well-formed.

Attributes:

AttributeMLIR TypeDescription
dtypes::mlir::Attributederived attribute

Operands:

Operand Description
prefix tensor of string values
tensor_names tensor of string values
shape_and_slices tensor of string values
tensors variadic of tensor of tf.dtype values

tf.ScatterNd (TF::ScatterNdOp)

Scatters updates into a tensor of shape shape according to indices.

Scatter sparse updates according to individual values at the specified indices. This op returns an output tensor with the shape you specify. This op is the inverse of the tf.gather_nd operator which extracts values or slices from a given tensor.

This operation is similar to tf.tensor_scatter_nd_add, except that the tensor is zero-initialized. Calling tf.scatter_nd(indices, updates, shape) is identical to calling tf.tensor_scatter_nd_add(tf.zeros(shape, updates.dtype), indices, updates)

If indices contains duplicates, the associated updates are accumulated (summed) into the output tensor.

indices is an integer tensor containing indices into the output tensor. The last dimension of indices can be at most the rank of shape:

indices.shape[-1] <= shape.rank

The last dimension of indices corresponds to indices of elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape.

updates is a tensor with shape:

indices.shape[:-1] + shape[indices.shape[-1]:]
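The shape formula above can be computed directly (a minimal sketch; the helper name is illustrative):

```python
# Hedged sketch: the required updates shape from the formula
# indices.shape[:-1] + shape[indices.shape[-1]:].
def expected_updates_shape(indices_shape, shape):
    index_depth = indices_shape[-1]
    return list(indices_shape[:-1]) + list(shape[index_depth:])

expected_updates_shape([2, 1], [4, 4, 4])
# → [2, 4, 4]: two slices, each of shape [4, 4]
```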

The simplest form of the scatter op is to insert individual elements in a tensor by index. Consider an example where you want to insert 4 scattered elements in a rank-1 tensor with 8 elements.

In Python, this scatter operation would look like this:

    indices = tf.constant([[4], [3], [1], [7]])
    updates = tf.constant([9, 10, 11, 12])
    shape = tf.constant([8])
    scatter = tf.scatter_nd(indices, updates, shape)
    print(scatter)

The resulting tensor would look like this:

[0, 11, 0, 10, 9, 0, 0, 12]
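The 1-D case, including the summing of duplicate indices, can be modeled in plain Python (a hedged sketch of the semantics, not the TF implementation):

```python
# Hedged sketch: pure-Python model of 1-D scatter_nd semantics.
# Duplicate indices accumulate (sum) into the zero-initialized output.
def scatter_nd_1d(indices, updates, length):
    out = [0] * length
    for [i], u in zip(indices, updates):
        out[i] += u            # duplicates are summed
    return out

scatter_nd_1d([[4], [3], [1], [7]], [9, 10, 11, 12], 8)
# → [0, 11, 0, 10, 9, 0, 0, 12], matching the example above
```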

You can also insert entire slices of a higher rank tensor all at once. For example, you can insert two slices in the first dimension of a rank-3 tensor with two matrices of new values.

In Python, this scatter operation would look like this:

    indices = tf.constant([[1], [3]])
    updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                            [7, 7, 7, 7], [8, 8, 8, 8]],
                           [[5, 5, 5, 5], [6, 6, 6, 6],
                            [7, 7, 7, 7], [8, 8, 8, 8]]])
    shape = tf.constant([4, 4, 4])
    scatter = tf.scatter_nd(indices, updates, shape)
    print(scatter)

The resulting tensor would look like this:

[[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
 [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
 [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
 [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
indices tensor of 16-bit integer or 32-bit integer or 64-bit integer values
updates tensor of tf.dtype values
shape tensor of 16-bit integer or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.SegmentMax (TF::SegmentMaxOp)

Computes the maximum along segments of a tensor.

Read the section on segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \max_j(data_j)\) where max is over j such that segment_ids[j] == i.

If the max is empty for a given segment ID i, output[i] = 0.

For example:

    >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
    >>> tf.math.segment_max(c, tf.constant([0, 0, 1])).numpy()
    array([[4, 3, 3, 4], [5, 6, 7, 8]], dtype=int32)
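The row-wise semantics can be modeled in plain Python (a hedged sketch assuming sorted segment_ids, as the op requires; not the TF implementation):

```python
# Hedged sketch: segment-wise max over rows, keyed by segment id.
def segment_max_rows(data, segment_ids):
    out = {}
    for row, sid in zip(data, segment_ids):
        cur = out.get(sid)
        out[sid] = row if cur is None else [max(a, b) for a, b in zip(cur, row)]
    return [out[i] for i in sorted(out)]

segment_max_rows([[1, 2, 3, 4], [4, 3, 2, 1], [5, 6, 7, 8]], [0, 0, 1])
# → [[4, 3, 3, 4], [5, 6, 7, 8]], matching the example above
```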

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of integer or floating-point values
segment_ids tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of integer or floating-point values

tf.SegmentMaxV2 (TF::SegmentMaxV2Op)

Computes the maximum along segments of a tensor.

Read the section on segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \max_j(data_j)\) where max is over j such that segment_ids[j] == i.

If the maximum is empty for a given segment ID i, it outputs the smallest possible value for the specific numeric type, output[i] = numeric_limits<T>::lowest().

The only difference from SegmentMax is the additional input num_segments, which makes it possible to evaluate the output shape at compile time. num_segments should be consistent with segment_ids, e.g. Max(segment_ids) should be equal to num_segments - 1 for a 1-D segment_ids. With an inconsistent num_segments the op still runs; the only difference is that the output takes the size of num_segments regardless of the sizes of segment_ids and data. If num_segments is less than the expected output size, the last elements are ignored; if num_segments is more than the expected output size, the last elements are assigned the smallest possible value for the specific numeric type.

For example:

    >>> @tf.function(jit_compile=True)
    ... def test(c):
    ...   return tf.raw_ops.SegmentMaxV2(data=c, segment_ids=tf.constant([0, 0, 1]), num_segments=2)
    >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
    >>> test(c).numpy()
    array([[4, 3, 3, 4], [5, 6, 7, 8]], dtype=int32)
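The role of num_segments can be modeled in plain Python for the 1-D element case (a hedged sketch; -inf stands in for the dtype's lowest value):

```python
# Hedged sketch: SegmentMaxV2 output sizing. Empty segments take the
# smallest possible value (modeled here with -inf); segment ids beyond
# num_segments are ignored.
def segment_max_v2(data, segment_ids, num_segments):
    out = [float("-inf")] * num_segments
    for v, sid in zip(data, segment_ids):
        if sid < num_segments:
            out[sid] = max(out[sid], v)
    return out

segment_max_v2([1, 4, 5], [0, 0, 1], 3)
# → [4, 5, -inf]: segment 2 is empty, so it takes the lowest value
```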

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
Tnumsegments::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of integer or floating-point values
segment_ids tensor of 32/64-bit signed integer values
num_segments tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of integer or floating-point values

tf.SegmentMean (TF::SegmentMeanOp)

Computes the mean along segments of a tensor.

Read the section on segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \frac{\sum_j data_j}{N}\) where mean is over j such that segment_ids[j] == i and N is the total number of values summed.

If the mean is empty for a given segment ID i, output[i] = 0.

For example:

    >>> c = tf.constant([[1.0,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
    >>> tf.math.segment_mean(c, tf.constant([0, 0, 1])).numpy()
    array([[2.5, 2.5, 2.5, 2.5], [5., 6., 7., 8.]], dtype=float32)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of number values
segment_ids tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.SegmentMin (TF::SegmentMinOp)

Computes the minimum along segments of a tensor.

Read the section on segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \min_j(data_j)\) where min is over j such that segment_ids[j] == i.

If the min is empty for a given segment ID i, output[i] = 0.

For example:

    >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
    >>> tf.math.segment_min(c, tf.constant([0, 0, 1])).numpy()
    array([[1, 2, 2, 1], [5, 6, 7, 8]], dtype=int32)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of integer or floating-point values
segment_ids tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of integer or floating-point values

tf.SegmentMinV2 (TF::SegmentMinV2Op)

Computes the minimum along segments of a tensor.

Read the section on segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \min_j(data_j)\) where min is over j such that segment_ids[j] == i.

If the minimum is empty for a given segment ID i, it outputs the largest possible value for the specific numeric type, output[i] = numeric_limits<T>::max().

The only difference from SegmentMin is the additional input num_segments, which makes it possible to evaluate the output shape at compile time. num_segments should be consistent with segment_ids, e.g. Max(segment_ids) should be equal to num_segments - 1 for a 1-D segment_ids. With an inconsistent num_segments the op still runs; the only difference is that the output takes the size of num_segments regardless of the sizes of segment_ids and data. If num_segments is less than the expected output size, the last elements are ignored; if num_segments is more than the expected output size, the last elements are assigned the largest possible value for the specific numeric type.

For example:

    >>> @tf.function(jit_compile=True)
    ... def test(c):
    ...   return tf.raw_ops.SegmentMinV2(data=c, segment_ids=tf.constant([0, 0, 1]), num_segments=2)
    >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
    >>> test(c).numpy()
    array([[1, 2, 2, 1], [5, 6, 7, 8]], dtype=int32)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
Tnumsegments::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of integer or floating-point values
segment_ids tensor of 32/64-bit signed integer values
num_segments tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of integer or floating-point values

tf.SegmentProd (TF::SegmentProdOp)

Computes the product along segments of a tensor.

Read the section on segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \prod_j data_j\) where the product is over j such that segment_ids[j] == i.

If the product is empty for a given segment ID i, output[i] = 1.

For example:

    >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
    >>> tf.math.segment_prod(c, tf.constant([0, 0, 1])).numpy()
    array([[4, 6, 6, 4], [5, 6, 7, 8]], dtype=int32)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of number values
segment_ids tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.SegmentProdV2 (TF::SegmentProdV2Op)

Computes the product along segments of a tensor.

Read the section on segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \prod_j data_j\) where the product is over j such that segment_ids[j] == i.

If the product is empty for a given segment ID i, output[i] = 1.

The only difference from SegmentProd is the additional input num_segments, which makes it possible to evaluate the output shape at compile time. num_segments should be consistent with segment_ids, e.g. Max(segment_ids) should be equal to num_segments - 1 for a 1-D segment_ids. With an inconsistent num_segments the op still runs; the only difference is that the output takes the size of num_segments regardless of the sizes of segment_ids and data. If num_segments is less than the expected output size, the last elements are ignored; if num_segments is more than the expected output size, the last elements are assigned 1.

For example:

    >>> @tf.function(jit_compile=True)
    ... def test(c):
    ...   return tf.raw_ops.SegmentProdV2(data=c, segment_ids=tf.constant([0, 0, 1]), num_segments=2)
    >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
    >>> test(c).numpy()
    array([[4, 6, 6, 4], [5, 6, 7, 8]], dtype=int32)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
Tnumsegments::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of number values
segment_ids tensor of 32/64-bit signed integer values
num_segments tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.SegmentSum (TF::SegmentSumOp)

Computes the sum along segments of a tensor.

Read the section on segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \sum_j data_j\) where sum is over j such that segment_ids[j] == i.

If the sum is empty for a given segment ID i, output[i] = 0.

For example:

    >>> c = tf.constant([[1,2,3,4], [4, 3, 2, 1], [5,6,7,8]])
    >>> tf.math.segment_sum(c, tf.constant([0, 0, 1])).numpy()
    array([[5, 5, 5, 5], [5, 6, 7, 8]], dtype=int32)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of number values
segment_ids tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.SegmentSumV2 (TF::SegmentSumV2Op)

Computes the sum along segments of a tensor.

Read the section on segmentation for an explanation of segments.

Computes a tensor such that \(output_i = \sum_j data_j\) where sum is over j such that segment_ids[j] == i.

If the sum is empty for a given segment ID i, output[i] = 0.

Note that this op is currently only supported with jit_compile=True.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
Tnumsegments::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of number values
segment_ids tensor of 32/64-bit signed integer values
num_segments tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.Select (TF::SelectOp)

Selects elements from x or y, depending on condition.

The x and y tensors must all have the same shape, and the output will also have that shape.

The condition tensor must be a scalar if x and y are scalars. If x and y are vectors or higher rank, then condition must be either a scalar, a vector with size matching the first dimension of x, or must have the same shape as x.

The condition tensor acts as a mask that chooses, based on the value at each element, whether the corresponding element / row in the output should be taken from x (if true) or y (if false).

If condition is a vector and x and y are higher rank matrices, then it chooses which row (outer dimension) to copy from x and y. If condition has the same shape as x and y, then it chooses which element to copy from x and y.

For example:

# 'condition' tensor is [[True,  False]
#                        [False, True]]
# 't' is [[1, 2],
#         [3, 4]]
# 'e' is [[5, 6],
#         [7, 8]]
select(condition, t, e)  # => [[1, 6], [7, 4]]


# 'condition' tensor is [True, False]
# 't' is [[1, 2],
#         [3, 4]]
# 'e' is [[5, 6],
#         [7, 8]]
select(condition, t, e) ==> [[1, 2],
                             [7, 8]]
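The vector-condition case above (choosing whole rows) can be modeled in plain Python (a hedged sketch of the semantics, not the TF implementation):

```python
# Hedged sketch: Select with a vector condition picks entire rows
# from t (where True) or e (where False).
def select_rows(condition, t, e):
    return [t_row if c else e_row
            for c, t_row, e_row in zip(condition, t, e)]

select_rows([True, False], [[1, 2], [3, 4]], [[5, 6], [7, 8]])
# → [[1, 2], [7, 8]], matching the second example above
```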

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
condition tensor of bool values
then_value tensor of tf.dtype values
else_value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.SelectV2 (TF::SelectV2Op)

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
condition tensor of bool values
then_value tensor of tf.dtype values
else_value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.SelfAdjointEigV2 (TF::SelfAdjointEigV2Op)

Computes the eigen decomposition of one or more square self-adjoint matrices.

Computes the eigenvalues and (optionally) eigenvectors of each inner matrix in input such that input[..., :, :] = v[..., :, :] * diag(e[..., :]). The eigenvalues are sorted in non-decreasing order.

# a is a tensor.
# e is a tensor of eigenvalues.
# v is a tensor of eigenvectors.
e, v = self_adjoint_eig(a)
e = self_adjoint_eig(a, compute_v=False)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
compute_v::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
e tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values
v tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

tf.Selu (TF::SeluOp)

Computes scaled exponential linear: scale * alpha * (exp(features) - 1) if features < 0, scale * features otherwise.

To be used together with initializer = tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN'). For correct dropout, use tf.contrib.nn.alpha_dropout.

See Self-Normalizing Neural Networks

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
features tensor of floating-point values

Results:

Result Description
activations tensor of floating-point values

tf.SeluGrad (TF::SeluGradOp)

Computes gradients for the scaled exponential linear (Selu) operation.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
gradients tensor of floating-point values
outputs tensor of floating-point values

Results:

Result Description
backprops tensor of floating-point values

tf.Send (TF::SendOp)

Sends the named tensor from send_device to recv_device.

Interfaces: TF_SendSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}

Attributes:

AttributeMLIR TypeDescription
tensor_name::mlir::StringAttrstring attribute
send_device::mlir::StringAttrstring attribute
send_device_incarnation::mlir::IntegerAttr64-bit signless integer attribute
recv_device::mlir::StringAttrstring attribute
client_terminated::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values

tf.SendTPUEmbeddingGradients (TF::SendTPUEmbeddingGradientsOp)

Performs gradient updates of embedding tables.

Traits: AttrSizedOperandSegments

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
config::mlir::StringAttrstring attribute
N::mlir::Attributederived attribute
NN::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of 32-bit float values
learning_rates variadic of tensor of 32-bit float values

tf.SerializeIterator (TF::SerializeIteratorOp)

Converts the given resource_handle representing an iterator to a variant tensor.

Attributes:

AttributeMLIR TypeDescription
external_state_policy::mlir::IntegerAttr64-bit signless integer attribute

Operands:

Operand Description
resource_handle tensor of resource values

Results:

Result Description
serialized tensor of variant values

tf.SerializeSparse (TF::SerializeSparseOp)

Serialize a SparseTensor into a [3] Tensor object.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
out_type::mlir::Attributederived attribute

Operands:

Operand Description
sparse_indices tensor of 64-bit integer values
sparse_values tensor of tf.dtype values
sparse_shape tensor of 64-bit integer values

Results:

Result Description
serialized_sparse tensor of string or variant values

tf.SetStaticDimensionBounds (TF::SetStaticDimensionBoundsOp)

Op used to indicate to the compiler and runtime the static bounds of a tensor.

The information passed through this op can possibly be used by the compiler and runtime to perform certain optimizations, such as more efficient DMAs. The bounds passed via this op should be considered advisory only and, depending on the implementation, might do nothing and simply act as an identity.

input: the tensor that has dynamic dimensions. static_shape: the static shape of the tensor, corresponding to the maximum bounds of each dimension. output: the input tensor, unchanged.

Example usage:

    def tpu_call(args):
      def model_fn(args):
        # do something with dynamic tensor
        ...

      @function.Defun(capture_resource_var_by_value=False)
      def tpu_subgraph():
        return tf.tpu.rewrite(model_fn, args)

      return tf.raw_ops.TPUPartitionedCall(
          args=tpu_subgraph.captured_inputs,
          Tout=[o.type for o in tpu_subgraph.definition.signature.output_arg],
          f=tpu_subgraph,
          device_ordinal=[0])

    static_shape = tf.placeholder(tf.int32, shape=([3]), name='static_size')

    w = tf.Variable(tf.constant([[1.0], [2.0], [3.0]]), name='w')

    w_dyn = tf.SetStaticDimensionBounds(w, static_shape)
    tpu_call([w_dyn])

Operands:

Operand Description
input tensor of tf.dtype values
static_shape tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.Shape (TF::ShapeOp)

Returns the shape of a tensor.

This operation returns a 1-D integer tensor representing the shape of input.

For example:

# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
out_type::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.ShapeN (TF::ShapeNOp)

Returns the shapes of tensors.

This operation returns N 1-D integer tensors representing the shape of each input[i].

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute
out_type::mlir::Attributederived attribute

Operands:

Operand Description
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of 32/64-bit signed integer values

tf.ShardedFilename (TF::ShardedFilenameOp)

Generate a sharded filename. The filename is printf-formatted as

%s-%05d-of-%05d, basename, shard, num_shards.
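The format string above maps directly onto Python's printf-style formatting (a minimal sketch; the helper name is illustrative):

```python
# Hedged sketch: the printf-style filename format the op description specifies.
def sharded_filename(basename, shard, num_shards):
    return "%s-%05d-of-%05d" % (basename, shard, num_shards)

sharded_filename("ckpt", 2, 10)
# → 'ckpt-00002-of-00010'
```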

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
basename tensor of string values
shard tensor of 32-bit integer values
num_shards tensor of 32-bit integer values

Results:

Result Description
filename tensor of string values

tf.ShuffleAndRepeatDatasetV2 (TF::ShuffleAndRepeatDatasetV2Op)

Attributes:

AttributeMLIR TypeDescription
reshuffle_each_iteration::mlir::BoolAttrbool attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute

Operands:

Operand Description
input_dataset tensor of variant values
buffer_size tensor of 64-bit integer values
seed tensor of 64-bit integer values
seed2 tensor of 64-bit integer values
count tensor of 64-bit integer values
seed_generator tensor of resource values

Results:

Result Description
handle tensor of variant values

tf.ShuffleDatasetV2 (TF::ShuffleDatasetV2Op)

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute

Operands:

Operand Description
input_dataset tensor of variant values
buffer_size tensor of 64-bit integer values
seed_generator tensor of resource values

Results:

Result Description
handle tensor of variant values

tf.ShuffleDatasetV3 (TF::ShuffleDatasetV3Op)

Attributes:

AttributeMLIR TypeDescription
reshuffle_each_iteration::mlir::BoolAttrbool attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute

Operands:

Operand Description
input_dataset tensor of variant values
buffer_size tensor of 64-bit integer values
seed tensor of 64-bit integer values
seed2 tensor of 64-bit integer values
seed_generator tensor of resource values

Results:

Result Description
handle tensor of variant values

tf.ShutdownDistributedTPU (TF::ShutdownDistributedTPUOp)

Shuts down a running distributed TPU system.

The op returns an error if no system is running.

tf.ShutdownTPUSystem (TF::ShutdownTPUSystemOp)

An op that shuts down the TPU system.

Results:

Result Description
success tensor of bool values

tf.Sigmoid (TF::SigmoidOp)

Computes sigmoid of x element-wise.

Specifically, y = 1 / (1 + exp(-x)).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.SigmoidGrad (TF::SigmoidGradOp)

Computes the gradient of the sigmoid of x wrt its input.

Specifically, grad = dy * y * (1 - y), where y = sigmoid(x), and dy is the corresponding input gradient.
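The identity above can be checked numerically against a finite difference (a minimal sketch in plain Python; x, dy, eps are illustrative):

```python
import math

# Hedged sketch: verifying grad = dy * y * (1 - y), where y = sigmoid(x).
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x, dy, eps = 0.3, 1.0, 1e-6
y = sigmoid(x)
analytic = dy * y * (1.0 - y)
# Central finite difference of sigmoid as a cross-check.
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps) * dy
assert abs(analytic - numeric) < 1e-6
```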

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
y tensor of floating-point or complex values
dy tensor of floating-point or complex values

Results:

Result Description
z tensor of floating-point or complex values

tf.Sign (TF::SignOp)

Returns an element-wise indication of the sign of a number.

y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.

For complex numbers, y = sign(x) = x / |x| if x != 0, otherwise y = 0.
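The complex case maps directly onto Python's complex arithmetic (a minimal sketch; the function name is illustrative):

```python
# Hedged sketch: the complex sign formula y = x / |x|, with y = 0 when x == 0.
def complex_sign(x):
    return 0 if x == 0 else x / abs(x)

complex_sign(3 + 4j)
# → (0.6+0.8j): a unit-magnitude complex number in the same direction as x
```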

Example usage:

    >>> tf.math.sign([0., 2., -3.])
    <tf.Tensor: shape=(3,), dtype=float32, numpy=array([ 0.,  1., -1.], dtype=float32)>

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

Results:

Result Description
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer values

tf.Sin (TF::SinOp)

Computes sine of x element-wise.

Given an input tensor, this function computes sine of every element in the tensor. Input range is (-inf, inf) and output range is [-1,1].

  x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10, float("inf")])
  tf.math.sin(x) ==> [nan -0.4121185 -0.47942555 0.84147096 0.9320391 -0.87329733 -0.54402107 nan]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Sinh (TF::SinhOp)

Computes hyperbolic sine of x element-wise.

Given an input tensor, this function computes hyperbolic sine of every element in the tensor. Input range is [-inf,inf] and output range is [-inf,inf].

  x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")])
  tf.math.sinh(x) ==> [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Size (TF::SizeOp)

Returns the size of a tensor.

This operation returns an integer representing the number of elements in input.

For example:

# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
size(t) ==> 12

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
out_type::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.Slice (TF::SliceOp)

Return a slice from 'input'.

The output tensor is a tensor with dimensions described by 'size' whose values are extracted from 'input' starting at the offsets in 'begin'.

Requirements: 0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n)
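The begin/size semantics can be sketched with NumPy basic slicing (the `slice_op` helper name is an assumption for illustration):

```python
import numpy as np

def slice_op(x, begin, size):
    # output[i0, ..., i_{n-1}] = input[begin[0] + i0, ..., begin[n-1] + i_{n-1}],
    # requiring 0 <= begin[i] <= begin[i] + size[i] <= Di for each dimension.
    return x[tuple(slice(b, b + s) for b, s in zip(begin, size))]

t = np.arange(12).reshape(3, 4)
out = slice_op(t, begin=[1, 0], size=[2, 3])   # rows 1..2, columns 0..2
```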

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Index::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
begin tensor of 32/64-bit signed integer values
size tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.Snapshot (TF::SnapshotOp)

Returns a copy of the input tensor.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.Softmax (TF::SoftmaxOp)

Computes softmax activations.

For each batch i and class j we have

$$softmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j]))$$
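A NumPy sketch of this formula follows; subtracting the row maximum before exponentiating is a standard numerical-stability trick that leaves the result unchanged (helper name assumed):

```python
import numpy as np

def softmax(logits):
    # softmax[i, j] = exp(logits[i, j]) / sum_j exp(logits[i, j])
    z = logits - logits.max(axis=-1, keepdims=True)  # stability shift
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

probs = softmax(np.array([[1.0, 2.0, 3.0]]))  # each row sums to 1
```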

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
logits tensor of floating-point values

Results:

Result Description
softmax tensor of floating-point values

tf.SoftmaxCrossEntropyWithLogits (TF::SoftmaxCrossEntropyWithLogitsOp)

Computes softmax cross entropy cost and gradients to backpropagate.

Inputs are the logits, not probabilities.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
features tensor of floating-point values
labels tensor of floating-point values

Results:

Result Description
loss tensor of floating-point values
backprop tensor of floating-point values

tf.Softplus (TF::SoftplusOp)

Computes softplus: log(exp(features) + 1).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
features tensor of floating-point values

Results:

Result Description
activations tensor of floating-point values

tf.SoftplusGrad (TF::SoftplusGradOp)

Computes softplus gradients for a softplus operation.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
gradients tensor of floating-point values
features tensor of floating-point values

Results:

Result Description
backprops tensor of floating-point values

tf.Softsign (TF::SoftsignOp)

Computes softsign: features / (abs(features) + 1).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
features tensor of floating-point values

Results:

Result Description
activations tensor of floating-point values

tf.SoftsignGrad (TF::SoftsignGradOp)

Computes softsign gradients for a softsign operation.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
gradients tensor of floating-point values
features tensor of floating-point values

Results:

Result Description
backprops tensor of floating-point values

tf.SpaceToBatch (TF::SpaceToBatchOp)

SpaceToBatch for 4-D tensors of type T.

This is a legacy version of the more general SpaceToBatchND.

Zero-pads and then rearranges (permutes) blocks of spatial data into batch. More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the batch dimension. After the zero-padding, both height and width of the input must be divisible by the block size.

The attr block_size indicates the block size and must be greater than one.

  • Non-overlapping blocks of size block_size x block_size in the height and width dimensions are rearranged into the batch dimension at each location.
  • The batch of the output tensor is batch * block_size * block_size.
  • Both height_pad and width_pad must be divisible by block_size.

The shape of the output will be:

[batch*block_size*block_size, height_pad/block_size, width_pad/block_size,
 depth]

Some examples:

(1) For the following input of shape [1, 2, 2, 1] and block_size of 2:

x = [[[[1], [2]], [[3], [4]]]]

The output tensor has shape [4, 1, 1, 1] and value:

[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]

(2) For the following input of shape [1, 2, 2, 3] and block_size of 2:

x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]

The output tensor has shape [4, 1, 1, 3] and value:

[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]

(3) For the following input of shape [1, 4, 4, 1] and block_size of 2:

x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]],
      [[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]

The output tensor has shape [4, 2, 2, 1] and value:

x = [[[[1], [3]], [[9], [11]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]

(4) For the following input of shape [2, 2, 4, 1] and block_size of 2:

x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]]],
     [[[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]

The output tensor has shape [8, 1, 2, 1] and value:

x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
     [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]

Among others, this operation is useful for reducing atrous convolution into regular convolution.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
block_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 2
T::mlir::Attributederived attribute
Tpaddings::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
paddings tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.SpaceToBatchND (TF::SpaceToBatchNDOp)

SpaceToBatch for N-D tensors of type T.

This operation divides "spatial" dimensions [1, ..., M] of the input into a grid of blocks of shape block_shape, and interleaves these blocks with the "batch" dimension (0) such that in the output, the spatial dimensions [1, ..., M] correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to paddings. See below for a precise description.

This operation is equivalent to the following steps:

  1. Zero-pad the start and end of dimensions [1, ..., M] of the input according to paddings to produce padded of shape padded_shape.

  2. Reshape padded to reshaped_padded of shape:

    [batch] + [padded_shape[1] / block_shape[0], block_shape[0], ..., padded_shape[M] / block_shape[M-1], block_shape[M-1]] + remaining_shape

  3. Permute dimensions of reshaped_padded to produce permuted_reshaped_padded of shape:

    block_shape + [batch] + [padded_shape[1] / block_shape[0], ..., padded_shape[M] / block_shape[M-1]] + remaining_shape

  4. Reshape permuted_reshaped_padded to flatten block_shape into the batch dimension, producing an output tensor of shape:

    [batch * prod(block_shape)] + [padded_shape[1] / block_shape[0], ..., padded_shape[M] / block_shape[M-1]] + remaining_shape
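The four steps above can be written out in NumPy for the common case of a 4-D NHWC input (M = 2 spatial dimensions). This is a hypothetical reference sketch of the documented semantics, not the op's actual kernel:

```python
import numpy as np

def space_to_batch_nd(x, block_shape, paddings):
    # Reference sketch of steps 1-4 for a 4-D NHWC input (M = 2).
    batch = x.shape[0]
    bh, bw = block_shape
    # Step 1: zero-pad the spatial dimensions according to paddings.
    padded = np.pad(x, [(0, 0)] + [tuple(p) for p in paddings] + [(0, 0)])
    ph, pw = padded.shape[1], padded.shape[2]
    # Step 2: split each spatial dimension into (grid, block) pairs.
    reshaped = padded.reshape(batch, ph // bh, bh, pw // bw, bw, -1)
    # Step 3: move the block dimensions in front of the batch dimension.
    permuted = reshaped.transpose(2, 4, 0, 1, 3, 5)
    # Step 4: fold block_shape into the batch dimension.
    return permuted.reshape(batch * bh * bw, ph // bh, pw // bw, -1)

x = np.array([[[[1], [2]], [[3], [4]]]])            # shape [1, 2, 2, 1]
y = space_to_batch_nd(x, [2, 2], [[0, 0], [0, 0]])  # shape [4, 1, 1, 1]
```

Running this on example (1) below reproduces the documented output [[[[1]]], [[[2]]], [[[3]]], [[[4]]]].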

Some examples:

(1) For the following input of shape [1, 2, 2, 1], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]:

x = [[[[1], [2]], [[3], [4]]]]

The output tensor has shape [4, 1, 1, 1] and value:

[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]

(2) For the following input of shape [1, 2, 2, 3], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]:

x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]

The output tensor has shape [4, 1, 1, 3] and value:

[[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]

(3) For the following input of shape [1, 4, 4, 1], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]:

x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]],
      [[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]

The output tensor has shape [4, 2, 2, 1] and value:

x = [[[[1], [3]], [[9], [11]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]

(4) For the following input of shape [2, 2, 4, 1], block_shape = [2, 2], and paddings = [[0, 0], [2, 0]]:

x = [[[[1],   [2],  [3],  [4]],
      [[5],   [6],  [7],  [8]]],
     [[[9],  [10], [11],  [12]],
      [[13], [14], [15],  [16]]]]

The output tensor has shape [8, 1, 3, 1] and value:

x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
     [[[0], [2], [4]]], [[[0], [10], [12]]],
     [[[0], [5], [7]]], [[[0], [13], [15]]],
     [[[0], [6], [8]]], [[[0], [14], [16]]]]

Among others, this operation is useful for reducing atrous convolution into regular convolution.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tblock_shape::mlir::Attributederived attribute
Tpaddings::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
block_shape tensor of 32/64-bit signed integer values
paddings tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.SpaceToDepth (TF::SpaceToDepthOp)

SpaceToDepth for tensors of type T.

Rearranges blocks of spatial data into depth. More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the depth dimension. The attr block_size indicates the input block size.

  • Non-overlapping blocks of size block_size x block_size are rearranged into depth at each location.
  • The depth of the output tensor is block_size * block_size * input_depth.
  • The Y, X coordinates within each block of the input become the high order component of the output channel index.
  • The input tensor's height and width must be divisible by block_size.

The data_format attr specifies the layout of the input and output tensors with the following options:

  • "NHWC": [ batch, height, width, channels ]
  • "NCHW": [ batch, channels, height, width ]
  • "NCHW_VECT_C": qint8 [ batch, channels / 4, height, width, 4 ]

It is useful to consider the operation as transforming a 6-D Tensor. For example, with data_format = NHWC, each element in the input tensor can be specified via 6 coordinates, ordered by decreasing memory layout significance as: n,oY,bY,oX,bX,iC (where n is the batch index, oX and oY are the X and Y coordinates within the output image, bX and bY are the coordinates within the input block, and iC is the input channel). The output is a transpose to the following layout: n,oY,oX,bY,bX,iC
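The 6-D view above can be sketched in NumPy for the NHWC case (the `space_to_depth` helper name is illustrative, not the op's kernel):

```python
import numpy as np

def space_to_depth(x, block_size):
    # NHWC layout transform: n, oY, bY, oX, bX, iC -> n, oY, oX, bY, bX, iC
    n, h, w, c = x.shape
    b = block_size
    t = x.reshape(n, h // b, b, w // b, b, c)  # n, oY, bY, oX, bX, iC
    t = t.transpose(0, 1, 3, 2, 4, 5)          # n, oY, oX, bY, bX, iC
    return t.reshape(n, h // b, w // b, b * b * c)

x = np.array([[[[1], [2]], [[3], [4]]]])       # shape [1, 2, 2, 1]
y = space_to_depth(x, 2)                       # shape [1, 1, 1, 4]
```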

This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.

For example, given an input of shape [1, 2, 2, 1], data_format = "NHWC" and block_size = 2:

x = [[[[1], [2]],
      [[3], [4]]]]

This operation will output a tensor of shape [1, 1, 1, 4]:

[[[[1, 2, 3, 4]]]]

Here, the input has a batch of 1 and each batch element has shape [2, 2, 1], the corresponding output will have a single element (i.e. width and height are both 1) and will have a depth of 4 channels (1 * block_size * block_size). The output element shape is [1, 1, 4].

For an input tensor with larger depth, here of shape [1, 2, 2, 3], e.g.

x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]

This operation, for block_size of 2, will return the following tensor of shape [1, 1, 1, 12]

[[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]

Similarly, for the following input of shape [1, 4, 4, 1] and a block size of 2:

x = [[[[1],   [2],  [5],  [6]],
      [[3],   [4],  [7],  [8]],
      [[9],  [10], [13],  [14]],
      [[11], [12], [15],  [16]]]]

the operator will return the following tensor of shape [1, 2, 2, 4]:

x = [[[[1, 2, 3, 4],
       [5, 6, 7, 8]],
      [[9, 10, 11, 12],
       [13, 14, 15, 16]]]]

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
block_size::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 2
data_format::mlir::StringAttrstring attribute whose value is NHWC, or NCHW, or NCHW_VECT_C
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.SparseAdd (TF::SparseAddOp)

Adds two SparseTensor objects to produce another SparseTensor.

The input SparseTensor objects' indices are assumed ordered in standard lexicographic order. If this is not the case, before this step run SparseReorder to restore index ordering.

By default, if two values sum to zero at some index, the output SparseTensor would still include that particular location in its index, storing a zero in the corresponding value slot. To override this, callers can specify thresh, indicating that if the sum has a magnitude strictly smaller than thresh, its corresponding value and index would then not be included. In particular, thresh == 0 (default) means everything is kept and actual thresholding happens only for a positive value.
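The thresh rule can be illustrated with a dict-based sketch over plain Python lists (hypothetical helper, assuming already-canonical inputs; not the op's implementation):

```python
def sparse_add(a_indices, a_values, b_indices, b_values, thresh=0.0):
    # Accumulate values over the union of the two index sets.
    acc = {}
    for ix, v in zip(map(tuple, a_indices + b_indices), a_values + b_values):
        acc[ix] = acc.get(ix, 0.0) + v
    # An entry is dropped only if its magnitude is strictly smaller than
    # thresh; with the default thresh == 0 everything is kept, zeros included.
    kept = sorted((ix, v) for ix, v in acc.items() if abs(v) >= thresh)
    return [list(ix) for ix, _ in kept], [v for _, v in kept]

idx, vals = sparse_add([[0, 0]], [1.0], [[0, 0], [1, 1]], [-1.0, 2.0], thresh=0.5)
```

Here the values at index [0, 0] cancel to 0.0; with thresh=0.5 that entry is dropped, while with the default thresh=0.0 it would be kept with an explicit zero.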

In the following shapes, nnz is the count after taking thresh into account.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Treal::mlir::Attributederived attribute

Operands:

Operand Description
a_indices tensor of 64-bit integer values
a_values tensor of number values
a_shape tensor of 64-bit integer values
b_indices tensor of 64-bit integer values
b_values tensor of number values
b_shape tensor of 64-bit integer values
thresh tensor of integer or floating-point values

Results:

Result Description
sum_indices tensor of 64-bit integer values
sum_values tensor of number values
sum_shape tensor of 64-bit integer values

tf.SparseFillEmptyRows (TF::SparseFillEmptyRowsOp)

Fills empty rows in the input 2-D SparseTensor with a default value.

The input SparseTensor is represented via the tuple of inputs (indices, values, dense_shape). The output SparseTensor has the same dense_shape but with indices output_indices and values output_values.

This op inserts a single entry for every row that doesn't have any values. The index is created as [row, 0, ..., 0] and the inserted value is default_value.

For example, suppose sp_input has shape [5, 6] and non-empty values:

[0, 1]: a
[0, 3]: b
[2, 0]: c
[3, 1]: d

Rows 1 and 4 are empty, so the output will be of shape [5, 6] with values:

[0, 1]: a
[0, 3]: b
[1, 0]: default_value
[2, 0]: c
[3, 1]: d
[4, 0]: default_value

The output SparseTensor will be in row-major order and will have the same shape as the input.

This op also returns an indicator vector shaped [dense_shape[0]] such that

empty_row_indicator[i] = True iff row i was an empty row.

It also returns a reverse index map vector shaped [indices.shape[0]] that is used during backpropagation:

reverse_index_map[j] = out_j s.t. indices[j, :] == output_indices[out_j, :]
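The whole behavior, including the indicator vector and reverse index map, can be sketched over plain Python lists (a hypothetical reference for the 2-D case, not the op's kernel):

```python
def sparse_fill_empty_rows(indices, values, dense_shape, default_value):
    # Sketch for a 2-D SparseTensor given as Python lists.
    num_rows = dense_shape[0]
    occupied = {ix[0] for ix in indices}
    empty_row_indicator = [r not in occupied for r in range(num_rows)]
    entries = [(list(ix), v, j) for j, (ix, v) in enumerate(zip(indices, values))]
    for r in range(num_rows):
        if r not in occupied:
            entries.append(([r, 0], default_value, None))  # inserted entry
    entries.sort(key=lambda e: e[0])                       # row-major order
    output_indices = [e[0] for e in entries]
    output_values = [e[1] for e in entries]
    # reverse_index_map[j]: where original entry j landed in the output.
    reverse_index_map = [0] * len(indices)
    for out_j, (_, _, j) in enumerate(entries):
        if j is not None:
            reverse_index_map[j] = out_j
    return output_indices, output_values, empty_row_indicator, reverse_index_map

out_idx, out_val, empty, rev = sparse_fill_empty_rows(
    [[0, 1], [0, 3], [2, 0], [3, 1]], ["a", "b", "c", "d"], [5, 6], "dv")
```

On the example above this yields entries at [1, 0] and [4, 0] holding the default value, with empty_row_indicator true exactly for rows 1 and 4.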

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
indices tensor of 64-bit integer values
values tensor of tf.dtype values
dense_shape tensor of 64-bit integer values
default_value tensor of tf.dtype values

Results:

Result Description
output_indices tensor of 64-bit integer values
output_values tensor of tf.dtype values
empty_row_indicator tensor of bool values
reverse_index_map tensor of 64-bit integer values

tf.SparseMatMul (TF::SparseMatMulOp)

Multiply matrix "a" by matrix "b".

The inputs must be two-dimensional matrices and the inner dimension of "a" must match the outer dimension of "b". Both "a" and "b" must be Tensors not SparseTensors. This op is optimized for the case where at least one of "a" or "b" is sparse, in the sense that they have a large proportion of zero values. The breakeven for using this versus a dense matrix multiply on one platform was 30% zero values in the sparse matrix.

The gradient computation of this operation will only take advantage of sparsity in the input gradient when that gradient comes from a Relu.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
transpose_a::mlir::BoolAttrbool attribute
transpose_b::mlir::BoolAttrbool attribute
a_is_sparse::mlir::BoolAttrbool attribute
b_is_sparse::mlir::BoolAttrbool attribute
Ta::mlir::Attributederived attribute
Tb::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of bfloat16 or 32-bit float values
b tensor of bfloat16 or 32-bit float values

Results:

Result Description
product tensor of 32-bit float values

tf.SparseReduceSum (TF::SparseReduceSumOp)

Computes the sum of elements across dimensions of a SparseTensor.

This Op takes a SparseTensor and is the sparse counterpart to tf.reduce_sum(). Note, however, that this Op returns a dense Tensor instead of a sparse one.

Reduces sp_input along the dimensions given in reduction_axes. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_axes. If keep_dims is true, the reduced dimensions are retained with length 1.

If reduction_axes has no entries, all dimensions are reduced, and a tensor with a single element is returned. Additionally, the axes can be negative, in which case they are interpreted according to the indexing rules in Python.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
keep_dims::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input_indices tensor of 64-bit integer values
input_values tensor of number values
input_shape tensor of 64-bit integer values
reduction_axes tensor of 32-bit integer values

Results:

Result Description
output tensor of number values

tf.SparseReshape (TF::SparseReshapeOp)

Reshapes a SparseTensor to represent values in a new dense shape.

This operation has the same semantics as reshape on the represented dense tensor. The input_indices are recomputed based on the requested new_shape.

If one component of new_shape is the special value -1, the size of that dimension is computed so that the total dense size remains constant. At most one component of new_shape can be -1. The number of dense elements implied by new_shape must be the same as the number of dense elements originally implied by input_shape.
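The -1 inference and index recomputation can be sketched with NumPy's flat-index helpers (hypothetical helper names; validation beyond the stated rules is omitted):

```python
import numpy as np

def resolve_new_shape(input_shape, new_shape):
    # At most one component of new_shape may be -1; its size is inferred so
    # that the total number of dense elements stays constant.
    total = int(np.prod(input_shape))
    assert new_shape.count(-1) <= 1
    if -1 in new_shape:
        known = int(np.prod([d for d in new_shape if d != -1]))
        new_shape = [total // known if d == -1 else d for d in new_shape]
    assert int(np.prod(new_shape)) == total
    return new_shape

def sparse_reshape(input_indices, input_shape, new_shape):
    # Flatten each index against the old shape, then unflatten against the
    # new one; the order of values is unchanged.
    shape = resolve_new_shape(input_shape, list(new_shape))
    flat = np.ravel_multi_index(np.array(input_indices).T, input_shape)
    output_indices = np.stack(np.unravel_index(flat, shape), axis=1)
    return output_indices, shape

idx, shape = sparse_reshape([[0, 1], [1, 2]], [2, 3], [3, -1])
```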

Reshaping does not affect the order of values in the SparseTensor.

If the input tensor has rank R_in and N non-empty values, and new_shape has length R_out, then input_indices has shape [N, R_in], input_shape has length R_in, output_indices has shape [N, R_out], and output_shape has length R_out.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
input_indices tensor of 64-bit integer values
input_shape tensor of 64-bit integer values
new_shape tensor of 64-bit integer values

Results:

Result Description
output_indices tensor of 64-bit integer values
output_shape tensor of 64-bit integer values

tf.SparseSegmentMean (TF::SparseSegmentMeanOp)

Computes the mean along sparse segments of a tensor.

See tf.sparse.segment_sum for usage examples.

Like SegmentMean, but segment_ids can have rank less than data's first dimension, selecting a subset of dimension 0, specified by indices.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sparse_gradient::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute
Tsegmentids::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of floating-point values
indices tensor of 32/64-bit signed integer values
segment_ids tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.SparseSegmentMeanGrad (TF::SparseSegmentMeanGradOp)

Computes gradients for SparseSegmentMean.

Returns tensor "output" with same shape as grad, except for dimension 0 whose value is output_dim0.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute
Tsegmentids::mlir::Attributederived attribute

Operands:

Operand Description
grad tensor of floating-point values
indices tensor of 32/64-bit signed integer values
segment_ids tensor of 32/64-bit signed integer values
output_dim0 tensor of 32-bit integer values

Results:

Result Description
output tensor of floating-point values

tf.SparseSegmentMeanWithNumSegments (TF::SparseSegmentMeanWithNumSegmentsOp)

Computes the mean along sparse segments of a tensor.

Like SparseSegmentMean, but allows missing ids in segment_ids. If an id is missing, the output tensor at that position will be zeroed.

Read the section on segmentation for an explanation of segments.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sparse_gradient::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute
Tnumsegments::mlir::Attributederived attribute
Tsegmentids::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of floating-point values
indices tensor of 32/64-bit signed integer values
segment_ids tensor of 32/64-bit signed integer values
num_segments tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.SparseSegmentSqrtN (TF::SparseSegmentSqrtNOp)

Computes the sum along sparse segments of a tensor divided by the sqrt of N.

N is the size of the segment being reduced.

See tf.sparse.segment_sum for usage examples.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sparse_gradient::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute
Tsegmentids::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of floating-point values
indices tensor of 32/64-bit signed integer values
segment_ids tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.SparseSegmentSqrtNGrad (TF::SparseSegmentSqrtNGradOp)

Computes gradients for SparseSegmentSqrtN.

Returns tensor "output" with same shape as grad, except for dimension 0 whose value is output_dim0.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute
Tsegmentids::mlir::Attributederived attribute

Operands:

Operand Description
grad tensor of floating-point values
indices tensor of 32/64-bit signed integer values
segment_ids tensor of 32/64-bit signed integer values
output_dim0 tensor of 32-bit integer values

Results:

Result Description
output tensor of floating-point values

tf.SparseSegmentSqrtNWithNumSegments (TF::SparseSegmentSqrtNWithNumSegmentsOp)

Computes the sum along sparse segments of a tensor divided by the sqrt of N.

N is the size of the segment being reduced.

Like SparseSegmentSqrtN, but allows missing ids in segment_ids. If an id is missing, the output tensor at that position will be zeroed.

Read the section on segmentation for an explanation of segments.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sparse_gradient::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute
Tnumsegments::mlir::Attributederived attribute
Tsegmentids::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of floating-point values
indices tensor of 32/64-bit signed integer values
segment_ids tensor of 32/64-bit signed integer values
num_segments tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.SparseSegmentSum (TF::SparseSegmentSumOp)

Computes the sum along sparse segments of a tensor.

Read the section on segmentation for an explanation of segments.

Like SegmentSum, but segment_ids can have rank less than data's first dimension, selecting a subset of dimension 0, specified by indices.

For example:

c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])

# Select two rows, one segment.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
# => [[0 0 0 0]]

# Select two rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
# => [[ 1  2  3  4]
#     [-1 -2 -3 -4]]

# Select all rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
# => [[0 0 0 0]
#     [5 6 7 8]]

# Which is equivalent to:
tf.segment_sum(c, tf.constant([0, 0, 1]))
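The examples above can be reproduced with a small NumPy scatter-add sketch (the `sparse_segment_sum` helper is illustrative, not the op's kernel):

```python
import numpy as np

def sparse_segment_sum(data, indices, segment_ids):
    # output[s] = sum of data[indices[j]] over all j with segment_ids[j] == s.
    num_segments = int(segment_ids.max()) + 1
    out = np.zeros((num_segments,) + data.shape[1:], dtype=data.dtype)
    np.add.at(out, segment_ids, data[indices])   # unbuffered scatter-add
    return out

c = np.array([[1, 2, 3, 4], [-1, -2, -3, -4], [5, 6, 7, 8]])
r = sparse_segment_sum(c, np.array([0, 1]), np.array([0, 0]))
```

`np.add.at` is used instead of plain indexed assignment so that repeated segment ids accumulate rather than overwrite.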

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sparse_gradient::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute
Tsegmentids::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of integer or floating-point values
indices tensor of 32/64-bit signed integer values
segment_ids tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of integer or floating-point values

tf.SparseSoftmaxCrossEntropyWithLogits (TF::SparseSoftmaxCrossEntropyWithLogitsOp)

Computes softmax cross entropy cost and gradients to backpropagate.

Unlike SoftmaxCrossEntropyWithLogits, this operation does not accept a matrix of label probabilities, but rather a single label per row of features. This label is considered to have probability 1.0 for the given row.

Inputs are the logits, not probabilities.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tlabels::mlir::Attributederived attribute

Operands:

Operand Description
features tensor of floating-point values
labels tensor of 32/64-bit signed integer values

Results:

Result Description
loss tensor of floating-point values
backprop tensor of floating-point values

tf.SparseTensorDenseMatMul (TF::SparseTensorDenseMatMulOp)

Multiply SparseTensor (of rank 2) "A" by dense matrix "B".

No validity checking is performed on the indices of A. However, the following input format is recommended for optimal behavior:

If adjoint_a == false, A should be sorted in lexicographically increasing order; use SparseReorder if you're not sure. If adjoint_a == true, A should be sorted in order of increasing dimension 1 (i.e., "column major" order instead of "row major" order).
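
The non-adjoint case can be sketched as a scatter of COO entries into an accumulator. This is a hypothetical reference sketch (no adjoints, and deliberately no index validation, mirroring the note above):

```python
def sparse_dense_matmul(a_indices, a_values, a_shape, b):
    """Hypothetical sketch: rank-2 COO sparse A times dense B.

    a_indices: [[row, col], ...]; a_values: parallel list of values;
    a_shape: [m, k]; b: dense k x n matrix as nested lists.
    """
    m, _ = a_shape
    n = len(b[0])
    out = [[0.0] * n for _ in range(m)]
    # Each nonzero A[i, j] contributes v * B[j, :] to output row i.
    for (i, j), v in zip(a_indices, a_values):
        for col in range(n):
            out[i][col] += v * b[j][col]
    return out

# A = [[1, 0], [0, 2]] in COO form, B = [[3, 4], [5, 6]]
product = sparse_dense_matmul([[0, 0], [1, 1]], [1.0, 2.0], [2, 2],
                              [[3.0, 4.0], [5.0, 6.0]])
```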

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
adjoint_a::mlir::BoolAttrbool attribute
adjoint_b::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
a_indices tensor of 32/64-bit signed integer values
a_values tensor of tf.dtype values
a_shape tensor of 64-bit integer values
b tensor of tf.dtype values

Results:

Result Description
product tensor of tf.dtype values

tf.SparseToDense (TF::SparseToDenseOp)

Converts a sparse representation into a dense tensor.

Builds an array dense with shape output_shape such that

# If sparse_indices is scalar
dense[i] = (i == sparse_indices ? sparse_values : default_value)

# If sparse_indices is a vector, then for each i
dense[sparse_indices[i]] = sparse_values[i]

# If sparse_indices is an n by d matrix, then for each i in [0, n)
dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]

All other values in dense are set to default_value. If sparse_values is a scalar, all sparse indices are set to this single value.

Indices should be sorted in lexicographic order, and indices must not contain any repeats. If validate_indices is true, these properties are checked during execution.
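The n-by-d matrix case above can be sketched in plain Python. This is a hypothetical helper that flattens the dense output and scatters values by computing row-major strides; scalar sparse_values are broadcast to every index, as described:

```python
import functools

def sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value):
    """Hypothetical sketch of SparseToDense for the n-by-d index case."""
    size = functools.reduce(lambda a, b: a * b, output_shape, 1)
    flat = [default_value] * size
    # Row-major strides: stride of the last dimension is 1.
    strides, acc = [], 1
    for dim in reversed(output_shape):
        strides.append(acc)
        acc *= dim
    strides.reverse()
    scalar = not isinstance(sparse_values, list)
    for i, idx in enumerate(sparse_indices):
        offset = sum(s * j for s, j in zip(strides, idx))
        flat[offset] = sparse_values if scalar else sparse_values[i]
    return flat

# Shape [3, 2] with entries at [0, 1] and [2, 0]:
dense = sparse_to_dense([[0, 1], [2, 0]], [3, 2], [5, 7], 0)
```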

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
validate_indices::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
sparse_indices tensor of 32/64-bit signed integer values
output_shape tensor of 32/64-bit signed integer values
sparse_values tensor of tf.dtype values
default_value tensor of tf.dtype values

Results:

Result Description
dense tensor of tf.dtype values

tf.Split (TF::SplitOp)

Splits a tensor into num_split tensors along one dimension.
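
The semantics for a rank-2 input can be sketched with nested lists. This hypothetical helper splits along axis 0 or 1; as with the op, the split dimension must be evenly divisible by num_split:

```python
def split2d(value, split_dim, num_split):
    """Hypothetical sketch of tf.Split for a rank-2 nested-list tensor."""
    if split_dim == 0:
        rows = len(value)
        assert rows % num_split == 0, "dimension must divide evenly"
        step = rows // num_split
        return [value[i * step:(i + 1) * step] for i in range(num_split)]
    cols = len(value[0])
    assert cols % num_split == 0, "dimension must divide evenly"
    step = cols // num_split
    # Slice every row at the same column boundaries.
    return [[row[i * step:(i + 1) * step] for row in value]
            for i in range(num_split)]

halves = split2d([[1, 2, 3, 4], [5, 6, 7, 8]], 1, 2)
```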

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
num_split::mlir::Attributederived attribute

Operands:

Operand Description
split_dim tensor of 32-bit integer values
value tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.SplitV (TF::SplitVOp)

Splits a tensor into num_split tensors along one dimension.
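
Unlike tf.Split, the pieces here may have unequal sizes given by size_splits. A one-dimensional sketch (assuming, as in the TensorFlow API, that at most one size may be -1 and absorbs the remainder):

```python
def split_v(values, size_splits):
    """Hypothetical sketch of tf.SplitV along a single dimension."""
    sizes = list(size_splits)
    if -1 in sizes:
        # The single -1 entry takes whatever is left over.
        i = sizes.index(-1)
        sizes[i] = len(values) - (sum(sizes) + 1)
    out, pos = [], 0
    for s in sizes:
        out.append(values[pos:pos + s])
        pos += s
    return out

pieces = split_v(list(range(10)), [3, -1, 2])
```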

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tlen::mlir::Attributederived attribute
num_split::mlir::Attributederived attribute

Operands:

Operand Description
value tensor of tf.dtype values
size_splits tensor of 32-bit integer or 64-bit integer or 8-bit integer values
split_dim tensor of 32-bit integer values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.Sqrt (TF::SqrtOp)

Computes square root of x element-wise.

I.e., \(y = \sqrt{x} = x^{1/2}\).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.SqrtGrad (TF::SqrtGradOp)

Computes the gradient for the sqrt of x wrt its input.

Specifically, grad = dy * 0.5 / y, where y = sqrt(x), and dy is the corresponding input gradient.
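
The formula follows from the chain rule: if y = sqrt(x), then dy/dx = 1 / (2 * sqrt(x)) = 0.5 / y. A quick finite-difference sanity check:

```python
import math

def sqrt_grad(y, dy):
    """grad = dy * 0.5 / y, where y = sqrt(x)."""
    return dy * 0.5 / y

x, dy, eps = 4.0, 1.0, 1e-6
analytic = sqrt_grad(math.sqrt(x), dy)  # 0.5 / 2.0 = 0.25
numeric = (math.sqrt(x + eps) - math.sqrt(x - eps)) / (2 * eps)
```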

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
y tensor of floating-point or complex values
dy tensor of floating-point or complex values

Results:

Result Description
z tensor of floating-point or complex values

tf.Square (TF::SquareOp)

Computes square of x element-wise.

I.e., \(y = x * x = x^2\).

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.SquaredDifference (TF::SquaredDifferenceOp)

Returns conj(x - y)(x - y) element-wise.
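
For real inputs this is simply (x - y)^2; for complex inputs, conj(x - y)(x - y) equals |x - y|^2, a complex number with zero imaginary part. A scalar sketch:

```python
def squared_difference(x, y):
    """conj(x - y) * (x - y); for real inputs this is just (x - y) ** 2."""
    d = x - y
    return d.conjugate() * d if isinstance(d, complex) else d * d

real_case = squared_difference(5.0, 2.0)       # (5 - 2) ** 2
complex_case = squared_difference(3 + 4j, 0j)  # (3 - 4j) * (3 + 4j)
```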

Traits: AlwaysSpeculatableImplTrait, Commutative, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.Squeeze (TF::SqueezeOp)

Removes dimensions of size 1 from the shape of a tensor.

Given a tensor input, this operation returns a tensor of the same type with all dimensions of size 1 removed. If you don't want to remove all size 1 dimensions, you can remove specific size 1 dimensions by specifying axis.

For example:

# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t)) ==> [2, 3]

Or, to remove specific size 1 dimensions:

# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
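
The shape transformation in both examples can be sketched with a small hypothetical helper:

```python
def squeeze_shape(shape, squeeze_dims=None):
    """Hypothetical sketch of the result shape of tf.Squeeze."""
    if not squeeze_dims:
        # No axes given: drop every size-1 dimension.
        return [d for d in shape if d != 1]
    for axis in squeeze_dims:
        assert shape[axis] == 1, "can only squeeze size-1 dimensions"
    keep = set(range(len(shape))) - set(squeeze_dims)
    return [shape[i] for i in sorted(keep)]

all_squeezed = squeeze_shape([1, 2, 1, 3, 1, 1])
some_squeezed = squeeze_shape([1, 2, 1, 3, 1, 1], [2, 4])
```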

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
squeeze_dims::mlir::ArrayAttr64-bit integer array attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.StackCloseV2 (TF::StackCloseV2Op)

Delete the stack from its resource container.

Operands:

Operand Description
handle tensor of resource values

tf.StackPopV2 (TF::StackPopV2Op)

Pop the element at the top of the stack.

Attributes:

AttributeMLIR TypeDescription
elem_type::mlir::Attributederived attribute

Operands:

Operand Description
handle tensor of resource values

Results:

Result Description
elem tensor of tf.dtype values

tf.StackPushV2 (TF::StackPushV2Op)

Push an element onto the stack.

Attributes:

AttributeMLIR TypeDescription
swap_memory::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
handle tensor of resource values
elem tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.StackV2 (TF::StackV2Op)

A stack that produces elements in first-in last-out order.

Traits: TF::UniqueResourceAllocation

Interfaces: TF_ResourceHandleAllocatorInterface

Attributes:

AttributeMLIR TypeDescription
elem_type::mlir::TypeAttrany type attribute
stack_name::mlir::StringAttrstring attribute

Operands:

Operand Description
max_size tensor of 32-bit integer values

Results:

Result Description
handle tensor of resource values

tf.StatefulPartitionedCall (TF::StatefulPartitionedCallOp)

Returns f(inputs), where f's body is placed and partitioned.

Asynchronously executes a function, potentially across multiple devices but within a single process. The kernel places and partitions a given function's underlying graph, and executes each of the partitioned subgraphs as a function.

Interfaces: CallOpInterface, SymbolUserOpInterface

Attributes:

AttributeMLIR TypeDescription
f::mlir::FlatSymbolRefAttrflat symbol reference attribute
config::mlir::StringAttrstring attribute
config_proto::mlir::StringAttrstring attribute
executor_type::mlir::StringAttrstring attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.StatefulStandardNormalV2 (TF::StatefulStandardNormalV2Op)

Outputs random values from a normal distribution.

The generated values will have mean 0 and standard deviation 1.

Attributes:

AttributeMLIR TypeDescription
shape_dtype::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
algorithm tensor of 64-bit integer values
shape tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.StatefulTruncatedNormal (TF::StatefulTruncatedNormalOp)

Outputs random values from a truncated normal distribution.

The generated values follow a normal distribution with mean 0 and standard deviation 1, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.

Attributes:

AttributeMLIR TypeDescription
shape_dtype::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
algorithm tensor of 64-bit integer values
shape tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.StatefulUniform (TF::StatefulUniformOp)

Outputs random values from a uniform distribution.

The generated values follow a uniform distribution in the range [0, 1). The lower bound 0 is included in the range, while the upper bound 1 is excluded.

Attributes:

AttributeMLIR TypeDescription
shape_dtype::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
algorithm tensor of 64-bit integer values
shape tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.StatefulUniformFullInt (TF::StatefulUniformFullIntOp)

Outputs random integers from a uniform distribution.

The generated values are uniform integers covering the whole range of dtype.

Attributes:

AttributeMLIR TypeDescription
shape_dtype::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
algorithm tensor of 64-bit integer values
shape tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of 32-bit integer or 64-bit integer or 32-bit unsigned integer or 64-bit unsigned integer values

tf.StatefulUniformInt (TF::StatefulUniformIntOp)

Outputs random integers from a uniform distribution.

The generated values are uniform integers in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.

The random integers are slightly biased unless maxval - minval is an exact power of two. The bias is small for values of maxval - minval significantly smaller than the range of the output (either 2^32 or 2^64).
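
The bias is the familiar modulo bias: mapping an exactly uniform source into a range that does not divide the source size leaves some residues with one extra preimage. A tiny illustration with an 8-value source (think "uniform unsigned integer") reduced into a range of 3:

```python
from collections import Counter

# 8 is not a multiple of 3, so residues 0 and 1 each receive one more
# source value than residue 2. With a power-of-two range the counts
# would be exactly equal and the bias would vanish.
counts = Counter(v % 3 for v in range(8))
```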

Attributes:

AttributeMLIR TypeDescription
shape_dtype::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
resource tensor of resource values
algorithm tensor of 64-bit integer values
shape tensor of 32/64-bit signed integer values
minval tensor of 32-bit integer or 64-bit integer or 32-bit unsigned integer or 64-bit unsigned integer values
maxval tensor of 32-bit integer or 64-bit integer or 32-bit unsigned integer or 64-bit unsigned integer values

Results:

Result Description
output tensor of 32-bit integer or 64-bit integer or 32-bit unsigned integer or 64-bit unsigned integer values

tf.StatelessMultinomial (TF::StatelessMultinomialOp)

Draws samples from a multinomial distribution.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tseed::mlir::Attributederived attribute
output_dtype::mlir::Attributederived attribute

Operands:

Operand Description
logits tensor of integer or floating-point values
num_samples tensor of 32-bit integer values
seed tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.StatelessParameterizedTruncatedNormal (TF::StatelessParameterizedTruncatedNormalOp)

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
S::mlir::Attributederived attribute
Tseed::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
seed tensor of 32/64-bit signed integer values
means tensor of 16-bit float or 32-bit float or 64-bit float values
stddevs tensor of 16-bit float or 32-bit float or 64-bit float values
minvals tensor of 16-bit float or 32-bit float or 64-bit float values
maxvals tensor of 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float values

tf.StatelessRandomBinomial (TF::StatelessRandomBinomialOp)

Outputs deterministic pseudorandom numbers from a binomial distribution.

Outputs random values from a binomial distribution.

The outputs are a deterministic function of shape, seed, counts, and probs.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
S::mlir::Attributederived attribute
T::mlir::Attributederived attribute
Tseed::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
seed tensor of 32/64-bit signed integer values
counts tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values
probs tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.StatelessRandomGammaV2 (TF::StatelessRandomGammaV2Op)

Outputs deterministic pseudorandom numbers from a gamma distribution.

Outputs random values from a gamma distribution.

The outputs are a deterministic function of shape, seed, and alpha.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tseed::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
seed tensor of 32/64-bit signed integer values
alpha tensor of 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float values

tf.StatelessRandomGetAlg (TF::StatelessRandomGetAlgOp)

Picks the best counter-based RNG algorithm based on device.

This op picks the best counter-based RNG algorithm based on device.

Results:

Result Description
alg tensor of 32-bit integer values

tf.StatelessRandomGetKeyCounter (TF::StatelessRandomGetKeyCounterOp)

Scrambles seed into key and counter, using the best algorithm based on device.

This op scrambles a shape-[2] seed into a key and a counter, both needed by counter-based RNG algorithms. The scrambling uses the best algorithm based on device. The scrambling is opaque but approximately satisfies the property that different seeds result in different key/counter pairs (which will in turn result in different random numbers).

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tseed::mlir::Attributederived attribute

Operands:

Operand Description
seed tensor of 32/64-bit signed integer values

Results:

Result Description
key tensor of 64-bit unsigned integer values
counter tensor of 64-bit unsigned integer values

tf.StatelessRandomGetKeyCounterAlg (TF::StatelessRandomGetKeyCounterAlgOp)

Picks the best algorithm based on device, and scrambles seed into key and counter.

This op picks the best counter-based RNG algorithm based on device, and scrambles a shape-[2] seed into a key and a counter, both needed by the counter-based algorithm. The scrambling is opaque but approximately satisfies the property that different seeds result in different key/counter pairs (which will in turn result in different random numbers).

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tseed::mlir::Attributederived attribute

Operands:

Operand Description
seed tensor of 32/64-bit signed integer values

Results:

Result Description
key tensor of 64-bit unsigned integer values
counter tensor of 64-bit unsigned integer values
alg tensor of 32-bit integer values

tf.StatelessRandomNormal (TF::StatelessRandomNormalOp)

Outputs deterministic pseudorandom values from a normal distribution.

The generated values will have mean 0 and standard deviation 1.

The outputs are a deterministic function of shape and seed.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tseed::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
seed tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.StatelessRandomNormalV2 (TF::StatelessRandomNormalV2Op)

Outputs deterministic pseudorandom values from a normal distribution.

The generated values will have mean 0 and standard deviation 1.

The outputs are a deterministic function of shape, key, counter and alg.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tshape::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
key tensor of 64-bit unsigned integer values
counter tensor of 64-bit unsigned integer values
alg tensor of 32-bit integer values

Results:

Result Description
output tensor of floating-point values

tf.StatelessRandomPoisson (TF::StatelessRandomPoissonOp)

Outputs deterministic pseudorandom numbers from a Poisson distribution.

Outputs random values from a Poisson distribution.

The outputs are a deterministic function of shape, seed, and lam.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Rtype::mlir::Attributederived attribute
T::mlir::Attributederived attribute
Tseed::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
seed tensor of 32/64-bit signed integer values
lam tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of 16-bit float or 32-bit float or 64-bit float or 32-bit integer or 64-bit integer values

tf.StatelessRandomUniform (TF::StatelessRandomUniformOp)

Outputs deterministic pseudorandom values from a uniform distribution.

The generated values follow a uniform distribution in the range [0, 1). The lower bound 0 is included in the range, while the upper bound 1 is excluded.

The outputs are a deterministic function of shape and seed.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tseed::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
seed tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.StatelessRandomUniformFullInt (TF::StatelessRandomUniformFullIntOp)

Outputs deterministic pseudorandom integers from a uniform distribution.

The generated values are uniform integers covering the whole range of dtype.

The outputs are a deterministic function of shape and seed.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tseed::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
seed tensor of 32-bit integer or 64-bit integer or 32-bit unsigned integer or 64-bit unsigned integer values

Results:

Result Description
output tensor of 32-bit integer or 64-bit integer or 32-bit unsigned integer or 64-bit unsigned integer values

tf.StatelessRandomUniformFullIntV2 (TF::StatelessRandomUniformFullIntV2Op)

Outputs deterministic pseudorandom integers from a uniform distribution.

The generated values are uniform integers covering the whole range of dtype.

The outputs are a deterministic function of shape, key, counter and alg.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tshape::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
key tensor of 64-bit unsigned integer values
counter tensor of 64-bit unsigned integer values
alg tensor of 32-bit integer values

Results:

Result Description
output tensor of 32-bit integer or 64-bit integer or 32-bit unsigned integer or 64-bit unsigned integer values

tf.StatelessRandomUniformInt (TF::StatelessRandomUniformIntOp)

Outputs deterministic pseudorandom integers from a uniform distribution.

The generated values follow a uniform distribution in the range [minval, maxval).

The outputs are a deterministic function of shape, seed, minval, and maxval.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tseed::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
seed tensor of 32/64-bit signed integer values
minval tensor of 32/64-bit signed integer values
maxval tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.StatelessRandomUniformIntV2 (TF::StatelessRandomUniformIntV2Op)

Outputs deterministic pseudorandom integers from a uniform distribution.

The generated values follow a uniform distribution in the range [minval, maxval).

The outputs are a deterministic function of shape, key, counter, alg, minval and maxval.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tshape::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
key tensor of 64-bit unsigned integer values
counter tensor of 64-bit unsigned integer values
alg tensor of 32-bit integer values
minval tensor of 32-bit integer or 64-bit integer or 32-bit unsigned integer or 64-bit unsigned integer values
maxval tensor of 32-bit integer or 64-bit integer or 32-bit unsigned integer or 64-bit unsigned integer values

Results:

Result Description
output tensor of 32-bit integer or 64-bit integer or 32-bit unsigned integer or 64-bit unsigned integer values

tf.StatelessRandomUniformV2 (TF::StatelessRandomUniformV2Op)

Outputs deterministic pseudorandom values from a uniform distribution.

The generated values follow a uniform distribution in the range [0, 1). The lower bound 0 is included in the range, while the upper bound 1 is excluded.

The outputs are a deterministic function of shape, key, counter and alg.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tshape::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
key tensor of 64-bit unsigned integer values
counter tensor of 64-bit unsigned integer values
alg tensor of 32-bit integer values

Results:

Result Description
output tensor of floating-point values

tf.StatelessTruncatedNormal (TF::StatelessTruncatedNormalOp)

Outputs deterministic pseudorandom values from a truncated normal distribution.

The generated values follow a normal distribution with mean 0 and standard deviation 1, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.

The outputs are a deterministic function of shape and seed.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tseed::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
seed tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.StatelessTruncatedNormalV2 (TF::StatelessTruncatedNormalV2Op)

Outputs deterministic pseudorandom values from a truncated normal distribution.

The generated values follow a normal distribution with mean 0 and standard deviation 1, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.

The outputs are a deterministic function of shape, key, counter and alg.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tshape::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
key tensor of 64-bit unsigned integer values
counter tensor of 64-bit unsigned integer values
alg tensor of 32-bit integer values

Results:

Result Description
output tensor of floating-point values

tf.StaticRegexFullMatch (TF::StaticRegexFullMatchOp)

Check if the input matches the regex pattern.

The input is a string tensor of any shape. The pattern is the regular expression to be matched with every element of the input tensor. The boolean values (True or False) of the output tensor indicate if the input matches the regex pattern provided.

The pattern follows the re2 syntax (https://github.com/google/re2/wiki/Syntax)
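
The elementwise full-match semantics resemble Python's `re.fullmatch` applied to each element. This sketch uses Python's `re` module, whose syntax overlaps with but is not identical to RE2, so treat it as an approximation for simple patterns:

```python
import re

def static_regex_full_match(inputs, pattern):
    """Approximate sketch of tf.StaticRegexFullMatch using Python's re."""
    compiled = re.compile(pattern)
    # fullmatch requires the ENTIRE string to match, as the op does.
    return [compiled.fullmatch(s) is not None for s in inputs]

flags = static_regex_full_match(["abc123", "abc", "123"], r"[a-z]+\d+")
```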

Traits: AlwaysSpeculatableImplTrait, SameOperandsAndResultShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
pattern::mlir::StringAttrstring attribute

Operands:

Operand Description
input tensor of string values

Results:

Result Description
output tensor of bool values

tf.StopGradient (TF::StopGradientOp)

Stops gradient computation.

When executed in a graph, this op outputs its input tensor as-is.

When building ops to compute gradients, this op prevents the contribution of its inputs from being taken into account. Normally, the gradient generator adds ops to a graph to compute the derivatives of a specified 'loss' by recursively finding the inputs that contributed to its computation. If you insert this op in the graph, its inputs are masked from the gradient generator. They are not taken into account for computing gradients.

This is useful any time you want to compute a value with TensorFlow but need to pretend that the value was a constant. For example, the softmax function for a vector x can be written as


  def softmax(x):
    numerator = tf.exp(x)
    denominator = tf.reduce_sum(numerator)
    return numerator / denominator

This, however, is susceptible to overflow if the values in x are large. An alternative, more stable way is to subtract the maximum of x from each of the values.


  def stable_softmax(x):
    z = x - tf.reduce_max(x)
    numerator = tf.exp(z)
    denominator = tf.reduce_sum(numerator)
    return numerator / denominator

However, when we backprop through the softmax to x, we don't want to backprop through the tf.reduce_max(x) calculation (if the max values are not unique, the gradient could flow to the wrong input); we want to treat it as a constant. Therefore, we should write this out as


  def stable_softmax(x):
    z = x - tf.stop_gradient(tf.reduce_max(x))
    numerator = tf.exp(z)
    denominator = tf.reduce_sum(numerator)
    return numerator / denominator

Some other examples include:

  • The EM algorithm where the M-step should not involve backpropagation through the output of the E-step.
  • Contrastive divergence training of Boltzmann machines where, when differentiating the energy function, the training must not backpropagate through the graph that generated the samples from the model.
  • Adversarial training, where no backprop should happen through the adversarial example generation process.
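The overflow behavior that motivates the stable form can be reproduced with a minimal NumPy sketch of the document's two softmax variants (NumPy stands in for TensorFlow here; the stop_gradient aspect only matters during backprop and is noted in the comments):

```python
import numpy as np

def naive_softmax(x):
    # Direct translation of the naive formula: overflows for large inputs.
    numerator = np.exp(x)
    return numerator / np.sum(numerator)

def stable_softmax(x):
    # Subtracting the max keeps exp() in a safe range. During backprop,
    # stop_gradient would treat this max as a constant.
    z = x - np.max(x)
    numerator = np.exp(z)
    return numerator / np.sum(numerator)

x = np.array([1000.0, 1000.0, 1000.0])
with np.errstate(over="ignore", invalid="ignore"):
    naive = naive_softmax(x)   # exp(1000) overflows to inf, inf/inf -> nan
stable = stable_softmax(x)     # z is all zeros, result is uniform
```

The naive form returns all-NaN for this input, while the stable form returns a valid probability distribution.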

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.StoreMinibatchStatisticsInFdo (TF::StoreMinibatchStatisticsInFdoOp)

Store the number of IDs and unique IDs in an FDO table.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sample_count::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_replica::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
feature_width::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
num_sc_per_chip::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
table_name::mlir::StringAttrstring attribute
mini_batch_splits::mlir::StringAttrstring attribute

Operands:

Operand Description
program_key tensor of string values
max_ids tensor of 32-bit integer values
max_uniques tensor of 32-bit integer values

tf.StridedSlice (TF::StridedSliceOp)

Return a strided slice from input.

Note, most Python users will want to use the Python Tensor.__getitem__ or Variable.__getitem__ rather than this op directly.

The goal of this op is to produce a new tensor with a subset of the elements from the n dimensional input tensor. The subset is chosen using a sequence of m sparse range specifications encoded into the arguments of this function. Note, in some cases m could be equal to n, but this need not be the case. Each range specification entry can be one of the following:

  • An ellipsis (...). Ellipses are used to imply zero or more dimensions of full-dimension selection and are produced using ellipsis_mask. For example, foo[...] is the identity slice.

  • A new axis. This is used to insert a new shape=1 dimension and is produced using new_axis_mask. For example, foo[:, ...] where foo is shape (3, 4) produces a (1, 3, 4) tensor.

  • A range begin:end:stride. This is used to specify how much to choose from a given dimension. stride can be any integer but 0. begin is an integer which represents the index of the first value to select while end represents the index of the last value to select. The number of values selected in each dimension is end - begin if stride > 0 and begin - end if stride < 0. begin and end can be negative, where -1 is the last element and -2 is the second to last. begin_mask controls whether to replace the explicitly given begin with an implicit effective value of 0 if stride > 0 and -1 if stride < 0. end_mask is analogous but produces the number required to create the largest open interval. For example, given a shape (3,) tensor, foo[:] has an effective begin and end of 0 and 3. Do not assume this is equivalent to foo[0:-1], which has an effective begin and end of 0 and 2. Another example is foo[-2::-1], which reverses the first dimension of a tensor while dropping its last element. For example, foo = [1,2,3,4]; foo[-2::-1] is [3,2,1].

  • A single index. This is used to keep only elements that have a given index. For example, foo[2, :] on a shape (5, 6) tensor produces a shape (6,) tensor. This is encoded in begin, end, and shrink_axis_mask.

Each conceptual range specification is encoded in the op's arguments. This encoding is best understood by considering a non-trivial example. In particular, foo[1, 2:4, None, ..., :-3:-1, :] will be encoded as

begin = [1, 2, x, x, 0, x] # x denotes don't care (usually 0)
end = [2, 4, x, x, -3, x]
strides = [1, 1, x, x, -1, 1]
begin_mask = 1<<4 | 1<<5 = 48
end_mask = 1<<5 = 32
ellipsis_mask = 1<<3 = 8
new_axis_mask = 1<<2 = 4
shrink_axis_mask = 1<<0 = 1

In this case if foo.shape is (5, 5, 5, 5, 5, 5) the final shape of the slice becomes (2, 1, 5, 5, 2, 5). Let us walk step by step through each argument specification.

  1. The first argument in the example slice is turned into begin = 1 and end = begin + 1 = 2. To disambiguate from the original spec 2:4 we also set the appropriate bit in shrink_axis_mask.

  2. 2:4 contributes 2, 4, and 1 to begin, end, and stride, respectively. All masks have zero bits contributed.

  3. None is a synonym for tf.newaxis. This means insert a dimension of size 1 in the final shape. Dummy values are contributed to begin, end and stride, while the new_axis_mask bit is set.

  4. ... grabs the full ranges from as many dimensions as needed to fully specify a slice for every dimension of the input shape.

  5. :-3:-1 shows the use of negative indices. A negative index i associated with a dimension that has shape s is converted to a positive index s + i. So -1 becomes s-1 (i.e. the last element). This conversion is done internally so begin, end and strides receive x, -3, and -1. The appropriate begin_mask bit is set to indicate the start range is the full range (ignoring the x).

  6. : indicates that the entire contents of the corresponding dimension is selected. This is equivalent to :: or 0::1. begin, end, and strides receive 0, 0, and 1, respectively. The appropriate bits in begin_mask and end_mask are also set.

Requirements:

  • 0 != strides[i] for i in [0, m)
  • ellipsis_mask must be a power of two (at most one ellipsis)
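NumPy slicing follows the same semantics as this encoding, so the final shape claimed in the walkthrough can be checked directly:

```python
import numpy as np

foo = np.zeros((5, 5, 5, 5, 5, 5))
# The spec foo[1, 2:4, None, ..., :-3:-1, :] decomposes as:
#   1       -> single index, dimension removed (shrink_axis)
#   2:4     -> size-2 range
#   None    -> new size-1 axis
#   ...     -> the two remaining full dimensions (5, 5)
#   :-3:-1  -> the last two elements, reversed (size 2)
#   :       -> one full dimension (5)
sliced = foo[1, 2:4, None, ..., :-3:-1, :]
# sliced.shape is (2, 1, 5, 5, 2, 5)
```

This matches the shape given in the walkthrough for a (5, 5, 5, 5, 5, 5) input.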

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
begin_mask::mlir::IntegerAttr64-bit signless integer attribute
end_mask::mlir::IntegerAttr64-bit signless integer attribute
ellipsis_mask::mlir::IntegerAttr64-bit signless integer attribute
new_axis_mask::mlir::IntegerAttr64-bit signless integer attribute
shrink_axis_mask::mlir::IntegerAttr64-bit signless integer attribute
Index::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
begin tensor of 16-bit integer or 32-bit integer or 64-bit integer values
end tensor of 16-bit integer or 32-bit integer or 64-bit integer values
strides tensor of 16-bit integer or 32-bit integer or 64-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.StridedSliceGrad (TF::StridedSliceGradOp)

Returns the gradient of StridedSlice.

Since StridedSlice cuts out pieces of its input, which has size shape, its gradient will have the same shape (which is passed here as shape). The gradient will be zero in any element that the slice does not select.

Arguments are the same as StridedSlice, with the exception that dy is the input gradient to be propagated and shape is the shape of StridedSlice's input.
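The gradient rule described above can be sketched in NumPy: the incoming gradient is scattered back into a zero tensor of the input's shape. The slice spec `sl` and names here are illustrative, not part of the op's interface:

```python
import numpy as np

shape = (4, 5)                  # shape of StridedSlice's original input
x = np.arange(20.0).reshape(shape)
sl = np.s_[1:3, ::2]            # an example strided slice spec
dy = np.ones_like(x[sl])        # incoming gradient, same shape as the slice

dx = np.zeros(shape)            # gradient w.r.t. the input
dx[sl] = dy                     # selected elements receive dy; rest stay zero
```

Every element the slice did not select keeps its zero gradient, as the text states.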

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
begin_mask::mlir::IntegerAttr64-bit signless integer attribute
end_mask::mlir::IntegerAttr64-bit signless integer attribute
ellipsis_mask::mlir::IntegerAttr64-bit signless integer attribute
new_axis_mask::mlir::IntegerAttr64-bit signless integer attribute
shrink_axis_mask::mlir::IntegerAttr64-bit signless integer attribute
Index::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values
begin tensor of 32/64-bit signed integer values
end tensor of 32/64-bit signed integer values
strides tensor of 32/64-bit signed integer values
dy tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.StringFormat (TF::StringFormatOp)

Formats a string template using a list of tensors.

Formats a string template using a list of tensors, pretty-printing tensor summaries.
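The template mechanism (a `strtemplate` string with one `placeholder` occurrence per input tensor summary) can be sketched in plain Python. `format_template` and its arguments are illustrative names, not the TensorFlow API:

```python
def format_template(template, placeholder, summaries):
    # Replace successive occurrences of the placeholder with successive
    # tensor summaries, left to right.
    out = template
    for s in summaries:
        out = out.replace(placeholder, s, 1)
    return out

msg = format_template("loss: {} step: {}", "{}", ["0.25", "42"])
# msg == "loss: 0.25 step: 42"
```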

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
strtemplate::mlir::StringAttrstring attribute
placeholder::mlir::StringAttrstring attribute
summarize::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

Results:

Result Description
output tensor of string values

tf.StringJoin (TF::StringJoinOp)

Joins the strings in the given list of string tensors into one tensor;

with the given separator (default is an empty separator).

Examples:

  s = ["hello", "world", "tensorflow"]
  tf.strings.join(s, " ")  # b'hello world tensorflow'

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
separator::mlir::StringAttrstring attribute
N::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of string values

Results:

Result Description
output tensor of string values

tf.StringStrip (TF::StringStripOp)

Strip leading and trailing whitespaces from the Tensor.

Examples:

  tf.strings.strip(["\nTensorFlow", " The python library "]).numpy()
  array([b'TensorFlow', b'The python library'], dtype=object)

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
input tensor of string values

Results:

Result Description
output tensor of string values

tf.StringToHashBucketFast (TF::StringToHashBucketFastOp)

Converts each string in the input Tensor to its hash mod a number of buckets.

The hash function is deterministic on the content of the string within the process and will never change. However, it is not suitable for cryptography. This function may be used when CPU time is scarce and inputs are trusted or unimportant. There is a risk of adversaries constructing inputs that all hash to the same bucket. To prevent this problem, use a strong hash function with tf.string_to_hash_bucket_strong.

Examples:

  tf.strings.to_hash_bucket_fast(["Hello", "TensorFlow", "2.x"], 3).numpy()
  array([0, 2, 2])
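The bucketing itself is just a deterministic hash followed by a modulo. A stand-in sketch in plain Python, using MD5 in place of the FarmHash-based fingerprint TensorFlow uses internally (so these bucket numbers will not match the op's output, but the shape of the computation is the same):

```python
import hashlib

def to_hash_bucket(strings, num_buckets):
    # Deterministic within a process, stable across runs, not cryptographic.
    return [
        int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "little") % num_buckets
        for s in strings
    ]

buckets = to_hash_bucket(["Hello", "TensorFlow", "2.x"], 3)
```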

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
num_buckets::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1

Operands:

Operand Description
input tensor of string values

Results:

Result Description
output tensor of 64-bit integer values

tf.Sub (TF::SubOp)

Returns x - y element-wise.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_CwiseBinary, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.Sum (TF::SumOp)

Computes the sum of elements across dimensions of a tensor.

Reduces input along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.
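The rank-reduction behavior of keep_dims is easiest to see with NumPy's equivalent keepdims argument:

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])
s = np.sum(x, axis=1)                  # rank reduced by 1: shape (2,)
k = np.sum(x, axis=1, keepdims=True)   # reduced dim retained with length 1: shape (2, 1)
```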

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
keep_dims::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tidx::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of number values
reduction_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.SummaryWriter (TF::SummaryWriterOp)

Returns a handle to be used to access a summary writer.

The summary writer is an in-graph resource which can be used by ops to write summaries to event files.

writer: the summary writer resource. Scalar handle.

Interfaces: ResourceHandleAllocatorInterface

Attributes:

AttributeMLIR TypeDescription
shared_name::mlir::StringAttrstring attribute
container::mlir::StringAttrstring attribute

Results:

Result Description
writer tensor of resource values

tf.Svd (TF::SvdOp)

Computes the singular value decompositions of one or more matrices.

Computes the SVD of each inner matrix in input such that input[..., :, :] = u[..., :, :] * diag(s[..., :]) * transpose(v[..., :, :])

# a is a tensor containing a batch of matrices.
# s is a tensor of singular values for each matrix.
# u is the tensor containing the left singular vectors for each matrix.
# v is the tensor containing the right singular vectors for each matrix.
s, u, v = svd(a)
s, _, _ = svd(a, compute_uv=False)
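The reconstruction identity can be verified with NumPy's batched SVD. Note that np.linalg.svd returns v already transposed (vh), whereas this op returns v itself:

```python
import numpy as np

a = np.random.default_rng(0).standard_normal((2, 4, 3))  # batch of matrices
u, s, vh = np.linalg.svd(a, full_matrices=False)
# u @ diag(s) @ transpose(v), written with broadcasting instead of diag():
recon = (u * s[..., None, :]) @ vh
```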

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
compute_uv::mlir::BoolAttrbool attribute
full_matrices::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

Results:

Result Description
s tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values
u tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values
v tensor of 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float values

tf.SymbolicGradient (TF::SymbolicGradientOp)

Computes the gradient function for function f via backpropagation.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
f::mlir::SymbolRefAttrsymbol reference attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.TakeDataset (TF::TakeDatasetOp)

Creates a dataset that contains count elements from the input_dataset.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute

Operands:

Operand Description
input_dataset tensor of variant values
count tensor of 64-bit integer values

Results:

Result Description
handle tensor of variant values

tf.TakeWhileDataset (TF::TakeWhileDatasetOp)

Creates a dataset that stops iteration when predicate is false.

The predicate function must return a scalar boolean and accept the following arguments:

  • One tensor for each component of an element of input_dataset.
  • One tensor for each value in other_arguments.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
predicate::mlir::SymbolRefAttrsymbol reference attribute
output_types::mlir::ArrayAttrtype array attribute with at least 1 elements
output_shapes::mlir::ArrayAttrtensorflow shape attribute array with at least 1 elements
metadata::mlir::StringAttrstring attribute
Targuments::mlir::Attributederived attribute

Operands:

Operand Description
input_dataset tensor of variant values
other_arguments variadic of tensor of tf.dtype values

Results:

Result Description
handle tensor of variant values

tf.Tan (TF::TanOp)

Computes tan of x element-wise.

Given an input tensor, this function computes tangent of every element in the tensor. Input range is (-inf, inf) and output range is (-inf, inf). If input lies outside the boundary, nan is returned.

  x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")])
  tf.math.tan(x) ==> [nan 0.45231566 -0.5463025 1.5574077 2.572152 -1.7925274 0.32097113 nan]

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.Tanh (TF::TanhOp)

Computes hyperbolic tangent of x element-wise.

Given an input tensor, this function computes hyperbolic tangent of every element in the tensor. Input range is [-inf, inf] and output range is [-1,1].

  x = tf.constant([-float("inf"), -5, -0.5, 1, 1.2, 2, 3, float("inf")])
  tf.math.tanh(x)

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_LayoutAgnostic

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values

Results:

Result Description
y tensor of floating-point or complex values

tf.TanhGrad (TF::TanhGradOp)

Computes the gradient for the tanh of x wrt its input.

Specifically, grad = dy * (1 - y*y), where y = tanh(x), and dy is the corresponding input gradient.
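The identity grad = dy * (1 - y*y) can be cross-checked in NumPy against a central finite difference of tanh:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 9)
y = np.tanh(x)
dy = np.ones_like(x)             # upstream gradient

grad = dy * (1.0 - y * y)        # the identity this op implements

# Numerical derivative of tanh for comparison.
eps = 1e-6
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)
```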

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
y tensor of floating-point or complex values
dy tensor of floating-point or complex values

Results:

Result Description
z tensor of floating-point or complex values

tf.TensorArrayCloseV3 (TF::TensorArrayCloseV3Op)

Delete the TensorArray from its resource container.

This enables the user to close and release the resource in the middle of a step/run.

Operands:

Operand Description
handle tensor of resource values

tf.TensorArrayConcatV3 (TF::TensorArrayConcatV3Op)

Concatenates the elements from the TensorArray into one output value.

Takes T elements of shapes

  (n0 x d0 x d1 x ...), (n1 x d0 x d1 x ...), ..., (n(T-1) x d0 x d1 x ...)

and concatenates them into a Tensor of shape:

  (n0 + n1 + ... + n(T-1) x d0 x d1 x ...)

All elements must have the same shape (excepting the first dimension).
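The shape arithmetic above is ordinary axis-0 concatenation; a NumPy sketch, including the lengths output this op also produces:

```python
import numpy as np

# Three elements of shapes (n_i x d0) with first dims 2, 1, 3 and d0 = 4.
elems = [np.zeros((2, 4)), np.zeros((1, 4)), np.zeros((3, 4))]
value = np.concatenate(elems, axis=0)   # shape (2 + 1 + 3, 4) = (6, 4)
lengths = np.array([e.shape[0] for e in elems], dtype=np.int64)
```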

Attributes:

AttributeMLIR TypeDescription
element_shape_except0::mlir::AttributeTensorFlow shape attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
handle tensor of resource values
flow_in tensor of 32-bit float values

Results:

Result Description
value tensor of tf.dtype values
lengths tensor of 64-bit integer values

tf.TensorArrayGatherV3 (TF::TensorArrayGatherV3Op)

Gather specific elements from the TensorArray into output value.

All elements selected by indices must have the same shape.

Attributes:

AttributeMLIR TypeDescription
element_shape::mlir::AttributeTensorFlow shape attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
handle tensor of resource values
indices tensor of 32-bit integer values
flow_in tensor of 32-bit float values

Results:

Result Description
value tensor of tf.dtype values

tf.TensorArrayGradV3 (TF::TensorArrayGradV3Op)

Creates a TensorArray for storing the gradients of values in the given handle.

If the given TensorArray gradient already exists, returns a reference to it.

Locks the size of the original TensorArray by disabling its dynamic size flag.

A note about the input flow_in:

The handle flow_in forces the execution of the gradient lookup to occur only after certain other operations have occurred. For example, when the forward TensorArray is dynamically sized, writes to this TensorArray may resize the object. The gradient TensorArray is statically sized based on the size of the forward TensorArray when this operation executes. Furthermore, the size of the forward TensorArray is frozen by this call. As a result, the flow is used to ensure that the call to generate the gradient TensorArray only happens after all writes are executed.

In the case of dynamically sized TensorArrays, gradient computation should only be performed on read operations that have themselves been chained via flow to occur only after all writes have executed. That way the final size of the forward TensorArray is known when this operation is called.

A note about the source attribute:

TensorArray gradient calls use an accumulator TensorArray object. If multiple gradients are calculated and run in the same session, the multiple gradient nodes may accidentally flow through the same accumulator TensorArray. This double counts and generally breaks the TensorArray gradient flow.

The solution is to identify which gradient call this particular TensorArray gradient is being called in. This is performed by identifying a unique string (e.g. "gradients", "gradients_1", ...) from the input gradient Tensor's name. This string is used as a suffix when creating the TensorArray gradient object here (the attribute source).

The attribute source is added as a suffix to the forward TensorArray's name when performing the creation / lookup, so that each separate gradient calculation gets its own TensorArray accumulator.

Attributes:

AttributeMLIR TypeDescription
source::mlir::StringAttrstring attribute

Operands:

Operand Description
handle tensor of resource values
flow_in tensor of 32-bit float values

Results:

Result Description
grad_handle tensor of resource values
flow_out tensor of 32-bit float values

tf.TensorArrayReadV3 (TF::TensorArrayReadV3Op)

Read an element from the TensorArray into output value.

Attributes:

AttributeMLIR TypeDescription
dtype::mlir::Attributederived attribute

Operands:

Operand Description
handle tensor of resource values
index tensor of 32-bit integer values
flow_in tensor of 32-bit float values

Results:

Result Description
value tensor of tf.dtype values

tf.TensorArrayScatterV3 (TF::TensorArrayScatterV3Op)

Scatter the data from the input value into specific TensorArray elements.

indices must be a vector; its length must match the first dim of value.

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
handle tensor of resource values
indices tensor of 32-bit integer values
value tensor of tf.dtype values
flow_in tensor of 32-bit float values

Results:

Result Description
flow_out tensor of 32-bit float values

tf.TensorArraySizeV3 (TF::TensorArraySizeV3Op)

Get the current size of the TensorArray.

Operands:

Operand Description
handle tensor of resource values
flow_in tensor of 32-bit float values

Results:

Result Description
size tensor of 32-bit integer values

tf.TensorArraySplitV3 (TF::TensorArraySplitV3Op)

Split the data from the input value into TensorArray elements.

Assuming that lengths takes on values

  (n0, n1, ..., n(T-1))

and that value has shape

  (n0 + n1 + ... + n(T-1) x d0 x d1 x ...),

this splits values into a TensorArray with T tensors.

TensorArray index t will be the subtensor of values with starting position

  (n0 + n1 + ... + n(t-1), 0, 0, ...)

and having size

  nt x d0 x d1 x ...
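The split positions described above are running sums of lengths; a NumPy sketch:

```python
import numpy as np

value = np.arange(12.0).reshape(6, 2)   # shape (n0 + n1 + n2) x d0
lengths = np.array([1, 2, 3])
# Starting positions (n0 + ... + n(t-1)) for each element t.
starts = np.concatenate(([0], np.cumsum(lengths)[:-1]))  # [0, 1, 3]
parts = [value[s:s + n] for s, n in zip(starts, lengths)]
```

Element t has shape (lengths[t], 2), matching the "nt x d0 x d1 x ..." description.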

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
handle tensor of resource values
value tensor of tf.dtype values
lengths tensor of 64-bit integer values
flow_in tensor of 32-bit float values

Results:

Result Description
flow_out tensor of 32-bit float values

tf.TensorArrayV3 (TF::TensorArrayV3Op)

An array of Tensors of given size.

Write data via Write and read via Read or Pack.

Attributes:

AttributeMLIR TypeDescription
dtype::mlir::TypeAttrany type attribute
element_shape::mlir::AttributeTensorFlow shape attribute
dynamic_size::mlir::BoolAttrbool attribute
clear_after_read::mlir::BoolAttrbool attribute
identical_element_shapes::mlir::BoolAttrbool attribute
tensor_array_name::mlir::StringAttrstring attribute

Operands:

Operand Description
size tensor of 32-bit integer values

Results:

Result Description
handle tensor of resource values
flow tensor of 32-bit float values

tf.TensorArrayWriteV3 (TF::TensorArrayWriteV3Op)

Push an element onto the TensorArray.

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
handle tensor of resource values
index tensor of 32-bit integer values
value tensor of tf.dtype values
flow_in tensor of 32-bit float values

Results:

Result Description
flow_out tensor of 32-bit float values

tf.TensorListConcatV2 (TF::TensorListConcatV2Op)

Concats all tensors in the list along the 0th dimension.

Requires that all tensors have the same shape except the first dimension.

input_handle: The input list.
element_shape: The shape of the uninitialized elements in the list. If the first dimension is not -1, it is assumed that all list elements have the same leading dim.
leading_dims: The list of leading dims of uninitialized list elements. Used if the leading dim of input_handle.element_shape or the element_shape input arg is not already set.
tensor: The concatenated result.
lengths: Output tensor containing the sizes of the 0th dimension of tensors in the list, used for computing the gradient.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
shape_type::mlir::Attributederived attribute
element_dtype::mlir::Attributederived attribute

Operands:

Operand Description
input_handle tensor of variant values
element_shape tensor of 32/64-bit signed integer values
leading_dims tensor of 64-bit integer values

Results:

Result Description
tensor tensor of tf.dtype values
lengths tensor of 64-bit integer values

tf.TensorListElementShape (TF::TensorListElementShapeOp)

The shape of the elements of the given list, as a tensor.

input_handle: the list
element_shape: the shape of elements of the list

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
shape_type::mlir::Attributederived attribute

Operands:

Operand Description
input_handle tensor of variant values

Results:

Result Description
element_shape tensor of 32/64-bit signed integer values

tf.TensorListFromTensor (TF::TensorListFromTensorOp)

Creates a TensorList which, when stacked, has the value of tensor.

Each tensor in the result list corresponds to one row of the input tensor.

tensor: The input tensor.
output_handle: The list.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
element_dtype::mlir::Attributederived attribute
shape_type::mlir::Attributederived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values
element_shape tensor of 32/64-bit signed integer values

Results:

Result Description
output_handle tensor of variant values

tf.TensorListGather (TF::TensorListGatherOp)

Creates a Tensor by indexing into the TensorList.

Each row in the produced Tensor corresponds to the element in the TensorList specified by the given index (see tf.gather).

input_handle: The input tensor list.
indices: The indices used to index into the list.
values: The tensor.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
element_dtype::mlir::Attributederived attribute

Operands:

Operand Description
input_handle tensor of variant values
indices tensor of 32-bit integer values
element_shape tensor of 32-bit integer values

Results:

Result Description
values tensor of tf.dtype values

tf.TensorListGetItem (TF::TensorListGetItemOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
element_dtype::mlir::Attributederived attribute

Operands:

Operand Description
input_handle tensor of variant values
index tensor of 32-bit integer values
element_shape tensor of 32-bit integer values

Results:

Result Description
item tensor of tf.dtype values

tf.TensorListLength (TF::TensorListLengthOp)

Returns the number of tensors in the input tensor list.

input_handle: the input list
length: the number of tensors in the list

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
input_handle tensor of variant values

Results:

Result Description
length tensor of 32-bit integer values

tf.TensorListPopBack (TF::TensorListPopBackOp)

Returns the last element of the input list as well as a list with all but that element.

Fails if the list is empty.

input_handle: the input list
tensor: the withdrawn last element of the list
element_dtype: the type of elements in the list
element_shape: the shape of the output tensor

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
element_dtype::mlir::Attributederived attribute

Operands:

Operand Description
input_handle tensor of variant values
element_shape tensor of 32-bit integer values

Results:

Result Description
output_handle tensor of variant values
tensor tensor of tf.dtype values

tf.TensorListPushBack (TF::TensorListPushBackOp)

Returns a list which has the passed-in Tensor as last element and the other elements of the given list in input_handle.

tensor: The tensor to put on the list.
input_handle: The old list.
output_handle: A list with the elements of the old list followed by tensor.
element_dtype: the type of elements in the list.
element_shape: a shape compatible with that of elements in the list.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
element_dtype | ::mlir::Attribute | derived attribute

Operands:

Operand Description
input_handle tensor of variant values
tensor tensor of tf.dtype values

Results:

Result Description
output_handle tensor of variant values

tf.TensorListReserve (TF::TensorListReserveOp)

List of the given size with empty elements.

element_shape: the shape of the future elements of the list
num_elements: the number of elements to reserve
handle: the output list
element_dtype: the desired type of elements in the list.
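A minimal pure-Python sketch of the reservation semantics, ignoring element_shape and element_dtype (the helper name is illustrative, not part of the op):

```python
def tensor_list_reserve(num_elements):
    """Return a list of `num_elements` empty slots, to be filled later
    (e.g. by TensorListSetItem)."""
    return [None] * num_elements

handle = tensor_list_reserve(3)
# handle == [None, None, None]
```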

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
shape_type | ::mlir::Attribute | derived attribute
element_dtype | ::mlir::Attribute | derived attribute

Operands:

Operand Description
element_shape tensor of 32/64-bit signed integer values
num_elements tensor of 32-bit integer values

Results:

Result Description
handle tensor of variant values

tf.TensorListResize (TF::TensorListResizeOp)

Resizes the list.

input_handle: the input list
size: size of the output list

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
input_handle tensor of variant values
size tensor of 32-bit integer values

Results:

Result Description
output_handle tensor of variant values

tf.TensorListScatterIntoExistingList (TF::TensorListScatterIntoExistingListOp)

Scatters tensor at indices in an input list.

Each member of the TensorList corresponds to one row of the input tensor, specified by the given index (see tf.gather).

input_handle: The list to scatter into.
tensor: The input tensor.
indices: The indices used to index into the list.
output_handle: The TensorList.
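Ignoring the variant-tensor encoding, a pure-Python sketch of the scatter semantics (the helper name is illustrative, not part of the op):

```python
def tensor_list_scatter_into_existing_list(input_handle, tensor, indices):
    """Row i of `tensor` is written to list position indices[i]."""
    output_handle = list(input_handle)
    for row, index in zip(tensor, indices):
        output_handle[index] = row
    return output_handle

result = tensor_list_scatter_into_existing_list(
    [[0, 0], [0, 0], [0, 0]],   # the list to scatter into
    [[1, 2], [3, 4]],           # the input tensor (one row per update)
    [2, 0])                     # where each row goes
# result == [[3, 4], [0, 0], [1, 2]]
```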

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
element_dtype | ::mlir::Attribute | derived attribute

Operands:

Operand Description
input_handle tensor of variant values
tensor tensor of tf.dtype values
indices tensor of 32-bit integer values

Results:

Result Description
output_handle tensor of variant values

tf.TensorListSetItem (TF::TensorListSetItemOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
resize_if_index_out_of_bounds | ::mlir::BoolAttr | bool attribute
element_dtype | ::mlir::Attribute | derived attribute

Operands:

Operand Description
input_handle tensor of variant values
index tensor of 32-bit integer values
item tensor of tf.dtype values

Results:

Result Description
output_handle tensor of variant values

tf.TensorListStack (TF::TensorListStackOp)

Stacks all tensors in the list.

Requires that all tensors have the same shape.

input_handle: the input list
tensor: the gathered result
num_elements: optional. If not -1, the number of elements in the list.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
num_elements | ::mlir::IntegerAttr | 64-bit signless integer attribute
element_dtype | ::mlir::Attribute | derived attribute

Operands:

Operand Description
input_handle tensor of variant values
element_shape tensor of 32-bit integer values

Results:

Result Description
tensor tensor of tf.dtype values

tf.TensorScatterAdd (TF::TensorScatterAddOp)

Adds sparse updates to an existing tensor according to indices.

This operation creates a new tensor by adding sparse updates to the passed in tensor. This operation is very similar to tf.compat.v1.scatter_nd_add, except that the updates are added onto an existing tensor (as opposed to a variable). If the memory for the existing tensor cannot be re-used, a copy is made and updated.

indices is an integer tensor containing indices into a new tensor of shape tensor.shape. The last dimension of indices can be at most the rank of tensor.shape:

indices.shape[-1] <= tensor.shape.rank

The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = tensor.shape.rank) or slices (if indices.shape[-1] < tensor.shape.rank) along dimension indices.shape[-1] of tensor.shape. updates is a tensor with shape

indices.shape[:-1] + tensor.shape[indices.shape[-1]:]

The simplest form of tensor_scatter_nd_add is to add individual elements to a tensor by index. For example, say we want to add 4 elements in a rank-1 tensor with 8 elements.

In Python, this scatter add operation would look like this:

    indices = tf.constant([[4], [3], [1], [7]])
    updates = tf.constant([9, 10, 11, 12])
    tensor = tf.ones([8], dtype=tf.int32)
    updated = tf.tensor_scatter_nd_add(tensor, indices, updates)
    print(updated)

We can also insert entire slices of a higher rank tensor all at once. For example, suppose we want to insert two slices in the first dimension of a rank-3 tensor with two matrices of new values.

In Python, this scatter add operation would look like this:

    indices = tf.constant([[0], [2]])
    updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                            [7, 7, 7, 7], [8, 8, 8, 8]],
                           [[5, 5, 5, 5], [6, 6, 6, 6],
                            [7, 7, 7, 7], [8, 8, 8, 8]]])
    tensor = tf.ones([4, 4, 4], dtype=tf.int32)
    updated = tf.tensor_scatter_nd_add(tensor, indices, updates)
    print(updated)
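The rank-1 case above can be sketched in plain Python, without TensorFlow (scatter_nd_add_1d is an illustrative helper, not the op's implementation):

```python
def scatter_nd_add_1d(tensor, indices, updates):
    """Rank-1 sketch of tf.tensor_scatter_nd_add: add each update at its index."""
    output = list(tensor)
    for [i], u in zip(indices, updates):
        output[i] += u
    return output

result = scatter_nd_add_1d([1] * 8, [[4], [3], [1], [7]], [9, 10, 11, 12])
# result == [1, 12, 1, 11, 10, 1, 1, 13]
```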

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
T | ::mlir::Attribute | derived attribute
Tindices | ::mlir::Attribute | derived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values
indices tensor of 32/64-bit signed integer values
updates tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.TensorScatterMax (TF::TensorScatterMaxOp)

Apply a sparse update to a tensor taking the element-wise maximum.

Returns a new tensor copied from tensor whose values are the element-wise maximum of tensor and updates according to the indices.

    tensor = [0, 0, 0, 0, 0, 0, 0, 0]
    indices = [[1], [4], [5]]
    updates = [1, -1, 1]
    tf.tensor_scatter_nd_max(tensor, indices, updates).numpy()
    array([0, 1, 0, 0, 0, 1, 0, 0], dtype=int32)

Refer to tf.tensor_scatter_nd_update for more details.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
T | ::mlir::Attribute | derived attribute
Tindices | ::mlir::Attribute | derived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values
indices tensor of 32/64-bit signed integer values
updates tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.TensorScatterMin (TF::TensorScatterMinOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
T | ::mlir::Attribute | derived attribute
Tindices | ::mlir::Attribute | derived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values
indices tensor of 32/64-bit signed integer values
updates tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.TensorScatterSub (TF::TensorScatterSubOp)

Subtracts sparse updates from an existing tensor according to indices.

This operation creates a new tensor by subtracting sparse updates from the passed in tensor. This operation is very similar to tf.scatter_nd_sub, except that the updates are subtracted from an existing tensor (as opposed to a variable). If the memory for the existing tensor cannot be re-used, a copy is made and updated.

indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape:

indices.shape[-1] <= shape.rank

The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape. updates is a tensor with shape

indices.shape[:-1] + shape[indices.shape[-1]:]

The simplest form of tensor_scatter_sub is to subtract individual elements from a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements.

In Python, this scatter subtract operation would look like this:

    indices = tf.constant([[4], [3], [1], [7]])
    updates = tf.constant([9, 10, 11, 12])
    tensor = tf.ones([8], dtype=tf.int32)
    updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)
    print(updated)

The resulting tensor would look like this:

[1, -10, 1, -9, -8, 1, 1, -11]

We can also insert entire slices of a higher rank tensor all at once. For example, suppose we want to insert two slices in the first dimension of a rank-3 tensor with two matrices of new values.

In Python, this scatter subtract operation would look like this:

    indices = tf.constant([[0], [2]])
    updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                            [7, 7, 7, 7], [8, 8, 8, 8]],
                           [[5, 5, 5, 5], [6, 6, 6, 6],
                            [7, 7, 7, 7], [8, 8, 8, 8]]])
    tensor = tf.ones([4, 4, 4],dtype=tf.int32)
    updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)
    print(updated)

The resulting tensor would look like this:

[[[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],
 [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]],
 [[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],
 [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]]

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
T | ::mlir::Attribute | derived attribute
Tindices | ::mlir::Attribute | derived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values
indices tensor of 32/64-bit signed integer values
updates tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.TensorScatterUpdate (TF::TensorScatterUpdateOp)

Scatter updates into an existing tensor according to indices.

This operation creates a new tensor by applying sparse updates to the passed in tensor. This operation is very similar to tf.scatter_nd, except that the updates are scattered onto an existing tensor (as opposed to a zero-tensor). If the memory for the existing tensor cannot be re-used, a copy is made and updated.

If indices contains duplicates, then we pick the last update for the index.

If an out of bound index is found on CPU, an error is returned. On GPU:

  • If an out of bound index is found, the index is ignored.
  • The order in which updates are applied is nondeterministic, so the output will be nondeterministic if indices contains duplicates.

indices is an integer tensor containing indices into a new tensor of shape tensor.shape.

  • indices must have at least 2 axes: (num_updates, index_depth).
  • The last axis of indices is how deep to index into tensor, so this index depth must be at most the rank of tensor: indices.shape[-1] <= tensor.ndim

If indices.shape[-1] = tensor.rank, this op indexes and updates scalar elements. If indices.shape[-1] < tensor.rank, it indexes and updates slices of the input tensor.

Each update has a rank of tensor.rank - indices.shape[-1]. The overall shape of updates is:

indices.shape[:-1] + tensor.shape[indices.shape[-1]:]
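The shape rule above can be checked with a small helper (updates_shape and scatter_nd_update_1d are illustrative, not part of the op):

```python
def updates_shape(indices_shape, tensor_shape):
    """The shape `updates` must have:
    indices.shape[:-1] + tensor.shape[indices.shape[-1]:]."""
    index_depth = indices_shape[-1]
    return indices_shape[:-1] + tensor_shape[index_depth:]

def scatter_nd_update_1d(tensor, indices, updates):
    """Rank-1 sketch: with duplicate indices, the last update wins."""
    output = list(tensor)
    for [i], u in zip(indices, updates):
        output[i] = u
    return output

# 4 updates, index depth 1, into a rank-1 tensor of 8 elements:
updates_shape((4, 1), (8,))        # -> (4,)
# 2 updates, index depth 1, into a [4, 4, 4] tensor -> two [4, 4] slices:
updates_shape((2, 1), (4, 4, 4))   # -> (2, 4, 4)
```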

For usage examples, see the Python tf.tensor_scatter_nd_update function.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
T | ::mlir::Attribute | derived attribute
Tindices | ::mlir::Attribute | derived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values
indices tensor of 16-bit integer or 32-bit integer or 64-bit integer or 16-bit unsigned integer values
updates tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.TensorSliceDataset (TF::TensorSliceDatasetOp)

Creates a dataset that emits each dim-0 slice of components once.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
output_shapes | ::mlir::ArrayAttr | tensorflow shape attribute array with at least 1 elements
is_files | ::mlir::BoolAttr | bool attribute
metadata | ::mlir::StringAttr | string attribute
replicate_on_split | ::mlir::BoolAttr | bool attribute
Toutput_types | ::mlir::Attribute | derived attribute

Operands:

Operand Description
components variadic of tensor of tf.dtype values

Results:

Result Description
handle tensor of variant values

tf.TensorStridedSliceUpdate (TF::TensorStridedSliceUpdateOp)

Assign value to the sliced l-value reference of input.

The values of value are assigned to the positions in the tensor input that are selected by the slice parameters. The slice parameters begin, end, strides, etc. work exactly as in StridedSlice.

NOTE: this op currently does not support broadcasting, so value's shape must be exactly the shape produced by the slice of input.
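For the 1-D case, Python's own extended-slice assignment has the same shape discipline, which makes for a convenient sketch (strided_slice_update_1d is an illustrative helper, not the op's implementation):

```python
def strided_slice_update_1d(input_, begin, end, strides, value):
    """1-D sketch: assign `value` into input_[begin:end:strides].
    No broadcasting: len(value) must equal the slice length."""
    output = list(input_)
    output[begin:end:strides] = value
    return output

strided_slice_update_1d([0, 0, 0, 0, 0, 0], 1, 6, 2, [7, 8, 9])
# -> [0, 7, 0, 8, 0, 9]
```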

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
begin_mask | ::mlir::IntegerAttr | 64-bit signless integer attribute
end_mask | ::mlir::IntegerAttr | 64-bit signless integer attribute
ellipsis_mask | ::mlir::IntegerAttr | 64-bit signless integer attribute
new_axis_mask | ::mlir::IntegerAttr | 64-bit signless integer attribute
shrink_axis_mask | ::mlir::IntegerAttr | 64-bit signless integer attribute
Index | ::mlir::Attribute | derived attribute
T | ::mlir::Attribute | derived attribute

Operands:

Operand Description
input tensor of tf.dtype values
begin tensor of 32/64-bit signed integer values
end tensor of 32/64-bit signed integer values
strides tensor of 32/64-bit signed integer values
value tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.Tile (TF::TileOp)

Constructs a tensor by tiling a given tensor.

This operation creates a new tensor by replicating input multiples times. The output tensor's i'th dimension has input.dims(i) * multiples[i] elements, and the values of input are replicated multiples[i] times along the i'th dimension. For example, tiling [a b c d] by [2] produces [a b c d a b c d].

    a = tf.constant([[1,2,3],[4,5,6]], tf.int32)
    b = tf.constant([1,2], tf.int32)
    tf.tile(a, b)
    c = tf.constant([2,1], tf.int32)
    tf.tile(a, c)
    d = tf.constant([2,2], tf.int32)
    tf.tile(a, d)
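The rank-1 case reduces to plain list repetition, which can be sketched without TensorFlow (tile_1d is an illustrative helper):

```python
def tile_1d(input_, multiples):
    """Rank-1 sketch of tf.tile: the whole tensor repeated `multiples` times."""
    return input_ * multiples

tile_1d(["a", "b", "c", "d"], 2)
# -> ["a", "b", "c", "d", "a", "b", "c", "d"]
```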

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
T | ::mlir::Attribute | derived attribute
Tmultiples | ::mlir::Attribute | derived attribute

Operands:

Operand Description
input tensor of tf.dtype values
multiples tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.Timestamp (TF::TimestampOp)

Provides the time since epoch in seconds.

Returns the timestamp as a float64 for seconds since the Unix epoch.

Common usages include:

  • Logging
  • Providing a random number seed
  • Debugging graph execution
  • Generating timing information, mainly through comparison of timestamps

Results:

Result Description
ts tensor of 64-bit float values

tf.ToBool (TF::ToBoolOp)

Converts a tensor to a scalar predicate.

Converts a tensor to a scalar predicate with the following rules:

  • For 0D tensors, truthiness is determined by comparing against a "zero" value. For numerical types it is the obvious zero. For strings it is the empty string.

  • For >0D tensors, truthiness is determined by looking at the number of elements. If the tensor has zero elements, the result is false. Otherwise the result is true.

This matches the behavior of If and While for determining if a tensor counts as true/false for a branch condition.
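The two rules above can be sketched on nested Python lists (to_bool is an illustrative helper, not the op's implementation; 0-D tensors are modeled as bare scalars):

```python
def to_bool(t):
    """Sketch of tf.ToBool on nested Python lists."""
    if not isinstance(t, list):
        # 0-D: compare against the type's "zero" (0 for numbers, "" for strings)
        zero = "" if isinstance(t, str) else 0
        return t != zero
    # >0-D: true iff the tensor has at least one element
    def num_elements(x):
        return sum(num_elements(e) for e in x) if isinstance(x, list) else 1
    return num_elements(t) > 0

to_bool(0)      # -> False
to_bool([0])    # -> True (one element, regardless of its value)
to_bool([])     # -> False (zero elements)
```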

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
T | ::mlir::Attribute | derived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of 1-bit signless integer values

tf.TopKUnique (TF::TopKUniqueOp)

Returns the TopK unique values in the array in sorted order.

The running time is proportional to the product of K and the input size. Sorting the whole array is more efficient for sufficiently large values of K. The median-of-medians algorithm is probably faster, but difficult to implement efficiently in XLA. If there are fewer than K unique numbers (not NaNs), the results are padded with negative infinity. NaNs are never returned. Subnormal numbers are flushed to zero. If an element appears at multiple indices, the highest index is returned. If a TopK element never appears in the input due to padding values, the indices are padded with negative one. If a padding value appears in the input and padding is needed, the highest index of the padding value will be returned. The semantics are not the same as kth_order_statistic.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
k | ::mlir::IntegerAttr | 64-bit signless integer attribute

Operands:

Operand Description
input tensor of 32-bit float values

Results:

Result Description
topk tensor of 32-bit float values
topk_indices tensor of 32-bit integer values

tf.TopKV2 (TF::TopKV2Op)

Finds values and indices of the k largest elements for the last dimension.

If the input is a vector (rank-1), finds the k largest entries in the vector and outputs their values and indices as vectors. Thus values[j] is the j-th largest entry in input, and its index is indices[j].

For matrices (resp. higher rank input), computes the top k entries in each row (resp. vector along the last dimension). Thus,

values.shape = indices.shape = input.shape[:-1] + [k]

If two elements are equal, the lower-index element appears first.
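The rank-1 behavior, including the tie-breaking rule, can be sketched in plain Python (top_k_1d is an illustrative helper, not the op's implementation):

```python
def top_k_1d(values, k):
    """Rank-1 sketch of TopKV2: the k largest values and their indices;
    equal values keep their original (lower-index-first) order."""
    order = sorted(range(len(values)), key=lambda i: (-values[i], i))[:k]
    return [values[i] for i in order], order

top_k_1d([1, 3, 3, 2], 3)
# -> ([3, 3, 2], [1, 2, 3])
```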

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
sorted | ::mlir::BoolAttr | bool attribute
T | ::mlir::Attribute | derived attribute
Tk | ::mlir::Attribute | derived attribute
index_type | ::mlir::Attribute | derived attribute

Operands:

Operand Description
input tensor of integer or floating-point values
k tensor of 16-bit integer or 32-bit integer or 64-bit integer values

Results:

Result Description
values tensor of integer or floating-point values
indices tensor of 16-bit integer or 32-bit integer or 64-bit integer values

tf.TopKWithUnique (TF::TopKWithUniqueOp)

Returns the TopK values in the array in sorted order.

This is a combination of MakeUnique and TopKUnique. The returned top-K will have its lower bits replaced by iota, thus it will be close to the original value but not exactly the same. The running time is proportional to the product of K and the input size. NaNs are never returned. Subnormal numbers are flushed to zero.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
k | ::mlir::IntegerAttr | 64-bit signless integer attribute

Operands:

Operand Description
input tensor of 32-bit float values

Results:

Result Description
topk tensor of 32-bit float values
topk_indices tensor of 32-bit integer values

tf.TPUAnnotateTensorsWithDynamicShape (TF::TPUAnnotateTensorsWithDynamicShapeOp)

Placeholder op that takes the output of TPUCopyWithDynamicShapeOp and passes it to the following TPU ops.

This op serves as an annotation for the dynamic shaped tensor and will be removed during the bridge rewrite.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
T | ::mlir::Attribute | derived attribute

Operands:

Operand Description
tensors variadic of tensor of tf.dtype values

Results:

Result Description
tpu_tensors variadic of tensor of tf.dtype values

tf.TPUCompilationResult (TF::TPUCompilationResultOp)

Returns the result of a TPU compilation.

This operation returns the result of a TPU compilation as a serialized CompilationResultProto, which holds a status and an error message if an error occurred during compilation.

Interfaces: TF_MustExecute (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Results:

Result Description
output tensor of string values

tf.TPUCompileMlirAndExecute (TF::TPUCompileMlirAndExecuteOp)

Op that compiles a computation in MLIR into a TPU program, and loads and executes it on a TPU device.

For the internal use of the TPU compiler.

'static_shapes' are tensors specifying the maximum dimension sizes for the tensors specified in dynamic_operands.
'args' are inputs to the TPU computation.
'operands_with_static_shape' are the indices of the operands that have a maximal static shape specified.
'mlir_module' is a serialized MLIR module with a main function that contains the target computation.
'metadata' is a serialized TPUCompileMetadataProto describing the shapes and types of the inputs to the computation, as well as a mapping onto the TPU pod topology.
'producer_name' is a string describing the name of the framework that adds support for running this portion of the model on TPUs.

Traits: AttrSizedOperandSegments

Attributes:

Attribute | MLIR Type | Description
operands_with_static_shape | ::mlir::ArrayAttr | 32-bit integer array attribute
mlir_module | ::mlir::StringAttr | string attribute
metadata | ::mlir::StringAttr | string attribute
producer_name | ::mlir::StringAttr | string attribute
Targs | ::mlir::Attribute | derived attribute
Tresults | ::mlir::Attribute | derived attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values
static_shapes variadic of tensor of 64-bit integer values

Results:

Result Description
rendezvous_key_base tensor of tf.dtype values
results variadic of tensor of tf.dtype values

tf.TPUCompileSucceededAssert (TF::TPUCompileSucceededAssertOp)

Asserts that compilation succeeded.

This op produces no output and closes the device during failure to ensure all pending device interactions fail.

'compilation_status' is a serialized CompilationResultProto.

Interfaces: TF_MustExecute (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Operands:

Operand Description
compilation_status tensor of string values

tf.TPUCopyWithDynamicShape (TF::TPUCopyWithDynamicShapeOp)

Op that copies host tensors to device with bounded dynamic shape support.

This op copies a padded tensor on the CPU to the TPU without the padded data. tensors is a list of CPU tensors with padded data. unpadded_sizes is a list of shape tensors describing the unpadded size of each dimension of each CPU tensor; it has the same length as tensors, and both are on the host. tpu_tensors is a list of TPU device tensors without the padded data; it also has the same length as tensors, and the shapes of tpu_tensors are determined by unpadded_sizes.

Traits: AlwaysSpeculatableImplTrait, AttrSizedOperandSegments

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
N | ::mlir::Attribute | derived attribute
T | ::mlir::Attribute | derived attribute

Operands:

Operand Description
tensors variadic of tensor of tf.dtype values
unpadded_sizes variadic of tensor of 32-bit integer values

Results:

Result Description
tpu_tensors variadic of tensor of tf.dtype values

tf.TPUCopyWithLayout (TF::TPUCopyWithLayoutOp)

Op that copies host tensor to device with specified layout.

For internal use only.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
T | ::mlir::Attribute | derived attribute

Operands:

Operand Description
input tensor of tf.dtype values
layout tensor of 64-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.TPUEmbeddingActivations (TF::TPUEmbeddingActivationsOp)

An op enabling differentiation of TPU Embeddings.

This op simply returns its first input, which is assumed to have been sliced from the Tensors returned by TPUEmbeddingDequeueActivations. The presence of this op, and its first argument being a trainable Variable, enables automatic differentiation of graphs containing embeddings via the TPU Embedding Python libraries.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
table_id | ::mlir::IntegerAttr | 64-bit signless integer attribute whose minimum value is 0
lookup_id | ::mlir::IntegerAttr | 64-bit signless integer attribute whose minimum value is 0

Operands:

Operand Description
embedding_variable tensor of 32-bit float values
sliced_activations tensor of 32-bit float values

Results:

Result Description
output tensor of 32-bit float values

tf.TPUExecute (TF::TPUExecuteOp)

Op that loads and executes a TPU program on a TPU device.

For the internal use of the distributed TPU compiler.

Interfaces: MemoryEffectOpInterface

Attributes:

Attribute | MLIR Type | Description
Targs | ::mlir::Attribute | derived attribute
Tresults | ::mlir::Attribute | derived attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values
key tensor of string values

Results:

Result Description
results variadic of tensor of tf.dtype values

tf.TPUExecuteAndUpdateVariables (TF::TPUExecuteAndUpdateVariablesOp)

Op that executes a program with optional in-place variable updates.

It (optionally) reads device variables, loads and executes a TPU program on a TPU device, and then (optionally) updates variables in place using the program outputs, as specified in the attributes device_var_reads_indices (program input indices that come from directly reading variables) and device_var_updates_indices (program output indices used to update variables; -1 means no update, i.e. read-only). Program outputs consumed by these variable updates do not appear in the op's outputs. For the internal use of the distributed TPU compiler.

Interfaces: MemoryEffectOpInterface

Attributes:

Attribute | MLIR Type | Description
device_var_reads_indices | ::mlir::ArrayAttr | 64-bit integer array attribute
device_var_updates_indices | ::mlir::ArrayAttr | 64-bit integer array attribute
Targs | ::mlir::Attribute | derived attribute
Tresults | ::mlir::Attribute | derived attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values
key tensor of string values

Results:

Result Description
results variadic of tensor of tf.dtype values

tf.TPUGetLayoutOp (TF::TPUGetLayoutOp)

Op that retrieves the layout of an input or output determined by TPUCompile.

For internal use only.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
index | ::mlir::IntegerAttr | 64-bit signless integer attribute
is_output | ::mlir::BoolAttr | bool attribute

Operands:

Operand Description
cache_key tensor of string values

Results:

Result Description
layout tensor of 64-bit integer values

tf.TPUOrdinalSelector (TF::TPUOrdinalSelectorOp)

A TPU core selector Op.

This Op produces a set of TPU cores (for warm-up) or a single TPU core (for regular inference) to execute the TPU program on. The output is consumed by TPUPartitionedCall.

Results:

Result Description
device_ordinals tensor of 32-bit integer values

tf.TPUPartitionedCall (TF::TPUPartitionedCallOp)

Calls a function placed on a specified TPU device.

Interfaces: CallOpInterface, SymbolUserOpInterface

Attributes:

Attribute | MLIR Type | Description
f | ::mlir::SymbolRefAttr | symbol reference attribute
autotuner_thresh | ::mlir::IntegerAttr | 64-bit signless integer attribute
Tin | ::mlir::Attribute | derived attribute
Tout | ::mlir::Attribute | derived attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values
device_ordinal tensor of 32-bit integer values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.TPUPartitionedInput (TF::TPUPartitionedInputOp)

An op that groups a list of partitioned inputs together.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
partition_dim | ::mlir::IntegerAttr | 64-bit signless integer attribute
_XlaSharding | ::mlir::StringAttr | string attribute
T | ::mlir::Attribute | derived attribute
N | ::mlir::Attribute | derived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.TPUPartitionedInputV2 (TF::TPUPartitionedInputV2Op)

An op that groups a list of partitioned inputs together. Supports ND sharding.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
partition_dims | ::mlir::ArrayAttr | 64-bit integer array attribute
is_packed | ::mlir::BoolAttr | bool attribute
_XlaSharding | ::mlir::StringAttr | string attribute
T | ::mlir::Attribute | derived attribute
N | ::mlir::Attribute | derived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.TPUPartitionedOutput (TF::TPUPartitionedOutputOp)

An op that demultiplexes a tensor to be sharded by XLA to a list of partitioned outputs outside the XLA computation.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
partition_dim | ::mlir::IntegerAttr | 64-bit signless integer attribute
_XlaSharding | ::mlir::StringAttr | string attribute
T | ::mlir::Attribute | derived attribute
num_splits | ::mlir::Attribute | derived attribute

Operands:

Operand Description
inputs tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.TPUPartitionedOutputV2 (TF::TPUPartitionedOutputV2Op)

An op that demultiplexes a tensor to be sharded by XLA to a list of partitioned outputs outside the XLA computation. Supports ND sharding.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

Attribute | MLIR Type | Description
partition_dims | ::mlir::ArrayAttr | 64-bit integer array attribute
_XlaSharding | ::mlir::StringAttr | string attribute
T | ::mlir::Attribute | derived attribute
num_splits | ::mlir::Attribute | derived attribute

Operands:

Operand Description
inputs tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.TPUReplicatedInput (TF::TPUReplicatedInputOp)

Connects N inputs to an N-way replicated TPU computation.

This operation holds a replicated input to a tpu.replicate() computation subgraph. Each replicated input has the same shape and type as the output.

For example:

%a = "tf.opA"()
%b = "tf.opB"()
%replicated_input = "tf.TPUReplicatedInput"(%a, %b)
%computation = "tf.Computation"(%replicated_input)

The above computation has a replicated input of two replicas.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
is_mirrored_variable::mlir::BoolAttrbool attribute
index::mlir::IntegerAttr64-bit signless integer attribute
is_packed::mlir::BoolAttrbool attribute
N::mlir::Attributederived attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.TPUReplicatedOutput (TF::TPUReplicatedOutputOp)

Connects N outputs from an N-way replicated TPU computation.

This operation holds a replicated output from a tpu.replicate() computation subgraph. Each replicated output has the same shape and type as the input.

For example:

%computation = "tf.Computation"()
%replicated_output:2 = "tf.TPUReplicatedOutput"(%computation)

The above computation has a replicated output of two replicas.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
num_replicas::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf.TPUReplicateMetadata (TF::TPUReplicateMetadataOp)

Metadata indicating how the TPU computation should be replicated.

This operation holds the metadata common to operations of a tpu.replicate() computation subgraph.

Attributes:

AttributeMLIR TypeDescription
num_replicas::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 0
num_cores_per_replica::mlir::IntegerAttr64-bit signless integer attribute
topology::mlir::StringAttrstring attribute
use_tpu::mlir::BoolAttrbool attribute
device_assignment::mlir::ArrayAttr64-bit integer array attribute
computation_shape::mlir::ArrayAttr64-bit integer array attribute
host_compute_core::mlir::ArrayAttrstring array attribute
padding_map::mlir::ArrayAttrstring array attribute
step_marker_location::mlir::StringAttrstring attribute
allow_soft_placement::mlir::BoolAttrbool attribute
use_spmd_for_xla_partitioning::mlir::BoolAttrbool attribute
tpu_compile_options_proto::mlir::StringAttrstring attribute

tf.TPUReshardVariables (TF::TPUReshardVariablesOp)

Op that reshards on-device TPU variables to specified state.

Op that reshards on-device TPU variables to specified state. Internal use only.

The sharding state is represented as the key of the compilation that generated the sharding/unsharding programs along with the main program. new_format_key specifies the desired state, and format_state_var is the current state of the variables.

Attributes:

AttributeMLIR TypeDescription
N::mlir::Attributederived attribute

Operands:

Operand Description
vars variadic of tensor of resource values
new_format_key tensor of string values
format_state_var tensor of resource values

tf.TPURoundRobin (TF::TPURoundRobinOp)

Round-robin load balancing on TPU cores.

A load balancing op that round-robins among TPU cores.

This op round-robins between the integers in [0, NumTPUCoresVisiblePerHost]. It is useful for interfacing with TensorFlow ops that take as input a TPU core on which to execute computations, such as TPUPartitionedCall.

device_ordinal: An integer in [0, NumTPUCoresVisiblePerHost].

Results:

Result Description
device_ordinal tensor of 32-bit integer values

tf.Transpose (TF::TransposeOp)

Shuffle dimensions of x according to a permutation.

The output y has the same rank as x. The shapes of x and y satisfy: y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1]
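As a minimal pure-Python sketch (not the TensorFlow kernel), the shape rule above and the rank-2 special case can be written as:

```python
def transpose_shape(x_shape, perm):
    # Shape rule from the description: y.shape[i] == x.shape[perm[i]].
    return tuple(x_shape[p] for p in perm)

def transpose_2d(x):
    # Rank-2 special case (perm = [1, 0]) on a nested list.
    return [list(row) for row in zip(*x)]
```

For example, `transpose_shape((2, 3, 4), (2, 0, 1))` gives `(4, 2, 3)`.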

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tperm::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
perm tensor of 32/64-bit signed integer values

Results:

Result Description
y tensor of tf.dtype values

tf.TridiagonalMatMul (TF::TridiagonalMatMulOp)

Calculate product with tridiagonal matrix.

Calculates the product of two matrices, where the left matrix is a tridiagonal matrix.
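The product can be sketched in pure Python for a single vector right-hand side, assuming the convention that in row i, subdiag[i] multiplies x[i-1] and superdiag[i] multiplies x[i+1] (the actual op operates on batched matrices):

```python
def tridiagonal_matmul(superdiag, maindiag, subdiag, x):
    # Multiply a tridiagonal matrix, given by its three diagonals, by a vector.
    # Row i contributes: subdiag[i]*x[i-1] + maindiag[i]*x[i] + superdiag[i]*x[i+1].
    n = len(maindiag)
    y = []
    for i in range(n):
        v = maindiag[i] * x[i]
        if i > 0:
            v += subdiag[i] * x[i - 1]
        if i < n - 1:
            v += superdiag[i] * x[i + 1]
        y.append(v)
    return y
```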

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
superdiag tensor of 128-bit complex or 64-bit complex or 32-bit float or 64-bit float values
maindiag tensor of 128-bit complex or 64-bit complex or 32-bit float or 64-bit float values
subdiag tensor of 128-bit complex or 64-bit complex or 32-bit float or 64-bit float values
rhs tensor of 128-bit complex or 64-bit complex or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex or 32-bit float or 64-bit float values

tf.TridiagonalSolve (TF::TridiagonalSolveOp)

Solves tridiagonal systems of equations.

Solves tridiagonal systems of equations. Supports batch dimensions and multiple right-hand sides per left-hand side. On CPU, the solution is computed via Gaussian elimination with or without partial pivoting, depending on the partial_pivoting attribute. On GPU, Nvidia's cuSPARSE library is used: https://docs.nvidia.com/cuda/cusparse/index.html#gtsv Partial pivoting is not yet supported by XLA backends.
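The non-pivoting CPU path corresponds to the classic Thomas algorithm. A minimal single-system, single-RHS sketch (pure Python, same diagonal convention as above; the real op handles batching and pivoting):

```python
def tridiagonal_solve(superdiag, maindiag, subdiag, rhs):
    # Thomas algorithm: forward elimination of the subdiagonal, then
    # back-substitution. No partial pivoting, so it assumes the system
    # is well-conditioned without row swaps.
    n = len(maindiag)
    c, d, b = list(superdiag), list(maindiag), list(rhs)
    for i in range(1, n):
        w = subdiag[i] / d[i - 1]
        d[i] -= w * c[i - 1]
        b[i] -= w * b[i - 1]
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x
```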

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
partial_pivoting::mlir::BoolAttrbool attribute
perturb_singular::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
diagonals tensor of 128-bit complex or 64-bit complex or 32-bit float or 64-bit float values
rhs tensor of 128-bit complex or 64-bit complex or 32-bit float or 64-bit float values

Results:

Result Description
output tensor of 128-bit complex or 64-bit complex or 32-bit float or 64-bit float values

tf.TruncateDiv (TF::TruncateDivOp)

Returns x / y element-wise, rounded towards zero.

Truncation designates that negative numbers round fractional quantities toward zero, e.g. -7 / 5 = -1. This matches C semantics but differs from Python semantics. See FloorDiv for a division function that matches Python semantics.
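The difference from Python's floor division can be shown with a small sketch:

```python
import math

def truncate_div(x, y):
    # Round the true quotient toward zero (C semantics).
    # Python's // floors toward -inf instead, so the two differ
    # whenever the exact quotient is negative and fractional.
    return math.trunc(x / y)
```

For example, `truncate_div(-7, 5)` is -1, while Python's `-7 // 5` is -2.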

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
y tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
z tensor of bfloat16 or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.TruncatedNormal (TF::TruncatedNormalOp)

Outputs random values from a truncated normal distribution.

The generated values follow a normal distribution with mean 0 and standard deviation 1, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.

Traits: TF_CannotDuplicate

Attributes:

AttributeMLIR TypeDescription
seed::mlir::IntegerAttr64-bit signless integer attribute
seed2::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
shape tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of floating-point values

tf.TruncateMod (TF::TruncateModOp)

Returns element-wise remainder of division. This emulates C semantics in that the result here is consistent with a truncating divide. E.g. truncate(x / y) * y + truncate_mod(x, y) = x.
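The identity above pins down the remainder once the quotient truncates toward zero; a pure-Python sketch:

```python
import math

def truncate_mod(x, y):
    # Remainder consistent with a truncating divide:
    # trunc(x / y) * y + truncate_mod(x, y) == x
    return x - math.trunc(x / y) * y
```

Note this differs from Python's `%`: `truncate_mod(-7, 5)` is -2, while `-7 % 5` is 3.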

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or 32/64-bit signed integer values
y tensor of floating-point or 32/64-bit signed integer values

Results:

Result Description
z tensor of floating-point or 32/64-bit signed integer values

tf.UncompressElement (TF::UncompressElementOp)

Uncompresses a compressed dataset element.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
output_shapes::mlir::Attributederived attribute
output_types::mlir::Attributederived attribute

Operands:

Operand Description
compressed tensor of variant values

Results:

Result Description
components variadic of tensor of tf.dtype values

tf.UniformDequantize (TF::UniformDequantizeOp)

Perform dequantization on the quantized Tensor input.

Given quantized input which was quantized using scales and zero_points, performs dequantization using the formula: dequantized_data = (quantized_data - zero_point) * scale.
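The per-tensor case of the formula is a one-liner in pure Python (a sketch, not the kernel):

```python
def uniform_dequantize(quantized, scale, zero_point):
    # dequantized_data = (quantized_data - zero_point) * scale
    return [(q - zero_point) * scale for q in quantized]
```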

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 32-bit quantized integer or 8-bit quantized integer values
scales tensor of 32-bit float values
zero_points tensor of 32-bit integer values

Results:

Result Description
output tensor of 32-bit float values

tf.UniformQuantize (TF::UniformQuantizeOp)

Perform quantization on Tensor input.

Given input, scales and zero_points, performs quantization using the formula: quantized_data = floor(input_data * (1.0f / scale) + 0.5f) + zero_point
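A per-tensor sketch of the formula, with the result clipped to the range given by quantization_min_val / quantization_max_val (the qint8 defaults below are an assumption for illustration):

```python
import math

def uniform_quantize(data, scale, zero_point, qmin=-128, qmax=127):
    # quantized_data = floor(input_data * (1.0 / scale) + 0.5) + zero_point,
    # then clipped to [qmin, qmax].
    out = []
    for v in data:
        q = math.floor(v * (1.0 / scale) + 0.5) + zero_point
        out.append(max(qmin, min(qmax, q)))
    return out
```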

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 32-bit float values
scales tensor of 32-bit float values
zero_points tensor of 32-bit integer values

Results:

Result Description
output tensor of 32-bit quantized integer or 8-bit quantized integer values

tf.UniformQuantizedAdd (TF::UniformQuantizedAddOp)

Perform quantized add of quantized Tensor lhs and quantized Tensor rhs to make quantized output.

Given quantized lhs and quantized rhs, performs quantized add on lhs and rhs to make quantized output.

UniformQuantizedAdd follows Numpy broadcasting rules. The two input array shapes are compared element-wise. Starting with the trailing dimensions, the two dimensions either have to be equal or one of them needs to be 1.
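The Numpy broadcasting rule described above can be checked with a small helper (a sketch; shape validation only, not the add itself):

```python
from itertools import zip_longest

def broadcast_shape(a, b):
    # Numpy-style broadcasting: align trailing dimensions; each pair must be
    # equal or contain a 1, and the result takes the larger of the two.
    dims = []
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != y and 1 not in (x, y):
            raise ValueError(f"incompatible dimensions {x} and {y}")
        dims.append(max(x, y))
    return tuple(reversed(dims))
```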

lhs and rhs must be quantized Tensors, whose data values are quantized using the formula:

quantized_data = clip(original_data / scale + zero_point, quantization_min_val, quantization_max_val)

output is also quantized, using the same formula.

If lhs and output are both per-axis quantized, the quantization axes must match. Likewise, if rhs and output are both per-axis quantized, the quantization axes must match. Here "match" accounts for broadcasting: for each operand (lhs and rhs), if operand.quantization_axis >= 0 and output.quantization_axis >= 0, then operand.dims - operand.quantization_axis must equal output.dims - output.quantization_axis.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
lhs_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
lhs_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
lhs_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
lhs tensor of 32-bit quantized integer values
rhs tensor of 32-bit quantized integer values
lhs_scales tensor of 32-bit float values
lhs_zero_points tensor of 32-bit integer values
rhs_scales tensor of 32-bit float values
rhs_zero_points tensor of 32-bit integer values
output_scales tensor of 32-bit float values
output_zero_points tensor of 32-bit integer values

Results:

Result Description
output tensor of 32-bit quantized integer values

tf.UniformQuantizedClipByValue (TF::UniformQuantizedClipByValueOp)

Perform clip by value on the quantized Tensor operand.

Given quantized operand which was quantized using scales and zero_points, performs clip by value using min and max values. If quantization_axis is -1 (per-tensor quantized), the entire operand is clipped using scalar min, max. Otherwise (per-channel quantized), the clipping is also done per-channel.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
operand tensor of 32-bit quantized integer values
min tensor of 32-bit quantized integer values
max tensor of 32-bit quantized integer values
scales tensor of 32-bit float values
zero_points tensor of 32-bit integer values

Results:

Result Description
output tensor of 32-bit quantized integer values

tf.UniformQuantizedConvolution (TF::UniformQuantizedConvolutionOp)

Perform quantized convolution of quantized Tensor lhs and quantized Tensor rhs to make quantized output.

Given quantized lhs and quantized rhs, performs quantized convolution on lhs and rhs to make quantized output.

lhs and rhs must be Tensors of the same rank, and meet the following shape conditions.

  • lhs_feature % feature_group_count == 0
  • lhs_feature % rhs_input_feature == 0
  • lhs_feature / feature_group_count == rhs_input_feature
  • rhs_output_feature % feature_group_count == 0
  • lhs_batch % batch_group_count == 0
  • rhs_output_feature % batch_group_count == 0

lhs and rhs must be quantized Tensors, whose data values are quantized using the formula:

quantized_data = clip(original_data / scale + zero_point, quantization_min_val, quantization_max_val)

output is also quantized, using the same formula. If rhs is per-tensor quantized, output must also be per-tensor quantized.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
window_strides::mlir::ArrayAttr64-bit integer array attribute
padding::mlir::StringAttrstring attribute
explicit_padding::mlir::ArrayAttr64-bit integer array attribute
lhs_dilation::mlir::ArrayAttr64-bit integer array attribute
rhs_dilation::mlir::ArrayAttr64-bit integer array attribute
batch_group_count::mlir::IntegerAttr64-bit signless integer attribute
feature_group_count::mlir::IntegerAttr64-bit signless integer attribute
dimension_numbers::mlir::StringAttrstring attribute
lhs_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
lhs_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
lhs_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
lhs tensor of 8-bit quantized integer values
rhs tensor of 8-bit quantized integer values
lhs_scales tensor of 32-bit float values
lhs_zero_points tensor of 32-bit integer values
rhs_scales tensor of 32-bit float values
rhs_zero_points tensor of 32-bit integer values
output_scales tensor of 32-bit float values
output_zero_points tensor of 32-bit integer values

Results:

Result Description
output tensor of 32-bit quantized integer values

tf.UniformQuantizedConvolutionHybrid (TF::UniformQuantizedConvolutionHybridOp)

Perform hybrid quantized convolution of float Tensor lhs and quantized Tensor rhs.

Given float lhs and quantized rhs, internally performs quantization on lhs, and then performs quantized convolution on quantized lhs and rhs.

The internal quantization on lhs is a quantization to Trhs, dynamic range, per-batch (per-axis along axis dimension_numbers.input_batch_dimension), asymmetric, and not narrow range (the range is [Trhs_MIN, Trhs_MAX]).

lhs and rhs must be Tensors of the same rank, and meet the following shape conditions.

  • lhs_feature % feature_group_count == 0
  • lhs_feature % rhs_input_feature == 0
  • lhs_feature / feature_group_count == rhs_input_feature
  • rhs_output_feature % feature_group_count == 0
  • lhs_batch % batch_group_count == 0
  • rhs_output_feature % batch_group_count == 0

rhs must be a quantized Tensor, whose data values are quantized using the formula: quantized_data = clip(original_data / scale + zero_point, quantization_min_val, quantization_max_val).

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
window_strides::mlir::ArrayAttr64-bit integer array attribute
padding::mlir::StringAttrstring attribute
explicit_padding::mlir::ArrayAttr64-bit integer array attribute
lhs_dilation::mlir::ArrayAttr64-bit integer array attribute
rhs_dilation::mlir::ArrayAttr64-bit integer array attribute
batch_group_count::mlir::IntegerAttr64-bit signless integer attribute
feature_group_count::mlir::IntegerAttr64-bit signless integer attribute
dimension_numbers::mlir::StringAttrstring attribute
rhs_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
Tlhs::mlir::Attributederived attribute
Trhs::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
lhs tensor of 32-bit float values
rhs tensor of 8-bit quantized integer values
rhs_scales tensor of 32-bit float values
rhs_zero_points tensor of 32-bit integer values

Results:

Result Description
output tensor of 32-bit float values

tf.UniformQuantizedDot (TF::UniformQuantizedDotOp)

Perform quantized dot of quantized Tensor lhs and quantized Tensor rhs to make quantized output.

Given quantized lhs and quantized rhs, performs quantized dot on lhs and rhs to make quantized output. lhs and rhs must be 2D Tensors, and lhs.dim_size(1) must match rhs.dim_size(0). lhs and rhs must be quantized Tensors, whose data values are quantized using the formula: quantized_data = clip(original_data / scale + zero_point, quantization_min_val, quantization_max_val). output is also quantized, using the same formula. If rhs is per-tensor quantized, output must also be per-tensor quantized.
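A per-tensor sketch of the quantized dot in pure Python. The scheme of accumulating integer products with zero points removed, then rescaling by lhs_scale * rhs_scale / output_scale, is standard for uniform quantization, but the exact rounding mode here (Python's round) is an assumption:

```python
def uniform_quantized_dot(lhs_q, rhs_q, lhs_scale, lhs_zp, rhs_scale, rhs_zp,
                          out_scale, out_zp, qmin=-(2**31), qmax=2**31 - 1):
    # Accumulate (lhs - lhs_zp) * (rhs - rhs_zp) in integers, then rescale
    # into the output quantization and shift by the output zero point.
    rows, inner, cols = len(lhs_q), len(rhs_q), len(rhs_q[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            acc = sum((lhs_q[i][k] - lhs_zp) * (rhs_q[k][j] - rhs_zp)
                      for k in range(inner))
            q = round(acc * (lhs_scale * rhs_scale / out_scale)) + out_zp
            row.append(max(qmin, min(qmax, q)))
        out.append(row)
    return out
```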

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
lhs_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
lhs_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
lhs_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
lhs tensor of 8-bit quantized integer values
rhs tensor of 8-bit quantized integer values
lhs_scales tensor of 32-bit float values
lhs_zero_points tensor of 32-bit integer values
rhs_scales tensor of 32-bit float values
rhs_zero_points tensor of 32-bit integer values
output_scales tensor of 32-bit float values
output_zero_points tensor of 32-bit integer values

Results:

Result Description
output tensor of 32-bit quantized integer values

tf.UniformQuantizedDotHybrid (TF::UniformQuantizedDotHybridOp)

Perform hybrid quantized dot of float Tensor lhs and quantized Tensor rhs.

Given float lhs and quantized rhs, internally performs quantization on lhs, and then performs quantized dot on quantized lhs and rhs. The internal quantization on lhs is a quantization to qint8: dynamic range, per-batch (per-axis along axis 0), asymmetric, and not narrow range (the range is [-128, 127]). lhs and rhs must be 2D Tensors, and lhs.dim_size(1) must match rhs.dim_size(0). rhs must be a quantized Tensor, whose data values are quantized using the formula: quantized_data = clip(original_data / scale + zero_point, quantization_min_val, quantization_max_val).

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
rhs_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
rhs_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
Tlhs::mlir::Attributederived attribute
Trhs::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
lhs tensor of 32-bit float values
rhs tensor of 8-bit quantized integer values
rhs_scales tensor of 32-bit float values
rhs_zero_points tensor of 32-bit integer values

Results:

Result Description
output tensor of 32-bit float values

tf.UniformRequantize (TF::UniformRequantizeOp)

Given quantized tensor input, requantize it with new quantization parameters.

Given quantized tensor input, which was quantized using {input_scales, input_zero_points, input_quantization_axis, input_quantization_min_val, input_quantization_max_val}, requantize it to a tensor, which is quantized using {output_scales, output_zero_points, output_quantization_axis, output_quantization_min_val, output_quantization_max_val}. The requantization is done by using the formula:

output_quantized_data = clip(
  (input_quantized_data - input_zero_point) * (input_scale / output_scale) + output_zero_point,
  output_quantization_min_val, output_quantization_max_val)

The supported per-tensor and per-axis quantization cases are as follows:

  • per-tensor -> per-tensor
  • per-tensor -> per-axis
  • per-axis -> per-axis, where input_quantization_axis equals output_quantization_axis. That is, at least one of input_quantization_axis and output_quantization_axis must be -1, or the two must be equal.
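A per-tensor sketch of the requantization formula in pure Python (the rounding mode, Python's round, is an assumption; the formula's clip and rescale follow the description above):

```python
def uniform_requantize(input_q, in_scale, in_zp, out_scale, out_zp, qmin, qmax):
    # output = clip((input - input_zero_point) * (input_scale / output_scale)
    #               + output_zero_point, qmin, qmax)
    out = []
    for q in input_q:
        v = round((q - in_zp) * (in_scale / out_scale) + out_zp)
        out.append(max(qmin, min(qmax, v)))
    return out
```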

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
input_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
input_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
input_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_axis::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_min_val::mlir::IntegerAttr64-bit signless integer attribute
output_quantization_max_val::mlir::IntegerAttr64-bit signless integer attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of 32-bit quantized integer or 8-bit quantized integer values
input_scales tensor of 32-bit float values
input_zero_points tensor of 32-bit integer values
output_scales tensor of 32-bit float values
output_zero_points tensor of 32-bit integer values

Results:

Result Description
output tensor of 32-bit quantized integer or 8-bit quantized integer values

tf.Unique (TF::UniqueOp)

Finds unique elements in a 1-D tensor.

This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x; x does not need to be sorted. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. In other words:

y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]

Examples:

# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
# tensor 'x' is [4, 5, 1, 2, 3, 3, 4, 5]
y, idx = unique(x)
y ==> [4, 5, 1, 2, 3]
idx ==> [0, 1, 2, 3, 4, 4, 0, 1]
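The semantics above (first-occurrence order, plus an index back into y for every element of x) can be sketched in pure Python:

```python
def unique(x):
    # y keeps each value once, in first-occurrence order;
    # idx maps every element of x to its position in y.
    y, idx, pos = [], [], {}
    for v in x:
        if v not in pos:
            pos[v] = len(y)
            y.append(v)
        idx.append(pos[v])
    return y, idx
```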

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
out_idx::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values

Results:

Result Description
y tensor of tf.dtype values
idx tensor of 32/64-bit signed integer values

tf.UniqueV2 (TF::UniqueV2Op)

Finds unique elements along an axis of a tensor.

This operation returns a tensor y containing the unique elements along the axis of a tensor. The returned unique elements are sorted in the same order as they occur along axis in x. This operation also returns a tensor idx that is the same size as the number of elements in x along the axis dimension. It contains the index in the unique output y. In other words, for a 1-D tensor x with axis = None:

y[idx[i]] = x[i] for i in [0, 1,...,rank(x) - 1]

For example:

# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]

For a 2-D tensor x with axis = 0:

# tensor 'x' is [[1, 0, 0],
#                [1, 0, 0],
#                [2, 0, 0]]
y, idx = unique(x, axis=0)
y ==> [[1, 0, 0],
       [2, 0, 0]]
idx ==> [0, 0, 1]

For a 2-D tensor x with axis = 1:

# tensor 'x' is [[1, 0, 0],
#                [1, 0, 0],
#                [2, 0, 0]]
y, idx = unique(x, axis=1)
y ==> [[1, 0],
       [1, 0],
       [2, 0]]
idx ==> [0, 1, 1]
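The axis = 0 case above amounts to running Unique over whole rows; a pure-Python sketch on a nested list:

```python
def unique_axis0(x):
    # Treat each row as one element; keep first occurrences in order,
    # and record for every row its index in the deduplicated output.
    y, idx, seen = [], [], {}
    for row in x:
        key = tuple(row)
        if key not in seen:
            seen[key] = len(y)
            y.append(list(row))
        idx.append(seen[key])
    return y, idx
```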

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Taxis::mlir::Attributederived attribute
out_idx::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of tf.dtype values
axis tensor of 32/64-bit signed integer values

Results:

Result Description
y tensor of tf.dtype values
idx tensor of 32/64-bit signed integer values

tf.Unpack (TF::UnpackOp)

Unpacks a given dimension of a rank-R tensor into num rank-(R-1) tensors.

Unpacks num tensors from value by chipping it along the axis dimension. For example, given a tensor of shape (A, B, C, D):

If axis == 0 then the i'th tensor in output is the slice value[i, :, :, :] and each tensor in output will have shape (B, C, D). (Note that the dimension unpacked along is gone, unlike split).

If axis == 1 then the i'th tensor in output is the slice value[:, i, :, :] and each tensor in output will have shape (A, C, D). Etc.

This is the opposite of pack.
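For a rank-2 nested list, the chipping described above reduces to taking rows (axis = 0) or columns (axis = 1); a minimal sketch:

```python
def unpack_2d(value, axis=0):
    # Chip a rank-2 nested list along `axis`;
    # the dimension unpacked along disappears from each output.
    if axis == 0:
        return [list(row) for row in value]
    return [list(col) for col in zip(*value)]
```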

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
axis::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute
num::mlir::Attributederived attribute

Operands:

Operand Description
value tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.UnsortedSegmentMax (TF::UnsortedSegmentMaxOp)

Computes the maximum along segments of a tensor.

Read the section on segmentation for an explanation of segments.

This operator is similar to tf.math.unsorted_segment_sum. Instead of computing the sum over segments, it computes the maximum such that:

\(output_i = \max_{j...} data[j...]\) where max is over tuples j... such that segment_ids[j...] == i.

If the maximum is empty for a given segment ID i, it outputs the smallest possible value for the specific numeric type, output[i] = numeric_limits<T>::lowest().

If the given segment ID i is negative, then the corresponding value is dropped, and will not be included in the result.

For example:

c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.math.unsorted_segment_max(c, tf.constant([0, 1, 0]), num_segments=2).numpy()
array([[4, 3, 3, 4],
       [5, 6, 7, 8]], dtype=int32)
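A 1-D pure-Python sketch of the semantics, including the empty-segment and negative-ID rules described above (the real op handles N-D data and typed numeric limits):

```python
def unsorted_segment_max(data, segment_ids, num_segments):
    # Empty segments keep the lowest possible value;
    # elements with negative segment IDs are dropped.
    lowest = float("-inf")
    out = [lowest] * num_segments
    for value, sid in zip(data, segment_ids):
        if sid < 0:
            continue
        out[sid] = max(out[sid], value)
    return out
```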

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
Tnumsegments::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of integer or floating-point values
segment_ids tensor of 32/64-bit signed integer values
num_segments tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of integer or floating-point values

tf.UnsortedSegmentMin (TF::UnsortedSegmentMinOp)

Computes the minimum along segments of a tensor.

Read the section on segmentation for an explanation of segments.

This operator is similar to tf.math.unsorted_segment_sum. Instead of computing the sum over segments, it computes the minimum, such that:

\(output_i = \min_{j...} data[j...]\) where min is over tuples j... such that segment_ids[j...] == i.

If the minimum is empty for a given segment ID i, it outputs the largest possible value for the specific numeric type, output[i] = numeric_limits<T>::max().

For example:

c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.math.unsorted_segment_min(c, tf.constant([0, 1, 0]), num_segments=2).numpy()
array([[1, 2, 2, 1],
       [5, 6, 7, 8]], dtype=int32)

If the given segment ID i is negative, then the corresponding value is dropped, and will not be included in the result.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
Tnumsegments::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of integer or floating-point values
segment_ids tensor of 32/64-bit signed integer values
num_segments tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of integer or floating-point values

tf.UnsortedSegmentProd (TF::UnsortedSegmentProdOp)

Computes the product along segments of a tensor.

Read the section on segmentation for an explanation of segments.

This operator is similar to tf.math.unsorted_segment_sum. Instead of computing the sum over segments, it computes the product of all entries belonging to a segment, such that:

\(output_i = \prod_{j...} data[j...]\) where the product is over tuples j... such that segment_ids[j...] == i.

For example:

c = tf.constant([[1,2,3,4], [5,6,7,8], [4,3,2,1]])
tf.math.unsorted_segment_prod(c, tf.constant([0, 1, 0]), num_segments=2).numpy()
array([[4, 6, 6, 4],
       [5, 6, 7, 8]], dtype=int32)

If there is no entry for a given segment ID i, it outputs 1.

If the given segment ID i is negative, then the corresponding value is dropped, and will not be included in the result. Caution: On CPU, values in segment_ids are always validated to be less than num_segments, and an error is thrown for out-of-bound indices. On GPU, out-of-bound indices do not raise an error; instead they result in safe but unspecified behavior, which may include ignoring them or outputting a tensor with a 0 stored in the first dimension of its shape if num_segments is 0.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
Tnumsegments::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of number values
segment_ids tensor of 32/64-bit signed integer values
num_segments tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.UnsortedSegmentSum (TF::UnsortedSegmentSumOp)

Computes the sum along segments of a tensor.

Read the section on segmentation for an explanation of segments.

Computes a tensor such that \(output[i] = \sum_{j...} data[j...]\) where the sum is over tuples j... such that segment_ids[j...] == i. Unlike SegmentSum, segment_ids need not be sorted and need not cover all values in the full range of valid values.

If the sum is empty for a given segment ID i, output[i] = 0. If the given segment ID i is negative, the value is dropped and will not be added to the sum of the segment.

num_segments should equal the number of distinct segment IDs.

c = [[1,2,3,4], [5,6,7,8], [4,3,2,1]]
tf.math.unsorted_segment_sum(c, [0, 1, 0], num_segments=2).numpy()
array([[5, 5, 5, 5],
       [5, 6, 7, 8]], dtype=int32)
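A NumPy sketch of these semantics (zero-initialized segments, negative IDs dropped); the helper name is illustrative:

```python
import numpy as np

def unsorted_segment_sum(data, segment_ids, num_segments):
    data = np.asarray(data)
    ids = np.asarray(segment_ids)
    out = np.zeros((num_segments,) + data.shape[1:], dtype=data.dtype)
    keep = ids >= 0                        # negative segment IDs are dropped
    np.add.at(out, ids[keep], data[keep])  # unbuffered scatter-add
    return out

c = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [4, 3, 2, 1]])
result = unsorted_segment_sum(c, [0, 1, 0], num_segments=2)
# result == [[5, 5, 5, 5], [5, 6, 7, 8]]
```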

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
Tnumsegments::mlir::Attributederived attribute

Operands:

Operand Description
data tensor of number values
segment_ids tensor of 16-bit integer or 32-bit integer or 64-bit integer values
num_segments tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.UpperBound (TF::UpperBoundOp)

Applies upper_bound(sorted_search_values, values) along each row.

Each set of rows with the same index in (sorted_inputs, values) is treated independently. The resulting row is the equivalent of calling np.searchsorted(sorted_inputs, values, side='right').

The result is not a global index to the entire Tensor, but rather just the index in the last dimension.

A 2-D example:

sorted_sequence = [[0, 3, 9, 9, 10],
                   [1, 2, 3, 4, 5]]
values = [[2, 4, 9],
          [0, 2, 6]]

result = UpperBound(sorted_sequence, values)

result == [[1, 2, 4],
           [0, 2, 5]]
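The per-row behavior matches np.searchsorted with side='right'; a minimal sketch (the `upper_bound` helper is hypothetical):

```python
import numpy as np

def upper_bound(sorted_inputs, values):
    # Each row is searched independently; results are per-row indices,
    # not global indices into the whole tensor.
    return np.array([np.searchsorted(row, v, side='right')
                     for row, v in zip(sorted_inputs, values)])

sorted_sequence = [[0, 3, 9, 9, 10], [1, 2, 3, 4, 5]]
values = [[2, 4, 9], [0, 2, 6]]
result = upper_bound(sorted_sequence, values)
# result == [[1, 2, 4], [0, 2, 5]]
```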

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
out_type::mlir::Attributederived attribute

Operands:

Operand Description
sorted_inputs tensor of tf.dtype values
values tensor of tf.dtype values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.VarHandleOp (TF::VarHandleOp)

Creates a handle to a Variable resource from its name.

container: the container this variable is placed in. shared_name: the name by which this variable is referred to. dtype and shape: attributes representing the data type and shape held in the variable.

Example:

resource_variable_ops.var_handle_op(
    dtype=dtypes.int32, shape=[8, 16], container="foo", shared_name="bar")

returns a handle for a variable with name "bar" in container "foo", and the variable holds a tensor of shape [8, 16] and dtype int32.

Interfaces: ResourceHandleAllocatorInterface

Attributes:

AttributeMLIR TypeDescription
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
dtype::mlir::Attributederived attribute
shape::mlir::Attributederived attribute

Results:

Result Description
resource tensor of resource values

tf.Variable (TF::VariableOp)

Use VariableV2 instead.

Attributes:

AttributeMLIR TypeDescription
shape::mlir::AttributeTensorFlow shape attribute
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
dtype::mlir::Attributederived attribute

Results:

Result Description
ref tensor of tf.dtype values

tf.VariableShape (TF::VariableShapeOp)

Returns the shape of the variable pointed to by resource.

This operation returns a 1-D integer tensor representing the shape of input.

For example:

# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]

Attributes:

AttributeMLIR TypeDescription
out_type::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of resource values

Results:

Result Description
output tensor of 32/64-bit signed integer values

tf.VariableV2 (TF::VariableV2Op)

Holds state in the form of a tensor that persists across steps.

Outputs a ref to the tensor state so it may be read or modified. TODO(zhifengc/mrry): Add a pointer to a more detailed document about sharing states in tensorflow.

Attributes:

AttributeMLIR TypeDescription
shape::mlir::AttributeTensorFlow shape attribute
container::mlir::StringAttrstring attribute
shared_name::mlir::StringAttrstring attribute
dtype::mlir::Attributederived attribute

Results:

Result Description
ref tensor of tf.dtype values

tf.VarIsInitializedOp (TF::VarIsInitializedOp)

Checks whether a resource handle-based variable has been initialized.

Operands:

Operand Description
resource tensor of resource values

Results:

Result Description
is_initialized tensor of bool values

tf.Where (TF::WhereOp)

Returns locations of nonzero / true values in a tensor.

This operation returns the coordinates of true elements in condition. The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of true elements, and the second dimension (columns) represents the coordinates of the true elements. Keep in mind, the shape of the output tensor can vary depending on how many true values there are in condition. Indices are output in row-major order.

For example:

# 'input' tensor is [[True, False]
#                    [True, False]]
# 'input' has two true values, so output has two coordinates.
# 'input' has rank of 2, so coordinates have two indices.
where(input) ==> [[0, 0],
                  [1, 0]]

# `condition` tensor is [[[True, False]
#                     [True, False]]
#                    [[False, True]
#                     [False, True]]
#                    [[False, False]
#                     [False, True]]]
# 'input' has 5 true values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0],
                  [0, 1, 0],
                  [1, 0, 1],
                  [1, 1, 1],
                  [2, 1, 1]]

# `condition` tensor is [[[1.5,  0.0]
#                     [-0.5, 0.0]]
#                    [[0.0,  0.25]
#                     [0.0,  0.75]]
#                    [[0.0,  0.0]
#                     [0.0,  0.01]]]
# 'input' has 5 nonzero values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0],
                  [0, 1, 0],
                  [1, 0, 1],
                  [1, 1, 1],
                  [2, 1, 1]]

# `condition` tensor is [[[1.5 + 0.0j, 0.0  + 0.0j]
#                     [0.0 + 0.5j, 0.0  + 0.0j]]
#                    [[0.0 + 0.0j, 0.25 + 1.5j]
#                     [0.0 + 0.0j, 0.75 + 0.0j]]
#                    [[0.0 + 0.0j, 0.0  + 0.0j]
#                     [0.0 + 0.0j, 0.01 + 0.0j]]]
# 'input' has 5 nonzero magnitude values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0],
                  [0, 1, 0],
                  [1, 0, 1],
                  [1, 1, 1],
                  [2, 1, 1]]
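All three cases reduce to "coordinates of nonzero elements in row-major order", which np.argwhere reproduces (a sketch, not the TF kernel):

```python
import numpy as np

def where(condition):
    # Returns a [num_true, rank] tensor of coordinates, row-major.
    return np.argwhere(np.asarray(condition))

coords = where(np.array([[True, False],
                         [True, False]]))
# coords == [[0, 0], [1, 0]]
```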

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
index tensor of 64-bit integer values

tf.While (TF::WhileOp)

output = input; While (Cond(output)) { output = Body(output) }

input: A list of input tensors whose types are T. output: A list of output tensors whose types are T. cond: A function that takes 'input' and returns a tensor. If the tensor is a non-boolean scalar, it is converted to a boolean according to the following rule: if the scalar is a numerical value, non-zero means True and zero means False; if the scalar is a string, non-empty means True and empty means False. If the tensor is not a scalar, non-emptiness means True and emptiness means False. body: A function that takes a list of tensors and returns another list of tensors. Both lists have the same types as specified by T.
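The control flow can be sketched as a plain Python loop (a hypothetical `tf_while` helper, ignoring the scalar-to-boolean conversion rules above):

```python
def tf_while(inputs, cond, body):
    # output = input; while Cond(output): output = Body(output)
    outputs = list(inputs)
    while cond(*outputs):
        outputs = list(body(*outputs))
    return outputs

# Count i up to 5 while accumulating the running sum.
result = tf_while([0, 0],
                  cond=lambda i, s: i < 5,
                  body=lambda i, s: (i + 1, s + i))
# result == [5, 10]
```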

Interfaces: SymbolUserOpInterface

Attributes:

AttributeMLIR TypeDescription
cond::mlir::FlatSymbolRefAttrflat symbol reference attribute
body::mlir::FlatSymbolRefAttrflat symbol reference attribute
parallel_iterations::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
is_stateless::mlir::BoolAttrbool attribute
shape_invariant::mlir::UnitAttrunit attribute
T::mlir::Attributederived attribute
output_shapes::mlir::Attributederived attribute

Operands:

Operand Description
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.WhileRegion (TF::WhileRegionOp)

While operation

The tf.WhileRegion op represents a while loop using 2 regions and a set of iteration variables. The iteration variables maintained by this Op have the same types as the inputs. The Op executes a while loop described by the following pseudo code:

   func WhileRegionOp(inputs) {
     iteration_vars = inputs;
     while (cond(iteration_vars)) {
         iteration_vars = body(iteration_vars);
     }
     return iteration_vars;
   }

cond is the condition region and body is the body region. Both these regions accept the current value of the iteration variables as inputs.

The condition region yields a tensor which, if false, exits the loop. It can also, optionally, yield the iteration variables, which must be unchanged.

The body region always has to yield the (possibly updated) iteration variables.

The iteration variables are initialized to the Op input, and the results of the tf.WhileRegion op are the final values of the iteration variables.

This implies that the operand and result types for tf.WhileRegion should be the same. Note that the condition and body regions can implicitly capture loop invariant values directly. In canonical form, iteration variables that pass through the loop body unmodified are converted to implicitly captured references to their values outside the loop.

Traits: SingleBlockImplicitTerminator<YieldOp>, SingleBlock

Interfaces: LoopLikeOpInterface, RegionBranchOpInterface

Attributes:

AttributeMLIR TypeDescription
parallel_iterations::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
is_stateless::mlir::BoolAttrbool attribute
shape_invariant::mlir::UnitAttrunit attribute

Operands:

Operand Description
input variadic of tensor of any type values

Results:

Result Description
output variadic of tensor of any type values

tf.WriteAudioSummary (TF::WriteAudioSummaryOp)

Writes a Summary protocol buffer with audio.

The summary has up to max_outputs summary values containing audio. The audio is built from tensor which must be 3-D with shape [batch_size, frames, channels] or 2-D with shape [batch_size, frames]. The values are assumed to be in the range of [-1.0, 1.0] with a sample rate of sample_rate.

The tag argument is a scalar Tensor of type string. It is used to build the tag of the summary values:

  • If max_outputs is 1, the summary value tag is 'tag/audio'.
  • If max_outputs is greater than 1, the summary value tags are generated sequentially as 'tag/audio/0', 'tag/audio/1', etc.

writer: A handle to a summary writer. step: The step to write the summary for. tag: Scalar. Used to build the tag attribute of the summary values. tensor: 2-D of shape [batch_size, frames]. sample_rate: The sample rate of the signal in hertz. max_outputs: Max number of batch elements to generate audio for.

Attributes:

AttributeMLIR TypeDescription
max_outputs::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1

Operands:

Operand Description
writer tensor of resource values
step tensor of 64-bit integer values
tag tensor of string values
tensor tensor of 32-bit float values
sample_rate tensor of 32-bit float values

tf.WriteGraphSummary (TF::WriteGraphSummaryOp)

Writes a GraphDef protocol buffer to a SummaryWriter.

writer: Handle of SummaryWriter. step: The step to write the summary for. tensor: A scalar string of the serialized tf.GraphDef proto.

Operands:

Operand Description
writer tensor of resource values
step tensor of 64-bit integer values
tensor tensor of string values

tf.WriteHistogramSummary (TF::WriteHistogramSummaryOp)

Writes a histogram summary.

The generated Summary has one summary value containing a histogram for values.

This op reports an InvalidArgument error if any value is not finite.

writer: A handle to a summary writer. step: The step to write the summary for. tag: Scalar. Tag to use for the Summary.Value. values: Any shape. Values to use to build the histogram.

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
writer tensor of resource values
step tensor of 64-bit integer values
tag tensor of string values
values tensor of integer or floating-point values

tf.WriteImageSummary (TF::WriteImageSummaryOp)

Writes a Summary protocol buffer with images.

The summary has up to max_images summary values containing images. The images are built from tensor which must be 4-D with shape [batch_size, height, width, channels] and where channels can be:

  • 1: tensor is interpreted as Grayscale.
  • 3: tensor is interpreted as RGB.
  • 4: tensor is interpreted as RGBA.

The images have the same number of channels as the input tensor. For float input, the values are normalized one image at a time to fit in the range [0, 255]. uint8 values are unchanged. The op uses two different normalization algorithms:

  • If the input values are all positive, they are rescaled so the largest one is 255.

  • If any input value is negative, the values are shifted so input value 0.0 is at 127. They are then rescaled so that either the smallest value is 0, or the largest one is 255.
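Assuming the two rules above, the float normalization can be sketched as follows (a hypothetical helper, not the summary-writer implementation):

```python
import numpy as np

def normalize_for_image(x):
    # All-positive inputs: rescale so the largest value is 255.
    # Inputs with negatives: shift 0.0 to 127, then rescale so either
    # the smallest value lands at 0 or the largest at 255.
    x = np.asarray(x, dtype=float)
    if (x >= 0).all():
        return x * (255.0 / x.max())
    scale = 127.0 / -x.min()
    if x.max() > 0:
        scale = min(scale, 128.0 / x.max())
    return x * scale + 127.0

pos = normalize_for_image([1.0, 2.0])     # largest value maps to 255
mixed = normalize_for_image([-1.0, 1.0])  # smallest value maps to 0
```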

The tag argument is a scalar Tensor of type string. It is used to build the tag of the summary values:

  • If max_images is 1, the summary value tag is 'tag/image'.
  • If max_images is greater than 1, the summary value tags are generated sequentially as 'tag/image/0', 'tag/image/1', etc.

The bad_color argument is the color to use in the generated images for non-finite input values. It is a uint8 1-D tensor of length channels. Each element must be in the range [0, 255] (it represents the value of a pixel in the output image). Non-finite values in the input tensor are replaced by this tensor in the output image. The default value is the color red.

writer: A handle to a summary writer. step: The step to write the summary for. tag: Scalar. Used to build the tag attribute of the summary values. tensor: 4-D of shape [batch_size, height, width, channels] where channels is 1, 3, or 4. max_images: Max number of batch elements to generate images for. bad_color: Color to use for pixels with non-finite values.

Attributes:

AttributeMLIR TypeDescription
max_images::mlir::IntegerAttr64-bit signless integer attribute whose minimum value is 1
T::mlir::Attributederived attribute

Operands:

Operand Description
writer tensor of resource values
step tensor of 64-bit integer values
tag tensor of string values
tensor tensor of 16-bit float or 32-bit float or 8-bit unsigned integer values
bad_color tensor of 8-bit unsigned integer values

tf.WriteRawProtoSummary (TF::WriteRawProtoSummaryOp)

Writes a Summary protocol buffer with serialized string Summary protocol buffers.

writer: A handle to a summary writer. step: The step to write the summary for. tensor: A tensor holding one or more serialized Summary protobufs to write.

Operands:

Operand Description
writer tensor of resource values
step tensor of 64-bit integer values
tensor tensor of string values

tf.WriteScalarSummary (TF::WriteScalarSummaryOp)

Writes a Summary protocol buffer with scalar values.

The input tag and value must be scalars.

writer: A handle to a summary writer. step: The step to write the summary for. tag: Tag for the summary. value: Value for the summary.

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
writer tensor of resource values
step tensor of 64-bit integer values
tag tensor of string values
value tensor of integer or floating-point values

tf.WriteSummary (TF::WriteSummaryOp)

Outputs a Summary protocol buffer with a tensor.

writer: A handle to a summary writer. step: The step to write the summary for. tensor: A tensor to serialize. tag: The summary's tag. summary_metadata: Serialized SummaryMetadata protocol buffer containing plugin-related metadata for this summary.

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
writer tensor of resource values
step tensor of 64-bit integer values
tensor tensor of tf.dtype values
tag tensor of string values
summary_metadata tensor of string values

tf.WriteTrainingPredictions (TF::WriteTrainingPredictionsOp)

Writes the given predictions into a RecordIO file using a previously

initialized global TrainingPredictionWriter. The predictions are transformed into a PredictionData proto before they are written to the file.

Interfaces: MemoryEffectOpInterface

Attributes:

AttributeMLIR TypeDescription
prediction_names::mlir::ArrayAttrstring array attribute
training::mlir::BoolAttrbool attribute
file_path::mlir::StringAttrstring attribute
num_predictions::mlir::Attributederived attribute

Operands:

Operand Description
keys tensor of string values
predictions_list variadic of tensor of 32-bit float values
step tensor of 64-bit integer values
timestamp_usec tensor of 64-bit integer values

tf.Xdivy (TF::XdivyOp)

Returns 0 if x == 0, and x / y otherwise, elementwise.
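A NumPy sketch of the safe-division semantics (note that 0/0 yields 0, not NaN):

```python
import numpy as np

def xdivy(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    safe_y = np.where(x == 0.0, 1.0, y)    # avoid evaluating 0/0
    return np.where(x == 0.0, 0.0, x / safe_y)

result = xdivy([0.0, 4.0], [0.0, 2.0])
# result == [0.0, 2.0]
```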

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
x tensor of floating-point or complex values
y tensor of floating-point or complex values

Results:

Result Description
z tensor of floating-point or complex values

tf.XlaAllReduce (TF::XlaAllReduceOp)

Wraps the XLA AllReduce operator

documented at https://www.tensorflow.org/xla/operation_semantics#allreduce

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
reduce_op::mlir::StringAttrstring attribute whose value is Min, or Max, or Mul, or Add, or Mean
mode::mlir::StringAttrstring attribute whose value is CrossReplica, or CrossReplicaAndPartition
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 32-bit integer or 32-bit unsigned integer values
group_assignment tensor of 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 32-bit integer or 32-bit unsigned integer values

tf.XlaBroadcastHelper (TF::XlaBroadcastHelperOp)

Helper operator for performing XLA-style broadcasts

Broadcasts lhs and rhs to the same rank, by adding size 1 dimensions to whichever of lhs and rhs has the lower rank, using XLA's broadcasting rules for binary operators.
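One way to read the rank alignment: broadcast_dims maps each dimension of the lower-rank operand to a dimension of the higher-rank operand's shape, and size-1 dimensions fill the rest. A NumPy sketch under that reading (the helper is illustrative):

```python
import numpy as np

def broadcast_helper(lhs, rhs, broadcast_dims):
    # Reshape the lower-rank operand with size-1 dims so both have equal rank.
    lo, hi = (lhs, rhs) if lhs.ndim < rhs.ndim else (rhs, lhs)
    shape = [1] * hi.ndim
    for in_dim, out_dim in enumerate(broadcast_dims):
        shape[out_dim] = lo.shape[in_dim]
    lo = lo.reshape(shape)
    return (lo, hi) if lhs.ndim < rhs.ndim else (hi, lo)

a = np.ones((2, 3, 4))
b = np.ones(3)
lhs_out, rhs_out = broadcast_helper(a, b, broadcast_dims=[1])
# rhs_out.shape == (1, 3, 1); it now broadcasts against (2, 3, 4)
```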

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
lhs tensor of number values
rhs tensor of number values
broadcast_dims tensor of 32/64-bit signed integer values

Results:

Result Description
lhs_output tensor of number values
rhs_output tensor of number values

tf.XlaCallModule (TF::XlaCallModuleOp)

Invokes a StableHLO module.

This op is used with JAX native serialization in a TensorFlow context with stability guarantees.

Interfaces: MemoryEffectOpInterface, SymbolUserOpInterface

Attributes:

AttributeMLIR TypeDescription
version::mlir::IntegerAttr64-bit signless integer attribute
module::mlir::StringAttrstring attribute
Sout::mlir::ArrayAttrtensorflow shape attribute array
dim_args_spec::mlir::ArrayAttrstring array attribute
platforms::mlir::ArrayAttrstring array attribute
function_list::mlir::ArrayAttrtensorflow symbol ref array attribute
has_token_input_output::mlir::BoolAttrbool attribute
disabled_checks::mlir::ArrayAttrstring array attribute
Tin::mlir::Attributederived attribute
Tout::mlir::Attributederived attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.XlaClusterOutput (TF::XlaClusterOutputOp)

Operator that connects the output of an XLA computation to other consumer graph nodes.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
outputs tensor of tf.dtype values

tf.XlaConv (TF::XlaConvOp)

Wraps the XLA ConvGeneralDilated operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#conv_convolution .

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
dimension_numbers::mlir::StringAttrstring attribute
precision_config::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
lhs tensor of number values
rhs tensor of number values
window_strides tensor of 32/64-bit signed integer values
padding tensor of 32/64-bit signed integer values
lhs_dilation tensor of 32/64-bit signed integer values
rhs_dilation tensor of 32/64-bit signed integer values
feature_group_count tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.XlaConvV2 (TF::XlaConvV2Op)

Wraps the XLA ConvGeneralDilated operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#conv_convolution .

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
dimension_numbers::mlir::StringAttrstring attribute
precision_config::mlir::StringAttrstring attribute
batch_group_count::mlir::IntegerAttr64-bit signless integer attribute
LhsT::mlir::Attributederived attribute
RhsT::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute
preferred_element_type::mlir::Attributederived attribute

Operands:

Operand Description
lhs tensor of number values
rhs tensor of number values
window_strides tensor of 32/64-bit signed integer values
padding tensor of 32/64-bit signed integer values
lhs_dilation tensor of 32/64-bit signed integer values
rhs_dilation tensor of 32/64-bit signed integer values
feature_group_count tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of number values

tf.XlaCustomCallV2 (TF::XlaCustomCallV2Op)

Emits an HLO CustomCall operation with multiple outputs.

As opposed to XlaCustomCall, this operation supports multiple outputs.

See CustomCall specification at https://tensorflow.org/xla/operation_semantics#customcall, and mhlo.custom_call specification at https://tensorflow.org/mlir/hlo_ops#mhlocustom_call_mlirmhlocustomcallop

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
call_target_name::mlir::StringAttrstring attribute
backend_config::mlir::StringAttrstring attribute
has_side_effect::mlir::BoolAttrbool attribute
result_shapes::mlir::ArrayAttrtensorflow shape attribute array
operand_dtypes::mlir::Attributederived attribute
result_dtypes::mlir::Attributederived attribute

Operands:

Operand Description
operands variadic of tensor of tf.dtype values

Results:

Result Description
results variadic of tensor of tf.dtype values

tf.XlaDot (TF::XlaDotOp)

Wraps the XLA DotGeneral operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#dotgeneral .

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
dimension_numbers::mlir::StringAttrstring attribute
precision_config::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
lhs tensor of number values
rhs tensor of number values

Results:

Result Description
output tensor of number values

tf.XlaDotV2 (TF::XlaDotV2Op)

Wraps the XLA DotGeneral operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#dotgeneral .

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
dimension_numbers::mlir::StringAttrstring attribute
precision_config::mlir::StringAttrstring attribute
LhsT::mlir::Attributederived attribute
RhsT::mlir::Attributederived attribute
preferred_element_type::mlir::Attributederived attribute

Operands:

Operand Description
lhs tensor of number values
rhs tensor of number values

Results:

Result Description
output tensor of number values

tf.XlaDynamicSlice (TF::XlaDynamicSliceOp)

Wraps the XLA DynamicSlice operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#dynamicslice .

DynamicSlice extracts a sub-array from the input array at dynamic start_indices. The size of the slice in each dimension is passed in size_indices, which specify the end point of exclusive slice intervals in each dimension -- [start, start + size). The shape of start_indices must have rank 1, with dimension size equal to the rank of operand.
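The slicing rule above can be sketched with NumPy (a hypothetical helper; XLA additionally clamps out-of-bound start indices, which is omitted here):

```python
import numpy as np

def dynamic_slice(operand, start_indices, size_indices):
    # Takes operand[start : start + size) in every dimension.
    slices = tuple(slice(s, s + n)
                   for s, n in zip(start_indices, size_indices))
    return np.asarray(operand)[slices]

t = np.arange(16).reshape(4, 4)
sl = dynamic_slice(t, start_indices=[1, 0], size_indices=[2, 3])
# sl == [[4, 5, 6], [8, 9, 10]]
```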

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
start_indices tensor of 32/64-bit signed integer values
size_indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.XlaDynamicUpdateSlice (TF::XlaDynamicUpdateSliceOp)

Wraps the XLA DynamicUpdateSlice operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#dynamicupdateslice .

XlaDynamicUpdateSlice generates a result which is the value of the input operand, with a slice update overwritten at indices. The shape of update determines the shape of the sub-array of the result which is updated. The shape of indices must be rank == 1, with dimension size equal to the rank of input.

Handling of out-of-bounds slice indices is implementation-defined.
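A single-dimension sketch of the update semantics, as a hypothetical Python helper. Since out-of-bounds handling is implementation-defined, the clamping below is one common implementation choice, not a guarantee of this op.

```python
def dynamic_update_slice_1d(operand, update, start):
    """Sketch of DynamicUpdateSlice along one dimension: overwrite
    len(update) elements of operand starting at start."""
    # Clamp so the update fits; out-of-bounds behavior is
    # implementation-defined for the real op.
    start = max(0, min(start, len(operand) - len(update)))
    out = list(operand)
    out[start:start + len(update)] = update
    return out

print(dynamic_update_slice_1d([0, 0, 0, 0], [7, 8], 1))  # [0, 7, 8, 0]
```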

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
update tensor of tf.dtype values
indices tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.XlaEinsum (TF::XlaEinsumOp)

An op that supports a basic einsum operation with 2 inputs and 1 output.

This op has better TPU performance because, unlike tf.einsum, it does not insert explicit reshape and transpose operations.
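The contraction computed is the standard einsum contraction; the NumPy sketch below shows the same result for a matrix-multiply equation, assuming NumPy is available (this illustrates the semantics, not the XLA lowering).

```python
import numpy as np

# "ab,bc->ac" is matrix multiplication expressed as an einsum equation;
# tf.XlaEinsum evaluates the same contraction as np.einsum / tf.einsum.
a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
product = np.einsum("ab,bc->ac", a, b)
assert np.allclose(product, a @ b)
```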

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
equation::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of bfloat16 or 64-bit complex or 32-bit float values
b tensor of bfloat16 or 64-bit complex or 32-bit float values

Results:

Result Description
product tensor of bfloat16 or 64-bit complex or 32-bit float values

tf.XlaGather (TF::XlaGatherOp)

Wraps the XLA Gather operator documented at

https://www.tensorflow.org/xla/operation_semantics#gather

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
dimension_numbers::mlir::StringAttrstring attribute
indices_are_sorted::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
operand tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
start_indices tensor of 32/64-bit signed integer values
slice_sizes tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.XlaHostCompute (TF::XlaHostComputeOp)

A pseudo-op to represent host-side computation in an XLA program.

Interfaces: TF_RecvSideEffect (MemoryEffectOpInterface), TF_SendSideEffect (MemoryEffectOpInterface), TF_XlaHostComputeSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::XlaHostCompute}

Attributes:

AttributeMLIR TypeDescription
ancestors::mlir::ArrayAttrstring array attribute
shapes::mlir::ArrayAttrtensorflow shape attribute array
shape_inference_graph::mlir::SymbolRefAttrsymbol reference attribute
key::mlir::StringAttrstring attribute
send_key::mlir::StringAttrstring attribute
recv_key::mlir::StringAttrstring attribute
cost_estimate_ns::mlir::IntegerAttr64-bit signless integer attribute
tpu_core::mlir::IntegerAttr64-bit signless integer attribute
Tinputs::mlir::Attributederived attribute
Toutputs::mlir::Attributederived attribute

Operands:

Operand Description
inputs variadic of tensor of tf.dtype values

Results:

Result Description
outputs variadic of tensor of tf.dtype values

tf.XlaKeyValueSort (TF::XlaKeyValueSortOp)

Wraps the XLA Sort operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#sort .

Sorts a tensor. Currently only sorts in ascending order are supported.
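The key/value behavior can be sketched in plain Python: keys are sorted ascending and values are permuted by the same order. This is a hypothetical reference helper, not the XLA implementation.

```python
def key_value_sort(keys, values):
    """Sketch of XlaKeyValueSort: sort keys ascending and apply the
    same permutation to values."""
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    return [keys[i] for i in order], [values[i] for i in order]

print(key_value_sort([3, 1, 2], ["c", "a", "b"]))  # ([1, 2, 3], ['a', 'b', 'c'])
```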

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
K::mlir::Attributederived attribute
V::mlir::Attributederived attribute

Operands:

Operand Description
keys tensor of integer or floating-point values
values tensor of tf.dtype values

Results:

Result Description
sorted_keys tensor of integer or floating-point values
sorted_values tensor of tf.dtype values

tf.XlaLaunch (TF::XlaLaunchOp)

XLA Launch Op. For use by the XLA JIT only.

Traits: AttrSizedOperandSegments

Interfaces: GetResourceInstanceInterface, MemoryEffectOpInterface

Attributes:

AttributeMLIR TypeDescription
function::mlir::SymbolRefAttrsymbol reference attribute
Nresources::mlir::Attributederived attribute
Targs::mlir::Attributederived attribute
Tconstants::mlir::Attributederived attribute
Tresults::mlir::Attributederived attribute

Operands:

Operand Description
constants variadic of tensor of tf.dtype values
args variadic of tensor of tf.dtype values
resources variadic of tensor of resource values

Results:

Result Description
results variadic of tensor of tf.dtype values

tf.XlaLaunchV2 (TF::XlaLaunchV2Op)

XLA Launch Op. For use by the XLA JIT only.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
constants::mlir::ArrayAttr64-bit integer array attribute
resources::mlir::ArrayAttr64-bit integer array attribute
function::mlir::SymbolRefAttrsymbol reference attribute
Targs::mlir::Attributederived attribute
Tresults::mlir::Attributederived attribute

Operands:

Operand Description
args variadic of tensor of tf.dtype values

Results:

Result Description
results variadic of tensor of tf.dtype values

tf.XlaOptimizationBarrier (TF::XlaOptimizationBarrierOp)

Wraps the XLA OptimizationBarrier operator.

Documented at https://www.tensorflow.org/xla/operation_semantics#optimizationbarrier

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input variadic of tensor of tf.dtype values

Results:

Result Description
output variadic of tensor of tf.dtype values

tf.XlaPad (TF::XlaPadOp)

Wraps the XLA Pad operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#pad .
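XLA Pad supports low, high, and interior padding per dimension. A one-dimensional sketch (a hypothetical helper following the linked semantics) makes the interior padding behavior concrete:

```python
def pad_1d(operand, padding_value, low, high, interior):
    """1-D sketch of XLA Pad: `low` copies of padding_value before the
    data, `high` after it, and `interior` copies between each pair of
    adjacent elements."""
    out = [padding_value] * low
    for i, x in enumerate(operand):
        if i:
            out.extend([padding_value] * interior)
        out.append(x)
    out.extend([padding_value] * high)
    return out

print(pad_1d([1, 2, 3], 0, 1, 2, 1))  # [0, 1, 0, 2, 0, 3, 0, 0]
```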

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
padding_value tensor of tf.dtype values
padding_low tensor of 32/64-bit signed integer values
padding_high tensor of 32/64-bit signed integer values
padding_interior tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of tf.dtype values

tf.XlaRecv (TF::XlaRecvOp)

Receives the named tensor from another XLA computation. Wraps the XLA Recv

operator documented at https://www.tensorflow.org/performance/xla/operation_semantics#recv .

Interfaces: TF_RecvSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}

Attributes:

AttributeMLIR TypeDescription
tensor_name::mlir::StringAttrstring attribute
shape::mlir::AttributeTensorFlow shape attribute
dtype::mlir::Attributederived attribute

Results:

Result Description
tensor tensor of tf.dtype values

tf.XlaRecvFromHost (TF::XlaRecvFromHostOp)

An op to receive a tensor from the host.

output: the tensor that will be received from the host. Toutput: element type for output. shape: shape for output. key: A unique identifier for this region used to match up host transfers.

Interfaces: TF_RecvSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Recv}

Attributes:

AttributeMLIR TypeDescription
shape::mlir::AttributeTensorFlow shape attribute
key::mlir::StringAttrstring attribute
Toutput::mlir::Attributederived attribute

Results:

Result Description
output tensor of tf.dtype values

tf.XlaRecvTPUEmbeddingActivations (TF::XlaRecvTPUEmbeddingActivationsOp)

An op that receives embedding activations on the TPU.

The TPU system performs the embedding lookups and aggregations. The results of these aggregations are visible to the TensorFlow graph as the outputs of an XlaRecvTPUEmbeddingActivations op. This op returns a list containing one Tensor of activations per table specified in the model.

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
config::mlir::StringAttrstring attribute
num_tables::mlir::Attributederived attribute

Operands:

Operand Description
deduplication_data tensor of variant values

Results:

Result Description
outputs variadic of tensor of 32-bit float values

tf.XlaRecvTPUEmbeddingDeduplicationData (TF::XlaRecvTPUEmbeddingDeduplicationDataOp)

Receives deduplication data (indices and weights) from the embedding core.

The deduplication data is a Tensor with type=DT_VARIANT. The tensor itself is an XLA nested tuple containing N elements (where N is the ratio of embedding cores to tensor cores per TPU chip). Each element of the nested tuple is a tuple of rank 1 tensors. Each tensor either contains indices (DT_UINT32) for embedding lookup on the TensorCore or weights (DT_FLOAT) to apply to the output of the embedding lookup operation.

Attributes:

AttributeMLIR TypeDescription
config::mlir::StringAttrstring attribute

Results:

Result Description
output tensor of variant values

tf.XlaReduce (TF::XlaReduceOp)

Wraps the XLA Reduce operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#reduce .
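Reduce folds the reducer computation over the dimensions in dimensions_to_reduce, starting from init_value. A minimal sketch over a single dimension, as a hypothetical Python helper (the real reducer must be associative and commutative):

```python
from functools import reduce

def xla_reduce(values, init_value, reducer):
    """Sketch of Reduce over one dimension: fold `reducer` over the
    values, seeded with init_value."""
    return reduce(reducer, values, init_value)

print(xla_reduce([1, 2, 3, 4], 0, lambda a, b: a + b))  # 10
```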

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
dimensions_to_reduce::mlir::ArrayAttr64-bit integer array attribute
reducer::mlir::SymbolRefAttrsymbol reference attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
init_value tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
output tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.XlaReducePrecision (TF::XlaReducePrecisionOp)

Wraps the XLA ReducePrecision operator

documented at https://www.tensorflow.org/xla/operation_semantics#reduceprecision

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
exponent_bits::mlir::IntegerAttr64-bit signless integer attribute
mantissa_bits::mlir::IntegerAttr64-bit signless integer attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
operand tensor of floating-point values

Results:

Result Description
output tensor of floating-point values

tf.XlaReduceScatter (TF::XlaReduceScatterOp)

Wraps the XLA ReduceScatter operator

documented at https://www.tensorflow.org/xla/operation_semantics#reducescatter

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
reduce_op::mlir::StringAttrstring attribute whose value is Min, or Max, or Mul, or Add, or Mean
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or 16-bit float or 32-bit float or 32-bit integer or 32-bit unsigned integer values
group_assignment tensor of 32-bit integer values
scatter_dimension tensor of 32-bit integer values

Results:

Result Description
output tensor of bfloat16 or 16-bit float or 32-bit float or 32-bit integer or 32-bit unsigned integer values

tf.XlaReduceWindow (TF::XlaReduceWindowOp)

Wraps the XLA ReduceWindow operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#reducewindow .
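ReduceWindow applies the reduction computation over sliding windows of the input. The one-dimensional sketch below is a hypothetical helper that ignores base/window dilation and padding, which the real op also takes as operands:

```python
def reduce_window_1d(operand, init, computation, window, stride):
    """1-D sketch of ReduceWindow with no dilation and no padding:
    reduce each length-`window` slice, moving by `stride`."""
    out = []
    for start in range(0, len(operand) - window + 1, stride):
        acc = init
        for x in operand[start:start + window]:
            acc = computation(acc, x)
        out.append(acc)
    return out

print(reduce_window_1d([1, 2, 3, 4, 5], 0, lambda a, b: a + b, 2, 1))  # [3, 5, 7, 9]
```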

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
computation::mlir::SymbolRefAttrsymbol reference attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
init_value tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
window_dimensions tensor of 32/64-bit signed integer values
window_strides tensor of 32/64-bit signed integer values
base_dilations tensor of 32/64-bit signed integer values
window_dilations tensor of 32/64-bit signed integer values
padding tensor of 32/64-bit signed integer values

Results:

Result Description
output tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.XlaRemoveDynamicDimensionSize (TF::XlaRemoveDynamicDimensionSizeOp)

Inverse of XlaSetDynamicDimensionSize.

Converts an XLA bounded dynamic dimension into a static dimension. The bound on the size of dimension dim_index becomes the static dimension size.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
dim_index tensor of 32-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.XlaReplicaId (TF::XlaReplicaIdOp)

Replica ID.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Results:

Result Description
id tensor of 32-bit integer values

tf.XlaRngBitGenerator (TF::XlaRngBitGeneratorOp)

Stateless PRNG bit generator.

Wraps the XLA RngBitGenerator operator, documented at https://www.tensorflow.org/performance/xla/operation_semantics#rngbitgenerator

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
Tshape::mlir::Attributederived attribute
dtype::mlir::Attributederived attribute

Operands:

Operand Description
algorithm tensor of 32-bit integer values
initial_state tensor of 64-bit unsigned integer values
shape tensor of 32/64-bit signed integer values

Results:

Result Description
output_key tensor of 64-bit unsigned integer values
output tensor of 32-bit integer or 64-bit integer or 8-bit integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.XlaScatter (TF::XlaScatterOp)

Wraps the XLA Scatter operator documented at

https://www.tensorflow.org/xla/operation_semantics#scatter
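In the simplest one-dimensional case, Scatter combines each update into the operand at its index using update_computation. The sketch below is a hypothetical helper for that case only; the real op supports general dimension_numbers.

```python
def scatter_1d(operand, scatter_indices, updates, update_computation):
    """1-D sketch of Scatter: combine each update into the operand
    element at its scatter index."""
    out = list(operand)
    for idx, upd in zip(scatter_indices, updates):
        out[idx] = update_computation(out[idx], upd)
    return out

print(scatter_1d([0, 0, 0, 0], [1, 3], [5, 7], lambda a, b: a + b))  # [0, 5, 0, 7]
```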

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
update_computation::mlir::SymbolRefAttrsymbol reference attribute
dimension_numbers::mlir::StringAttrstring attribute
indices_are_sorted::mlir::BoolAttrbool attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
operand tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values
scatter_indices tensor of 32/64-bit signed integer values
updates tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

Results:

Result Description
output tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values

tf.XlaSelectAndScatter (TF::XlaSelectAndScatterOp)

Wraps the XLA SelectAndScatter operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#selectandscatter .

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
select::mlir::SymbolRefAttrsymbol reference attribute
scatter::mlir::SymbolRefAttrsymbol reference attribute
T::mlir::Attributederived attribute
Tindices::mlir::Attributederived attribute

Operands:

Operand Description
operand tensor of number values
window_dimensions tensor of 32/64-bit signed integer values
window_strides tensor of 32/64-bit signed integer values
padding tensor of 32/64-bit signed integer values
source tensor of number values
init_value tensor of number values

Results:

Result Description
output tensor of number values

tf.XlaSelfAdjointEig (TF::XlaSelfAdjointEigOp)

Computes the eigen decomposition of a batch of self-adjoint matrices

(Note: Only real inputs are supported).

Computes the eigenvalues and eigenvectors of the innermost N-by-N matrices in tensor such that tensor[..., :, :] * v[..., :, i] = e[..., i] * v[..., :, i], for i=0...N-1.
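The defining property can be checked against NumPy's direct eigendecomposition (the XLA op computes an iterative approximation controlled by max_iter and epsilon; np.linalg.eigh serves here only as a reference, assuming NumPy is available):

```python
import numpy as np

a = np.array([[2.0, 1.0], [1.0, 2.0]])  # self-adjoint (symmetric) input
e, v = np.linalg.eigh(a)                # eigenvalues e, eigenvectors v
for i in range(2):
    # tensor[..., :, :] @ v[..., :, i] == e[..., i] * v[..., :, i]
    assert np.allclose(a @ v[:, i], e[i] * v[:, i])
```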

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
lower::mlir::BoolAttrbool attribute
max_iter::mlir::IntegerAttr64-bit signless integer attribute
epsilon::mlir::FloatAttr32-bit float attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
a tensor of number values

Results:

Result Description
w tensor of number values
v tensor of number values

tf.XlaSend (TF::XlaSendOp)

Sends the named tensor to another XLA computation. Wraps the XLA Send operator

documented at https://www.tensorflow.org/performance/xla/operation_semantics#send .

Interfaces: TF_SendSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}

Attributes:

AttributeMLIR TypeDescription
tensor_name::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
tensor tensor of tf.dtype values

tf.XlaSendToHost (TF::XlaSendToHostOp)

An op to send a tensor to the host.

input: the tensor that will be sent to the host. Tinput: element type for input. key: A unique identifier for this region used to match up host transfers.

Interfaces: TF_SendSideEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::Send}

Attributes:

AttributeMLIR TypeDescription
key::mlir::StringAttrstring attribute
Tinput::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

tf.XlaSendTPUEmbeddingGradients (TF::XlaSendTPUEmbeddingGradientsOp)

An op that performs gradient updates of embedding tables.

The gradients argument is a TensorList having the same length and shapes as the return value of XlaRecvTPUEmbeddingActivations, but contains gradients of the model's loss with respect to the embedding activations. The embedding tables are updated from these gradients via the optimizer specified in the TPUEmbeddingConfiguration proto given to tpu.initialize_system.

Traits: AttrSizedOperandSegments

Interfaces: TF_MustExecute (MemoryEffectOpInterface), TF_TPUEmbeddingReadEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{MemoryEffects::Read on ::mlir::TF::ResourceEffects::TPUEmbedding}, MemoryEffects::Effect{MemoryEffects::Write on ::mlir::TF::ResourceEffects::MustExecute}

Attributes:

AttributeMLIR TypeDescription
config::mlir::StringAttrstring attribute
NumLearningRateTags::mlir::Attributederived attribute
NumTables::mlir::Attributederived attribute

Operands:

Operand Description
gradients variadic of tensor of 32-bit float values
learning_rates variadic of tensor of 32-bit float values
deduplication_data tensor of variant values

tf.XlaSetBound (TF::XlaSetBoundOp)

Sets a bound for the given input value as a hint to the XLA compiler,

returning the same value.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Operands:

Operand Description
input tensor of 32-bit integer values
bound tensor of 32-bit integer values

Results:

Result Description
output tensor of 32-bit integer values

tf.XlaSetDynamicDimensionSize (TF::XlaSetDynamicDimensionSizeOp)

Converts a static dimension into an XLA bounded dynamic dimension.

The current static dimension size becomes the bound, and the second operand becomes the dynamic size of the dimension.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values
dim_index tensor of 32-bit integer values
size tensor of 32-bit integer values

Results:

Result Description
output tensor of tf.dtype values

tf.XlaSharding (TF::XlaShardingOp)

An op which shards the input based on the given sharding attribute.

Traits: AlwaysSpeculatableImplTrait, TF_NoConstantFold

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
sharding::mlir::StringAttrstring attribute
_XlaSharding::mlir::StringAttrstring attribute
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.XlaSort (TF::XlaSortOp)

Wraps the XLA Sort operator, documented at

https://www.tensorflow.org/performance/xla/operation_semantics#sort .

Sorts a tensor. Currently only sorts in ascending order are supported.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
T::mlir::Attributederived attribute

Operands:

Operand Description
input tensor of tf.dtype values

Results:

Result Description
output tensor of tf.dtype values

tf.XlaSparseCoreAdagrad (TF::XlaSparseCoreAdagradOp)

Performs an Adagrad optimizer update on SparseCore.

Attributes:

AttributeMLIR TypeDescription
feature_width::mlir::IntegerAttr64-bit signless integer attribute

Operands:

Operand Description
indices tensor of 32-bit integer values
gradient tensor of 32-bit float values
learning_rate tensor of 32-bit float values
accumulator tensor of 32-bit float values
embedding_table tensor of 32-bit float values

Results:

Result Description
updated_embedding_table tensor of 32-bit float values
updated_accumulator tensor of 32-bit float values
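Given the operands and results above, the op appears to apply the standard sparse Adagrad row update. The helper below is a hypothetical sketch of those conventional semantics, not the SparseCore kernel itself:

```python
import math

def sparse_adagrad_update(table, accumulator, indices, gradient, lr):
    """Sketch of a per-row Adagrad update, assuming conventional
    Adagrad semantics: acc += g^2; w -= lr * g / sqrt(acc)."""
    table = [row[:] for row in table]
    acc = [row[:] for row in accumulator]
    for row, grad_row in zip(indices, gradient):
        for j, g in enumerate(grad_row):
            acc[row][j] += g * g
            table[row][j] -= lr * g / math.sqrt(acc[row][j])
    return table, acc

print(sparse_adagrad_update([[0.0]], [[0.0]], [0], [[2.0]], 0.5))
```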

tf.XlaSparseCoreAdagradMomentum (TF::XlaSparseCoreAdagradMomentumOp)

Performs an Adagrad-with-momentum optimizer update on SparseCore.

Attributes:

AttributeMLIR TypeDescription
feature_width::mlir::IntegerAttr64-bit signless integer attribute
use_nesterov::mlir::BoolAttrbool attribute
beta_2::mlir::FloatAttr32-bit float attribute
exponent::mlir::FloatAttr32-bit float attribute

Operands:

Operand Description
indices tensor of 32-bit integer values
gradient tensor of 32-bit float values
learning_rate tensor of 32-bit float values
beta_1 tensor of 32-bit float values
epsilon tensor of 32-bit float values
accumulator tensor of 32-bit float values
momentum tensor of 32-bit float values
embedding_table tensor of 32-bit float values

Results:

Result Description
updated_embedding_table tensor of 32-bit float values
updated_accumulator tensor of 32-bit float values
updated_momentum tensor of 32-bit float values

tf.XlaSparseCoreAdam (TF::XlaSparseCoreAdamOp)

Performs an Adam optimizer update on SparseCore.

Attributes:

AttributeMLIR TypeDescription
feature_width::mlir::IntegerAttr64-bit signless integer attribute
use_sum_inside_sqrt::mlir::BoolAttrbool attribute

Operands:

Operand Description
embedding_table tensor of 32-bit float values
indices tensor of 32-bit integer values
gradient tensor of 32-bit float values
learning_rate tensor of 32-bit float values
momentum tensor of 32-bit float values
velocity tensor of 32-bit float values
beta_1 tensor of 32-bit float values
beta_2 tensor of 32-bit float values
epsilon tensor of 32-bit float values

Results:

Result Description
updated_embedding_table tensor of 32-bit float values
updated_velocity tensor of 32-bit float values
updated_momentum tensor of 32-bit float values

tf.XlaSparseCoreFtrl (TF::XlaSparseCoreFtrlOp)

Performs an FTRL optimizer update on SparseCore.

Attributes:

AttributeMLIR TypeDescription
feature_width::mlir::IntegerAttr64-bit signless integer attribute
multiply_linear_by_learning_rate::mlir::BoolAttrbool attribute
l1_regularization_strength::mlir::FloatAttr32-bit float attribute

Operands:

Operand Description
embedding_table tensor of 32-bit float values
accumulator tensor of 32-bit float values
linear tensor of 32-bit float values
learning_rate tensor of 32-bit float values
indices tensor of 32-bit integer values
gradient tensor of 32-bit float values
beta tensor of 32-bit float values
learning_rate_power tensor of 32-bit float values
l2_regularization_strength tensor of 32-bit float values

Results:

Result Description
updated_embedding_table tensor of 32-bit float values
updated_accumulator tensor of 32-bit float values
updated_linear tensor of 32-bit float values

tf.XlaSparseCoreSgd (TF::XlaSparseCoreSgdOp)

Performs an SGD optimizer update on SparseCore.

Attributes:

AttributeMLIR TypeDescription
feature_width::mlir::IntegerAttr64-bit signless integer attribute

Operands:

Operand Description
indices tensor of 32-bit integer values
gradient tensor of 32-bit float values
learning_rate tensor of 32-bit float values
embedding_table tensor of 32-bit float values

Results:

Result Description
updated_embedding_table tensor of 32-bit float values
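Consistent with the operands above, this op appears to apply a plain sparse SGD row update. A hypothetical sketch of those conventional semantics:

```python
def sparse_sgd_update(embedding_table, indices, gradient, learning_rate):
    """Sketch of a sparse SGD row update, assuming conventional
    semantics: w[row] -= learning_rate * g."""
    table = [row[:] for row in embedding_table]
    for row, grad_row in zip(indices, gradient):
        table[row] = [w - learning_rate * g
                      for w, g in zip(table[row], grad_row)]
    return table

print(sparse_sgd_update([[1.0], [2.0]], [1], [[0.5]], 0.1))  # [[1.0], [1.95]]
```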

tf.XlaSparseDenseMatmulGradWithAdagradAndCsrInput (TF::XlaSparseDenseMatmulGradWithAdagradAndCsrInputOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

AttributeMLIR TypeDescription
clip_weight_min::mlir::FloatAttr32-bit float attribute
clip_weight_max::mlir::FloatAttr32-bit float attribute
table_name::mlir::StringAttrstring attribute

Operands:

Operand Description
row_pointers tensor of 32-bit integer values
sorted_sample_ids tensor of 32-bit integer values
sorted_token_ids tensor of 32-bit integer values
sorted_gains tensor of 32-bit float values
activation_gradients tensor of 32-bit float values
learning_rate tensor of 32-bit float values
embedding_table tensor of 32-bit float values
accumulator tensor of 32-bit float values
num_minibatches_per_physical_sparse_core tensor of 32-bit integer values

Results:

Result Description
updated_embedding_table tensor of 32-bit float values
updated_accumulator tensor of 32-bit float values

tf.XlaSparseDenseMatmulGradWithAdagradMomentumAndCsrInput (TF::XlaSparseDenseMatmulGradWithAdagradMomentumAndCsrInputOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `use_nesterov` | `::mlir::BoolAttr` | bool attribute |
| `exponent` | `::mlir::FloatAttr` | 32-bit float attribute |
| `beta1` | `::mlir::FloatAttr` | 32-bit float attribute |
| `beta2` | `::mlir::FloatAttr` | 32-bit float attribute |
| `epsilon` | `::mlir::FloatAttr` | 32-bit float attribute |
| `clip_weight_min` | `::mlir::FloatAttr` | 32-bit float attribute |
| `clip_weight_max` | `::mlir::FloatAttr` | 32-bit float attribute |
| `table_name` | `::mlir::StringAttr` | string attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `row_pointers` | tensor of 32-bit integer values |
| `sorted_sample_ids` | tensor of 32-bit integer values |
| `sorted_token_ids` | tensor of 32-bit integer values |
| `sorted_gains` | tensor of 32-bit float values |
| `activation_gradients` | tensor of 32-bit float values |
| `learning_rate` | tensor of 32-bit float values |
| `embedding_table` | tensor of 32-bit float values |
| `accumulator` | tensor of 32-bit float values |
| `momenta` | tensor of 32-bit float values |
| `num_minibatches_per_physical_sparse_core` | tensor of 32-bit integer values |

Results:

| Result | Description |
| ------ | ----------- |
| `updated_embedding_table` | tensor of 32-bit float values |
| `updated_accumulator` | tensor of 32-bit float values |
| `updated_momenta` | tensor of 32-bit float values |

tf.XlaSparseDenseMatmulGradWithAdamAndCsrInput (TF::XlaSparseDenseMatmulGradWithAdamAndCsrInputOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `use_sum_inside_sqrt` | `::mlir::BoolAttr` | bool attribute |
| `beta1` | `::mlir::FloatAttr` | 32-bit float attribute |
| `beta2` | `::mlir::FloatAttr` | 32-bit float attribute |
| `epsilon` | `::mlir::FloatAttr` | 32-bit float attribute |
| `clip_weight_min` | `::mlir::FloatAttr` | 32-bit float attribute |
| `clip_weight_max` | `::mlir::FloatAttr` | 32-bit float attribute |
| `table_name` | `::mlir::StringAttr` | string attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `row_pointers` | tensor of 32-bit integer values |
| `sorted_sample_ids` | tensor of 32-bit integer values |
| `sorted_token_ids` | tensor of 32-bit integer values |
| `sorted_gains` | tensor of 32-bit float values |
| `activation_gradients` | tensor of 32-bit float values |
| `learning_rate` | tensor of 32-bit float values |
| `embedding_table` | tensor of 32-bit float values |
| `momenta` | tensor of 32-bit float values |
| `velocity` | tensor of 32-bit float values |
| `num_minibatches_per_physical_sparse_core` | tensor of 32-bit integer values |

Results:

| Result | Description |
| ------ | ----------- |
| `updated_embedding_table` | tensor of 32-bit float values |
| `updated_momenta` | tensor of 32-bit float values |
| `updated_velocity` | tensor of 32-bit float values |
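The `momenta`/`velocity` operand pair and the `beta1`/`beta2`/`epsilon` attributes correspond to the usual Adam moment estimates. A minimal Python sketch of one row's update under those assumptions (illustrative only; it omits the bias correction and the `use_sum_inside_sqrt` variant, and is not the SparseCore kernel):

```python
import math

def adam_row_update(row, m, v, grad, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """Illustrative per-row Adam step: update first and second moment
    estimates, then step the weights by lr * m / (sqrt(v) + eps)."""
    new_m = [beta1 * mi + (1 - beta1) * g for mi, g in zip(m, grad)]
    new_v = [beta2 * vi + (1 - beta2) * g * g for vi, g in zip(v, grad)]
    new_row = [w - lr * mi / (math.sqrt(vi) + eps)
               for w, mi, vi in zip(row, new_m, new_v)]
    return new_row, new_m, new_v
```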

tf.XlaSparseDenseMatmulGradWithFtrlAndCsrInput (TF::XlaSparseDenseMatmulGradWithFtrlAndCsrInputOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `multiply_linear_by_learning_rate` | `::mlir::BoolAttr` | bool attribute |
| `beta` | `::mlir::FloatAttr` | 32-bit float attribute |
| `learning_rate_power` | `::mlir::FloatAttr` | 32-bit float attribute |
| `l1_regularization_strength` | `::mlir::FloatAttr` | 32-bit float attribute |
| `l2_regularization_strength` | `::mlir::FloatAttr` | 32-bit float attribute |
| `clip_weight_min` | `::mlir::FloatAttr` | 32-bit float attribute |
| `clip_weight_max` | `::mlir::FloatAttr` | 32-bit float attribute |
| `table_name` | `::mlir::StringAttr` | string attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `row_pointers` | tensor of 32-bit integer values |
| `sorted_sample_ids` | tensor of 32-bit integer values |
| `sorted_token_ids` | tensor of 32-bit integer values |
| `sorted_gains` | tensor of 32-bit float values |
| `activation_gradients` | tensor of 32-bit float values |
| `learning_rate` | tensor of 32-bit float values |
| `embedding_table` | tensor of 32-bit float values |
| `accumulator` | tensor of 32-bit float values |
| `linear` | tensor of 32-bit float values |
| `num_minibatches_per_physical_sparse_core` | tensor of 32-bit integer values |

Results:

| Result | Description |
| ------ | ----------- |
| `updated_embedding_table` | tensor of 32-bit float values |
| `updated_accumulator` | tensor of 32-bit float values |
| `updated_linear` | tensor of 32-bit float values |

tf.XlaSparseDenseMatmulGradWithSgdAndCsrInput (TF::XlaSparseDenseMatmulGradWithSgdAndCsrInputOp)

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `clip_weight_min` | `::mlir::FloatAttr` | 32-bit float attribute |
| `clip_weight_max` | `::mlir::FloatAttr` | 32-bit float attribute |
| `table_name` | `::mlir::StringAttr` | string attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `row_pointers` | tensor of 32-bit integer values |
| `sorted_sample_ids` | tensor of 32-bit integer values |
| `sorted_token_ids` | tensor of 32-bit integer values |
| `sorted_gains` | tensor of 32-bit float values |
| `activation_gradients` | tensor of 32-bit float values |
| `learning_rate` | tensor of 32-bit float values |
| `embedding_table` | tensor of 32-bit float values |
| `num_minibatches_per_physical_sparse_core` | tensor of 32-bit integer values |

Results:

| Result | Description |
| ------ | ----------- |
| `updated_embedding_table` | tensor of 32-bit float values |
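For the SGD variant, the per-row math implied by the operands and the clipping attributes is just a clamped gradient step. A minimal Python sketch under those assumptions (illustrative name and defaults, not the SparseCore kernel):

```python
def sgd_row_update(row, grad, lr, clip_min=-1.0, clip_max=1.0):
    """Illustrative per-row SGD step with the clip_weight_min/clip_weight_max
    clamping suggested by the attributes above."""
    return [min(max(w - lr * g, clip_min), clip_max)
            for w, g in zip(row, grad)]
```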

tf.XlaSparseDenseMatmulWithCsrInput (TF::XlaSparseDenseMatmulWithCsrInputOp)

Performs a sparse-dense matrix multiplication over CSR-formatted sparse input, producing per-sample activations from the embedding table.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `input_size` | `::mlir::IntegerAttr` | 64-bit signless integer attribute whose minimum value is 0 |
| `quantization_config_low` | `::mlir::FloatAttr` | 32-bit float attribute |
| `quantization_config_high` | `::mlir::FloatAttr` | 32-bit float attribute |
| `quantization_config_num_buckets` | `::mlir::IntegerAttr` | 64-bit signless integer attribute |
| `table_name` | `::mlir::StringAttr` | string attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `row_pointers` | tensor of 32-bit integer values |
| `sorted_sample_ids` | tensor of 32-bit integer values |
| `sorted_token_ids` | tensor of 32-bit integer values |
| `sorted_gains` | tensor of 32-bit float values |
| `embedding_table` | tensor of 32-bit float values |
| `num_minibatches_per_physical_sparse_core` | tensor of 32-bit integer values |

Results:

| Result | Description |
| ------ | ----------- |
| `activations` | tensor of 32-bit float values |
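The operand names suggest a gather-and-weighted-sum: each (sample id, token id, gain) triple contributes `gain * embedding_table[token]` to that sample's activation row. A minimal Python sketch under that assumption (illustrative; it ignores the CSR `row_pointers` layout, quantization, and minibatching):

```python
def sparse_dense_matmul(sample_ids, token_ids, gains, table, num_samples):
    """Illustrative forward pass: accumulate gain-weighted embedding rows
    into each sample's activation row."""
    dim = len(table[0])
    activations = [[0.0] * dim for _ in range(num_samples)]
    for s, t, g in zip(sample_ids, token_ids, gains):
        for j in range(dim):
            activations[s][j] += g * table[t][j]
    return activations
```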

tf.XlaSpmdFullToShardShape (TF::XlaSpmdFullToShardShapeOp)

An op used by the XLA SPMD partitioner to switch from automatic partitioning to manual partitioning.

It annotates the input (full-shape, to be automatically partitioned) with the same sharding used by manual partitioning, and outputs a shard-shaped tensor to be consumed by later manually partitioned ops. If the shape is not evenly partitionable, the padding region is masked with zeros. The conversion can happen partially in subgroups by specifying the dim attribute, in which case only that dim is converted.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `manual_sharding` | `::mlir::StringAttr` | string attribute |
| `dim` | `::mlir::IntegerAttr` | 64-bit signless integer attribute |
| `unspecified_dims` | `::mlir::ArrayAttr` | 64-bit integer array attribute |
| `T` | `::mlir::Attribute` | derived attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `input` | tensor of tf.dtype values |

Results:

| Result | Description |
| ------ | ----------- |
| `output` | tensor of tf.dtype values |
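The zero-masked padding behavior described above can be sketched in plain Python for a single dimension (illustrative only; the real op operates on sharding annotations, not explicit splits):

```python
def full_to_shard(full, num_shards):
    """Illustrative full-to-shard conversion along one dimension: pad with
    zeros to an even multiple of num_shards, then split into equal shards."""
    shard_len = -(-len(full) // num_shards)  # ceiling division
    padded = full + [0] * (shard_len * num_shards - len(full))
    return [padded[i * shard_len:(i + 1) * shard_len]
            for i in range(num_shards)]
```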

tf.XlaSpmdShardToFullShape (TF::XlaSpmdShardToFullShapeOp)

An op used by the XLA SPMD partitioner to switch from manual partitioning to automatic partitioning.

It converts the shard-shaped, manually partitioned input into a full-shaped tensor to be partitioned automatically with the same sharding used by manual partitioning. The conversion can happen partially in subgroups by specifying the dim attribute, in which case only that dim is converted.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `manual_sharding` | `::mlir::StringAttr` | string attribute |
| `full_shape` | `::mlir::Attribute` | TensorFlow shape attribute |
| `dim` | `::mlir::IntegerAttr` | 64-bit signless integer attribute |
| `unspecified_dims` | `::mlir::ArrayAttr` | 64-bit integer array attribute |
| `T` | `::mlir::Attribute` | derived attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `input` | tensor of tf.dtype values |

Results:

| Result | Description |
| ------ | ----------- |
| `output` | tensor of tf.dtype values |

tf.XlaSvd (TF::XlaSvdOp)

Computes the singular value decomposition of a batch of matrices

(Note: only real inputs are supported).

Computes the singular values and singular vectors of the innermost M-by-N matrices in tensor such that tensor[...,:,:] = u[..., :, :] * Diag(s[..., :]) * Transpose(v[...,:,:]).

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `max_iter` | `::mlir::IntegerAttr` | 64-bit signless integer attribute |
| `epsilon` | `::mlir::FloatAttr` | 32-bit float attribute |
| `precision_config` | `::mlir::StringAttr` | string attribute |
| `T` | `::mlir::Attribute` | derived attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `a` | tensor of number values |

Results:

| Result | Description |
| ------ | ----------- |
| `s` | tensor of number values |
| `u` | tensor of number values |
| `v` | tensor of number values |

tf.XlaVariadicReduce (TF::XlaVariadicReduceOp)

Wraps the variadic XLA Reduce operator.

Semantics are documented at https://www.tensorflow.org/performance/xla/operation_semantics#variadic_reduce

This version is limited to operands of the same dtype. XlaVariadicReduceV2 is a version that supports heterogeneous operands.

Traits: AlwaysSpeculatableImplTrait, SameVariadicOperandSize

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `dimensions_to_reduce` | `::mlir::ArrayAttr` | 64-bit integer array attribute |
| `reducer` | `::mlir::SymbolRefAttr` | symbol reference attribute |
| `N` | `::mlir::Attribute` | derived attribute |
| `T` | `::mlir::Attribute` | derived attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `input` | variadic of tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values |
| `init_value` | variadic of tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values |

Results:

| Result | Description |
| ------ | ----------- |
| `output` | variadic of tensor of bfloat16 or bool or 128-bit complex or 64-bit complex or 16-bit float or 32-bit float or 64-bit float or 16-bit integer or 32-bit integer or 64-bit integer or 8-bit integer or 16-bit quantized integer or 32-bit quantized integer or 8-bit quantized integer or 16-bit quantized unsigned integer or 8-bit quantized unsigned integer or 16-bit unsigned integer or 32-bit unsigned integer or 64-bit unsigned integer or 8-bit unsigned integer values |

tf.XlaVariadicReduceV2 (TF::XlaVariadicReduceV2Op)

Wraps the variadic XLA Reduce operator.

Semantics are documented at https://www.tensorflow.org/performance/xla/operation_semantics#variadic_reduce

This is an expanded version of XlaVariadicReduce, with support for operands of different dtypes, and improved shape inference.

Traits: AlwaysSpeculatableImplTrait, AttrSizedOperandSegments

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `dimensions_to_reduce` | `::mlir::ArrayAttr` | 64-bit integer array attribute |
| `reducer` | `::mlir::SymbolRefAttr` | symbol reference attribute |
| `T` | `::mlir::Attribute` | derived attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `inputs` | variadic of tensor of tf.dtype values |
| `init_values` | variadic of tensor of tf.dtype values |

Results:

| Result | Description |
| ------ | ----------- |
| `outputs` | variadic of tensor of tf.dtype values |

tf.XlaVariadicSort (TF::XlaVariadicSortOp)

Wraps the XLA Sort operator, documented at https://www.tensorflow.org/performance/xla/operation_semantics#sort.

Sorts one or more tensors, with support for custom comparator, dimension, and is_stable attributes.

Traits: AlwaysSpeculatableImplTrait

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `comparator` | `::mlir::SymbolRefAttr` | symbol reference attribute |
| `is_stable` | `::mlir::BoolAttr` | bool attribute |
| `T` | `::mlir::Attribute` | derived attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `inputs` | variadic of tensor of tf.dtype values |
| `dimension` | tensor of 32-bit integer values |

Results:

| Result | Description |
| ------ | ----------- |
| `outputs` | variadic of tensor of tf.dtype values |
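The co-sorting semantics (one comparator sees one element from each input tensor per position, and all tensors are permuted by the same order) can be sketched in plain Python for 1-D inputs. This is an illustrative sketch, not the XLA implementation, and it fixes the sort dimension to the only axis:

```python
from functools import cmp_to_key

def variadic_sort(tensors, comparator, is_stable=True):
    """Illustrative co-sort of equally-sized 1-D tensors: the comparator
    receives one tuple of per-tensor elements for each of two positions and
    returns -1/0/1; every tensor is permuted by the same resulting order.
    Python's sort is always stable, so is_stable is honored trivially."""
    order = sorted(range(len(tensors[0])),
                   key=cmp_to_key(lambda i, j: comparator(
                       tuple(t[i] for t in tensors),
                       tuple(t[j] for t in tensors))))
    return [[t[i] for i in order] for t in tensors]
```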

tf.Xlog1py (TF::Xlog1pyOp)

Returns 0 if x == 0, and x * log1p(y) otherwise, elementwise.

Traits: AlwaysSpeculatableImplTrait, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `T` | `::mlir::Attribute` | derived attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `x` | tensor of floating-point or complex values |
| `y` | tensor of floating-point or complex values |

Results:

| Result | Description |
| ------ | ----------- |
| `z` | tensor of floating-point or complex values |

tf.Xlogy (TF::XlogyOp)

Returns 0 if x == 0, and x * log(y) otherwise, elementwise.

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape, TF_SameOperandsAndResultElementTypeResolveRef

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `T` | `::mlir::Attribute` | derived attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `x` | tensor of floating-point or complex values |
| `y` | tensor of floating-point or complex values |

Results:

| Result | Description |
| ------ | ----------- |
| `z` | tensor of floating-point or complex values |
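The guard in the Xlogy and Xlog1py summaries above matters because `x * log(y)` would otherwise produce NaN or Inf when `x == 0` and `y <= 0`. A scalar Python sketch of both semantics (illustrative, not the TF kernels, which apply this elementwise with broadcasting):

```python
import math

def xlogy(x, y):
    """Return 0 when x == 0 (never evaluating log(y)), else x * log(y)."""
    return 0.0 if x == 0 else x * math.log(y)

def xlog1py(x, y):
    """Same guard, using log1p(y) for accuracy when y is near zero."""
    return 0.0 if x == 0 else x * math.log1p(y)
```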

tf.Yield (TF::YieldOp)

Yield operation

The "yield" operation represents a return operation within the condition and body regions of structured control flow (e.g., if and while). The operation takes a variable number of operands and produces no results. The number and types of inputs must match the signature of the operation that contains the region.

Traits: AlwaysSpeculatableImplTrait, HasParent<CaseRegionOp, IfRegionOp, WhileRegionOp, GeneratorDatasetRegionOp>, ReturnLike, Terminator

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), RegionBranchTerminatorOpInterface

Effects: MemoryEffects::Effect{}

Operands:

| Operand | Description |
| ------- | ----------- |
| «unnamed» | variadic of any type |

tf.ZerosLike (TF::ZerosLikeOp)

Returns a tensor of zeros with the same shape and type as x.

Traits: AlwaysSpeculatableImplTrait, InferTensorType, TF::SameOperandsAndResultTypeResolveRef, TF_Idempotent

Interfaces: ConditionallySpeculatable, InferShapedTypeOpInterface, InferTypeOpInterface, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `T` | `::mlir::Attribute` | derived attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `x` | tensor of tf.dtype values |

Results:

| Result | Description |
| ------ | ----------- |
| `y` | tensor of tf.dtype values |

tf.Zeta (TF::ZetaOp)

Compute the Hurwitz zeta function \(\zeta(x, q)\).

The Hurwitz zeta function is defined as:

\(\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}\)

Traits: AlwaysSpeculatableImplTrait, ResultsBroadcastableShape

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)

Effects: MemoryEffects::Effect{}

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `T` | `::mlir::Attribute` | derived attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `x` | tensor of 32/64-bit float values |
| `q` | tensor of 32/64-bit float values |

Results:

| Result | Description |
| ------ | ----------- |
| `z` | tensor of 32/64-bit float values |
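The series defining \(\zeta(x, q)\) converges for x > 1, so a truncated partial sum illustrates the op's math directly. A scalar Python sketch (illustrative only; the TF kernel uses a far more accurate algorithm than naive summation):

```python
def hurwitz_zeta(x, q, terms=100000):
    """Truncated partial sum of zeta(x, q) = sum_{n>=0} (q + n)^(-x).
    The truncation error is roughly (q + terms)^(1 - x) / (x - 1)."""
    return sum((q + n) ** (-x) for n in range(terms))

# With q = 1, this reduces to the Riemann zeta function:
# hurwitz_zeta(2.0, 1.0) approaches pi^2 / 6.
```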