
TensorFlow Lite and TensorFlow operator compatibility

TensorFlow Lite supports a number of TensorFlow operations used in common inference models. As they are processed by the TensorFlow Lite Optimizing Converter, those operations may be elided or fused before the supported operations are mapped to their TensorFlow Lite counterparts.

Since the set of TensorFlow Lite operations is smaller than TensorFlow's, not every model is convertible. Even for supported operations, very specific usage patterns are sometimes expected, for performance reasons. We expect to expand the set of supported operations in future TensorFlow Lite releases. Additional operations can be included by using select TensorFlow ops, at the cost of binary size.
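
As a rough illustration of enabling select TensorFlow ops, the sketch below assumes the TF 2.x converter API and a hypothetical SavedModel path; the exact flags may differ for older TOCO-based workflows.

  import tensorflow as tf

  # Hypothetical SavedModel directory; replace with your own model.
  converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")

  # Allow fallback to select TensorFlow ops for operations that have no
  # TensorFlow Lite builtin, at the cost of a larger binary.
  converter.target_spec.supported_ops = [
      tf.lite.OpsSet.TFLITE_BUILTINS,  # built-in TensorFlow Lite ops
      tf.lite.OpsSet.SELECT_TF_OPS,    # selected TensorFlow ops
  ]

  tflite_model = converter.convert()
  with open("model.tflite", "wb") as f:
      f.write(tflite_model)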

The best way to understand how to build a TensorFlow model that can be used with TensorFlow Lite is to carefully consider how operations are converted and optimized, along with the limitations imposed by this process.

Supported types

Most TensorFlow Lite operations target both floating-point (float32) and quantized (uint8, int8) data, but many ops do not yet do so for other types like tf.float16 and strings.

Apart from using different versions of the operations, the other difference between floating-point and quantized models is the way they are converted. Quantized conversion requires dynamic range information for tensors. This requires "fake quantization" during model training, getting range information via a calibration data set, or doing "on-the-fly" range estimation. See quantization.
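
For the calibration-based path, a minimal sketch using post-training quantization is shown below. The model path and input shape are placeholders; the generator simply feeds representative inputs so the converter can measure per-tensor ranges.

  import numpy as np
  import tensorflow as tf

  converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")  # hypothetical path
  converter.optimizations = [tf.lite.Optimize.DEFAULT]

  # Calibration data set: yield a small number of representative inputs.
  def representative_dataset():
      for _ in range(100):
          # Assumed input shape (1, 224, 224, 3); adjust for your model.
          yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

  converter.representative_dataset = representative_dataset
  quantized_model = converter.convert()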

Data format and broadcasting

At the moment, TensorFlow Lite only supports TensorFlow's "NHWC" data format, and broadcasting is only supported in a limited number of operations (tf.add, tf.mul, tf.sub, and tf.div).
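
As a small example of the broadcasting that is supported, the sketch below adds a (3,)-shaped constant to an NHWC tensor with tf.add; broadcasting in ops outside the short list above may fail to convert.

  import tensorflow as tf

  # NHWC input (batch dimension implicit) plus a per-channel constant that is
  # broadcast over the height and width dimensions by tf.add.
  images = tf.keras.Input(shape=(224, 224, 3))
  bias = tf.constant([0.1, 0.2, 0.3])
  outputs = tf.add(images, bias)
  model = tf.keras.Model(images, outputs)

  tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()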

Compatible operations

The following TensorFlow operations are usually mapped to their TensorFlow Lite counterparts:

Straightforward conversions, constant-folding and fusing

A number of TensorFlow operations can be processed by TensorFlow Lite even though they have no direct equivalent. This is the case for operations that can simply be removed from the graph (tf.identity), replaced by tensors (tf.placeholder), or fused into more complex operations (tf.nn.bias_add). Even some supported operations may sometimes be removed through one of these processes.
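
To make the fusing concrete, here is a minimal sketch (assuming the Keras-based converter) of a matmul + bias + ReLU graph; after conversion this typically becomes a single FULLY_CONNECTED op with fused_activation_function=RELU, with the separate BiasAdd and Relu nodes gone.

  import tensorflow as tf

  # A Dense layer is matmul + bias_add + relu in the TensorFlow graph.
  inputs = tf.keras.Input(shape=(16,))
  outputs = tf.keras.layers.Dense(8, activation="relu")(inputs)
  model = tf.keras.Model(inputs, outputs)

  # The converter usually folds the bias add and activation into one
  # FULLY_CONNECTED op in the resulting TensorFlow Lite model.
  tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()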

Here is a non-exhaustive list of TensorFlow operations that are usually removed from the graph:

Unsupported operations

TensorFlow operations not listed above are likely to be unsupported. Notably, the following common operations are not supported at the moment:

  • tf.depth_to_space

TensorFlow Lite operations

The following TensorFlow Lite operations are fully supported and used in place of the TensorFlow operations listed above:

ABS

 Inputs {
  0: a tensor
}
Outputs {
  0: elementwise abs of the input
}
 

ADD

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: elementwise sum of the input tensors
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
}
 

ADD_N

 Inputs {
  0-N: any number of tensors (must have same size and shape)
}
Outputs {
  0: elementwise sum of the input tensors
}
 

ARG_MAX

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: A tensor of indices of maximum values.
}
 

ARG_MIN

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: A tensor of indices of minimum values.
}
 

AVERAGE_POOL_2D

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor where each entry is the mean of the input values in the
     corresponding window.
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  padding: SAME|VALID
  stride_w,stride_h: stride of the sliding window
  filter_width,filter_height: size of the sliding window
}
 

BATCH_TO_SPACE_ND

 Inputs {
  0: 3D-4D tensor
  1: 1D tensor
  2: 2D tensor
}
Outputs {
  0: tensor rearranged using block_shape. See tf.batch_to_space_nd for
     details.
}
 

CONCATENATION

 Inputs {
  0-N: any number of tensors
}
Outputs {
  0: concatenation of the input tensors along the given axis.
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  axis: dimension along which the concatenation is performed
}
 

CONV_2D

 Inputs {
  0: 4D tensor
  1: filter
  2: bias (optional)
}
Outputs {
  0: result of 2D convolution of the input tensor
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  padding: SAME|VALID
  stride_w,stride_h: stride of the filter window
}
 

TRANSPOSE_CONV

 Inputs {
  0: output_shape
  1: filter
  2: 4D tensor
}
Outputs {
  0: the transpose (gradient) of conv2d
}
Options {
  padding: SAME|VALID
  stride_w,stride_h: stride of the filter window
}
 

DEPTHWISE_CONV_2D

 Inputs {
  0: 4D tensor
  1: filter
  2: bias (optional)
}
Outputs {
  0: result of a depthwise-2D convolution of the input tensor
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  padding: SAME|VALID
  stride_w,stride_h: stride of the filter window
  depth_multiplier: relation between the last dimension of the input and output
    tensors
}
 

ELU

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to exp(features) - 1 if < 0, features otherwise.
}
 

EQUAL

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is
  equal to the corresponding element of the second tensor.
}
 

EXP

 Inputs {
  0: tensor
}
Outputs {
  0: result of computing element-wise exponential of the input tensor
}
 

FILL

 Inputs {
  0: a 1D tensor
  1: a 0D (scalar) tensor
}
Outputs {
  0: A tensor of shape `tensor 0` filled with the value in `tensor 1`.
}
 

FLOOR

 Inputs {
  0: tensor
}
Outputs: {
  0: result of computing element-wise floor of the input tensor
}
 

FLOOR_DIV

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: result of computing element-wise floor of `tensor 0` divided by `tensor 1`.
}
 

FLOOR_MOD

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: result of computing element-wise floor of `tensor 0` modulo `tensor 1`.
}
 

CEIL

 Inputs {
  0: a tensor
}
Outputs {
  0: result of computing element-wise ceil of the input tensor
}
 

FULLY_CONNECTED

 Inputs {
  0: 4D tensor
  1: filter
  2: bias (optional)
}
Outputs {
  0: output of a fully (densely) connected layer, which connects all
     elements in the input tensor with each element in this tensor.
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
}
 

GATHER

 Inputs {
  0: params tensor
  1: indices tensor
  2: axis tensor (optional)
}
Outputs {
  0: a tensor with same type as the params tensor.
}
 

GATHER_ND

 Inputs {
  0: params tensor
  1: indices tensor
}
Outputs {
  0: a tensor with same type as the params tensor.
}
 

GREATER

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is
  greater than the corresponding element of the second tensor.
}
 

GREATER_EQUAL

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is
  greater than or equal to the corresponding element of the second tensor.
}
 

L2_NORMALIZATION

 Inputs {
  0: input tensor
}
Outputs {
  0: normalized tensor (along the last dimension)
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
}
 

L2_POOL_2D

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to tf.sqrt(tf.nn.avg_pool(tf.square(input)))
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  padding: SAME|VALID
  stride_w,stride_h: stride of the sliding window
  filter_width,filter_height: size of the sliding window
}
 

LEAKY_RELU

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to max(input, input * alpha)
}
Options {
  alpha: slope of the activation at x < 0 (provided alpha <= 1)
}
 

LESS

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is less
  than the corresponding element of the second tensor.
}
 

LESS_EQUAL

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is less
  than or equal to the corresponding element of the second tensor.
}
 

LOCAL_RESPONSE_NORMALIZATION

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to tf.nn.local_response_normalization
}
Options {
  radius
  bias
  alpha
  beta
}
 

LOGICAL_OR

 Inputs {
  0: a list of tensors.
  1: a list of tensors.
}
Outputs {
  0: A tensor of logical_or output tensors.
}
 

LOGISTIC

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to 1 / (1 + exp(-input))
}
 

LOG

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to log(input)
}
 

LOG_SOFTMAX

 Inputs {
  0: tensor
}
Outputs {
  0: tensor equivalent to logits - log(reduce_sum(exp(logits), -1))
}
 

MAX_POOL_2D

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor where each entry is the maximum of the input values in the
     corresponding window.
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
  padding: SAME|VALID
  stride_w,stride_h: stride of the sliding window
  filter_width,filter_height: size of the sliding window
}
 

MUL

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: elementwise multiplication of the input tensors
}
Options {
  fused_activation_function:  NONE|RELU|RELU6
}
 

NEG

 Inputs {
  0: a tensor
}
Outputs {
  0: elementwise negation of the input tensor
}
 

NON_MAX_SUPPRESSION_V4

 Inputs {
  0: boxes in format [y1, x1, y2, x2]
  1: scores
  2: max number of detections
  3: IOU threshold
  4: score threshold
}
Outputs {
  0: selected indices
  1: number of selected indices
}
 

NON_MAX_SUPPRESSION_V5

 Inputs {
  0: boxes in format [y1, x1, y2, x2]
  1: scores
  2: max number of detections
  3: IOU threshold
  4: score threshold
  5: soft NMS sigma
}
Outputs {
  0: selected indices
  1: selected scores
  2: number of selected indices
}
 

PACK

 Inputs {
  0: a list of tensors.
  1: an integer.
}
Outputs {
  0: A tensor of stacked tensors.
}
 

PAD

 Inputs {
  0: tensor
  1: tensor
}
Outputs {
  0: tensor where additional values are added before and after the contents of
     each dimension
}
 

MEAN (tf.reduce_mean)

 Inputs {
  0: tensor
  1: tensor
}
Outputs {
  0: tensor containing the mean of the elements
}
Options {
  keep_dims: whether to retain reduced dimensions
}
 

NOT_EQUAL

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: a tensor of type bool, true whenever an element of the first tensor is not
  equal to the corresponding element of the second tensor.
}
 

POW

 Inputs {
  0: a tensor
  1: a tensor
}
Outputs {
  0: elementwise pow of the input tensors
}
 

RANGE

 Inputs {
  0: a 0D (scalar) tensor
  1: a 0D (scalar) tensor
  2: a 0D (scalar) tensor
}
Outputs {
  0: A 1D tensor of type `dtype` defined by a sequence where `tensor 0` is the
  start, `tensor 1` is the limit, and `tensor 2` is the delta.
}
Options {
  dtype
}
 

RANK

 Inputs {
  0: a tensor
}
Outputs {
  0: a 0-D int32 Tensor representing the rank of input
}
 

RELU

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to max(0, input)
}
 

RELU_N1_TO_1

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to max(-1, min(input, 1))
}
 

RELU6

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to max(0, min(input, 6))
}
 

RESHAPE

 Inputs {
  0: a tensor
  1: ignored
}
Outputs {
  0: a tensor with the same elements as the input but with the new shape
}
Options {
  new_shape
}
 

RESIZE_BILINEAR

 Inputs {
  0: a 4D tensor
  1: a 1D tensor with 2 elements
}
Outputs {
  0: A tensor of type `tensor 0` resized according to `tensor 1` height/width values
  using bilinear interpolation.
}
Options {
  align_corners
}
 

RESIZE_NEAREST_NEIGHBOR

 Inputs {
  0: a 4D tensor
  1: a 1D tensor with 2 elements
}
Outputs {
  0: A tensor of type `tensor 0` resized according to `tensor 1` height/width values
  using nearest neighbors interpolation.
}
Options {
  align_corners
}
 

RSQRT

 Inputs {
  0: a tensor
}
Outputs {
  0: result of computing element-wise reciprocal square root of the input tensor
}
 

REVERSE_SEQUENCE

 Inputs {
  0: a tensor
  1: a 1-D tensor which specifies the length of sequence to be reversed in each
  dim
}
Outputs {
  0: a tensor with the same shape as the input tensor
}
Options {
  seq_dim: a 0-D int tensor (scalar). The dimension which is partially
  reversed.
  batch_dim: a 0-D int tensor (scalar). Defaults to 0. The dimension along
  which reversal is performed.
}
 

SHAPE

 Inputs {
  0: a tensor
}
Outputs {
  0: a 1D tensor representing the shape of the input tensor
}
Options {
  out_type: the output type of the op (int32 or int64). Defaults to int32.
}
 

ROUND

 Inputs {
  0: a tensor
}
Outputs {
  0: result of computing element-wise round of the input tensor
}
 

SLICE

 Inputs {
  0: tensor
  1: 1D tensor
  2: 1D tensor
}
Outputs {
  0: slice of the input tensor of the given size from the given begin index.
}
 

SOFTMAX

 Inputs {
  0: a tensor
}
Outputs {
  0: a tensor equivalent to exp(input) / tf.reduce_sum(exp(input * beta), dim),
     where dim is always the last dimension of the input tensor.
}
Options {
  beta
}
 

SPACE_TO_DEPTH

 Inputs {
  0: a 4D tensor
}
Outputs {
  0: a tensor rearranged using block_size. See tf.space_to_depth for details.
}
Options {
  block_size
}
 

SPACE_TO_BATCH_ND

 Inputs {
  0: 3D-4D tensor
  1: 1D tensor
  2: 2D tensor
}
Outputs {
  0: a tensor rearranged using block_shape. See tf.space_to_batch_nd for
     details.
}
 

SPARSE_TO_DENSE

 Inputs {
  0: 0D or 1D or 2D tensor
  1: 1D tensor
  2: 0D or 1D tensor
  3: 0D tensor
  4: a boolean value
}
Outputs {
  0: Dense Tensor of shape output_shape. Has the same type as sparse_values.
}
 

SPLIT

 Inputs {
  0: 0D tensor (axis)
  1: tensor (input)
}
Outputs {
  0-N: subtensors built from the input tensors
}
Options {
  num_splits: Specifies number of outputs
}
 

SPLIT_V

 Inputs {
  0: tensor (input)
  1: 1-D tensor (size_splits)
  2: 0-D tensor (axis)
}
Outputs {
  0-N: subtensors built from the input tensors
}
Options {
  num_splits: Specifies number of outputs
}
 

SQRT

 Inputs {
  0: a tensor
}
Outputs {
  0: result of computing element-wise square root of the input tensor
}
 

SQUEEZE

 Inputs {
  0: tensor
}
Outputs {
  0: tensor without any dimensions of size 1
}
Options {
  squeeze_dims
}
 

STRIDED_SLICE

 Inputs {
  0: tensor
  1: 1D tensor
  2: 1D tensor
  3: 1D tensor
}
Outputs {
  0: slice of the input tensor of the given size
}
Options {
  begin_mask: mask for begin indices
  end_mask: mask for end indices
  shrink_axis_mask: mask that indicates which dimensions to remove
}
 

TANH

 Inputs {
  0: a tensor
}
Outputs {
  0: result of computing element-wise hyperbolic tangent of the input tensor
}
 

TOP_K

 Inputs {
  0: tensor
  1: 0D tensor
}
Outputs {
  0: k largest element along each last dimensional slice
  1: indices of values within the last dimension of the input tensor
}
 

TRANSPOSE

 Inputs {
  0: tensor
  1: tensor
}
Outputs {
  0: tensor permuted according to perm
}
 

SELECT

 Inputs {
  0: tensor
  1: tensor
  2: tensor
}
Outputs {
  0: tensor that contains the elementwise values of 'tensor 1' if the
  corresponding value of 'tensor 0' is true or the value of 'tensor 2' if false.
}
 

UNPACK

 Inputs {
  0: a tensor.
  1: an integer.
  2: an integer.
}
Outputs {
  0-N: tensors of unpacked tensor.
}
 

WHERE

 Inputs {
  0: A tensor of type bool.
  1: A tensor which may have the same shape as condition. If condition is rank
     1, x may have higher rank, but its first dimension must match the size of
     condition.
  2: A tensor with the same shape and type as x.
}
Outputs {
  0: A tensor with the same type and shape as x, y if they are non-None, or
     a tensor with shape (num_true, dim_size(condition)).
}
 

ZEROS_LIKE

 Inputs {
  0: a tensor
}
Outputs {
  0: A tensor of the same shape and type as x but filled with zeros
}
 

FILL

 Inputs {
  0: A Tensor. Must be one of the following types: int32, int64. 1-D. Represents the shape of the output tensor.
  1: A Tensor. 0-D (scalar). Value to fill the returned tensor.
}
Outputs {
  0: A tensor of the same type as value (input1).
}
 

The following TensorFlow Lite operations are present, but not ready for custom models:

  • CALL
  • CONCAT_EMBEDDINGS
  • CUSTOM
  • EMBEDDING_LOOKUP_SPARSE
  • HASHTABLE_LOOKUP
  • LSH_PROJECTION
  • SKIP_GRAM
  • SVDF