The TensorFlow Lite Model Analyzer API helps you analyze models in TensorFlow Lite format by listing a model's structure.
Model Analyzer API
The following API is available for the TensorFlow Lite Model Analyzer.
tf.lite.experimental.Analyzer.analyze(model_path=None,
                                      model_content=None,
                                      gpu_compatibility=False)
You can find the API details at https://tensorflow.google.cn/api_docs/python/tf/lite/experimental/Analyzer or by running help(tf.lite.experimental.Analyzer.analyze) from a Python terminal.
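The call prints its report to standard output. A minimal sketch of the two ways to invoke it; the file name model.tflite is only an illustrative placeholder, and fb_model stands for the bytes returned by a TFLiteConverter, as in the examples below.

# Analyze a model that is already serialized to disk (hypothetical path).
tf.lite.experimental.Analyzer.analyze(model_path='model.tflite')

# Or analyze in-memory flatbuffer bytes, optionally checking GPU delegate compatibility.
tf.lite.experimental.Analyzer.analyze(model_content=fb_model, gpu_compatibility=True)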
Basic usage with a simple Keras model
The following code shows the basic usage of Model Analyzer. It displays the contents of the converted Keras model as TFLite model content, formatted as a flatbuffer object.
import tensorflow as tf

# Build a small Keras classifier, convert it to TFLite, and analyze the converted flatbuffer.
model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(128, 128)),
  tf.keras.layers.Dense(256, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10)
])

fb_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

tf.lite.experimental.Analyzer.analyze(model_content=fb_model)
2022-08-11 19:22:44.889506: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-08-11 19:22:45.711168: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvrtc.so.11.1: cannot open shared object file: No such file or directory
2022-08-11 19:22:45.711456: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvrtc.so.11.1: cannot open shared object file: No such file or directory
2022-08-11 19:22:45.711470: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmp50gum7h1/assets
=== TFLite ModelAnalyzer ===

Your TFLite model has '1' subgraph(s). In the subgraph description below,
T# represents the Tensor numbers. For example, in Subgraph#0, the RESHAPE op takes
tensor #0 and tensor #1 as input and produces tensor #4 as output.

Subgraph#0 main(T#0) -> [T#6]
  Op#0 RESHAPE(T#0, T#1[-1, 16384]) -> [T#4]
  Op#1 FULLY_CONNECTED(T#4, T#2[], T#-1) -> [T#5]
  Op#2 FULLY_CONNECTED(T#5, T#3[], T#-1) -> [T#6]

Tensors of Subgraph#0
  T#0(serving_default_flatten_input:0) shape_signature:[-1, 128, 128], type:FLOAT32
  T#1(sequential/flatten/Const) shape:[2], type:INT32 RO 8 bytes, data:[-1, 16384]
  T#2(sequential/dense/MatMul1) shape:[256, 16384], type:FLOAT32 RO 16777216 bytes, data:[]
  T#3(sequential/dense_1/MatMul) shape:[10, 256], type:FLOAT32 RO 10240 bytes, data:[]
  T#4(sequential/flatten/Reshape) shape_signature:[-1, 16384], type:FLOAT32
  T#5(sequential/dense/MatMul;sequential/dense/Relu;sequential/dense/BiasAdd) shape_signature:[-1, 256], type:FLOAT32
  T#6(StatefulPartitionedCall:0) shape_signature:[-1, 10], type:FLOAT32

---------------------------------------------------------------
Your TFLite model has '1' signature_def(s).

Signature#0 key: 'serving_default'
- Subgraph: Subgraph#0
- Inputs:
    'flatten_input' : T#0
- Outputs:
    'dense_1' : T#6

---------------------------------------------------------------
              Model size:   16789044 bytes
    Non-data buffer size:       1476 bytes (00.01 %)
  Total data buffer size:   16787568 bytes (99.99 %)
    (Zero value buffers):          0 bytes (00.00 %)

* Buffers of TFLite model are mostly used for constant tensors.
  And zero value buffers are buffers filled with zeros.
  Non-data buffers area are used to store operators, subgraphs and etc.
  You can find more details from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs
2022-08-11 19:22:51.163111: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.
2022-08-11 19:22:51.163156: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
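The converter returns the serialized flatbuffer as bytes, so you can also write it to disk and point the analyzer at the file instead of at in-memory content. A small sketch; the file name simple_model.tflite is just an example:

import pathlib

# Persist the converted flatbuffer and analyze it again via model_path.
pathlib.Path('simple_model.tflite').write_bytes(fb_model)
tf.lite.experimental.Analyzer.analyze(model_path='simple_model.tflite')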
Basic usage with a MobileNetV3Large Keras model
This API works with large models such as MobileNetV3Large. Since the output is large, you may want to browse it with your favorite text editor; a sketch after the output below shows one way to capture the report into a file.
model = tf.keras.applications.MobileNetV3Large()
fb_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
tf.lite.experimental.Analyzer.analyze(model_content=fb_model)
WARNING:tensorflow:`input_shape` is undefined or non-square, or `rows` is not 224. Weights for input shape (224, 224) will be loaded as the default. WARNING:tensorflow:`input_shape` is undefined or non-square, or `rows` is not 224. Weights for input shape (224, 224) will be loaded as the default. Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/mobilenet_v3/weights_mobilenet_v3_large_224_1.0_float.h5 22661472/22661472 [==============================] - 0s 0us/step WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op while saving (showing 5 of 64). These functions will not be directly callable after loading. INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpmq1hheb9/assets INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpmq1hheb9/assets 2022-08-11 19:23:27.887684: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format. 2022-08-11 19:23:27.887733: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency. === TFLite ModelAnalyzer === Your TFLite model has '1' subgraph(s). In the subgraph description below, T# represents the Tensor numbers. For example, in Subgraph#0, the MUL op takes tensor #0 and tensor #133 as input and produces tensor #136 as output. Subgraph#0 main(T#0) -> [T#263] Op#0 MUL(T#0, T#133[]) -> [T#136] Op#1 ADD(T#136, T#134[]) -> [T#137] Op#2 CONV_2D(T#137, T#80[], T#37[]) -> [T#138] Op#3 HARD_SWISH(T#138) -> [T#139] Op#4 DEPTHWISE_CONV_2D(T#139, T#38[], T#1[]) -> [T#140] Op#5 CONV_2D(T#140, T#81[], T#39[]) -> [T#141] Op#6 ADD(T#139, T#141) -> [T#142] Op#7 CONV_2D(T#142, T#82[], T#2[]) -> [T#143] Op#8 PAD(T#143, T#129[0, 0, 0, 1, 0, ...]) -> [T#144] Op#9 DEPTHWISE_CONV_2D(T#144, T#40[], T#3[]) -> [T#145] Op#10 CONV_2D(T#145, T#83[], T#41[]) -> [T#146] Op#11 CONV_2D(T#146, T#84[], T#4[]) -> [T#147] Op#12 DEPTHWISE_CONV_2D(T#147, T#42[], T#5[]) -> [T#148] Op#13 CONV_2D(T#148, T#85[], T#43[]) -> [T#149] Op#14 ADD(T#146, T#149) -> [T#150] Op#15 CONV_2D(T#150, T#86[], T#6[]) -> [T#151] Op#16 PAD(T#151, T#131[0, 0, 1, 2, 1, ...]) -> [T#152] Op#17 DEPTHWISE_CONV_2D(T#152, T#44[], T#7[]) -> [T#153] Op#18 MEAN(T#153, T#130[1, 2]) -> [T#154] Op#19 CONV_2D(T#154, T#87[], T#8[]) -> [T#155] Op#20 CONV_2D(T#155, T#88[], T#9[]) -> [T#156] Op#21 MUL(T#156, T#135[]) -> [T#157] Op#22 MUL(T#153, T#157) -> [T#158] Op#23 CONV_2D(T#158, T#89[], T#45[]) -> [T#159] Op#24 CONV_2D(T#159, T#90[], T#10[]) -> [T#160] Op#25 DEPTHWISE_CONV_2D(T#160, T#46[], T#11[]) -> [T#161] Op#26 MEAN(T#161, T#130[1, 2]) -> [T#162] Op#27 CONV_2D(T#162, T#91[], T#12[]) -> [T#163] Op#28 CONV_2D(T#163, T#92[], T#13[]) -> [T#164] Op#29 MUL(T#164, T#135[]) -> [T#165] Op#30 MUL(T#161, T#165) -> [T#166] Op#31 CONV_2D(T#166, T#93[], T#47[]) -> [T#167] Op#32 ADD(T#159, T#167) -> [T#168] Op#33 CONV_2D(T#168, T#94[], T#14[]) -> [T#169] Op#34 DEPTHWISE_CONV_2D(T#169, T#48[], T#15[]) -> [T#170] Op#35 MEAN(T#170, T#130[1, 2]) -> [T#171] Op#36 CONV_2D(T#171, T#95[], T#16[]) -> [T#172] Op#37 CONV_2D(T#172, T#96[], T#17[]) -> [T#173] Op#38 MUL(T#173, T#135[]) -> [T#174] Op#39 MUL(T#170, T#174) -> [T#175] Op#40 CONV_2D(T#175, T#97[], T#49[]) -> [T#176] Op#41 ADD(T#168, T#176) -> [T#177] Op#42 CONV_2D(T#177, T#98[], T#50[]) -> [T#178] Op#43 HARD_SWISH(T#178) -> [T#179] Op#44 PAD(T#179, T#129[0, 0, 0, 1, 0, ...]) -> [T#180] Op#45 DEPTHWISE_CONV_2D(T#180, T#51[], T#18[]) -> [T#181] 
Op#46 HARD_SWISH(T#181) -> [T#182] Op#47 CONV_2D(T#182, T#99[], T#52[]) -> [T#183] Op#48 CONV_2D(T#183, T#100[], T#53[]) -> [T#184] Op#49 HARD_SWISH(T#184) -> [T#185] Op#50 DEPTHWISE_CONV_2D(T#185, T#54[], T#19[]) -> [T#186] Op#51 HARD_SWISH(T#186) -> [T#187] Op#52 CONV_2D(T#187, T#101[], T#55[]) -> [T#188] Op#53 ADD(T#183, T#188) -> [T#189] Op#54 CONV_2D(T#189, T#102[], T#56[]) -> [T#190] Op#55 HARD_SWISH(T#190) -> [T#191] Op#56 DEPTHWISE_CONV_2D(T#191, T#57[], T#20[]) -> [T#192] Op#57 HARD_SWISH(T#192) -> [T#193] Op#58 CONV_2D(T#193, T#103[], T#58[]) -> [T#194] Op#59 ADD(T#189, T#194) -> [T#195] Op#60 CONV_2D(T#195, T#104[], T#59[]) -> [T#196] Op#61 HARD_SWISH(T#196) -> [T#197] Op#62 DEPTHWISE_CONV_2D(T#197, T#60[], T#21[]) -> [T#198] Op#63 HARD_SWISH(T#198) -> [T#199] Op#64 CONV_2D(T#199, T#105[], T#61[]) -> [T#200] Op#65 ADD(T#195, T#200) -> [T#201] Op#66 CONV_2D(T#201, T#106[], T#62[]) -> [T#202] Op#67 HARD_SWISH(T#202) -> [T#203] Op#68 DEPTHWISE_CONV_2D(T#203, T#63[], T#22[]) -> [T#204] Op#69 HARD_SWISH(T#204) -> [T#205] Op#70 MEAN(T#205, T#130[1, 2]) -> [T#206] Op#71 CONV_2D(T#206, T#107[], T#23[]) -> [T#207] Op#72 CONV_2D(T#207, T#108[], T#24[]) -> [T#208] Op#73 MUL(T#208, T#135[]) -> [T#209] Op#74 MUL(T#205, T#209) -> [T#210] Op#75 CONV_2D(T#210, T#109[], T#64[]) -> [T#211] Op#76 CONV_2D(T#211, T#110[], T#65[]) -> [T#212] Op#77 HARD_SWISH(T#212) -> [T#213] Op#78 DEPTHWISE_CONV_2D(T#213, T#66[], T#25[]) -> [T#214] Op#79 HARD_SWISH(T#214) -> [T#215] Op#80 MEAN(T#215, T#130[1, 2]) -> [T#216] Op#81 CONV_2D(T#216, T#111[], T#26[]) -> [T#217] Op#82 CONV_2D(T#217, T#112[], T#27[]) -> [T#218] Op#83 MUL(T#218, T#135[]) -> [T#219] Op#84 MUL(T#215, T#219) -> [T#220] Op#85 CONV_2D(T#220, T#113[], T#67[]) -> [T#221] Op#86 ADD(T#211, T#221) -> [T#222] Op#87 CONV_2D(T#222, T#114[], T#68[]) -> [T#223] Op#88 HARD_SWISH(T#223) -> [T#224] Op#89 PAD(T#224, T#131[0, 0, 1, 2, 1, ...]) -> [T#225] Op#90 DEPTHWISE_CONV_2D(T#225, T#69[], T#28[]) -> [T#226] Op#91 HARD_SWISH(T#226) -> [T#227] Op#92 MEAN(T#227, T#130[1, 2]) -> [T#228] Op#93 CONV_2D(T#228, T#115[], T#29[]) -> [T#229] Op#94 CONV_2D(T#229, T#116[], T#30[]) -> [T#230] Op#95 MUL(T#230, T#135[]) -> [T#231] Op#96 MUL(T#227, T#231) -> [T#232] Op#97 CONV_2D(T#232, T#117[], T#70[]) -> [T#233] Op#98 CONV_2D(T#233, T#118[], T#71[]) -> [T#234] Op#99 HARD_SWISH(T#234) -> [T#235] Op#100 DEPTHWISE_CONV_2D(T#235, T#72[], T#31[]) -> [T#236] Op#101 HARD_SWISH(T#236) -> [T#237] Op#102 MEAN(T#237, T#130[1, 2]) -> [T#238] Op#103 CONV_2D(T#238, T#119[], T#32[]) -> [T#239] Op#104 CONV_2D(T#239, T#120[], T#33[]) -> [T#240] Op#105 MUL(T#240, T#135[]) -> [T#241] Op#106 MUL(T#237, T#241) -> [T#242] Op#107 CONV_2D(T#242, T#121[], T#73[]) -> [T#243] Op#108 ADD(T#233, T#243) -> [T#244] Op#109 CONV_2D(T#244, T#122[], T#74[]) -> [T#245] Op#110 HARD_SWISH(T#245) -> [T#246] Op#111 DEPTHWISE_CONV_2D(T#246, T#75[], T#34[]) -> [T#247] Op#112 HARD_SWISH(T#247) -> [T#248] Op#113 MEAN(T#248, T#130[1, 2]) -> [T#249] Op#114 CONV_2D(T#249, T#123[], T#35[]) -> [T#250] Op#115 CONV_2D(T#250, T#124[], T#36[]) -> [T#251] Op#116 MUL(T#251, T#135[]) -> [T#252] Op#117 MUL(T#248, T#252) -> [T#253] Op#118 CONV_2D(T#253, T#125[], T#76[]) -> [T#254] Op#119 ADD(T#244, T#254) -> [T#255] Op#120 CONV_2D(T#255, T#126[], T#77[]) -> [T#256] Op#121 HARD_SWISH(T#256) -> [T#257] Op#122 MEAN(T#257, T#130[1, 2]) -> [T#258] Op#123 CONV_2D(T#258, T#127[], T#78[]) -> [T#259] Op#124 HARD_SWISH(T#259) -> [T#260] Op#125 CONV_2D(T#260, T#128[], T#79[]) -> [T#261] Op#126 RESHAPE(T#261, T#132[-1, 1000]) -> [T#262] 
Op#127 SOFTMAX(T#262) -> [T#263] Tensors of Subgraph#0 T#0(serving_default_input_1:0) shape_signature:[-1, -1, -1, 3], type:FLOAT32 T#1(MobilenetV3large/expanded_conv/depthwise/BatchNorm/FusedBatchNormV3) shape:[16], type:FLOAT32 RO 64 bytes, data:[] T#2(MobilenetV3large/expanded_conv_1/expand/BatchNorm/FusedBatchNormV3) shape:[64], type:FLOAT32 RO 256 bytes, data:[] T#3(MobilenetV3large/expanded_conv_1/depthwise/BatchNorm/FusedBatchNormV3) shape:[64], type:FLOAT32 RO 256 bytes, data:[] T#4(MobilenetV3large/expanded_conv_2/expand/BatchNorm/FusedBatchNormV3) shape:[72], type:FLOAT32 RO 288 bytes, data:[] T#5(MobilenetV3large/expanded_conv_2/depthwise/BatchNorm/FusedBatchNormV3) shape:[72], type:FLOAT32 RO 288 bytes, data:[] T#6(MobilenetV3large/expanded_conv_3/expand/BatchNorm/FusedBatchNormV3) shape:[72], type:FLOAT32 RO 288 bytes, data:[] T#7(MobilenetV3large/expanded_conv_3/depthwise/BatchNorm/FusedBatchNormV3) shape:[72], type:FLOAT32 RO 288 bytes, data:[] T#8(MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[24], type:FLOAT32 RO 96 bytes, data:[] T#9(MobilenetV3large/re_lu_8/Relu6;MobilenetV3large/tf.__operators__.add_1/AddV2;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[72], type:FLOAT32 RO 288 bytes, data:[] T#10(MobilenetV3large/expanded_conv_4/expand/BatchNorm/FusedBatchNormV3) shape:[120], type:FLOAT32 RO 480 bytes, data:[] T#11(MobilenetV3large/expanded_conv_4/depthwise/BatchNorm/FusedBatchNormV3) shape:[120], type:FLOAT32 RO 480 bytes, data:[] T#12(MobilenetV3large/expanded_conv_4/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[32], type:FLOAT32 RO 128 bytes, data:[] T#13(MobilenetV3large/re_lu_11/Relu6;MobilenetV3large/tf.__operators__.add_2/AddV2;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[120], type:FLOAT32 RO 480 bytes, data:[] T#14(MobilenetV3large/expanded_conv_5/expand/BatchNorm/FusedBatchNormV3) shape:[120], type:FLOAT32 RO 480 bytes, data:[] T#15(MobilenetV3large/expanded_conv_5/depthwise/BatchNorm/FusedBatchNormV3) shape:[120], type:FLOAT32 RO 480 bytes, data:[] T#16(MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[32], type:FLOAT32 RO 128 bytes, data:[] T#17(MobilenetV3large/re_lu_14/Relu6;MobilenetV3large/tf.__operators__.add_3/AddV2;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[120], type:FLOAT32 RO 480 bytes, data:[] T#18(MobilenetV3large/expanded_conv_6/depthwise/BatchNorm/FusedBatchNormV3) shape:[240], type:FLOAT32 RO 960 bytes, data:[] T#19(MobilenetV3large/expanded_conv_7/depthwise/BatchNorm/FusedBatchNormV3) shape:[200], type:FLOAT32 RO 800 bytes, data:[] T#20(MobilenetV3large/expanded_conv_8/depthwise/BatchNorm/FusedBatchNormV3) shape:[184], type:FLOAT32 RO 736 bytes, data:[] T#21(MobilenetV3large/expanded_conv_9/depthwise/BatchNorm/FusedBatchNormV3) shape:[184], 
type:FLOAT32 RO 736 bytes, data:[] T#22(MobilenetV3large/expanded_conv_10/depthwise/BatchNorm/FusedBatchNormV3) shape:[480], type:FLOAT32 RO 1920 bytes, data:[] T#23(MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[120], type:FLOAT32 RO 480 bytes, data:[] T#24(MobilenetV3large/re_lu_25/Relu6;MobilenetV3large/tf.__operators__.add_14/AddV2;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[480], type:FLOAT32 RO 1920 bytes, data:[] T#25(MobilenetV3large/expanded_conv_11/depthwise/BatchNorm/FusedBatchNormV3) shape:[672], type:FLOAT32 RO 2688 bytes, data:[] T#26(MobilenetV3large/expanded_conv_11/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[168], type:FLOAT32 RO 672 bytes, data:[] T#27(MobilenetV3large/re_lu_28/Relu6;MobilenetV3large/tf.__operators__.add_17/AddV2;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[672], type:FLOAT32 RO 2688 bytes, data:[] T#28(MobilenetV3large/expanded_conv_12/depthwise/BatchNorm/FusedBatchNormV3) shape:[672], type:FLOAT32 RO 2688 bytes, data:[] T#29(MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[168], type:FLOAT32 RO 672 bytes, data:[] T#30(MobilenetV3large/re_lu_31/Relu6;MobilenetV3large/tf.__operators__.add_20/AddV2;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[672], type:FLOAT32 RO 2688 bytes, data:[] T#31(MobilenetV3large/expanded_conv_13/depthwise/BatchNorm/FusedBatchNormV3) shape:[960], type:FLOAT32 RO 3840 bytes, data:[] T#32(MobilenetV3large/expanded_conv_13/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[240], type:FLOAT32 RO 960 bytes, data:[] T#33(MobilenetV3large/re_lu_34/Relu6;MobilenetV3large/tf.__operators__.add_23/AddV2;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[960], type:FLOAT32 RO 3840 bytes, data:[] T#34(MobilenetV3large/expanded_conv_14/depthwise/BatchNorm/FusedBatchNormV3) shape:[960], type:FLOAT32 RO 3840 bytes, data:[] T#35(MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[240], type:FLOAT32 RO 960 bytes, data:[] T#36(MobilenetV3large/re_lu_37/Relu6;MobilenetV3large/tf.__operators__.add_26/AddV2;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[960], type:FLOAT32 RO 3840 bytes, data:[] T#37(MobilenetV3large/Conv/BatchNorm/FusedBatchNormV3) shape:[16], type:FLOAT32 RO 64 bytes, data:[] 
T#38(MobilenetV3large/expanded_conv/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv/depthwise/depthwise;MobilenetV3large/expanded_conv/project/Conv2D) shape:[1, 3, 3, 16], type:FLOAT32 RO 576 bytes, data:[] T#39(MobilenetV3large/expanded_conv/project/BatchNorm/FusedBatchNormV3) shape:[16], type:FLOAT32 RO 64 bytes, data:[] T#40(MobilenetV3large/expanded_conv_1/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_1/depthwise/depthwise) shape:[1, 3, 3, 64], type:FLOAT32 RO 2304 bytes, data:[] T#41(MobilenetV3large/expanded_conv_1/project/BatchNorm/FusedBatchNormV3) shape:[24], type:FLOAT32 RO 96 bytes, data:[] T#42(MobilenetV3large/expanded_conv_2/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_2/depthwise/depthwise;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D) shape:[1, 3, 3, 72], type:FLOAT32 RO 2592 bytes, data:[] T#43(MobilenetV3large/expanded_conv_2/project/BatchNorm/FusedBatchNormV3) shape:[24], type:FLOAT32 RO 96 bytes, data:[] T#44(MobilenetV3large/expanded_conv_3/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/depthwise/depthwise;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D) shape:[1, 5, 5, 72], type:FLOAT32 RO 7200 bytes, data:[] T#45(MobilenetV3large/expanded_conv_3/project/BatchNorm/FusedBatchNormV3) shape:[40], type:FLOAT32 RO 160 bytes, data:[] T#46(MobilenetV3large/expanded_conv_4/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_4/depthwise/depthwise;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D) shape:[1, 5, 5, 120], type:FLOAT32 RO 12000 bytes, data:[] T#47(MobilenetV3large/expanded_conv_4/project/BatchNorm/FusedBatchNormV3) shape:[40], type:FLOAT32 RO 160 bytes, data:[] T#48(MobilenetV3large/expanded_conv_5/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_5/depthwise/depthwise;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D) shape:[1, 5, 5, 120], type:FLOAT32 RO 12000 bytes, data:[] T#49(MobilenetV3large/expanded_conv_5/project/BatchNorm/FusedBatchNormV3) shape:[40], type:FLOAT32 RO 160 bytes, data:[] T#50(MobilenetV3large/expanded_conv_6/expand/BatchNorm/FusedBatchNormV3) shape:[240], type:FLOAT32 RO 960 bytes, data:[] T#51(MobilenetV3large/expanded_conv_6/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_6/depthwise/depthwise;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D) shape:[1, 3, 3, 240], type:FLOAT32 RO 8640 bytes, data:[] T#52(MobilenetV3large/expanded_conv_6/project/BatchNorm/FusedBatchNormV3) shape:[80], type:FLOAT32 RO 320 bytes, data:[] T#53(MobilenetV3large/expanded_conv_7/expand/BatchNorm/FusedBatchNormV3) shape:[200], type:FLOAT32 RO 800 bytes, data:[] T#54(MobilenetV3large/expanded_conv_7/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_7/depthwise/depthwise) shape:[1, 3, 3, 200], type:FLOAT32 RO 7200 bytes, data:[] T#55(MobilenetV3large/expanded_conv_7/project/BatchNorm/FusedBatchNormV3) shape:[80], type:FLOAT32 RO 320 bytes, data:[] T#56(MobilenetV3large/expanded_conv_8/expand/BatchNorm/FusedBatchNormV3) shape:[184], type:FLOAT32 RO 736 bytes, data:[] T#57(MobilenetV3large/expanded_conv_8/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_8/depthwise/depthwise;MobilenetV3large/expanded_conv_9/depthwise/depthwise) shape:[1, 3, 3, 184], type:FLOAT32 RO 6624 bytes, data:[] T#58(MobilenetV3large/expanded_conv_8/project/BatchNorm/FusedBatchNormV3) shape:[80], type:FLOAT32 RO 320 bytes, data:[] 
T#59(MobilenetV3large/expanded_conv_9/expand/BatchNorm/FusedBatchNormV3) shape:[184], type:FLOAT32 RO 736 bytes, data:[] T#60(MobilenetV3large/expanded_conv_9/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/depthwise/depthwise) shape:[1, 3, 3, 184], type:FLOAT32 RO 6624 bytes, data:[] T#61(MobilenetV3large/expanded_conv_9/project/BatchNorm/FusedBatchNormV3) shape:[80], type:FLOAT32 RO 320 bytes, data:[] T#62(MobilenetV3large/expanded_conv_10/expand/BatchNorm/FusedBatchNormV3) shape:[480], type:FLOAT32 RO 1920 bytes, data:[] T#63(MobilenetV3large/expanded_conv_10/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/depthwise/depthwise;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D) shape:[1, 3, 3, 480], type:FLOAT32 RO 17280 bytes, data:[] T#64(MobilenetV3large/expanded_conv_10/project/BatchNorm/FusedBatchNormV3) shape:[112], type:FLOAT32 RO 448 bytes, data:[] T#65(MobilenetV3large/expanded_conv_11/expand/BatchNorm/FusedBatchNormV3) shape:[672], type:FLOAT32 RO 2688 bytes, data:[] T#66(MobilenetV3large/expanded_conv_11/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_11/depthwise/depthwise;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D) shape:[1, 3, 3, 672], type:FLOAT32 RO 24192 bytes, data:[] T#67(MobilenetV3large/expanded_conv_11/project/BatchNorm/FusedBatchNormV3) shape:[112], type:FLOAT32 RO 448 bytes, data:[] T#68(MobilenetV3large/expanded_conv_12/expand/BatchNorm/FusedBatchNormV3) shape:[672], type:FLOAT32 RO 2688 bytes, data:[] T#69(MobilenetV3large/expanded_conv_12/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_12/depthwise/depthwise;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D) shape:[1, 5, 5, 672], type:FLOAT32 RO 67200 bytes, data:[] T#70(MobilenetV3large/expanded_conv_12/project/BatchNorm/FusedBatchNormV3) shape:[160], type:FLOAT32 RO 640 bytes, data:[] T#71(MobilenetV3large/expanded_conv_13/expand/BatchNorm/FusedBatchNormV3) shape:[960], type:FLOAT32 RO 3840 bytes, data:[] T#72(MobilenetV3large/expanded_conv_13/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_13/depthwise/depthwise;MobilenetV3large/Conv_1/Conv2D) shape:[1, 5, 5, 960], type:FLOAT32 RO 96000 bytes, data:[] T#73(MobilenetV3large/expanded_conv_13/project/BatchNorm/FusedBatchNormV3) shape:[160], type:FLOAT32 RO 640 bytes, data:[] T#74(MobilenetV3large/expanded_conv_14/expand/BatchNorm/FusedBatchNormV3) shape:[960], type:FLOAT32 RO 3840 bytes, data:[] T#75(MobilenetV3large/expanded_conv_14/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/depthwise/depthwise;MobilenetV3large/Conv_1/Conv2D) shape:[1, 5, 5, 960], type:FLOAT32 RO 96000 bytes, data:[] T#76(MobilenetV3large/expanded_conv_14/project/BatchNorm/FusedBatchNormV3) shape:[160], type:FLOAT32 RO 640 bytes, data:[] T#77(MobilenetV3large/Conv_1/BatchNorm/FusedBatchNormV3) shape:[960], type:FLOAT32 RO 3840 bytes, data:[] T#78(MobilenetV3large/Conv_2/BiasAdd/ReadVariableOp) shape:[1280], type:FLOAT32 RO 5120 bytes, data:[] T#79(MobilenetV3large/Logits/BiasAdd/ReadVariableOp) shape:[1000], type:FLOAT32 RO 4000 bytes, data:[] T#80(MobilenetV3large/Conv/Conv2D) shape:[16, 3, 3, 3], type:FLOAT32 RO 1728 bytes, data:[] T#81(MobilenetV3large/expanded_conv/project/Conv2D) shape:[16, 1, 1, 16], type:FLOAT32 RO 1024 bytes, data:[] T#82(MobilenetV3large/expanded_conv_1/expand/Conv2D) shape:[64, 1, 1, 16], type:FLOAT32 RO 4096 bytes, data:[] T#83(MobilenetV3large/expanded_conv_1/project/Conv2D) shape:[24, 1, 
1, 64], type:FLOAT32 RO 6144 bytes, data:[] T#84(MobilenetV3large/expanded_conv_2/expand/Conv2D) shape:[72, 1, 1, 24], type:FLOAT32 RO 6912 bytes, data:[] T#85(MobilenetV3large/expanded_conv_2/project/Conv2D) shape:[24, 1, 1, 72], type:FLOAT32 RO 6912 bytes, data:[] T#86(MobilenetV3large/expanded_conv_3/expand/Conv2D) shape:[72, 1, 1, 24], type:FLOAT32 RO 6912 bytes, data:[] T#87(MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/Conv2D) shape:[24, 1, 1, 72], type:FLOAT32 RO 6912 bytes, data:[] T#88(MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D) shape:[72, 1, 1, 24], type:FLOAT32 RO 6912 bytes, data:[] T#89(MobilenetV3large/expanded_conv_3/project/Conv2D) shape:[40, 1, 1, 72], type:FLOAT32 RO 11520 bytes, data:[] T#90(MobilenetV3large/expanded_conv_4/expand/Conv2D) shape:[120, 1, 1, 40], type:FLOAT32 RO 19200 bytes, data:[] T#91(MobilenetV3large/expanded_conv_4/squeeze_excite/Conv/Conv2D) shape:[32, 1, 1, 120], type:FLOAT32 RO 15360 bytes, data:[] T#92(MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/Conv2D) shape:[120, 1, 1, 32], type:FLOAT32 RO 15360 bytes, data:[] T#93(MobilenetV3large/expanded_conv_4/project/Conv2D) shape:[40, 1, 1, 120], type:FLOAT32 RO 19200 bytes, data:[] T#94(MobilenetV3large/expanded_conv_5/expand/Conv2D) shape:[120, 1, 1, 40], type:FLOAT32 RO 19200 bytes, data:[] T#95(MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/Conv2D) shape:[32, 1, 1, 120], type:FLOAT32 RO 15360 bytes, data:[] T#96(MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/Conv2D) shape:[120, 1, 1, 32], type:FLOAT32 RO 15360 bytes, data:[] T#97(MobilenetV3large/expanded_conv_5/project/Conv2D) shape:[40, 1, 1, 120], type:FLOAT32 RO 19200 bytes, data:[] T#98(MobilenetV3large/expanded_conv_6/expand/Conv2D) shape:[240, 1, 1, 40], type:FLOAT32 RO 38400 bytes, data:[] T#99(MobilenetV3large/expanded_conv_6/project/Conv2D) shape:[80, 1, 1, 240], type:FLOAT32 RO 76800 bytes, data:[] T#100(MobilenetV3large/expanded_conv_7/expand/Conv2D) shape:[200, 1, 1, 80], type:FLOAT32 RO 64000 bytes, data:[] T#101(MobilenetV3large/expanded_conv_7/project/Conv2D) shape:[80, 1, 1, 200], type:FLOAT32 RO 64000 bytes, data:[] T#102(MobilenetV3large/expanded_conv_8/expand/Conv2D) shape:[184, 1, 1, 80], type:FLOAT32 RO 58880 bytes, data:[] T#103(MobilenetV3large/expanded_conv_8/project/Conv2D) shape:[80, 1, 1, 184], type:FLOAT32 RO 58880 bytes, data:[] T#104(MobilenetV3large/expanded_conv_9/expand/Conv2D) shape:[184, 1, 1, 80], type:FLOAT32 RO 58880 bytes, data:[] T#105(MobilenetV3large/expanded_conv_9/project/Conv2D) shape:[80, 1, 1, 184], type:FLOAT32 RO 58880 bytes, data:[] T#106(MobilenetV3large/expanded_conv_10/expand/Conv2D) shape:[480, 1, 1, 80], type:FLOAT32 RO 153600 bytes, data:[] T#107(MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D) shape:[120, 1, 1, 480], type:FLOAT32 RO 230400 bytes, data:[] T#108(MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D) shape:[480, 1, 1, 120], type:FLOAT32 RO 230400 bytes, data:[] T#109(MobilenetV3large/expanded_conv_10/project/Conv2D) shape:[112, 1, 1, 480], type:FLOAT32 RO 215040 bytes, data:[] T#110(MobilenetV3large/expanded_conv_11/expand/Conv2D) shape:[672, 1, 1, 112], type:FLOAT32 RO 301056 bytes, data:[] T#111(MobilenetV3large/expanded_conv_11/squeeze_excite/Conv/Conv2D) shape:[168, 1, 1, 672], type:FLOAT32 RO 451584 bytes, data:[] T#112(MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/Conv2D) shape:[672, 1, 1, 168], type:FLOAT32 RO 451584 bytes, data:[] T#113(MobilenetV3large/expanded_conv_11/project/Conv2D) 
shape:[112, 1, 1, 672], type:FLOAT32 RO 301056 bytes, data:[] T#114(MobilenetV3large/expanded_conv_12/expand/Conv2D) shape:[672, 1, 1, 112], type:FLOAT32 RO 301056 bytes, data:[] T#115(MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/Conv2D) shape:[168, 1, 1, 672], type:FLOAT32 RO 451584 bytes, data:[] T#116(MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D) shape:[672, 1, 1, 168], type:FLOAT32 RO 451584 bytes, data:[] T#117(MobilenetV3large/expanded_conv_12/project/Conv2D) shape:[160, 1, 1, 672], type:FLOAT32 RO 430080 bytes, data:[] T#118(MobilenetV3large/expanded_conv_13/expand/Conv2D) shape:[960, 1, 1, 160], type:FLOAT32 RO 614400 bytes, data:[] T#119(MobilenetV3large/expanded_conv_13/squeeze_excite/Conv/Conv2D) shape:[240, 1, 1, 960], type:FLOAT32 RO 921600 bytes, data:[] T#120(MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/Conv2D) shape:[960, 1, 1, 240], type:FLOAT32 RO 921600 bytes, data:[] T#121(MobilenetV3large/expanded_conv_13/project/Conv2D) shape:[160, 1, 1, 960], type:FLOAT32 RO 614400 bytes, data:[] T#122(MobilenetV3large/expanded_conv_14/expand/Conv2D) shape:[960, 1, 1, 160], type:FLOAT32 RO 614400 bytes, data:[] T#123(MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D) shape:[240, 1, 1, 960], type:FLOAT32 RO 921600 bytes, data:[] T#124(MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/Conv2D) shape:[960, 1, 1, 240], type:FLOAT32 RO 921600 bytes, data:[] T#125(MobilenetV3large/expanded_conv_14/project/Conv2D) shape:[160, 1, 1, 960], type:FLOAT32 RO 614400 bytes, data:[] T#126(MobilenetV3large/Conv_1/Conv2D) shape:[960, 1, 1, 160], type:FLOAT32 RO 614400 bytes, data:[] T#127(MobilenetV3large/Conv_2/Conv2D) shape:[1280, 1, 1, 960], type:FLOAT32 RO 4915200 bytes, data:[] T#128(MobilenetV3large/Logits/Conv2D) shape:[1000, 1, 1, 1280], type:FLOAT32 RO 5120000 bytes, data:[] T#129(MobilenetV3large/expanded_conv_1/depthwise/pad/Pad/paddings) shape:[4, 2], type:INT32 RO 32 bytes, data:[0, 0, 0, 1, 0, ...] T#130(MobilenetV3large/expanded_conv_10/squeeze_excite/AvgPool/Mean/reduction_indices) shape:[2], type:INT32 RO 8 bytes, data:[1, 2] T#131(MobilenetV3large/expanded_conv_12/depthwise/pad/Pad/paddings) shape:[4, 2], type:INT32 RO 32 bytes, data:[0, 0, 1, 2, 1, ...] 
T#132(MobilenetV3large/flatten_1/Const) shape:[2], type:INT32 RO 8 bytes, data:[-1, 1000] T#133(MobilenetV3large/rescaling/Cast/x) shape:[], type:FLOAT32 RO 4 bytes, data:[] T#134(MobilenetV3large/rescaling/Cast_1/x) shape:[], type:FLOAT32 RO 4 bytes, data:[] T#135(MobilenetV3large/tf.math.multiply/Mul/y) shape:[], type:FLOAT32 RO 4 bytes, data:[] T#136(MobilenetV3large/rescaling/mul) shape_signature:[-1, -1, -1, 3], type:FLOAT32 T#137(MobilenetV3large/rescaling/add) shape_signature:[-1, -1, -1, 3], type:FLOAT32 T#138(MobilenetV3large/Conv/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv/project/Conv2D;MobilenetV3large/Conv/Conv2D) shape_signature:[-1, -1, -1, 16], type:FLOAT32 T#139(MobilenetV3large/multiply/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu/Relu6;MobilenetV3large/tf.__operators__.add/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply/Mul) shape_signature:[-1, -1, -1, 16], type:FLOAT32 T#140(MobilenetV3large/re_lu_1/Relu;MobilenetV3large/expanded_conv/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv/project/Conv2D;MobilenetV3large/expanded_conv/depthwise/depthwise) shape_signature:[-1, -1, -1, 16], type:FLOAT32 T#141(MobilenetV3large/expanded_conv/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv/project/Conv2D) shape_signature:[-1, -1, -1, 16], type:FLOAT32 T#142(MobilenetV3large/expanded_conv/Add/add) shape_signature:[-1, -1, -1, 16], type:FLOAT32 T#143(MobilenetV3large/re_lu_2/Relu;MobilenetV3large/expanded_conv_1/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_1/depthwise/depthwise;MobilenetV3large/expanded_conv_1/expand/Conv2D) shape_signature:[-1, -1, -1, 64], type:FLOAT32 T#144(MobilenetV3large/expanded_conv_1/depthwise/pad/Pad) shape_signature:[-1, -1, -1, 64], type:FLOAT32 T#145(MobilenetV3large/re_lu_3/Relu;MobilenetV3large/expanded_conv_1/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_1/depthwise/depthwise) shape_signature:[-1, -1, -1, 64], type:FLOAT32 T#146(MobilenetV3large/expanded_conv_1/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_1/project/Conv2D) shape_signature:[-1, -1, -1, 24], type:FLOAT32 T#147(MobilenetV3large/re_lu_4/Relu;MobilenetV3large/expanded_conv_2/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_2/expand/Conv2D) shape_signature:[-1, -1, -1, 72], type:FLOAT32 T#148(MobilenetV3large/re_lu_5/Relu;MobilenetV3large/expanded_conv_2/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_2/depthwise/depthwise) shape_signature:[-1, -1, -1, 72], type:FLOAT32 T#149(MobilenetV3large/expanded_conv_2/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_2/project/Conv2D) shape_signature:[-1, -1, -1, 24], type:FLOAT32 T#150(MobilenetV3large/expanded_conv_2/Add/add) shape_signature:[-1, -1, -1, 24], type:FLOAT32 T#151(MobilenetV3large/re_lu_6/Relu;MobilenetV3large/expanded_conv_3/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_3/expand/Conv2D) shape_signature:[-1, -1, -1, 72], type:FLOAT32 T#152(MobilenetV3large/expanded_conv_3/depthwise/pad/Pad) shape_signature:[-1, -1, -1, 72], type:FLOAT32 
T#153(MobilenetV3large/re_lu_7/Relu;MobilenetV3large/expanded_conv_3/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_3/depthwise/depthwise) shape_signature:[-1, -1, -1, 72], type:FLOAT32 T#154(MobilenetV3large/expanded_conv_3/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 72], type:FLOAT32 T#155(MobilenetV3large/expanded_conv_3/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 24], type:FLOAT32 T#156(MobilenetV3large/re_lu_8/Relu6;MobilenetV3large/tf.__operators__.add_1/AddV2;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 72], type:FLOAT32 T#157(MobilenetV3large/tf.math.multiply_1/Mul) shape_signature:[-1, 1, 1, 72], type:FLOAT32 T#158(MobilenetV3large/expanded_conv_3/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 72], type:FLOAT32 T#159(MobilenetV3large/expanded_conv_3/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_5/project/Conv2D;MobilenetV3large/expanded_conv_3/project/Conv2D) shape_signature:[-1, -1, -1, 40], type:FLOAT32 T#160(MobilenetV3large/re_lu_9/Relu;MobilenetV3large/expanded_conv_4/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/expand/Conv2D) shape_signature:[-1, -1, -1, 120], type:FLOAT32 T#161(MobilenetV3large/re_lu_10/Relu;MobilenetV3large/expanded_conv_4/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/depthwise/depthwise) shape_signature:[-1, -1, -1, 120], type:FLOAT32 T#162(MobilenetV3large/expanded_conv_4/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 120], type:FLOAT32 T#163(MobilenetV3large/expanded_conv_4/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 32], type:FLOAT32 T#164(MobilenetV3large/re_lu_11/Relu6;MobilenetV3large/tf.__operators__.add_2/AddV2;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 120], type:FLOAT32 T#165(MobilenetV3large/tf.math.multiply_2/Mul) shape_signature:[-1, 1, 1, 120], type:FLOAT32 T#166(MobilenetV3large/expanded_conv_4/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 120], type:FLOAT32 T#167(MobilenetV3large/expanded_conv_4/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_5/project/Conv2D;MobilenetV3large/expanded_conv_4/project/Conv2D) shape_signature:[-1, -1, -1, 40], type:FLOAT32 T#168(MobilenetV3large/expanded_conv_4/Add/add) shape_signature:[-1, -1, -1, 40], type:FLOAT32 
T#169(MobilenetV3large/re_lu_12/Relu;MobilenetV3large/expanded_conv_5/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_5/expand/Conv2D) shape_signature:[-1, -1, -1, 120], type:FLOAT32 T#170(MobilenetV3large/re_lu_13/Relu;MobilenetV3large/expanded_conv_5/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_5/depthwise/depthwise) shape_signature:[-1, -1, -1, 120], type:FLOAT32 T#171(MobilenetV3large/expanded_conv_5/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 120], type:FLOAT32 T#172(MobilenetV3large/expanded_conv_5/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 32], type:FLOAT32 T#173(MobilenetV3large/re_lu_14/Relu6;MobilenetV3large/tf.__operators__.add_3/AddV2;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 120], type:FLOAT32 T#174(MobilenetV3large/tf.math.multiply_3/Mul) shape_signature:[-1, 1, 1, 120], type:FLOAT32 T#175(MobilenetV3large/expanded_conv_5/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 120], type:FLOAT32 T#176(MobilenetV3large/expanded_conv_5/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_5/project/Conv2D) shape_signature:[-1, -1, -1, 40], type:FLOAT32 T#177(MobilenetV3large/expanded_conv_5/Add/add) shape_signature:[-1, -1, -1, 40], type:FLOAT32 T#178(MobilenetV3large/expanded_conv_6/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_6/expand/Conv2D) shape_signature:[-1, -1, -1, 240], type:FLOAT32 T#179(MobilenetV3large/multiply_1/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_15/Relu6;MobilenetV3large/tf.__operators__.add_4/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_4/Mul) shape_signature:[-1, -1, -1, 240], type:FLOAT32 T#180(MobilenetV3large/expanded_conv_6/depthwise/pad/Pad) shape_signature:[-1, -1, -1, 240], type:FLOAT32 T#181(MobilenetV3large/expanded_conv_6/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_6/depthwise/depthwise) shape_signature:[-1, -1, -1, 240], type:FLOAT32 T#182(MobilenetV3large/multiply_2/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_16/Relu6;MobilenetV3large/tf.__operators__.add_5/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_5/Mul) shape_signature:[-1, -1, -1, 240], type:FLOAT32 T#183(MobilenetV3large/expanded_conv_6/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/project/Conv2D;MobilenetV3large/expanded_conv_6/project/Conv2D) shape_signature:[-1, -1, -1, 80], type:FLOAT32 T#184(MobilenetV3large/expanded_conv_7/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_7/depthwise/depthwise;MobilenetV3large/expanded_conv_7/expand/Conv2D) shape_signature:[-1, -1, -1, 200], type:FLOAT32 
T#185(MobilenetV3large/multiply_3/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_17/Relu6;MobilenetV3large/tf.__operators__.add_6/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_6/Mul) shape_signature:[-1, -1, -1, 200], type:FLOAT32 T#186(MobilenetV3large/expanded_conv_7/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_7/depthwise/depthwise1) shape_signature:[-1, -1, -1, 200], type:FLOAT32 T#187(MobilenetV3large/multiply_4/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_18/Relu6;MobilenetV3large/tf.__operators__.add_7/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_7/Mul) shape_signature:[-1, -1, -1, 200], type:FLOAT32 T#188(MobilenetV3large/expanded_conv_7/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/project/Conv2D;MobilenetV3large/expanded_conv_7/project/Conv2D) shape_signature:[-1, -1, -1, 80], type:FLOAT32 T#189(MobilenetV3large/expanded_conv_7/Add/add) shape_signature:[-1, -1, -1, 80], type:FLOAT32 T#190(MobilenetV3large/expanded_conv_8/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/depthwise/depthwise;MobilenetV3large/expanded_conv_8/expand/Conv2D) shape_signature:[-1, -1, -1, 184], type:FLOAT32 T#191(MobilenetV3large/multiply_5/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_19/Relu6;MobilenetV3large/tf.__operators__.add_8/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_8/Mul) shape_signature:[-1, -1, -1, 184], type:FLOAT32 T#192(MobilenetV3large/expanded_conv_8/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/depthwise/depthwise;MobilenetV3large/expanded_conv_8/depthwise/depthwise) shape_signature:[-1, -1, -1, 184], type:FLOAT32 T#193(MobilenetV3large/multiply_6/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_20/Relu6;MobilenetV3large/tf.__operators__.add_9/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_9/Mul) shape_signature:[-1, -1, -1, 184], type:FLOAT32 T#194(MobilenetV3large/expanded_conv_8/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/project/Conv2D;MobilenetV3large/expanded_conv_8/project/Conv2D) shape_signature:[-1, -1, -1, 80], type:FLOAT32 T#195(MobilenetV3large/expanded_conv_8/Add/add) shape_signature:[-1, -1, -1, 80], type:FLOAT32 T#196(MobilenetV3large/expanded_conv_9/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/depthwise/depthwise;MobilenetV3large/expanded_conv_9/expand/Conv2D) shape_signature:[-1, -1, -1, 184], type:FLOAT32 T#197(MobilenetV3large/multiply_7/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_21/Relu6;MobilenetV3large/tf.__operators__.add_10/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_10/Mul) shape_signature:[-1, -1, -1, 184], type:FLOAT32 T#198(MobilenetV3large/expanded_conv_9/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/depthwise/depthwise1) shape_signature:[-1, -1, -1, 184], type:FLOAT32 T#199(MobilenetV3large/multiply_8/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_22/Relu6;MobilenetV3large/tf.__operators__.add_11/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_11/Mul) shape_signature:[-1, -1, -1, 184], type:FLOAT32 T#200(MobilenetV3large/expanded_conv_9/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/project/Conv2D) shape_signature:[-1, -1, -1, 80], type:FLOAT32 
T#201(MobilenetV3large/expanded_conv_9/Add/add) shape_signature:[-1, -1, -1, 80], type:FLOAT32 T#202(MobilenetV3large/expanded_conv_10/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_10/expand/Conv2D) shape_signature:[-1, -1, -1, 480], type:FLOAT32 T#203(MobilenetV3large/multiply_9/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_23/Relu6;MobilenetV3large/tf.__operators__.add_12/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_12/Mul) shape_signature:[-1, -1, -1, 480], type:FLOAT32 T#204(MobilenetV3large/expanded_conv_10/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/depthwise/depthwise;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D1) shape_signature:[-1, -1, -1, 480], type:FLOAT32 T#205(MobilenetV3large/multiply_10/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_24/Relu6;MobilenetV3large/tf.__operators__.add_13/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_13/Mul) shape_signature:[-1, -1, -1, 480], type:FLOAT32 T#206(MobilenetV3large/expanded_conv_10/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 480], type:FLOAT32 T#207(MobilenetV3large/expanded_conv_10/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 120], type:FLOAT32 T#208(MobilenetV3large/re_lu_25/Relu6;MobilenetV3large/tf.__operators__.add_14/AddV2;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 480], type:FLOAT32 T#209(MobilenetV3large/tf.math.multiply_14/Mul) shape_signature:[-1, 1, 1, 480], type:FLOAT32 T#210(MobilenetV3large/expanded_conv_10/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 480], type:FLOAT32 T#211(MobilenetV3large/expanded_conv_10/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_11/project/Conv2D;MobilenetV3large/expanded_conv_10/project/Conv2D) shape_signature:[-1, -1, -1, 112], type:FLOAT32 T#212(MobilenetV3large/expanded_conv_11/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_11/expand/Conv2D) shape_signature:[-1, -1, -1, 672], type:FLOAT32 T#213(MobilenetV3large/multiply_11/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_26/Relu6;MobilenetV3large/tf.__operators__.add_15/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_15/Mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32 T#214(MobilenetV3large/expanded_conv_11/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_11/depthwise/depthwise) shape_signature:[-1, -1, -1, 672], type:FLOAT32 T#215(MobilenetV3large/multiply_12/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_27/Relu6;MobilenetV3large/tf.__operators__.add_16/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_16/Mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32 T#216(MobilenetV3large/expanded_conv_11/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 672], type:FLOAT32 
T#217(MobilenetV3large/expanded_conv_11/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 168], type:FLOAT32 T#218(MobilenetV3large/re_lu_28/Relu6;MobilenetV3large/tf.__operators__.add_17/AddV2;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 672], type:FLOAT32 T#219(MobilenetV3large/tf.math.multiply_17/Mul) shape_signature:[-1, 1, 1, 672], type:FLOAT32 T#220(MobilenetV3large/expanded_conv_11/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32 T#221(MobilenetV3large/expanded_conv_11/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_11/project/Conv2D) shape_signature:[-1, -1, -1, 112], type:FLOAT32 T#222(MobilenetV3large/expanded_conv_11/Add/add) shape_signature:[-1, -1, -1, 112], type:FLOAT32 T#223(MobilenetV3large/expanded_conv_12/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_12/expand/Conv2D) shape_signature:[-1, -1, -1, 672], type:FLOAT32 T#224(MobilenetV3large/multiply_13/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_29/Relu6;MobilenetV3large/tf.__operators__.add_18/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_18/Mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32 T#225(MobilenetV3large/expanded_conv_12/depthwise/pad/Pad) shape_signature:[-1, -1, -1, 672], type:FLOAT32 T#226(MobilenetV3large/expanded_conv_12/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_12/depthwise/depthwise;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D1) shape_signature:[-1, -1, -1, 672], type:FLOAT32 T#227(MobilenetV3large/multiply_14/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_30/Relu6;MobilenetV3large/tf.__operators__.add_19/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_19/Mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32 T#228(MobilenetV3large/expanded_conv_12/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 672], type:FLOAT32 T#229(MobilenetV3large/expanded_conv_12/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 168], type:FLOAT32 T#230(MobilenetV3large/re_lu_31/Relu6;MobilenetV3large/tf.__operators__.add_20/AddV2;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 672], type:FLOAT32 T#231(MobilenetV3large/tf.math.multiply_20/Mul) shape_signature:[-1, 1, 1, 672], type:FLOAT32 T#232(MobilenetV3large/expanded_conv_12/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32 
T#233(MobilenetV3large/expanded_conv_12/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/project/Conv2D;MobilenetV3large/expanded_conv_12/project/Conv2D) shape_signature:[-1, -1, -1, 160], type:FLOAT32 T#234(MobilenetV3large/expanded_conv_13/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_13/expand/Conv2D) shape_signature:[-1, -1, -1, 960], type:FLOAT32 T#235(MobilenetV3large/multiply_15/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_32/Relu6;MobilenetV3large/tf.__operators__.add_21/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_21/Mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32 T#236(MobilenetV3large/expanded_conv_13/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_13/depthwise/depthwise) shape_signature:[-1, -1, -1, 960], type:FLOAT32 T#237(MobilenetV3large/multiply_16/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_33/Relu6;MobilenetV3large/tf.__operators__.add_22/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_22/Mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32 T#238(MobilenetV3large/expanded_conv_13/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 960], type:FLOAT32 T#239(MobilenetV3large/expanded_conv_13/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 240], type:FLOAT32 T#240(MobilenetV3large/re_lu_34/Relu6;MobilenetV3large/tf.__operators__.add_23/AddV2;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 960], type:FLOAT32 T#241(MobilenetV3large/tf.math.multiply_23/Mul) shape_signature:[-1, 1, 1, 960], type:FLOAT32 T#242(MobilenetV3large/expanded_conv_13/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32 T#243(MobilenetV3large/expanded_conv_13/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/project/Conv2D;MobilenetV3large/expanded_conv_13/project/Conv2D) shape_signature:[-1, -1, -1, 160], type:FLOAT32 T#244(MobilenetV3large/expanded_conv_13/Add/add) shape_signature:[-1, -1, -1, 160], type:FLOAT32 T#245(MobilenetV3large/expanded_conv_14/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_14/expand/Conv2D) shape_signature:[-1, -1, -1, 960], type:FLOAT32 T#246(MobilenetV3large/multiply_17/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_35/Relu6;MobilenetV3large/tf.__operators__.add_24/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_24/Mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32 T#247(MobilenetV3large/expanded_conv_14/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_14/depthwise/depthwise) shape_signature:[-1, -1, -1, 960], type:FLOAT32 
T#248(MobilenetV3large/multiply_18/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_36/Relu6;MobilenetV3large/tf.__operators__.add_25/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_25/Mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32 T#249(MobilenetV3large/expanded_conv_14/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 960], type:FLOAT32 T#250(MobilenetV3large/expanded_conv_14/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 240], type:FLOAT32 T#251(MobilenetV3large/re_lu_37/Relu6;MobilenetV3large/tf.__operators__.add_26/AddV2;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 960], type:FLOAT32 T#252(MobilenetV3large/tf.math.multiply_26/Mul) shape_signature:[-1, 1, 1, 960], type:FLOAT32 T#253(MobilenetV3large/expanded_conv_14/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32 T#254(MobilenetV3large/expanded_conv_14/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/project/Conv2D) shape_signature:[-1, -1, -1, 160], type:FLOAT32 T#255(MobilenetV3large/expanded_conv_14/Add/add) shape_signature:[-1, -1, -1, 160], type:FLOAT32 T#256(MobilenetV3large/Conv_1/BatchNorm/FusedBatchNormV3;MobilenetV3large/Conv_1/Conv2D) shape_signature:[-1, -1, -1, 960], type:FLOAT32 T#257(MobilenetV3large/multiply_19/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_38/Relu6;MobilenetV3large/tf.__operators__.add_27/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_27/Mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32 T#258(MobilenetV3large/global_average_pooling2d/Mean) shape_signature:[-1, 1, 1, 960], type:FLOAT32 T#259(MobilenetV3large/Conv_2/BiasAdd;MobilenetV3large/Conv_2/Conv2D;MobilenetV3large/Conv_2/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 1280], type:FLOAT32 T#260(MobilenetV3large/multiply_20/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_39/Relu6;MobilenetV3large/tf.__operators__.add_28/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_28/Mul) shape_signature:[-1, 1, 1, 1280], type:FLOAT32 T#261(MobilenetV3large/Logits/BiasAdd;MobilenetV3large/Logits/Conv2D;MobilenetV3large/Logits/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 1000], type:FLOAT32 T#262(MobilenetV3large/flatten_1/Reshape) shape_signature:[-1, 1000], type:FLOAT32 T#263(StatefulPartitionedCall:0) shape_signature:[-1, 1000], type:FLOAT32 --------------------------------------------------------------- Your TFLite model has '1' signature_def(s). Signature#0 key: 'serving_default' - Subgraph: Subgraph#0 - Inputs: 'input_1' : T#0 - Outputs: 'Predictions' : T#263 --------------------------------------------------------------- Model size: 21944024 bytes Non-data buffer size: 60500 bytes (00.28 %) Total data buffer size: 21883524 bytes (99.72 %) (Zero value buffers): 0 bytes (00.00 %) * Buffers of TFLite model are mostly used for constant tensors. And zero value buffers are buffers filled with zeros. Non-data buffers area are used to store operators, subgraphs and etc. 
You can find more details from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs
Check GPU delegate compatibility
The Model Analyzer API provides a way to check the GPU delegate compatibility of a given model via the gpu_compatibility=True option.
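Per the API signature shown earlier, the analyzer also accepts a model_path argument, so you can point it at a converted model on disk instead of passing in-memory model content. The sketch below is only an illustration; 'my_model.tflite' is a hypothetical path.

import tensorflow as tf

# Analyze a .tflite file on disk and check GPU delegate compatibility.
# 'my_model.tflite' is a placeholder path used only for illustration.
tf.lite.experimental.Analyzer.analyze(
    model_path='my_model.tflite',
    gpu_compatibility=True)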
Case 1: When the model is incompatible
The following code shows how to use the gpu_compatibility=True option for a simple tf.function that uses tf.slice with a 2D tensor and tf.cosh, both of which are incompatible with the GPU delegate.
You will see GPU COMPATIBILITY WARNING for every node that has a compatibility issue.
import tensorflow as tf

@tf.function(input_signature=[
    tf.TensorSpec(shape=[4, 4], dtype=tf.float32)
])
def func(x):
  # tf.cosh has no TFLite builtin (it is exported as a Flex op) and
  # tf.slice on a 2D tensor is not supported by the GPU delegate.
  return tf.cosh(x) + tf.slice(x, [1, 1], [1, 1])

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [func.get_concrete_function()], func)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # Enable TensorFlow Lite builtin ops.
    tf.lite.OpsSet.SELECT_TF_OPS,    # Enable select TensorFlow ops.
]
fb_model = converter.convert()

tf.lite.experimental.Analyzer.analyze(model_content=fb_model, gpu_compatibility=True)
=== TFLite ModelAnalyzer === Your TFLite model has '1' subgraph(s). In the subgraph description below, T# represents the Tensor numbers. For example, in Subgraph#0, the FlexCosh op takes tensor #0 as input and produces tensor #2 as output. Subgraph#0 main(T#0) -> [T#4] Op#0 FlexCosh(T#0) -> [T#2] GPU COMPATIBILITY WARNING: Not supported custom op FlexCosh Op#1 SLICE(T#0, T#1[1, 1], T#1[1, 1]) -> [T#3] GPU COMPATIBILITY WARNING: SLICE supports for 3 or 4 dimensional tensors only, but node has 2 dimensional tensors. Op#2 ADD(T#2, T#3) -> [T#4] GPU COMPATIBILITY WARNING: Subgraph#0 has GPU delegate compatibility issues at nodes 0, 1 with TFLite runtime version 2.10.0-rc0 Tensors of Subgraph#0 T#0(x) shape:[4, 4], type:FLOAT32 T#1(Slice/begin) shape:[2], type:INT32 RO 8 bytes, data:[1, 1] T#2(Cosh) shape:[4, 4], type:FLOAT32 T#3(Slice) shape:[1, 1], type:FLOAT32 T#4(Identity) shape:[4, 4], type:FLOAT32 --------------------------------------------------------------- Model size: 1128 bytes Non-data buffer size: 1008 bytes (89.36 %) Total data buffer size: 120 bytes (10.64 %) (Zero value buffers): 0 bytes (00.00 %) * Buffers of TFLite model are mostly used for constant tensors. And zero value buffers are buffers filled with zeros. Non-data buffers area are used to store operators, subgraphs and etc. You can find more details from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs 2022-08-11 19:23:30.270593: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format. 2022-08-11 19:23:30.270630: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency. 2022-08-11 19:23:30.290075: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1918] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s): Flex ops: FlexCosh Details: tf.Cosh(tensor<4x4xf32>) -> (tensor<4x4xf32>) : {device = ""} See instructions: https://www.tensorflow.org/lite/guide/ops_select
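If you want to react to these warnings in a script rather than reading them by eye, one option is to capture the printed report and search it for the warning text. This is only a minimal sketch; it assumes the analyzer writes its report to standard output, as in the output shown above.

import io
import contextlib

import tensorflow as tf

# Capture the analyzer report so it can be inspected programmatically.
report_buffer = io.StringIO()
with contextlib.redirect_stdout(report_buffer):
    tf.lite.experimental.Analyzer.analyze(
        model_content=fb_model, gpu_compatibility=True)

report = report_buffer.getvalue()
if 'GPU COMPATIBILITY WARNING' in report:
    print('Model has GPU delegate compatibility issues.')
else:
    print('No GPU delegate compatibility warnings found.')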
Case 2: When the model is compatible
In this example, the given model is compatible with the GPU delegate.
Note: Even if the tool finds no compatibility issues, it does not guarantee that your model will work well with the GPU delegate on every device. Some runtime incompatibility may still occur, such as the CL_DEVICE_IMAGE_SUPPORT feature being missing from the target OpenGL backend.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(128, 128)),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])

fb_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
tf.lite.experimental.Analyzer.analyze(model_content=fb_model, gpu_compatibility=True)
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmp9h0p_lcj/assets INFO:tensorflow:Assets written to: /tmpfs/tmp/tmp9h0p_lcj/assets === TFLite ModelAnalyzer === Your TFLite model has '1' subgraph(s). In the subgraph description below, T# represents the Tensor numbers. For example, in Subgraph#0, the RESHAPE op takes tensor #0 and tensor #1 as input and produces tensor #4 as output. Subgraph#0 main(T#0) -> [T#6] Op#0 RESHAPE(T#0, T#1[-1, 16384]) -> [T#4] Op#1 FULLY_CONNECTED(T#4, T#2[], T#-1) -> [T#5] Op#2 FULLY_CONNECTED(T#5, T#3[], T#-1) -> [T#6] Tensors of Subgraph#0 T#0(serving_default_flatten_2_input:0) shape_signature:[-1, 128, 128], type:FLOAT32 T#1(sequential_1/flatten_2/Const) shape:[2], type:INT32 RO 8 bytes, data:[-1, 16384] T#2(sequential_1/dense_2/MatMul1) shape:[256, 16384], type:FLOAT32 RO 16777216 bytes, data:[] T#3(sequential_1/dense_3/MatMul) shape:[10, 256], type:FLOAT32 RO 10240 bytes, data:[] T#4(sequential_1/flatten_2/Reshape) shape_signature:[-1, 16384], type:FLOAT32 T#5(sequential_1/dense_2/MatMul;sequential_1/dense_2/Relu;sequential_1/dense_2/BiasAdd) shape_signature:[-1, 256], type:FLOAT32 T#6(StatefulPartitionedCall:0) shape_signature:[-1, 10], type:FLOAT32 Your model looks compatibile with GPU delegate with TFLite runtime version 2.10.0-rc0. But it doesn't guarantee that your model works well with GPU delegate. There could be some runtime incompatibililty happen. --------------------------------------------------------------- Your TFLite model has '1' signature_def(s). Signature#0 key: 'serving_default' - Subgraph: Subgraph#0 - Inputs: 'flatten_2_input' : T#0 - Outputs: 'dense_3' : T#6 --------------------------------------------------------------- Model size: 16789072 bytes Non-data buffer size: 1504 bytes (00.01 %) Total data buffer size: 16787568 bytes (99.99 %) (Zero value buffers): 0 bytes (00.00 %) * Buffers of TFLite model are mostly used for constant tensors. And zero value buffers are buffers filled with zeros. Non-data buffers area are used to store operators, subgraphs and etc. You can find more details from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs 2022-08-11 19:23:31.163988: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format. 2022-08-11 19:23:31.164046: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
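As the note above points out, the only way to be sure the GPU delegate actually works on a given device is to try it at runtime. The sketch below is an illustration under stated assumptions, not part of the analyzer: the delegate library name 'libtensorflowlite_gpu_delegate.so' is platform dependent and used here only as a placeholder, and the code falls back to CPU execution if the delegate cannot be loaded.

import numpy as np
import tensorflow as tf

# Try to load a GPU delegate; the library name is platform dependent and
# 'libtensorflowlite_gpu_delegate.so' is only a placeholder here.
try:
    gpu_delegate = tf.lite.experimental.load_delegate(
        'libtensorflowlite_gpu_delegate.so')
    interpreter = tf.lite.Interpreter(
        model_content=fb_model, experimental_delegates=[gpu_delegate])
except (ValueError, OSError):
    # Fall back to CPU execution if the delegate is not available.
    interpreter = tf.lite.Interpreter(model_content=fb_model)

interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run a single inference with random input to confirm the model executes.
dummy_input = np.random.rand(1, 128, 128).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy_input)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)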