The op serializes protobuf messages provided in the input tensors.
`tf.io.encode_proto(sizes, values, field_names, message_type, descriptor_source='local://', name=None)`
The types of the tensors in `values` must match the schema for the fields specified in `field_names`. All the tensors in `values` must have a common shape prefix, `batch_shape`.
The `sizes` tensor specifies repeat counts for each field. The repeat count (last dimension) of each tensor in `values` must be greater than or equal to the corresponding repeat count in `sizes`.

A `message_type` name must be provided to give context for the field names. The actual message descriptor can be looked up either in the linked-in descriptor pool or in a filename provided by the caller using the `descriptor_source` attribute.
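The shape contract above can be sketched with a small example. This is an illustrative sketch, assuming an eager TF 2.x session and the `tensorflow.Int64List` message, which is one of the protos linked into the TensorFlow binary and therefore resolvable through the default local descriptor pool:

```python
import tensorflow as tf

# Encode a batch of two tensorflow.Int64List messages.
# values: one tensor per field, each with common batch_shape prefix [2]
# and repeat dimension last.
values = tf.constant([[1, 2, 3],
                      [4, 5, 0]], dtype=tf.int64)
# sizes: one repeat count per (batch element, field); the second
# message keeps only its first two values.
sizes = tf.constant([[3], [2]], dtype=tf.int32)

serialized = tf.io.encode_proto(
    sizes=sizes,
    values=[values],
    field_names=["value"],
    message_type="tensorflow.Int64List")

print(serialized.shape)  # (2,) -- one serialized message per batch element
```

Each element of `serialized` is a `DT_STRING` scalar holding one wire-format message.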
For the most part, the mapping between Proto field types and TensorFlow dtypes is straightforward. However, there are a few special cases:
- A proto field that contains a submessage or group can only be converted to `DT_STRING` (the serialized submessage). This is to reduce the complexity of the API. The resulting string can be used as input to another instance of the `decode_proto` op.
- TensorFlow lacks support for unsigned integers. The ops represent uint64 types as a `DT_INT64` with the same twos-complement bit pattern (the obvious way). Unsigned int32 values can be represented exactly by specifying type `DT_INT64`, or using twos-complement if the caller specifies `DT_INT32`.
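The submessage rule composes naturally: a nested message is supplied as its serialization. As a hedged sketch, the following wraps a serialized `tensorflow.Int64List` into the `int64_list` field of a `tensorflow.Feature` message (both protos are linked into the TensorFlow binary):

```python
import tensorflow as tf

# First serialize the inner message.
inner = tf.io.encode_proto(
    sizes=[[3]],
    values=[tf.constant([[1, 2, 3]], dtype=tf.int64)],
    field_names=["value"],
    message_type="tensorflow.Int64List")        # shape [1], dtype string

# Then pass the serialized submessage as a DT_STRING value for the
# outer message's int64_list field.
outer = tf.io.encode_proto(
    sizes=[[1]],                                # one submessage per batch element
    values=[tf.reshape(inner, [1, 1])],         # [batch, repeat] = [1, 1]
    field_names=["int64_list"],
    message_type="tensorflow.Feature")
```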
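The twos-complement convention for unsigned integers can be illustrated outside TensorFlow entirely; this NumPy sketch shows that the bit pattern survives the signed/unsigned reinterpretation losslessly:

```python
import numpy as np

# A uint64 value above 2**63 - 1 keeps its 64-bit pattern but reads as
# a negative int64 under twos complement.
u = np.uint64(2**64 - 1)
i = u.astype(np.int64)       # same bits, interpreted as -1
back = i.astype(np.uint64)   # reinterpreting recovers the original value
```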
The `descriptor_source` attribute selects the source of protocol descriptors to consult when looking up `message_type`. This may be:
- An empty string or `local://`, in which case protocol descriptors are created for C++ (not Python) proto definitions linked to the binary.
- A file, in which case protocol descriptors are created from the file, which is expected to contain a `FileDescriptorSet` serialized as a string. NOTE: You can build a `descriptor_source` file using the `--descriptor_set_out` and `--include_imports` options to the protocol compiler `protoc`.
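A `FileDescriptorSet` file can also be produced from Python without invoking `protoc`, by copying an already-loaded descriptor. This is an illustrative sketch, assuming TensorFlow's bundled `feature_pb2` module (whose `feature.proto` has no imports, so a single `FileDescriptorProto` suffices):

```python
import tempfile
import tensorflow as tf
from google.protobuf import descriptor_pb2
from tensorflow.core.example import feature_pb2

# Serialize a FileDescriptorSet containing feature.proto to disk.
fds = descriptor_pb2.FileDescriptorSet()
feature_pb2.DESCRIPTOR.CopyToProto(fds.file.add())
with tempfile.NamedTemporaryFile(suffix=".desc", delete=False) as f:
    f.write(fds.SerializeToString())
    path = f.name

# Resolve message_type from the file rather than the linked-in pool.
serialized = tf.io.encode_proto(
    sizes=[[2]],
    values=[tf.constant([[7, 8]], dtype=tf.int64)],
    field_names=["value"],
    message_type="tensorflow.Int64List",
    descriptor_source=path)
```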