A dual encoder model built on a transformer-based encoder.

This is an implementation of the dual encoder network structure based on the transformer stack, as described in "Language-agnostic BERT Sentence Embedding".

The DualEncoder allows a user to pass in a transformer stack and builds a dual encoder model on top of that stack.

network: A transformer network which should output an encoding output.
max_seq_length: The maximum allowed sequence length for the transformer.
normalize: If set to True, normalize the encoding produced by the transformer.
logit_scale: The scaling factor applied to the dot products during training.
logit_margin: The margin between positive and negative logits during training.
output: The output style for this network. Can be either 'logits' or 'predictions'. If set to 'predictions', it will output the embedding produced by the transformer network.
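To make the roles of normalize, logit_scale, and logit_margin concrete, here is a minimal pure-Python sketch of how a dual encoder turns two batches of encodings into training logits. It is an illustration, not the library's implementation: the margin is applied as an additive-margin subtraction on the positive (diagonal) logits, which is one common reading of the LaBSE-style setup; the function names are hypothetical.

```python
import math

def l2_normalize(v):
    # L2-normalize one encoding vector (what normalize=True would do).
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def dual_encoder_logits(queries, candidates, logit_scale=1.0,
                        logit_margin=0.0, training=True):
    """Sketch: scaled (cosine) similarity logits with an additive margin.

    queries[i] and candidates[i] are assumed to be a positive pair;
    all other candidates in the batch act as in-batch negatives.
    """
    q = [l2_normalize(v) for v in queries]
    c = [l2_normalize(v) for v in candidates]
    # Dot products of normalized vectors are cosine similarities,
    # scaled by logit_scale before the softmax.
    logits = [[logit_scale * sum(a * b for a, b in zip(qi, cj)) for cj in c]
              for qi in q]
    if training and logit_margin:
        # Assumed additive-margin behavior: make the positive logits harder
        # by subtracting the margin from the diagonal only.
        for i in range(min(len(q), len(c))):
            logits[i][i] -= logit_margin
    return logits
```

For example, with identical query and candidate batches, logit_scale=10.0 and logit_margin=0.2, the diagonal entries come out as 10.0 - 0.2 = 9.8 while off-diagonal entries stay at the scaled cosine similarity of the mismatched pairs.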

checkpoint_items: Returns a dictionary of items to be additionally checkpointed.



Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (i.e., builds a new computational graph from the provided inputs).

inputs: A tensor or list of tensors.
training: Boolean or boolean scalar tensor, indicating whether to run the network in training mode or inference mode.
mask: A mask or list of masks. A mask can be either a tensor or None (no mask).

A tensor if there is a single output, or a list of tensors if there is more than one output.
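During training, the logits produced above are typically fed to a softmax cross-entropy loss over in-batch negatives, where the diagonal entry of each row is the positive candidate. The following is a small pure-Python sketch of that loss under this in-batch-negatives assumption; the function name is hypothetical.

```python
import math

def in_batch_softmax_loss(logits):
    """Mean softmax cross-entropy where row i's positive is candidate i.

    logits: a square matrix of similarity logits, e.g. the output of a
    dual encoder over a batch of (query, candidate) pairs.
    """
    total = 0.0
    for i, row in enumerate(logits):
        # Numerically stable log-sum-exp over the row (all candidates).
        m = max(row)
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        # Cross-entropy for row i with target class i (the positive pair).
        total += log_z - row[i]
    return total / len(logits)
```

With well-separated logits such as [[9.8, 0.0], [0.0, 9.8]], the loss is close to zero; the logit_margin makes the positives slightly harder, which pushes the model to enlarge the gap between matched and mismatched pairs.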