Transformer model for language understanding


This tutorial trains a transformer model to translate Portuguese to English, using a dataset from the TED Talks Open Translation Project.

This is an advanced example that assumes knowledge of text generation and attention.

This tutorial demonstrates how to build a transformer model and most of its components from scratch using low-level TensorFlow and Keras functionalities. Some of this could be minimized if you took advantage of built-in APIs like tf.keras.layers.MultiHeadAttention.

The core idea behind a transformer model is self-attention: the ability to attend to different positions of the input sequence to compute a representation of that sequence. A transformer creates stacks of self-attention layers, which are explained below in the sections Scaled dot product attention and Multi-head attention.

A transformer model handles variable-sized input using stacks of self-attention layers instead of RNNs or CNNs. This general architecture has a number of advantages:

  • It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, StarCraft units).
  • Layer outputs can be calculated in parallel, instead of in series as in an RNN.
  • Distant items can affect each other's output without passing through many RNN-steps, or convolution layers (see Scene Memory Transformer for example).
  • It can learn long-range dependencies. This is a challenge in many sequence tasks.

The downsides of this architecture are:

  • For a time-series, the output for a time-step is calculated from the entire history instead of only the inputs and current hidden-state. This may be less efficient.
  • If the input does have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words.

After training the model in this notebook, you will be able to input a Portuguese sentence and get back its English translation.

Attention heatmap

Setup

pip install tensorflow_datasets
pip install -U 'tensorflow-text==2.8.*'
import logging
import time

import numpy as np
import matplotlib.pyplot as plt

import tensorflow_datasets as tfds
import tensorflow as tf

# Import tf_text to load the ops used by the tokenizer saved model
import tensorflow_text  # pylint: disable=unused-import
logging.getLogger('tensorflow').setLevel(logging.ERROR)  # suppress warnings

Download the dataset

Use TensorFlow datasets to load the Portuguese-English translation dataset from the TED Talks Open Translation Project.

This dataset contains approximately 50000 training examples, 1100 validation examples, and 2000 test examples.

examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
                               as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
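
If you want to confirm these split sizes, the metadata object returned by tfds.load records the number of examples in each split (an optional, quick check):

# Optional: print the number of examples in each split from the dataset metadata.
for split_name in ('train', 'validation', 'test'):
  print(split_name, metadata.splits[split_name].num_examples)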

The tf.data.Dataset object returned by TensorFlow datasets yields pairs of text examples:

for pt_examples, en_examples in train_examples.batch(3).take(1):
  for pt in pt_examples.numpy():
    print(pt.decode('utf-8'))

  print()

  for en in en_examples.numpy():
    print(en.decode('utf-8'))
e quando melhoramos a procura , tiramos a única vantagem da impressão , que é a serendipidade .
mas e se estes fatores fossem ativos ?
mas eles não tinham a curiosidade de me testar .

and when you improve searchability , you actually take away the one advantage of print , which is serendipity .
but what if it were active ?
but they did n't test for curiosity .

Text tokenization & detokenization

You can't train a model directly on text. The text needs to be converted to some numeric representation first. Typically, you convert the text to sequences of token IDs, which are used as indices into an embedding.
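
As a minimal, illustrative sketch (the vocabulary size and token IDs below are made up), each token ID simply selects a row of an embedding matrix:

# Hypothetical example: token IDs index rows of an embedding matrix.
embedding = tf.keras.layers.Embedding(input_dim=8000, output_dim=4)
token_ids = tf.constant([[2, 72, 117, 3]])  # a "sentence" as token IDs
print(embedding(token_ids).shape)  # (1, 4, 4): one 4-dimensional vector per token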

One popular implementation is demonstrated in the Subword tokenizer tutorial, which builds subword tokenizers (text.BertTokenizer) optimized for this dataset and exports them as a saved_model.

Download, unzip, and import the saved_model:

model_name = 'ted_hrlr_translate_pt_en_converter'
tf.keras.utils.get_file(
    f'{model_name}.zip',
    f'https://storage.googleapis.com/download.tensorflow.org/models/{model_name}.zip',
    cache_dir='.', cache_subdir='', extract=True
)
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/ted_hrlr_translate_pt_en_converter.zip
188416/184801 [==============================] - 0s 0us/step
196608/184801 [===============================] - 0s 0us/step
'./ted_hrlr_translate_pt_en_converter.zip'
tokenizers = tf.saved_model.load(model_name)

The tf.saved_model contains two text tokenizers, one for English and one for Portuguese. Both have the same methods:

[item for item in dir(tokenizers.en) if not item.startswith('_')]
['detokenize',
 'get_reserved_tokens',
 'get_vocab_path',
 'get_vocab_size',
 'lookup',
 'tokenize',
 'tokenizer',
 'vocab']

The tokenize method converts a batch of strings to a padded-batch of token IDs. This method splits punctuation, lowercases and unicode-normalizes the input before tokenizing. That standardization is not visible here because the input data is already standardized.
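
To see the standardization anyway, you can run a raw, unstandardized string through a tokenize/detokenize round trip (the sentence below is made up purely for illustration):

# Illustrative only: the tokenizer lowercases and splits punctuation.
sample = tf.constant(['Searchability, for EXAMPLE!'])
sample_tokens = tokenizers.en.tokenize(sample)
print(tokenizers.en.detokenize(sample_tokens)[0].numpy().decode('utf-8'))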

for en in en_examples.numpy():
  print(en.decode('utf-8'))
and when you improve searchability , you actually take away the one advantage of print , which is serendipity .
but what if it were active ?
but they did n't test for curiosity .
encoded = tokenizers.en.tokenize(en_examples)

for row in encoded.to_list():
  print(row)
[2, 72, 117, 79, 1259, 1491, 2362, 13, 79, 150, 184, 311, 71, 103, 2308, 74, 2679, 13, 148, 80, 55, 4840, 1434, 2423, 540, 15, 3]
[2, 87, 90, 107, 76, 129, 1852, 30, 3]
[2, 87, 83, 149, 50, 9, 56, 664, 85, 2512, 15, 3]

The detokenize method attempts to convert these token IDs back to human readable text:

round_trip = tokenizers.en.detokenize(encoded)
for line in round_trip.numpy():
  print(line.decode('utf-8'))
and when you improve searchability , you actually take away the one advantage of print , which is serendipity .
but what if it were active ?
but they did n ' t test for curiosity .

The lower level lookup method converts from token-IDs to token text:

tokens = tokenizers.en.lookup(encoded)
tokens
<tf.RaggedTensor [[b'[START]', b'and', b'when', b'you', b'improve', b'search', b'##ability',
  b',', b'you', b'actually', b'take', b'away', b'the', b'one', b'advantage',
  b'of', b'print', b',', b'which', b'is', b's', b'##ere', b'##nd', b'##ip',
  b'##ity', b'.', b'[END]']                                                 ,
 [b'[START]', b'but', b'what', b'if', b'it', b'were', b'active', b'?',
  b'[END]']                                                           ,
 [b'[START]', b'but', b'they', b'did', b'n', b"'", b't', b'test', b'for',
  b'curiosity', b'.', b'[END]']                                          ]>

Here you can see the "subword" aspect of the tokenizers. The word "searchability" is decomposed into "search ##ability" and the word "serendipity" into "s ##ere ##nd ##ip ##ity".

Now take a minute to investigate the distribution of tokens per example in the dataset:

lengths = []

for pt_examples, en_examples in train_examples.batch(1024):
  pt_tokens = tokenizers.pt.tokenize(pt_examples)
  lengths.append(pt_tokens.row_lengths())

  en_tokens = tokenizers.en.tokenize(en_examples)
  lengths.append(en_tokens.row_lengths())
  print('.', end='', flush=True)
...................................................
all_lengths = np.concatenate(lengths)

plt.hist(all_lengths, np.linspace(0, 500, 101))
plt.ylim(plt.ylim())
max_length = max(all_lengths)
plt.plot([max_length, max_length], plt.ylim())
plt.title(f'Max tokens per example: {max_length}');

png

MAX_TOKENS = 128
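
As an optional check on this cutoff, you can compute the fraction of examples that fall at or below MAX_TOKENS:

# Optional: fraction of examples with at most MAX_TOKENS tokens.
print(np.mean(all_lengths <= MAX_TOKENS))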

Setup input pipeline

To build an input pipeline suitable for training, define some functions to transform the dataset.

Define a function to drop the examples longer than MAX_TOKENS:

def filter_max_tokens(pt, en):
  num_tokens = tf.maximum(tf.shape(pt)[1], tf.shape(en)[1])
  return num_tokens < MAX_TOKENS

Define a function that tokenizes the batches of raw text:

def tokenize_pairs(pt, en):
  pt = tokenizers.pt.tokenize(pt)
  # Convert from ragged to dense, padding with zeros.
  pt = pt.to_tensor()

  en = tokenizers.en.tokenize(en)
  # Convert from ragged to dense, padding with zeros.
  en = en.to_tensor()
  return pt, en

Here's a simple input pipeline that processes, shuffles and batches the data:

BUFFER_SIZE = 20000
BATCH_SIZE = 64
def make_batches(ds):
  return (
      ds
      .cache()
      .shuffle(BUFFER_SIZE)
      .batch(BATCH_SIZE)
      .map(tokenize_pairs, num_parallel_calls=tf.data.AUTOTUNE)
      .filter(filter_max_tokens)
      .prefetch(tf.data.AUTOTUNE))


train_batches = make_batches(train_examples)
val_batches = make_batches(val_examples)

Positional encoding

Attention layers see their input as a set of vectors, with no sequential order. This model also doesn't contain any recurrent or convolutional layers. Because of this, a "positional encoding" is added to give the model some information about the relative position of the tokens in the sentence.

The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of tokens in a sentence. So after adding the positional encoding, tokens will be closer to each other based on the similarity of their meaning and their position in the sentence, in the d-dimensional space.

The formula for calculating the positional encoding is as follows:

\[\Large{PE_{(pos, 2i)} = \sin(pos / 10000^{2i / d_{model} })} \]

\[\Large{PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i / d_{model} })} \]

def get_angles(pos, i, d_model):
  angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
  return pos * angle_rates
def positional_encoding(position, d_model):
  angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                          np.arange(d_model)[np.newaxis, :],
                          d_model)

  # apply sin to even indices in the array; 2i
  angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])

  # apply cos to odd indices in the array; 2i+1
  angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])

  pos_encoding = angle_rads[np.newaxis, ...]

  return tf.cast(pos_encoding, dtype=tf.float32)
n, d = 2048, 512
pos_encoding = positional_encoding(n, d)
print(pos_encoding.shape)
pos_encoding = pos_encoding[0]

# Juggle the dimensions for the plot
pos_encoding = tf.reshape(pos_encoding, (n, d//2, 2))
pos_encoding = tf.transpose(pos_encoding, (2, 1, 0))
pos_encoding = tf.reshape(pos_encoding, (d, n))

plt.pcolormesh(pos_encoding, cmap='RdBu')
plt.ylabel('Depth')
plt.xlabel('Position')
plt.colorbar()
plt.show()
(1, 2048, 512)

png

Masking

Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value 0 is present: it outputs a 1 at those locations, and a 0 otherwise.

def create_padding_mask(seq):
  seq = tf.cast(tf.math.equal(seq, 0), tf.float32)

  # add extra dimensions to add the padding
  # to the attention logits.
  return seq[:, tf.newaxis, tf.newaxis, :]  # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
<tf.Tensor: shape=(3, 1, 1, 5), dtype=float32, numpy=
array([[[[0., 0., 1., 1., 0.]]],


       [[[0., 0., 0., 1., 1.]]],


       [[[1., 1., 1., 0., 0.]]]], dtype=float32)>

The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.

This means that to predict the third token, only the first and second token will be used. Similarly to predict the fourth token, only the first, second and the third tokens will be used and so on.

def create_look_ahead_mask(size):
  mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
  return mask  # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[0., 1., 1.],
       [0., 0., 1.],
       [0., 0., 0.]], dtype=float32)>

Scaled dot product attention

scaled_dot_product_attention

The attention function used by a transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:

\[\Large{Attention(Q, K, V) = softmax_k\left(\frac{QK^T}{\sqrt{d_k} }\right) V} \]

The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into regions where it has small gradients and producing a very hard (nearly one-hot) softmax.

For example, consider that Q and K have a mean of 0 and variance of 1. Their matrix product will have a mean of 0 and variance of d_k. So the square root of d_k is used for scaling, giving a consistent variance regardless of the value of d_k. If the variance is too low, the output may be too flat to optimize effectively. If the variance is too high, the softmax may saturate at initialization, making it difficult to learn.

The mask is multiplied by -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K, and it is applied immediately before a softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output.
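
As a quick, illustrative numeric check of the scaling argument above (not part of the model): dot products of random unit-variance vectors have a variance close to d_k, and dividing by the square root of d_k brings the variance back to roughly 1.

# Illustrative only: check the variance argument behind the 1/sqrt(dk) scaling.
dk = 512
q = tf.random.normal((1000, dk))
k = tf.random.normal((1000, dk))
logits = tf.reduce_sum(q * k, axis=-1)  # one dot product per row
print(tf.math.reduce_variance(logits))  # roughly dk
print(tf.math.reduce_variance(logits / tf.math.sqrt(float(dk))))  # roughly 1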

def scaled_dot_product_attention(q, k, v, mask):
  """Calculate the attention weights.
  q, k, v must have matching leading dimensions.
  k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
  The mask has different shapes depending on its type(padding or look ahead)
  but it must be broadcastable for addition.

  Args:
    q: query shape == (..., seq_len_q, depth)
    k: key shape == (..., seq_len_k, depth)
    v: value shape == (..., seq_len_v, depth_v)
    mask: Float tensor with shape broadcastable
          to (..., seq_len_q, seq_len_k). Defaults to None.

  Returns:
    output, attention_weights
  """

  matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)

  # scale matmul_qk
  dk = tf.cast(tf.shape(k)[-1], tf.float32)
  scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

  # add the mask to the scaled tensor.
  if mask is not None:
    scaled_attention_logits += (mask * -1e9)

  # softmax is normalized on the last axis (seq_len_k) so that the scores
  # add up to 1.
  attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)

  output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)

  return output, attention_weights

As the softmax normalization is done along the key dimension, the resulting attention weights decide how much importance is given to each key for every query.

The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the tokens you want to focus on are kept as-is and the irrelevant tokens are flushed out.

def print_out(q, k, v):
  temp_out, temp_attn = scaled_dot_product_attention(
      q, k, v, None)
  print('Attention weights are:')
  print(temp_attn)
  print('Output is:')
  print(temp_out)
np.set_printoptions(suppress=True)

temp_k = tf.constant([[10, 0, 0],
                      [0, 10, 0],
                      [0, 0, 10],
                      [0, 0, 10]], dtype=tf.float32)  # (4, 3)

temp_v = tf.constant([[1, 0],
                      [10, 0],
                      [100, 5],
                      [1000, 6]], dtype=tf.float32)  # (4, 2)

# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0. 1. 0. 0.]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[10.  0.]], shape=(1, 2), dtype=float32)
# This query aligns with a repeated key (third and fourth),
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.  0.  0.5 0.5]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[550.    5.5]], shape=(1, 2), dtype=float32)
# This query aligns equally with the first and second key,
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.5 0.5 0.  0. ]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[5.5 0. ]], shape=(1, 2), dtype=float32)

Pass all the queries together.

temp_q = tf.constant([[0, 0, 10],
                      [0, 10, 0],
                      [10, 10, 0]], dtype=tf.float32)  # (3, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor(
[[0.  0.  0.5 0.5]
 [0.  1.  0.  0. ]
 [0.5 0.5 0.  0. ]], shape=(3, 4), dtype=float32)
Output is:
tf.Tensor(
[[550.    5.5]
 [ 10.    0. ]
 [  5.5   0. ]], shape=(3, 2), dtype=float32)

Multi-head attention

multi-head attention

Multi-head attention consists of four parts:

  • Linear layers and split into heads.
  • Scaled dot-product attention.
  • Concatenation of heads.
  • Final linear layer.

Each multi-head attention block gets three inputs: Q (query), K (key), V (value). These are put through linear (Dense) layers before the multi-head attention function.

In the diagram above, (K, Q, V) are passed through separate linear (Dense) layers for each attention head. For simplicity/efficiency the code below implements this using a single dense layer with num_heads times as many outputs. The output is rearranged to a shape of (batch, num_heads, ...) before applying the attention function.

The scaled_dot_product_attention function defined above is applied in a single call, broadcasted for efficiency. An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using tf.transpose, and tf.reshape) and put through a final Dense layer.

Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information from different representation subspaces at different positions. After the split, each head has a reduced dimensionality, so the total computation cost is the same as that of single-head attention with full dimensionality.

class MultiHeadAttention(tf.keras.layers.Layer):
  def __init__(self,*, d_model, num_heads):
    super(MultiHeadAttention, self).__init__()
    self.num_heads = num_heads
    self.d_model = d_model

    assert d_model % self.num_heads == 0

    self.depth = d_model // self.num_heads

    self.wq = tf.keras.layers.Dense(d_model)
    self.wk = tf.keras.layers.Dense(d_model)
    self.wv = tf.keras.layers.Dense(d_model)

    self.dense = tf.keras.layers.Dense(d_model)

  def split_heads(self, x, batch_size):
    """Split the last dimension into (num_heads, depth).
    Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
    """
    x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
    return tf.transpose(x, perm=[0, 2, 1, 3])

  def call(self, v, k, q, mask):
    batch_size = tf.shape(q)[0]

    q = self.wq(q)  # (batch_size, seq_len, d_model)
    k = self.wk(k)  # (batch_size, seq_len, d_model)
    v = self.wv(v)  # (batch_size, seq_len, d_model)

    q = self.split_heads(q, batch_size)  # (batch_size, num_heads, seq_len_q, depth)
    k = self.split_heads(k, batch_size)  # (batch_size, num_heads, seq_len_k, depth)
    v = self.split_heads(v, batch_size)  # (batch_size, num_heads, seq_len_v, depth)

    # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
    # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
    scaled_attention, attention_weights = scaled_dot_product_attention(
        q, k, v, mask)

    scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])  # (batch_size, seq_len_q, num_heads, depth)

    concat_attention = tf.reshape(scaled_attention,
                                  (batch_size, -1, self.d_model))  # (batch_size, seq_len_q, d_model)

    output = self.dense(concat_attention)  # (batch_size, seq_len_q, d_model)

    return output, attention_weights

Create a MultiHeadAttention layer to try out. At each location in the sequence, y, the MultiHeadAttention runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.

temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512))  # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
(TensorShape([1, 60, 512]), TensorShape([1, 8, 60, 60]))

Point wise feed forward network

The point wise feed forward network consists of two fully-connected layers with a ReLU activation in between.

def point_wise_feed_forward_network(d_model, dff):
  return tf.keras.Sequential([
      tf.keras.layers.Dense(dff, activation='relu'),  # (batch_size, seq_len, dff)
      tf.keras.layers.Dense(d_model)  # (batch_size, seq_len, d_model)
  ])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
TensorShape([64, 50, 512])

Encoder and decoder

transformer

A transformer model follows the same general pattern as a standard sequence to sequence with attention model.

  • The input sentence is passed through N encoder layers that generate an output for each token in the sequence.
  • The decoder attends to the encoder's output and its own input (self-attention) to predict the next token.

Encoder layer

Each encoder layer consists of sublayers:

  1. Multi-head attention (with padding mask)
  2. Point wise feed forward networks.

Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.

The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis. There are N encoder layers in a transformer.

class EncoderLayer(tf.keras.layers.Layer):
  def __init__(self,*, d_model, num_heads, dff, rate=0.1):
    super(EncoderLayer, self).__init__()

    self.mha = MultiHeadAttention(d_model=d_model, num_heads=num_heads)
    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)

    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)

  def call(self, x, training, mask):

    attn_output, _ = self.mha(x, x, x, mask)  # (batch_size, input_seq_len, d_model)
    attn_output = self.dropout1(attn_output, training=training)
    out1 = self.layernorm1(x + attn_output)  # (batch_size, input_seq_len, d_model)

    ffn_output = self.ffn(out1)  # (batch_size, input_seq_len, d_model)
    ffn_output = self.dropout2(ffn_output, training=training)
    out2 = self.layernorm2(out1 + ffn_output)  # (batch_size, input_seq_len, d_model)

    return out2
sample_encoder_layer = EncoderLayer(d_model=512, num_heads=8, dff=2048)

sample_encoder_layer_output = sample_encoder_layer(
    tf.random.uniform((64, 43, 512)), False, None)

sample_encoder_layer_output.shape  # (batch_size, input_seq_len, d_model)
TensorShape([64, 43, 512])

Decoder layer

Each decoder layer consists of sublayers:

  1. Masked multi-head attention (with look ahead mask and padding mask)
  2. Multi-head attention (with padding mask). V (value) and K (key) receive the encoder output as inputs. Q (query) receives the output from the masked multi-head attention sublayer.
  3. Point wise feed forward networks

Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis.

There are N decoder layers in the model.

As Q receives the output from the decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next token by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.

class DecoderLayer(tf.keras.layers.Layer):
  def __init__(self,*, d_model, num_heads, dff, rate=0.1):
    super(DecoderLayer, self).__init__()

    self.mha1 = MultiHeadAttention(d_model=d_model, num_heads=num_heads)
    self.mha2 = MultiHeadAttention(d_model=d_model, num_heads=num_heads)

    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)

    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    self.dropout3 = tf.keras.layers.Dropout(rate)

  def call(self, x, enc_output, training,
           look_ahead_mask, padding_mask):
    # enc_output.shape == (batch_size, input_seq_len, d_model)

    attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)  # (batch_size, target_seq_len, d_model)
    attn1 = self.dropout1(attn1, training=training)
    out1 = self.layernorm1(attn1 + x)

    attn2, attn_weights_block2 = self.mha2(
        enc_output, enc_output, out1, padding_mask)  # (batch_size, target_seq_len, d_model)
    attn2 = self.dropout2(attn2, training=training)
    out2 = self.layernorm2(attn2 + out1)  # (batch_size, target_seq_len, d_model)

    ffn_output = self.ffn(out2)  # (batch_size, target_seq_len, d_model)
    ffn_output = self.dropout3(ffn_output, training=training)
    out3 = self.layernorm3(ffn_output + out2)  # (batch_size, target_seq_len, d_model)

    return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(d_model=512, num_heads=8, dff=2048)

sample_decoder_layer_output, _, _ = sample_decoder_layer(
    tf.random.uniform((64, 50, 512)), sample_encoder_layer_output,
    False, None, None)

sample_decoder_layer_output.shape  # (batch_size, target_seq_len, d_model)
TensorShape([64, 50, 512])

Encoder

The Encoder consists of:

  1. Input Embedding
  2. Positional Encoding
  3. N encoder layers

The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.

class Encoder(tf.keras.layers.Layer):
  def __init__(self,*, num_layers, d_model, num_heads, dff, input_vocab_size,
               rate=0.1):
    super(Encoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers

    self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
    self.pos_encoding = positional_encoding(MAX_TOKENS, self.d_model)

    self.enc_layers = [
        EncoderLayer(d_model=d_model, num_heads=num_heads, dff=dff, rate=rate)
        for _ in range(num_layers)]

    self.dropout = tf.keras.layers.Dropout(rate)

  def call(self, x, training, mask):

    seq_len = tf.shape(x)[1]

    # adding embedding and position encoding.
    x = self.embedding(x)  # (batch_size, input_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x = self.enc_layers[i](x, training, mask)

    return x  # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8,
                         dff=2048, input_vocab_size=8500)
temp_input = tf.random.uniform((64, 62), dtype=tf.int64, minval=0, maxval=200)

sample_encoder_output = sample_encoder(temp_input, training=False, mask=None)

print(sample_encoder_output.shape)  # (batch_size, input_seq_len, d_model)
(64, 62, 512)

Decoder

The Decoder consists of:

  1. Output Embedding
  2. Positional Encoding
  3. N decoder layers

The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.

class Decoder(tf.keras.layers.Layer):
  def __init__(self,*, num_layers, d_model, num_heads, dff, target_vocab_size,
               rate=0.1):
    super(Decoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers

    self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
    self.pos_encoding = positional_encoding(MAX_TOKENS, d_model)

    self.dec_layers = [
        DecoderLayer(d_model=d_model, num_heads=num_heads, dff=dff, rate=rate)
        for _ in range(num_layers)]
    self.dropout = tf.keras.layers.Dropout(rate)

  def call(self, x, enc_output, training,
           look_ahead_mask, padding_mask):

    seq_len = tf.shape(x)[1]
    attention_weights = {}

    x = self.embedding(x)  # (batch_size, target_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x, block1, block2 = self.dec_layers[i](x, enc_output, training,
                                             look_ahead_mask, padding_mask)

      attention_weights[f'decoder_layer{i+1}_block1'] = block1
      attention_weights[f'decoder_layer{i+1}_block2'] = block2

    # x.shape == (batch_size, target_seq_len, d_model)
    return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8,
                         dff=2048, target_vocab_size=8000)
temp_input = tf.random.uniform((64, 26), dtype=tf.int64, minval=0, maxval=200)

output, attn = sample_decoder(temp_input,
                              enc_output=sample_encoder_output,
                              training=False,
                              look_ahead_mask=None,
                              padding_mask=None)

output.shape, attn['decoder_layer2_block2'].shape
(TensorShape([64, 26, 512]), TensorShape([64, 8, 26, 62]))

Create the transformer model

A transformer consists of the encoder, decoder, and a final linear layer. The output of the decoder is the input to the linear layer and its output is returned.

class Transformer(tf.keras.Model):
  def __init__(self,*, num_layers, d_model, num_heads, dff, input_vocab_size,
               target_vocab_size, rate=0.1):
    super().__init__()
    self.encoder = Encoder(num_layers=num_layers, d_model=d_model,
                           num_heads=num_heads, dff=dff,
                           input_vocab_size=input_vocab_size, rate=rate)

    self.decoder = Decoder(num_layers=num_layers, d_model=d_model,
                           num_heads=num_heads, dff=dff,
                           target_vocab_size=target_vocab_size, rate=rate)

    self.final_layer = tf.keras.layers.Dense(target_vocab_size)

  def call(self, inputs, training):
    # Keras models prefer it if you pass all your inputs in the first argument
    inp, tar = inputs

    padding_mask, look_ahead_mask = self.create_masks(inp, tar)

    enc_output = self.encoder(inp, training, padding_mask)  # (batch_size, inp_seq_len, d_model)

    # dec_output.shape == (batch_size, tar_seq_len, d_model)
    dec_output, attention_weights = self.decoder(
        tar, enc_output, training, look_ahead_mask, padding_mask)

    final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)

    return final_output, attention_weights

  def create_masks(self, inp, tar):
    # Encoder padding mask (Used in the 2nd attention block in the decoder too.)
    padding_mask = create_padding_mask(inp)

    # Used in the 1st attention block in the decoder.
    # It is used to pad and mask future tokens in the input received by
    # the decoder.
    look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
    dec_target_padding_mask = create_padding_mask(tar)
    look_ahead_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)

    return padding_mask, look_ahead_mask
sample_transformer = Transformer(
    num_layers=2, d_model=512, num_heads=8, dff=2048,
    input_vocab_size=8500, target_vocab_size=8000)

temp_input = tf.random.uniform((64, 38), dtype=tf.int64, minval=0, maxval=200)
temp_target = tf.random.uniform((64, 36), dtype=tf.int64, minval=0, maxval=200)

fn_out, _ = sample_transformer([temp_input, temp_target], training=False)

fn_out.shape  # (batch_size, tar_seq_len, target_vocab_size)
TensorShape([64, 36, 8000])

Set hyperparameters

To keep this example small and relatively fast, the values for num_layers, d_model, and dff have been reduced.

The base model described in the paper used: num_layers=6, d_model=512, dff=2048.

num_layers = 4
d_model = 128
dff = 512
num_heads = 8
dropout_rate = 0.1

Optimizer

Use the Adam optimizer with a custom learning rate scheduler according to the formula in the paper.

\[\Large{lrate = d_{model}^{-0.5} * \min(step{\_}num^{-0.5}, step{\_}num \cdot warmup{\_}steps^{-1.5})}\]

class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
  def __init__(self, d_model, warmup_steps=4000):
    super(CustomSchedule, self).__init__()

    self.d_model = d_model
    self.d_model = tf.cast(self.d_model, tf.float32)

    self.warmup_steps = warmup_steps

  def __call__(self, step):
    arg1 = tf.math.rsqrt(step)
    arg2 = step * (self.warmup_steps ** -1.5)

    return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)

optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
                                     epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)

plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel('Learning Rate')
plt.xlabel('Train Step')
Text(0.5, 0, 'Train Step')

png

Loss and metrics

Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
def loss_function(real, pred):
  mask = tf.math.logical_not(tf.math.equal(real, 0))
  loss_ = loss_object(real, pred)

  mask = tf.cast(mask, dtype=loss_.dtype)
  loss_ *= mask

  return tf.reduce_sum(loss_)/tf.reduce_sum(mask)


def accuracy_function(real, pred):
  accuracies = tf.equal(real, tf.argmax(pred, axis=2))

  mask = tf.math.logical_not(tf.math.equal(real, 0))
  accuracies = tf.math.logical_and(mask, accuracies)

  accuracies = tf.cast(accuracies, dtype=tf.float32)
  mask = tf.cast(mask, dtype=tf.float32)
  return tf.reduce_sum(accuracies)/tf.reduce_sum(mask)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.Mean(name='train_accuracy')

Training and checkpointing

transformer = Transformer(
    num_layers=num_layers,
    d_model=d_model,
    num_heads=num_heads,
    dff=dff,
    input_vocab_size=tokenizers.pt.get_vocab_size().numpy(),
    target_vocab_size=tokenizers.en.get_vocab_size().numpy(),
    rate=dropout_rate)

Create the checkpoint path and the checkpoint manager. These will be used to save checkpoints every five epochs.

checkpoint_path = './checkpoints/train'

ckpt = tf.train.Checkpoint(transformer=transformer,
                           optimizer=optimizer)

ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)

# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
  ckpt.restore(ckpt_manager.latest_checkpoint)
  print('Latest checkpoint restored!!')

The target is divided into tar_inp and tar_real. tar_inp is passed as an input to the decoder. tar_real is that same input shifted by 1: at each location in tar_inp, tar_real contains the next token that should be predicted.

For example, sentence = 'SOS A lion in the jungle is sleeping EOS' becomes:

  • tar_inp = 'SOS A lion in the jungle is sleeping'
  • tar_real = 'A lion in the jungle is sleeping EOS'
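
As a minimal sketch with made-up token IDs, the slicing used later in train_step produces exactly this shift:

# Illustrative only: split a target sequence into decoder input and labels.
tar = tf.constant([[2, 45, 97, 12, 3]])  # hypothetical IDs: [START], ..., [END]
tar_inp = tar[:, :-1]  # fed to the decoder
tar_real = tar[:, 1:]  # the token to predict at each position
print(tar_inp.numpy())   # [[ 2 45 97 12]]
print(tar_real.numpy())  # [[45 97 12  3]]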

A transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.

During training this example uses teacher-forcing (like in the text generation tutorial). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.

As the model predicts each token, self-attention allows it to look at the previous tokens in the input sequence to better predict the next token.

To prevent the model from peeking at the expected output, the model uses a look-ahead mask.

EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.

train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]


@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
  tar_inp = tar[:, :-1]
  tar_real = tar[:, 1:]

  with tf.GradientTape() as tape:
    predictions, _ = transformer([inp, tar_inp],
                                 training = True)
    loss = loss_function(tar_real, predictions)

  gradients = tape.gradient(loss, transformer.trainable_variables)
  optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))

  train_loss(loss)
  train_accuracy(accuracy_function(tar_real, predictions))

Portuguese is used as the input language and English is the target language.

for epoch in range(EPOCHS):
  start = time.time()

  train_loss.reset_states()
  train_accuracy.reset_states()

  # inp -> portuguese, tar -> english
  for (batch, (inp, tar)) in enumerate(train_batches):
    train_step(inp, tar)

    if batch % 50 == 0:
      print(f'Epoch {epoch + 1} Batch {batch} Loss {train_loss.result():.4f} Accuracy {train_accuracy.result():.4f}')

  if (epoch + 1) % 5 == 0:
    ckpt_save_path = ckpt_manager.save()
    print(f'Saving checkpoint for epoch {epoch+1} at {ckpt_save_path}')

  print(f'Epoch {epoch + 1} Loss {train_loss.result():.4f} Accuracy {train_accuracy.result():.4f}')

  print(f'Time taken for 1 epoch: {time.time() - start:.2f} secs\n')
Epoch 1 Batch 0 Loss 8.8752 Accuracy 0.0000
Epoch 1 Batch 50 Loss 8.8109 Accuracy 0.0103
Epoch 1 Batch 100 Loss 8.7017 Accuracy 0.0276
Epoch 1 Batch 150 Loss 8.5794 Accuracy 0.0349
Epoch 1 Batch 200 Loss 8.4346 Accuracy 0.0394
Epoch 1 Batch 250 Loss 8.2634 Accuracy 0.0438
Epoch 1 Batch 300 Loss 8.0720 Accuracy 0.0492
Epoch 1 Batch 350 Loss 7.8763 Accuracy 0.0575
Epoch 1 Batch 400 Loss 7.6895 Accuracy 0.0647
Epoch 1 Batch 450 Loss 7.5216 Accuracy 0.0725
Epoch 1 Batch 500 Loss 7.3732 Accuracy 0.0800
Epoch 1 Batch 550 Loss 7.2392 Accuracy 0.0871
Epoch 1 Batch 600 Loss 7.1156 Accuracy 0.0943
Epoch 1 Batch 650 Loss 6.9994 Accuracy 0.1013
Epoch 1 Loss 6.8954 Accuracy 0.1076
Time taken for 1 epoch: 78.87 secs

Epoch 2 Batch 0 Loss 5.4016 Accuracy 0.2056
Epoch 2 Batch 50 Loss 5.4228 Accuracy 0.1967
Epoch 2 Batch 100 Loss 5.3617 Accuracy 0.2022
Epoch 2 Batch 150 Loss 5.3225 Accuracy 0.2056
Epoch 2 Batch 200 Loss 5.2860 Accuracy 0.2096
Epoch 2 Batch 250 Loss 5.2513 Accuracy 0.2130
Epoch 2 Batch 300 Loss 5.2213 Accuracy 0.2158
Epoch 2 Batch 350 Loss 5.1949 Accuracy 0.2183
Epoch 2 Batch 400 Loss 5.1683 Accuracy 0.2208
Epoch 2 Batch 450 Loss 5.1391 Accuracy 0.2236
Epoch 2 Batch 500 Loss 5.1132 Accuracy 0.2263
Epoch 2 Batch 550 Loss 5.0920 Accuracy 0.2282
Epoch 2 Batch 600 Loss 5.0709 Accuracy 0.2301
Epoch 2 Batch 650 Loss 5.0521 Accuracy 0.2318
Epoch 2 Batch 700 Loss 5.0325 Accuracy 0.2337
Epoch 2 Loss 5.0318 Accuracy 0.2338
Time taken for 1 epoch: 67.21 secs

Epoch 3 Batch 0 Loss 4.8419 Accuracy 0.2486
Epoch 3 Batch 50 Loss 4.7059 Accuracy 0.2625
Epoch 3 Batch 100 Loss 4.6962 Accuracy 0.2642
Epoch 3 Batch 150 Loss 4.6861 Accuracy 0.2645
Epoch 3 Batch 200 Loss 4.6714 Accuracy 0.2658
Epoch 3 Batch 250 Loss 4.6506 Accuracy 0.2675
Epoch 3 Batch 300 Loss 4.6452 Accuracy 0.2680
Epoch 3 Batch 350 Loss 4.6333 Accuracy 0.2691
Epoch 3 Batch 400 Loss 4.6233 Accuracy 0.2701
Epoch 3 Batch 450 Loss 4.6114 Accuracy 0.2713
Epoch 3 Batch 500 Loss 4.5961 Accuracy 0.2728
Epoch 3 Batch 550 Loss 4.5820 Accuracy 0.2745
Epoch 3 Batch 600 Loss 4.5690 Accuracy 0.2762
Epoch 3 Batch 650 Loss 4.5540 Accuracy 0.2779
Epoch 3 Loss 4.5382 Accuracy 0.2795
Time taken for 1 epoch: 67.29 secs

Epoch 4 Batch 0 Loss 4.5960 Accuracy 0.2810
Epoch 4 Batch 50 Loss 4.2663 Accuracy 0.3093
Epoch 4 Batch 100 Loss 4.2350 Accuracy 0.3128
Epoch 4 Batch 150 Loss 4.2310 Accuracy 0.3134
Epoch 4 Batch 200 Loss 4.2145 Accuracy 0.3156
Epoch 4 Batch 250 Loss 4.1990 Accuracy 0.3172
Epoch 4 Batch 300 Loss 4.1838 Accuracy 0.3191
Epoch 4 Batch 350 Loss 4.1685 Accuracy 0.3210
Epoch 4 Batch 400 Loss 4.1504 Accuracy 0.3232
Epoch 4 Batch 450 Loss 4.1352 Accuracy 0.3251
Epoch 4 Batch 500 Loss 4.1198 Accuracy 0.3271
Epoch 4 Batch 550 Loss 4.1022 Accuracy 0.3291
Epoch 4 Batch 600 Loss 4.0869 Accuracy 0.3312
Epoch 4 Batch 650 Loss 4.0717 Accuracy 0.3333
Epoch 4 Loss 4.0567 Accuracy 0.3352
Time taken for 1 epoch: 66.56 secs

Epoch 5 Batch 0 Loss 3.7833 Accuracy 0.3722
Epoch 5 Batch 50 Loss 3.7676 Accuracy 0.3679
Epoch 5 Batch 100 Loss 3.7407 Accuracy 0.3725
Epoch 5 Batch 150 Loss 3.7293 Accuracy 0.3740
Epoch 5 Batch 200 Loss 3.7235 Accuracy 0.3740
Epoch 5 Batch 250 Loss 3.7064 Accuracy 0.3765
Epoch 5 Batch 300 Loss 3.6987 Accuracy 0.3784
Epoch 5 Batch 350 Loss 3.6883 Accuracy 0.3797
Epoch 5 Batch 400 Loss 3.6754 Accuracy 0.3812
Epoch 5 Batch 450 Loss 3.6608 Accuracy 0.3831
Epoch 5 Batch 500 Loss 3.6493 Accuracy 0.3844
Epoch 5 Batch 550 Loss 3.6374 Accuracy 0.3862
Epoch 5 Batch 600 Loss 3.6272 Accuracy 0.3875
Epoch 5 Batch 650 Loss 3.6196 Accuracy 0.3885
Epoch 5 Batch 700 Loss 3.6088 Accuracy 0.3899
Saving checkpoint for epoch 5 at ./checkpoints/train/ckpt-1
Epoch 5 Loss 3.6075 Accuracy 0.3900
Time taken for 1 epoch: 67.89 secs

Epoch 6 Batch 0 Loss 3.2746 Accuracy 0.4342
Epoch 6 Batch 50 Loss 3.3152 Accuracy 0.4227
Epoch 6 Batch 100 Loss 3.3257 Accuracy 0.4218
Epoch 6 Batch 150 Loss 3.3297 Accuracy 0.4212
Epoch 6 Batch 200 Loss 3.3234 Accuracy 0.4220
Epoch 6 Batch 250 Loss 3.3184 Accuracy 0.4228
Epoch 6 Batch 300 Loss 3.3150 Accuracy 0.4226
Epoch 6 Batch 350 Loss 3.3072 Accuracy 0.4237
Epoch 6 Batch 400 Loss 3.3024 Accuracy 0.4246
Epoch 6 Batch 450 Loss 3.2957 Accuracy 0.4255
Epoch 6 Batch 500 Loss 3.2882 Accuracy 0.4265
Epoch 6 Batch 550 Loss 3.2805 Accuracy 0.4277
Epoch 6 Batch 600 Loss 3.2739 Accuracy 0.4289
Epoch 6 Batch 650 Loss 3.2654 Accuracy 0.4301
Epoch 6 Loss 3.2567 Accuracy 0.4315
Time taken for 1 epoch: 66.77 secs

Epoch 7 Batch 0 Loss 2.8377 Accuracy 0.4828
Epoch 7 Batch 50 Loss 3.0138 Accuracy 0.4579
Epoch 7 Batch 100 Loss 3.0084 Accuracy 0.4587
Epoch 7 Batch 150 Loss 3.0014 Accuracy 0.4600
Epoch 7 Batch 200 Loss 2.9968 Accuracy 0.4615
Epoch 7 Batch 250 Loss 2.9861 Accuracy 0.4630
Epoch 7 Batch 300 Loss 2.9838 Accuracy 0.4632
Epoch 7 Batch 350 Loss 2.9723 Accuracy 0.4647
Epoch 7 Batch 400 Loss 2.9639 Accuracy 0.4660
Epoch 7 Batch 450 Loss 2.9570 Accuracy 0.4673
Epoch 7 Batch 500 Loss 2.9485 Accuracy 0.4685
Epoch 7 Batch 550 Loss 2.9420 Accuracy 0.4696
Epoch 7 Batch 600 Loss 2.9364 Accuracy 0.4705
Epoch 7 Batch 650 Loss 2.9282 Accuracy 0.4719
Epoch 7 Loss 2.9236 Accuracy 0.4727
Time taken for 1 epoch: 66.60 secs

Epoch 8 Batch 0 Loss 2.5937 Accuracy 0.4994
Epoch 8 Batch 50 Loss 2.6908 Accuracy 0.4994
Epoch 8 Batch 100 Loss 2.6846 Accuracy 0.5009
Epoch 8 Batch 150 Loss 2.6896 Accuracy 0.4998
Epoch 8 Batch 200 Loss 2.6881 Accuracy 0.5009
Epoch 8 Batch 250 Loss 2.6852 Accuracy 0.5013
Epoch 8 Batch 300 Loss 2.6777 Accuracy 0.5024
Epoch 8 Batch 350 Loss 2.6704 Accuracy 0.5039
Epoch 8 Batch 400 Loss 2.6676 Accuracy 0.5045
Epoch 8 Batch 450 Loss 2.6623 Accuracy 0.5055
Epoch 8 Batch 500 Loss 2.6589 Accuracy 0.5062
Epoch 8 Batch 550 Loss 2.6530 Accuracy 0.5070
Epoch 8 Batch 600 Loss 2.6509 Accuracy 0.5073
Epoch 8 Batch 650 Loss 2.6461 Accuracy 0.5081
Epoch 8 Loss 2.6424 Accuracy 0.5087
Time taken for 1 epoch: 66.53 secs

Epoch 9 Batch 0 Loss 2.4541 Accuracy 0.5275
Epoch 9 Batch 50 Loss 2.4855 Accuracy 0.5262
Epoch 9 Batch 100 Loss 2.4790 Accuracy 0.5266
Epoch 9 Batch 150 Loss 2.4612 Accuracy 0.5305
Epoch 9 Batch 200 Loss 2.4613 Accuracy 0.5308
Epoch 9 Batch 250 Loss 2.4606 Accuracy 0.5318
Epoch 9 Batch 300 Loss 2.4592 Accuracy 0.5322
Epoch 9 Batch 350 Loss 2.4532 Accuracy 0.5330
Epoch 9 Batch 400 Loss 2.4506 Accuracy 0.5333
Epoch 9 Batch 450 Loss 2.4479 Accuracy 0.5336
Epoch 9 Batch 500 Loss 2.4438 Accuracy 0.5342
Epoch 9 Batch 550 Loss 2.4413 Accuracy 0.5347
Epoch 9 Batch 600 Loss 2.4387 Accuracy 0.5353
Epoch 9 Batch 650 Loss 2.4357 Accuracy 0.5362
Epoch 9 Loss 2.4334 Accuracy 0.5365
Time taken for 1 epoch: 66.47 secs

Epoch 10 Batch 0 Loss 2.0028 Accuracy 0.6072
Epoch 10 Batch 50 Loss 2.2426 Accuracy 0.5588
Epoch 10 Batch 100 Loss 2.2546 Accuracy 0.5587
Epoch 10 Batch 150 Loss 2.2623 Accuracy 0.5575
Epoch 10 Batch 200 Loss 2.2657 Accuracy 0.5577
Epoch 10 Batch 250 Loss 2.2705 Accuracy 0.5577
Epoch 10 Batch 300 Loss 2.2715 Accuracy 0.5576
Epoch 10 Batch 350 Loss 2.2746 Accuracy 0.5575
Epoch 10 Batch 400 Loss 2.2680 Accuracy 0.5584
Epoch 10 Batch 450 Loss 2.2685 Accuracy 0.5583
Epoch 10 Batch 500 Loss 2.2664 Accuracy 0.5586
Epoch 10 Batch 550 Loss 2.2665 Accuracy 0.5586
Epoch 10 Batch 600 Loss 2.2686 Accuracy 0.5585
Epoch 10 Batch 650 Loss 2.2674 Accuracy 0.5588
Saving checkpoint for epoch 10 at ./checkpoints/train/ckpt-2
Epoch 10 Loss 2.2672 Accuracy 0.5592
Time taken for 1 epoch: 66.91 secs

Epoch 11 Batch 0 Loss 2.2826 Accuracy 0.5411
Epoch 11 Batch 50 Loss 2.1336 Accuracy 0.5752
Epoch 11 Batch 100 Loss 2.1306 Accuracy 0.5770
Epoch 11 Batch 150 Loss 2.1364 Accuracy 0.5763
Epoch 11 Batch 200 Loss 2.1307 Accuracy 0.5776
Epoch 11 Batch 250 Loss 2.1329 Accuracy 0.5773
Epoch 11 Batch 300 Loss 2.1331 Accuracy 0.5776
Epoch 11 Batch 350 Loss 2.1358 Accuracy 0.5774
Epoch 11 Batch 400 Loss 2.1372 Accuracy 0.5773
Epoch 11 Batch 450 Loss 2.1325 Accuracy 0.5780
Epoch 11 Batch 500 Loss 2.1346 Accuracy 0.5774
Epoch 11 Batch 550 Loss 2.1365 Accuracy 0.5771
Epoch 11 Batch 600 Loss 2.1383 Accuracy 0.5771
Epoch 11 Batch 650 Loss 2.1385 Accuracy 0.5771
Epoch 11 Batch 700 Loss 2.1384 Accuracy 0.5773
Epoch 11 Loss 2.1384 Accuracy 0.5773
Time taken for 1 epoch: 67.18 secs

Epoch 12 Batch 0 Loss 1.8959 Accuracy 0.6328
Epoch 12 Batch 50 Loss 1.9745 Accuracy 0.6010
Epoch 12 Batch 100 Loss 1.9896 Accuracy 0.5984
Epoch 12 Batch 150 Loss 1.9991 Accuracy 0.5966
Epoch 12 Batch 200 Loss 2.0074 Accuracy 0.5952
Epoch 12 Batch 250 Loss 2.0106 Accuracy 0.5949
Epoch 12 Batch 300 Loss 2.0145 Accuracy 0.5944
Epoch 12 Batch 350 Loss 2.0141 Accuracy 0.5945
Epoch 12 Batch 400 Loss 2.0158 Accuracy 0.5943
Epoch 12 Batch 450 Loss 2.0143 Accuracy 0.5946
Epoch 12 Batch 500 Loss 2.0161 Accuracy 0.5943
Epoch 12 Batch 550 Loss 2.0179 Accuracy 0.5941
Epoch 12 Batch 600 Loss 2.0211 Accuracy 0.5936
Epoch 12 Batch 650 Loss 2.0227 Accuracy 0.5934
Epoch 12 Batch 700 Loss 2.0250 Accuracy 0.5932
Epoch 12 Loss 2.0256 Accuracy 0.5931
Time taken for 1 epoch: 67.06 secs

Epoch 13 Batch 0 Loss 1.8211 Accuracy 0.6230
Epoch 13 Batch 50 Loss 1.8812 Accuracy 0.6125
Epoch 13 Batch 100 Loss 1.8978 Accuracy 0.6112
Epoch 13 Batch 150 Loss 1.9060 Accuracy 0.6093
Epoch 13 Batch 200 Loss 1.9210 Accuracy 0.6072
Epoch 13 Batch 250 Loss 1.9226 Accuracy 0.6072
Epoch 13 Batch 300 Loss 1.9251 Accuracy 0.6066
Epoch 13 Batch 350 Loss 1.9240 Accuracy 0.6068
Epoch 13 Batch 400 Loss 1.9260 Accuracy 0.6068
Epoch 13 Batch 450 Loss 1.9238 Accuracy 0.6073
Epoch 13 Batch 500 Loss 1.9268 Accuracy 0.6070
Epoch 13 Batch 550 Loss 1.9276 Accuracy 0.6069
Epoch 13 Batch 600 Loss 1.9299 Accuracy 0.6067
Epoch 13 Batch 650 Loss 1.9318 Accuracy 0.6064
Epoch 13 Batch 700 Loss 1.9335 Accuracy 0.6064
Epoch 13 Loss 1.9331 Accuracy 0.6064
Time taken for 1 epoch: 67.15 secs

Epoch 14 Batch 0 Loss 1.7548 Accuracy 0.6380
Epoch 14 Batch 50 Loss 1.8375 Accuracy 0.6191
Epoch 14 Batch 100 Loss 1.8170 Accuracy 0.6233
Epoch 14 Batch 150 Loss 1.8236 Accuracy 0.6217
Epoch 14 Batch 200 Loss 1.8284 Accuracy 0.6212
Epoch 14 Batch 250 Loss 1.8308 Accuracy 0.6212
Epoch 14 Batch 300 Loss 1.8337 Accuracy 0.6205
Epoch 14 Batch 350 Loss 1.8362 Accuracy 0.6200
Epoch 14 Batch 400 Loss 1.8382 Accuracy 0.6198
Epoch 14 Batch 450 Loss 1.8388 Accuracy 0.6196
Epoch 14 Batch 500 Loss 1.8414 Accuracy 0.6191
Epoch 14 Batch 550 Loss 1.8441 Accuracy 0.6187
Epoch 14 Batch 600 Loss 1.8457 Accuracy 0.6188
Epoch 14 Batch 650 Loss 1.8478 Accuracy 0.6187
Epoch 14 Loss 1.8506 Accuracy 0.6183
Time taken for 1 epoch: 66.62 secs

Epoch 15 Batch 0 Loss 1.7373 Accuracy 0.6456
Epoch 15 Batch 50 Loss 1.7530 Accuracy 0.6330
Epoch 15 Batch 100 Loss 1.7466 Accuracy 0.6344
Epoch 15 Batch 150 Loss 1.7509 Accuracy 0.6332
Epoch 15 Batch 200 Loss 1.7529 Accuracy 0.6328
Epoch 15 Batch 250 Loss 1.7583 Accuracy 0.6323
Epoch 15 Batch 300 Loss 1.7641 Accuracy 0.6308
Epoch 15 Batch 350 Loss 1.7671 Accuracy 0.6303
Epoch 15 Batch 400 Loss 1.7664 Accuracy 0.6306
Epoch 15 Batch 450 Loss 1.7691 Accuracy 0.6302
Epoch 15 Batch 500 Loss 1.7703 Accuracy 0.6302
Epoch 15 Batch 550 Loss 1.7746 Accuracy 0.6298
Epoch 15 Batch 600 Loss 1.7774 Accuracy 0.6295
Epoch 15 Batch 650 Loss 1.7793 Accuracy 0.6293
Epoch 15 Batch 700 Loss 1.7808 Accuracy 0.6291
Saving checkpoint for epoch 15 at ./checkpoints/train/ckpt-3
Epoch 15 Loss 1.7811 Accuracy 0.6292
Time taken for 1 epoch: 67.34 secs

Epoch 16 Batch 0 Loss 1.5369 Accuracy 0.6669
Epoch 16 Batch 50 Loss 1.6715 Accuracy 0.6442
Epoch 16 Batch 100 Loss 1.6734 Accuracy 0.6451
Epoch 16 Batch 150 Loss 1.6813 Accuracy 0.6438
Epoch 16 Batch 200 Loss 1.6826 Accuracy 0.6437
Epoch 16 Batch 250 Loss 1.6931 Accuracy 0.6423
Epoch 16 Batch 300 Loss 1.6965 Accuracy 0.6417
Epoch 16 Batch 350 Loss 1.7010 Accuracy 0.6410
Epoch 16 Batch 400 Loss 1.7050 Accuracy 0.6403
Epoch 16 Batch 450 Loss 1.7046 Accuracy 0.6403
Epoch 16 Batch 500 Loss 1.7066 Accuracy 0.6401
Epoch 16 Batch 550 Loss 1.7091 Accuracy 0.6398
Epoch 16 Batch 600 Loss 1.7121 Accuracy 0.6393
Epoch 16 Batch 650 Loss 1.7151 Accuracy 0.6390
Epoch 16 Batch 700 Loss 1.7206 Accuracy 0.6379
Epoch 16 Loss 1.7206 Accuracy 0.6379
Time taken for 1 epoch: 66.99 secs

Epoch 17 Batch 0 Loss 1.7076 Accuracy 0.6399
Epoch 17 Batch 50 Loss 1.6255 Accuracy 0.6508
Epoch 17 Batch 100 Loss 1.6220 Accuracy 0.6527
Epoch 17 Batch 150 Loss 1.6258 Accuracy 0.6522
Epoch 17 Batch 200 Loss 1.6369 Accuracy 0.6505
Epoch 17 Batch 250 Loss 1.6427 Accuracy 0.6493
Epoch 17 Batch 300 Loss 1.6496 Accuracy 0.6482
Epoch 17 Batch 350 Loss 1.6493 Accuracy 0.6486
Epoch 17 Batch 400 Loss 1.6478 Accuracy 0.6488
Epoch 17 Batch 450 Loss 1.6501 Accuracy 0.6484
Epoch 17 Batch 500 Loss 1.6537 Accuracy 0.6480
Epoch 17 Batch 550 Loss 1.6560 Accuracy 0.6478
Epoch 17 Batch 600 Loss 1.6595 Accuracy 0.6472
Epoch 17 Batch 650 Loss 1.6638 Accuracy 0.6466
Epoch 17 Batch 700 Loss 1.6667 Accuracy 0.6461
Epoch 17 Loss 1.6667 Accuracy 0.6462
Time taken for 1 epoch: 67.11 secs

Epoch 18 Batch 0 Loss 1.6208 Accuracy 0.6569
Epoch 18 Batch 50 Loss 1.5751 Accuracy 0.6604
Epoch 18 Batch 100 Loss 1.5714 Accuracy 0.6614
Epoch 18 Batch 150 Loss 1.5766 Accuracy 0.6611
Epoch 18 Batch 200 Loss 1.5857 Accuracy 0.6593
Epoch 18 Batch 250 Loss 1.5886 Accuracy 0.6586
Epoch 18 Batch 300 Loss 1.5960 Accuracy 0.6573
Epoch 18 Batch 350 Loss 1.5979 Accuracy 0.6569
Epoch 18 Batch 400 Loss 1.6028 Accuracy 0.6559
Epoch 18 Batch 450 Loss 1.6057 Accuracy 0.6555
Epoch 18 Batch 500 Loss 1.6088 Accuracy 0.6548
Epoch 18 Batch 550 Loss 1.6118 Accuracy 0.6544
Epoch 18 Batch 600 Loss 1.6131 Accuracy 0.6543
Epoch 18 Batch 650 Loss 1.6155 Accuracy 0.6540
Epoch 18 Batch 700 Loss 1.6175 Accuracy 0.6537
Epoch 18 Loss 1.6175 Accuracy 0.6538
Time taken for 1 epoch: 67.48 secs

Epoch 19 Batch 0 Loss 1.5765 Accuracy 0.6575
Epoch 19 Batch 50 Loss 1.5264 Accuracy 0.6671
Epoch 19 Batch 100 Loss 1.5278 Accuracy 0.6679
Epoch 19 Batch 150 Loss 1.5360 Accuracy 0.6665
Epoch 19 Batch 200 Loss 1.5443 Accuracy 0.6651
Epoch 19 Batch 250 Loss 1.5442 Accuracy 0.6653
Epoch 19 Batch 300 Loss 1.5472 Accuracy 0.6646
Epoch 19 Batch 350 Loss 1.5494 Accuracy 0.6643
Epoch 19 Batch 400 Loss 1.5511 Accuracy 0.6639
Epoch 19 Batch 450 Loss 1.5526 Accuracy 0.6637
Epoch 19 Batch 500 Loss 1.5572 Accuracy 0.6629
Epoch 19 Batch 550 Loss 1.5609 Accuracy 0.6624
Epoch 19 Batch 600 Loss 1.5640 Accuracy 0.6620
Epoch 19 Batch 650 Loss 1.5676 Accuracy 0.6615
Epoch 19 Loss 1.5711 Accuracy 0.6611
Time taken for 1 epoch: 67.01 secs

Epoch 20 Batch 0 Loss 1.5116 Accuracy 0.6690
Epoch 20 Batch 50 Loss 1.4730 Accuracy 0.6757
Epoch 20 Batch 100 Loss 1.4876 Accuracy 0.6728
Epoch 20 Batch 150 Loss 1.4843 Accuracy 0.6741
Epoch 20 Batch 200 Loss 1.4927 Accuracy 0.6725
Epoch 20 Batch 250 Loss 1.4979 Accuracy 0.6715
Epoch 20 Batch 300 Loss 1.5041 Accuracy 0.6709
Epoch 20 Batch 350 Loss 1.5100 Accuracy 0.6700
Epoch 20 Batch 400 Loss 1.5122 Accuracy 0.6696
Epoch 20 Batch 450 Loss 1.5167 Accuracy 0.6690
Epoch 20 Batch 500 Loss 1.5174 Accuracy 0.6690
Epoch 20 Batch 550 Loss 1.5214 Accuracy 0.6684
Epoch 20 Batch 600 Loss 1.5250 Accuracy 0.6678
Epoch 20 Batch 650 Loss 1.5300 Accuracy 0.6670
Saving checkpoint for epoch 20 at ./checkpoints/train/ckpt-4
Epoch 20 Loss 1.5332 Accuracy 0.6666
Time taken for 1 epoch: 66.73 secs

Run inference

The following steps are used for inference:

  • Encode the input sentence using the Portuguese tokenizer (tokenizers.pt). This is the encoder input.
  • The decoder input is initialized to the [START] token.
  • Calculate the padding masks and the look ahead masks.
  • The decoder then outputs the predictions by looking at the encoder output and its own output (self-attention).
  • Concatenate the predicted token to the decoder input and pass it to the decoder.
  • In this approach, the decoder predicts the next token based on the previous tokens it predicted.
class Translator(tf.Module):
  def __init__(self, tokenizers, transformer):
    self.tokenizers = tokenizers
    self.transformer = transformer

  def __call__(self, sentence, max_length=MAX_TOKENS):
    # input sentence is portuguese, hence adding the start and end token
    assert isinstance(sentence, tf.Tensor)
    if len(sentence.shape) == 0:
      sentence = sentence[tf.newaxis]

    sentence = self.tokenizers.pt.tokenize(sentence).to_tensor()

    encoder_input = sentence

    # As the output language is english, initialize the output with the
    # english start token.
    start_end = self.tokenizers.en.tokenize([''])[0]
    start = start_end[0][tf.newaxis]
    end = start_end[1][tf.newaxis]

    # `tf.TensorArray` is required here (instead of a python list) so that the
    # dynamic-loop can be traced by `tf.function`.
    output_array = tf.TensorArray(dtype=tf.int64, size=0, dynamic_size=True)
    output_array = output_array.write(0, start)

    for i in tf.range(max_length):
      output = tf.transpose(output_array.stack())
      predictions, _ = self.transformer([encoder_input, output], training=False)

      # select the last token from the seq_len dimension
      predictions = predictions[:, -1:, :]  # (batch_size, 1, vocab_size)

      predicted_id = tf.argmax(predictions, axis=-1)

      # concatenate the predicted_id to the output which is given to the decoder
      # as its input.
      output_array = output_array.write(i+1, predicted_id[0])

      if predicted_id == end:
        break

    output = tf.transpose(output_array.stack())
    # output.shape (1, tokens)
    text = tokenizers.en.detokenize(output)[0]  # shape: ()

    tokens = tokenizers.en.lookup(output)[0]

    # `tf.function` prevents us from using the attention_weights that were
    # calculated on the last iteration of the loop. So recalculate them outside
    # the loop.
    _, attention_weights = self.transformer([encoder_input, output[:,:-1]], training=False)

    return text, tokens, attention_weights

Create an instance of this Translator class, and try it out a few times:

translator = Translator(tokenizers, transformer)
def print_translation(sentence, tokens, ground_truth):
  print(f'{"Input:":15s}: {sentence}')
  print(f'{"Prediction":15s}: {tokens.numpy().decode("utf-8")}')
  print(f'{"Ground truth":15s}: {ground_truth}')
sentence = 'este é um problema que temos que resolver.'
ground_truth = 'this is a problem we have to solve .'

translated_text, translated_tokens, attention_weights = translator(
    tf.constant(sentence))
print_translation(sentence, translated_text, ground_truth)
Input:         : este é um problema que temos que resolver.
Prediction     : this is a problem that we need to solve .
Ground truth   : this is a problem we have to solve .
sentence = 'os meus vizinhos ouviram sobre esta ideia.'
ground_truth = 'and my neighboring homes heard about this idea .'

translated_text, translated_tokens, attention_weights = translator(
    tf.constant(sentence))
print_translation(sentence, translated_text, ground_truth)
Input:         : os meus vizinhos ouviram sobre esta ideia.
Prediction     : my neighbors heard about this idea .
Ground truth   : and my neighboring homes heard about this idea .
sentence = 'vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.'
ground_truth = "so i'll just share with you some stories very quickly of some magical things that have happened."

translated_text, translated_tokens, attention_weights = translator(
    tf.constant(sentence))
print_translation(sentence, translated_text, ground_truth)
Input:         : vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.
Prediction     : so i ' m going to take you very quickly to share with you some stories of some magic things that happen .
Ground truth   : so i'll just share with you some stories very quickly of some magical things that have happened.

Attention plots

The Translator class returns a dictionary of attention maps that you can use to visualize the internal workings of the model:

sentence = 'este é o primeiro livro que eu fiz.'
ground_truth = "this is the first book i've ever done."

translated_text, translated_tokens, attention_weights = translator(
    tf.constant(sentence))
print_translation(sentence, translated_text, ground_truth)
Input:         : este é o primeiro livro que eu fiz.
Prediction     : this is the first book i made .
Ground truth   : this is the first book i've ever done.
def plot_attention_head(in_tokens, translated_tokens, attention):
  # The plot shows the attention weights used when each output token was generated.
  # The model didn't generate `[START]` in the output. Skip it.
  translated_tokens = translated_tokens[1:]

  ax = plt.gca()
  ax.matshow(attention)
  ax.set_xticks(range(len(in_tokens)))
  ax.set_yticks(range(len(translated_tokens)))

  labels = [label.decode('utf-8') for label in in_tokens.numpy()]
  ax.set_xticklabels(
      labels, rotation=90)

  labels = [label.decode('utf-8') for label in translated_tokens.numpy()]
  ax.set_yticklabels(labels)
head = 0
# shape: (batch=1, num_heads, seq_len_q, seq_len_k)
attention_heads = tf.squeeze(
  attention_weights['decoder_layer4_block2'], 0)
attention = attention_heads[head]
attention.shape
TensorShape([9, 11])
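
To see every attention map the model returned, you can loop over the dictionary. This is a quick inspection sketch; the exact key names and shapes depend on the layer and head settings used earlier in this tutorial.

# Print each attention map's name and shape: (batch, num_heads, seq_len_q, seq_len_k)
for name, weights in attention_weights.items():
  print(name, weights.shape)  # for example: decoder_layer4_block2 (1, 8, 9, 11)
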
in_tokens = tf.convert_to_tensor([sentence])
in_tokens = tokenizers.pt.tokenize(in_tokens).to_tensor()
in_tokens = tokenizers.pt.lookup(in_tokens)[0]
in_tokens
<tf.Tensor: shape=(11,), dtype=string, numpy=
array([b'[START]', b'este', b'e', b'o', b'primeiro', b'livro', b'que',
       b'eu', b'fiz', b'.', b'[END]'], dtype=object)>
translated_tokens
<tf.Tensor: shape=(10,), dtype=string, numpy=
array([b'[START]', b'this', b'is', b'the', b'first', b'book', b'i',
       b'made', b'.', b'[END]'], dtype=object)>
plot_attention_head(in_tokens, translated_tokens, attention)

png

def plot_attention_weights(sentence, translated_tokens, attention_heads):
  in_tokens = tf.convert_to_tensor([sentence])
  in_tokens = tokenizers.pt.tokenize(in_tokens).to_tensor()
  in_tokens = tokenizers.pt.lookup(in_tokens)[0]

  fig = plt.figure(figsize=(16, 8))

  for h, head in enumerate(attention_heads):
    ax = fig.add_subplot(2, 4, h+1)

    plot_attention_head(in_tokens, translated_tokens, head)

    ax.set_xlabel(f'Head {h+1}')

  plt.tight_layout()
  plt.show()
plot_attention_weights(sentence, translated_tokens,
                       attention_weights['decoder_layer4_block2'][0])

png

The model does okay on unfamiliar words. Neither "triceratops" nor "encyclopedia" appears in the input dataset, and the model almost learns to transliterate them, even without a shared vocabulary:

sentence = 'Eu li sobre triceratops na enciclopédia.'
ground_truth = 'I read about triceratops in the encyclopedia.'

translated_text, translated_tokens, attention_weights = translator(
    tf.constant(sentence))
print_translation(sentence, translated_text, ground_truth)

plot_attention_weights(sentence, translated_tokens,
                       attention_weights['decoder_layer4_block2'][0])
Input:         : Eu li sobre triceratops na enciclopédia.
Prediction     : i read about thompathes in navizlo .
Ground truth   : I read about triceratops in the encyclopedia.

png
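
One way to see why this still works is to check how the Portuguese tokenizer splits the unseen words into subword pieces. This sketch reuses the tokenize/lookup pattern from the attention-plot code above, along with the `sentence` variable defined just before it.

# Tokenize the sentence and map the token ids back to subword strings
in_tokens = tokenizers.pt.tokenize(tf.constant([sentence])).to_tensor()
print(tokenizers.pt.lookup(in_tokens)[0])
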

Export

That inference model is working, so next you'll export it as a tf.saved_model.

To do that, wrap it in yet another tf.Module sub-class, this time with a tf.function on the __call__ method:

class ExportTranslator(tf.Module):
  def __init__(self, translator):
    self.translator = translator

  @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.string)])
  def __call__(self, sentence):
    (result,
     tokens,
     attention_weights) = self.translator(sentence, max_length=MAX_TOKENS)

    return result

In the above tf.function, only the output sentence is returned. Thanks to the non-strict execution in tf.function, any unnecessary values (such as the attention weights) are never computed.
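
The following toy example (not part of the translator) illustrates the idea: inside a tf.function, stateless ops whose results don't feed into the returned value can be pruned from the optimized graph.

@tf.function
def pruned_example(x):
  unused = tf.linalg.matmul(x, x, transpose_b=True)  # never returned, so TensorFlow can prune this op
  return x + 1.0  # only this path is needed for the output

pruned_example(tf.ones([2, 2]))
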

translator = ExportTranslator(translator)

Since the model decodes the predictions using tf.argmax, the predictions are deterministic. The original model and one reloaded from its SavedModel should give identical predictions:

translator('este é o primeiro livro que eu fiz.').numpy()
b'this is the first book i made .'
tf.saved_model.save(translator, export_dir='translator')
2022-05-31 11:58:56.232462: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Found untraced functions such as embedding_4_layer_call_fn, embedding_4_layer_call_and_return_conditional_losses, dropout_37_layer_call_fn, dropout_37_layer_call_and_return_conditional_losses, embedding_5_layer_call_fn while saving (showing 5 of 224). These functions will not be directly callable after loading.
reloaded = tf.saved_model.load('translator')
reloaded('este é o primeiro livro que eu fiz.').numpy()
b'this is the first book i made .'
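
As a quick check of that claim, you can compare the two directly. This is a small sketch that reuses the translator and reloaded objects from above.

# Both models should decode the same sentence to exactly the same string
sentence = 'este é o primeiro livro que eu fiz.'
assert translator(sentence).numpy() == reloaded(sentence).numpy()
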

Summary

In this tutorial you learned about:

  • positional encoding
  • multi-head attention
  • the importance of masking
  • and how to put it all together to build a transformer.

This implementation tried to stay close to the one in the original paper. If you want to practice, there are many things you could try with it. For example:

  • Use a different dataset to train the transformer.
  • Create the "base" or "big" Transformer configurations from the original paper by changing the hyperparameters (a sketch of the "base" settings follows this list).
  • Use the layers defined here to create an implementation of BERT.
  • Implement beam search to get better predictions.
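
For the hyperparameter suggestion above, here is a minimal sketch of the "base" configuration from the original paper. It assumes the Transformer constructor and the tokenizers object defined earlier in this notebook; adjust the argument names if your definition differs.

# "base" configuration from "Attention Is All You Need"
num_layers = 6
d_model = 512
dff = 2048
num_heads = 8
dropout_rate = 0.1

transformer_base = Transformer(
    num_layers=num_layers,
    d_model=d_model,
    num_heads=num_heads,
    dff=dff,
    input_vocab_size=tokenizers.pt.get_vocab_size().numpy(),
    target_vocab_size=tokenizers.en.get_vocab_size().numpy(),
    pe_input=1000,
    pe_target=1000,
    rate=dropout_rate)
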