Transformer model for language understanding

This tutorial trains a Transformer model to translate Portuguese to English. This is an advanced example that assumes knowledge of text generation and attention.

The core idea behind the Transformer model is self-attention—the ability to attend to different positions of the input sequence to compute a representation of that sequence. Transformer creates stacks of self-attention layers and is explained below in the sections Scaled dot product attention and Multi-head attention.

A transformer model handles variable-sized input using stacks of self-attention layers instead of RNNs or CNNs. This general architecture has a number of advantages:

  • It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, StarCraft units).
  • Layer outputs can be calculated in parallel, instead of a series like an RNN.
  • Distant items can affect each other's output without passing through many RNN-steps, or convolution layers (see Scene Memory Transformer for example).
  • It can learn long-range dependencies. This is a challenge in many sequence tasks.

The downsides of this architecture are:

  • For a time-series, the output for a time-step is calculated from the entire history instead of only the inputs and current hidden-state. This may be less efficient.
  • If the input does have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words (see the sketch below).
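
As a quick sketch of that last point (not part of the original tutorial), self-attention without positional information is permutation-equivariant: shuffling the input tokens merely shuffles the output rows, so token order carries no signal.

import tensorflow as tf

x = tf.random.normal((1, 5, 8))           # (batch, seq_len, features)
perm = tf.constant([3, 1, 4, 0, 2])
x_perm = tf.gather(x, perm, axis=1)       # the same "sentence", reordered

def toy_self_attention(x):
  logits = tf.matmul(x, x, transpose_b=True)
  return tf.matmul(tf.nn.softmax(logits, axis=-1), x)

# Permuting the inputs just permutes the outputs; the difference is ~0.
diff = tf.gather(toy_self_attention(x), perm, axis=1) - toy_self_attention(x_perm)
print(tf.reduce_max(tf.abs(diff)).numpy())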

After training the model in this notebook, you will be able to input a Portuguese sentence and receive the English translation.

Attention heatmap

Setup

pip install -q tensorflow_datasets
pip install -q -U tensorflow-text
import collections
import logging
import os
import pathlib
import re
import string
import sys
import time

import numpy as np
import matplotlib.pyplot as plt

import tensorflow_datasets as tfds
import tensorflow_text as text
import tensorflow as tf
logging.getLogger('tensorflow').setLevel(logging.ERROR)  # suppress warnings

Download the Dataset

Use TensorFlow datasets to load the Portuguese-English translation dataset from the TED Talks Open Translation Project.

This dataset contains approximately 50,000 training examples, 1,100 validation examples, and 2,000 test examples.

examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
                               as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']

The tf.data.Dataset object returned by TensorFlow datasets yields pairs of text examples:

for pt_examples, en_examples in train_examples.batch(3).take(1):
  for pt in pt_examples.numpy():
    print(pt.decode('utf-8'))

  print()

  for en in en_examples.numpy():
    print(en.decode('utf-8'))
e quando melhoramos a procura , tiramos a única vantagem da impressão , que é a serendipidade .
mas e se estes fatores fossem ativos ?
mas eles não tinham a curiosidade de me testar .

and when you improve searchability , you actually take away the one advantage of print , which is serendipity .
but what if it were active ?
but they did n't test for curiosity .

Text tokenization & detokenization

You can't train a model directly on text. The text needs to be converted to some numeric representation first. Typically, you convert the text to sequences of token IDs, which are used as indices into an embedding.
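
As a small illustration (with a made-up vocabulary size and embedding width, not values from this tutorial), the token IDs simply select rows of an embedding matrix:

embedding = tf.keras.layers.Embedding(input_dim=8, output_dim=4)  # toy vocab of 8 tokens
token_ids = tf.constant([[2, 5, 1]])  # a "sentence" of three token IDs
print(embedding(token_ids).shape)     # (1, 3, 4): one 4-dimensional vector per token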

One popular implementation is demonstrated in the Subword tokenizer tutorial, which builds subword tokenizers (text.BertTokenizer) optimized for this dataset and exports them in a saved_model.

Download, unzip, and import the saved_model:

model_name = "ted_hrlr_translate_pt_en_converter"
tf.keras.utils.get_file(
    f"{model_name}.zip",
    f"https://storage.googleapis.com/download.tensorflow.org/models/{model_name}.zip",
    cache_dir='.', cache_subdir='', extract=True
)
Downloading data from https://storage.googleapis.com/download.tensorflow.org/models/ted_hrlr_translate_pt_en_converter.zip
188416/184801 [==============================] - 0s 0us/step
'./ted_hrlr_translate_pt_en_converter.zip'
tokenizers = tf.saved_model.load(model_name)

The tf.saved_model contains two text tokenizers, one for English and one for Portuguese. Both have the same methods:

[item for item in dir(tokenizers.en) if not item.startswith('_')]
['detokenize',
 'get_reserved_tokens',
 'get_vocab_path',
 'get_vocab_size',
 'lookup',
 'tokenize',
 'tokenizer',
 'vocab']

The tokenize method converts a batch of strings to a padded-batch of token IDs. This method splits punctuation, lowercases and unicode-normalizes the input before tokenizing. That standardization is not visible here because the input data is already standardized.
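
As a quick check with a made-up, unstandardized string (not from the dataset), you can see the lowercasing and punctuation splitting in a round trip:

round_trip = tokenizers.en.detokenize(tokenizers.en.tokenize(['Hello, World!']))
print(round_trip.numpy())  # expect lowercased text with the punctuation split out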

for en in en_examples.numpy():
  print(en.decode('utf-8'))
and when you improve searchability , you actually take away the one advantage of print , which is serendipity .
but what if it were active ?
but they did n't test for curiosity .
encoded = tokenizers.en.tokenize(en_examples)

for row in encoded.to_list():
  print(row)
[2, 72, 117, 79, 1259, 1491, 2362, 13, 79, 150, 184, 311, 71, 103, 2308, 74, 2679, 13, 148, 80, 55, 4840, 1434, 2423, 540, 15, 3]
[2, 87, 90, 107, 76, 129, 1852, 30, 3]
[2, 87, 83, 149, 50, 9, 56, 664, 85, 2512, 15, 3]

The detokenize method attempts to convert these token IDs back to human-readable text:

round_trip = tokenizers.en.detokenize(encoded)
for line in round_trip.numpy():
  print(line.decode('utf-8'))
and when you improve searchability , you actually take away the one advantage of print , which is serendipity .
but what if it were active ?
but they did n ' t test for curiosity .

The lower-level lookup method converts from token IDs to token text:

tokens = tokenizers.en.lookup(encoded)
tokens
<tf.RaggedTensor [[b'[START]', b'and', b'when', b'you', b'improve', b'search', b'##ability', b',', b'you', b'actually', b'take', b'away', b'the', b'one', b'advantage', b'of', b'print', b',', b'which', b'is', b's', b'##ere', b'##nd', b'##ip', b'##ity', b'.', b'[END]'], [b'[START]', b'but', b'what', b'if', b'it', b'were', b'active', b'?', b'[END]'], [b'[START]', b'but', b'they', b'did', b'n', b"'", b't', b'test', b'for', b'curiosity', b'.', b'[END]']]>

Here you can see the "subword" aspect of the tokenizers. The word "searchability" is decomposed into "search ##ability", and the word "serendipity" into "s ##ere ##nd ##ip ##ity".

Setup input pipeline

To build an input pipeline suitable for training you'll apply some transformations to the dataset.

This function will be used to encode the batches of raw text:

def tokenize_pairs(pt, en):
    pt = tokenizers.pt.tokenize(pt)
    # Convert from ragged to dense, padding with zeros.
    pt = pt.to_tensor()

    en = tokenizers.en.tokenize(en)
    # Convert from ragged to dense, padding with zeros.
    en = en.to_tensor()
    return pt, en

Here's a simple input pipeline that processes, shuffles and batches the data:

BUFFER_SIZE = 20000
BATCH_SIZE = 64
def make_batches(ds):
  return (
      ds
      .cache()
      .shuffle(BUFFER_SIZE)
      .batch(BATCH_SIZE)
      .map(tokenize_pairs, num_parallel_calls=tf.data.AUTOTUNE)
      .prefetch(tf.data.AUTOTUNE))


train_batches = make_batches(train_examples)
val_batches = make_batches(val_examples)

Positional encoding

Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence.

The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the similarity of their meaning and their position in the sentence, in the d-dimensional space.

The formula for calculating the positional encoding is as follows:

$$\Large{PE_{(pos, 2i)} = \sin(pos / 10000^{2i / d_{model} })} $$
$$\Large{PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i / d_{model} })} $$
def get_angles(pos, i, d_model):
  angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
  return pos * angle_rates
def positional_encoding(position, d_model):
  angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                          np.arange(d_model)[np.newaxis, :],
                          d_model)

  # apply sin to even indices in the array; 2i
  angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])

  # apply cos to odd indices in the array; 2i+1
  angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])

  pos_encoding = angle_rads[np.newaxis, ...]

  return tf.cast(pos_encoding, dtype=tf.float32)
n, d = 2048, 512
pos_encoding = positional_encoding(n, d)
print(pos_encoding.shape)
pos_encoding = pos_encoding[0]

# Juggle the dimensions for the plot
pos_encoding = tf.reshape(pos_encoding, (n, d//2, 2))
pos_encoding = tf.transpose(pos_encoding, (2, 1, 0))
pos_encoding = tf.reshape(pos_encoding, (d, n))

plt.pcolormesh(pos_encoding, cmap='RdBu')
plt.ylabel('Depth')
plt.xlabel('Position')
plt.colorbar()
plt.show()
(1, 2048, 512)
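
As a rough check of the claim that the encoding reflects position (a sketch, not part of the original tutorial), the dot product between one position's encoding and its neighbors shrinks as the distance grows:

pe = positional_encoding(2048, 512)[0]  # recompute; `pos_encoding` was reshaped above
ref = pe[100]
for pos in (101, 110, 1000):
  print(pos, float(tf.reduce_sum(ref * pe[pos])))  # similarity drops with distance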

Masking

Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value 0 is present: it outputs a 1 at those locations, and a 0 otherwise.

def create_padding_mask(seq):
  seq = tf.cast(tf.math.equal(seq, 0), tf.float32)

  # add extra dimensions to add the padding
  # to the attention logits.
  return seq[:, tf.newaxis, tf.newaxis, :]  # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
<tf.Tensor: shape=(3, 1, 1, 5), dtype=float32, numpy=
array([[[[0., 0., 1., 1., 0.]]],


       [[[0., 0., 0., 1., 1.]]],


       [[[1., 1., 1., 0., 0.]]]], dtype=float32)>

The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.

This means that to predict the third word, only the first and second words will be used. Similarly, to predict the fourth word, only the first, second and third words will be used, and so on.

def create_look_ahead_mask(size):
  mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
  return mask  # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[0., 1., 1.],
       [0., 0., 1.],
       [0., 0., 0.]], dtype=float32)>

Scaled dot product attention

scaled_dot_product_attention

The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:

$$\Large{Attention(Q, K, V) = softmax_k\left(\frac{QK^T}{\sqrt{d_k} }\right) V} $$

The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into regions where it has small gradients, resulting in a very hard softmax.

For example, consider that Q and K have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of dk. So the square root of dk is used for scaling, giving a consistent variance regardless of the value of dk. If the variance is too low the output may be too flat to optimize effectively. If the variance is too high the softmax may saturate at initialization, making it difficult to learn.
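
A quick numerical check of this (a sketch, not part of the original tutorial): dot products of unit-variance random vectors have variance close to the depth, and dividing by the square root of the depth restores it to roughly 1.

for depth in (16, 64, 256):
  q = np.random.normal(size=(10000, depth))
  k = np.random.normal(size=(10000, depth))
  dots = np.sum(q * k, axis=-1)  # 10000 sample dot products
  print(depth, np.var(dots).round(1), np.var(dots / np.sqrt(depth)).round(2))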

The mask is multiplied with -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K and is applied immediately before a softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output.
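
A small demonstration (not from the original tutorial) that a -1e9 logit yields an effectively zero probability:

logits = tf.constant([2.0, 1.0, 0.5])
mask = tf.constant([0.0, 1.0, 0.0])  # mask out the second position
print(tf.nn.softmax(logits + mask * -1e9).numpy())  # the second entry is ~0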

def scaled_dot_product_attention(q, k, v, mask):
  """Calculate the attention weights.
  q, k, v must have matching leading dimensions.
  k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
  The mask has different shapes depending on its type (padding or look ahead)
  but it must be broadcastable for addition.

  Args:
    q: query shape == (..., seq_len_q, depth)
    k: key shape == (..., seq_len_k, depth)
    v: value shape == (..., seq_len_v, depth_v)
    mask: Float tensor with shape broadcastable
          to (..., seq_len_q, seq_len_k). Defaults to None.

  Returns:
    output, attention_weights
  """

  matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)

  # scale matmul_qk
  dk = tf.cast(tf.shape(k)[-1], tf.float32)
  scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

  # add the mask to the scaled tensor.
  if mask is not None:
    scaled_attention_logits += (mask * -1e9)

  # softmax is normalized on the last axis (seq_len_k) so that the scores
  # add up to 1.
  attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)

  output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)

  return output, attention_weights

As the softmax normalization is done along the K (key) axis, the attention weights decide how much importance each key, and therefore its value, receives for a given query.

The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words you want to focus on are kept as-is and the irrelevant words are flushed out.

def print_out(q, k, v):
  temp_out, temp_attn = scaled_dot_product_attention(
      q, k, v, None)
  print('Attention weights are:')
  print(temp_attn)
  print('Output is:')
  print(temp_out)
np.set_printoptions(suppress=True)

temp_k = tf.constant([[10, 0, 0],
                      [0, 10, 0],
                      [0, 0, 10],
                      [0, 0, 10]], dtype=tf.float32)  # (4, 3)

temp_v = tf.constant([[1, 0],
                      [10, 0],
                      [100, 5],
                      [1000, 6]], dtype=tf.float32)  # (4, 2)

# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0. 1. 0. 0.]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[10.  0.]], shape=(1, 2), dtype=float32)
# This query aligns with a repeated key (third and fourth),
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.  0.  0.5 0.5]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[550.    5.5]], shape=(1, 2), dtype=float32)
# This query aligns equally with the first and second key,
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.5 0.5 0.  0. ]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[5.5 0. ]], shape=(1, 2), dtype=float32)

Pass all the queries together.

temp_q = tf.constant([[0, 0, 10],
                      [0, 10, 0],
                      [10, 10, 0]], dtype=tf.float32)  # (3, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor(
[[0.  0.  0.5 0.5]
 [0.  1.  0.  0. ]
 [0.5 0.5 0.  0. ]], shape=(3, 4), dtype=float32)
Output is:
tf.Tensor(
[[550.    5.5]
 [ 10.    0. ]
 [  5.5   0. ]], shape=(3, 2), dtype=float32)

Multi-head attention

multi-head attention

Multi-head attention consists of four parts:

  • Linear layers and split into heads.
  • Scaled dot-product attention.
  • Concatenation of heads.
  • Final linear layer.

Each multi-head attention block gets three inputs: Q (query), K (key), V (value). These are put through linear (Dense) layers and split up into multiple heads.

The scaled_dot_product_attention defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using tf.transpose, and tf.reshape) and put through a final Dense layer.

Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information at different positions from different representational spaces. After the split each head has a reduced dimensionality, so the total computation cost is the same as a single head attention with full dimensionality.

class MultiHeadAttention(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads):
    super(MultiHeadAttention, self).__init__()
    self.num_heads = num_heads
    self.d_model = d_model

    assert d_model % self.num_heads == 0

    self.depth = d_model // self.num_heads

    self.wq = tf.keras.layers.Dense(d_model)
    self.wk = tf.keras.layers.Dense(d_model)
    self.wv = tf.keras.layers.Dense(d_model)

    self.dense = tf.keras.layers.Dense(d_model)

  def split_heads(self, x, batch_size):
    """Split the last dimension into (num_heads, depth).
    Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
    """
    x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
    return tf.transpose(x, perm=[0, 2, 1, 3])

  def call(self, v, k, q, mask):
    batch_size = tf.shape(q)[0]

    q = self.wq(q)  # (batch_size, seq_len, d_model)
    k = self.wk(k)  # (batch_size, seq_len, d_model)
    v = self.wv(v)  # (batch_size, seq_len, d_model)

    q = self.split_heads(q, batch_size)  # (batch_size, num_heads, seq_len_q, depth)
    k = self.split_heads(k, batch_size)  # (batch_size, num_heads, seq_len_k, depth)
    v = self.split_heads(v, batch_size)  # (batch_size, num_heads, seq_len_v, depth)

    # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
    # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
    scaled_attention, attention_weights = scaled_dot_product_attention(
        q, k, v, mask)

    scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])  # (batch_size, seq_len_q, num_heads, depth)

    concat_attention = tf.reshape(scaled_attention,
                                  (batch_size, -1, self.d_model))  # (batch_size, seq_len_q, d_model)

    output = self.dense(concat_attention)  # (batch_size, seq_len_q, d_model)

    return output, attention_weights

Create a MultiHeadAttention layer to try out. At each location in the sequence, y, the MultiHeadAttention runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.

temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512))  # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
(TensorShape([1, 60, 512]), TensorShape([1, 8, 60, 60]))

Point wise feed forward network

The point wise feed forward network consists of two fully-connected layers with a ReLU activation in between.

def point_wise_feed_forward_network(d_model, dff):
  return tf.keras.Sequential([
      tf.keras.layers.Dense(dff, activation='relu'),  # (batch_size, seq_len, dff)
      tf.keras.layers.Dense(d_model)  # (batch_size, seq_len, d_model)
  ])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
TensorShape([64, 50, 512])

Encoder and decoder

transformer

The transformer model follows the same general pattern as a standard sequence-to-sequence model with attention.

  • The input sentence is passed through N encoder layers that generate an output for each word/token in the sequence.
  • The decoder attends on the encoder's output and its own input (self-attention) to predict the next word.

Encoder layer

Each encoder layer consists of sublayers:

  1. Multi-head attention (with padding mask)
  2. Point wise feed forward networks.

Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.

The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis. There are N encoder layers in the transformer.

class EncoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(EncoderLayer, self).__init__()

    self.mha = MultiHeadAttention(d_model, num_heads)
    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)

    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)

  def call(self, x, training, mask):

    attn_output, _ = self.mha(x, x, x, mask)  # (batch_size, input_seq_len, d_model)
    attn_output = self.dropout1(attn_output, training=training)
    out1 = self.layernorm1(x + attn_output)  # (batch_size, input_seq_len, d_model)

    ffn_output = self.ffn(out1)  # (batch_size, input_seq_len, d_model)
    ffn_output = self.dropout2(ffn_output, training=training)
    out2 = self.layernorm2(out1 + ffn_output)  # (batch_size, input_seq_len, d_model)

    return out2
sample_encoder_layer = EncoderLayer(512, 8, 2048)

sample_encoder_layer_output = sample_encoder_layer(
    tf.random.uniform((64, 43, 512)), False, None)

sample_encoder_layer_output.shape  # (batch_size, input_seq_len, d_model)
TensorShape([64, 43, 512])

Decoder layer

Each decoder layer consists of sublayers:

  1. Masked multi-head attention (with look ahead mask and padding mask)
  2. Multi-head attention (with padding mask). V (value) and K (key) receive the encoder output as inputs. Q (query) receives the output from the masked multi-head attention sublayer.
  3. Point wise feed forward networks

Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis.

There are N decoder layers in the transformer.

As Q receives the output from the decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.

class DecoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(DecoderLayer, self).__init__()

    self.mha1 = MultiHeadAttention(d_model, num_heads)
    self.mha2 = MultiHeadAttention(d_model, num_heads)

    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)

    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    self.dropout3 = tf.keras.layers.Dropout(rate)

  def call(self, x, enc_output, training,
           look_ahead_mask, padding_mask):
    # enc_output.shape == (batch_size, input_seq_len, d_model)

    attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)  # (batch_size, target_seq_len, d_model)
    attn1 = self.dropout1(attn1, training=training)
    out1 = self.layernorm1(attn1 + x)

    attn2, attn_weights_block2 = self.mha2(
        enc_output, enc_output, out1, padding_mask)  # (batch_size, target_seq_len, d_model)
    attn2 = self.dropout2(attn2, training=training)
    out2 = self.layernorm2(attn2 + out1)  # (batch_size, target_seq_len, d_model)

    ffn_output = self.ffn(out2)  # (batch_size, target_seq_len, d_model)
    ffn_output = self.dropout3(ffn_output, training=training)
    out3 = self.layernorm3(ffn_output + out2)  # (batch_size, target_seq_len, d_model)

    return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(512, 8, 2048)

sample_decoder_layer_output, _, _ = sample_decoder_layer(
    tf.random.uniform((64, 50, 512)), sample_encoder_layer_output,
    False, None, None)

sample_decoder_layer_output.shape  # (batch_size, target_seq_len, d_model)
TensorShape([64, 50, 512])

Encoder

The Encoder consists of:

  1. Input Embedding
  2. Positional Encoding
  3. N encoder layers

The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.

class Encoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
               maximum_position_encoding, rate=0.1):
    super(Encoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers

    self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
    self.pos_encoding = positional_encoding(maximum_position_encoding,
                                            self.d_model)

    self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate)
                       for _ in range(num_layers)]

    self.dropout = tf.keras.layers.Dropout(rate)

  def call(self, x, training, mask):

    seq_len = tf.shape(x)[1]

    # adding embedding and position encoding.
    x = self.embedding(x)  # (batch_size, input_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x = self.enc_layers[i](x, training, mask)

    return x  # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8,
                         dff=2048, input_vocab_size=8500,
                         maximum_position_encoding=10000)
temp_input = tf.random.uniform((64, 62), dtype=tf.int64, minval=0, maxval=200)

sample_encoder_output = sample_encoder(temp_input, training=False, mask=None)

print(sample_encoder_output.shape)  # (batch_size, input_seq_len, d_model)
(64, 62, 512)

Decoder

The Decoder consists of:

  1. Output Embedding
  2. Positional Encoding
  3. N decoder layers

The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.

class Decoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
               maximum_position_encoding, rate=0.1):
    super(Decoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers

    self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
    self.pos_encoding = positional_encoding(maximum_position_encoding, d_model)

    self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate)
                       for _ in range(num_layers)]
    self.dropout = tf.keras.layers.Dropout(rate)

  def call(self, x, enc_output, training,
           look_ahead_mask, padding_mask):

    seq_len = tf.shape(x)[1]
    attention_weights = {}

    x = self.embedding(x)  # (batch_size, target_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x, block1, block2 = self.dec_layers[i](x, enc_output, training,
                                             look_ahead_mask, padding_mask)

      attention_weights[f'decoder_layer{i+1}_block1'] = block1
      attention_weights[f'decoder_layer{i+1}_block2'] = block2

    # x.shape == (batch_size, target_seq_len, d_model)
    return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8,
                         dff=2048, target_vocab_size=8000,
                         maximum_position_encoding=5000)
temp_input = tf.random.uniform((64, 26), dtype=tf.int64, minval=0, maxval=200)

output, attn = sample_decoder(temp_input,
                              enc_output=sample_encoder_output,
                              training=False,
                              look_ahead_mask=None,
                              padding_mask=None)

output.shape, attn['decoder_layer2_block2'].shape
(TensorShape([64, 26, 512]), TensorShape([64, 8, 26, 62]))

Create the Transformer

The transformer consists of the encoder, the decoder, and a final linear layer. The output of the decoder is the input to the linear layer and its output is returned.

class Transformer(tf.keras.Model):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
               target_vocab_size, pe_input, pe_target, rate=0.1):
    super(Transformer, self).__init__()

    self.encoder = Encoder(num_layers, d_model, num_heads, dff,
                           input_vocab_size, pe_input, rate)

    self.decoder = Decoder(num_layers, d_model, num_heads, dff,
                           target_vocab_size, pe_target, rate)

    self.final_layer = tf.keras.layers.Dense(target_vocab_size)

  def call(self, inp, tar, training, enc_padding_mask,
           look_ahead_mask, dec_padding_mask):

    enc_output = self.encoder(inp, training, enc_padding_mask)  # (batch_size, inp_seq_len, d_model)

    # dec_output.shape == (batch_size, tar_seq_len, d_model)
    dec_output, attention_weights = self.decoder(
        tar, enc_output, training, look_ahead_mask, dec_padding_mask)

    final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)

    return final_output, attention_weights
sample_transformer = Transformer(
    num_layers=2, d_model=512, num_heads=8, dff=2048,
    input_vocab_size=8500, target_vocab_size=8000,
    pe_input=10000, pe_target=6000)

temp_input = tf.random.uniform((64, 38), dtype=tf.int64, minval=0, maxval=200)
temp_target = tf.random.uniform((64, 36), dtype=tf.int64, minval=0, maxval=200)

fn_out, _ = sample_transformer(temp_input, temp_target, training=False,
                               enc_padding_mask=None,
                               look_ahead_mask=None,
                               dec_padding_mask=None)

fn_out.shape  # (batch_size, tar_seq_len, target_vocab_size)
TensorShape([64, 36, 8000])

Set hyperparameters

To keep this example small and relatively fast, the values for num_layers, d_model, and dff have been reduced.

The values used in the base model of the transformer were: num_layers=6, d_model=512, dff=2048. See the paper for all the other versions of the transformer.

num_layers = 4
d_model = 128
dff = 512
num_heads = 8
dropout_rate = 0.1

Optimizer

Use the Adam optimizer with a custom learning rate scheduler according to the formula in the paper.

$$\Large{lrate = d_{model}^{-0.5} * \min(step{\_}num^{-0.5}, step{\_}num \cdot warmup{\_}steps^{-1.5})}$$
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
  def __init__(self, d_model, warmup_steps=4000):
    super(CustomSchedule, self).__init__()

    self.d_model = d_model
    self.d_model = tf.cast(self.d_model, tf.float32)

    self.warmup_steps = warmup_steps

  def __call__(self, step):
    arg1 = tf.math.rsqrt(step)
    arg2 = step * (self.warmup_steps ** -1.5)

    return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)

optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
                                     epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)

plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
Text(0.5, 0, 'Train Step')

Loss and metrics

Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
def loss_function(real, pred):
  mask = tf.math.logical_not(tf.math.equal(real, 0))
  loss_ = loss_object(real, pred)

  mask = tf.cast(mask, dtype=loss_.dtype)
  loss_ *= mask

  return tf.reduce_sum(loss_)/tf.reduce_sum(mask)


def accuracy_function(real, pred):
  accuracies = tf.equal(real, tf.argmax(pred, axis=2))

  mask = tf.math.logical_not(tf.math.equal(real, 0))
  accuracies = tf.math.logical_and(mask, accuracies)

  accuracies = tf.cast(accuracies, dtype=tf.float32)
  mask = tf.cast(mask, dtype=tf.float32)
  return tf.reduce_sum(accuracies)/tf.reduce_sum(mask)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.Mean(name='train_accuracy')

Training and checkpointing

transformer = Transformer(
    num_layers=num_layers,
    d_model=d_model,
    num_heads=num_heads,
    dff=dff,
    input_vocab_size=tokenizers.pt.get_vocab_size(),
    target_vocab_size=tokenizers.en.get_vocab_size(),
    pe_input=1000,
    pe_target=1000,
    rate=dropout_rate)
def create_masks(inp, tar):
  # Encoder padding mask
  enc_padding_mask = create_padding_mask(inp)

  # Used in the 2nd attention block in the decoder.
  # This padding mask is used to mask the encoder outputs.
  dec_padding_mask = create_padding_mask(inp)

  # Used in the 1st attention block in the decoder.
  # It is used to mask the padding and future tokens in the input
  # received by the decoder.
  look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
  dec_target_padding_mask = create_padding_mask(tar)
  combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)

  return enc_padding_mask, combined_mask, dec_padding_mask

Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every n epochs.

checkpoint_path = "./checkpoints/train"

ckpt = tf.train.Checkpoint(transformer=transformer,
                           optimizer=optimizer)

ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)

# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
  ckpt.restore(ckpt_manager.latest_checkpoint)
  print('Latest checkpoint restored!!')

The target is divided into tar_inp and tar_real. tar_inp is passed as an input to the decoder. tar_real is that same input shifted by 1: at each location in tar_inp, tar_real contains the next token that should be predicted.

For example, sentence = "SOS A lion in the jungle is sleeping EOS"

tar_inp = "SOS A lion in the jungle is sleeping"

tar_real = "A lion in the jungle is sleeping EOS"

The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.

During training this example uses teacher-forcing (like in the text generation tutorial). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.

As the transformer predicts each word, self-attention allows it to look at the previous words in the input sequence to better predict the next word.

To prevent the model from peeking at the expected output the model uses a look-ahead mask.

EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.

train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]


@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
  tar_inp = tar[:, :-1]
  tar_real = tar[:, 1:]

  enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)

  with tf.GradientTape() as tape:
    predictions, _ = transformer(inp, tar_inp,
                                 True,
                                 enc_padding_mask,
                                 combined_mask,
                                 dec_padding_mask)
    loss = loss_function(tar_real, predictions)

  gradients = tape.gradient(loss, transformer.trainable_variables)
  optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))

  train_loss(loss)
  train_accuracy(accuracy_function(tar_real, predictions))

Portuguese is used as the input language and English is the target language.

for epoch in range(EPOCHS):
  start = time.time()

  train_loss.reset_states()
  train_accuracy.reset_states()

  # inp -> portuguese, tar -> english
  for (batch, (inp, tar)) in enumerate(train_batches):
    train_step(inp, tar)

    if batch % 50 == 0:
      print(f'Epoch {epoch + 1} Batch {batch} Loss {train_loss.result():.4f} Accuracy {train_accuracy.result():.4f}')

  if (epoch + 1) % 5 == 0:
    ckpt_save_path = ckpt_manager.save()
    print(f'Saving checkpoint for epoch {epoch+1} at {ckpt_save_path}')

  print(f'Epoch {epoch + 1} Loss {train_loss.result():.4f} Accuracy {train_accuracy.result():.4f}')

  print(f'Time taken for 1 epoch: {time.time() - start:.2f} secs\n')
Epoch 1 Batch 0 Loss 8.8558 Accuracy 0.0000
Epoch 1 Batch 50 Loss 8.7941 Accuracy 0.0113
Epoch 1 Batch 100 Loss 8.6918 Accuracy 0.0303
Epoch 1 Batch 150 Loss 8.5788 Accuracy 0.0362
Epoch 1 Batch 200 Loss 8.4405 Accuracy 0.0396
Epoch 1 Batch 250 Loss 8.2736 Accuracy 0.0436
Epoch 1 Batch 300 Loss 8.0848 Accuracy 0.0498
Epoch 1 Batch 350 Loss 7.8871 Accuracy 0.0555
Epoch 1 Batch 400 Loss 7.6974 Accuracy 0.0618
Epoch 1 Batch 450 Loss 7.5298 Accuracy 0.0692
Epoch 1 Batch 500 Loss 7.3837 Accuracy 0.0765
Epoch 1 Batch 550 Loss 7.2525 Accuracy 0.0833
Epoch 1 Batch 600 Loss 7.1255 Accuracy 0.0906
Epoch 1 Batch 650 Loss 7.0100 Accuracy 0.0978
Epoch 1 Batch 700 Loss 6.9013 Accuracy 0.1048
Epoch 1 Batch 750 Loss 6.8035 Accuracy 0.1108
Epoch 1 Batch 800 Loss 6.7117 Accuracy 0.1166
Epoch 1 Loss 6.6949 Accuracy 0.1177
Time taken for 1 epoch: 66.94 secs

Epoch 2 Batch 0 Loss 5.2170 Accuracy 0.1997
Epoch 2 Batch 50 Loss 5.2086 Accuracy 0.2170
Epoch 2 Batch 100 Loss 5.1816 Accuracy 0.2212
Epoch 2 Batch 150 Loss 5.1665 Accuracy 0.2217
Epoch 2 Batch 200 Loss 5.1439 Accuracy 0.2243
Epoch 2 Batch 250 Loss 5.1148 Accuracy 0.2269
Epoch 2 Batch 300 Loss 5.0844 Accuracy 0.2301
Epoch 2 Batch 350 Loss 5.0592 Accuracy 0.2325
Epoch 2 Batch 400 Loss 5.0369 Accuracy 0.2347
Epoch 2 Batch 450 Loss 5.0130 Accuracy 0.2367
Epoch 2 Batch 500 Loss 4.9936 Accuracy 0.2385
Epoch 2 Batch 550 Loss 4.9717 Accuracy 0.2404
Epoch 2 Batch 600 Loss 4.9526 Accuracy 0.2420
Epoch 2 Batch 650 Loss 4.9339 Accuracy 0.2434
Epoch 2 Batch 700 Loss 4.9164 Accuracy 0.2448
Epoch 2 Batch 750 Loss 4.8998 Accuracy 0.2463
Epoch 2 Batch 800 Loss 4.8838 Accuracy 0.2475
Epoch 2 Loss 4.8815 Accuracy 0.2477
Time taken for 1 epoch: 51.61 secs

Epoch 3 Batch 0 Loss 4.6161 Accuracy 0.2493
Epoch 3 Batch 50 Loss 4.5871 Accuracy 0.2683
Epoch 3 Batch 100 Loss 4.5917 Accuracy 0.2699
Epoch 3 Batch 150 Loss 4.5765 Accuracy 0.2721
Epoch 3 Batch 200 Loss 4.5632 Accuracy 0.2742
Epoch 3 Batch 250 Loss 4.5445 Accuracy 0.2761
Epoch 3 Batch 300 Loss 4.5358 Accuracy 0.2769
Epoch 3 Batch 350 Loss 4.5199 Accuracy 0.2785
Epoch 3 Batch 400 Loss 4.5038 Accuracy 0.2802
Epoch 3 Batch 450 Loss 4.4889 Accuracy 0.2816
Epoch 3 Batch 500 Loss 4.4749 Accuracy 0.2829
Epoch 3 Batch 550 Loss 4.4626 Accuracy 0.2845
Epoch 3 Batch 600 Loss 4.4488 Accuracy 0.2861
Epoch 3 Batch 650 Loss 4.4349 Accuracy 0.2876
Epoch 3 Batch 700 Loss 4.4193 Accuracy 0.2894
Epoch 3 Batch 750 Loss 4.4037 Accuracy 0.2912
Epoch 3 Batch 800 Loss 4.3916 Accuracy 0.2926
Epoch 3 Loss 4.3878 Accuracy 0.2931
Time taken for 1 epoch: 50.36 secs

Epoch 4 Batch 0 Loss 4.1716 Accuracy 0.3411
Epoch 4 Batch 50 Loss 4.0858 Accuracy 0.3265
Epoch 4 Batch 100 Loss 4.0645 Accuracy 0.3278
Epoch 4 Batch 150 Loss 4.0457 Accuracy 0.3298
Epoch 4 Batch 200 Loss 4.0308 Accuracy 0.3320
Epoch 4 Batch 250 Loss 4.0184 Accuracy 0.3342
Epoch 4 Batch 300 Loss 4.0023 Accuracy 0.3367
Epoch 4 Batch 350 Loss 3.9890 Accuracy 0.3384
Epoch 4 Batch 400 Loss 3.9712 Accuracy 0.3407
Epoch 4 Batch 450 Loss 3.9585 Accuracy 0.3427
Epoch 4 Batch 500 Loss 3.9450 Accuracy 0.3443
Epoch 4 Batch 550 Loss 3.9288 Accuracy 0.3466
Epoch 4 Batch 600 Loss 3.9168 Accuracy 0.3485
Epoch 4 Batch 650 Loss 3.9013 Accuracy 0.3505
Epoch 4 Batch 700 Loss 3.8833 Accuracy 0.3531
Epoch 4 Batch 750 Loss 3.8668 Accuracy 0.3552
Epoch 4 Batch 800 Loss 3.8527 Accuracy 0.3570
Epoch 4 Loss 3.8494 Accuracy 0.3574
Time taken for 1 epoch: 50.26 secs

Epoch 5 Batch 0 Loss 3.8260 Accuracy 0.3580
Epoch 5 Batch 50 Loss 3.5539 Accuracy 0.3915
Epoch 5 Batch 100 Loss 3.5356 Accuracy 0.3944
Epoch 5 Batch 150 Loss 3.5221 Accuracy 0.3960
Epoch 5 Batch 200 Loss 3.5098 Accuracy 0.3984
Epoch 5 Batch 250 Loss 3.4993 Accuracy 0.4000
Epoch 5 Batch 300 Loss 3.4818 Accuracy 0.4023
Epoch 5 Batch 350 Loss 3.4718 Accuracy 0.4036
Epoch 5 Batch 400 Loss 3.4602 Accuracy 0.4054
Epoch 5 Batch 450 Loss 3.4553 Accuracy 0.4060
Epoch 5 Batch 500 Loss 3.4459 Accuracy 0.4072
Epoch 5 Batch 550 Loss 3.4369 Accuracy 0.4082
Epoch 5 Batch 600 Loss 3.4268 Accuracy 0.4095
Epoch 5 Batch 650 Loss 3.4180 Accuracy 0.4107
Epoch 5 Batch 700 Loss 3.4098 Accuracy 0.4116
Epoch 5 Batch 750 Loss 3.4001 Accuracy 0.4130
Epoch 5 Batch 800 Loss 3.3902 Accuracy 0.4144
Saving checkpoint for epoch 5 at ./checkpoints/train/ckpt-1
Epoch 5 Loss 3.3886 Accuracy 0.4145
Time taken for 1 epoch: 50.26 secs

Epoch 6 Batch 0 Loss 3.2305 Accuracy 0.4274
Epoch 6 Batch 50 Loss 3.1070 Accuracy 0.4469
Epoch 6 Batch 100 Loss 3.0989 Accuracy 0.4480
Epoch 6 Batch 150 Loss 3.0913 Accuracy 0.4503
Epoch 6 Batch 200 Loss 3.0827 Accuracy 0.4511
Epoch 6 Batch 250 Loss 3.0724 Accuracy 0.4526
Epoch 6 Batch 300 Loss 3.0657 Accuracy 0.4533
Epoch 6 Batch 350 Loss 3.0588 Accuracy 0.4540
Epoch 6 Batch 400 Loss 3.0501 Accuracy 0.4551
Epoch 6 Batch 450 Loss 3.0423 Accuracy 0.4563
Epoch 6 Batch 500 Loss 3.0329 Accuracy 0.4577
Epoch 6 Batch 550 Loss 3.0213 Accuracy 0.4593
Epoch 6 Batch 600 Loss 3.0136 Accuracy 0.4604
Epoch 6 Batch 650 Loss 3.0092 Accuracy 0.4609
Epoch 6 Batch 700 Loss 3.0018 Accuracy 0.4620
Epoch 6 Batch 750 Loss 2.9960 Accuracy 0.4629
Epoch 6 Batch 800 Loss 2.9876 Accuracy 0.4641
Epoch 6 Loss 2.9864 Accuracy 0.4643
Time taken for 1 epoch: 50.22 secs

Epoch 7 Batch 0 Loss 2.7779 Accuracy 0.4791
Epoch 7 Batch 50 Loss 2.6971 Accuracy 0.4990
Epoch 7 Batch 100 Loss 2.6964 Accuracy 0.4987
Epoch 7 Batch 150 Loss 2.7011 Accuracy 0.4986
Epoch 7 Batch 200 Loss 2.7005 Accuracy 0.4985
Epoch 7 Batch 250 Loss 2.7022 Accuracy 0.4986
Epoch 7 Batch 300 Loss 2.6984 Accuracy 0.4993
Epoch 7 Batch 350 Loss 2.6945 Accuracy 0.5000
Epoch 7 Batch 400 Loss 2.6895 Accuracy 0.5007
Epoch 7 Batch 450 Loss 2.6839 Accuracy 0.5016
Epoch 7 Batch 500 Loss 2.6769 Accuracy 0.5027
Epoch 7 Batch 550 Loss 2.6753 Accuracy 0.5029
Epoch 7 Batch 600 Loss 2.6712 Accuracy 0.5039
Epoch 7 Batch 650 Loss 2.6669 Accuracy 0.5047
Epoch 7 Batch 700 Loss 2.6619 Accuracy 0.5054
Epoch 7 Batch 750 Loss 2.6572 Accuracy 0.5063
Epoch 7 Batch 800 Loss 2.6542 Accuracy 0.5070
Epoch 7 Loss 2.6539 Accuracy 0.5071
Time taken for 1 epoch: 49.96 secs

Epoch 8 Batch 0 Loss 2.5908 Accuracy 0.4879
Epoch 8 Batch 50 Loss 2.4337 Accuracy 0.5354
Epoch 8 Batch 100 Loss 2.4194 Accuracy 0.5378
Epoch 8 Batch 150 Loss 2.4183 Accuracy 0.5373
Epoch 8 Batch 200 Loss 2.4275 Accuracy 0.5359
Epoch 8 Batch 250 Loss 2.4308 Accuracy 0.5353
Epoch 8 Batch 300 Loss 2.4289 Accuracy 0.5358
Epoch 8 Batch 350 Loss 2.4248 Accuracy 0.5366
Epoch 8 Batch 400 Loss 2.4211 Accuracy 0.5374
Epoch 8 Batch 450 Loss 2.4217 Accuracy 0.5374
Epoch 8 Batch 500 Loss 2.4201 Accuracy 0.5380
Epoch 8 Batch 550 Loss 2.4151 Accuracy 0.5391
Epoch 8 Batch 600 Loss 2.4141 Accuracy 0.5394
Epoch 8 Batch 650 Loss 2.4134 Accuracy 0.5394
Epoch 8 Batch 700 Loss 2.4146 Accuracy 0.5393
Epoch 8 Batch 750 Loss 2.4118 Accuracy 0.5399
Epoch 8 Batch 800 Loss 2.4128 Accuracy 0.5398
Epoch 8 Loss 2.4124 Accuracy 0.5398
Time taken for 1 epoch: 49.98 secs

Epoch 9 Batch 0 Loss 2.1669 Accuracy 0.5637
Epoch 9 Batch 50 Loss 2.2551 Accuracy 0.5606
Epoch 9 Batch 100 Loss 2.2500 Accuracy 0.5597
Epoch 9 Batch 150 Loss 2.2465 Accuracy 0.5600
Epoch 9 Batch 200 Loss 2.2423 Accuracy 0.5608
Epoch 9 Batch 250 Loss 2.2466 Accuracy 0.5603
Epoch 9 Batch 300 Loss 2.2484 Accuracy 0.5603
Epoch 9 Batch 350 Loss 2.2467 Accuracy 0.5609
Epoch 9 Batch 400 Loss 2.2387 Accuracy 0.5621
Epoch 9 Batch 450 Loss 2.2362 Accuracy 0.5630
Epoch 9 Batch 500 Loss 2.2346 Accuracy 0.5634
Epoch 9 Batch 550 Loss 2.2358 Accuracy 0.5631
Epoch 9 Batch 600 Loss 2.2359 Accuracy 0.5629
Epoch 9 Batch 650 Loss 2.2359 Accuracy 0.5631
Epoch 9 Batch 700 Loss 2.2352 Accuracy 0.5634
Epoch 9 Batch 750 Loss 2.2361 Accuracy 0.5634
Epoch 9 Batch 800 Loss 2.2351 Accuracy 0.5636
Epoch 9 Loss 2.2346 Accuracy 0.5637
Time taken for 1 epoch: 50.26 secs

Epoch 10 Batch 0 Loss 2.0434 Accuracy 0.5841
Epoch 10 Batch 50 Loss 2.0855 Accuracy 0.5830
Epoch 10 Batch 100 Loss 2.0920 Accuracy 0.5820
Epoch 10 Batch 150 Loss 2.0839 Accuracy 0.5837
Epoch 10 Batch 200 Loss 2.0908 Accuracy 0.5822
Epoch 10 Batch 250 Loss 2.0956 Accuracy 0.5817
Epoch 10 Batch 300 Loss 2.0962 Accuracy 0.5817
Epoch 10 Batch 350 Loss 2.0950 Accuracy 0.5823
Epoch 10 Batch 400 Loss 2.0947 Accuracy 0.5824
Epoch 10 Batch 450 Loss 2.0947 Accuracy 0.5824
Epoch 10 Batch 500 Loss 2.0949 Accuracy 0.5826
Epoch 10 Batch 550 Loss 2.0923 Accuracy 0.5831
Epoch 10 Batch 600 Loss 2.0923 Accuracy 0.5832
Epoch 10 Batch 650 Loss 2.0930 Accuracy 0.5833
Epoch 10 Batch 700 Loss 2.0960 Accuracy 0.5830
Epoch 10 Batch 750 Loss 2.0973 Accuracy 0.5828
Epoch 10 Batch 800 Loss 2.0972 Accuracy 0.5829
Saving checkpoint for epoch 10 at ./checkpoints/train/ckpt-2
Epoch 10 Loss 2.0972 Accuracy 0.5829
Time taken for 1 epoch: 49.71 secs

Epoch 11 Batch 0 Loss 2.2026 Accuracy 0.5518
Epoch 11 Batch 50 Loss 1.9669 Accuracy 0.5998
Epoch 11 Batch 100 Loss 1.9626 Accuracy 0.6002
Epoch 11 Batch 150 Loss 1.9582 Accuracy 0.6014
Epoch 11 Batch 200 Loss 1.9593 Accuracy 0.6015
Epoch 11 Batch 250 Loss 1.9649 Accuracy 0.6008
Epoch 11 Batch 300 Loss 1.9684 Accuracy 0.6005
Epoch 11 Batch 350 Loss 1.9682 Accuracy 0.6009
Epoch 11 Batch 400 Loss 1.9685 Accuracy 0.6013
Epoch 11 Batch 450 Loss 1.9690 Accuracy 0.6011
Epoch 11 Batch 500 Loss 1.9720 Accuracy 0.6007
Epoch 11 Batch 550 Loss 1.9741 Accuracy 0.6003
Epoch 11 Batch 600 Loss 1.9763 Accuracy 0.6001
Epoch 11 Batch 650 Loss 1.9734 Accuracy 0.6006
Epoch 11 Batch 700 Loss 1.9774 Accuracy 0.6002
Epoch 11 Batch 750 Loss 1.9795 Accuracy 0.5999
Epoch 11 Batch 800 Loss 1.9842 Accuracy 0.5993
Epoch 11 Loss 1.9846 Accuracy 0.5993
Time taken for 1 epoch: 49.89 secs

Epoch 12 Batch 0 Loss 1.8384 Accuracy 0.6058
Epoch 12 Batch 50 Loss 1.8900 Accuracy 0.6093
Epoch 12 Batch 100 Loss 1.8634 Accuracy 0.6140
Epoch 12 Batch 150 Loss 1.8703 Accuracy 0.6141
Epoch 12 Batch 200 Loss 1.8669 Accuracy 0.6152
Epoch 12 Batch 250 Loss 1.8709 Accuracy 0.6149
Epoch 12 Batch 300 Loss 1.8742 Accuracy 0.6144
Epoch 12 Batch 350 Loss 1.8771 Accuracy 0.6142
Epoch 12 Batch 400 Loss 1.8820 Accuracy 0.6134
Epoch 12 Batch 450 Loss 1.8790 Accuracy 0.6139
Epoch 12 Batch 500 Loss 1.8782 Accuracy 0.6141
Epoch 12 Batch 550 Loss 1.8797 Accuracy 0.6141
Epoch 12 Batch 600 Loss 1.8820 Accuracy 0.6137
Epoch 12 Batch 650 Loss 1.8829 Accuracy 0.6136
Epoch 12 Batch 700 Loss 1.8849 Accuracy 0.6133
Epoch 12 Batch 750 Loss 1.8867 Accuracy 0.6131
Epoch 12 Batch 800 Loss 1.8896 Accuracy 0.6127
Epoch 12 Loss 1.8903 Accuracy 0.6126
Time taken for 1 epoch: 50.06 secs

Epoch 13 Batch 0 Loss 1.6479 Accuracy 0.6608
Epoch 13 Batch 50 Loss 1.7716 Accuracy 0.6323
Epoch 13 Batch 100 Loss 1.7785 Accuracy 0.6293
Epoch 13 Batch 150 Loss 1.7795 Accuracy 0.6284
Epoch 13 Batch 200 Loss 1.7808 Accuracy 0.6280
Epoch 13 Batch 250 Loss 1.7800 Accuracy 0.6286
Epoch 13 Batch 300 Loss 1.7849 Accuracy 0.6281
Epoch 13 Batch 350 Loss 1.7904 Accuracy 0.6273
Epoch 13 Batch 400 Loss 1.7904 Accuracy 0.6273
Epoch 13 Batch 450 Loss 1.7908 Accuracy 0.6275
Epoch 13 Batch 500 Loss 1.7919 Accuracy 0.6274
Epoch 13 Batch 550 Loss 1.7948 Accuracy 0.6270
Epoch 13 Batch 600 Loss 1.7991 Accuracy 0.6262
Epoch 13 Batch 650 Loss 1.8030 Accuracy 0.6257
Epoch 13 Batch 700 Loss 1.8050 Accuracy 0.6255
Epoch 13 Batch 750 Loss 1.8068 Accuracy 0.6252
Epoch 13 Batch 800 Loss 1.8086 Accuracy 0.6250
Epoch 13 Loss 1.8083 Accuracy 0.6251
Time taken for 1 epoch: 49.99 secs

Epoch 14 Batch 0 Loss 1.6837 Accuracy 0.6499
Epoch 14 Batch 50 Loss 1.6870 Accuracy 0.6426
Epoch 14 Batch 100 Loss 1.6901 Accuracy 0.6426
Epoch 14 Batch 150 Loss 1.6884 Accuracy 0.6427
Epoch 14 Batch 200 Loss 1.6995 Accuracy 0.6410
Epoch 14 Batch 250 Loss 1.7016 Accuracy 0.6413
Epoch 14 Batch 300 Loss 1.7053 Accuracy 0.6402
Epoch 14 Batch 350 Loss 1.7088 Accuracy 0.6395
Epoch 14 Batch 400 Loss 1.7150 Accuracy 0.6386
Epoch 14 Batch 450 Loss 1.7183 Accuracy 0.6380
Epoch 14 Batch 500 Loss 1.7208 Accuracy 0.6376
Epoch 14 Batch 550 Loss 1.7244 Accuracy 0.6372
Epoch 14 Batch 600 Loss 1.7274 Accuracy 0.6369
Epoch 14 Batch 650 Loss 1.7294 Accuracy 0.6367
Epoch 14 Batch 700 Loss 1.7325 Accuracy 0.6363
Epoch 14 Batch 750 Loss 1.7354 Accuracy 0.6359
Epoch 14 Batch 800 Loss 1.7379 Accuracy 0.6356
Epoch 14 Loss 1.7380 Accuracy 0.6356
Time taken for 1 epoch: 49.59 secs

Epoch 15 Batch 0 Loss 1.5566 Accuracy 0.6724
Epoch 15 Batch 50 Loss 1.6414 Accuracy 0.6479
Epoch 15 Batch 100 Loss 1.6533 Accuracy 0.6476
Epoch 15 Batch 150 Loss 1.6563 Accuracy 0.6472
Epoch 15 Batch 200 Loss 1.6639 Accuracy 0.6460
Epoch 15 Batch 250 Loss 1.6653 Accuracy 0.6456
Epoch 15 Batch 300 Loss 1.6642 Accuracy 0.6459
Epoch 15 Batch 350 Loss 1.6668 Accuracy 0.6455
Epoch 15 Batch 400 Loss 1.6649 Accuracy 0.6459
Epoch 15 Batch 450 Loss 1.6646 Accuracy 0.6458
Epoch 15 Batch 500 Loss 1.6688 Accuracy 0.6451
Epoch 15 Batch 550 Loss 1.6714 Accuracy 0.6445
Epoch 15 Batch 600 Loss 1.6716 Accuracy 0.6446
Epoch 15 Batch 650 Loss 1.6741 Accuracy 0.6444
Epoch 15 Batch 700 Loss 1.6760 Accuracy 0.6442
Epoch 15 Batch 750 Loss 1.6776 Accuracy 0.6440
Epoch 15 Batch 800 Loss 1.6784 Accuracy 0.6440
Saving checkpoint for epoch 15 at ./checkpoints/train/ckpt-3
Epoch 15 Loss 1.6787 Accuracy 0.6440
Time taken for 1 epoch: 49.62 secs

Epoch 16 Batch 0 Loss 1.5110 Accuracy 0.6664
Epoch 16 Batch 50 Loss 1.5818 Accuracy 0.6597
Epoch 16 Batch 100 Loss 1.5850 Accuracy 0.6585
Epoch 16 Batch 150 Loss 1.5859 Accuracy 0.6581
Epoch 16 Batch 200 Loss 1.5872 Accuracy 0.6579
Epoch 16 Batch 250 Loss 1.5946 Accuracy 0.6570
Epoch 16 Batch 300 Loss 1.5954 Accuracy 0.6568
Epoch 16 Batch 350 Loss 1.5995 Accuracy 0.6565
Epoch 16 Batch 400 Loss 1.6033 Accuracy 0.6558
Epoch 16 Batch 450 Loss 1.6056 Accuracy 0.6556
Epoch 16 Batch 500 Loss 1.6083 Accuracy 0.6552
Epoch 16 Batch 550 Loss 1.6087 Accuracy 0.6551
Epoch 16 Batch 600 Loss 1.6140 Accuracy 0.6542
Epoch 16 Batch 650 Loss 1.6166 Accuracy 0.6538
Epoch 16 Batch 700 Loss 1.6188 Accuracy 0.6535
Epoch 16 Batch 750 Loss 1.6223 Accuracy 0.6529
Epoch 16 Batch 800 Loss 1.6248 Accuracy 0.6524
Epoch 16 Loss 1.6252 Accuracy 0.6524
Time taken for 1 epoch: 49.52 secs

Epoch 17 Batch 0 Loss 1.5022 Accuracy 0.6761
Epoch 17 Batch 50 Loss 1.5280 Accuracy 0.6678
Epoch 17 Batch 100 Loss 1.5414 Accuracy 0.6649
Epoch 17 Batch 150 Loss 1.5421 Accuracy 0.6650
Epoch 17 Batch 200 Loss 1.5400 Accuracy 0.6652
Epoch 17 Batch 250 Loss 1.5464 Accuracy 0.6645
Epoch 17 Batch 300 Loss 1.5472 Accuracy 0.6645
Epoch 17 Batch 350 Loss 1.5531 Accuracy 0.6633
Epoch 17 Batch 400 Loss 1.5560 Accuracy 0.6627
Epoch 17 Batch 450 Loss 1.5598 Accuracy 0.6620
Epoch 17 Batch 500 Loss 1.5594 Accuracy 0.6622
Epoch 17 Batch 550 Loss 1.5632 Accuracy 0.6616
Epoch 17 Batch 600 Loss 1.5631 Accuracy 0.6618
Epoch 17 Batch 650 Loss 1.5649 Accuracy 0.6615
Epoch 17 Batch 700 Loss 1.5669 Accuracy 0.6613
Epoch 17 Batch 750 Loss 1.5699 Accuracy 0.6608
Epoch 17 Batch 800 Loss 1.5736 Accuracy 0.6602
Epoch 17 Loss 1.5745 Accuracy 0.6602
Time taken for 1 epoch: 49.84 secs

Epoch 18 Batch 0 Loss 1.5577 Accuracy 0.6591
Epoch 18 Batch 50 Loss 1.4857 Accuracy 0.6738
Epoch 18 Batch 100 Loss 1.4816 Accuracy 0.6751
Epoch 18 Batch 150 Loss 1.4805 Accuracy 0.6756
Epoch 18 Batch 200 Loss 1.4917 Accuracy 0.6734
Epoch 18 Batch 250 Loss 1.4963 Accuracy 0.6724
Epoch 18 Batch 300 Loss 1.5066 Accuracy 0.6707
Epoch 18 Batch 350 Loss 1.5111 Accuracy 0.6699
Epoch 18 Batch 400 Loss 1.5117 Accuracy 0.6696
Epoch 18 Batch 450 Loss 1.5146 Accuracy 0.6691
Epoch 18 Batch 500 Loss 1.5179 Accuracy 0.6686
Epoch 18 Batch 550 Loss 1.5192 Accuracy 0.6687
Epoch 18 Batch 600 Loss 1.5218 Accuracy 0.6682
Epoch 18 Batch 650 Loss 1.5245 Accuracy 0.6678
Epoch 18 Batch 700 Loss 1.5266 Accuracy 0.6674
Epoch 18 Batch 750 Loss 1.5294 Accuracy 0.6670
Epoch 18 Batch 800 Loss 1.5304 Accuracy 0.6670
Epoch 18 Loss 1.5313 Accuracy 0.6668
Time taken for 1 epoch: 49.40 secs

Epoch 19 Batch 0 Loss 1.6217 Accuracy 0.6528
Epoch 19 Batch 50 Loss 1.4279 Accuracy 0.6837
Epoch 19 Batch 100 Loss 1.4346 Accuracy 0.6831
Epoch 19 Batch 150 Loss 1.4409 Accuracy 0.6818
Epoch 19 Batch 200 Loss 1.4469 Accuracy 0.6808
Epoch 19 Batch 250 Loss 1.4542 Accuracy 0.6799
Epoch 19 Batch 300 Loss 1.4619 Accuracy 0.6784
Epoch 19 Batch 350 Loss 1.4641 Accuracy 0.6780
Epoch 19 Batch 400 Loss 1.4661 Accuracy 0.6775
Epoch 19 Batch 450 Loss 1.4688 Accuracy 0.6770
Epoch 19 Batch 500 Loss 1.4737 Accuracy 0.6761
Epoch 19 Batch 550 Loss 1.4764 Accuracy 0.6757
Epoch 19 Batch 600 Loss 1.4800 Accuracy 0.6750
Epoch 19 Batch 650 Loss 1.4839 Accuracy 0.6743
Epoch 19 Batch 700 Loss 1.4857 Accuracy 0.6740
Epoch 19 Batch 750 Loss 1.4896 Accuracy 0.6736
Epoch 19 Batch 800 Loss 1.4909 Accuracy 0.6735
Epoch 19 Loss 1.4912 Accuracy 0.6736
Time taken for 1 epoch: 49.29 secs

Epoch 20 Batch 0 Loss 1.3881 Accuracy 0.6962
Epoch 20 Batch 50 Loss 1.4189 Accuracy 0.6832
Epoch 20 Batch 100 Loss 1.4183 Accuracy 0.6848
Epoch 20 Batch 150 Loss 1.4153 Accuracy 0.6862
Epoch 20 Batch 200 Loss 1.4144 Accuracy 0.6865
Epoch 20 Batch 250 Loss 1.4187 Accuracy 0.6851
Epoch 20 Batch 300 Loss 1.4236 Accuracy 0.6840
Epoch 20 Batch 350 Loss 1.4297 Accuracy 0.6826
Epoch 20 Batch 400 Loss 1.4299 Accuracy 0.6826
Epoch 20 Batch 450 Loss 1.4331 Accuracy 0.6822
Epoch 20 Batch 500 Loss 1.4347 Accuracy 0.6820
Epoch 20 Batch 550 Loss 1.4376 Accuracy 0.6815
Epoch 20 Batch 600 Loss 1.4409 Accuracy 0.6810
Epoch 20 Batch 650 Loss 1.4433 Accuracy 0.6807
Epoch 20 Batch 700 Loss 1.4445 Accuracy 0.6806
Epoch 20 Batch 750 Loss 1.4505 Accuracy 0.6798
Epoch 20 Batch 800 Loss 1.4537 Accuracy 0.6791
Saving checkpoint for epoch 20 at ./checkpoints/train/ckpt-4
Epoch 20 Loss 1.4541 Accuracy 0.6791
Time taken for 1 epoch: 49.55 secs

Evaluate

The following steps are used for evaluation:

  • Encode the input sentence using the Portuguese tokenizer (tokenizers.pt). This is the encoder input.
  • The decoder input is initialized to the [START] token.
  • Calculate the padding masks and the look ahead masks.
  • The decoder then outputs the predictions by looking at the encoder output and its own output (self-attention).
  • The model makes predictions of the next word for each word in the output. Most of these are redundant. Use the predictions from the last word.
  • Concatenate the predicted word to the decoder input and pass it to the decoder.
  • In this approach, the decoder predicts the next word based on the previous words it predicted.
def evaluate(sentence, max_length=40):
  # inp sentence is portuguese, hence adding the start and end token
  sentence = tf.convert_to_tensor([sentence])
  sentence = tokenizers.pt.tokenize(sentence).to_tensor()

  encoder_input = sentence

  # as the target is english, the first word to the transformer should be the
  # english start token.
  start, end = tokenizers.en.tokenize([''])[0]
  output = tf.convert_to_tensor([start])
  output = tf.expand_dims(output, 0)

  for i in range(max_length):
    enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
        encoder_input, output)

    # predictions.shape == (batch_size, seq_len, vocab_size)
    predictions, attention_weights = transformer(encoder_input,
                                                 output,
                                                 False,
                                                 enc_padding_mask,
                                                 combined_mask,
                                                 dec_padding_mask)

    # select the last word from the seq_len dimension
    predictions = predictions[:, -1:, :]  # (batch_size, 1, vocab_size)

    predicted_id = tf.argmax(predictions, axis=-1)

    # concatenate the predicted_id to the output which is given to the decoder
    # as its input.
    output = tf.concat([output, predicted_id], axis=-1)

    # return the result if the predicted_id is equal to the end token
    if predicted_id == end:
      break

  # output.shape (1, tokens)
  text = tokenizers.en.detokenize(output)[0]  # shape: ()

  tokens = tokenizers.en.lookup(output)[0]

  return text, tokens, attention_weights
def print_translation(sentence, tokens, ground_truth):
  print(f'{"Input:":15s}: {sentence}')
  print(f'{"Prediction":15s}: {tokens.numpy().decode("utf-8")}')
  print(f'{"Ground truth":15s}: {ground_truth}')
sentence = "este é um problema que temos que resolver."
ground_truth = "this is a problem we have to solve ."

translated_text, translated_tokens, attention_weights = evaluate(sentence)
print_translation(sentence, translated_text, ground_truth)
Input:         : este é um problema que temos que resolver.
Prediction     : this is a problem that we have to solve .
Ground truth   : this is a problem we have to solve .
sentence = "os meus vizinhos ouviram sobre esta ideia."
ground_truth = "and my neighboring homes heard about this idea ."

translated_text, translated_tokens, attention_weights = evaluate(sentence)
print_translation(sentence, translated_text, ground_truth)
Input:         : os meus vizinhos ouviram sobre esta ideia.
Prediction     : my neighbors have heard about this idea .
Ground truth   : and my neighboring homes heard about this idea .
sentence = "vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram."
ground_truth = "so i \'ll just share with you some stories very quickly of some magical things that have happened ."

translated_text, translated_tokens, attention_weights = evaluate(sentence)
print_translation(sentence, translated_text, ground_truth)
Input:         : vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.
Prediction     : so i ' m so quickly share with you some magical stories of some magic things that happened .
Ground truth   : so i 'll just share with you some stories very quickly of some magical things that have happened .

You can pass different layers and attention blocks of the decoder to the plotting functions in the next section.

Attention plots

The evaluate function also returns a dictionary of attention maps you can use to visualize the internal working of the model:

sentence = "este é o primeiro livro que eu fiz."
ground_truth = "this is the first book i've ever done."

translated_text, translated_tokens, attention_weights = evaluate(sentence)
print_translation(sentence, translated_text, ground_truth)
Input:         : este é o primeiro livro que eu fiz.
Prediction     : this is the first book that i did .
Ground truth   : this is the first book i've ever done.
def plot_attention_head(in_tokens, translated_tokens, attention):
  # The plot is of the attention when a token was generated.
  # The model didn't generate `[START]` in the output. Skip it.
  translated_tokens = translated_tokens[1:]

  ax = plt.gca()
  ax.matshow(attention)
  ax.set_xticks(range(len(in_tokens)))
  ax.set_yticks(range(len(translated_tokens)))

  labels = [label.decode('utf-8') for label in in_tokens.numpy()]
  ax.set_xticklabels(
      labels, rotation=90)

  labels = [label.decode('utf-8') for label in translated_tokens.numpy()]
  ax.set_yticklabels(labels)
head = 0
# shape: (batch=1, num_heads, seq_len_q, seq_len_k)
attention_heads = tf.squeeze(
  attention_weights['decoder_layer4_block2'], 0)
attention = attention_heads[head]
attention.shape
TensorShape([10, 11])
in_tokens = tf.convert_to_tensor([sentence])
in_tokens = tokenizers.pt.tokenize(in_tokens).to_tensor()
in_tokens = tokenizers.pt.lookup(in_tokens)[0]
in_tokens
<tf.Tensor: shape=(11,), dtype=string, numpy=
array([b'[START]', b'este', b'e', b'o', b'primeiro', b'livro', b'que',
       b'eu', b'fiz', b'.', b'[END]'], dtype=object)>
translated_tokens
<tf.Tensor: shape=(11,), dtype=string, numpy=
array([b'[START]', b'this', b'is', b'the', b'first', b'book', b'that',
       b'i', b'did', b'.', b'[END]'], dtype=object)>
plot_attention_head(in_tokens, translated_tokens, attention)

def plot_attention_weights(sentence, translated_tokens, attention_heads):
  in_tokens = tf.convert_to_tensor([sentence])
  in_tokens = tokenizers.pt.tokenize(in_tokens).to_tensor()
  in_tokens = tokenizers.pt.lookup(in_tokens)[0]

  fig = plt.figure(figsize=(16, 8))

  for h, head in enumerate(attention_heads):
    ax = fig.add_subplot(2, 4, h+1)

    plot_attention_head(in_tokens, translated_tokens, head)

    ax.set_xlabel(f'Head {h+1}')

  plt.tight_layout()
  plt.show()
plot_attention_weights(sentence, translated_tokens,
                       attention_weights['decoder_layer4_block2'][0])

The model does okay on unfamiliar words. Neither "triceratops" nor "encyclopedia" is in the input dataset, and the model almost learns to transliterate them, even without a shared vocabulary:

sentence = "Eu li sobre triceratops na enciclopédia."
ground_truth = "I read about triceratops in the encyclopedia."

translated_text, translated_tokens, attention_weights = evaluate(sentence)
print_translation(sentence, translated_text, ground_truth)

plot_attention_weights(sentence, translated_tokens,
                       attention_weights['decoder_layer4_block2'][0])
Input:         : Eu li sobre triceratops na enciclopédia.
Prediction     : i read about triel - encompperate in the encyclopedia .
Ground truth   : I read about triceratops in the encyclopedia.

Summary

In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking and how to create a transformer.

Try using a different dataset to train the transformer. You can also create the base transformer or Transformer-XL by changing the hyperparameters above. You can also use the layers defined here to create BERT and train state-of-the-art models. Furthermore, you can implement beam search to get better predictions; a minimal sketch follows.
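
Here is a minimal beam-search sketch (not part of the original tutorial). It reuses the transformer, tokenizers, and create_masks defined above; beam_width and max_length are illustrative choices, and no length normalization is applied:

def beam_search(sentence, beam_width=4, max_length=40):
  encoder_input = tokenizers.pt.tokenize(tf.convert_to_tensor([sentence])).to_tensor()
  start, end = (int(t) for t in tokenizers.en.tokenize([''])[0])

  # Each beam is a (token_ids, total_log_probability) pair.
  beams = [([start], 0.0)]
  for _ in range(max_length):
    candidates = []
    for tokens, score in beams:
      if tokens[-1] == end:  # keep finished beams as they are
        candidates.append((tokens, score))
        continue
      output = tf.constant([tokens], dtype=tf.int64)
      enc_mask, combined_mask, dec_mask = create_masks(encoder_input, output)
      predictions, _ = transformer(encoder_input, output, False,
                                   enc_mask, combined_mask, dec_mask)
      log_probs = tf.nn.log_softmax(predictions[0, -1, :])
      top = tf.math.top_k(log_probs, k=beam_width)
      for log_p, token_id in zip(top.values.numpy(), top.indices.numpy()):
        candidates.append((tokens + [int(token_id)], score + float(log_p)))
    # Keep only the highest-scoring beams.
    beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    if all(tokens[-1] == end for tokens, _ in beams):
      break

  best = tf.constant([beams[0][0]], dtype=tf.int64)
  return tokenizers.en.detokenize(best)[0]

print(beam_search("este é um problema que temos que resolver.").numpy().decode('utf-8'))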