Transformer model for language understanding


This tutorial trains a Transformer model to translate Portuguese to English. This is an advanced example that assumes knowledge of text generation and attention.

The core idea behind the Transformer model is self-attention—the ability to attend to different positions of the input sequence to compute a representation of that sequence. Transformer creates stacks of self-attention layers and is explained below in the sections Scaled dot product attention and Multi-head attention.

A transformer model handles variable-sized input using stacks of self-attention layers instead of RNNs or CNNs. This general architecture has a number of advantages:

  • It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, StarCraft units).
  • Layer outputs can be calculated in parallel, instead of in series like an RNN.
  • Distant items can affect each other's output without passing through many RNN-steps, or convolution layers (see Scene Memory Transformer for example).
  • It can learn long-range dependencies. This is a challenge in many sequence tasks.

The downsides of this architecture are:

  • For a time-series, the output for a time-step is calculated from the entire history instead of only the inputs and current hidden-state. This may be less efficient.
  • If the input does have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words.

After training the model in this notebook, you will be able to input a Portuguese sentence and return the English translation.

Attention heatmap

import tensorflow_datasets as tfds
import tensorflow as tf

import time
import numpy as np
import matplotlib.pyplot as plt

Setup input pipeline

Use TFDS to load the Portuguese-English translation dataset from the TED Talks Open Translation Project.

This dataset contains approximately 50000 training examples, 1100 validation examples, and 2000 test examples.

examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
                               as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
Downloading and preparing dataset ted_hrlr_translate/pt_to_en/1.0.0 (download: 124.94 MiB, generated: Unknown size, total: 124.94 MiB) to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/1.0.0...
Shuffling and writing examples to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/1.0.0.incomplete8P39OX/ted_hrlr_translate-train.tfrecord
Shuffling and writing examples to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/1.0.0.incomplete8P39OX/ted_hrlr_translate-validation.tfrecord
Shuffling and writing examples to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/1.0.0.incomplete8P39OX/ted_hrlr_translate-test.tfrecord
Dataset ted_hrlr_translate downloaded and prepared to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/1.0.0. Subsequent calls will reuse this data.

Create a custom subwords tokenizer from the training dataset.

tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (en.numpy() for pt, en in train_examples), target_vocab_size=2**13)

tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
sample_string = 'Transformer is awesome.'

tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))

original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))

assert original_string == sample_string
Tokenized string is [7915, 1248, 7946, 7194, 13, 2799, 7877]
The original string: Transformer is awesome.

The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.

for ts in tokenized_string:
  print ('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))
7915 ----> T
1248 ----> ran
7946 ----> s
7194 ----> former 
13 ----> is 
2799 ----> awesome
7877 ----> .

BUFFER_SIZE = 20000
BATCH_SIZE = 64

Add a start and end token to the input and target.

def encode(lang1, lang2):
  lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
      lang1.numpy()) + [tokenizer_pt.vocab_size+1]

  lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
      lang2.numpy()) + [tokenizer_en.vocab_size+1]
  
  return lang1, lang2

You want to use Dataset.map to apply this function to each element of the dataset. Dataset.map runs in graph mode.

  • Graph tensors do not have a value.
  • In graph mode you can only use TensorFlow Ops and functions.

So you can't .map this function directly: you need to wrap it in a tf.py_function. The tf.py_function will pass regular tensors (with a value and a .numpy() method to access it) to the wrapped Python function.

def tf_encode(pt, en):
  result_pt, result_en = tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
  result_pt.set_shape([None])
  result_en.set_shape([None])

  return result_pt, result_en
MAX_LENGTH = 40
def filter_max_length(x, y, max_length=MAX_LENGTH):
  return tf.logical_and(tf.size(x) <= max_length,
                        tf.size(y) <= max_length)
train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
# cache the dataset to memory to get a speedup while reading from it.
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE)
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)


val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(BATCH_SIZE)
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
(<tf.Tensor: shape=(64, 38), dtype=int64, numpy=
 array([[8214,  342, 3032, ...,    0,    0,    0],
        [8214,   95,  198, ...,    0,    0,    0],
        [8214, 4479, 7990, ...,    0,    0,    0],
        ...,
        [8214,  584,   12, ...,    0,    0,    0],
        [8214,   59, 1548, ...,    0,    0,    0],
        [8214,  118,   34, ...,    0,    0,    0]])>,
 <tf.Tensor: shape=(64, 40), dtype=int64, numpy=
 array([[8087,   98,   25, ...,    0,    0,    0],
        [8087,   12,   20, ...,    0,    0,    0],
        [8087,   12, 5453, ...,    0,    0,    0],
        ...,
        [8087,   18, 2059, ...,    0,    0,    0],
        [8087,   16, 1436, ...,    0,    0,    0],
        [8087,   15,   57, ...,    0,    0,    0]])>)

Positional encoding

Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence.

The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the similarity of their meaning and their position in the sentence, in the d-dimensional space.

See the notebook on positional encoding to learn more about it. The formula for calculating the positional encoding is as follows:

$$\Large{PE_{(pos, 2i)} = sin(pos / 10000^{2i / d_{model} })} $$
$$\Large{PE_{(pos, 2i+1)} = cos(pos / 10000^{2i / d_{model} })} $$
def get_angles(pos, i, d_model):
  angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
  return pos * angle_rates
def positional_encoding(position, d_model):
  angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                          np.arange(d_model)[np.newaxis, :],
                          d_model)
  
  # apply sin to even indices in the array; 2i
  angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
  
  # apply cos to odd indices in the array; 2i+1
  angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
    
  pos_encoding = angle_rads[np.newaxis, ...]
    
  return tf.cast(pos_encoding, dtype=tf.float32)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)

plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
(1, 50, 512)

png

Masking

Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value 0 is present: it outputs a 1 at those locations, and a 0 otherwise.

def create_padding_mask(seq):
  seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
  
  # add extra dimensions to add the padding
  # to the attention logits.
  return seq[:, tf.newaxis, tf.newaxis, :]  # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
<tf.Tensor: shape=(3, 1, 1, 5), dtype=float32, numpy=
array([[[[0., 0., 1., 1., 0.]]],


       [[[0., 0., 0., 1., 1.]]],


       [[[1., 1., 1., 0., 0.]]]], dtype=float32)>

The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.

This means that to predict the third word, only the first and second words will be used. Similarly, to predict the fourth word, only the first, second, and third words will be used, and so on.

def create_look_ahead_mask(size):
  mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
  return mask  # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[0., 1., 1.],
       [0., 0., 1.],
       [0., 0., 0.]], dtype=float32)>

Scaled dot product attention

scaled_dot_product_attention

The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:

$$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k} }) V} $$

The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax into regions where it has small gradients, resulting in a very hard softmax.

For example, consider that Q and K have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of dk (the depth). Hence, the square root of dk is used for scaling (and not any other number), because the matmul of Q and K should have a mean of 0 and variance of 1, and you get a gentler softmax.

The mask is multiplied with -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K and is applied immediately before a softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output.
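
Both claims are easy to check numerically. The following is a minimal sketch (using the numpy and tensorflow imports from the top of this notebook; the tensors and logits are made up for illustration): dividing by the square root of the depth restores unit variance, and a logit pushed to -1e9 contributes essentially nothing after the softmax.

# Illustrative sanity checks only -- not part of the model.
d_k = 512
q = np.random.normal(size=(1000, d_k))  # mean 0, variance 1
k = np.random.normal(size=(1000, d_k))

logits = q @ k.T
print(np.var(logits))                 # ~ d_k: far too spread out for a useful softmax
print(np.var(logits / np.sqrt(d_k)))  # ~ 1 after scaling

# A masked (large negative) logit is effectively zeroed out by the softmax.
print(tf.nn.softmax([1.0, 2.0, -1e9]).numpy())  # -> [0.269 0.731 0.   ]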

def scaled_dot_product_attention(q, k, v, mask):
  """Calculate the attention weights.
  q, k, v must have matching leading dimensions.
  k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
  The mask has different shapes depending on its type(padding or look ahead) 
  but it must be broadcastable for addition.
  
  Args:
    q: query shape == (..., seq_len_q, depth)
    k: key shape == (..., seq_len_k, depth)
    v: value shape == (..., seq_len_v, depth_v)
    mask: Float tensor with shape broadcastable 
          to (..., seq_len_q, seq_len_k). Defaults to None.
    
  Returns:
    output, attention_weights
  """

  matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)
  
  # scale matmul_qk
  dk = tf.cast(tf.shape(k)[-1], tf.float32)
  scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

  # add the mask to the scaled tensor.
  if mask is not None:
    scaled_attention_logits += (mask * -1e9)  

  # softmax is normalized on the last axis (seq_len_k) so that the scores
  # add up to 1.
  attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)

  output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)

  return output, attention_weights

As the softmax normalization is done along the K (key) dimension, its values decide the amount of importance given to each key for a given Q.

The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words you want to focus on are kept as-is and the irrelevant words are flushed out.

def print_out(q, k, v):
  temp_out, temp_attn = scaled_dot_product_attention(
      q, k, v, None)
  print ('Attention weights are:')
  print (temp_attn)
  print ('Output is:')
  print (temp_out)
np.set_printoptions(suppress=True)

temp_k = tf.constant([[10,0,0],
                      [0,10,0],
                      [0,0,10],
                      [0,0,10]], dtype=tf.float32)  # (4, 3)

temp_v = tf.constant([[   1,0],
                      [  10,0],
                      [ 100,5],
                      [1000,6]], dtype=tf.float32)  # (4, 2)

# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0. 1. 0. 0.]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[10.  0.]], shape=(1, 2), dtype=float32)

# This query aligns with a repeated key (third and fourth), 
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.  0.  0.5 0.5]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[550.    5.5]], shape=(1, 2), dtype=float32)

# This query aligns equally with the first and second key, 
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.5 0.5 0.  0. ]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[5.5 0. ]], shape=(1, 2), dtype=float32)

Pass all the queries together.

temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32)  # (3, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor(
[[0.  0.  0.5 0.5]
 [0.  1.  0.  0. ]
 [0.5 0.5 0.  0. ]], shape=(3, 4), dtype=float32)
Output is:
tf.Tensor(
[[550.    5.5]
 [ 10.    0. ]
 [  5.5   0. ]], shape=(3, 2), dtype=float32)

Multi-head attention

multi-head attention

Multi-head attention consists of four parts:

  • Linear layers and split into heads.
  • Scaled dot-product attention.
  • Concatenation of heads.
  • Final linear layer.

Each multi-head attention block gets three inputs; Q (query), K (key), V (value). These are put through linear (Dense) layers and split up into multiple heads.

The scaled_dot_product_attention defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using tf.transpose, and tf.reshape) and put through a final Dense layer.

Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information at different positions from different representational spaces. After the split each head has a reduced dimensionality, so the total computation cost is the same as a single head attention with full dimensionality.
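
As a quick back-of-the-envelope check of that cost claim, count the multiply-adds in the QK^T matmul. This is only illustrative arithmetic; the numbers match the d_model=512, 8-head layer created below.

# Cost of the QK^T matmul, counted as multiply-adds (illustrative only).
seq_len, d_model, num_heads = 60, 512, 8
depth = d_model // num_heads                          # 64 per head

single_head = seq_len * seq_len * d_model             # one head at full width
multi_head = num_heads * seq_len * seq_len * depth    # 8 heads at reduced width
print(single_head == multi_head)                      # True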

class MultiHeadAttention(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads):
    super(MultiHeadAttention, self).__init__()
    self.num_heads = num_heads
    self.d_model = d_model
    
    assert d_model % self.num_heads == 0
    
    self.depth = d_model // self.num_heads
    
    self.wq = tf.keras.layers.Dense(d_model)
    self.wk = tf.keras.layers.Dense(d_model)
    self.wv = tf.keras.layers.Dense(d_model)
    
    self.dense = tf.keras.layers.Dense(d_model)
        
  def split_heads(self, x, batch_size):
    """Split the last dimension into (num_heads, depth).
    Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
    """
    x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
    return tf.transpose(x, perm=[0, 2, 1, 3])
    
  def call(self, v, k, q, mask):
    batch_size = tf.shape(q)[0]
    
    q = self.wq(q)  # (batch_size, seq_len, d_model)
    k = self.wk(k)  # (batch_size, seq_len, d_model)
    v = self.wv(v)  # (batch_size, seq_len, d_model)
    
    q = self.split_heads(q, batch_size)  # (batch_size, num_heads, seq_len_q, depth)
    k = self.split_heads(k, batch_size)  # (batch_size, num_heads, seq_len_k, depth)
    v = self.split_heads(v, batch_size)  # (batch_size, num_heads, seq_len_v, depth)
    
    # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
    # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
    scaled_attention, attention_weights = scaled_dot_product_attention(
        q, k, v, mask)
    
    scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])  # (batch_size, seq_len_q, num_heads, depth)

    concat_attention = tf.reshape(scaled_attention, 
                                  (batch_size, -1, self.d_model))  # (batch_size, seq_len_q, d_model)

    output = self.dense(concat_attention)  # (batch_size, seq_len_q, d_model)
        
    return output, attention_weights

Create a MultiHeadAttention layer to try out. At each location in the sequence, y, the MultiHeadAttention runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.

temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512))  # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
(TensorShape([1, 60, 512]), TensorShape([1, 8, 60, 60]))

Point wise feed forward network

The point wise feed forward network consists of two fully-connected layers with a ReLU activation in between.

def point_wise_feed_forward_network(d_model, dff):
  return tf.keras.Sequential([
      tf.keras.layers.Dense(dff, activation='relu'),  # (batch_size, seq_len, dff)
      tf.keras.layers.Dense(d_model)  # (batch_size, seq_len, d_model)
  ])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
TensorShape([64, 50, 512])

Encoder and decoder

transformer

The transformer model follows the same general pattern as a standard sequence to sequence with attention model.

  • The input sentence is passed through N encoder layers that generate an output for each word/token in the sequence.
  • The decoder attends to the encoder's output and its own input (self-attention) to predict the next word.

Encoder layer

Each encoder layer consists of sublayers:

  1. Multi-head attention (with padding mask)
  2. Point wise feed forward networks.

Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.

The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis. There are N encoder layers in the transformer.

class EncoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(EncoderLayer, self).__init__()

    self.mha = MultiHeadAttention(d_model, num_heads)
    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    
  def call(self, x, training, mask):

    attn_output, _ = self.mha(x, x, x, mask)  # (batch_size, input_seq_len, d_model)
    attn_output = self.dropout1(attn_output, training=training)
    out1 = self.layernorm1(x + attn_output)  # (batch_size, input_seq_len, d_model)
    
    ffn_output = self.ffn(out1)  # (batch_size, input_seq_len, d_model)
    ffn_output = self.dropout2(ffn_output, training=training)
    out2 = self.layernorm2(out1 + ffn_output)  # (batch_size, input_seq_len, d_model)
    
    return out2
sample_encoder_layer = EncoderLayer(512, 8, 2048)

sample_encoder_layer_output = sample_encoder_layer(
    tf.random.uniform((64, 43, 512)), False, None)

sample_encoder_layer_output.shape  # (batch_size, input_seq_len, d_model)
TensorShape([64, 43, 512])

Decoder layer

Each decoder layer consists of sublayers:

  1. Masked multi-head attention (with look ahead mask and padding mask)
  2. Multi-head attention (with padding mask). V (value) and K (key) receive the encoder output as inputs. Q (query) receives the output from the masked multi-head attention sublayer.
  3. Point wise feed forward networks

Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis.

There are N decoder layers in the transformer.

As Q receives the output from the decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.

class DecoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(DecoderLayer, self).__init__()

    self.mha1 = MultiHeadAttention(d_model, num_heads)
    self.mha2 = MultiHeadAttention(d_model, num_heads)

    self.ffn = point_wise_feed_forward_network(d_model, dff)
 
    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    self.dropout3 = tf.keras.layers.Dropout(rate)
    
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):
    # enc_output.shape == (batch_size, input_seq_len, d_model)

    attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)  # (batch_size, target_seq_len, d_model)
    attn1 = self.dropout1(attn1, training=training)
    out1 = self.layernorm1(attn1 + x)
    
    attn2, attn_weights_block2 = self.mha2(
        enc_output, enc_output, out1, padding_mask)  # (batch_size, target_seq_len, d_model)
    attn2 = self.dropout2(attn2, training=training)
    out2 = self.layernorm2(attn2 + out1)  # (batch_size, target_seq_len, d_model)
    
    ffn_output = self.ffn(out2)  # (batch_size, target_seq_len, d_model)
    ffn_output = self.dropout3(ffn_output, training=training)
    out3 = self.layernorm3(ffn_output + out2)  # (batch_size, target_seq_len, d_model)
    
    return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(512, 8, 2048)

sample_decoder_layer_output, _, _ = sample_decoder_layer(
    tf.random.uniform((64, 50, 512)), sample_encoder_layer_output, 
    False, None, None)

sample_decoder_layer_output.shape  # (batch_size, target_seq_len, d_model)
TensorShape([64, 50, 512])

Encoder

The Encoder consists of:

  1. Input Embedding
  2. Positional Encoding
  3. N encoder layers

The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.

class Encoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
               maximum_position_encoding, rate=0.1):
    super(Encoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
    self.pos_encoding = positional_encoding(maximum_position_encoding, 
                                            self.d_model)
    
    
    self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
  
    self.dropout = tf.keras.layers.Dropout(rate)
        
  def call(self, x, training, mask):

    seq_len = tf.shape(x)[1]
    
    # adding embedding and position encoding.
    x = self.embedding(x)  # (batch_size, input_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)
    
    for i in range(self.num_layers):
      x = self.enc_layers[i](x, training, mask)
    
    return x  # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, input_vocab_size=8500,
                         maximum_position_encoding=10000)
temp_input = tf.random.uniform((64, 62), dtype=tf.int64, minval=0, maxval=200)

sample_encoder_output = sample_encoder(temp_input, training=False, mask=None)

print (sample_encoder_output.shape)  # (batch_size, input_seq_len, d_model)
(64, 62, 512)

Decoder

The Decoder consists of:

  1. Output Embedding
  2. Positional Encoding
  3. N decoder layers

The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.

class Decoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
               maximum_position_encoding, rate=0.1):
    super(Decoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
    self.pos_encoding = positional_encoding(maximum_position_encoding, d_model)
    
    self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
    self.dropout = tf.keras.layers.Dropout(rate)
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):

    seq_len = tf.shape(x)[1]
    attention_weights = {}
    
    x = self.embedding(x)  # (batch_size, target_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]
    
    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x, block1, block2 = self.dec_layers[i](x, enc_output, training,
                                             look_ahead_mask, padding_mask)
      
      attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
      attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
    
    # x.shape == (batch_size, target_seq_len, d_model)
    return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, target_vocab_size=8000,
                         maximum_position_encoding=5000)
temp_input = tf.random.uniform((64, 26), dtype=tf.int64, minval=0, maxval=200)

output, attn = sample_decoder(temp_input, 
                              enc_output=sample_encoder_output, 
                              training=False,
                              look_ahead_mask=None, 
                              padding_mask=None)

output.shape, attn['decoder_layer2_block2'].shape
(TensorShape([64, 26, 512]), TensorShape([64, 8, 26, 62]))

Create the Transformer

The transformer consists of the encoder, the decoder, and a final linear layer. The output of the decoder is the input to the linear layer and its output is returned.

class Transformer(tf.keras.Model):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, 
               target_vocab_size, pe_input, pe_target, rate=0.1):
    super(Transformer, self).__init__()

    self.encoder = Encoder(num_layers, d_model, num_heads, dff, 
                           input_vocab_size, pe_input, rate)

    self.decoder = Decoder(num_layers, d_model, num_heads, dff, 
                           target_vocab_size, pe_target, rate)

    self.final_layer = tf.keras.layers.Dense(target_vocab_size)
    
  def call(self, inp, tar, training, enc_padding_mask, 
           look_ahead_mask, dec_padding_mask):

    enc_output = self.encoder(inp, training, enc_padding_mask)  # (batch_size, inp_seq_len, d_model)
    
    # dec_output.shape == (batch_size, tar_seq_len, d_model)
    dec_output, attention_weights = self.decoder(
        tar, enc_output, training, look_ahead_mask, dec_padding_mask)
    
    final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)
    
    return final_output, attention_weights
sample_transformer = Transformer(
    num_layers=2, d_model=512, num_heads=8, dff=2048, 
    input_vocab_size=8500, target_vocab_size=8000, 
    pe_input=10000, pe_target=6000)

temp_input = tf.random.uniform((64, 38), dtype=tf.int64, minval=0, maxval=200)
temp_target = tf.random.uniform((64, 36), dtype=tf.int64, minval=0, maxval=200)

fn_out, _ = sample_transformer(temp_input, temp_target, training=False, 
                               enc_padding_mask=None, 
                               look_ahead_mask=None,
                               dec_padding_mask=None)

fn_out.shape  # (batch_size, tar_seq_len, target_vocab_size)
TensorShape([64, 36, 8000])

Set hyperparameters

To keep this example small and relatively fast, the values for num_layers, d_model, and dff have been reduced.

The values used in the base model of the transformer were: num_layers=6, d_model=512, dff=2048. See the paper for all the other versions of the transformer.

num_layers = 4
d_model = 128
dff = 512
num_heads = 8

input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1

Optimizer

Use the Adam optimizer with a custom learning rate scheduler according to the formula in the paper.

$$\Large{lrate = d_{model}^{-0.5} * min(step{\_}num^{-0.5}, step{\_}num * warmup{\_}steps^{-1.5})}$$
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
  def __init__(self, d_model, warmup_steps=4000):
    super(CustomSchedule, self).__init__()
    
    self.d_model = d_model
    self.d_model = tf.cast(self.d_model, tf.float32)

    self.warmup_steps = warmup_steps
    
  def __call__(self, step):
    arg1 = tf.math.rsqrt(step)
    arg2 = step * (self.warmup_steps ** -1.5)
    
    return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)

optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, 
                                     epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)

plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
Text(0.5, 0, 'Train Step')

png

Loss and metrics

Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
def loss_function(real, pred):
  mask = tf.math.logical_not(tf.math.equal(real, 0))
  loss_ = loss_object(real, pred)

  mask = tf.cast(mask, dtype=loss_.dtype)
  loss_ *= mask
  
  return tf.reduce_sum(loss_)/tf.reduce_sum(mask)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    name='train_accuracy')

Training and checkpointing

transformer = Transformer(num_layers, d_model, num_heads, dff,
                          input_vocab_size, target_vocab_size, 
                          pe_input=input_vocab_size, 
                          pe_target=target_vocab_size,
                          rate=dropout_rate)
def create_masks(inp, tar):
  # Encoder padding mask
  enc_padding_mask = create_padding_mask(inp)
  
  # Used in the 2nd attention block in the decoder.
  # This padding mask is used to mask the encoder outputs.
  dec_padding_mask = create_padding_mask(inp)
  
  # Used in the 1st attention block in the decoder.
  # It is used to pad and mask future tokens in the input received by 
  # the decoder.
  look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
  dec_target_padding_mask = create_padding_mask(tar)
  combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
  
  return enc_padding_mask, combined_mask, dec_padding_mask

Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every n epochs.

checkpoint_path = "./checkpoints/train"

ckpt = tf.train.Checkpoint(transformer=transformer,
                           optimizer=optimizer)

ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)

# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
  ckpt.restore(ckpt_manager.latest_checkpoint)
  print ('Latest checkpoint restored!!')

The target is divided into tar_inp and tar_real. tar_inp is passed as an input to the decoder. tar_real is that same input shifted by 1: at each location in tar_inp, tar_real contains the next token that should be predicted.

For example, sentence = "SOS A lion in the jungle is sleeping EOS"

tar_inp = "SOS A lion in the jungle is sleeping"

tar_real = "A lion in the jungle is sleeping EOS"

The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.

During training this example uses teacher-forcing (like in the text generation tutorial). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.

As the transformer predicts each word, self-attention allows it to look at the previous words in the input sequence to better predict the next word.

To prevent the model from peeking at the expected output, the model uses a look-ahead mask.

EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.

train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]

@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
  tar_inp = tar[:, :-1]
  tar_real = tar[:, 1:]
  
  enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
  
  with tf.GradientTape() as tape:
    predictions, _ = transformer(inp, tar_inp, 
                                 True, 
                                 enc_padding_mask, 
                                 combined_mask, 
                                 dec_padding_mask)
    loss = loss_function(tar_real, predictions)

  gradients = tape.gradient(loss, transformer.trainable_variables)    
  optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
  
  train_loss(loss)
  train_accuracy(tar_real, predictions)

Portuguese is used as the input language and English is the target language.

for epoch in range(EPOCHS):
  start = time.time()
  
  train_loss.reset_states()
  train_accuracy.reset_states()
  
  # inp -> portuguese, tar -> english
  for (batch, (inp, tar)) in enumerate(train_dataset):
    train_step(inp, tar)
    
    if batch % 50 == 0:
      print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
          epoch + 1, batch, train_loss.result(), train_accuracy.result()))
      
  if (epoch + 1) % 5 == 0:
    ckpt_save_path = ckpt_manager.save()
    print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
                                                         ckpt_save_path))
    
  print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1, 
                                                train_loss.result(), 
                                                train_accuracy.result()))

  print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
Epoch 1 Batch 0 Loss 8.9863 Accuracy 0.0000
Epoch 1 Batch 50 Loss 8.9328 Accuracy 0.0123
Epoch 1 Batch 100 Loss 8.8492 Accuracy 0.0196
Epoch 1 Batch 150 Loss 8.7521 Accuracy 0.0221
Epoch 1 Batch 200 Loss 8.6286 Accuracy 0.0233
Epoch 1 Batch 250 Loss 8.4740 Accuracy 0.0243
Epoch 1 Batch 300 Loss 8.2965 Accuracy 0.0274
Epoch 1 Batch 350 Loss 8.1120 Accuracy 0.0306
Epoch 1 Batch 400 Loss 7.9307 Accuracy 0.0333
Epoch 1 Batch 450 Loss 7.7643 Accuracy 0.0363
Epoch 1 Batch 500 Loss 7.6181 Accuracy 0.0391
Epoch 1 Batch 550 Loss 7.4853 Accuracy 0.0418
Epoch 1 Batch 600 Loss 7.3643 Accuracy 0.0449
Epoch 1 Batch 650 Loss 7.2504 Accuracy 0.0482
Epoch 1 Batch 700 Loss 7.1408 Accuracy 0.0513
Epoch 1 Loss 7.1365 Accuracy 0.0514
Time taken for 1 epoch: 61.71607947349548 secs

Epoch 2 Batch 0 Loss 5.6297 Accuracy 0.0925
Epoch 2 Batch 50 Loss 5.5559 Accuracy 0.0991
Epoch 2 Batch 100 Loss 5.5005 Accuracy 0.1020
Epoch 2 Batch 150 Loss 5.4556 Accuracy 0.1042
Epoch 2 Batch 200 Loss 5.4023 Accuracy 0.1063
Epoch 2 Batch 250 Loss 5.3582 Accuracy 0.1086
Epoch 2 Batch 300 Loss 5.3200 Accuracy 0.1107
Epoch 2 Batch 350 Loss 5.2763 Accuracy 0.1125
Epoch 2 Batch 400 Loss 5.2362 Accuracy 0.1142
Epoch 2 Batch 450 Loss 5.2000 Accuracy 0.1160
Epoch 2 Batch 500 Loss 5.1650 Accuracy 0.1175
Epoch 2 Batch 550 Loss 5.1319 Accuracy 0.1191
Epoch 2 Batch 600 Loss 5.1018 Accuracy 0.1206
Epoch 2 Batch 650 Loss 5.0738 Accuracy 0.1220
Epoch 2 Batch 700 Loss 5.0471 Accuracy 0.1233
Epoch 2 Loss 5.0464 Accuracy 0.1233
Time taken for 1 epoch: 31.906673669815063 secs

Epoch 3 Batch 0 Loss 4.8322 Accuracy 0.1430
Epoch 3 Batch 50 Loss 4.6299 Accuracy 0.1443
Epoch 3 Batch 100 Loss 4.6111 Accuracy 0.1447
Epoch 3 Batch 150 Loss 4.5981 Accuracy 0.1452
Epoch 3 Batch 200 Loss 4.5917 Accuracy 0.1453
Epoch 3 Batch 250 Loss 4.5793 Accuracy 0.1460
Epoch 3 Batch 300 Loss 4.5627 Accuracy 0.1462
Epoch 3 Batch 350 Loss 4.5453 Accuracy 0.1469
Epoch 3 Batch 400 Loss 4.5320 Accuracy 0.1476
Epoch 3 Batch 450 Loss 4.5125 Accuracy 0.1486
Epoch 3 Batch 500 Loss 4.4986 Accuracy 0.1496
Epoch 3 Batch 550 Loss 4.4832 Accuracy 0.1503
Epoch 3 Batch 600 Loss 4.4697 Accuracy 0.1512
Epoch 3 Batch 650 Loss 4.4547 Accuracy 0.1519
Epoch 3 Batch 700 Loss 4.4373 Accuracy 0.1529
Epoch 3 Loss 4.4372 Accuracy 0.1529
Time taken for 1 epoch: 32.18184280395508 secs

Epoch 4 Batch 0 Loss 4.0442 Accuracy 0.1619
Epoch 4 Batch 50 Loss 4.1083 Accuracy 0.1684
Epoch 4 Batch 100 Loss 4.0977 Accuracy 0.1703
Epoch 4 Batch 150 Loss 4.0803 Accuracy 0.1704
Epoch 4 Batch 200 Loss 4.0626 Accuracy 0.1717
Epoch 4 Batch 250 Loss 4.0442 Accuracy 0.1732
Epoch 4 Batch 300 Loss 4.0364 Accuracy 0.1743
Epoch 4 Batch 350 Loss 4.0204 Accuracy 0.1755
Epoch 4 Batch 400 Loss 4.0069 Accuracy 0.1767
Epoch 4 Batch 450 Loss 3.9899 Accuracy 0.1776
Epoch 4 Batch 500 Loss 3.9717 Accuracy 0.1786
Epoch 4 Batch 550 Loss 3.9563 Accuracy 0.1794
Epoch 4 Batch 600 Loss 3.9413 Accuracy 0.1805
Epoch 4 Batch 650 Loss 3.9273 Accuracy 0.1814
Epoch 4 Batch 700 Loss 3.9126 Accuracy 0.1824
Epoch 4 Loss 3.9119 Accuracy 0.1824
Time taken for 1 epoch: 32.31039619445801 secs

Epoch 5 Batch 0 Loss 3.5062 Accuracy 0.2116
Epoch 5 Batch 50 Loss 3.5757 Accuracy 0.1969
Epoch 5 Batch 100 Loss 3.5561 Accuracy 0.2029
Epoch 5 Batch 150 Loss 3.5531 Accuracy 0.2029
Epoch 5 Batch 200 Loss 3.5463 Accuracy 0.2032
Epoch 5 Batch 250 Loss 3.5319 Accuracy 0.2032
Epoch 5 Batch 300 Loss 3.5194 Accuracy 0.2039
Epoch 5 Batch 350 Loss 3.5095 Accuracy 0.2040
Epoch 5 Batch 400 Loss 3.5026 Accuracy 0.2046
Epoch 5 Batch 450 Loss 3.4929 Accuracy 0.2053
Epoch 5 Batch 500 Loss 3.4848 Accuracy 0.2055
Epoch 5 Batch 550 Loss 3.4747 Accuracy 0.2062
Epoch 5 Batch 600 Loss 3.4660 Accuracy 0.2068
Epoch 5 Batch 650 Loss 3.4563 Accuracy 0.2076
Epoch 5 Batch 700 Loss 3.4465 Accuracy 0.2082
Saving checkpoint for epoch 5 at ./checkpoints/train/ckpt-1
Epoch 5 Loss 3.4463 Accuracy 0.2083
Time taken for 1 epoch: 32.58843159675598 secs

Epoch 6 Batch 0 Loss 3.2565 Accuracy 0.2391
Epoch 6 Batch 50 Loss 3.1245 Accuracy 0.2228
Epoch 6 Batch 100 Loss 3.1328 Accuracy 0.2248
Epoch 6 Batch 150 Loss 3.1232 Accuracy 0.2244
Epoch 6 Batch 200 Loss 3.1199 Accuracy 0.2241
Epoch 6 Batch 250 Loss 3.1204 Accuracy 0.2240
Epoch 6 Batch 300 Loss 3.1179 Accuracy 0.2244
Epoch 6 Batch 350 Loss 3.1116 Accuracy 0.2247
Epoch 6 Batch 400 Loss 3.1040 Accuracy 0.2252
Epoch 6 Batch 450 Loss 3.0987 Accuracy 0.2256
Epoch 6 Batch 500 Loss 3.0949 Accuracy 0.2259
Epoch 6 Batch 550 Loss 3.0885 Accuracy 0.2263
Epoch 6 Batch 600 Loss 3.0813 Accuracy 0.2267
Epoch 6 Batch 650 Loss 3.0744 Accuracy 0.2271
Epoch 6 Batch 700 Loss 3.0667 Accuracy 0.2275
Epoch 6 Loss 3.0665 Accuracy 0.2274
Time taken for 1 epoch: 31.416507482528687 secs

Epoch 7 Batch 0 Loss 2.6655 Accuracy 0.2500
Epoch 7 Batch 50 Loss 2.7371 Accuracy 0.2439
Epoch 7 Batch 100 Loss 2.7421 Accuracy 0.2447
Epoch 7 Batch 150 Loss 2.7409 Accuracy 0.2440
Epoch 7 Batch 200 Loss 2.7414 Accuracy 0.2447
Epoch 7 Batch 250 Loss 2.7272 Accuracy 0.2453
Epoch 7 Batch 300 Loss 2.7241 Accuracy 0.2456
Epoch 7 Batch 350 Loss 2.7177 Accuracy 0.2464
Epoch 7 Batch 400 Loss 2.7124 Accuracy 0.2463
Epoch 7 Batch 450 Loss 2.7082 Accuracy 0.2467
Epoch 7 Batch 500 Loss 2.7018 Accuracy 0.2476
Epoch 7 Batch 550 Loss 2.6982 Accuracy 0.2480
Epoch 7 Batch 600 Loss 2.6915 Accuracy 0.2484
Epoch 7 Batch 650 Loss 2.6869 Accuracy 0.2486
Epoch 7 Batch 700 Loss 2.6807 Accuracy 0.2489
Epoch 7 Loss 2.6803 Accuracy 0.2490
Time taken for 1 epoch: 31.46870255470276 secs

Epoch 8 Batch 0 Loss 2.4336 Accuracy 0.2466
Epoch 8 Batch 50 Loss 2.3716 Accuracy 0.2651
Epoch 8 Batch 100 Loss 2.3681 Accuracy 0.2648
Epoch 8 Batch 150 Loss 2.3793 Accuracy 0.2642
Epoch 8 Batch 200 Loss 2.3749 Accuracy 0.2654
Epoch 8 Batch 250 Loss 2.3797 Accuracy 0.2646
Epoch 8 Batch 300 Loss 2.3796 Accuracy 0.2649
Epoch 8 Batch 350 Loss 2.3769 Accuracy 0.2658
Epoch 8 Batch 400 Loss 2.3742 Accuracy 0.2661
Epoch 8 Batch 450 Loss 2.3734 Accuracy 0.2660
Epoch 8 Batch 500 Loss 2.3727 Accuracy 0.2663
Epoch 8 Batch 550 Loss 2.3699 Accuracy 0.2664
Epoch 8 Batch 600 Loss 2.3669 Accuracy 0.2666
Epoch 8 Batch 650 Loss 2.3682 Accuracy 0.2666
Epoch 8 Batch 700 Loss 2.3671 Accuracy 0.2667
Epoch 8 Loss 2.3672 Accuracy 0.2667
Time taken for 1 epoch: 31.854146003723145 secs

Epoch 9 Batch 0 Loss 2.1325 Accuracy 0.2644
Epoch 9 Batch 50 Loss 2.1114 Accuracy 0.2760
Epoch 9 Batch 100 Loss 2.1172 Accuracy 0.2791
Epoch 9 Batch 150 Loss 2.1190 Accuracy 0.2802
Epoch 9 Batch 200 Loss 2.1242 Accuracy 0.2803
Epoch 9 Batch 250 Loss 2.1288 Accuracy 0.2800
Epoch 9 Batch 300 Loss 2.1272 Accuracy 0.2796
Epoch 9 Batch 350 Loss 2.1266 Accuracy 0.2795
Epoch 9 Batch 400 Loss 2.1274 Accuracy 0.2795
Epoch 9 Batch 450 Loss 2.1279 Accuracy 0.2797
Epoch 9 Batch 500 Loss 2.1310 Accuracy 0.2795
Epoch 9 Batch 550 Loss 2.1328 Accuracy 0.2795
Epoch 9 Batch 600 Loss 2.1319 Accuracy 0.2796
Epoch 9 Batch 650 Loss 2.1333 Accuracy 0.2797
Epoch 9 Batch 700 Loss 2.1371 Accuracy 0.2798
Epoch 9 Loss 2.1376 Accuracy 0.2798
Time taken for 1 epoch: 31.95617175102234 secs

Epoch 10 Batch 0 Loss 1.7014 Accuracy 0.3333
Epoch 10 Batch 50 Loss 1.8816 Accuracy 0.2970
Epoch 10 Batch 100 Loss 1.9117 Accuracy 0.2947
Epoch 10 Batch 150 Loss 1.9187 Accuracy 0.2927
Epoch 10 Batch 200 Loss 1.9318 Accuracy 0.2917
Epoch 10 Batch 250 Loss 1.9370 Accuracy 0.2913
Epoch 10 Batch 300 Loss 1.9376 Accuracy 0.2922
Epoch 10 Batch 350 Loss 1.9426 Accuracy 0.2916
Epoch 10 Batch 400 Loss 1.9437 Accuracy 0.2918
Epoch 10 Batch 450 Loss 1.9471 Accuracy 0.2918
Epoch 10 Batch 500 Loss 1.9526 Accuracy 0.2913
Epoch 10 Batch 550 Loss 1.9550 Accuracy 0.2913
Epoch 10 Batch 600 Loss 1.9575 Accuracy 0.2910
Epoch 10 Batch 650 Loss 1.9581 Accuracy 0.2910
Epoch 10 Batch 700 Loss 1.9628 Accuracy 0.2904
Saving checkpoint for epoch 10 at ./checkpoints/train/ckpt-2
Epoch 10 Loss 1.9629 Accuracy 0.2905
Time taken for 1 epoch: 32.070839643478394 secs

Epoch 11 Batch 0 Loss 1.7319 Accuracy 0.3249
Epoch 11 Batch 50 Loss 1.7663 Accuracy 0.3034
Epoch 11 Batch 100 Loss 1.7661 Accuracy 0.3030
Epoch 11 Batch 150 Loss 1.7744 Accuracy 0.3031
Epoch 11 Batch 200 Loss 1.7808 Accuracy 0.3030
Epoch 11 Batch 250 Loss 1.7861 Accuracy 0.3026
Epoch 11 Batch 300 Loss 1.7904 Accuracy 0.3015
Epoch 11 Batch 350 Loss 1.7941 Accuracy 0.3011
Epoch 11 Batch 400 Loss 1.7989 Accuracy 0.3007
Epoch 11 Batch 450 Loss 1.8049 Accuracy 0.3003
Epoch 11 Batch 500 Loss 1.8102 Accuracy 0.3003
Epoch 11 Batch 550 Loss 1.8130 Accuracy 0.3003
Epoch 11 Batch 600 Loss 1.8161 Accuracy 0.2999
Epoch 11 Batch 650 Loss 1.8175 Accuracy 0.2997
Epoch 11 Batch 700 Loss 1.8206 Accuracy 0.2992
Epoch 11 Loss 1.8210 Accuracy 0.2992
Time taken for 1 epoch: 33.863057374954224 secs

Epoch 12 Batch 0 Loss 1.7457 Accuracy 0.3375
Epoch 12 Batch 50 Loss 1.6432 Accuracy 0.3101
Epoch 12 Batch 100 Loss 1.6399 Accuracy 0.3114
Epoch 12 Batch 150 Loss 1.6536 Accuracy 0.3099
Epoch 12 Batch 200 Loss 1.6665 Accuracy 0.3089
Epoch 12 Batch 250 Loss 1.6649 Accuracy 0.3094
Epoch 12 Batch 300 Loss 1.6700 Accuracy 0.3088
Epoch 12 Batch 350 Loss 1.6730 Accuracy 0.3087
Epoch 12 Batch 400 Loss 1.6727 Accuracy 0.3082
Epoch 12 Batch 450 Loss 1.6764 Accuracy 0.3083
Epoch 12 Batch 500 Loss 1.6813 Accuracy 0.3083
Epoch 12 Batch 550 Loss 1.6886 Accuracy 0.3079
Epoch 12 Batch 600 Loss 1.6939 Accuracy 0.3075
Epoch 12 Batch 650 Loss 1.6980 Accuracy 0.3072
Epoch 12 Batch 700 Loss 1.7031 Accuracy 0.3069
Epoch 12 Loss 1.7034 Accuracy 0.3069
Time taken for 1 epoch: 32.46517491340637 secs

Epoch 13 Batch 0 Loss 1.7229 Accuracy 0.3171
Epoch 13 Batch 50 Loss 1.5344 Accuracy 0.3198
Epoch 13 Batch 100 Loss 1.5353 Accuracy 0.3180
Epoch 13 Batch 150 Loss 1.5407 Accuracy 0.3161
Epoch 13 Batch 200 Loss 1.5510 Accuracy 0.3167
Epoch 13 Batch 250 Loss 1.5624 Accuracy 0.3156
Epoch 13 Batch 300 Loss 1.5702 Accuracy 0.3149
Epoch 13 Batch 350 Loss 1.5744 Accuracy 0.3145
Epoch 13 Batch 400 Loss 1.5783 Accuracy 0.3140
Epoch 13 Batch 450 Loss 1.5812 Accuracy 0.3142
Epoch 13 Batch 500 Loss 1.5859 Accuracy 0.3137
Epoch 13 Batch 550 Loss 1.5904 Accuracy 0.3135
Epoch 13 Batch 600 Loss 1.5955 Accuracy 0.3134
Epoch 13 Batch 650 Loss 1.6006 Accuracy 0.3133
Epoch 13 Batch 700 Loss 1.6054 Accuracy 0.3130
Epoch 13 Loss 1.6056 Accuracy 0.3130
Time taken for 1 epoch: 32.23147201538086 secs

Epoch 14 Batch 0 Loss 1.4287 Accuracy 0.2956
Epoch 14 Batch 50 Loss 1.4237 Accuracy 0.3234
Epoch 14 Batch 100 Loss 1.4341 Accuracy 0.3249
Epoch 14 Batch 150 Loss 1.4436 Accuracy 0.3244
Epoch 14 Batch 200 Loss 1.4549 Accuracy 0.3237
Epoch 14 Batch 250 Loss 1.4693 Accuracy 0.3220
Epoch 14 Batch 300 Loss 1.4779 Accuracy 0.3214
Epoch 14 Batch 350 Loss 1.4811 Accuracy 0.3224
Epoch 14 Batch 400 Loss 1.4866 Accuracy 0.3222
Epoch 14 Batch 450 Loss 1.4924 Accuracy 0.3219
Epoch 14 Batch 500 Loss 1.4977 Accuracy 0.3211
Epoch 14 Batch 550 Loss 1.5043 Accuracy 0.3202
Epoch 14 Batch 600 Loss 1.5090 Accuracy 0.3199
Epoch 14 Batch 650 Loss 1.5128 Accuracy 0.3197
Epoch 14 Batch 700 Loss 1.5172 Accuracy 0.3191
Epoch 14 Loss 1.5174 Accuracy 0.3192
Time taken for 1 epoch: 32.95823097229004 secs

Epoch 15 Batch 0 Loss 1.4420 Accuracy 0.3674
Epoch 15 Batch 50 Loss 1.3535 Accuracy 0.3295
Epoch 15 Batch 100 Loss 1.3716 Accuracy 0.3278
Epoch 15 Batch 150 Loss 1.3823 Accuracy 0.3270
Epoch 15 Batch 200 Loss 1.3942 Accuracy 0.3268
Epoch 15 Batch 250 Loss 1.3982 Accuracy 0.3267
Epoch 15 Batch 300 Loss 1.4003 Accuracy 0.3264
Epoch 15 Batch 350 Loss 1.4058 Accuracy 0.3264
Epoch 15 Batch 400 Loss 1.4079 Accuracy 0.3262
Epoch 15 Batch 450 Loss 1.4134 Accuracy 0.3260
Epoch 15 Batch 500 Loss 1.4188 Accuracy 0.3256
Epoch 15 Batch 550 Loss 1.4230 Accuracy 0.3254
Epoch 15 Batch 600 Loss 1.4291 Accuracy 0.3250
Epoch 15 Batch 650 Loss 1.4352 Accuracy 0.3250
Epoch 15 Batch 700 Loss 1.4406 Accuracy 0.3249
Saving checkpoint for epoch 15 at ./checkpoints/train/ckpt-3
Epoch 15 Loss 1.4409 Accuracy 0.3248
Time taken for 1 epoch: 32.75894618034363 secs

Epoch 16 Batch 0 Loss 1.3760 Accuracy 0.3479
Epoch 16 Batch 50 Loss 1.2931 Accuracy 0.3367
Epoch 16 Batch 100 Loss 1.3007 Accuracy 0.3361
Epoch 16 Batch 150 Loss 1.3079 Accuracy 0.3346
Epoch 16 Batch 200 Loss 1.3174 Accuracy 0.3338
Epoch 16 Batch 250 Loss 1.3246 Accuracy 0.3328
Epoch 16 Batch 300 Loss 1.3335 Accuracy 0.3319
Epoch 16 Batch 350 Loss 1.3366 Accuracy 0.3318
Epoch 16 Batch 400 Loss 1.3426 Accuracy 0.3313
Epoch 16 Batch 450 Loss 1.3486 Accuracy 0.3314
Epoch 16 Batch 500 Loss 1.3544 Accuracy 0.3309
Epoch 16 Batch 550 Loss 1.3610 Accuracy 0.3304
Epoch 16 Batch 600 Loss 1.3661 Accuracy 0.3302
Epoch 16 Batch 650 Loss 1.3708 Accuracy 0.3297
Epoch 16 Batch 700 Loss 1.3762 Accuracy 0.3294
Epoch 16 Loss 1.3767 Accuracy 0.3294
Time taken for 1 epoch: 32.443130016326904 secs

Epoch 17 Batch 0 Loss 1.2618 Accuracy 0.3316
Epoch 17 Batch 50 Loss 1.2238 Accuracy 0.3431
Epoch 17 Batch 100 Loss 1.2359 Accuracy 0.3427
Epoch 17 Batch 150 Loss 1.2401 Accuracy 0.3404
Epoch 17 Batch 200 Loss 1.2499 Accuracy 0.3386
Epoch 17 Batch 250 Loss 1.2612 Accuracy 0.3365
Epoch 17 Batch 300 Loss 1.2730 Accuracy 0.3350
Epoch 17 Batch 350 Loss 1.2812 Accuracy 0.3340
Epoch 17 Batch 400 Loss 1.2865 Accuracy 0.3336
Epoch 17 Batch 450 Loss 1.2920 Accuracy 0.3342
Epoch 17 Batch 500 Loss 1.2951 Accuracy 0.3343
Epoch 17 Batch 550 Loss 1.2995 Accuracy 0.3339
Epoch 17 Batch 600 Loss 1.3054 Accuracy 0.3338
Epoch 17 Batch 650 Loss 1.3099 Accuracy 0.3334
Epoch 17 Batch 700 Loss 1.3151 Accuracy 0.3332
Epoch 17 Loss 1.3156 Accuracy 0.3331
Time taken for 1 epoch: 32.265987157821655 secs

Epoch 18 Batch 0 Loss 1.0617 Accuracy 0.3577
Epoch 18 Batch 50 Loss 1.1608 Accuracy 0.3432
Epoch 18 Batch 100 Loss 1.1767 Accuracy 0.3428
Epoch 18 Batch 150 Loss 1.1881 Accuracy 0.3438
Epoch 18 Batch 200 Loss 1.2026 Accuracy 0.3411
Epoch 18 Batch 250 Loss 1.2089 Accuracy 0.3406
Epoch 18 Batch 300 Loss 1.2144 Accuracy 0.3403
Epoch 18 Batch 350 Loss 1.2197 Accuracy 0.3404
Epoch 18 Batch 400 Loss 1.2252 Accuracy 0.3398
Epoch 18 Batch 450 Loss 1.2300 Accuracy 0.3399
Epoch 18 Batch 500 Loss 1.2357 Accuracy 0.3389
Epoch 18 Batch 550 Loss 1.2406 Accuracy 0.3391
Epoch 18 Batch 600 Loss 1.2472 Accuracy 0.3388
Epoch 18 Batch 650 Loss 1.2530 Accuracy 0.3385
Epoch 18 Batch 700 Loss 1.2610 Accuracy 0.3376
Epoch 18 Loss 1.2613 Accuracy 0.3377
Time taken for 1 epoch: 32.5123724937439 secs

Epoch 19 Batch 0 Loss 1.0554 Accuracy 0.3310
Epoch 19 Batch 50 Loss 1.1357 Accuracy 0.3431
Epoch 19 Batch 100 Loss 1.1340 Accuracy 0.3449
Epoch 19 Batch 150 Loss 1.1455 Accuracy 0.3458
Epoch 19 Batch 200 Loss 1.1557 Accuracy 0.3458
Epoch 19 Batch 250 Loss 1.1656 Accuracy 0.3446
Epoch 19 Batch 300 Loss 1.1730 Accuracy 0.3444
Epoch 19 Batch 350 Loss 1.1772 Accuracy 0.3438
Epoch 19 Batch 400 Loss 1.1803 Accuracy 0.3433
Epoch 19 Batch 450 Loss 1.1864 Accuracy 0.3426
Epoch 19 Batch 500 Loss 1.1923 Accuracy 0.3424
Epoch 19 Batch 550 Loss 1.1989 Accuracy 0.3423
Epoch 19 Batch 600 Loss 1.2036 Accuracy 0.3416
Epoch 19 Batch 650 Loss 1.2095 Accuracy 0.3407
Epoch 19 Batch 700 Loss 1.2148 Accuracy 0.3406
Epoch 19 Loss 1.2151 Accuracy 0.3406
Time taken for 1 epoch: 32.376850605010986 secs

Epoch 20 Batch 0 Loss 1.0415 Accuracy 0.3674
Epoch 20 Batch 50 Loss 1.0739 Accuracy 0.3484
Epoch 20 Batch 100 Loss 1.0864 Accuracy 0.3487
Epoch 20 Batch 150 Loss 1.1010 Accuracy 0.3473
Epoch 20 Batch 200 Loss 1.1088 Accuracy 0.3475
Epoch 20 Batch 250 Loss 1.1175 Accuracy 0.3485
Epoch 20 Batch 300 Loss 1.1256 Accuracy 0.3478
Epoch 20 Batch 350 Loss 1.1310 Accuracy 0.3470
Epoch 20 Batch 400 Loss 1.1385 Accuracy 0.3460
Epoch 20 Batch 450 Loss 1.1459 Accuracy 0.3459
Epoch 20 Batch 500 Loss 1.1482 Accuracy 0.3458
Epoch 20 Batch 550 Loss 1.1524 Accuracy 0.3453
Epoch 20 Batch 600 Loss 1.1587 Accuracy 0.3451
Epoch 20 Batch 650 Loss 1.1636 Accuracy 0.3447
Epoch 20 Batch 700 Loss 1.1691 Accuracy 0.3441
Saving checkpoint for epoch 20 at ./checkpoints/train/ckpt-4
Epoch 20 Loss 1.1694 Accuracy 0.3441
Time taken for 1 epoch: 33.20027160644531 secs


Evaluate

The following steps are used for evaluation:

  • Encode the input sentence using the Portuguese tokenizer (tokenizer_pt). Moreover, add the start and end token so the input is equivalent to what the model is trained with. This is the encoder input.
  • The decoder input is the start token == tokenizer_en.vocab_size.
  • Calculate the padding masks and the look ahead masks.
  • The decoder then outputs the predictions by looking at the encoder output and its own output (self-attention).
  • Select the last word and calculate the argmax of that.
  • Concatenate the predicted word to the decoder input and pass it to the decoder.
  • In this approach, the decoder predicts the next word based on the previous words it predicted.
def evaluate(inp_sentence):
  start_token = [tokenizer_pt.vocab_size]
  end_token = [tokenizer_pt.vocab_size + 1]
  
  # inp sentence is portuguese, hence adding the start and end token
  inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
  encoder_input = tf.expand_dims(inp_sentence, 0)
  
  # as the target is english, the first word to the transformer should be the
  # english start token.
  decoder_input = [tokenizer_en.vocab_size]
  output = tf.expand_dims(decoder_input, 0)
    
  for i in range(MAX_LENGTH):
    enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
        encoder_input, output)
  
    # predictions.shape == (batch_size, seq_len, vocab_size)
    predictions, attention_weights = transformer(encoder_input, 
                                                 output,
                                                 False,
                                                 enc_padding_mask,
                                                 combined_mask,
                                                 dec_padding_mask)
    
    # select the last word from the seq_len dimension
    predictions = predictions[: ,-1:, :]  # (batch_size, 1, vocab_size)

    predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
    
    # return the result if the predicted_id is equal to the end token
    if predicted_id == tokenizer_en.vocab_size+1:
      return tf.squeeze(output, axis=0), attention_weights
    
    # concatentate the predicted_id to the output which is given to the decoder
    # as its input.
    output = tf.concat([output, predicted_id], axis=-1)

  return tf.squeeze(output, axis=0), attention_weights
def plot_attention_weights(attention, sentence, result, layer):
  fig = plt.figure(figsize=(16, 8))
  
  sentence = tokenizer_pt.encode(sentence)
  
  attention = tf.squeeze(attention[layer], axis=0)
  
  for head in range(attention.shape[0]):
    ax = fig.add_subplot(2, 4, head+1)
    
    # plot the attention weights
    ax.matshow(attention[head][:-1, :], cmap='viridis')

    fontdict = {'fontsize': 10}
    
    ax.set_xticks(range(len(sentence)+2))
    ax.set_yticks(range(len(result)))
    
    ax.set_ylim(len(result)-1.5, -0.5)
        
    ax.set_xticklabels(
        ['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'], 
        fontdict=fontdict, rotation=90)
    
    ax.set_yticklabels([tokenizer_en.decode([i]) for i in result 
                        if i < tokenizer_en.vocab_size], 
                       fontdict=fontdict)
    
    ax.set_xlabel('Head {}'.format(head+1))
  
  plt.tight_layout()
  plt.show()
def translate(sentence, plot=''):
  result, attention_weights = evaluate(sentence)
  
  predicted_sentence = tokenizer_en.decode([i for i in result 
                                            if i < tokenizer_en.vocab_size])  

  print('Input: {}'.format(sentence))
  print('Predicted translation: {}'.format(predicted_sentence))
  
  if plot:
    plot_attention_weights(attention_weights, sentence, result, plot)
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
Input: este é um problema que temos que resolver.
Predicted translation: this is a problem that we have to solve .....
Real translation: this is a problem we have to solve .

translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
Input: os meus vizinhos ouviram sobre esta ideia.
Predicted translation: my neighbors heard about this idea .
Real translation: and my neighboring homes heard about this idea .

translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
Input: vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.
Predicted translation: so i 'm going to just share with you some stories of some magic things that happened there .
Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .

You can pass different layers and attention blocks of the decoder to the plot parameter.

translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
Input: este é o primeiro livro que eu fiz.
Predicted translation: this is the first book i had to..

png

Real translation: this is the first book i've ever done.

Summary

In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking and how to create a transformer.

Try using a different dataset to train the transformer. You can also create the base transformer or transformer XL by changing the hyperparameters above. You can also use the layers defined here to create BERT and train state-of-the-art models. Furthermore, you can implement beam search to get better predictions.