Transformer model for language understanding


This tutorial trains a Transformer model to translate Portuguese to English. This is an advanced example that assumes knowledge of text generation and attention.

The core idea behind the Transformer model is self-attention—the ability to attend to different positions of the input sequence to compute a representation of that sequence. Transformer creates stacks of self-attention layers and is explained below in the sections Scaled dot product attention and Multi-head attention.

A transformer model handles variable-sized input using stacks of self-attention layers instead of RNNs or CNNs. This general architecture has a number of advantages:

  • It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, StarCraft units).
  • Layer outputs can be calculated in parallel, instead of in series like an RNN.
  • Distant items can affect each other's output without passing through many RNN-steps, or convolution layers (see Scene Memory Transformer for example).
  • It can learn long-range dependencies. This is a challenge in many sequence tasks.

The downsides of this architecture are:

  • For a time-series, the output for a time-step is calculated from the entire history instead of only the inputs and current hidden-state. This may be less efficient.
  • If the input does have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words.

After training the model in this notebook, you will be able to input a Portuguese sentence and retrieve the English translation.

Attention heatmap

from __future__ import absolute_import, division, print_function, unicode_literals

try:
  !pip install -q tf-nightly
except Exception:
  pass
import tensorflow_datasets as tfds
import tensorflow as tf

import time
import numpy as np
import matplotlib.pyplot as plt

Setup input pipeline

Use TFDS to load the Portuguese-English translation dataset from the TED Talks Open Translation Project.

This dataset contains approximately 50000 training examples, 1100 validation examples, and 2000 test examples.

examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
                               as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
Downloading and preparing dataset ted_hrlr_translate/pt_to_en/1.0.0 (download: 124.94 MiB, generated: Unknown size, total: 124.94 MiB) to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/1.0.0...

Dataset ted_hrlr_translate downloaded and prepared to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/1.0.0. Subsequent calls will reuse this data.
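
For example, you can print a single raw sentence pair from train_examples to see the text the tokenizer will be built from (a quick, optional check):

for pt, en in train_examples.take(1):
  print(pt.numpy().decode('utf-8'))
  print(en.numpy().decode('utf-8'))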

Create a custom subword tokenizer from the training dataset.

tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (en.numpy() for pt, en in train_examples), target_vocab_size=2**13)

tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
sample_string = 'Transformer is awesome.'

tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))

original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))

assert original_string == sample_string
Tokenized string is [7915, 1248, 7946, 7194, 13, 2799, 7877]
The original string: Transformer is awesome.

The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.

for ts in tokenized_string:
  print ('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))
7915 ----> T
1248 ----> ran
7946 ----> s
7194 ----> former 
13 ----> is 
2799 ----> awesome
7877 ----> .
BUFFER_SIZE = 20000
BATCH_SIZE = 64

Add a start and end token to the input and target.

def encode(lang1, lang2):
  lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
      lang1.numpy()) + [tokenizer_pt.vocab_size+1]

  lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
      lang2.numpy()) + [tokenizer_en.vocab_size+1]
  
  return lang1, lang2

You want to use Dataset.map to apply this function to each element of the dataset. Dataset.map runs in graph mode.

  • Graph tensors do not have a value.
  • In graph mode you can only use TensorFlow Ops and functions.

So you can't .map this function directly: You need to wrap it in a tf.py_function. The tf.py_function will pass regular tensors (with a value and a .numpy() method to access it) to the wrapped python function.

def tf_encode(pt, en):
  result_pt, result_en = tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
  result_pt.set_shape([None])
  result_en.set_shape([None])

  return result_pt, result_en
MAX_LENGTH = 40
def filter_max_length(x, y, max_length=MAX_LENGTH):
  return tf.logical_and(tf.size(x) <= max_length,
                        tf.size(y) <= max_length)
train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
# cache the dataset to memory to get a speedup while reading from it.
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE)
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)


val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(BATCH_SIZE)
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
(<tf.Tensor: shape=(64, 38), dtype=int64, numpy=
 array([[8214,  342, 3032, ...,    0,    0,    0],
        [8214,   95,  198, ...,    0,    0,    0],
        [8214, 4479, 7990, ...,    0,    0,    0],
        ...,
        [8214,  584,   12, ...,    0,    0,    0],
        [8214,   59, 1548, ...,    0,    0,    0],
        [8214,  118,   34, ...,    0,    0,    0]])>,
 <tf.Tensor: shape=(64, 40), dtype=int64, numpy=
 array([[8087,   98,   25, ...,    0,    0,    0],
        [8087,   12,   20, ...,    0,    0,    0],
        [8087,   12, 5453, ...,    0,    0,    0],
        ...,
        [8087,   18, 2059, ...,    0,    0,    0],
        [8087,   16, 1436, ...,    0,    0,    0],
        [8087,   15,   57, ...,    0,    0,    0]])>)

Positional encoding

Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence.

The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the similarity of their meaning and their position in the sentence, in the d-dimensional space.

See the notebook on positional encoding to learn more about it. The formula for calculating the positional encoding is as follows:

$$\Large{PE_{(pos, 2i)} = sin(pos / 10000^{2i / d_{model}})} $$
$$\Large{PE_{(pos, 2i+1)} = cos(pos / 10000^{2i / d_{model}})} $$
def get_angles(pos, i, d_model):
  angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
  return pos * angle_rates
def positional_encoding(position, d_model):
  angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                          np.arange(d_model)[np.newaxis, :],
                          d_model)
  
  # apply sin to even indices in the array; 2i
  angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
  
  # apply cos to odd indices in the array; 2i+1
  angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
    
  pos_encoding = angle_rads[np.newaxis, ...]
    
  return tf.cast(pos_encoding, dtype=tf.float32)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)

plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
(1, 50, 512)

Positional encoding plot

Masking

Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value 0 is present: it outputs a 1 at those locations, and a 0 otherwise.

def create_padding_mask(seq):
  seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
  
  # add extra dimensions to add the padding
  # to the attention logits.
  return seq[:, tf.newaxis, tf.newaxis, :]  # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
<tf.Tensor: shape=(3, 1, 1, 5), dtype=float32, numpy=
array([[[[0., 0., 1., 1., 0.]]],


       [[[0., 0., 0., 1., 1.]]],


       [[[1., 1., 1., 0., 0.]]]], dtype=float32)>

The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.

This means that to predict the third word, only the first and second words will be used. Similarly, to predict the fourth word, only the first, second and third words will be used, and so on.

def create_look_ahead_mask(size):
  mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
  return mask  # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[0., 1., 1.],
       [0., 0., 1.],
       [0., 0., 0.]], dtype=float32)>

Scaled dot product attention

scaled_dot_product_attention

The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:

$$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k}}) V} $$

The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into regions where it has small gradients, resulting in a very hard softmax.

For example, consider that Q and K have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of dk. Hence, the square root of dk is used for scaling (and not any other number), so that the matmul of Q and K keeps a mean of 0 and variance of 1, and you get a gentler softmax.
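
You can verify this scaling argument numerically. The snippet below is a small illustration rather than part of the model code: with random normal q and k, the variance of the raw attention logits is roughly the depth, and dividing by the square root of the depth brings it back to roughly 1.

# tf is already imported above.
depth = 64
q = tf.random.normal((1000, depth))
k = tf.random.normal((1000, depth))
logits = tf.matmul(q, k, transpose_b=True)

print(tf.math.reduce_variance(logits))                               # roughly depth, i.e. ~64
print(tf.math.reduce_variance(logits / tf.math.sqrt(float(depth))))  # roughly 1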

The mask is multiplied with -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K and is applied immediately before a softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output.

def scaled_dot_product_attention(q, k, v, mask):
  """Calculate the attention weights.
  q, k, v must have matching leading dimensions.
  k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
  The mask has different shapes depending on its type(padding or look ahead) 
  but it must be broadcastable for addition.
  
  Args:
    q: query shape == (..., seq_len_q, depth)
    k: key shape == (..., seq_len_k, depth)
    v: value shape == (..., seq_len_v, depth_v)
    mask: Float tensor with shape broadcastable 
          to (..., seq_len_q, seq_len_k). Defaults to None.
    
  Returns:
    output, attention_weights
  """

  matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)
  
  # scale matmul_qk
  dk = tf.cast(tf.shape(k)[-1], tf.float32)
  scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

  # add the mask to the scaled tensor.
  if mask is not None:
    scaled_attention_logits += (mask * -1e9)  

  # softmax is normalized on the last axis (seq_len_k) so that the scores
  # add up to 1.
  attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)

  output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)

  return output, attention_weights

As the softmax normalization is applied along the K (key) dimension, its values decide the amount of importance given to each key for a given query.

The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words you want to focus on are kept as-is and the irrelevant words are flushed out.

def print_out(q, k, v):
  temp_out, temp_attn = scaled_dot_product_attention(
      q, k, v, None)
  print ('Attention weights are:')
  print (temp_attn)
  print ('Output is:')
  print (temp_out)
np.set_printoptions(suppress=True)

temp_k = tf.constant([[10,0,0],
                      [0,10,0],
                      [0,0,10],
                      [0,0,10]], dtype=tf.float32)  # (4, 3)

temp_v = tf.constant([[   1,0],
                      [  10,0],
                      [ 100,5],
                      [1000,6]], dtype=tf.float32)  # (4, 2)

# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0. 1. 0. 0.]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[10.  0.]], shape=(1, 2), dtype=float32)
# This query aligns with a repeated key (third and fourth), 
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.  0.  0.5 0.5]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[550.    5.5]], shape=(1, 2), dtype=float32)
# This query aligns equally with the first and second key, 
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.5 0.5 0.  0. ]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[5.5 0. ]], shape=(1, 2), dtype=float32)

Pass all the queries together.

temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32)  # (3, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor(
[[0.  0.  0.5 0.5]
 [0.  1.  0.  0. ]
 [0.5 0.5 0.  0. ]], shape=(3, 4), dtype=float32)
Output is:
tf.Tensor(
[[550.    5.5]
 [ 10.    0. ]
 [  5.5   0. ]], shape=(3, 2), dtype=float32)

Multi-head attention

multi-head attention

Multi-head attention consists of four parts:

  • Linear layers and split into heads.
  • Scaled dot-product attention.
  • Concatenation of heads.
  • Final linear layer.

Each multi-head attention block gets three inputs: Q (query), K (key), V (value). These are put through linear (Dense) layers and split up into multiple heads.

The scaled_dot_product_attention defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using tf.transpose, and tf.reshape) and put through a final Dense layer.

Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information at different positions from different representational spaces. After the split each head has a reduced dimensionality, so the total computation cost is the same as a single head attention with full dimensionality.

class MultiHeadAttention(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads):
    super(MultiHeadAttention, self).__init__()
    self.num_heads = num_heads
    self.d_model = d_model
    
    assert d_model % self.num_heads == 0
    
    self.depth = d_model // self.num_heads
    
    self.wq = tf.keras.layers.Dense(d_model)
    self.wk = tf.keras.layers.Dense(d_model)
    self.wv = tf.keras.layers.Dense(d_model)
    
    self.dense = tf.keras.layers.Dense(d_model)
        
  def split_heads(self, x, batch_size):
    """Split the last dimension into (num_heads, depth).
    Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
    """
    x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
    return tf.transpose(x, perm=[0, 2, 1, 3])
    
  def call(self, v, k, q, mask):
    batch_size = tf.shape(q)[0]
    
    q = self.wq(q)  # (batch_size, seq_len, d_model)
    k = self.wk(k)  # (batch_size, seq_len, d_model)
    v = self.wv(v)  # (batch_size, seq_len, d_model)
    
    q = self.split_heads(q, batch_size)  # (batch_size, num_heads, seq_len_q, depth)
    k = self.split_heads(k, batch_size)  # (batch_size, num_heads, seq_len_k, depth)
    v = self.split_heads(v, batch_size)  # (batch_size, num_heads, seq_len_v, depth)
    
    # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
    # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
    scaled_attention, attention_weights = scaled_dot_product_attention(
        q, k, v, mask)
    
    scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])  # (batch_size, seq_len_q, num_heads, depth)

    concat_attention = tf.reshape(scaled_attention, 
                                  (batch_size, -1, self.d_model))  # (batch_size, seq_len_q, d_model)

    output = self.dense(concat_attention)  # (batch_size, seq_len_q, d_model)
        
    return output, attention_weights

Create a MultiHeadAttention layer to try out. At each location in the sequence, y, the MultiHeadAttention runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.

temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512))  # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
(TensorShape([1, 60, 512]), TensorShape([1, 8, 60, 60]))

Point wise feed forward network

The point wise feed forward network consists of two fully-connected layers with a ReLU activation in between.

def point_wise_feed_forward_network(d_model, dff):
  return tf.keras.Sequential([
      tf.keras.layers.Dense(dff, activation='relu'),  # (batch_size, seq_len, dff)
      tf.keras.layers.Dense(d_model)  # (batch_size, seq_len, d_model)
  ])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
TensorShape([64, 50, 512])

Encoder and decoder

transformer

The transformer model follows the same general pattern as a standard sequence to sequence with attention model.

  • The input sentence is passed through N encoder layers that generate an output for each word/token in the sequence.
  • The decoder attends on the encoder's output and its own input (self-attention) to predict the next word.

Encoder layer

Each encoder layer consists of sublayers:

  1. Multi-head attention (with padding mask)
  2. Point wise feed forward networks.

Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.

The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis. There are N encoder layers in the transformer.

class EncoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(EncoderLayer, self).__init__()

    self.mha = MultiHeadAttention(d_model, num_heads)
    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    
  def call(self, x, training, mask):

    attn_output, _ = self.mha(x, x, x, mask)  # (batch_size, input_seq_len, d_model)
    attn_output = self.dropout1(attn_output, training=training)
    out1 = self.layernorm1(x + attn_output)  # (batch_size, input_seq_len, d_model)
    
    ffn_output = self.ffn(out1)  # (batch_size, input_seq_len, d_model)
    ffn_output = self.dropout2(ffn_output, training=training)
    out2 = self.layernorm2(out1 + ffn_output)  # (batch_size, input_seq_len, d_model)
    
    return out2
sample_encoder_layer = EncoderLayer(512, 8, 2048)

sample_encoder_layer_output = sample_encoder_layer(
    tf.random.uniform((64, 43, 512)), False, None)

sample_encoder_layer_output.shape  # (batch_size, input_seq_len, d_model)
TensorShape([64, 43, 512])

Decoder layer

Each decoder layer consists of sublayers:

  1. Masked multi-head attention (with look ahead mask and padding mask)
  2. Multi-head attention (with padding mask). V (value) and K (key) receive the encoder output as inputs. Q (query) receives the output from the masked multi-head attention sublayer.
  3. Point wise feed forward networks

Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis.

There are N decoder layers in the transformer.

As Q receives the output from decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.

class DecoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(DecoderLayer, self).__init__()

    self.mha1 = MultiHeadAttention(d_model, num_heads)
    self.mha2 = MultiHeadAttention(d_model, num_heads)

    self.ffn = point_wise_feed_forward_network(d_model, dff)
 
    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    self.dropout3 = tf.keras.layers.Dropout(rate)
    
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):
    # enc_output.shape == (batch_size, input_seq_len, d_model)

    attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)  # (batch_size, target_seq_len, d_model)
    attn1 = self.dropout1(attn1, training=training)
    out1 = self.layernorm1(attn1 + x)
    
    attn2, attn_weights_block2 = self.mha2(
        enc_output, enc_output, out1, padding_mask)  # (batch_size, target_seq_len, d_model)
    attn2 = self.dropout2(attn2, training=training)
    out2 = self.layernorm2(attn2 + out1)  # (batch_size, target_seq_len, d_model)
    
    ffn_output = self.ffn(out2)  # (batch_size, target_seq_len, d_model)
    ffn_output = self.dropout3(ffn_output, training=training)
    out3 = self.layernorm3(ffn_output + out2)  # (batch_size, target_seq_len, d_model)
    
    return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(512, 8, 2048)

sample_decoder_layer_output, _, _ = sample_decoder_layer(
    tf.random.uniform((64, 50, 512)), sample_encoder_layer_output, 
    False, None, None)

sample_decoder_layer_output.shape  # (batch_size, target_seq_len, d_model)
TensorShape([64, 50, 512])

Encoder

The Encoder consists of:

  1. Input Embedding
  2. Positional Encoding
  3. N encoder layers

The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.

class Encoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
               maximum_position_encoding, rate=0.1):
    super(Encoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
    self.pos_encoding = positional_encoding(maximum_position_encoding, 
                                            self.d_model)
    
    
    self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
  
    self.dropout = tf.keras.layers.Dropout(rate)
        
  def call(self, x, training, mask):

    seq_len = tf.shape(x)[1]
    
    # adding embedding and position encoding.
    x = self.embedding(x)  # (batch_size, input_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)
    
    for i in range(self.num_layers):
      x = self.enc_layers[i](x, training, mask)
    
    return x  # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, input_vocab_size=8500,
                         maximum_position_encoding=10000)
temp_input = tf.random.uniform((64, 62), dtype=tf.int64, minval=0, maxval=200)

sample_encoder_output = sample_encoder(temp_input, training=False, mask=None)

print (sample_encoder_output.shape)  # (batch_size, input_seq_len, d_model)
(64, 62, 512)

Decoder

The Decoder consists of:

  1. Output Embedding
  2. Positional Encoding
  3. N decoder layers

The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.

class Decoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
               maximum_position_encoding, rate=0.1):
    super(Decoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
    self.pos_encoding = positional_encoding(maximum_position_encoding, d_model)
    
    self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
    self.dropout = tf.keras.layers.Dropout(rate)
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):

    seq_len = tf.shape(x)[1]
    attention_weights = {}
    
    x = self.embedding(x)  # (batch_size, target_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]
    
    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x, block1, block2 = self.dec_layers[i](x, enc_output, training,
                                             look_ahead_mask, padding_mask)
      
      attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
      attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
    
    # x.shape == (batch_size, target_seq_len, d_model)
    return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, target_vocab_size=8000,
                         maximum_position_encoding=5000)
temp_input = tf.random.uniform((64, 26), dtype=tf.int64, minval=0, maxval=200)

output, attn = sample_decoder(temp_input, 
                              enc_output=sample_encoder_output, 
                              training=False,
                              look_ahead_mask=None, 
                              padding_mask=None)

output.shape, attn['decoder_layer2_block2'].shape
(TensorShape([64, 26, 512]), TensorShape([64, 8, 26, 62]))

Create the Transformer

The transformer consists of the encoder, the decoder, and a final linear layer. The output of the decoder is the input to the linear layer and its output is returned.

class Transformer(tf.keras.Model):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, 
               target_vocab_size, pe_input, pe_target, rate=0.1):
    super(Transformer, self).__init__()

    self.encoder = Encoder(num_layers, d_model, num_heads, dff, 
                           input_vocab_size, pe_input, rate)

    self.decoder = Decoder(num_layers, d_model, num_heads, dff, 
                           target_vocab_size, pe_target, rate)

    self.final_layer = tf.keras.layers.Dense(target_vocab_size)
    
  def call(self, inp, tar, training, enc_padding_mask, 
           look_ahead_mask, dec_padding_mask):

    enc_output = self.encoder(inp, training, enc_padding_mask)  # (batch_size, inp_seq_len, d_model)
    
    # dec_output.shape == (batch_size, tar_seq_len, d_model)
    dec_output, attention_weights = self.decoder(
        tar, enc_output, training, look_ahead_mask, dec_padding_mask)
    
    final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)
    
    return final_output, attention_weights
sample_transformer = Transformer(
    num_layers=2, d_model=512, num_heads=8, dff=2048, 
    input_vocab_size=8500, target_vocab_size=8000, 
    pe_input=10000, pe_target=6000)

temp_input = tf.random.uniform((64, 38), dtype=tf.int64, minval=0, maxval=200)
temp_target = tf.random.uniform((64, 36), dtype=tf.int64, minval=0, maxval=200)

fn_out, _ = sample_transformer(temp_input, temp_target, training=False, 
                               enc_padding_mask=None, 
                               look_ahead_mask=None,
                               dec_padding_mask=None)

fn_out.shape  # (batch_size, tar_seq_len, target_vocab_size)
TensorShape([64, 36, 8000])

Set hyperparameters

To keep this example small and relatively fast, the values for num_layers, d_model, and dff have been reduced.

The values used in the base model of the transformer were: num_layers=6, d_model=512, dff=2048. See the paper for all the other versions of the transformer.

num_layers = 4
d_model = 128
dff = 512
num_heads = 8

input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1

Optimizer

Use the Adam optimizer with a custom learning rate scheduler according to the formula in the paper.

$$\Large{lrate = d_{model}^{-0.5} * min(step{\_}num^{-0.5}, step{\_}num * warmup{\_}steps^{-1.5})}$$
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
  def __init__(self, d_model, warmup_steps=4000):
    super(CustomSchedule, self).__init__()
    
    self.d_model = d_model
    self.d_model = tf.cast(self.d_model, tf.float32)

    self.warmup_steps = warmup_steps
    
  def __call__(self, step):
    arg1 = tf.math.rsqrt(step)
    arg2 = step * (self.warmup_steps ** -1.5)
    
    return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)

optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, 
                                     epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)

plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
Text(0.5, 0, 'Train Step')

Learning rate schedule plot

Loss and metrics

Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
def loss_function(real, pred):
  mask = tf.math.logical_not(tf.math.equal(real, 0))
  loss_ = loss_object(real, pred)

  mask = tf.cast(mask, dtype=loss_.dtype)
  loss_ *= mask
  
  return tf.reduce_mean(loss_)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    name='train_accuracy')

Training and checkpointing

transformer = Transformer(num_layers, d_model, num_heads, dff,
                          input_vocab_size, target_vocab_size, 
                          pe_input=input_vocab_size, 
                          pe_target=target_vocab_size,
                          rate=dropout_rate)
def create_masks(inp, tar):
  # Encoder padding mask
  enc_padding_mask = create_padding_mask(inp)
  
  # Used in the 2nd attention block in the decoder.
  # This padding mask is used to mask the encoder outputs.
  dec_padding_mask = create_padding_mask(inp)
  
  # Used in the 1st attention block in the decoder.
  # It is used to pad and mask future tokens in the input received by 
  # the decoder.
  look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
  dec_target_padding_mask = create_padding_mask(tar)
  combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
  
  return enc_padding_mask, combined_mask, dec_padding_mask

Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every n epochs.

checkpoint_path = "./checkpoints/train"

ckpt = tf.train.Checkpoint(transformer=transformer,
                           optimizer=optimizer)

ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)

# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
  ckpt.restore(ckpt_manager.latest_checkpoint)
  print ('Latest checkpoint restored!!')

The target is divided into tar_inp and tar_real. tar_inp is passed as an input to the decoder. tar_real is that same input shifted by 1: at each location in tar_inp, tar_real contains the next token that should be predicted.

For example, sentence = "SOS A lion in the jungle is sleeping EOS"

tar_inp = "SOS A lion in the jungle is sleeping"

tar_real = "A lion in the jungle is sleeping EOS"

The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.

During training this example uses teacher-forcing (like in the text generation tutorial). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.

As the transformer predicts each word, self-attention allows it to look at the previous words in the input sequence to better predict the next word.

To prevent the model from peeking at the expected output, the model uses a look-ahead mask.

EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.

train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]

@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
  tar_inp = tar[:, :-1]
  tar_real = tar[:, 1:]
  
  enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
  
  with tf.GradientTape() as tape:
    predictions, _ = transformer(inp, tar_inp, 
                                 True, 
                                 enc_padding_mask, 
                                 combined_mask, 
                                 dec_padding_mask)
    loss = loss_function(tar_real, predictions)

  gradients = tape.gradient(loss, transformer.trainable_variables)    
  optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
  
  train_loss(loss)
  train_accuracy(tar_real, predictions)

Portuguese is used as the input language and English is the target language.

for epoch in range(EPOCHS):
  start = time.time()
  
  train_loss.reset_states()
  train_accuracy.reset_states()
  
  # inp -> portuguese, tar -> english
  for (batch, (inp, tar)) in enumerate(train_dataset):
    train_step(inp, tar)
    
    if batch % 50 == 0:
      print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
          epoch + 1, batch, train_loss.result(), train_accuracy.result()))
      
  if (epoch + 1) % 5 == 0:
    ckpt_save_path = ckpt_manager.save()
    print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
                                                         ckpt_save_path))
    
  print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1, 
                                                train_loss.result(), 
                                                train_accuracy.result()))

  print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
Epoch 1 Batch 0 Loss 4.0974 Accuracy 0.0000
Epoch 1 Batch 50 Loss 4.2258 Accuracy 0.0008
Epoch 1 Batch 100 Loss 4.1818 Accuracy 0.0123
Epoch 1 Batch 150 Loss 4.1287 Accuracy 0.0167
Epoch 1 Batch 200 Loss 4.0642 Accuracy 0.0190
Epoch 1 Batch 250 Loss 3.9945 Accuracy 0.0222
Epoch 1 Batch 300 Loss 3.9112 Accuracy 0.0265
Epoch 1 Batch 350 Loss 3.8268 Accuracy 0.0299
Epoch 1 Batch 400 Loss 3.7428 Accuracy 0.0324
Epoch 1 Batch 450 Loss 3.6657 Accuracy 0.0350
Epoch 1 Batch 500 Loss 3.5983 Accuracy 0.0380
Epoch 1 Batch 550 Loss 3.5357 Accuracy 0.0417
Epoch 1 Batch 600 Loss 3.4784 Accuracy 0.0454
Epoch 1 Batch 650 Loss 3.4216 Accuracy 0.0488
Epoch 1 Batch 700 Loss 3.3684 Accuracy 0.0522
Epoch 1 Loss 3.3666 Accuracy 0.0523
Time taken for 1 epoch: 62.33258581161499 secs

Epoch 2 Batch 0 Loss 2.7554 Accuracy 0.0987
Epoch 2 Batch 50 Loss 2.5884 Accuracy 0.1034
Epoch 2 Batch 100 Loss 2.5527 Accuracy 0.1050
Epoch 2 Batch 150 Loss 2.5346 Accuracy 0.1073
Epoch 2 Batch 200 Loss 2.5093 Accuracy 0.1095
Epoch 2 Batch 250 Loss 2.4779 Accuracy 0.1110
Epoch 2 Batch 300 Loss 2.4609 Accuracy 0.1128
Epoch 2 Batch 350 Loss 2.4480 Accuracy 0.1144
Epoch 2 Batch 400 Loss 2.4298 Accuracy 0.1159
Epoch 2 Batch 450 Loss 2.4174 Accuracy 0.1174
Epoch 2 Batch 500 Loss 2.4110 Accuracy 0.1191
Epoch 2 Batch 550 Loss 2.3981 Accuracy 0.1204
Epoch 2 Batch 600 Loss 2.3867 Accuracy 0.1217
Epoch 2 Batch 650 Loss 2.3727 Accuracy 0.1228
Epoch 2 Batch 700 Loss 2.3635 Accuracy 0.1240
Epoch 2 Loss 2.3628 Accuracy 0.1240
Time taken for 1 epoch: 32.552741050720215 secs

Epoch 3 Batch 0 Loss 2.2059 Accuracy 0.1398
Epoch 3 Batch 50 Loss 2.1217 Accuracy 0.1407
Epoch 3 Batch 100 Loss 2.1369 Accuracy 0.1419
Epoch 3 Batch 150 Loss 2.1476 Accuracy 0.1435
Epoch 3 Batch 200 Loss 2.1422 Accuracy 0.1439
Epoch 3 Batch 250 Loss 2.1449 Accuracy 0.1444
Epoch 3 Batch 300 Loss 2.1399 Accuracy 0.1451
Epoch 3 Batch 350 Loss 2.1327 Accuracy 0.1460
Epoch 3 Batch 400 Loss 2.1247 Accuracy 0.1463
Epoch 3 Batch 450 Loss 2.1194 Accuracy 0.1469
Epoch 3 Batch 500 Loss 2.1151 Accuracy 0.1476
Epoch 3 Batch 550 Loss 2.1145 Accuracy 0.1482
Epoch 3 Batch 600 Loss 2.1109 Accuracy 0.1486
Epoch 3 Batch 650 Loss 2.1101 Accuracy 0.1492
Epoch 3 Batch 700 Loss 2.1065 Accuracy 0.1497
Epoch 3 Loss 2.1064 Accuracy 0.1497
Time taken for 1 epoch: 32.88726329803467 secs

Epoch 4 Batch 0 Loss 2.0100 Accuracy 0.1706
Epoch 4 Batch 50 Loss 1.9691 Accuracy 0.1610
Epoch 4 Batch 100 Loss 1.9736 Accuracy 0.1620
Epoch 4 Batch 150 Loss 1.9757 Accuracy 0.1630
Epoch 4 Batch 200 Loss 1.9684 Accuracy 0.1637
Epoch 4 Batch 250 Loss 1.9571 Accuracy 0.1645
Epoch 4 Batch 300 Loss 1.9536 Accuracy 0.1653
Epoch 4 Batch 350 Loss 1.9494 Accuracy 0.1663
Epoch 4 Batch 400 Loss 1.9479 Accuracy 0.1670
Epoch 4 Batch 450 Loss 1.9416 Accuracy 0.1677
Epoch 4 Batch 500 Loss 1.9341 Accuracy 0.1685
Epoch 4 Batch 550 Loss 1.9263 Accuracy 0.1696
Epoch 4 Batch 600 Loss 1.9189 Accuracy 0.1706
Epoch 4 Batch 650 Loss 1.9126 Accuracy 0.1716
Epoch 4 Batch 700 Loss 1.9062 Accuracy 0.1726
Epoch 4 Loss 1.9062 Accuracy 0.1726
Time taken for 1 epoch: 32.49288010597229 secs

Epoch 5 Batch 0 Loss 1.6194 Accuracy 0.2059
Epoch 5 Batch 50 Loss 1.7398 Accuracy 0.1920
Epoch 5 Batch 100 Loss 1.7410 Accuracy 0.1908
Epoch 5 Batch 150 Loss 1.7299 Accuracy 0.1918
Epoch 5 Batch 200 Loss 1.7139 Accuracy 0.1919
Epoch 5 Batch 250 Loss 1.7120 Accuracy 0.1930
Epoch 5 Batch 300 Loss 1.7128 Accuracy 0.1938
Epoch 5 Batch 350 Loss 1.7083 Accuracy 0.1945
Epoch 5 Batch 400 Loss 1.7075 Accuracy 0.1957
Epoch 5 Batch 450 Loss 1.7038 Accuracy 0.1967
Epoch 5 Batch 500 Loss 1.7007 Accuracy 0.1978
Epoch 5 Batch 550 Loss 1.6936 Accuracy 0.1986
Epoch 5 Batch 600 Loss 1.6890 Accuracy 0.1992
Epoch 5 Batch 650 Loss 1.6821 Accuracy 0.1996
Epoch 5 Batch 700 Loss 1.6775 Accuracy 0.2005
Saving checkpoint for epoch 5 at ./checkpoints/train/ckpt-1
Epoch 5 Loss 1.6774 Accuracy 0.2005
Time taken for 1 epoch: 33.03623414039612 secs

Epoch 6 Batch 0 Loss 1.4467 Accuracy 0.2165
Epoch 6 Batch 50 Loss 1.5069 Accuracy 0.2158
Epoch 6 Batch 100 Loss 1.5077 Accuracy 0.2171
Epoch 6 Batch 150 Loss 1.5055 Accuracy 0.2166
Epoch 6 Batch 200 Loss 1.5095 Accuracy 0.2167
Epoch 6 Batch 250 Loss 1.5068 Accuracy 0.2172
Epoch 6 Batch 300 Loss 1.5134 Accuracy 0.2183
Epoch 6 Batch 350 Loss 1.5129 Accuracy 0.2191
Epoch 6 Batch 400 Loss 1.5126 Accuracy 0.2194
Epoch 6 Batch 450 Loss 1.5037 Accuracy 0.2195
Epoch 6 Batch 500 Loss 1.5027 Accuracy 0.2200
Epoch 6 Batch 550 Loss 1.4983 Accuracy 0.2205
Epoch 6 Batch 600 Loss 1.4935 Accuracy 0.2210
Epoch 6 Batch 650 Loss 1.4896 Accuracy 0.2216
Epoch 6 Batch 700 Loss 1.4855 Accuracy 0.2220
Epoch 6 Loss 1.4854 Accuracy 0.2220
Time taken for 1 epoch: 32.71041440963745 secs

Epoch 7 Batch 0 Loss 1.5816 Accuracy 0.2487
Epoch 7 Batch 50 Loss 1.3230 Accuracy 0.2376
Epoch 7 Batch 100 Loss 1.3333 Accuracy 0.2390
Epoch 7 Batch 150 Loss 1.3294 Accuracy 0.2400
Epoch 7 Batch 200 Loss 1.3254 Accuracy 0.2403
Epoch 7 Batch 250 Loss 1.3277 Accuracy 0.2400
Epoch 7 Batch 300 Loss 1.3300 Accuracy 0.2412
Epoch 7 Batch 350 Loss 1.3246 Accuracy 0.2419
Epoch 7 Batch 400 Loss 1.3194 Accuracy 0.2424
Epoch 7 Batch 450 Loss 1.3184 Accuracy 0.2430
Epoch 7 Batch 500 Loss 1.3147 Accuracy 0.2434
Epoch 7 Batch 550 Loss 1.3087 Accuracy 0.2436
Epoch 7 Batch 600 Loss 1.3046 Accuracy 0.2441
Epoch 7 Batch 650 Loss 1.3009 Accuracy 0.2441
Epoch 7 Batch 700 Loss 1.2983 Accuracy 0.2445
Epoch 7 Loss 1.2980 Accuracy 0.2444
Time taken for 1 epoch: 33.00773882865906 secs

Epoch 8 Batch 0 Loss 1.0183 Accuracy 0.2400
Epoch 8 Batch 50 Loss 1.1415 Accuracy 0.2588
Epoch 8 Batch 100 Loss 1.1486 Accuracy 0.2609
Epoch 8 Batch 150 Loss 1.1520 Accuracy 0.2617
Epoch 8 Batch 200 Loss 1.1441 Accuracy 0.2604
Epoch 8 Batch 250 Loss 1.1460 Accuracy 0.2607
Epoch 8 Batch 300 Loss 1.1425 Accuracy 0.2610
Epoch 8 Batch 350 Loss 1.1454 Accuracy 0.2619
Epoch 8 Batch 400 Loss 1.1438 Accuracy 0.2622
Epoch 8 Batch 450 Loss 1.1415 Accuracy 0.2627
Epoch 8 Batch 500 Loss 1.1425 Accuracy 0.2627
Epoch 8 Batch 550 Loss 1.1434 Accuracy 0.2630
Epoch 8 Batch 600 Loss 1.1421 Accuracy 0.2636
Epoch 8 Batch 650 Loss 1.1421 Accuracy 0.2636
Epoch 8 Batch 700 Loss 1.1411 Accuracy 0.2638
Epoch 8 Loss 1.1411 Accuracy 0.2638
Time taken for 1 epoch: 33.592687368392944 secs

Epoch 9 Batch 0 Loss 0.9413 Accuracy 0.2758
Epoch 9 Batch 50 Loss 1.0062 Accuracy 0.2770
Epoch 9 Batch 100 Loss 1.0113 Accuracy 0.2783
Epoch 9 Batch 150 Loss 1.0045 Accuracy 0.2780
Epoch 9 Batch 200 Loss 1.0120 Accuracy 0.2773
Epoch 9 Batch 250 Loss 1.0167 Accuracy 0.2770
Epoch 9 Batch 300 Loss 1.0209 Accuracy 0.2779
Epoch 9 Batch 350 Loss 1.0248 Accuracy 0.2779
Epoch 9 Batch 400 Loss 1.0243 Accuracy 0.2780
Epoch 9 Batch 450 Loss 1.0250 Accuracy 0.2784
Epoch 9 Batch 500 Loss 1.0271 Accuracy 0.2787
Epoch 9 Batch 550 Loss 1.0259 Accuracy 0.2783
Epoch 9 Batch 600 Loss 1.0263 Accuracy 0.2783
Epoch 9 Batch 650 Loss 1.0268 Accuracy 0.2782
Epoch 9 Batch 700 Loss 1.0263 Accuracy 0.2782
Epoch 9 Loss 1.0265 Accuracy 0.2783
Time taken for 1 epoch: 32.79008436203003 secs

Epoch 10 Batch 0 Loss 0.8181 Accuracy 0.2829
Epoch 10 Batch 50 Loss 0.9051 Accuracy 0.2915
Epoch 10 Batch 100 Loss 0.9141 Accuracy 0.2922
Epoch 10 Batch 150 Loss 0.9216 Accuracy 0.2917
Epoch 10 Batch 200 Loss 0.9241 Accuracy 0.2902
Epoch 10 Batch 250 Loss 0.9287 Accuracy 0.2890
Epoch 10 Batch 300 Loss 0.9274 Accuracy 0.2891
Epoch 10 Batch 350 Loss 0.9314 Accuracy 0.2891
Epoch 10 Batch 400 Loss 0.9305 Accuracy 0.2889
Epoch 10 Batch 450 Loss 0.9328 Accuracy 0.2889
Epoch 10 Batch 500 Loss 0.9350 Accuracy 0.2893
Epoch 10 Batch 550 Loss 0.9375 Accuracy 0.2890
Epoch 10 Batch 600 Loss 0.9392 Accuracy 0.2890
Epoch 10 Batch 650 Loss 0.9412 Accuracy 0.2889
Epoch 10 Batch 700 Loss 0.9418 Accuracy 0.2887
Saving checkpoint for epoch 10 at ./checkpoints/train/ckpt-2
Epoch 10 Loss 0.9415 Accuracy 0.2886
Time taken for 1 epoch: 32.973877906799316 secs

Epoch 11 Batch 0 Loss 0.7585 Accuracy 0.2993
Epoch 11 Batch 50 Loss 0.8333 Accuracy 0.2960
Epoch 11 Batch 100 Loss 0.8422 Accuracy 0.2996
Epoch 11 Batch 150 Loss 0.8513 Accuracy 0.2996
Epoch 11 Batch 200 Loss 0.8514 Accuracy 0.2983
Epoch 11 Batch 250 Loss 0.8527 Accuracy 0.2986
Epoch 11 Batch 300 Loss 0.8555 Accuracy 0.2983
Epoch 11 Batch 350 Loss 0.8581 Accuracy 0.2979
Epoch 11 Batch 400 Loss 0.8600 Accuracy 0.2978
Epoch 11 Batch 450 Loss 0.8623 Accuracy 0.2985
Epoch 11 Batch 500 Loss 0.8641 Accuracy 0.2983
Epoch 11 Batch 550 Loss 0.8660 Accuracy 0.2979
Epoch 11 Batch 600 Loss 0.8681 Accuracy 0.2977
Epoch 11 Batch 650 Loss 0.8698 Accuracy 0.2979
Epoch 11 Batch 700 Loss 0.8731 Accuracy 0.2977
Epoch 11 Loss 0.8732 Accuracy 0.2976
Time taken for 1 epoch: 32.73931550979614 secs

Epoch 12 Batch 0 Loss 0.8507 Accuracy 0.3377
Epoch 12 Batch 50 Loss 0.7917 Accuracy 0.3129
Epoch 12 Batch 100 Loss 0.7900 Accuracy 0.3099
Epoch 12 Batch 150 Loss 0.7911 Accuracy 0.3095
Epoch 12 Batch 200 Loss 0.7949 Accuracy 0.3090
Epoch 12 Batch 250 Loss 0.7930 Accuracy 0.3076
Epoch 12 Batch 300 Loss 0.7948 Accuracy 0.3068
Epoch 12 Batch 350 Loss 0.7978 Accuracy 0.3064
Epoch 12 Batch 400 Loss 0.7974 Accuracy 0.3059
Epoch 12 Batch 450 Loss 0.8019 Accuracy 0.3064
Epoch 12 Batch 500 Loss 0.8047 Accuracy 0.3060
Epoch 12 Batch 550 Loss 0.8083 Accuracy 0.3057
Epoch 12 Batch 600 Loss 0.8106 Accuracy 0.3055
Epoch 12 Batch 650 Loss 0.8128 Accuracy 0.3053
Epoch 12 Batch 700 Loss 0.8166 Accuracy 0.3052
Epoch 12 Loss 0.8167 Accuracy 0.3052
Time taken for 1 epoch: 32.706034660339355 secs

Epoch 13 Batch 0 Loss 0.7031 Accuracy 0.3277
Epoch 13 Batch 50 Loss 0.7232 Accuracy 0.3151
Epoch 13 Batch 100 Loss 0.7350 Accuracy 0.3164
Epoch 13 Batch 150 Loss 0.7368 Accuracy 0.3150
Epoch 13 Batch 200 Loss 0.7407 Accuracy 0.3151
Epoch 13 Batch 250 Loss 0.7483 Accuracy 0.3146
Epoch 13 Batch 300 Loss 0.7507 Accuracy 0.3144
Epoch 13 Batch 350 Loss 0.7536 Accuracy 0.3138
Epoch 13 Batch 400 Loss 0.7570 Accuracy 0.3140
Epoch 13 Batch 450 Loss 0.7572 Accuracy 0.3133
Epoch 13 Batch 500 Loss 0.7587 Accuracy 0.3126
Epoch 13 Batch 550 Loss 0.7603 Accuracy 0.3122
Epoch 13 Batch 600 Loss 0.7628 Accuracy 0.3120
Epoch 13 Batch 650 Loss 0.7665 Accuracy 0.3119
Epoch 13 Batch 700 Loss 0.7680 Accuracy 0.3116
Epoch 13 Loss 0.7683 Accuracy 0.3116
Time taken for 1 epoch: 32.632673501968384 secs

Epoch 14 Batch 0 Loss 0.5540 Accuracy 0.3290
Epoch 14 Batch 50 Loss 0.6895 Accuracy 0.3241
Epoch 14 Batch 100 Loss 0.6919 Accuracy 0.3230
Epoch 14 Batch 150 Loss 0.6966 Accuracy 0.3245
Epoch 14 Batch 200 Loss 0.6982 Accuracy 0.3228
Epoch 14 Batch 250 Loss 0.7032 Accuracy 0.3221
Epoch 14 Batch 300 Loss 0.7065 Accuracy 0.3208
Epoch 14 Batch 350 Loss 0.7104 Accuracy 0.3212
Epoch 14 Batch 400 Loss 0.7131 Accuracy 0.3209
Epoch 14 Batch 450 Loss 0.7148 Accuracy 0.3198
Epoch 14 Batch 500 Loss 0.7183 Accuracy 0.3198
Epoch 14 Batch 550 Loss 0.7224 Accuracy 0.3193
Epoch 14 Batch 600 Loss 0.7250 Accuracy 0.3190
Epoch 14 Batch 650 Loss 0.7262 Accuracy 0.3184
Epoch 14 Batch 700 Loss 0.7287 Accuracy 0.3180
Epoch 14 Loss 0.7285 Accuracy 0.3180
Time taken for 1 epoch: 32.88183283805847 secs

Epoch 15 Batch 0 Loss 0.7164 Accuracy 0.3438
Epoch 15 Batch 50 Loss 0.6428 Accuracy 0.3267
Epoch 15 Batch 100 Loss 0.6532 Accuracy 0.3280
Epoch 15 Batch 150 Loss 0.6570 Accuracy 0.3249
Epoch 15 Batch 200 Loss 0.6634 Accuracy 0.3244
Epoch 15 Batch 250 Loss 0.6669 Accuracy 0.3245
Epoch 15 Batch 300 Loss 0.6694 Accuracy 0.3251
Epoch 15 Batch 350 Loss 0.6724 Accuracy 0.3247
Epoch 15 Batch 400 Loss 0.6761 Accuracy 0.3243
Epoch 15 Batch 450 Loss 0.6773 Accuracy 0.3239
Epoch 15 Batch 500 Loss 0.6802 Accuracy 0.3230
Epoch 15 Batch 550 Loss 0.6819 Accuracy 0.3226
Epoch 15 Batch 600 Loss 0.6849 Accuracy 0.3225
Epoch 15 Batch 650 Loss 0.6890 Accuracy 0.3223
Epoch 15 Batch 700 Loss 0.6921 Accuracy 0.3223
Saving checkpoint for epoch 15 at ./checkpoints/train/ckpt-3
Epoch 15 Loss 0.6920 Accuracy 0.3222
Time taken for 1 epoch: 33.07868552207947 secs

Epoch 16 Batch 0 Loss 0.5939 Accuracy 0.3117
Epoch 16 Batch 50 Loss 0.6186 Accuracy 0.3355
Epoch 16 Batch 100 Loss 0.6219 Accuracy 0.3306
Epoch 16 Batch 150 Loss 0.6206 Accuracy 0.3296
Epoch 16 Batch 200 Loss 0.6289 Accuracy 0.3295
Epoch 16 Batch 250 Loss 0.6349 Accuracy 0.3296
Epoch 16 Batch 300 Loss 0.6381 Accuracy 0.3294
Epoch 16 Batch 350 Loss 0.6385 Accuracy 0.3293
Epoch 16 Batch 400 Loss 0.6408 Accuracy 0.3290
Epoch 16 Batch 450 Loss 0.6434 Accuracy 0.3286
Epoch 16 Batch 500 Loss 0.6477 Accuracy 0.3289
Epoch 16 Batch 550 Loss 0.6500 Accuracy 0.3283
Epoch 16 Batch 600 Loss 0.6524 Accuracy 0.3278
Epoch 16 Batch 650 Loss 0.6546 Accuracy 0.3275
Epoch 16 Batch 700 Loss 0.6585 Accuracy 0.3273
Epoch 16 Loss 0.6589 Accuracy 0.3273
Time taken for 1 epoch: 32.945231437683105 secs

Epoch 17 Batch 0 Loss 0.5783 Accuracy 0.3424
Epoch 17 Batch 50 Loss 0.5952 Accuracy 0.3379
Epoch 17 Batch 100 Loss 0.5975 Accuracy 0.3375
Epoch 17 Batch 150 Loss 0.6055 Accuracy 0.3375
Epoch 17 Batch 200 Loss 0.6073 Accuracy 0.3370
Epoch 17 Batch 250 Loss 0.6099 Accuracy 0.3365
Epoch 17 Batch 300 Loss 0.6119 Accuracy 0.3359
Epoch 17 Batch 350 Loss 0.6158 Accuracy 0.3362
Epoch 17 Batch 400 Loss 0.6176 Accuracy 0.3349
Epoch 17 Batch 450 Loss 0.6205 Accuracy 0.3343
Epoch 17 Batch 500 Loss 0.6221 Accuracy 0.3338
Epoch 17 Batch 550 Loss 0.6246 Accuracy 0.3331
Epoch 17 Batch 600 Loss 0.6273 Accuracy 0.3329
Epoch 17 Batch 650 Loss 0.6292 Accuracy 0.3324
Epoch 17 Batch 700 Loss 0.6311 Accuracy 0.3317
Epoch 17 Loss 0.6315 Accuracy 0.3317
Time taken for 1 epoch: 32.830636739730835 secs

Epoch 18 Batch 0 Loss 0.7152 Accuracy 0.3960
Epoch 18 Batch 50 Loss 0.5675 Accuracy 0.3375
Epoch 18 Batch 100 Loss 0.5697 Accuracy 0.3383
Epoch 18 Batch 150 Loss 0.5727 Accuracy 0.3387
Epoch 18 Batch 200 Loss 0.5785 Accuracy 0.3388
Epoch 18 Batch 250 Loss 0.5818 Accuracy 0.3386
Epoch 18 Batch 300 Loss 0.5835 Accuracy 0.3376
Epoch 18 Batch 350 Loss 0.5869 Accuracy 0.3375
Epoch 18 Batch 400 Loss 0.5899 Accuracy 0.3377
Epoch 18 Batch 450 Loss 0.5935 Accuracy 0.3377
Epoch 18 Batch 500 Loss 0.5957 Accuracy 0.3372
Epoch 18 Batch 550 Loss 0.5973 Accuracy 0.3366
Epoch 18 Batch 600 Loss 0.6012 Accuracy 0.3361
Epoch 18 Batch 650 Loss 0.6040 Accuracy 0.3359
Epoch 18 Batch 700 Loss 0.6071 Accuracy 0.3356
Epoch 18 Loss 0.6074 Accuracy 0.3356
Time taken for 1 epoch: 33.422287940979004 secs

Epoch 19 Batch 0 Loss 0.4556 Accuracy 0.3345
Epoch 19 Batch 50 Loss 0.5324 Accuracy 0.3404
Epoch 19 Batch 100 Loss 0.5406 Accuracy 0.3426
Epoch 19 Batch 150 Loss 0.5476 Accuracy 0.3433
Epoch 19 Batch 200 Loss 0.5507 Accuracy 0.3420
Epoch 19 Batch 250 Loss 0.5547 Accuracy 0.3428
Epoch 19 Batch 300 Loss 0.5585 Accuracy 0.3427
Epoch 19 Batch 350 Loss 0.5625 Accuracy 0.3431
Epoch 19 Batch 400 Loss 0.5642 Accuracy 0.3422
Epoch 19 Batch 450 Loss 0.5682 Accuracy 0.3420
Epoch 19 Batch 500 Loss 0.5711 Accuracy 0.3410
Epoch 19 Batch 550 Loss 0.5730 Accuracy 0.3407
Epoch 19 Batch 600 Loss 0.5774 Accuracy 0.3404
Epoch 19 Batch 650 Loss 0.5788 Accuracy 0.3398
Epoch 19 Batch 700 Loss 0.5835 Accuracy 0.3395
Epoch 19 Loss 0.5836 Accuracy 0.3395
Time taken for 1 epoch: 32.78404426574707 secs

Epoch 20 Batch 0 Loss 0.5919 Accuracy 0.3615
Epoch 20 Batch 50 Loss 0.5133 Accuracy 0.3495
Epoch 20 Batch 100 Loss 0.5166 Accuracy 0.3471
Epoch 20 Batch 150 Loss 0.5235 Accuracy 0.3460
Epoch 20 Batch 200 Loss 0.5289 Accuracy 0.3465
Epoch 20 Batch 250 Loss 0.5337 Accuracy 0.3466
Epoch 20 Batch 300 Loss 0.5384 Accuracy 0.3470
Epoch 20 Batch 350 Loss 0.5434 Accuracy 0.3467
Epoch 20 Batch 400 Loss 0.5460 Accuracy 0.3463
Epoch 20 Batch 450 Loss 0.5482 Accuracy 0.3453
Epoch 20 Batch 500 Loss 0.5504 Accuracy 0.3449
Epoch 20 Batch 550 Loss 0.5537 Accuracy 0.3446
Epoch 20 Batch 600 Loss 0.5567 Accuracy 0.3442
Epoch 20 Batch 650 Loss 0.5589 Accuracy 0.3439
Epoch 20 Batch 700 Loss 0.5612 Accuracy 0.3431
Saving checkpoint for epoch 20 at ./checkpoints/train/ckpt-4
Epoch 20 Loss 0.5613 Accuracy 0.3430
Time taken for 1 epoch: 32.95736527442932 secs

Evaluate

The following steps are used for evaluation:

  • Encode the input sentence using the Portuguese tokenizer (tokenizer_pt). Moreover, add the start and end token so the input is equivalent to what the model is trained with. This is the encoder input.
  • The decoder input is the start token == tokenizer_en.vocab_size.
  • Calculate the padding masks and the look ahead masks.
  • The decoder then outputs the predictions by looking at the encoder output and its own output (self-attention).
  • Select the last word and calculate the argmax of that.
  • Concatenate the predicted word to the decoder input and pass it to the decoder.
  • In this approach, the decoder predicts the next word based on the previous words it predicted.
def evaluate(inp_sentence):
  start_token = [tokenizer_pt.vocab_size]
  end_token = [tokenizer_pt.vocab_size + 1]
  
  # inp sentence is portuguese, hence adding the start and end token
  inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
  encoder_input = tf.expand_dims(inp_sentence, 0)
  
  # as the target is english, the first word to the transformer should be the
  # english start token.
  decoder_input = [tokenizer_en.vocab_size]
  output = tf.expand_dims(decoder_input, 0)
    
  for i in range(MAX_LENGTH):
    enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
        encoder_input, output)
  
    # predictions.shape == (batch_size, seq_len, vocab_size)
    predictions, attention_weights = transformer(encoder_input, 
                                                 output,
                                                 False,
                                                 enc_padding_mask,
                                                 combined_mask,
                                                 dec_padding_mask)
    
    # select the last word from the seq_len dimension
    predictions = predictions[: ,-1:, :]  # (batch_size, 1, vocab_size)

    predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
    
    # return the result if the predicted_id is equal to the end token
    if predicted_id == tokenizer_en.vocab_size+1:
      return tf.squeeze(output, axis=0), attention_weights
    
    # concatenate the predicted_id to the output which is given to the decoder
    # as its input.
    output = tf.concat([output, predicted_id], axis=-1)

  return tf.squeeze(output, axis=0), attention_weights
def plot_attention_weights(attention, sentence, result, layer):
  fig = plt.figure(figsize=(16, 8))
  
  sentence = tokenizer_pt.encode(sentence)
  
  attention = tf.squeeze(attention[layer], axis=0)
  
  for head in range(attention.shape[0]):
    ax = fig.add_subplot(2, 4, head+1)
    
    # plot the attention weights
    ax.matshow(attention[head][:-1, :], cmap='viridis')

    fontdict = {'fontsize': 10}
    
    ax.set_xticks(range(len(sentence)+2))
    ax.set_yticks(range(len(result)))
    
    ax.set_ylim(len(result)-1.5, -0.5)
        
    ax.set_xticklabels(
        ['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'], 
        fontdict=fontdict, rotation=90)
    
    ax.set_yticklabels([tokenizer_en.decode([i]) for i in result 
                        if i < tokenizer_en.vocab_size], 
                       fontdict=fontdict)
    
    ax.set_xlabel('Head {}'.format(head+1))
  
  plt.tight_layout()
  plt.show()
def translate(sentence, plot=''):
  result, attention_weights = evaluate(sentence)
  
  predicted_sentence = tokenizer_en.decode([i for i in result 
                                            if i < tokenizer_en.vocab_size])  

  print('Input: {}'.format(sentence))
  print('Predicted translation: {}'.format(predicted_sentence))
  
  if plot:
    plot_attention_weights(attention_weights, sentence, result, plot)
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
Input: este é um problema que temos que resolver.
Predicted translation: this is a problem that we have to solve the problem of progress .
Real translation: this is a problem we have to solve .
translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
Input: os meus vizinhos ouviram sobre esta ideia.
Predicted translation: my neighbors heard about this idea .
Real translation: and my neighboring homes heard about this idea .
translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
Input: vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.
Predicted translation: so i 'm going to be quickly to share with you some of some magic things that happened .
Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .

You can pass different layers and attention blocks of the decoder to the plot parameter.

translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
Input: este é o primeiro livro que eu fiz.
Predicted translation: this is the first book i did .

Attention weights plot

Real translation: this is the first book i've ever done.

Summary

In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking and how to create a transformer.

Try using a different dataset to train the transformer. You can also create the base transformer or transformer XL by changing the hyperparameters above. You can also use the layers defined here to create BERT and train state of the art models. Furthermore, you can implement beam search to get better predictions.
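
For example, two of those variations only require changing the setup cells above. The sketch below is not run in this notebook; it assumes the ru_to_en config of ted_hrlr_translate is available in your TFDS version, and uses the base-model hyperparameter values quoted earlier.

# A different TED Talks language pair, e.g. Russian to English.
examples, metadata = tfds.load('ted_hrlr_translate/ru_to_en', with_info=True,
                               as_supervised=True)

# Hyperparameters of the base transformer from the paper.
num_layers = 6
d_model = 512
dff = 2048
num_heads = 8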