Transformer model for language understanding

This tutorial trains a Transformer model to translate Portuguese to English. This is an advanced example that assumes knowledge of text generation and attention.

The core idea behind the Transformer model is self-attention: the ability to attend to different positions of the input sequence to compute a representation of that sequence. The Transformer creates stacks of self-attention layers, which are explained below in the sections Scaled dot product attention and Multi-head attention.

A transformer model handles variable-sized input using stacks of self-attention layers instead of RNNs or CNNs. This general architecture has a number of advantages:

  • It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, StarCraft units).
  • Layer outputs can be calculated in parallel, instead of in series as in an RNN.
  • Distant items can affect each other's output without passing through many RNN-steps, or convolution layers (see Scene Memory Transformer for example).
  • It can learn long-range dependencies. This is a challenge in many sequence tasks.

The downsides of this architecture are:

  • For a time-series, the output for a time-step is calculated from the entire history instead of only the inputs and current hidden-state. This may be less efficient.
  • If the input does have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words.

After training the model in this notebook, you will be able to input a Portuguese sentence and get back the English translation.

[figure: attention heatmap]

from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow_datasets as tfds
import tensorflow as tf

import time
import numpy as np
import matplotlib.pyplot as plt

Setup input pipeline

Use TFDS to load the Portuguese-English translation dataset from the TED Talks Open Translation Project.

This dataset contains approximately 50,000 training examples, 1,100 validation examples, and 2,000 test examples.

examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
                               as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
Downloading and preparing dataset ted_hrlr_translate (124.94 MiB) to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/0.0.1...

Dataset ted_hrlr_translate downloaded and prepared to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/0.0.1. Subsequent calls will reuse this data.

Create a custom subword tokenizer from the training dataset.

tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (en.numpy() for pt, en in train_examples), target_vocab_size=2**13)

tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
sample_string = 'Transformer is awesome.'

tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))

original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))

assert original_string == sample_string
Tokenized string is [7915, 1248, 7946, 7194, 13, 2799, 7877]
The original string: Transformer is awesome.

The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.

for ts in tokenized_string:
  print ('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))
7915 ----> T
1248 ----> ran
7946 ----> s
7194 ----> former 
13 ----> is 
2799 ----> awesome
7877 ----> .
BUFFER_SIZE = 20000
BATCH_SIZE = 64

Add a start and end token to the input and target.

def encode(lang1, lang2):
  lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
      lang1.numpy()) + [tokenizer_pt.vocab_size+1]

  lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
      lang2.numpy()) + [tokenizer_en.vocab_size+1]
  
  return lang1, lang2
MAX_LENGTH = 40
def filter_max_length(x, y, max_length=MAX_LENGTH):
  return tf.logical_and(tf.size(x) <= max_length,
                        tf.size(y) <= max_length)

Operations inside .map() run in graph mode and receive graph tensors that do not have a numpy attribute. The tokenizer expects a string or Unicode symbol to encode into integers. Hence, you need to run the encoding inside a tf.py_function, which receives an eager tensor whose numpy attribute contains the string value.

def tf_encode(pt, en):
  return tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
# cache the dataset to memory to get a speedup while reading from it.
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(
    BATCH_SIZE, padded_shapes=([-1], [-1]))
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)


val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(
    BATCH_SIZE, padded_shapes=([-1], [-1]))
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
(<tf.Tensor: id=207688, shape=(64, 40), dtype=int64, numpy=
 array([[8214, 1259,    5, ...,    0,    0,    0],
        [8214,  299,   13, ...,    0,    0,    0],
        [8214,   59,    8, ...,    0,    0,    0],
        ...,
        [8214,   95,    3, ...,    0,    0,    0],
        [8214, 5157,    1, ...,    0,    0,    0],
        [8214, 4479, 7990, ...,    0,    0,    0]])>,
 <tf.Tensor: id=207689, shape=(64, 40), dtype=int64, numpy=
 array([[8087,   18,   12, ...,    0,    0,    0],
        [8087,  634,   30, ...,    0,    0,    0],
        [8087,   16,   13, ...,    0,    0,    0],
        ...,
        [8087,   12,   20, ...,    0,    0,    0],
        [8087,   17, 4981, ...,    0,    0,    0],
        [8087,   12, 5453, ...,    0,    0,    0]])>)

Positional encoding

Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence.

The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the similarity of their meaning and their position in the sentence, in the d-dimensional space.

See the notebook on positional encoding to learn more about it. The formula for calculating the positional encoding is as follows:

$$\Large{PE_{(pos, 2i)} = \sin(pos / 10000^{2i / d_{model}})} $$
$$\Large{PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i / d_{model}})} $$
def get_angles(pos, i, d_model):
  angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
  return pos * angle_rates
def positional_encoding(position, d_model):
  angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                          np.arange(d_model)[np.newaxis, :],
                          d_model)
  
  # apply sin to even indices in the array; 2i
  angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
  
  # apply cos to odd indices in the array; 2i+1
  angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
    
  pos_encoding = angle_rads[np.newaxis, ...]
    
  return tf.cast(pos_encoding, dtype=tf.float32)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)

plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
(1, 50, 512)

[figure: positional encoding heatmap, depth vs. position]

Masking

Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value 0 is present: it outputs a 1 at those locations, and a 0 otherwise.

def create_padding_mask(seq):
  seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
  
  # add extra dimensions to add the padding
  # to the attention logits.
  return seq[:, tf.newaxis, tf.newaxis, :]  # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
<tf.Tensor: id=207703, shape=(3, 1, 1, 5), dtype=float32, numpy=
array([[[[0., 0., 1., 1., 0.]]],


       [[[0., 0., 0., 1., 1.]]],


       [[[1., 1., 1., 0., 0.]]]], dtype=float32)>

The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.

This means that to predict the third word, only the first and second words will be used. Similarly, to predict the fourth word, only the first, second, and third words will be used, and so on.

def create_look_ahead_mask(size):
  mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
  return mask  # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
<tf.Tensor: id=207718, shape=(3, 3), dtype=float32, numpy=
array([[0., 1., 1.],
       [0., 0., 1.],
       [0., 0., 0.]], dtype=float32)>

Scaled dot product attention

scaled_dot_product_attention

The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:

$$\Large{Attention(Q, K, V) = \mathrm{softmax}_k\left(\frac{QK^T}{\sqrt{d_k}}\right) V} $$

The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into regions where it has extremely small gradients, resulting in a very hard softmax.

For example, consider that Q and K have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of d_k. Hence, the square root of d_k is used for scaling (and not any other number), because the matmul of Q and K should have a mean of 0 and variance of 1, so that you get a gentler softmax.
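
You can verify this numerically (a minimal sketch, not part of the original notebook): for random Q and K with unit variance, the variance of the raw dot products grows with the depth, while the scaled logits stay near variance 1.

for dk in (64, 512):
  q = tf.random.normal((1000, dk))
  k = tf.random.normal((1000, dk))
  logits = tf.matmul(q, k, transpose_b=True)
  print(dk,
        tf.math.reduce_variance(logits).numpy(),                 # ~ dk
        tf.math.reduce_variance(logits / (dk ** 0.5)).numpy())   # ~ 1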

The mask is multiplied by -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K, and is applied immediately before a softmax. The goal is to zero out these cells: large negative inputs to the softmax come out near zero.
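
A quick check of that claim (a minimal sketch, not part of the original notebook):

logits = tf.constant([2.0, 1.0, 2.0])
mask = tf.constant([0.0, 0.0, 1.0])          # mask out the last position
print(tf.nn.softmax(logits + mask * -1e9))   # ~[0.73, 0.27, 0.]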

def scaled_dot_product_attention(q, k, v, mask):
  """Calculate the attention weights.
  q, k, v must have matching leading dimensions.
  k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
  The mask has different shapes depending on its type(padding or look ahead) 
  but it must be broadcastable for addition.
  
  Args:
    q: query shape == (..., seq_len_q, depth)
    k: key shape == (..., seq_len_k, depth)
    v: value shape == (..., seq_len_v, depth_v)
    mask: Float tensor with shape broadcastable 
          to (..., seq_len_q, seq_len_k). Defaults to None.
    
  Returns:
    output, attention_weights
  """

  matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)
  
  # scale matmul_qk
  dk = tf.cast(tf.shape(k)[-1], tf.float32)
  scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

  # add the mask to the scaled tensor.
  if mask is not None:
    scaled_attention_logits += (mask * -1e9)  

  # softmax is normalized on the last axis (seq_len_k) so that the scores
  # add up to 1.
  attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)

  output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)

  return output, attention_weights

As the softmax normalization is applied along the K dimension (seq_len_k), its values decide the amount of importance given to each key for a given query.

The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words you want to focus on are kept as-is and the irrelevant words are flushed out.

def print_out(q, k, v):
  temp_out, temp_attn = scaled_dot_product_attention(
      q, k, v, None)
  print ('Attention weights are:')
  print (temp_attn)
  print ('Output is:')
  print (temp_out)
np.set_printoptions(suppress=True)

temp_k = tf.constant([[10,0,0],
                      [0,10,0],
                      [0,0,10],
                      [0,0,10]], dtype=tf.float32)  # (4, 3)

temp_v = tf.constant([[   1,0],
                      [  10,0],
                      [ 100,5],
                      [1000,6]], dtype=tf.float32)  # (4, 2)

# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0. 1. 0. 0.]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[10.  0.]], shape=(1, 2), dtype=float32)
# This query aligns with a repeated key (third and fourth), 
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.  0.  0.5 0.5]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[550.    5.5]], shape=(1, 2), dtype=float32)
# This query aligns equally with the first and second key, 
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.5 0.5 0.  0. ]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[5.5 0. ]], shape=(1, 2), dtype=float32)

Pass all the queries together.

temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32)  # (3, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor(
[[0.  0.  0.5 0.5]
 [0.  1.  0.  0. ]
 [0.5 0.5 0.  0. ]], shape=(3, 4), dtype=float32)
Output is:
tf.Tensor(
[[550.    5.5]
 [ 10.    0. ]
 [  5.5   0. ]], shape=(3, 2), dtype=float32)

Multi-head attention

multi-head attention

Multi-head attention consists of four parts:

  • Linear layers and split into heads.
  • Scaled dot-product attention.
  • Concatenation of heads.
  • Final linear layer.

Each multi-head attention block gets three inputs: Q (query), K (key), and V (value). These are put through linear (Dense) layers and split up into multiple heads.

The scaled_dot_product_attention defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using tf.transpose, and tf.reshape) and put through a final Dense layer.

Instead of one single attention head, Q, K, and V are split into multiple heads because this allows the model to jointly attend to information at different positions from different representational spaces. After the split, each head has a reduced dimensionality (for instance, with d_model = 512 and num_heads = 8, each head works with a depth of 64), so the total computation cost is the same as that of a single attention head with full dimensionality.

class MultiHeadAttention(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads):
    super(MultiHeadAttention, self).__init__()
    self.num_heads = num_heads
    self.d_model = d_model
    
    assert d_model % self.num_heads == 0
    
    self.depth = d_model // self.num_heads
    
    self.wq = tf.keras.layers.Dense(d_model)
    self.wk = tf.keras.layers.Dense(d_model)
    self.wv = tf.keras.layers.Dense(d_model)
    
    self.dense = tf.keras.layers.Dense(d_model)
        
  def split_heads(self, x, batch_size):
    """Split the last dimension into (num_heads, depth).
    Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
    """
    x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
    return tf.transpose(x, perm=[0, 2, 1, 3])
    
  def call(self, v, k, q, mask):
    batch_size = tf.shape(q)[0]
    
    q = self.wq(q)  # (batch_size, seq_len, d_model)
    k = self.wk(k)  # (batch_size, seq_len, d_model)
    v = self.wv(v)  # (batch_size, seq_len, d_model)
    
    q = self.split_heads(q, batch_size)  # (batch_size, num_heads, seq_len_q, depth)
    k = self.split_heads(k, batch_size)  # (batch_size, num_heads, seq_len_k, depth)
    v = self.split_heads(v, batch_size)  # (batch_size, num_heads, seq_len_v, depth)
    
    # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
    # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
    scaled_attention, attention_weights = scaled_dot_product_attention(
        q, k, v, mask)
    
    scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])  # (batch_size, seq_len_q, num_heads, depth)

    concat_attention = tf.reshape(scaled_attention, 
                                  (batch_size, -1, self.d_model))  # (batch_size, seq_len_q, d_model)

    output = self.dense(concat_attention)  # (batch_size, seq_len_q, d_model)
        
    return output, attention_weights

Create a MultiHeadAttention layer to try out. At each location in the sequence, y, the MultiHeadAttention runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.

temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512))  # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
(TensorShape([1, 60, 512]), TensorShape([1, 8, 60, 60]))

Point wise feed forward network

The point-wise feed-forward network consists of two fully-connected layers with a ReLU activation in between.

def point_wise_feed_forward_network(d_model, dff):
  return tf.keras.Sequential([
      tf.keras.layers.Dense(dff, activation='relu'),  # (batch_size, seq_len, dff)
      tf.keras.layers.Dense(d_model)  # (batch_size, seq_len, d_model)
  ])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
TensorShape([64, 50, 512])

Encoder and decoder

transformer

The transformer model follows the same general pattern as a standard sequence-to-sequence model with attention.

  • The input sentence is passed through N encoder layers that generate an output for each word/token in the sequence.
  • The decoder attends to the encoder's output and its own input (self-attention) to predict the next word.

Encoder layer

Each encoder layer consists of sublayers:

  1. Multi-head attention (with padding mask)
  2. Point wise feed forward networks.

Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.

The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis. There are N encoder layers in the transformer.

class EncoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(EncoderLayer, self).__init__()

    self.mha = MultiHeadAttention(d_model, num_heads)
    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    
  def call(self, x, training, mask):

    attn_output, _ = self.mha(x, x, x, mask)  # (batch_size, input_seq_len, d_model)
    attn_output = self.dropout1(attn_output, training=training)
    out1 = self.layernorm1(x + attn_output)  # (batch_size, input_seq_len, d_model)
    
    ffn_output = self.ffn(out1)  # (batch_size, input_seq_len, d_model)
    ffn_output = self.dropout2(ffn_output, training=training)
    out2 = self.layernorm2(out1 + ffn_output)  # (batch_size, input_seq_len, d_model)
    
    return out2
sample_encoder_layer = EncoderLayer(512, 8, 2048)

sample_encoder_layer_output = sample_encoder_layer(
    tf.random.uniform((64, 43, 512)), False, None)

sample_encoder_layer_output.shape  # (batch_size, input_seq_len, d_model)
TensorShape([64, 43, 512])

Decoder layer

Each decoder layer consists of sublayers:

  1. Masked multi-head attention (with look ahead mask and padding mask)
  2. Multi-head attention (with padding mask). V (value) and K (key) receive the encoder output as inputs. Q (query) receives the output from the masked multi-head attention sublayer.
  3. Point wise feed forward networks

Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis.

There are N decoder layers in the transformer.

As Q receives the output from the decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.

class DecoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(DecoderLayer, self).__init__()

    self.mha1 = MultiHeadAttention(d_model, num_heads)
    self.mha2 = MultiHeadAttention(d_model, num_heads)

    self.ffn = point_wise_feed_forward_network(d_model, dff)
 
    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    self.dropout3 = tf.keras.layers.Dropout(rate)
    
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):
    # enc_output.shape == (batch_size, input_seq_len, d_model)

    attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)  # (batch_size, target_seq_len, d_model)
    attn1 = self.dropout1(attn1, training=training)
    out1 = self.layernorm1(attn1 + x)
    
    attn2, attn_weights_block2 = self.mha2(
        enc_output, enc_output, out1, padding_mask)  # (batch_size, target_seq_len, d_model)
    attn2 = self.dropout2(attn2, training=training)
    out2 = self.layernorm2(attn2 + out1)  # (batch_size, target_seq_len, d_model)
    
    ffn_output = self.ffn(out2)  # (batch_size, target_seq_len, d_model)
    ffn_output = self.dropout3(ffn_output, training=training)
    out3 = self.layernorm3(ffn_output + out2)  # (batch_size, target_seq_len, d_model)
    
    return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(512, 8, 2048)

sample_decoder_layer_output, _, _ = sample_decoder_layer(
    tf.random.uniform((64, 50, 512)), sample_encoder_layer_output, 
    False, None, None)

sample_decoder_layer_output.shape  # (batch_size, target_seq_len, d_model)
TensorShape([64, 50, 512])

Encoder

The Encoder consists of:

  1. Input Embedding
  2. Positional Encoding
  3. N encoder layers

The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.

class Encoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
               maximum_position_encoding, rate=0.1):
    super(Encoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
    self.pos_encoding = positional_encoding(maximum_position_encoding, 
                                            self.d_model)
    
    
    self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
  
    self.dropout = tf.keras.layers.Dropout(rate)
        
  def call(self, x, training, mask):

    seq_len = tf.shape(x)[1]
    
    # adding embedding and position encoding.
    x = self.embedding(x)  # (batch_size, input_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)
    
    for i in range(self.num_layers):
      x = self.enc_layers[i](x, training, mask)
    
    return x  # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, input_vocab_size=8500,
                         maximum_position_encoding=10000)
temp_input = tf.random.uniform((64, 62), dtype=tf.int64, minval=0, maxval=200)

sample_encoder_output = sample_encoder(temp_input, training=False, mask=None)

print (sample_encoder_output.shape)  # (batch_size, input_seq_len, d_model)
(64, 62, 512)

Decoder

The Decoder consists of:

  1. Output Embedding
  2. Positional Encoding
  3. N decoder layers

The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.

class Decoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
               maximum_position_encoding, rate=0.1):
    super(Decoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
    self.pos_encoding = positional_encoding(maximum_position_encoding, d_model)
    
    self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
    self.dropout = tf.keras.layers.Dropout(rate)
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):

    seq_len = tf.shape(x)[1]
    attention_weights = {}
    
    x = self.embedding(x)  # (batch_size, target_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]
    
    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x, block1, block2 = self.dec_layers[i](x, enc_output, training,
                                             look_ahead_mask, padding_mask)
      
      attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
      attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
    
    # x.shape == (batch_size, target_seq_len, d_model)
    return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, target_vocab_size=8000,
                         maximum_position_encoding=5000)
temp_input = tf.random.uniform((64, 26), dtype=tf.int64, minval=0, maxval=200)

output, attn = sample_decoder(temp_input, 
                              enc_output=sample_encoder_output, 
                              training=False,
                              look_ahead_mask=None, 
                              padding_mask=None)

output.shape, attn['decoder_layer2_block2'].shape
(TensorShape([64, 26, 512]), TensorShape([64, 8, 26, 62]))

Create the Transformer

The Transformer consists of the encoder, the decoder, and a final linear layer. The output of the decoder is the input to the linear layer, and its output is returned.

class Transformer(tf.keras.Model):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, 
               target_vocab_size, pe_input, pe_target, rate=0.1):
    super(Transformer, self).__init__()

    self.encoder = Encoder(num_layers, d_model, num_heads, dff, 
                           input_vocab_size, pe_input, rate)

    self.decoder = Decoder(num_layers, d_model, num_heads, dff, 
                           target_vocab_size, pe_target, rate)

    self.final_layer = tf.keras.layers.Dense(target_vocab_size)
    
  def call(self, inp, tar, training, enc_padding_mask, 
           look_ahead_mask, dec_padding_mask):

    enc_output = self.encoder(inp, training, enc_padding_mask)  # (batch_size, inp_seq_len, d_model)
    
    # dec_output.shape == (batch_size, tar_seq_len, d_model)
    dec_output, attention_weights = self.decoder(
        tar, enc_output, training, look_ahead_mask, dec_padding_mask)
    
    final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)
    
    return final_output, attention_weights
sample_transformer = Transformer(
    num_layers=2, d_model=512, num_heads=8, dff=2048, 
    input_vocab_size=8500, target_vocab_size=8000, 
    pe_input=10000, pe_target=6000)

temp_input = tf.random.uniform((64, 38), dtype=tf.int64, minval=0, maxval=200)
temp_target = tf.random.uniform((64, 36), dtype=tf.int64, minval=0, maxval=200)

fn_out, _ = sample_transformer(temp_input, temp_target, training=False, 
                               enc_padding_mask=None, 
                               look_ahead_mask=None,
                               dec_padding_mask=None)

fn_out.shape  # (batch_size, tar_seq_len, target_vocab_size)
TensorShape([64, 36, 8000])

Set hyperparameters

To keep this example small and relatively fast, the values for num_layers, d_model, and dff have been reduced.

The values used in the base model of the transformer were: num_layers=6, d_model=512, dff=2048. See the paper for all the other versions of the transformer.

num_layers = 4
d_model = 128
dff = 512
num_heads = 8

input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1

Optimizer

Use the Adam optimizer with a custom learning rate scheduler according to the formula in the paper.

$$\Large{lrate = d_{model}^{-0.5} \cdot \min(step\_num^{-0.5},\ step\_num \cdot warmup\_steps^{-1.5})}$$
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
  def __init__(self, d_model, warmup_steps=4000):
    super(CustomSchedule, self).__init__()
    
    self.d_model = d_model
    self.d_model = tf.cast(self.d_model, tf.float32)

    self.warmup_steps = warmup_steps
    
  def __call__(self, step):
    arg1 = tf.math.rsqrt(step)
    arg2 = step * (self.warmup_steps ** -1.5)
    
    return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)

optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, 
                                     epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)

plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
Text(0.5, 0, 'Train Step')

[figure: learning rate schedule over training steps]

Loss and metrics

Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
def loss_function(real, pred):
  mask = tf.math.logical_not(tf.math.equal(real, 0))
  loss_ = loss_object(real, pred)

  mask = tf.cast(mask, dtype=loss_.dtype)
  loss_ *= mask
  
  return tf.reduce_mean(loss_)
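
Note that reduce_mean here averages the masked loss over every position, including the padded ones that were zeroed out. A common variant (a minimal sketch, not what this notebook trains with) normalizes by the number of non-pad tokens instead:

def loss_function_masked_mean(real, pred):
  # Same masking as above, but divide by the number of real (non-pad)
  # tokens so that padding does not dilute the average.
  mask = tf.cast(tf.math.logical_not(tf.math.equal(real, 0)), tf.float32)
  loss_ = loss_object(real, pred) * mask
  return tf.reduce_sum(loss_) / tf.reduce_sum(mask)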
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    name='train_accuracy')

Training and checkpointing

transformer = Transformer(num_layers, d_model, num_heads, dff,
                          input_vocab_size, target_vocab_size, 
                          pe_input=input_vocab_size, 
                          pe_target=target_vocab_size,
                          rate=dropout_rate)
def create_masks(inp, tar):
  # Encoder padding mask
  enc_padding_mask = create_padding_mask(inp)
  
  # Used in the 2nd attention block in the decoder.
  # This padding mask is used to mask the encoder outputs.
  dec_padding_mask = create_padding_mask(inp)
  
  # Used in the 1st attention block in the decoder.
  # It is used to pad and mask future tokens in the input received by 
  # the decoder.
  look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
  dec_target_padding_mask = create_padding_mask(tar)
  combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
  
  return enc_padding_mask, combined_mask, dec_padding_mask
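
As a quick sanity check (a minimal sketch, not part of the original notebook), the masks for the (64, 40) validation batches shown earlier have these shapes:

enc_mask, comb_mask, dec_mask = create_masks(pt_batch, en_batch)
print(enc_mask.shape)    # (64, 1, 1, 40)
print(comb_mask.shape)   # (64, 1, 40, 40): look-ahead and target padding combined
print(dec_mask.shape)    # (64, 1, 1, 40)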

Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every n epochs.

checkpoint_path = "./checkpoints/train"

ckpt = tf.train.Checkpoint(transformer=transformer,
                           optimizer=optimizer)

ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)

# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
  ckpt.restore(ckpt_manager.latest_checkpoint)
  print ('Latest checkpoint restored!!')

The target is divided into tar_inp and tar_real. tar_inp is passed as an input to the decoder. tar_real is that same input shifted by 1: at each location in tar_inp, tar_real contains the next token that should be predicted.

For example, sentence = "SOS A lion in the jungle is sleeping EOS"

tar_inp = "SOS A lion in the jungle is sleeping"

tar_real = "A lion in the jungle is sleeping EOS"

The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.

During training this example uses teacher-forcing (like in the text generation tutorial). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.

As the transformer predicts each word, self-attention allows it to look at the previous words in the input sequence to better predict the next word.

To prevent the model from peeking at the expected output, the model uses a look-ahead mask.

EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.

train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]

@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
  tar_inp = tar[:, :-1]
  tar_real = tar[:, 1:]
  
  enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
  
  with tf.GradientTape() as tape:
    predictions, _ = transformer(inp, tar_inp, 
                                 True, 
                                 enc_padding_mask, 
                                 combined_mask, 
                                 dec_padding_mask)
    loss = loss_function(tar_real, predictions)

  gradients = tape.gradient(loss, transformer.trainable_variables)    
  optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
  
  train_loss(loss)
  train_accuracy(tar_real, predictions)

Portuguese is used as the input language and English is the target language.

for epoch in range(EPOCHS):
  start = time.time()
  
  train_loss.reset_states()
  train_accuracy.reset_states()
  
  # inp -> portuguese, tar -> english
  for (batch, (inp, tar)) in enumerate(train_dataset):
    train_step(inp, tar)
    
    if batch % 50 == 0:
      print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
          epoch + 1, batch, train_loss.result(), train_accuracy.result()))
      
  if (epoch + 1) % 5 == 0:
    ckpt_save_path = ckpt_manager.save()
    print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
                                                         ckpt_save_path))
    
  print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1, 
                                                train_loss.result(), 
                                                train_accuracy.result()))

  print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
Epoch 1 Batch 0 Loss 4.4931 Accuracy 0.0000
Epoch 1 Batch 50 Loss 4.2370 Accuracy 0.0007
Epoch 1 Batch 100 Loss 4.2078 Accuracy 0.0129
Epoch 1 Batch 150 Loss 4.1596 Accuracy 0.0176
Epoch 1 Batch 200 Loss 4.1016 Accuracy 0.0199
Epoch 1 Batch 250 Loss 4.0178 Accuracy 0.0214
Epoch 1 Batch 300 Loss 3.9340 Accuracy 0.0233
Epoch 1 Batch 350 Loss 3.8430 Accuracy 0.0277
Epoch 1 Batch 400 Loss 3.7612 Accuracy 0.0315
Epoch 1 Batch 450 Loss 3.6796 Accuracy 0.0349
Epoch 1 Batch 500 Loss 3.6099 Accuracy 0.0379
Epoch 1 Batch 550 Loss 3.5429 Accuracy 0.0413
Epoch 1 Batch 600 Loss 3.4834 Accuracy 0.0449
Epoch 1 Batch 650 Loss 3.4272 Accuracy 0.0484
Epoch 1 Batch 700 Loss 3.3724 Accuracy 0.0519
Epoch 1 Loss 3.3703 Accuracy 0.0520
Time taken for 1 epoch: 68.7436408996582 secs

Epoch 2 Batch 0 Loss 2.8022 Accuracy 0.1026
Epoch 2 Batch 50 Loss 2.6126 Accuracy 0.1041
Epoch 2 Batch 100 Loss 2.5722 Accuracy 0.1068
Epoch 2 Batch 150 Loss 2.5367 Accuracy 0.1086
Epoch 2 Batch 200 Loss 2.5073 Accuracy 0.1101
Epoch 2 Batch 250 Loss 2.4837 Accuracy 0.1123
Epoch 2 Batch 300 Loss 2.4680 Accuracy 0.1142
Epoch 2 Batch 350 Loss 2.4563 Accuracy 0.1161
Epoch 2 Batch 400 Loss 2.4411 Accuracy 0.1177
Epoch 2 Batch 450 Loss 2.4259 Accuracy 0.1191
Epoch 2 Batch 500 Loss 2.4100 Accuracy 0.1205
Epoch 2 Batch 550 Loss 2.3961 Accuracy 0.1219
Epoch 2 Batch 600 Loss 2.3850 Accuracy 0.1233
Epoch 2 Batch 650 Loss 2.3738 Accuracy 0.1244
Epoch 2 Batch 700 Loss 2.3627 Accuracy 0.1255
Epoch 2 Loss 2.3626 Accuracy 0.1256
Time taken for 1 epoch: 38.41443157196045 secs

Epoch 3 Batch 0 Loss 2.4421 Accuracy 0.1540
Epoch 3 Batch 50 Loss 2.1561 Accuracy 0.1422
Epoch 3 Batch 100 Loss 2.1610 Accuracy 0.1434
Epoch 3 Batch 150 Loss 2.1667 Accuracy 0.1445
Epoch 3 Batch 200 Loss 2.1549 Accuracy 0.1452
Epoch 3 Batch 250 Loss 2.1521 Accuracy 0.1461
Epoch 3 Batch 300 Loss 2.1410 Accuracy 0.1467
Epoch 3 Batch 350 Loss 2.1331 Accuracy 0.1468
Epoch 3 Batch 400 Loss 2.1236 Accuracy 0.1475
Epoch 3 Batch 450 Loss 2.1211 Accuracy 0.1484
Epoch 3 Batch 500 Loss 2.1148 Accuracy 0.1493
Epoch 3 Batch 550 Loss 2.1084 Accuracy 0.1501
Epoch 3 Batch 600 Loss 2.1014 Accuracy 0.1510
Epoch 3 Batch 650 Loss 2.0964 Accuracy 0.1519
Epoch 3 Batch 700 Loss 2.0895 Accuracy 0.1527
Epoch 3 Loss 2.0897 Accuracy 0.1527
Time taken for 1 epoch: 38.03043484687805 secs

Epoch 4 Batch 0 Loss 1.8626 Accuracy 0.1558
Epoch 4 Batch 50 Loss 1.9223 Accuracy 0.1686
Epoch 4 Batch 100 Loss 1.9191 Accuracy 0.1699
Epoch 4 Batch 150 Loss 1.9195 Accuracy 0.1706
Epoch 4 Batch 200 Loss 1.9181 Accuracy 0.1722
Epoch 4 Batch 250 Loss 1.9124 Accuracy 0.1730
Epoch 4 Batch 300 Loss 1.9079 Accuracy 0.1740
Epoch 4 Batch 350 Loss 1.9008 Accuracy 0.1752
Epoch 4 Batch 400 Loss 1.8925 Accuracy 0.1759
Epoch 4 Batch 450 Loss 1.8843 Accuracy 0.1769
Epoch 4 Batch 500 Loss 1.8752 Accuracy 0.1779
Epoch 4 Batch 550 Loss 1.8678 Accuracy 0.1787
Epoch 4 Batch 600 Loss 1.8611 Accuracy 0.1797
Epoch 4 Batch 650 Loss 1.8558 Accuracy 0.1807
Epoch 4 Batch 700 Loss 1.8504 Accuracy 0.1817
Epoch 4 Loss 1.8510 Accuracy 0.1817
Time taken for 1 epoch: 37.69970345497131 secs

Epoch 5 Batch 0 Loss 1.8364 Accuracy 0.2048
Epoch 5 Batch 50 Loss 1.7186 Accuracy 0.2018
Epoch 5 Batch 100 Loss 1.6948 Accuracy 0.2021
Epoch 5 Batch 150 Loss 1.6943 Accuracy 0.2020
Epoch 5 Batch 200 Loss 1.6846 Accuracy 0.2025
Epoch 5 Batch 250 Loss 1.6761 Accuracy 0.2024
Epoch 5 Batch 300 Loss 1.6696 Accuracy 0.2031
Epoch 5 Batch 350 Loss 1.6671 Accuracy 0.2038
Epoch 5 Batch 400 Loss 1.6616 Accuracy 0.2045
Epoch 5 Batch 450 Loss 1.6581 Accuracy 0.2051
Epoch 5 Batch 500 Loss 1.6540 Accuracy 0.2058
Epoch 5 Batch 550 Loss 1.6475 Accuracy 0.2063
Epoch 5 Batch 600 Loss 1.6422 Accuracy 0.2065
Epoch 5 Batch 650 Loss 1.6373 Accuracy 0.2071
Epoch 5 Batch 700 Loss 1.6345 Accuracy 0.2077
Saving checkpoint for epoch 5 at ./checkpoints/train/ckpt-1
Epoch 5 Loss 1.6344 Accuracy 0.2077
Time taken for 1 epoch: 38.16182470321655 secs

Epoch 6 Batch 0 Loss 1.5389 Accuracy 0.2264
Epoch 6 Batch 50 Loss 1.4369 Accuracy 0.2221
Epoch 6 Batch 100 Loss 1.4590 Accuracy 0.2236
Epoch 6 Batch 150 Loss 1.4649 Accuracy 0.2234
Epoch 6 Batch 200 Loss 1.4718 Accuracy 0.2241
Epoch 6 Batch 250 Loss 1.4736 Accuracy 0.2243
Epoch 6 Batch 300 Loss 1.4740 Accuracy 0.2248
Epoch 6 Batch 350 Loss 1.4695 Accuracy 0.2250
Epoch 6 Batch 400 Loss 1.4683 Accuracy 0.2260
Epoch 6 Batch 450 Loss 1.4618 Accuracy 0.2266
Epoch 6 Batch 500 Loss 1.4599 Accuracy 0.2268
Epoch 6 Batch 550 Loss 1.4576 Accuracy 0.2273
Epoch 6 Batch 600 Loss 1.4557 Accuracy 0.2276
Epoch 6 Batch 650 Loss 1.4528 Accuracy 0.2278
Epoch 6 Batch 700 Loss 1.4496 Accuracy 0.2281
Epoch 6 Loss 1.4496 Accuracy 0.2281
Time taken for 1 epoch: 37.65458273887634 secs

Epoch 7 Batch 0 Loss 1.2567 Accuracy 0.2348
Epoch 7 Batch 50 Loss 1.2878 Accuracy 0.2437
Epoch 7 Batch 100 Loss 1.2899 Accuracy 0.2456
Epoch 7 Batch 150 Loss 1.2906 Accuracy 0.2460
Epoch 7 Batch 200 Loss 1.2897 Accuracy 0.2465
Epoch 7 Batch 250 Loss 1.2874 Accuracy 0.2468
Epoch 7 Batch 300 Loss 1.2870 Accuracy 0.2462
Epoch 7 Batch 350 Loss 1.2811 Accuracy 0.2463
Epoch 7 Batch 400 Loss 1.2799 Accuracy 0.2469
Epoch 7 Batch 450 Loss 1.2766 Accuracy 0.2472
Epoch 7 Batch 500 Loss 1.2735 Accuracy 0.2476
Epoch 7 Batch 550 Loss 1.2716 Accuracy 0.2477
Epoch 7 Batch 600 Loss 1.2690 Accuracy 0.2482
Epoch 7 Batch 650 Loss 1.2651 Accuracy 0.2486
Epoch 7 Batch 700 Loss 1.2631 Accuracy 0.2489
Epoch 7 Loss 1.2631 Accuracy 0.2489
Time taken for 1 epoch: 38.05015230178833 secs

Epoch 8 Batch 0 Loss 1.1485 Accuracy 0.3144
Epoch 8 Batch 50 Loss 1.1057 Accuracy 0.2638
Epoch 8 Batch 100 Loss 1.1110 Accuracy 0.2647
Epoch 8 Batch 150 Loss 1.1172 Accuracy 0.2657
Epoch 8 Batch 200 Loss 1.1173 Accuracy 0.2668
Epoch 8 Batch 250 Loss 1.1184 Accuracy 0.2669
Epoch 8 Batch 300 Loss 1.1171 Accuracy 0.2665
Epoch 8 Batch 350 Loss 1.1152 Accuracy 0.2662
Epoch 8 Batch 400 Loss 1.1142 Accuracy 0.2660
Epoch 8 Batch 450 Loss 1.1123 Accuracy 0.2657
Epoch 8 Batch 500 Loss 1.1153 Accuracy 0.2662
Epoch 8 Batch 550 Loss 1.1131 Accuracy 0.2664
Epoch 8 Batch 600 Loss 1.1124 Accuracy 0.2666
Epoch 8 Batch 650 Loss 1.1142 Accuracy 0.2666
Epoch 8 Batch 700 Loss 1.1143 Accuracy 0.2667
Epoch 8 Loss 1.1145 Accuracy 0.2667
Time taken for 1 epoch: 37.94010066986084 secs

Epoch 9 Batch 0 Loss 1.0197 Accuracy 0.2779
Epoch 9 Batch 50 Loss 0.9917 Accuracy 0.2825
Epoch 9 Batch 100 Loss 0.9977 Accuracy 0.2824
Epoch 9 Batch 150 Loss 0.9976 Accuracy 0.2819
Epoch 9 Batch 200 Loss 1.0027 Accuracy 0.2824
Epoch 9 Batch 250 Loss 0.9983 Accuracy 0.2807
Epoch 9 Batch 300 Loss 0.9985 Accuracy 0.2802
Epoch 9 Batch 350 Loss 0.9996 Accuracy 0.2807
Epoch 9 Batch 400 Loss 1.0019 Accuracy 0.2807
Epoch 9 Batch 450 Loss 1.0025 Accuracy 0.2801
Epoch 9 Batch 500 Loss 1.0045 Accuracy 0.2808
Epoch 9 Batch 550 Loss 1.0054 Accuracy 0.2806
Epoch 9 Batch 600 Loss 1.0049 Accuracy 0.2806
Epoch 9 Batch 650 Loss 1.0058 Accuracy 0.2802
Epoch 9 Batch 700 Loss 1.0077 Accuracy 0.2803
Epoch 9 Loss 1.0074 Accuracy 0.2803
Time taken for 1 epoch: 37.99892783164978 secs

Epoch 10 Batch 0 Loss 0.8486 Accuracy 0.2726
Epoch 10 Batch 50 Loss 0.8977 Accuracy 0.2923
Epoch 10 Batch 100 Loss 0.9052 Accuracy 0.2921
Epoch 10 Batch 150 Loss 0.9128 Accuracy 0.2917
Epoch 10 Batch 200 Loss 0.9246 Accuracy 0.2933
Epoch 10 Batch 250 Loss 0.9239 Accuracy 0.2926
Epoch 10 Batch 300 Loss 0.9193 Accuracy 0.2918
Epoch 10 Batch 350 Loss 0.9161 Accuracy 0.2914
Epoch 10 Batch 400 Loss 0.9196 Accuracy 0.2915
Epoch 10 Batch 450 Loss 0.9226 Accuracy 0.2914
Epoch 10 Batch 500 Loss 0.9235 Accuracy 0.2911
Epoch 10 Batch 550 Loss 0.9236 Accuracy 0.2909
Epoch 10 Batch 600 Loss 0.9251 Accuracy 0.2906
Epoch 10 Batch 650 Loss 0.9254 Accuracy 0.2905
Epoch 10 Batch 700 Loss 0.9274 Accuracy 0.2905
Saving checkpoint for epoch 10 at ./checkpoints/train/ckpt-2
Epoch 10 Loss 0.9275 Accuracy 0.2906
Time taken for 1 epoch: 38.11031985282898 secs

Epoch 11 Batch 0 Loss 0.9445 Accuracy 0.3155
Epoch 11 Batch 50 Loss 0.8422 Accuracy 0.3068
Epoch 11 Batch 100 Loss 0.8350 Accuracy 0.3023
Epoch 11 Batch 150 Loss 0.8382 Accuracy 0.3031
Epoch 11 Batch 200 Loss 0.8410 Accuracy 0.3019
Epoch 11 Batch 250 Loss 0.8446 Accuracy 0.3015
Epoch 11 Batch 300 Loss 0.8494 Accuracy 0.3017
Epoch 11 Batch 350 Loss 0.8515 Accuracy 0.3011
Epoch 11 Batch 400 Loss 0.8510 Accuracy 0.2998
Epoch 11 Batch 450 Loss 0.8541 Accuracy 0.3000
Epoch 11 Batch 500 Loss 0.8556 Accuracy 0.2996
Epoch 11 Batch 550 Loss 0.8568 Accuracy 0.2992
Epoch 11 Batch 600 Loss 0.8578 Accuracy 0.2992
Epoch 11 Batch 650 Loss 0.8600 Accuracy 0.2994
Epoch 11 Batch 700 Loss 0.8610 Accuracy 0.2995
Epoch 11 Loss 0.8611 Accuracy 0.2995
Time taken for 1 epoch: 38.26294016838074 secs

Epoch 12 Batch 0 Loss 0.7527 Accuracy 0.3208
Epoch 12 Batch 50 Loss 0.7622 Accuracy 0.3113
Epoch 12 Batch 100 Loss 0.7717 Accuracy 0.3105
Epoch 12 Batch 150 Loss 0.7763 Accuracy 0.3094
Epoch 12 Batch 200 Loss 0.7813 Accuracy 0.3089
Epoch 12 Batch 250 Loss 0.7857 Accuracy 0.3094
Epoch 12 Batch 300 Loss 0.7864 Accuracy 0.3095
Epoch 12 Batch 350 Loss 0.7902 Accuracy 0.3092
Epoch 12 Batch 400 Loss 0.7915 Accuracy 0.3086
Epoch 12 Batch 450 Loss 0.7944 Accuracy 0.3081
Epoch 12 Batch 500 Loss 0.7968 Accuracy 0.3077
Epoch 12 Batch 550 Loss 0.8004 Accuracy 0.3077
Epoch 12 Batch 600 Loss 0.8012 Accuracy 0.3072
Epoch 12 Batch 650 Loss 0.8038 Accuracy 0.3070
Epoch 12 Batch 700 Loss 0.8058 Accuracy 0.3069
Epoch 12 Loss 0.8059 Accuracy 0.3069
Time taken for 1 epoch: 37.68547224998474 secs

Epoch 13 Batch 0 Loss 0.6591 Accuracy 0.3005
Epoch 13 Batch 50 Loss 0.7286 Accuracy 0.3217
Epoch 13 Batch 100 Loss 0.7263 Accuracy 0.3186
Epoch 13 Batch 150 Loss 0.7280 Accuracy 0.3195
Epoch 13 Batch 200 Loss 0.7343 Accuracy 0.3188
Epoch 13 Batch 250 Loss 0.7384 Accuracy 0.3182
Epoch 13 Batch 300 Loss 0.7435 Accuracy 0.3182
Epoch 13 Batch 350 Loss 0.7458 Accuracy 0.3176
Epoch 13 Batch 400 Loss 0.7457 Accuracy 0.3165
Epoch 13 Batch 450 Loss 0.7449 Accuracy 0.3157
Epoch 13 Batch 500 Loss 0.7499 Accuracy 0.3154
Epoch 13 Batch 550 Loss 0.7527 Accuracy 0.3148
Epoch 13 Batch 600 Loss 0.7555 Accuracy 0.3145
Epoch 13 Batch 650 Loss 0.7580 Accuracy 0.3140
Epoch 13 Batch 700 Loss 0.7606 Accuracy 0.3139
Epoch 13 Loss 0.7607 Accuracy 0.3140
Time taken for 1 epoch: 37.76511287689209 secs

Epoch 14 Batch 0 Loss 0.6179 Accuracy 0.3113
Epoch 14 Batch 50 Loss 0.6647 Accuracy 0.3244
Epoch 14 Batch 100 Loss 0.6792 Accuracy 0.3260
Epoch 14 Batch 150 Loss 0.6842 Accuracy 0.3239
Epoch 14 Batch 200 Loss 0.6900 Accuracy 0.3226
Epoch 14 Batch 250 Loss 0.6935 Accuracy 0.3218
Epoch 14 Batch 300 Loss 0.6947 Accuracy 0.3209
Epoch 14 Batch 350 Loss 0.6973 Accuracy 0.3202
Epoch 14 Batch 400 Loss 0.7015 Accuracy 0.3204
Epoch 14 Batch 450 Loss 0.7050 Accuracy 0.3200
Epoch 14 Batch 500 Loss 0.7082 Accuracy 0.3195
Epoch 14 Batch 550 Loss 0.7122 Accuracy 0.3196
Epoch 14 Batch 600 Loss 0.7151 Accuracy 0.3198
Epoch 14 Batch 650 Loss 0.7174 Accuracy 0.3191
Epoch 14 Batch 700 Loss 0.7192 Accuracy 0.3189
Epoch 14 Loss 0.7196 Accuracy 0.3189
Time taken for 1 epoch: 37.8477246761322 secs

Epoch 15 Batch 0 Loss 0.6355 Accuracy 0.3385
Epoch 15 Batch 50 Loss 0.6518 Accuracy 0.3323
Epoch 15 Batch 100 Loss 0.6525 Accuracy 0.3313
Epoch 15 Batch 150 Loss 0.6558 Accuracy 0.3307
Epoch 15 Batch 200 Loss 0.6593 Accuracy 0.3300
Epoch 15 Batch 250 Loss 0.6599 Accuracy 0.3287
Epoch 15 Batch 300 Loss 0.6607 Accuracy 0.3282
Epoch 15 Batch 350 Loss 0.6657 Accuracy 0.3282
Epoch 15 Batch 400 Loss 0.6688 Accuracy 0.3283
Epoch 15 Batch 450 Loss 0.6698 Accuracy 0.3269
Epoch 15 Batch 500 Loss 0.6724 Accuracy 0.3263
Epoch 15 Batch 550 Loss 0.6758 Accuracy 0.3255
Epoch 15 Batch 600 Loss 0.6787 Accuracy 0.3248
Epoch 15 Batch 650 Loss 0.6815 Accuracy 0.3248
Epoch 15 Batch 700 Loss 0.6837 Accuracy 0.3243
Saving checkpoint for epoch 15 at ./checkpoints/train/ckpt-3
Epoch 15 Loss 0.6838 Accuracy 0.3243
Time taken for 1 epoch: 37.87045168876648 secs

Epoch 16 Batch 0 Loss 0.6264 Accuracy 0.3557
Epoch 16 Batch 50 Loss 0.6172 Accuracy 0.3350
Epoch 16 Batch 100 Loss 0.6183 Accuracy 0.3348
Epoch 16 Batch 150 Loss 0.6230 Accuracy 0.3334
Epoch 16 Batch 200 Loss 0.6240 Accuracy 0.3331
Epoch 16 Batch 250 Loss 0.6263 Accuracy 0.3338
Epoch 16 Batch 300 Loss 0.6294 Accuracy 0.3325
Epoch 16 Batch 350 Loss 0.6319 Accuracy 0.3319
Epoch 16 Batch 400 Loss 0.6361 Accuracy 0.3319
Epoch 16 Batch 450 Loss 0.6404 Accuracy 0.3313
Epoch 16 Batch 500 Loss 0.6414 Accuracy 0.3306
Epoch 16 Batch 550 Loss 0.6442 Accuracy 0.3302
Epoch 16 Batch 600 Loss 0.6492 Accuracy 0.3300
Epoch 16 Batch 650 Loss 0.6524 Accuracy 0.3297
Epoch 16 Batch 700 Loss 0.6533 Accuracy 0.3290
Epoch 16 Loss 0.6533 Accuracy 0.3290
Time taken for 1 epoch: 37.91380953788757 secs

Epoch 17 Batch 0 Loss 0.6592 Accuracy 0.3303
Epoch 17 Batch 50 Loss 0.5879 Accuracy 0.3427
Epoch 17 Batch 100 Loss 0.5895 Accuracy 0.3380
Epoch 17 Batch 150 Loss 0.5918 Accuracy 0.3369
Epoch 17 Batch 200 Loss 0.5977 Accuracy 0.3381
Epoch 17 Batch 250 Loss 0.5991 Accuracy 0.3382
Epoch 17 Batch 300 Loss 0.6047 Accuracy 0.3376
Epoch 17 Batch 350 Loss 0.6072 Accuracy 0.3375
Epoch 17 Batch 400 Loss 0.6106 Accuracy 0.3374
Epoch 17 Batch 450 Loss 0.6121 Accuracy 0.3367
Epoch 17 Batch 500 Loss 0.6150 Accuracy 0.3361
Epoch 17 Batch 550 Loss 0.6171 Accuracy 0.3356
Epoch 17 Batch 600 Loss 0.6206 Accuracy 0.3350
Epoch 17 Batch 650 Loss 0.6235 Accuracy 0.3345
Epoch 17 Batch 700 Loss 0.6252 Accuracy 0.3339
Epoch 17 Loss 0.6253 Accuracy 0.3339
Time taken for 1 epoch: 37.70887017250061 secs

Epoch 18 Batch 0 Loss 0.5141 Accuracy 0.3321
Epoch 18 Batch 50 Loss 0.5593 Accuracy 0.3499
Epoch 18 Batch 100 Loss 0.5611 Accuracy 0.3477
Epoch 18 Batch 150 Loss 0.5674 Accuracy 0.3441
Epoch 18 Batch 200 Loss 0.5742 Accuracy 0.3429
Epoch 18 Batch 250 Loss 0.5787 Accuracy 0.3422
Epoch 18 Batch 300 Loss 0.5837 Accuracy 0.3409
Epoch 18 Batch 350 Loss 0.5857 Accuracy 0.3406
Epoch 18 Batch 400 Loss 0.5892 Accuracy 0.3400
Epoch 18 Batch 450 Loss 0.5909 Accuracy 0.3396
Epoch 18 Batch 500 Loss 0.5926 Accuracy 0.3393
Epoch 18 Batch 550 Loss 0.5941 Accuracy 0.3386
Epoch 18 Batch 600 Loss 0.5960 Accuracy 0.3380
Epoch 18 Batch 650 Loss 0.5985 Accuracy 0.3378
Epoch 18 Batch 700 Loss 0.6021 Accuracy 0.3377
Epoch 18 Loss 0.6022 Accuracy 0.3376
Time taken for 1 epoch: 37.764400005340576 secs

Epoch 19 Batch 0 Loss 0.4913 Accuracy 0.3338
Epoch 19 Batch 50 Loss 0.5298 Accuracy 0.3462
Epoch 19 Batch 100 Loss 0.5381 Accuracy 0.3486
Epoch 19 Batch 150 Loss 0.5421 Accuracy 0.3477
Epoch 19 Batch 200 Loss 0.5445 Accuracy 0.3465
Epoch 19 Batch 250 Loss 0.5503 Accuracy 0.3460
Epoch 19 Batch 300 Loss 0.5538 Accuracy 0.3444
Epoch 19 Batch 350 Loss 0.5563 Accuracy 0.3440
Epoch 19 Batch 400 Loss 0.5583 Accuracy 0.3430
Epoch 19 Batch 450 Loss 0.5637 Accuracy 0.3427
Epoch 19 Batch 500 Loss 0.5665 Accuracy 0.3428
Epoch 19 Batch 550 Loss 0.5693 Accuracy 0.3424
Epoch 19 Batch 600 Loss 0.5720 Accuracy 0.3418
Epoch 19 Batch 650 Loss 0.5740 Accuracy 0.3409
Epoch 19 Batch 700 Loss 0.5767 Accuracy 0.3404
Epoch 19 Loss 0.5764 Accuracy 0.3403
Time taken for 1 epoch: 38.24162793159485 secs

Epoch 20 Batch 0 Loss 0.5209 Accuracy 0.3498
Epoch 20 Batch 50 Loss 0.5282 Accuracy 0.3509
Epoch 20 Batch 100 Loss 0.5232 Accuracy 0.3475
Epoch 20 Batch 150 Loss 0.5275 Accuracy 0.3470
Epoch 20 Batch 200 Loss 0.5249 Accuracy 0.3458
Epoch 20 Batch 250 Loss 0.5266 Accuracy 0.3456
Epoch 20 Batch 300 Loss 0.5309 Accuracy 0.3466
Epoch 20 Batch 350 Loss 0.5342 Accuracy 0.3459
Epoch 20 Batch 400 Loss 0.5371 Accuracy 0.3457
Epoch 20 Batch 450 Loss 0.5391 Accuracy 0.3456
Epoch 20 Batch 500 Loss 0.5420 Accuracy 0.3448
Epoch 20 Batch 550 Loss 0.5459 Accuracy 0.3449
Epoch 20 Batch 600 Loss 0.5493 Accuracy 0.3444
Epoch 20 Batch 650 Loss 0.5518 Accuracy 0.3437
Epoch 20 Batch 700 Loss 0.5549 Accuracy 0.3435
Saving checkpoint for epoch 20 at ./checkpoints/train/ckpt-4
Epoch 20 Loss 0.5551 Accuracy 0.3435
Time taken for 1 epoch: 37.921107053756714 secs

Evaluate

The following steps are used for evaluation:

  • Encode the input sentence using the Portuguese tokenizer (tokenizer_pt), and add the start and end token so the input is equivalent to what the model was trained with. This is the encoder input.
  • The decoder input is the start token == tokenizer_en.vocab_size.
  • Calculate the padding masks and the look ahead masks.
  • The decoder then outputs the predictions by looking at the encoder output and its own output (self-attention).
  • Select the last word and calculate the argmax of that.
  • Concatenate the predicted word to the decoder input and pass it to the decoder.
  • In this approach, the decoder predicts the next word based on the previous words it predicted.
def evaluate(inp_sentence):
  start_token = [tokenizer_pt.vocab_size]
  end_token = [tokenizer_pt.vocab_size + 1]
  
  # inp sentence is portuguese, hence adding the start and end token
  inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
  encoder_input = tf.expand_dims(inp_sentence, 0)
  
  # as the target is english, the first word to the transformer should be the
  # english start token.
  decoder_input = [tokenizer_en.vocab_size]
  output = tf.expand_dims(decoder_input, 0)
    
  for i in range(MAX_LENGTH):
    enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
        encoder_input, output)
  
    # predictions.shape == (batch_size, seq_len, vocab_size)
    predictions, attention_weights = transformer(encoder_input, 
                                                 output,
                                                 False,
                                                 enc_padding_mask,
                                                 combined_mask,
                                                 dec_padding_mask)
    
    # select the last word from the seq_len dimension
    predictions = predictions[: ,-1:, :]  # (batch_size, 1, vocab_size)

    predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
    
    # return the result if the predicted_id is equal to the end token
    if predicted_id == tokenizer_en.vocab_size+1:
      return tf.squeeze(output, axis=0), attention_weights
    
    # concatenate the predicted_id to the output, which is given to the decoder
    # as its input.
    output = tf.concat([output, predicted_id], axis=-1)

  return tf.squeeze(output, axis=0), attention_weights
def plot_attention_weights(attention, sentence, result, layer):
  fig = plt.figure(figsize=(16, 8))
  
  sentence = tokenizer_pt.encode(sentence)
  
  attention = tf.squeeze(attention[layer], axis=0)
  
  for head in range(attention.shape[0]):
    ax = fig.add_subplot(2, 4, head+1)
    
    # plot the attention weights
    ax.matshow(attention[head][:-1, :], cmap='viridis')

    fontdict = {'fontsize': 10}
    
    ax.set_xticks(range(len(sentence)+2))
    ax.set_yticks(range(len(result)))
    
    ax.set_ylim(len(result)-1.5, -0.5)
        
    ax.set_xticklabels(
        ['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'], 
        fontdict=fontdict, rotation=90)
    
    ax.set_yticklabels([tokenizer_en.decode([i]) for i in result 
                        if i < tokenizer_en.vocab_size], 
                       fontdict=fontdict)
    
    ax.set_xlabel('Head {}'.format(head+1))
  
  plt.tight_layout()
  plt.show()
def translate(sentence, plot=''):
  result, attention_weights = evaluate(sentence)
  
  predicted_sentence = tokenizer_en.decode([i for i in result 
                                            if i < tokenizer_en.vocab_size])  

  print('Input: {}'.format(sentence))
  print('Predicted translation: {}'.format(predicted_sentence))
  
  if plot:
    plot_attention_weights(attention_weights, sentence, result, plot)
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
Input: este é um problema que temos que resolver.
Predicted translation: this is a problem we have to solve ..
Real translation: this is a problem we have to solve .
translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
Input: os meus vizinhos ouviram sobre esta ideia.
Predicted translation: my neighbors heard about this idea .
Real translation: and my neighboring homes heard about this idea .
translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
Input: vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.
Predicted translation: so i 'm going to very quickly share with you some stories of some of some magic things that happened .
Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .

You can pass different layers and attention blocks of the decoder to the plot parameter.

translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
Input: este é o primeiro livro que eu fiz.
Predicted translation: this is the first book i did .

[figure: attention weight plots for each head of decoder_layer4_block2]

Real translation: this is the first book i've ever done.

Summary

In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking, and how to create a transformer.

Try using a different dataset to train the transformer. You can also create the base transformer or transformer XL by changing the hyperparameters above. You can also use the layers defined here to create BERT and train state-of-the-art models. Furthermore, you can implement beam search to get better predictions.
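
For example, the greedy decoder in evaluate above can be replaced by a simple beam search. The sketch below is one possible implementation, not part of the original notebook; it reuses the trained transformer, the tokenizers, and create_masks from above, and beam_width is an illustrative choice.

def beam_search_decode(inp_sentence, beam_width=4, max_length=MAX_LENGTH):
  start_token = [tokenizer_pt.vocab_size]
  end_token = [tokenizer_pt.vocab_size + 1]
  encoder_input = tf.expand_dims(
      start_token + tokenizer_pt.encode(inp_sentence) + end_token, 0)

  en_start, en_end = tokenizer_en.vocab_size, tokenizer_en.vocab_size + 1
  beams = [([en_start], 0.0)]  # (token ids, cumulative log-probability)

  for _ in range(max_length):
    candidates = []
    for seq, score in beams:
      if seq[-1] == en_end:            # finished beams are carried over as-is
        candidates.append((seq, score))
        continue
      output = tf.expand_dims(seq, 0)
      enc_mask, comb_mask, dec_mask = create_masks(encoder_input, output)
      predictions, _ = transformer(encoder_input, output, False,
                                   enc_mask, comb_mask, dec_mask)
      log_probs = tf.nn.log_softmax(predictions[0, -1, :])
      top = tf.math.top_k(log_probs, k=beam_width)
      for lp, idx in zip(top.values.numpy(), top.indices.numpy()):
        candidates.append((seq + [int(idx)], score + float(lp)))
    # keep the beam_width highest-scoring candidates
    beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    if all(seq[-1] == en_end for seq, _ in beams):
      break

  best_seq = beams[0][0]
  return tokenizer_en.decode([i for i in best_seq if i < tokenizer_en.vocab_size])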