
Transformer model for language understanding


This tutorial trains a Transformer model to translate Portuguese to English. This is an advanced example that assumes knowledge of text generation and attention.

The core idea behind the Transformer model is self-attention—the ability to attend to different positions of the input sequence to compute a representation of that sequence. Transformer creates stacks of self-attention layers and is explained below in the sections Scaled dot product attention and Multi-head attention.

A transformer model handles variable-sized input using stacks of self-attention layers instead of RNNs or CNNs. This general architecture has a number of advantages:

  • It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, StarCraft units).
  • Layer outputs can be calculated in parallel, instead of in series as in an RNN.
  • Distant items can affect each other's output without passing through many RNN-steps, or convolution layers (see Scene Memory Transformer for example).
  • It can learn long-range dependencies. This is a challenge in many sequence tasks.

The downsides of this architecture are:

  • For a time-series, the output for a time-step is calculated from the entire history instead of only the inputs and current hidden-state. This may be less efficient.
  • If the input does have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words (see the sketch below).
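
To see why, here is a small self-contained sketch (separate from the model built below): permuting the rows of a plain self-attention layer's input just permutes its output rows, so without a positional encoding the token order carries no information.

import tensorflow as tf

x = tf.random.normal((5, 8))             # 5 "tokens", depth 8
perm = tf.constant([3, 0, 4, 1, 2])

def self_attend(t):
  # plain scaled dot-product self-attention, no positional information
  logits = tf.matmul(t, t, transpose_b=True) / tf.sqrt(8.0)
  return tf.matmul(tf.nn.softmax(logits, axis=-1), t)

out = self_attend(x)
out_perm = self_attend(tf.gather(x, perm))
# the permuted input yields the same outputs, just reordered (prints ~0.0)
print(tf.reduce_max(tf.abs(out_perm - tf.gather(out, perm))).numpy())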

After training the model in this notebook, you will be able to input a Portuguese sentence and return the English translation.

Attention heatmap

from __future__ import absolute_import, division, print_function, unicode_literals

try:
  !pip install -q tf-nightly
except Exception:
  pass
import tensorflow_datasets as tfds
import tensorflow as tf

import time
import numpy as np
import matplotlib.pyplot as plt

Setup input pipeline

Use TFDS to load the Portuguese-English translation dataset from the TED Talks Open Translation Project.

This dataset contains approximately 50000 training examples, 1100 validation examples, and 2000 test examples.

examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
                               as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
Downloading and preparing dataset ted_hrlr_translate (124.94 MiB) to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/0.0.1...

WARNING:tensorflow:From /home/kbuilder/.local/lib/python3.6/site-packages/tensorflow_datasets/core/file_format_adapter.py:210: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: 
`tf.data.TFRecordDataset(path)`

Dataset ted_hrlr_translate downloaded and prepared to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/0.0.1. Subsequent calls will reuse this data.
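
If you want to double-check the split sizes, they are available on the metadata object returned by tfds.load:

print(metadata.splits['train'].num_examples)       # ~51785
print(metadata.splits['validation'].num_examples)  # ~1193
print(metadata.splits['test'].num_examples)        # ~1803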

Create a custom subword tokenizer from the training dataset.

tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (en.numpy() for pt, en in train_examples), target_vocab_size=2**13)

tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
sample_string = 'Transformer is awesome.'

tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))

original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))

assert original_string == sample_string
Tokenized string is [7915, 1248, 7946, 7194, 13, 2799, 7877]
The original string: Transformer is awesome.

The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.

for ts in tokenized_string:
  print ('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))
7915 ----> T
1248 ----> ran
7946 ----> s
7194 ----> former 
13 ----> is 
2799 ----> awesome
7877 ----> .
BUFFER_SIZE = 20000
BATCH_SIZE = 64

Add a start and end token to the input and target.

def encode(lang1, lang2):
  lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
      lang1.numpy()) + [tokenizer_pt.vocab_size+1]

  lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
      lang2.numpy()) + [tokenizer_en.vocab_size+1]
  
  return lang1, lang2

You want to use Dataset.map to apply this function to each element of the dataset. Dataset.map runs in graph mode.

  • Graph tensors do not have a value.
  • In graph mode you can only use TensorFlow Ops and functions.

So you can't .map this function directly: you need to wrap it in a tf.py_function. The tf.py_function will pass regular tensors (with a value and a .numpy() method to access it) to the wrapped Python function.

def tf_encode(pt, en):
  result_pt, result_en = tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
  result_pt.set_shape([None])
  result_en.set_shape([None])

  return result_pt, result_en
MAX_LENGTH = 40
def filter_max_length(x, y, max_length=MAX_LENGTH):
  return tf.logical_and(tf.size(x) <= max_length,
                        tf.size(y) <= max_length)
train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
# cache the dataset to memory to get a speedup while reading from it.
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE)
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)


val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(BATCH_SIZE)
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
(<tf.Tensor: shape=(64, 40), dtype=int64, numpy=
 array([[8214, 1259,    5, ...,    0,    0,    0],
        [8214,  299,   13, ...,    0,    0,    0],
        [8214,   59,    8, ...,    0,    0,    0],
        ...,
        [8214,   95,    3, ...,    0,    0,    0],
        [8214, 5157,    1, ...,    0,    0,    0],
        [8214, 4479, 7990, ...,    0,    0,    0]])>,
 <tf.Tensor: shape=(64, 40), dtype=int64, numpy=
 array([[8087,   18,   12, ...,    0,    0,    0],
        [8087,  634,   30, ...,    0,    0,    0],
        [8087,   16,   13, ...,    0,    0,    0],
        ...,
        [8087,   12,   20, ...,    0,    0,    0],
        [8087,   17, 4981, ...,    0,    0,    0],
        [8087,   12, 5453, ...,    0,    0,    0]])>)

Positional encoding

Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence.

The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the similarity of their meaning and their position in the sentence, in the d-dimensional space.

See the notebook on positional encoding to learn more about it. The formula for calculating the positional encoding is as follows:

$$\Large{PE_{(pos, 2i)} = sin(pos / 10000^{2i / d_{model}})} $$
$$\Large{PE_{(pos, 2i+1)} = cos(pos / 10000^{2i / d_{model}})} $$
def get_angles(pos, i, d_model):
  angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
  return pos * angle_rates
def positional_encoding(position, d_model):
  angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                          np.arange(d_model)[np.newaxis, :],
                          d_model)
  
  # apply sin to even indices in the array; 2i
  angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
  
  # apply cos to odd indices in the array; 2i+1
  angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
    
  pos_encoding = angle_rads[np.newaxis, ...]
    
  return tf.cast(pos_encoding, dtype=tf.float32)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)

plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
(1, 50, 512)

Positional encoding heatmap
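
As a quick check of the intuition that nearby positions get similar encodings, you can compare cosine similarities in the pos_encoding computed above (a sketch, not part of the model):

pe = pos_encoding[0]  # (50, 512)
cos_sim = lambda a, b: float(tf.reduce_sum(a * b) / (tf.norm(a) * tf.norm(b)))
print(cos_sim(pe[10], pe[11]))  # high: adjacent positions
print(cos_sim(pe[10], pe[40]))  # lower: distant positions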

Masking

Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value 0 is present: it outputs a 1 at those locations, and a 0 otherwise.

def create_padding_mask(seq):
  seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
  
  # add extra dimensions to add the padding
  # to the attention logits.
  return seq[:, tf.newaxis, tf.newaxis, :]  # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
<tf.Tensor: shape=(3, 1, 1, 5), dtype=float32, numpy=
array([[[[0., 0., 1., 1., 0.]]],


       [[[0., 0., 0., 1., 1.]]],


       [[[1., 1., 1., 0., 0.]]]], dtype=float32)>

The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.

This means that to predict the third word, only the first and second words will be used. Similarly, to predict the fourth word, only the first, second, and third words will be used, and so on.

def create_look_ahead_mask(size):
  mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
  return mask  # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[0., 1., 1.],
       [0., 0., 1.],
       [0., 0., 0.]], dtype=float32)>

Scaled dot product attention

scaled_dot_product_attention

The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:

$$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k}}) V} $$

The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into regions where it has small gradients, resulting in a very hard softmax.

For example, consider that Q and K have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and a variance of dk. Hence, the square root of dk is used for scaling (and not any other number), so that the matmul of Q and K keeps a mean of 0 and a variance of 1, and you get a gentler softmax.
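
You can verify this variance claim empirically with random normal tensors (a quick sketch, not part of the model):

dk = 512
q = tf.random.normal((1000, dk))
k = tf.random.normal((1000, dk))
logits = tf.matmul(q, k, transpose_b=True)
print(tf.math.reduce_variance(logits).numpy())                        # ~512
print(tf.math.reduce_variance(logits / tf.math.sqrt(512.0)).numpy())  # ~1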

The mask is multiplied with -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K and is applied immediately before a softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output.
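
For example, a logit pushed down by -1e9 contributes effectively zero after the softmax:

print(tf.nn.softmax(tf.constant([[2.0, 1.0, -1e9]])).numpy())  # ~[[0.73 0.27 0.  ]]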

def scaled_dot_product_attention(q, k, v, mask):
  """Calculate the attention weights.
  q, k, v must have matching leading dimensions.
  k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
  The mask has different shapes depending on its type(padding or look ahead) 
  but it must be broadcastable for addition.
  
  Args:
    q: query shape == (..., seq_len_q, depth)
    k: key shape == (..., seq_len_k, depth)
    v: value shape == (..., seq_len_v, depth_v)
    mask: Float tensor with shape broadcastable 
          to (..., seq_len_q, seq_len_k). Defaults to None.
    
  Returns:
    output, attention_weights
  """

  matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)
  
  # scale matmul_qk
  dk = tf.cast(tf.shape(k)[-1], tf.float32)
  scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

  # add the mask to the scaled tensor.
  if mask is not None:
    scaled_attention_logits += (mask * -1e9)  

  # softmax is normalized on the last axis (seq_len_k) so that the scores
  # add up to 1.
  attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)

  output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)

  return output, attention_weights

As the softmax normalization is done along the K (key) dimension, its values decide the amount of importance given to each key for a given query.

The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words you want to focus on are kept as-is and the irrelevant words are flushed out.

def print_out(q, k, v):
  temp_out, temp_attn = scaled_dot_product_attention(
      q, k, v, None)
  print ('Attention weights are:')
  print (temp_attn)
  print ('Output is:')
  print (temp_out)
np.set_printoptions(suppress=True)

temp_k = tf.constant([[10,0,0],
                      [0,10,0],
                      [0,0,10],
                      [0,0,10]], dtype=tf.float32)  # (4, 3)

temp_v = tf.constant([[   1,0],
                      [  10,0],
                      [ 100,5],
                      [1000,6]], dtype=tf.float32)  # (4, 2)

# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0. 1. 0. 0.]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[10.  0.]], shape=(1, 2), dtype=float32)
# This query aligns with a repeated key (third and fourth), 
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.  0.  0.5 0.5]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[550.    5.5]], shape=(1, 2), dtype=float32)
# This query aligns equally with the first and second key, 
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.5 0.5 0.  0. ]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[5.5 0. ]], shape=(1, 2), dtype=float32)

Pass all the queries together.

temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32)  # (3, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor(
[[0.  0.  0.5 0.5]
 [0.  1.  0.  0. ]
 [0.5 0.5 0.  0. ]], shape=(3, 4), dtype=float32)
Output is:
tf.Tensor(
[[550.    5.5]
 [ 10.    0. ]
 [  5.5   0. ]], shape=(3, 2), dtype=float32)

Multi-head attention

multi-head attention

Multi-head attention consists of four parts:

  • Linear layers and split into heads.
  • Scaled dot-product attention.
  • Concatenation of heads.
  • Final linear layer.

Each multi-head attention block gets three inputs: Q (query), K (key), and V (value). These are put through linear (Dense) layers and split up into multiple heads.

The scaled_dot_product_attention defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using tf.transpose, and tf.reshape) and put through a final Dense layer.

Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information at different positions from different representational spaces. After the split each head has a reduced dimensionality, so the total computation cost is the same as a single head attention with full dimensionality.

class MultiHeadAttention(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads):
    super(MultiHeadAttention, self).__init__()
    self.num_heads = num_heads
    self.d_model = d_model
    
    assert d_model % self.num_heads == 0
    
    self.depth = d_model // self.num_heads
    
    self.wq = tf.keras.layers.Dense(d_model)
    self.wk = tf.keras.layers.Dense(d_model)
    self.wv = tf.keras.layers.Dense(d_model)
    
    self.dense = tf.keras.layers.Dense(d_model)
        
  def split_heads(self, x, batch_size):
    """Split the last dimension into (num_heads, depth).
    Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
    """
    x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
    return tf.transpose(x, perm=[0, 2, 1, 3])
    
  def call(self, v, k, q, mask):
    batch_size = tf.shape(q)[0]
    
    q = self.wq(q)  # (batch_size, seq_len, d_model)
    k = self.wk(k)  # (batch_size, seq_len, d_model)
    v = self.wv(v)  # (batch_size, seq_len, d_model)
    
    q = self.split_heads(q, batch_size)  # (batch_size, num_heads, seq_len_q, depth)
    k = self.split_heads(k, batch_size)  # (batch_size, num_heads, seq_len_k, depth)
    v = self.split_heads(v, batch_size)  # (batch_size, num_heads, seq_len_v, depth)
    
    # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
    # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
    scaled_attention, attention_weights = scaled_dot_product_attention(
        q, k, v, mask)
    
    scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])  # (batch_size, seq_len_q, num_heads, depth)

    concat_attention = tf.reshape(scaled_attention, 
                                  (batch_size, -1, self.d_model))  # (batch_size, seq_len_q, d_model)

    output = self.dense(concat_attention)  # (batch_size, seq_len_q, d_model)
        
    return output, attention_weights

Create a MultiHeadAttention layer to try out. At each location in the sequence, y, the MultiHeadAttention runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.

temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512))  # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
(TensorShape([1, 60, 512]), TensorShape([1, 8, 60, 60]))

Point wise feed forward network

The point wise feed forward network consists of two fully-connected layers with a ReLU activation in between.

def point_wise_feed_forward_network(d_model, dff):
  return tf.keras.Sequential([
      tf.keras.layers.Dense(dff, activation='relu'),  # (batch_size, seq_len, dff)
      tf.keras.layers.Dense(d_model)  # (batch_size, seq_len, d_model)
  ])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
TensorShape([64, 50, 512])

Encoder and decoder

transformer

The transformer model follows the same general pattern as a standard sequence to sequence with attention model.

  • The input sentence is passed through N encoder layers that generate an output for each word/token in the sequence.
  • The decoder attends to the encoder's output and its own input (self-attention) to predict the next word.

Encoder layer

Each encoder layer consists of sublayers:

  1. Multi-head attention (with padding mask)
  2. Point wise feed forward networks.

Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.

The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis. There are N encoder layers in the transformer.

class EncoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(EncoderLayer, self).__init__()

    self.mha = MultiHeadAttention(d_model, num_heads)
    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    
  def call(self, x, training, mask):

    attn_output, _ = self.mha(x, x, x, mask)  # (batch_size, input_seq_len, d_model)
    attn_output = self.dropout1(attn_output, training=training)
    out1 = self.layernorm1(x + attn_output)  # (batch_size, input_seq_len, d_model)
    
    ffn_output = self.ffn(out1)  # (batch_size, input_seq_len, d_model)
    ffn_output = self.dropout2(ffn_output, training=training)
    out2 = self.layernorm2(out1 + ffn_output)  # (batch_size, input_seq_len, d_model)
    
    return out2
sample_encoder_layer = EncoderLayer(512, 8, 2048)

sample_encoder_layer_output = sample_encoder_layer(
    tf.random.uniform((64, 43, 512)), False, None)

sample_encoder_layer_output.shape  # (batch_size, input_seq_len, d_model)
TensorShape([64, 43, 512])

Decoder layer

Each decoder layer consists of sublayers:

  1. Masked multi-head attention (with look ahead mask and padding mask)
  2. Multi-head attention (with padding mask). V (value) and K (key) receive the encoder output as inputs. Q (query) receives the output from the masked multi-head attention sublayer.
  3. Point wise feed forward networks

Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis.

There are N decoder layers in the transformer.

As Q receives the output from the decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.

class DecoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(DecoderLayer, self).__init__()

    self.mha1 = MultiHeadAttention(d_model, num_heads)
    self.mha2 = MultiHeadAttention(d_model, num_heads)

    self.ffn = point_wise_feed_forward_network(d_model, dff)
 
    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    self.dropout3 = tf.keras.layers.Dropout(rate)
    
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):
    # enc_output.shape == (batch_size, input_seq_len, d_model)

    attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)  # (batch_size, target_seq_len, d_model)
    attn1 = self.dropout1(attn1, training=training)
    out1 = self.layernorm1(attn1 + x)
    
    attn2, attn_weights_block2 = self.mha2(
        enc_output, enc_output, out1, padding_mask)  # (batch_size, target_seq_len, d_model)
    attn2 = self.dropout2(attn2, training=training)
    out2 = self.layernorm2(attn2 + out1)  # (batch_size, target_seq_len, d_model)
    
    ffn_output = self.ffn(out2)  # (batch_size, target_seq_len, d_model)
    ffn_output = self.dropout3(ffn_output, training=training)
    out3 = self.layernorm3(ffn_output + out2)  # (batch_size, target_seq_len, d_model)
    
    return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(512, 8, 2048)

sample_decoder_layer_output, _, _ = sample_decoder_layer(
    tf.random.uniform((64, 50, 512)), sample_encoder_layer_output, 
    False, None, None)

sample_decoder_layer_output.shape  # (batch_size, target_seq_len, d_model)
TensorShape([64, 50, 512])

Encoder

The Encoder consists of:

  1. Input Embedding
  2. Positional Encoding
  3. N encoder layers

The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.

class Encoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
               maximum_position_encoding, rate=0.1):
    super(Encoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
    self.pos_encoding = positional_encoding(maximum_position_encoding, 
                                            self.d_model)
    
    
    self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
  
    self.dropout = tf.keras.layers.Dropout(rate)
        
  def call(self, x, training, mask):

    seq_len = tf.shape(x)[1]
    
    # adding embedding and position encoding.
    x = self.embedding(x)  # (batch_size, input_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)
    
    for i in range(self.num_layers):
      x = self.enc_layers[i](x, training, mask)
    
    return x  # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, input_vocab_size=8500,
                         maximum_position_encoding=10000)
temp_input = tf.random.uniform((64, 62), dtype=tf.int64, minval=0, maxval=200)

sample_encoder_output = sample_encoder(temp_input, training=False, mask=None)

print (sample_encoder_output.shape)  # (batch_size, input_seq_len, d_model)
(64, 62, 512)

Decoder

The Decoder consists of:

  1. Output Embedding
  2. Positional Encoding
  3. N decoder layers

The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.

class Decoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
               maximum_position_encoding, rate=0.1):
    super(Decoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
    self.pos_encoding = positional_encoding(maximum_position_encoding, d_model)
    
    self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
    self.dropout = tf.keras.layers.Dropout(rate)
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):

    seq_len = tf.shape(x)[1]
    attention_weights = {}
    
    x = self.embedding(x)  # (batch_size, target_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]
    
    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x, block1, block2 = self.dec_layers[i](x, enc_output, training,
                                             look_ahead_mask, padding_mask)
      
      attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
      attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
    
    # x.shape == (batch_size, target_seq_len, d_model)
    return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, target_vocab_size=8000,
                         maximum_position_encoding=5000)
temp_input = tf.random.uniform((64, 26), dtype=tf.int64, minval=0, maxval=200)

output, attn = sample_decoder(temp_input, 
                              enc_output=sample_encoder_output, 
                              training=False,
                              look_ahead_mask=None, 
                              padding_mask=None)

output.shape, attn['decoder_layer2_block2'].shape
(TensorShape([64, 26, 512]), TensorShape([64, 8, 26, 62]))

Create the Transformer

The transformer consists of the encoder, the decoder, and a final linear layer. The output of the decoder is the input to the linear layer, and its output is returned.

class Transformer(tf.keras.Model):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, 
               target_vocab_size, pe_input, pe_target, rate=0.1):
    super(Transformer, self).__init__()

    self.encoder = Encoder(num_layers, d_model, num_heads, dff, 
                           input_vocab_size, pe_input, rate)

    self.decoder = Decoder(num_layers, d_model, num_heads, dff, 
                           target_vocab_size, pe_target, rate)

    self.final_layer = tf.keras.layers.Dense(target_vocab_size)
    
  def call(self, inp, tar, training, enc_padding_mask, 
           look_ahead_mask, dec_padding_mask):

    enc_output = self.encoder(inp, training, enc_padding_mask)  # (batch_size, inp_seq_len, d_model)
    
    # dec_output.shape == (batch_size, tar_seq_len, d_model)
    dec_output, attention_weights = self.decoder(
        tar, enc_output, training, look_ahead_mask, dec_padding_mask)
    
    final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)
    
    return final_output, attention_weights
sample_transformer = Transformer(
    num_layers=2, d_model=512, num_heads=8, dff=2048, 
    input_vocab_size=8500, target_vocab_size=8000, 
    pe_input=10000, pe_target=6000)

temp_input = tf.random.uniform((64, 38), dtype=tf.int64, minval=0, maxval=200)
temp_target = tf.random.uniform((64, 36), dtype=tf.int64, minval=0, maxval=200)

fn_out, _ = sample_transformer(temp_input, temp_target, training=False, 
                               enc_padding_mask=None, 
                               look_ahead_mask=None,
                               dec_padding_mask=None)

fn_out.shape  # (batch_size, tar_seq_len, target_vocab_size)
TensorShape([64, 36, 8000])

Set hyperparameters

To keep this example small and relatively fast, the values for num_layers, d_model, and dff have been reduced.

The values used in the base model of the transformer were: num_layers=6, d_model=512, dff=2048. See the paper for all the other versions of the transformer.

num_layers = 4
d_model = 128
dff = 512
num_heads = 8

input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1

Optimizer

Use the Adam optimizer with a custom learning rate scheduler according to the formula in the paper.

$$\Large{lrate = d_{model}^{-0.5} * min(step{\_}num^{-0.5}, step{\_}num * warmup{\_}steps^{-1.5})}$$
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
  def __init__(self, d_model, warmup_steps=4000):
    super(CustomSchedule, self).__init__()
    
    self.d_model = d_model
    self.d_model = tf.cast(self.d_model, tf.float32)

    self.warmup_steps = warmup_steps
    
  def __call__(self, step):
    arg1 = tf.math.rsqrt(step)
    arg2 = step * (self.warmup_steps ** -1.5)
    
    return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)

optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, 
                                     epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)

plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
Text(0.5, 0, 'Train Step')

Learning rate schedule plot

Loss and metrics

Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
def loss_function(real, pred):
  mask = tf.math.logical_not(tf.math.equal(real, 0))
  loss_ = loss_object(real, pred)

  mask = tf.cast(mask, dtype=loss_.dtype)
  loss_ *= mask
  
  return tf.reduce_mean(loss_)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    name='train_accuracy')

Training and checkpointing

transformer = Transformer(num_layers, d_model, num_heads, dff,
                          input_vocab_size, target_vocab_size, 
                          pe_input=input_vocab_size, 
                          pe_target=target_vocab_size,
                          rate=dropout_rate)
def create_masks(inp, tar):
  # Encoder padding mask
  enc_padding_mask = create_padding_mask(inp)
  
  # Used in the 2nd attention block in the decoder.
  # This padding mask is used to mask the encoder outputs.
  dec_padding_mask = create_padding_mask(inp)
  
  # Used in the 1st attention block in the decoder.
  # It is used to pad and mask future tokens in the input received by 
  # the decoder.
  look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
  dec_target_padding_mask = create_padding_mask(tar)
  combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
  
  return enc_padding_mask, combined_mask, dec_padding_mask

Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every n epochs.

checkpoint_path = "./checkpoints/train"

ckpt = tf.train.Checkpoint(transformer=transformer,
                           optimizer=optimizer)

ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)

# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
  ckpt.restore(ckpt_manager.latest_checkpoint)
  print ('Latest checkpoint restored!!')

The target is divided into tar_inp and tar_real. tar_inp is passed as an input to the decoder. tar_real is that same input shifted by 1: at each location in tar_inp, tar_real contains the next token that should be predicted.

For example, sentence = "SOS A lion in the jungle is sleeping EOS"

tar_inp = "SOS A lion in the jungle is sleeping"

tar_real = "A lion in the jungle is sleeping EOS"

The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.

During training this example uses teacher-forcing (like in the text generation tutorial). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.

As the transformer predicts each word, self-attention allows it to look at the previous words in the input sequence to better predict the next word.

To prevent the model from peeking at the expected output, the model uses a look-ahead mask.
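
A quick sketch of what the combined decoder mask looks like for a short padded target, using the create_masks helper defined above (token ids are made up):

tar = tf.constant([[8087, 14, 25, 0, 0]])  # a padded target sequence
_, combined_mask, _ = create_masks(tar, tar)
print(combined_mask[0, 0])  # 1s above the diagonal (future) and in padded columns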

EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.

train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]

@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
  tar_inp = tar[:, :-1]
  tar_real = tar[:, 1:]
  
  enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
  
  with tf.GradientTape() as tape:
    predictions, _ = transformer(inp, tar_inp, 
                                 True, 
                                 enc_padding_mask, 
                                 combined_mask, 
                                 dec_padding_mask)
    loss = loss_function(tar_real, predictions)

  gradients = tape.gradient(loss, transformer.trainable_variables)    
  optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
  
  train_loss(loss)
  train_accuracy(tar_real, predictions)

Portuguese is used as the input language and English is the target language.

for epoch in range(EPOCHS):
  start = time.time()
  
  train_loss.reset_states()
  train_accuracy.reset_states()
  
  # inp -> portuguese, tar -> english
  for (batch, (inp, tar)) in enumerate(train_dataset):
    train_step(inp, tar)
    
    if batch % 50 == 0:
      print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
          epoch + 1, batch, train_loss.result(), train_accuracy.result()))
      
  if (epoch + 1) % 5 == 0:
    ckpt_save_path = ckpt_manager.save()
    print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
                                                         ckpt_save_path))
    
  print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1, 
                                                train_loss.result(), 
                                                train_accuracy.result()))

  print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
Epoch 1 Batch 0 Loss 3.9309 Accuracy 0.0000
Epoch 1 Batch 50 Loss 4.2013 Accuracy 0.0060
Epoch 1 Batch 100 Loss 4.1497 Accuracy 0.0163
Epoch 1 Batch 150 Loss 4.0909 Accuracy 0.0199
Epoch 1 Batch 200 Loss 4.0447 Accuracy 0.0217
Epoch 1 Batch 250 Loss 3.9861 Accuracy 0.0230
Epoch 1 Batch 300 Loss 3.9042 Accuracy 0.0249
Epoch 1 Batch 350 Loss 3.8193 Accuracy 0.0292
Epoch 1 Batch 400 Loss 3.7338 Accuracy 0.0332
Epoch 1 Batch 450 Loss 3.6571 Accuracy 0.0365
Epoch 1 Batch 500 Loss 3.5944 Accuracy 0.0393
Epoch 1 Batch 550 Loss 3.5332 Accuracy 0.0422
Epoch 1 Batch 600 Loss 3.4753 Accuracy 0.0456
Epoch 1 Batch 650 Loss 3.4180 Accuracy 0.0489
Epoch 1 Batch 700 Loss 3.3662 Accuracy 0.0520
Epoch 1 Loss 3.3641 Accuracy 0.0521
Time taken for 1 epoch: 55.2388596534729 secs

Epoch 2 Batch 0 Loss 2.8413 Accuracy 0.1047
Epoch 2 Batch 50 Loss 2.5892 Accuracy 0.1001
Epoch 2 Batch 100 Loss 2.5900 Accuracy 0.1033
Epoch 2 Batch 150 Loss 2.5760 Accuracy 0.1058
Epoch 2 Batch 200 Loss 2.5435 Accuracy 0.1080
Epoch 2 Batch 250 Loss 2.5170 Accuracy 0.1103
Epoch 2 Batch 300 Loss 2.4980 Accuracy 0.1124
Epoch 2 Batch 350 Loss 2.4757 Accuracy 0.1141
Epoch 2 Batch 400 Loss 2.4573 Accuracy 0.1159
Epoch 2 Batch 450 Loss 2.4371 Accuracy 0.1174
Epoch 2 Batch 500 Loss 2.4209 Accuracy 0.1189
Epoch 2 Batch 550 Loss 2.4075 Accuracy 0.1203
Epoch 2 Batch 600 Loss 2.3948 Accuracy 0.1216
Epoch 2 Batch 650 Loss 2.3815 Accuracy 0.1230
Epoch 2 Batch 700 Loss 2.3697 Accuracy 0.1243
Epoch 2 Loss 2.3698 Accuracy 0.1243
Time taken for 1 epoch: 30.77263855934143 secs

Epoch 3 Batch 0 Loss 1.9162 Accuracy 0.1427
Epoch 3 Batch 50 Loss 2.1435 Accuracy 0.1442
Epoch 3 Batch 100 Loss 2.1582 Accuracy 0.1456
Epoch 3 Batch 150 Loss 2.1557 Accuracy 0.1459
Epoch 3 Batch 200 Loss 2.1495 Accuracy 0.1456
Epoch 3 Batch 250 Loss 2.1479 Accuracy 0.1460
Epoch 3 Batch 300 Loss 2.1461 Accuracy 0.1469
Epoch 3 Batch 350 Loss 2.1399 Accuracy 0.1477
Epoch 3 Batch 400 Loss 2.1273 Accuracy 0.1481
Epoch 3 Batch 450 Loss 2.1195 Accuracy 0.1491
Epoch 3 Batch 500 Loss 2.1101 Accuracy 0.1497
Epoch 3 Batch 550 Loss 2.1058 Accuracy 0.1508
Epoch 3 Batch 600 Loss 2.1034 Accuracy 0.1518
Epoch 3 Batch 650 Loss 2.0954 Accuracy 0.1527
Epoch 3 Batch 700 Loss 2.0902 Accuracy 0.1537
Epoch 3 Loss 2.0902 Accuracy 0.1537
Time taken for 1 epoch: 30.567124843597412 secs

Epoch 4 Batch 0 Loss 1.7908 Accuracy 0.1698
Epoch 4 Batch 50 Loss 1.9342 Accuracy 0.1708
Epoch 4 Batch 100 Loss 1.9264 Accuracy 0.1718
Epoch 4 Batch 150 Loss 1.9134 Accuracy 0.1723
Epoch 4 Batch 200 Loss 1.9076 Accuracy 0.1734
Epoch 4 Batch 250 Loss 1.9054 Accuracy 0.1747
Epoch 4 Batch 300 Loss 1.8962 Accuracy 0.1757
Epoch 4 Batch 350 Loss 1.8907 Accuracy 0.1765
Epoch 4 Batch 400 Loss 1.8842 Accuracy 0.1774
Epoch 4 Batch 450 Loss 1.8767 Accuracy 0.1785
Epoch 4 Batch 500 Loss 1.8719 Accuracy 0.1794
Epoch 4 Batch 550 Loss 1.8618 Accuracy 0.1801
Epoch 4 Batch 600 Loss 1.8545 Accuracy 0.1808
Epoch 4 Batch 650 Loss 1.8513 Accuracy 0.1817
Epoch 4 Batch 700 Loss 1.8434 Accuracy 0.1823
Epoch 4 Loss 1.8433 Accuracy 0.1823
Time taken for 1 epoch: 31.120911359786987 secs

Epoch 5 Batch 0 Loss 1.8566 Accuracy 0.2192
Epoch 5 Batch 50 Loss 1.6991 Accuracy 0.1990
Epoch 5 Batch 100 Loss 1.6896 Accuracy 0.1994
Epoch 5 Batch 150 Loss 1.6890 Accuracy 0.2012
Epoch 5 Batch 200 Loss 1.6764 Accuracy 0.2024
Epoch 5 Batch 250 Loss 1.6776 Accuracy 0.2033
Epoch 5 Batch 300 Loss 1.6727 Accuracy 0.2039
Epoch 5 Batch 350 Loss 1.6639 Accuracy 0.2039
Epoch 5 Batch 400 Loss 1.6556 Accuracy 0.2043
Epoch 5 Batch 450 Loss 1.6536 Accuracy 0.2050
Epoch 5 Batch 500 Loss 1.6506 Accuracy 0.2056
Epoch 5 Batch 550 Loss 1.6438 Accuracy 0.2060
Epoch 5 Batch 600 Loss 1.6387 Accuracy 0.2067
Epoch 5 Batch 650 Loss 1.6365 Accuracy 0.2072
Epoch 5 Batch 700 Loss 1.6315 Accuracy 0.2075
Saving checkpoint for epoch 5 at ./checkpoints/train/ckpt-1
Epoch 5 Loss 1.6317 Accuracy 0.2075
Time taken for 1 epoch: 30.875771522521973 secs

Epoch 6 Batch 0 Loss 1.6206 Accuracy 0.2573
Epoch 6 Batch 50 Loss 1.4679 Accuracy 0.2210
Epoch 6 Batch 100 Loss 1.4976 Accuracy 0.2247
Epoch 6 Batch 150 Loss 1.4848 Accuracy 0.2246
Epoch 6 Batch 200 Loss 1.4851 Accuracy 0.2241
Epoch 6 Batch 250 Loss 1.4827 Accuracy 0.2250
Epoch 6 Batch 300 Loss 1.4789 Accuracy 0.2253
Epoch 6 Batch 350 Loss 1.4777 Accuracy 0.2259
Epoch 6 Batch 400 Loss 1.4724 Accuracy 0.2257
Epoch 6 Batch 450 Loss 1.4741 Accuracy 0.2265
Epoch 6 Batch 500 Loss 1.4704 Accuracy 0.2264
Epoch 6 Batch 550 Loss 1.4672 Accuracy 0.2265
Epoch 6 Batch 600 Loss 1.4625 Accuracy 0.2270
Epoch 6 Batch 650 Loss 1.4579 Accuracy 0.2275
Epoch 6 Batch 700 Loss 1.4569 Accuracy 0.2277
Epoch 6 Loss 1.4567 Accuracy 0.2277
Time taken for 1 epoch: 30.576682090759277 secs

Epoch 7 Batch 0 Loss 1.3955 Accuracy 0.2264
Epoch 7 Batch 50 Loss 1.3122 Accuracy 0.2437
Epoch 7 Batch 100 Loss 1.3008 Accuracy 0.2422
Epoch 7 Batch 150 Loss 1.2955 Accuracy 0.2425
Epoch 7 Batch 200 Loss 1.3033 Accuracy 0.2436
Epoch 7 Batch 250 Loss 1.3022 Accuracy 0.2444
Epoch 7 Batch 300 Loss 1.2960 Accuracy 0.2441
Epoch 7 Batch 350 Loss 1.2960 Accuracy 0.2446
Epoch 7 Batch 400 Loss 1.2927 Accuracy 0.2450
Epoch 7 Batch 450 Loss 1.2908 Accuracy 0.2454
Epoch 7 Batch 500 Loss 1.2879 Accuracy 0.2458
Epoch 7 Batch 550 Loss 1.2853 Accuracy 0.2467
Epoch 7 Batch 600 Loss 1.2842 Accuracy 0.2472
Epoch 7 Batch 650 Loss 1.2818 Accuracy 0.2474
Epoch 7 Batch 700 Loss 1.2805 Accuracy 0.2478
Epoch 7 Loss 1.2807 Accuracy 0.2478
Time taken for 1 epoch: 30.476401567459106 secs

Epoch 8 Batch 0 Loss 1.2037 Accuracy 0.3105
Epoch 8 Batch 50 Loss 1.1160 Accuracy 0.2693
Epoch 8 Batch 100 Loss 1.1325 Accuracy 0.2666
Epoch 8 Batch 150 Loss 1.1290 Accuracy 0.2658
Epoch 8 Batch 200 Loss 1.1314 Accuracy 0.2662
Epoch 8 Batch 250 Loss 1.1316 Accuracy 0.2656
Epoch 8 Batch 300 Loss 1.1331 Accuracy 0.2656
Epoch 8 Batch 350 Loss 1.1344 Accuracy 0.2661
Epoch 8 Batch 400 Loss 1.1344 Accuracy 0.2664
Epoch 8 Batch 450 Loss 1.1297 Accuracy 0.2657
Epoch 8 Batch 500 Loss 1.1313 Accuracy 0.2657
Epoch 8 Batch 550 Loss 1.1302 Accuracy 0.2654
Epoch 8 Batch 600 Loss 1.1294 Accuracy 0.2658
Epoch 8 Batch 650 Loss 1.1297 Accuracy 0.2657
Epoch 8 Batch 700 Loss 1.1292 Accuracy 0.2657
Epoch 8 Loss 1.1295 Accuracy 0.2657
Time taken for 1 epoch: 30.569132804870605 secs

Epoch 9 Batch 0 Loss 0.9520 Accuracy 0.2652
Epoch 9 Batch 50 Loss 0.9949 Accuracy 0.2807
Epoch 9 Batch 100 Loss 1.0021 Accuracy 0.2816
Epoch 9 Batch 150 Loss 1.0126 Accuracy 0.2804
Epoch 9 Batch 200 Loss 1.0097 Accuracy 0.2794
Epoch 9 Batch 250 Loss 1.0119 Accuracy 0.2787
Epoch 9 Batch 300 Loss 1.0113 Accuracy 0.2782
Epoch 9 Batch 350 Loss 1.0132 Accuracy 0.2788
Epoch 9 Batch 400 Loss 1.0124 Accuracy 0.2789
Epoch 9 Batch 450 Loss 1.0149 Accuracy 0.2791
Epoch 9 Batch 500 Loss 1.0166 Accuracy 0.2791
Epoch 9 Batch 550 Loss 1.0155 Accuracy 0.2788
Epoch 9 Batch 600 Loss 1.0172 Accuracy 0.2787
Epoch 9 Batch 650 Loss 1.0191 Accuracy 0.2788
Epoch 9 Batch 700 Loss 1.0192 Accuracy 0.2790
Epoch 9 Loss 1.0192 Accuracy 0.2790
Time taken for 1 epoch: 30.513157844543457 secs

Epoch 10 Batch 0 Loss 0.9379 Accuracy 0.3299
Epoch 10 Batch 50 Loss 0.9048 Accuracy 0.2918
Epoch 10 Batch 100 Loss 0.9192 Accuracy 0.2929
Epoch 10 Batch 150 Loss 0.9257 Accuracy 0.2936
Epoch 10 Batch 200 Loss 0.9282 Accuracy 0.2928
Epoch 10 Batch 250 Loss 0.9225 Accuracy 0.2921
Epoch 10 Batch 300 Loss 0.9252 Accuracy 0.2918
Epoch 10 Batch 350 Loss 0.9272 Accuracy 0.2917
Epoch 10 Batch 400 Loss 0.9267 Accuracy 0.2916
Epoch 10 Batch 450 Loss 0.9282 Accuracy 0.2910
Epoch 10 Batch 500 Loss 0.9305 Accuracy 0.2909
Epoch 10 Batch 550 Loss 0.9291 Accuracy 0.2903
Epoch 10 Batch 600 Loss 0.9315 Accuracy 0.2901
Epoch 10 Batch 650 Loss 0.9332 Accuracy 0.2902
Epoch 10 Batch 700 Loss 0.9346 Accuracy 0.2903
Saving checkpoint for epoch 10 at ./checkpoints/train/ckpt-2
Epoch 10 Loss 0.9348 Accuracy 0.2903
Time taken for 1 epoch: 30.691904544830322 secs

Epoch 11 Batch 0 Loss 0.9086 Accuracy 0.3045
Epoch 11 Batch 50 Loss 0.8389 Accuracy 0.3057
Epoch 11 Batch 100 Loss 0.8365 Accuracy 0.3020
Epoch 11 Batch 150 Loss 0.8402 Accuracy 0.3003
Epoch 11 Batch 200 Loss 0.8416 Accuracy 0.3003
Epoch 11 Batch 250 Loss 0.8433 Accuracy 0.3012
Epoch 11 Batch 300 Loss 0.8467 Accuracy 0.3007
Epoch 11 Batch 350 Loss 0.8507 Accuracy 0.3004
Epoch 11 Batch 400 Loss 0.8533 Accuracy 0.2997
Epoch 11 Batch 450 Loss 0.8539 Accuracy 0.2992
Epoch 11 Batch 500 Loss 0.8546 Accuracy 0.2989
Epoch 11 Batch 550 Loss 0.8599 Accuracy 0.2992
Epoch 11 Batch 600 Loss 0.8616 Accuracy 0.2992
Epoch 11 Batch 650 Loss 0.8635 Accuracy 0.2987
Epoch 11 Batch 700 Loss 0.8648 Accuracy 0.2985
Epoch 11 Loss 0.8654 Accuracy 0.2985
Time taken for 1 epoch: 30.568967819213867 secs

Epoch 12 Batch 0 Loss 0.8505 Accuracy 0.3335
Epoch 12 Batch 50 Loss 0.7716 Accuracy 0.3111
Epoch 12 Batch 100 Loss 0.7790 Accuracy 0.3108
Epoch 12 Batch 150 Loss 0.7833 Accuracy 0.3103
Epoch 12 Batch 200 Loss 0.7841 Accuracy 0.3090
Epoch 12 Batch 250 Loss 0.7858 Accuracy 0.3080
Epoch 12 Batch 300 Loss 0.7910 Accuracy 0.3079
Epoch 12 Batch 350 Loss 0.7935 Accuracy 0.3078
Epoch 12 Batch 400 Loss 0.7920 Accuracy 0.3069
Epoch 12 Batch 450 Loss 0.7925 Accuracy 0.3068
Epoch 12 Batch 500 Loss 0.7940 Accuracy 0.3067
Epoch 12 Batch 550 Loss 0.7971 Accuracy 0.3065
Epoch 12 Batch 600 Loss 0.8012 Accuracy 0.3063
Epoch 12 Batch 650 Loss 0.8058 Accuracy 0.3064
Epoch 12 Batch 700 Loss 0.8079 Accuracy 0.3064
Epoch 12 Loss 0.8080 Accuracy 0.3064
Time taken for 1 epoch: 30.706134796142578 secs

Epoch 13 Batch 0 Loss 0.6729 Accuracy 0.3157
Epoch 13 Batch 50 Loss 0.7198 Accuracy 0.3143
Epoch 13 Batch 100 Loss 0.7231 Accuracy 0.3157
Epoch 13 Batch 150 Loss 0.7310 Accuracy 0.3160
Epoch 13 Batch 200 Loss 0.7319 Accuracy 0.3154
Epoch 13 Batch 250 Loss 0.7381 Accuracy 0.3158
Epoch 13 Batch 300 Loss 0.7413 Accuracy 0.3149
Epoch 13 Batch 350 Loss 0.7440 Accuracy 0.3146
Epoch 13 Batch 400 Loss 0.7465 Accuracy 0.3142
Epoch 13 Batch 450 Loss 0.7482 Accuracy 0.3136
Epoch 13 Batch 500 Loss 0.7522 Accuracy 0.3139
Epoch 13 Batch 550 Loss 0.7542 Accuracy 0.3138
Epoch 13 Batch 600 Loss 0.7575 Accuracy 0.3137
Epoch 13 Batch 650 Loss 0.7587 Accuracy 0.3133
Epoch 13 Batch 700 Loss 0.7604 Accuracy 0.3130
Epoch 13 Loss 0.7608 Accuracy 0.3131
Time taken for 1 epoch: 30.505637884140015 secs

Epoch 14 Batch 0 Loss 0.6366 Accuracy 0.3064
Epoch 14 Batch 50 Loss 0.6707 Accuracy 0.3218
Epoch 14 Batch 100 Loss 0.6839 Accuracy 0.3237
Epoch 14 Batch 150 Loss 0.6880 Accuracy 0.3225
Epoch 14 Batch 200 Loss 0.6909 Accuracy 0.3219
Epoch 14 Batch 250 Loss 0.6954 Accuracy 0.3215
Epoch 14 Batch 300 Loss 0.6986 Accuracy 0.3210
Epoch 14 Batch 350 Loss 0.6979 Accuracy 0.3203
Epoch 14 Batch 400 Loss 0.7002 Accuracy 0.3202
Epoch 14 Batch 450 Loss 0.7040 Accuracy 0.3204
Epoch 14 Batch 500 Loss 0.7066 Accuracy 0.3195
Epoch 14 Batch 550 Loss 0.7105 Accuracy 0.3192
Epoch 14 Batch 600 Loss 0.7135 Accuracy 0.3192
Epoch 14 Batch 650 Loss 0.7169 Accuracy 0.3187
Epoch 14 Batch 700 Loss 0.7206 Accuracy 0.3189
Epoch 14 Loss 0.7206 Accuracy 0.3189
Time taken for 1 epoch: 30.793264150619507 secs

Epoch 15 Batch 0 Loss 0.6081 Accuracy 0.3237
Epoch 15 Batch 50 Loss 0.6350 Accuracy 0.3288
Epoch 15 Batch 100 Loss 0.6420 Accuracy 0.3288
Epoch 15 Batch 150 Loss 0.6460 Accuracy 0.3278
Epoch 15 Batch 200 Loss 0.6539 Accuracy 0.3271
Epoch 15 Batch 250 Loss 0.6571 Accuracy 0.3274
Epoch 15 Batch 300 Loss 0.6619 Accuracy 0.3277
Epoch 15 Batch 350 Loss 0.6657 Accuracy 0.3277
Epoch 15 Batch 400 Loss 0.6689 Accuracy 0.3273
Epoch 15 Batch 450 Loss 0.6718 Accuracy 0.3265
Epoch 15 Batch 500 Loss 0.6752 Accuracy 0.3263
Epoch 15 Batch 550 Loss 0.6767 Accuracy 0.3258
Epoch 15 Batch 600 Loss 0.6787 Accuracy 0.3254
Epoch 15 Batch 650 Loss 0.6810 Accuracy 0.3243
Epoch 15 Batch 700 Loss 0.6835 Accuracy 0.3240
Saving checkpoint for epoch 15 at ./checkpoints/train/ckpt-3
Epoch 15 Loss 0.6837 Accuracy 0.3240
Time taken for 1 epoch: 30.99902367591858 secs

Epoch 16 Batch 0 Loss 0.5566 Accuracy 0.2945
Epoch 16 Batch 50 Loss 0.6243 Accuracy 0.3378
Epoch 16 Batch 100 Loss 0.6222 Accuracy 0.3356
Epoch 16 Batch 150 Loss 0.6200 Accuracy 0.3339
Epoch 16 Batch 200 Loss 0.6277 Accuracy 0.3336
Epoch 16 Batch 250 Loss 0.6278 Accuracy 0.3324
Epoch 16 Batch 300 Loss 0.6314 Accuracy 0.3324
Epoch 16 Batch 350 Loss 0.6327 Accuracy 0.3313
Epoch 16 Batch 400 Loss 0.6342 Accuracy 0.3310
Epoch 16 Batch 450 Loss 0.6369 Accuracy 0.3310
Epoch 16 Batch 500 Loss 0.6386 Accuracy 0.3302
Epoch 16 Batch 550 Loss 0.6425 Accuracy 0.3294
Epoch 16 Batch 600 Loss 0.6451 Accuracy 0.3289
Epoch 16 Batch 650 Loss 0.6493 Accuracy 0.3291
Epoch 16 Batch 700 Loss 0.6524 Accuracy 0.3289
Epoch 16 Loss 0.6527 Accuracy 0.3290
Time taken for 1 epoch: 30.544687271118164 secs

Epoch 17 Batch 0 Loss 0.5590 Accuracy 0.3464
Epoch 17 Batch 50 Loss 0.5772 Accuracy 0.3355
Epoch 17 Batch 100 Loss 0.5882 Accuracy 0.3399
Epoch 17 Batch 150 Loss 0.5937 Accuracy 0.3404
Epoch 17 Batch 200 Loss 0.5982 Accuracy 0.3395
Epoch 17 Batch 250 Loss 0.6016 Accuracy 0.3388
Epoch 17 Batch 300 Loss 0.6027 Accuracy 0.3374
Epoch 17 Batch 350 Loss 0.6055 Accuracy 0.3367
Epoch 17 Batch 400 Loss 0.6085 Accuracy 0.3365
Epoch 17 Batch 450 Loss 0.6099 Accuracy 0.3358
Epoch 17 Batch 500 Loss 0.6128 Accuracy 0.3354
Epoch 17 Batch 550 Loss 0.6162 Accuracy 0.3345
Epoch 17 Batch 600 Loss 0.6177 Accuracy 0.3339
Epoch 17 Batch 650 Loss 0.6202 Accuracy 0.3332
Epoch 17 Batch 700 Loss 0.6231 Accuracy 0.3329
Epoch 17 Loss 0.6232 Accuracy 0.3329
Time taken for 1 epoch: 30.26451539993286 secs

Epoch 18 Batch 0 Loss 0.6310 Accuracy 0.3737
Epoch 18 Batch 50 Loss 0.5485 Accuracy 0.3409
Epoch 18 Batch 100 Loss 0.5587 Accuracy 0.3434
Epoch 18 Batch 150 Loss 0.5623 Accuracy 0.3411
Epoch 18 Batch 200 Loss 0.5652 Accuracy 0.3398
Epoch 18 Batch 250 Loss 0.5695 Accuracy 0.3399
Epoch 18 Batch 300 Loss 0.5738 Accuracy 0.3395
Epoch 18 Batch 350 Loss 0.5765 Accuracy 0.3390
Epoch 18 Batch 400 Loss 0.5802 Accuracy 0.3387
Epoch 18 Batch 450 Loss 0.5824 Accuracy 0.3383
Epoch 18 Batch 500 Loss 0.5860 Accuracy 0.3380
Epoch 18 Batch 550 Loss 0.5892 Accuracy 0.3382
Epoch 18 Batch 600 Loss 0.5918 Accuracy 0.3374
Epoch 18 Batch 650 Loss 0.5947 Accuracy 0.3367
Epoch 18 Batch 700 Loss 0.5964 Accuracy 0.3362
Epoch 18 Loss 0.5964 Accuracy 0.3362
Time taken for 1 epoch: 30.315643787384033 secs

Epoch 19 Batch 0 Loss 0.5290 Accuracy 0.3321
Epoch 19 Batch 50 Loss 0.5302 Accuracy 0.3484
Epoch 19 Batch 100 Loss 0.5385 Accuracy 0.3446
Epoch 19 Batch 150 Loss 0.5386 Accuracy 0.3437
Epoch 19 Batch 200 Loss 0.5442 Accuracy 0.3438
Epoch 19 Batch 250 Loss 0.5459 Accuracy 0.3430
Epoch 19 Batch 300 Loss 0.5496 Accuracy 0.3423
Epoch 19 Batch 350 Loss 0.5526 Accuracy 0.3413
Epoch 19 Batch 400 Loss 0.5569 Accuracy 0.3414
Epoch 19 Batch 450 Loss 0.5601 Accuracy 0.3417
Epoch 19 Batch 500 Loss 0.5631 Accuracy 0.3415
Epoch 19 Batch 550 Loss 0.5660 Accuracy 0.3412
Epoch 19 Batch 600 Loss 0.5684 Accuracy 0.3409
Epoch 19 Batch 650 Loss 0.5709 Accuracy 0.3404
Epoch 19 Batch 700 Loss 0.5731 Accuracy 0.3398
Epoch 19 Loss 0.5731 Accuracy 0.3398
Time taken for 1 epoch: 30.29426908493042 secs

Epoch 20 Batch 0 Loss 0.5429 Accuracy 0.3313
Epoch 20 Batch 50 Loss 0.5187 Accuracy 0.3509
Epoch 20 Batch 100 Loss 0.5268 Accuracy 0.3550
Epoch 20 Batch 150 Loss 0.5325 Accuracy 0.3525
Epoch 20 Batch 200 Loss 0.5315 Accuracy 0.3498
Epoch 20 Batch 250 Loss 0.5342 Accuracy 0.3492
Epoch 20 Batch 300 Loss 0.5346 Accuracy 0.3491
Epoch 20 Batch 350 Loss 0.5395 Accuracy 0.3483
Epoch 20 Batch 400 Loss 0.5421 Accuracy 0.3473
Epoch 20 Batch 450 Loss 0.5422 Accuracy 0.3462
Epoch 20 Batch 500 Loss 0.5438 Accuracy 0.3456
Epoch 20 Batch 550 Loss 0.5460 Accuracy 0.3452
Epoch 20 Batch 600 Loss 0.5487 Accuracy 0.3448
Epoch 20 Batch 650 Loss 0.5518 Accuracy 0.3443
Epoch 20 Batch 700 Loss 0.5536 Accuracy 0.3438
Saving checkpoint for epoch 20 at ./checkpoints/train/ckpt-4
Epoch 20 Loss 0.5539 Accuracy 0.3438
Time taken for 1 epoch: 30.426174640655518 secs

Evaluate

The following steps are used for evaluation:

  • Encode the input sentence using the Portuguese tokenizer (tokenizer_pt), and add the start and end token so the input is equivalent to what the model was trained with. This is the encoder input.
  • The decoder input is the start token == tokenizer_en.vocab_size.
  • Calculate the padding masks and the look ahead masks.
  • The decoder then outputs the predictions by looking at the encoder output and its own output (self-attention).
  • Select the last word and calculate its argmax.
  • Concatenate the predicted word to the decoder input and pass it to the decoder.
  • In this approach, the decoder predicts the next word based on the previous words it predicted.
def evaluate(inp_sentence):
  start_token = [tokenizer_pt.vocab_size]
  end_token = [tokenizer_pt.vocab_size + 1]
  
  # inp sentence is portuguese, hence adding the start and end token
  inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
  encoder_input = tf.expand_dims(inp_sentence, 0)
  
  # as the target is english, the first word to the transformer should be the
  # english start token.
  decoder_input = [tokenizer_en.vocab_size]
  output = tf.expand_dims(decoder_input, 0)
    
  for i in range(MAX_LENGTH):
    enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
        encoder_input, output)
  
    # predictions.shape == (batch_size, seq_len, vocab_size)
    predictions, attention_weights = transformer(encoder_input, 
                                                 output,
                                                 False,
                                                 enc_padding_mask,
                                                 combined_mask,
                                                 dec_padding_mask)
    
    # select the last word from the seq_len dimension
    predictions = predictions[:, -1:, :]  # (batch_size, 1, vocab_size)

    predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
    
    # return the result if the predicted_id is equal to the end token
    if predicted_id == tokenizer_en.vocab_size+1:
      return tf.squeeze(output, axis=0), attention_weights
    
    # concatenate the predicted_id to the output which is given to the decoder
    # as its input.
    output = tf.concat([output, predicted_id], axis=-1)

  return tf.squeeze(output, axis=0), attention_weights
def plot_attention_weights(attention, sentence, result, layer):
  fig = plt.figure(figsize=(16, 8))
  
  sentence = tokenizer_pt.encode(sentence)
  
  attention = tf.squeeze(attention[layer], axis=0)
  
  for head in range(attention.shape[0]):
    ax = fig.add_subplot(2, 4, head+1)
    
    # plot the attention weights
    ax.matshow(attention[head][:-1, :], cmap='viridis')

    fontdict = {'fontsize': 10}
    
    ax.set_xticks(range(len(sentence)+2))
    ax.set_yticks(range(len(result)))
    
    ax.set_ylim(len(result)-1.5, -0.5)
        
    ax.set_xticklabels(
        ['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'], 
        fontdict=fontdict, rotation=90)
    
    ax.set_yticklabels([tokenizer_en.decode([i]) for i in result 
                        if i < tokenizer_en.vocab_size], 
                       fontdict=fontdict)
    
    ax.set_xlabel('Head {}'.format(head+1))
  
  plt.tight_layout()
  plt.show()
def translate(sentence, plot=''):
  result, attention_weights = evaluate(sentence)
  
  predicted_sentence = tokenizer_en.decode([i for i in result 
                                            if i < tokenizer_en.vocab_size])  

  print('Input: {}'.format(sentence))
  print('Predicted translation: {}'.format(predicted_sentence))
  
  if plot:
    plot_attention_weights(attention_weights, sentence, result, plot)
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
Input: este é um problema que temos que resolver.
Predicted translation: this is a problem that we have to solve for the world .
Real translation: this is a problem we have to solve .
translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
Input: os meus vizinhos ouviram sobre esta ideia.
Predicted translation: my neighbors heard about this idea .
Real translation: and my neighboring homes heard about this idea .
translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
Input: vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.
Predicted translation: so i 'm going to close you a little bit of you through some magic stories that have happened .
Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .

You can pass different layers and attention blocks of the decoder to the plot parameter.

translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
Input: este é o primeiro livro que eu fiz.
Predicted translation: this is the first book that i did .

Attention weights plot

Real translation: this is the first book i've ever done.

Summary

In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking and how to create a transformer.

Try using a different dataset to train the transformer. You can also create the base transformer or Transformer XL by changing the hyperparameters above. You can also use the layers defined here to create BERT and train state-of-the-art models. Furthermore, you can implement beam search to get better predictions.
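
For example, a minimal sketch of swapping in another language pair from the same TED Talks collection (here ru_to_en, assuming that config of ted_hrlr_translate; the rest of the pipeline stays the same apart from rebuilding the tokenizers):

examples, metadata = tfds.load('ted_hrlr_translate/ru_to_en', with_info=True,
                               as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']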