Transformer model for language understanding

This tutorial trains a Transformer model to translate Portuguese to English. This is an advanced example that assumes knowledge of text generation and attention.

The core idea behind the Transformer model is self-attention—the ability to attend to different positions of the input sequence to compute a representation of that sequence. Transformer creates stacks of self-attention layers and is explained below in the sections Scaled dot product attention and Multi-head attention.

A transformer model handles variable-sized input using stacks of self-attention layers instead of RNNs or CNNs. This general architecture has a number of advantages:

  • It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, StarCraft units).
  • Layer outputs can be calculated in parallel, instead of in series like an RNN.
  • Distant items can affect each other's output without passing through many RNN-steps, or convolution layers (see Scene Memory Transformer for example).
  • It can learn long-range dependencies. This is a challenge in many sequence tasks.

The downsides of this architecture are:

  • For a time-series, the output for a time-step is calculated from the entire history instead of only the inputs and current hidden-state. This may be less efficient.
  • If the input does have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words.

After training the model in this notebook, you will be able to input a Portuguese sentence and return the English translation.

Attention heatmap

from __future__ import absolute_import, division, print_function, unicode_literals

!pip install -q tensorflow-gpu==2.0.0-beta1
import tensorflow_datasets as tfds
import tensorflow as tf

import time
import numpy as np
import matplotlib.pyplot as plt

Setup input pipeline

Use TFDS to load the Portuguese-English translation dataset from the TED Talks Open Translation Project.

This dataset contains approximately 50000 training examples, 1100 validation examples, and 2000 test examples.

examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
                               as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
Downloading and preparing dataset ted_hrlr_translate (124.94 MiB) to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/0.0.1...


Dataset ted_hrlr_translate downloaded and prepared to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/0.0.1. Subsequent calls will reuse this data.

Create a custom subwords tokenizer from the training dataset.

tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (en.numpy() for pt, en in train_examples), target_vocab_size=2**13)

tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
sample_string = 'Transformer is awesome.'

tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))

original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))

assert original_string == sample_string
Tokenized string is [7915, 1248, 7946, 7194, 13, 2799, 7877]
The original string: Transformer is awesome.

The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.

for ts in tokenized_string:
  print ('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))
7915 ----> T
1248 ----> ran
7946 ----> s
7194 ----> former 
13 ----> is 
2799 ----> awesome
7877 ----> .
BUFFER_SIZE = 20000
BATCH_SIZE = 64

Add a start and end token to the input and target.

def encode(lang1, lang2):
  lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
      lang1.numpy()) + [tokenizer_pt.vocab_size+1]

  lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
      lang2.numpy()) + [tokenizer_en.vocab_size+1]
  
  return lang1, lang2
MAX_LENGTH = 40
def filter_max_length(x, y, max_length=MAX_LENGTH):
  return tf.logical_and(tf.size(x) <= max_length,
                        tf.size(y) <= max_length)

Operations inside .map() run in graph mode and receive graph tensors that do not have a numpy attribute. The tokenizer expects a string or Unicode string to encode into integers. Hence, you need to run the encoding inside a tf.py_function, which receives an eager tensor with a numpy attribute that contains the string value.

def tf_encode(pt, en):
  return tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
# cache the dataset to memory to get a speedup while reading from it.
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(
    BATCH_SIZE, padded_shapes=([-1], [-1]))
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)


val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(
    BATCH_SIZE, padded_shapes=([-1], [-1]))
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch

(<tf.Tensor: id=311422, shape=(64, 40), dtype=int64, numpy=
 array([[8214, 1259,    5, ...,    0,    0,    0],
        [8214,  299,   13, ...,    0,    0,    0],
        [8214,   59,    8, ...,    0,    0,    0],
        ...,
        [8214,   95,    3, ...,    0,    0,    0],
        [8214, 5157,    1, ...,    0,    0,    0],
        [8214, 4479, 7990, ...,    0,    0,    0]])>,
 <tf.Tensor: id=311423, shape=(64, 40), dtype=int64, numpy=
 array([[8087,   18,   12, ...,    0,    0,    0],
        [8087,  634,   30, ...,    0,    0,    0],
        [8087,   16,   13, ...,    0,    0,    0],
        ...,
        [8087,   12,   20, ...,    0,    0,    0],
        [8087,   17, 4981, ...,    0,    0,    0],
        [8087,   12, 5453, ...,    0,    0,    0]])>)

Positional encoding

Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence.

The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the similarity of their meaning and their position in the sentence, in the d-dimensional space.

See the notebook on positional encoding to learn more about it. The formula for calculating the positional encoding is as follows:

$$\Large{PE_{(pos, 2i)} = \sin(pos / 10000^{2i / d_{model}})} $$
$$\Large{PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i / d_{model}})} $$
def get_angles(pos, i, d_model):
  angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
  return pos * angle_rates
def positional_encoding(position, d_model):
  angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                          np.arange(d_model)[np.newaxis, :],
                          d_model)
  
  # apply sin to even indices in the array; 2i
  sines = np.sin(angle_rads[:, 0::2])
  
  # apply cos to odd indices in the array; 2i+1
  cosines = np.cos(angle_rads[:, 1::2])
  
  pos_encoding = np.concatenate([sines, cosines], axis=-1)
  
  pos_encoding = pos_encoding[np.newaxis, ...]
    
  return tf.cast(pos_encoding, dtype=tf.float32)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)

plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
(1, 50, 512)

Plot of the positional encoding (position vs. depth)

Masking

Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value 0 is present: it outputs a 1 at those locations, and a 0 otherwise.

def create_padding_mask(seq):
  seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
  
  # add extra dimensions so that we can add the padding
  # to the attention logits.
  return seq[:, tf.newaxis, tf.newaxis, :]  # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
<tf.Tensor: id=311439, shape=(3, 1, 1, 5), dtype=float32, numpy=
array([[[[0., 0., 1., 1., 0.]]],


       [[[0., 0., 0., 1., 1.]]],


       [[[1., 1., 1., 0., 0.]]]], dtype=float32)>

The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.

This means that to predict the third word, only the first and second words will be used. Similarly, to predict the fourth word, only the first, second and third words will be used, and so on.

def create_look_ahead_mask(size):
  mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
  return mask  # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
<tf.Tensor: id=311455, shape=(3, 3), dtype=float32, numpy=
array([[0., 1., 1.],
       [0., 0., 1.],
       [0., 0., 0.]], dtype=float32)>

Scaled dot product attention

scaled_dot_product_attention

The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:

$$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k}}) V} $$

The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into regions where it has small gradients, resulting in a very hard softmax.

For example, consider that Q and K have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of dk. Hence, the square root of dk is used for scaling (and not any other number), because the matmul of Q and K should keep a mean of 0 and variance of 1, so that we get a gentler softmax.
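
You can check this scaling argument numerically. The snippet below is illustrative only (it is not part of the model): with q and k drawn from a standard normal distribution, the raw dot products have a variance close to dk, while the scaled logits have a variance close to 1.

# Illustrative only: variance of the dot products before and after scaling.
d_k = 512
q = np.random.normal(0, 1, (1000, d_k))
k = np.random.normal(0, 1, (1000, d_k))

logits = q @ k.T                       # variance grows with the depth d_k
print (np.var(logits))                 # roughly d_k (about 512)
print (np.var(logits / np.sqrt(d_k)))  # roughly 1 after scaling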

The mask is multiplied with -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K, and is applied immediately before the softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output.

def scaled_dot_product_attention(q, k, v, mask):
  """Calculate the attention weights.
  q, k, v must have matching leading dimensions.
  k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
  The mask has different shapes depending on its type (padding or look ahead)
  but it must be broadcastable for addition.
  
  Args:
    q: query shape == (..., seq_len_q, depth)
    k: key shape == (..., seq_len_k, depth)
    v: value shape == (..., seq_len_v, depth_v)
    mask: Float tensor with shape broadcastable 
          to (..., seq_len_q, seq_len_k). Defaults to None.
    
  Returns:
    output, attention_weights
  """

  matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)
  
  # scale matmul_qk
  dk = tf.cast(tf.shape(k)[-1], tf.float32)
  scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

  # add the mask to the scaled tensor.
  if mask is not None:
    scaled_attention_logits += (mask * -1e9)  

  # softmax is normalized on the last axis (seq_len_k) so that the scores
  # add up to 1.
  attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)

  output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)

  return output, attention_weights

As the softmax normalization is done on K, its values decide the amount of importance given to Q.

The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words we want to focus on are kept as is and the irrelevant words are flushed out.

def print_out(q, k, v):
  temp_out, temp_attn = scaled_dot_product_attention(
      q, k, v, None)
  print ('Attention weights are:')
  print (temp_attn)
  print ('Output is:')
  print (temp_out)
np.set_printoptions(suppress=True)

temp_k = tf.constant([[10,0,0],
                      [0,10,0],
                      [0,0,10],
                      [0,0,10]], dtype=tf.float32)  # (4, 3)

temp_v = tf.constant([[   1,0],
                      [  10,0],
                      [ 100,5],
                      [1000,6]], dtype=tf.float32)  # (4, 2)

# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0. 1. 0. 0.]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[10.  0.]], shape=(1, 2), dtype=float32)
# This query aligns with a repeated key (third and fourth), 
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.  0.  0.5 0.5]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[550.    5.5]], shape=(1, 2), dtype=float32)
# This query aligns equally with the first and second key, 
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.5 0.5 0.  0. ]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[5.5 0. ]], shape=(1, 2), dtype=float32)

Pass all the queries together.

temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32)  # (3, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor(
[[0.  0.  0.5 0.5]
 [0.  1.  0.  0. ]
 [0.5 0.5 0.  0. ]], shape=(3, 4), dtype=float32)
Output is:
tf.Tensor(
[[550.    5.5]
 [ 10.    0. ]
 [  5.5   0. ]], shape=(3, 2), dtype=float32)

Multi-head attention

multi-head attention

Multi-head attention consists of four parts:

  • Linear layers and split into heads.
  • Scaled dot-product attention.
  • Concatenation of heads.
  • Final linear layer.

Each multi-head attention block gets three inputs: Q (query), K (key), V (value). These are put through linear (Dense) layers and split up into multiple heads.

The scaled_dot_product_attention defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using tf.transpose, and tf.reshape) and put through a final Dense layer.

Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information at different positions from different representational spaces. After the split, each head has a reduced dimensionality, so the total computation cost is the same as single-head attention with full dimensionality.
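
As a quick, shape-only sketch (not part of the model) of what this split looks like: with d_model=512 and 8 heads, each head gets a depth of 64. The reshape and transpose below are the same operations used in split_heads in the layer defined next.

# Shape-only sketch of the head split used by split_heads below.
temp_x = tf.random.uniform((2, 5, 512))     # (batch_size, seq_len, d_model)
temp_x = tf.reshape(temp_x, (2, 5, 8, 64))  # split d_model=512 into 8 heads of depth 64
temp_x = tf.transpose(temp_x, perm=[0, 2, 1, 3])
print (temp_x.shape)  # (2, 8, 5, 64): each head attends over the full sequence at a smaller depth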

class MultiHeadAttention(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads):
    super(MultiHeadAttention, self).__init__()
    self.num_heads = num_heads
    self.d_model = d_model
    
    assert d_model % self.num_heads == 0
    
    self.depth = d_model // self.num_heads
    
    self.wq = tf.keras.layers.Dense(d_model)
    self.wk = tf.keras.layers.Dense(d_model)
    self.wv = tf.keras.layers.Dense(d_model)
    
    self.dense = tf.keras.layers.Dense(d_model)
        
  def split_heads(self, x, batch_size):
    """Split the last dimension into (num_heads, depth).
    Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
    """
    x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
    return tf.transpose(x, perm=[0, 2, 1, 3])
    
  def call(self, v, k, q, mask):
    batch_size = tf.shape(q)[0]
    
    q = self.wq(q)  # (batch_size, seq_len, d_model)
    k = self.wk(k)  # (batch_size, seq_len, d_model)
    v = self.wv(v)  # (batch_size, seq_len, d_model)
    
    q = self.split_heads(q, batch_size)  # (batch_size, num_heads, seq_len_q, depth)
    k = self.split_heads(k, batch_size)  # (batch_size, num_heads, seq_len_k, depth)
    v = self.split_heads(v, batch_size)  # (batch_size, num_heads, seq_len_v, depth)
    
    # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
    # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
    scaled_attention, attention_weights = scaled_dot_product_attention(
        q, k, v, mask)
    
    scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])  # (batch_size, seq_len_q, num_heads, depth)

    concat_attention = tf.reshape(scaled_attention, 
                                  (batch_size, -1, self.d_model))  # (batch_size, seq_len_q, d_model)

    output = self.dense(concat_attention)  # (batch_size, seq_len_q, d_model)
        
    return output, attention_weights

Create a MultiHeadAttention layer to try out. At each location in the sequence, y, the MultiHeadAttention runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.

temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512))  # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
(TensorShape([1, 60, 512]), TensorShape([1, 8, 60, 60]))

Point wise feed forward network

The point wise feed forward network consists of two fully-connected layers with a ReLU activation in between.

def point_wise_feed_forward_network(d_model, dff):
  return tf.keras.Sequential([
      tf.keras.layers.Dense(dff, activation='relu'),  # (batch_size, seq_len, dff)
      tf.keras.layers.Dense(d_model)  # (batch_size, seq_len, d_model)
  ])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
TensorShape([64, 50, 512])

Encoder and decoder

transformer

The transformer model follows the same general pattern as a standard sequence-to-sequence model with attention.

  • The input sentence is passed through N encoder layers that generate an output for each word/token in the sequence.
  • The decoder attends to the encoder's output and its own input (self-attention) to predict the next word.

Encoder layer

Each encoder layer consists of sublayers:

  1. Multi-head attention (with padding mask)
  2. Point wise feed forward networks.

Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.

The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis. There are N encoder layers in the transformer.

class EncoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(EncoderLayer, self).__init__()

    self.mha = MultiHeadAttention(d_model, num_heads)
    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    
  def call(self, x, training, mask):

    attn_output, _ = self.mha(x, x, x, mask)  # (batch_size, input_seq_len, d_model)
    attn_output = self.dropout1(attn_output, training=training)
    out1 = self.layernorm1(x + attn_output)  # (batch_size, input_seq_len, d_model)
    
    ffn_output = self.ffn(out1)  # (batch_size, input_seq_len, d_model)
    ffn_output = self.dropout2(ffn_output, training=training)
    out2 = self.layernorm2(out1 + ffn_output)  # (batch_size, input_seq_len, d_model)
    
    return out2
sample_encoder_layer = EncoderLayer(512, 8, 2048)

sample_encoder_layer_output = sample_encoder_layer(
    tf.random.uniform((64, 43, 512)), False, None)

sample_encoder_layer_output.shape  # (batch_size, input_seq_len, d_model)
TensorShape([64, 43, 512])

Decoder layer

Each decoder layer consists of sublayers:

  1. Masked multi-head attention (with look ahead mask and padding mask)
  2. Multi-head attention (with padding mask). V (value) and K (key) receive the encoder output as inputs. Q (query) receives the output from the masked multi-head attention sublayer.
  3. Point wise feed forward networks

Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis.

There are N decoder layers in the transformer.

As Q receives the output from the decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.

class DecoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(DecoderLayer, self).__init__()

    self.mha1 = MultiHeadAttention(d_model, num_heads)
    self.mha2 = MultiHeadAttention(d_model, num_heads)

    self.ffn = point_wise_feed_forward_network(d_model, dff)
 
    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    self.dropout3 = tf.keras.layers.Dropout(rate)
    
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):
    # enc_output.shape == (batch_size, input_seq_len, d_model)

    attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)  # (batch_size, target_seq_len, d_model)
    attn1 = self.dropout1(attn1, training=training)
    out1 = self.layernorm1(attn1 + x)
    
    attn2, attn_weights_block2 = self.mha2(
        enc_output, enc_output, out1, padding_mask)  # (batch_size, target_seq_len, d_model)
    attn2 = self.dropout2(attn2, training=training)
    out2 = self.layernorm2(attn2 + out1)  # (batch_size, target_seq_len, d_model)
    
    ffn_output = self.ffn(out2)  # (batch_size, target_seq_len, d_model)
    ffn_output = self.dropout3(ffn_output, training=training)
    out3 = self.layernorm3(ffn_output + out2)  # (batch_size, target_seq_len, d_model)
    
    return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(512, 8, 2048)

sample_decoder_layer_output, _, _ = sample_decoder_layer(
    tf.random.uniform((64, 50, 512)), sample_encoder_layer_output, 
    False, None, None)

sample_decoder_layer_output.shape  # (batch_size, target_seq_len, d_model)
TensorShape([64, 50, 512])

Encoder

The Encoder consists of:

  1. Input Embedding
  2. Positional Encoding
  3. N encoder layers

The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.

class Encoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, 
               rate=0.1):
    super(Encoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
    self.pos_encoding = positional_encoding(input_vocab_size, self.d_model)
    
    
    self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
  
    self.dropout = tf.keras.layers.Dropout(rate)
        
  def call(self, x, training, mask):

    seq_len = tf.shape(x)[1]
    
    # adding embedding and position encoding.
    x = self.embedding(x)  # (batch_size, input_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)
    
    for i in range(self.num_layers):
      x = self.enc_layers[i](x, training, mask)
    
    return x  # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, input_vocab_size=8500)

sample_encoder_output = sample_encoder(tf.random.uniform((64, 62)), 
                                       training=False, mask=None)

print (sample_encoder_output.shape)  # (batch_size, input_seq_len, d_model)
(64, 62, 512)

Decoder

The Decoder consists of:

  1. Output Embedding
  2. Positional Encoding
  3. N decoder layers

The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.

class Decoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size, 
               rate=0.1):
    super(Decoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
    self.pos_encoding = positional_encoding(target_vocab_size, self.d_model)
    
    self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
    self.dropout = tf.keras.layers.Dropout(rate)
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):

    seq_len = tf.shape(x)[1]
    attention_weights = {}
    
    x = self.embedding(x)  # (batch_size, target_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]
    
    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x, block1, block2 = self.dec_layers[i](x, enc_output, training,
                                             look_ahead_mask, padding_mask)
      
      attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
      attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
    
    # x.shape == (batch_size, target_seq_len, d_model)
    return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, target_vocab_size=8000)

output, attn = sample_decoder(tf.random.uniform((64, 26)), 
                              enc_output=sample_encoder_output, 
                              training=False, look_ahead_mask=None, 
                              padding_mask=None)

output.shape, attn['decoder_layer2_block2'].shape
(TensorShape([64, 26, 512]), TensorShape([64, 8, 26, 62]))

Create the Transformer

The Transformer consists of the encoder, the decoder, and a final linear layer. The output of the decoder is the input to the final linear layer, and its output is returned.

class Transformer(tf.keras.Model):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, 
               target_vocab_size, rate=0.1):
    super(Transformer, self).__init__()

    self.encoder = Encoder(num_layers, d_model, num_heads, dff, 
                           input_vocab_size, rate)

    self.decoder = Decoder(num_layers, d_model, num_heads, dff, 
                           target_vocab_size, rate)

    self.final_layer = tf.keras.layers.Dense(target_vocab_size)
    
  def call(self, inp, tar, training, enc_padding_mask, 
           look_ahead_mask, dec_padding_mask):

    enc_output = self.encoder(inp, training, enc_padding_mask)  # (batch_size, inp_seq_len, d_model)
    
    # dec_output.shape == (batch_size, tar_seq_len, d_model)
    dec_output, attention_weights = self.decoder(
        tar, enc_output, training, look_ahead_mask, dec_padding_mask)
    
    final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)
    
    return final_output, attention_weights
sample_transformer = Transformer(
    num_layers=2, d_model=512, num_heads=8, dff=2048, 
    input_vocab_size=8500, target_vocab_size=8000)

temp_input = tf.random.uniform((64, 62))
temp_target = tf.random.uniform((64, 26))

fn_out, _ = sample_transformer(temp_input, temp_target, training=False, 
                               enc_padding_mask=None, 
                               look_ahead_mask=None,
                               dec_padding_mask=None)

fn_out.shape  # (batch_size, tar_seq_len, target_vocab_size)
TensorShape([64, 26, 8000])

Set hyperparameters

To keep this example small and relatively fast, the values for num_layers, d_model, and dff have been reduced.

The values used in the base model of the transformer were: num_layers=6, d_model=512, dff=2048. See the paper for all the other versions of the transformer.

num_layers = 4
d_model = 128
dff = 512
num_heads = 8

input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1

Optimizer

Use the Adam optimizer with a custom learning rate scheduler according to the formula in the paper.

$$\Large{lrate = d_{model}^{-0.5} * min(step{\_}num^{-0.5}, step{\_}num * warmup{\_}steps^{-1.5})}$$
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
  def __init__(self, d_model, warmup_steps=4000):
    super(CustomSchedule, self).__init__()
    
    self.d_model = d_model
    self.d_model = tf.cast(self.d_model, tf.float32)

    self.warmup_steps = warmup_steps
    
  def __call__(self, step):
    arg1 = tf.math.rsqrt(step)
    arg2 = step * (self.warmup_steps ** -1.5)
    
    return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)

optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, 
                                     epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)

plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
Text(0.5, 0, 'Train Step')

Plot of the learning rate schedule over training steps

Loss and metrics

Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
def loss_function(real, pred):
  mask = tf.math.logical_not(tf.math.equal(real, 0))
  loss_ = loss_object(real, pred)

  mask = tf.cast(mask, dtype=loss_.dtype)
  loss_ *= mask
  
  return tf.reduce_mean(loss_)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    name='train_accuracy')

Training and checkpointing

transformer = Transformer(num_layers, d_model, num_heads, dff,
                          input_vocab_size, target_vocab_size, dropout_rate)
def create_masks(inp, tar):
  # Encoder padding mask
  enc_padding_mask = create_padding_mask(inp)
  
  # Used in the 2nd attention block in the decoder.
  # This padding mask is used to mask the encoder outputs.
  dec_padding_mask = create_padding_mask(inp)
  
  # Used in the 1st attention block in the decoder.
  # It is used to pad and mask future tokens in the input received by 
  # the decoder.
  look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
  dec_target_padding_mask = create_padding_mask(tar)
  combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
  
  return enc_padding_mask, combined_mask, dec_padding_mask

Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every n epochs.

checkpoint_path = "./checkpoints/train"

ckpt = tf.train.Checkpoint(transformer=transformer,
                           optimizer=optimizer)

ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)

# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
  ckpt.restore(ckpt_manager.latest_checkpoint)
  print ('Latest checkpoint restored!!')

The target is divided into tar_inp and tar_real. tar_inp is passed as an input to the decoder. tar_real is that same input shifted by 1: at each location in tar_inp, tar_real contains the next token that should be predicted.

For example, sentence = "SOS A lion in the jungle is sleeping EOS"

tar_inp = "SOS A lion in the jungle is sleeping"

tar_real = "A lion in the jungle is sleeping EOS"

The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.

During training this example uses teacher-forcing (like in the text generation tutorial). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.

As the transformer predicts each word, self-attention allows it to look at the previous words in the input sequence to better predict the next word.

To prevent the model from peeking at the expected output, the model uses a look-ahead mask.
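
To see what this looks like in practice, here is a small, illustrative call to create_masks with a short padded target (the token values are arbitrary). In the combined mask, a 1 marks a position that is masked out, so each row can only attend to earlier, non-padding positions.

# Illustrative only: the combined decoder mask for a short padded target.
temp_inp = tf.constant([[1, 2, 0]])     # arbitrary input ending in a padding token
temp_tar = tf.constant([[1, 2, 3, 0]])  # arbitrary target ending in a padding token
_, temp_combined_mask, _ = create_masks(temp_inp, temp_tar)
print (temp_combined_mask[0, 0])  # (4, 4): future positions and the padded column are masked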

EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.

train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]

@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
  tar_inp = tar[:, :-1]
  tar_real = tar[:, 1:]
  
  enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
  
  with tf.GradientTape() as tape:
    predictions, _ = transformer(inp, tar_inp, 
                                 True, 
                                 enc_padding_mask, 
                                 combined_mask, 
                                 dec_padding_mask)
    loss = loss_function(tar_real, predictions)

  gradients = tape.gradient(loss, transformer.trainable_variables)    
  optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
  
  train_loss(loss)
  train_accuracy(tar_real, predictions)

Portuguese is used as the input language and English is the target language.

for epoch in range(EPOCHS):
  start = time.time()
  
  train_loss.reset_states()
  train_accuracy.reset_states()
  
  # inp -> portuguese, tar -> english
  for (batch, (inp, tar)) in enumerate(train_dataset):
    train_step(inp, tar)
    
    if batch % 50 == 0:
      print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
          epoch + 1, batch, train_loss.result(), train_accuracy.result()))
      
  if (epoch + 1) % 5 == 0:
    ckpt_save_path = ckpt_manager.save()
    print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
                                                         ckpt_save_path))
    
  print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1, 
                                                train_loss.result(), 
                                                train_accuracy.result()))

  print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
Epoch 1 Batch 0 Loss 4.2486 Accuracy 0.0000
Epoch 1 Batch 50 Loss 4.1704 Accuracy 0.0003
Epoch 1 Batch 100 Loss 4.1625 Accuracy 0.0121
Epoch 1 Batch 150 Loss 4.1206 Accuracy 0.0170
Epoch 1 Batch 200 Loss 4.0570 Accuracy 0.0194
Epoch 1 Batch 250 Loss 3.9888 Accuracy 0.0210
Epoch 1 Batch 300 Loss 3.9094 Accuracy 0.0238
Epoch 1 Batch 350 Loss 3.8285 Accuracy 0.0275
Epoch 1 Batch 400 Loss 3.7415 Accuracy 0.0308
Epoch 1 Batch 450 Loss 3.6645 Accuracy 0.0342
Epoch 1 Batch 500 Loss 3.5929 Accuracy 0.0372
Epoch 1 Batch 550 Loss 3.5328 Accuracy 0.0406
Epoch 1 Batch 600 Loss 3.4753 Accuracy 0.0440
Epoch 1 Batch 650 Loss 3.4183 Accuracy 0.0475
Epoch 1 Batch 700 Loss 3.3657 Accuracy 0.0510
Epoch 1 Loss 3.3639 Accuracy 0.0511
Time taken for 1 epoch: 579.2321665287018 secs

Epoch 2 Batch 0 Loss 2.6904 Accuracy 0.0986
Epoch 2 Batch 50 Loss 2.5739 Accuracy 0.1012
Epoch 2 Batch 100 Loss 2.5706 Accuracy 0.1038
Epoch 2 Batch 150 Loss 2.5499 Accuracy 0.1058
Epoch 2 Batch 200 Loss 2.5206 Accuracy 0.1078
Epoch 2 Batch 250 Loss 2.5023 Accuracy 0.1100
Epoch 2 Batch 300 Loss 2.4843 Accuracy 0.1119
Epoch 2 Batch 350 Loss 2.4705 Accuracy 0.1140
Epoch 2 Batch 400 Loss 2.4535 Accuracy 0.1154
Epoch 2 Batch 450 Loss 2.4376 Accuracy 0.1172
Epoch 2 Batch 500 Loss 2.4204 Accuracy 0.1186
Epoch 2 Batch 550 Loss 2.4080 Accuracy 0.1202
Epoch 2 Batch 600 Loss 2.3963 Accuracy 0.1214
Epoch 2 Batch 650 Loss 2.3824 Accuracy 0.1228
Epoch 2 Batch 700 Loss 2.3701 Accuracy 0.1242
Epoch 2 Loss 2.3698 Accuracy 0.1242
Time taken for 1 epoch: 61.051878452301025 secs

Epoch 3 Batch 0 Loss 2.2230 Accuracy 0.1418
Epoch 3 Batch 50 Loss 2.1611 Accuracy 0.1399
Epoch 3 Batch 100 Loss 2.1744 Accuracy 0.1414
Epoch 3 Batch 150 Loss 2.1701 Accuracy 0.1424
Epoch 3 Batch 200 Loss 2.1580 Accuracy 0.1432
Epoch 3 Batch 250 Loss 2.1549 Accuracy 0.1440
Epoch 3 Batch 300 Loss 2.1508 Accuracy 0.1447
Epoch 3 Batch 350 Loss 2.1484 Accuracy 0.1459
Epoch 3 Batch 400 Loss 2.1415 Accuracy 0.1463
Epoch 3 Batch 450 Loss 2.1351 Accuracy 0.1471
Epoch 3 Batch 500 Loss 2.1268 Accuracy 0.1476
Epoch 3 Batch 550 Loss 2.1218 Accuracy 0.1484
Epoch 3 Batch 600 Loss 2.1182 Accuracy 0.1488
Epoch 3 Batch 650 Loss 2.1115 Accuracy 0.1494
Epoch 3 Batch 700 Loss 2.1055 Accuracy 0.1499
Epoch 3 Loss 2.1054 Accuracy 0.1499
Time taken for 1 epoch: 61.1329231262207 secs

Epoch 4 Batch 0 Loss 2.0414 Accuracy 0.1603
Epoch 4 Batch 50 Loss 1.9880 Accuracy 0.1556
Epoch 4 Batch 100 Loss 1.9987 Accuracy 0.1578
Epoch 4 Batch 150 Loss 1.9916 Accuracy 0.1595
Epoch 4 Batch 200 Loss 1.9781 Accuracy 0.1607
Epoch 4 Batch 250 Loss 1.9742 Accuracy 0.1618
Epoch 4 Batch 300 Loss 1.9696 Accuracy 0.1627
Epoch 4 Batch 350 Loss 1.9665 Accuracy 0.1641
Epoch 4 Batch 400 Loss 1.9584 Accuracy 0.1648
Epoch 4 Batch 450 Loss 1.9498 Accuracy 0.1662
Epoch 4 Batch 500 Loss 1.9397 Accuracy 0.1671
Epoch 4 Batch 550 Loss 1.9325 Accuracy 0.1683
Epoch 4 Batch 600 Loss 1.9264 Accuracy 0.1692
Epoch 4 Batch 650 Loss 1.9172 Accuracy 0.1703
Epoch 4 Batch 700 Loss 1.9087 Accuracy 0.1714
Epoch 4 Loss 1.9084 Accuracy 0.1714
Time taken for 1 epoch: 61.96270132064819 secs

Epoch 5 Batch 0 Loss 1.8124 Accuracy 0.1883
Epoch 5 Batch 50 Loss 1.7531 Accuracy 0.1851
Epoch 5 Batch 100 Loss 1.7650 Accuracy 0.1866
Epoch 5 Batch 150 Loss 1.7594 Accuracy 0.1881
Epoch 5 Batch 200 Loss 1.7465 Accuracy 0.1891
Epoch 5 Batch 250 Loss 1.7419 Accuracy 0.1900
Epoch 5 Batch 300 Loss 1.7360 Accuracy 0.1914
Epoch 5 Batch 350 Loss 1.7337 Accuracy 0.1927
Epoch 5 Batch 400 Loss 1.7259 Accuracy 0.1937
Epoch 5 Batch 450 Loss 1.7180 Accuracy 0.1950
Epoch 5 Batch 500 Loss 1.7084 Accuracy 0.1958
Epoch 5 Batch 550 Loss 1.7014 Accuracy 0.1970
Epoch 5 Batch 600 Loss 1.6965 Accuracy 0.1977
Epoch 5 Batch 650 Loss 1.6900 Accuracy 0.1984
Epoch 5 Batch 700 Loss 1.6821 Accuracy 0.1994
Saving checkpoint for epoch 5 at ./checkpoints/train/ckpt-1
Epoch 5 Loss 1.6819 Accuracy 0.1994
Time taken for 1 epoch: 62.948086738586426 secs

Epoch 6 Batch 0 Loss 1.6270 Accuracy 0.2087
Epoch 6 Batch 50 Loss 1.5474 Accuracy 0.2101
Epoch 6 Batch 100 Loss 1.5596 Accuracy 0.2109
Epoch 6 Batch 150 Loss 1.5552 Accuracy 0.2119
Epoch 6 Batch 200 Loss 1.5445 Accuracy 0.2128
Epoch 6 Batch 250 Loss 1.5401 Accuracy 0.2138
Epoch 6 Batch 300 Loss 1.5380 Accuracy 0.2143
Epoch 6 Batch 350 Loss 1.5362 Accuracy 0.2155
Epoch 6 Batch 400 Loss 1.5295 Accuracy 0.2160
Epoch 6 Batch 450 Loss 1.5223 Accuracy 0.2172
Epoch 6 Batch 500 Loss 1.5133 Accuracy 0.2180
Epoch 6 Batch 550 Loss 1.5071 Accuracy 0.2191
Epoch 6 Batch 600 Loss 1.5032 Accuracy 0.2196
Epoch 6 Batch 650 Loss 1.4976 Accuracy 0.2202
Epoch 6 Batch 700 Loss 1.4905 Accuracy 0.2210
Epoch 6 Loss 1.4903 Accuracy 0.2210
Time taken for 1 epoch: 66.7133617401123 secs

Epoch 7 Batch 0 Loss 1.4532 Accuracy 0.2280
Epoch 7 Batch 50 Loss 1.3662 Accuracy 0.2293
Epoch 7 Batch 100 Loss 1.3742 Accuracy 0.2310
Epoch 7 Batch 150 Loss 1.3699 Accuracy 0.2327
Epoch 7 Batch 200 Loss 1.3587 Accuracy 0.2335
Epoch 7 Batch 250 Loss 1.3550 Accuracy 0.2344
Epoch 7 Batch 300 Loss 1.3507 Accuracy 0.2355
Epoch 7 Batch 350 Loss 1.3488 Accuracy 0.2366
Epoch 7 Batch 400 Loss 1.3419 Accuracy 0.2373
Epoch 7 Batch 450 Loss 1.3338 Accuracy 0.2387
Epoch 7 Batch 500 Loss 1.3251 Accuracy 0.2395
Epoch 7 Batch 550 Loss 1.3186 Accuracy 0.2406
Epoch 7 Batch 600 Loss 1.3146 Accuracy 0.2412
Epoch 7 Batch 650 Loss 1.3092 Accuracy 0.2417
Epoch 7 Batch 700 Loss 1.3021 Accuracy 0.2427
Epoch 7 Loss 1.3019 Accuracy 0.2428
Time taken for 1 epoch: 61.15065956115723 secs

Epoch 8 Batch 0 Loss 1.2876 Accuracy 0.2476
Epoch 8 Batch 50 Loss 1.1861 Accuracy 0.2512
Epoch 8 Batch 100 Loss 1.1959 Accuracy 0.2527
Epoch 8 Batch 150 Loss 1.1941 Accuracy 0.2536
Epoch 8 Batch 200 Loss 1.1854 Accuracy 0.2542
Epoch 8 Batch 250 Loss 1.1834 Accuracy 0.2550
Epoch 8 Batch 300 Loss 1.1785 Accuracy 0.2563
Epoch 8 Batch 350 Loss 1.1776 Accuracy 0.2573
Epoch 8 Batch 400 Loss 1.1713 Accuracy 0.2580
Epoch 8 Batch 450 Loss 1.1650 Accuracy 0.2591
Epoch 8 Batch 500 Loss 1.1582 Accuracy 0.2596
Epoch 8 Batch 550 Loss 1.1533 Accuracy 0.2607
Epoch 8 Batch 600 Loss 1.1507 Accuracy 0.2612
Epoch 8 Batch 650 Loss 1.1463 Accuracy 0.2617
Epoch 8 Batch 700 Loss 1.1414 Accuracy 0.2624
Epoch 8 Loss 1.1411 Accuracy 0.2624
Time taken for 1 epoch: 61.611565828323364 secs

Epoch 9 Batch 0 Loss 1.1347 Accuracy 0.2708
Epoch 9 Batch 50 Loss 1.0491 Accuracy 0.2696
Epoch 9 Batch 100 Loss 1.0593 Accuracy 0.2705
Epoch 9 Batch 150 Loss 1.0603 Accuracy 0.2711
Epoch 9 Batch 200 Loss 1.0569 Accuracy 0.2711
Epoch 9 Batch 250 Loss 1.0561 Accuracy 0.2716
Epoch 9 Batch 300 Loss 1.0530 Accuracy 0.2724
Epoch 9 Batch 350 Loss 1.0527 Accuracy 0.2733
Epoch 9 Batch 400 Loss 1.0478 Accuracy 0.2738
Epoch 9 Batch 450 Loss 1.0433 Accuracy 0.2747
Epoch 9 Batch 500 Loss 1.0384 Accuracy 0.2750
Epoch 9 Batch 550 Loss 1.0344 Accuracy 0.2759
Epoch 9 Batch 600 Loss 1.0329 Accuracy 0.2762
Epoch 9 Batch 650 Loss 1.0301 Accuracy 0.2766
Epoch 9 Batch 700 Loss 1.0260 Accuracy 0.2772
Epoch 9 Loss 1.0257 Accuracy 0.2772
Time taken for 1 epoch: 61.86583065986633 secs

Epoch 10 Batch 0 Loss 1.0636 Accuracy 0.2772
Epoch 10 Batch 50 Loss 0.9516 Accuracy 0.2813
Epoch 10 Batch 100 Loss 0.9624 Accuracy 0.2827
Epoch 10 Batch 150 Loss 0.9662 Accuracy 0.2828
Epoch 10 Batch 200 Loss 0.9633 Accuracy 0.2827
Epoch 10 Batch 250 Loss 0.9637 Accuracy 0.2831
Epoch 10 Batch 300 Loss 0.9607 Accuracy 0.2841
Epoch 10 Batch 350 Loss 0.9610 Accuracy 0.2850
Epoch 10 Batch 400 Loss 0.9569 Accuracy 0.2855
Epoch 10 Batch 450 Loss 0.9537 Accuracy 0.2862
Epoch 10 Batch 500 Loss 0.9501 Accuracy 0.2863
Epoch 10 Batch 550 Loss 0.9472 Accuracy 0.2872
Epoch 10 Batch 600 Loss 0.9466 Accuracy 0.2874
Epoch 10 Batch 650 Loss 0.9443 Accuracy 0.2876
Epoch 10 Batch 700 Loss 0.9408 Accuracy 0.2881
Saving checkpoint for epoch 10 at ./checkpoints/train/ckpt-2
Epoch 10 Loss 0.9406 Accuracy 0.2881
Time taken for 1 epoch: 61.58380126953125 secs

Epoch 11 Batch 0 Loss 0.9859 Accuracy 0.2917
Epoch 11 Batch 50 Loss 0.8757 Accuracy 0.2909
Epoch 11 Batch 100 Loss 0.8915 Accuracy 0.2914
Epoch 11 Batch 150 Loss 0.8941 Accuracy 0.2920
Epoch 11 Batch 200 Loss 0.8926 Accuracy 0.2917
Epoch 11 Batch 250 Loss 0.8924 Accuracy 0.2922
Epoch 11 Batch 300 Loss 0.8890 Accuracy 0.2933
Epoch 11 Batch 350 Loss 0.8899 Accuracy 0.2941
Epoch 11 Batch 400 Loss 0.8869 Accuracy 0.2944
Epoch 11 Batch 450 Loss 0.8841 Accuracy 0.2950
Epoch 11 Batch 500 Loss 0.8806 Accuracy 0.2953
Epoch 11 Batch 550 Loss 0.8780 Accuracy 0.2960
Epoch 11 Batch 600 Loss 0.8777 Accuracy 0.2961
Epoch 11 Batch 650 Loss 0.8757 Accuracy 0.2963
Epoch 11 Batch 700 Loss 0.8725 Accuracy 0.2969
Epoch 11 Loss 0.8722 Accuracy 0.2969
Time taken for 1 epoch: 65.017573595047 secs

Epoch 12 Batch 0 Loss 0.9121 Accuracy 0.2961
Epoch 12 Batch 50 Loss 0.8199 Accuracy 0.2982
Epoch 12 Batch 100 Loss 0.8331 Accuracy 0.2994
Epoch 12 Batch 150 Loss 0.8338 Accuracy 0.3004
Epoch 12 Batch 200 Loss 0.8328 Accuracy 0.2997
Epoch 12 Batch 250 Loss 0.8324 Accuracy 0.3003
Epoch 12 Batch 300 Loss 0.8294 Accuracy 0.3015
Epoch 12 Batch 350 Loss 0.8304 Accuracy 0.3023
Epoch 12 Batch 400 Loss 0.8276 Accuracy 0.3024
Epoch 12 Batch 450 Loss 0.8256 Accuracy 0.3030
Epoch 12 Batch 500 Loss 0.8225 Accuracy 0.3031
Epoch 12 Batch 550 Loss 0.8207 Accuracy 0.3038
Epoch 12 Batch 600 Loss 0.8207 Accuracy 0.3040
Epoch 12 Batch 650 Loss 0.8192 Accuracy 0.3041
Epoch 12 Batch 700 Loss 0.8163 Accuracy 0.3047
Epoch 12 Loss 0.8160 Accuracy 0.3047
Time taken for 1 epoch: 60.86408472061157 secs

Epoch 13 Batch 0 Loss 0.8726 Accuracy 0.3029
Epoch 13 Batch 50 Loss 0.7778 Accuracy 0.3034
Epoch 13 Batch 100 Loss 0.7860 Accuracy 0.3052
Epoch 13 Batch 150 Loss 0.7863 Accuracy 0.3062
Epoch 13 Batch 200 Loss 0.7832 Accuracy 0.3065
Epoch 13 Batch 250 Loss 0.7840 Accuracy 0.3069
Epoch 13 Batch 300 Loss 0.7815 Accuracy 0.3079
Epoch 13 Batch 350 Loss 0.7826 Accuracy 0.3086
Epoch 13 Batch 400 Loss 0.7801 Accuracy 0.3088
Epoch 13 Batch 450 Loss 0.7778 Accuracy 0.3094
Epoch 13 Batch 500 Loss 0.7751 Accuracy 0.3095
Epoch 13 Batch 550 Loss 0.7734 Accuracy 0.3102
Epoch 13 Batch 600 Loss 0.7743 Accuracy 0.3101
Epoch 13 Batch 650 Loss 0.7735 Accuracy 0.3102
Epoch 13 Batch 700 Loss 0.7708 Accuracy 0.3107
Epoch 13 Loss 0.7705 Accuracy 0.3107
Time taken for 1 epoch: 61.48792362213135 secs

Epoch 14 Batch 0 Loss 0.7816 Accuracy 0.3149
Epoch 14 Batch 50 Loss 0.7341 Accuracy 0.3094
Epoch 14 Batch 100 Loss 0.7434 Accuracy 0.3116
Epoch 14 Batch 150 Loss 0.7438 Accuracy 0.3123
Epoch 14 Batch 200 Loss 0.7411 Accuracy 0.3127
Epoch 14 Batch 250 Loss 0.7412 Accuracy 0.3132
Epoch 14 Batch 300 Loss 0.7396 Accuracy 0.3140
Epoch 14 Batch 350 Loss 0.7404 Accuracy 0.3148
Epoch 14 Batch 400 Loss 0.7392 Accuracy 0.3147
Epoch 14 Batch 450 Loss 0.7366 Accuracy 0.3154
Epoch 14 Batch 500 Loss 0.7344 Accuracy 0.3154
Epoch 14 Batch 550 Loss 0.7334 Accuracy 0.3159
Epoch 14 Batch 600 Loss 0.7346 Accuracy 0.3159
Epoch 14 Batch 650 Loss 0.7332 Accuracy 0.3160
Epoch 14 Batch 700 Loss 0.7309 Accuracy 0.3164
Epoch 14 Loss 0.7306 Accuracy 0.3165
Time taken for 1 epoch: 61.5917706489563 secs

Epoch 15 Batch 0 Loss 0.7698 Accuracy 0.3105
Epoch 15 Batch 50 Loss 0.6924 Accuracy 0.3164
Epoch 15 Batch 100 Loss 0.7049 Accuracy 0.3177
Epoch 15 Batch 150 Loss 0.7064 Accuracy 0.3182
Epoch 15 Batch 200 Loss 0.7039 Accuracy 0.3183
Epoch 15 Batch 250 Loss 0.7040 Accuracy 0.3185
Epoch 15 Batch 300 Loss 0.7028 Accuracy 0.3193
Epoch 15 Batch 350 Loss 0.7044 Accuracy 0.3199
Epoch 15 Batch 400 Loss 0.7029 Accuracy 0.3199
Epoch 15 Batch 450 Loss 0.7006 Accuracy 0.3205
Epoch 15 Batch 500 Loss 0.6991 Accuracy 0.3203
Epoch 15 Batch 550 Loss 0.6984 Accuracy 0.3208
Epoch 15 Batch 600 Loss 0.6993 Accuracy 0.3208
Epoch 15 Batch 650 Loss 0.6981 Accuracy 0.3209
Epoch 15 Batch 700 Loss 0.6961 Accuracy 0.3213
Saving checkpoint for epoch 15 at ./checkpoints/train/ckpt-3
Epoch 15 Loss 0.6959 Accuracy 0.3214
Time taken for 1 epoch: 62.4166374206543 secs

Epoch 16 Batch 0 Loss 0.7333 Accuracy 0.3209
Epoch 16 Batch 50 Loss 0.6638 Accuracy 0.3197
Epoch 16 Batch 100 Loss 0.6744 Accuracy 0.3214
Epoch 16 Batch 150 Loss 0.6756 Accuracy 0.3219
Epoch 16 Batch 200 Loss 0.6742 Accuracy 0.3217
Epoch 16 Batch 250 Loss 0.6742 Accuracy 0.3222
Epoch 16 Batch 300 Loss 0.6727 Accuracy 0.3231
Epoch 16 Batch 350 Loss 0.6738 Accuracy 0.3239
Epoch 16 Batch 400 Loss 0.6724 Accuracy 0.3240
Epoch 16 Batch 450 Loss 0.6699 Accuracy 0.3248
Epoch 16 Batch 500 Loss 0.6678 Accuracy 0.3247
Epoch 16 Batch 550 Loss 0.6666 Accuracy 0.3254
Epoch 16 Batch 600 Loss 0.6676 Accuracy 0.3254
Epoch 16 Batch 650 Loss 0.6663 Accuracy 0.3256
Epoch 16 Batch 700 Loss 0.6646 Accuracy 0.3260
Epoch 16 Loss 0.6643 Accuracy 0.3260
Time taken for 1 epoch: 63.641923666000366 secs

Epoch 17 Batch 0 Loss 0.6949 Accuracy 0.3217
Epoch 17 Batch 50 Loss 0.6327 Accuracy 0.3247
Epoch 17 Batch 100 Loss 0.6468 Accuracy 0.3259
Epoch 17 Batch 150 Loss 0.6461 Accuracy 0.3270
Epoch 17 Batch 200 Loss 0.6449 Accuracy 0.3266
Epoch 17 Batch 250 Loss 0.6444 Accuracy 0.3270
Epoch 17 Batch 300 Loss 0.6429 Accuracy 0.3280
Epoch 17 Batch 350 Loss 0.6445 Accuracy 0.3287
Epoch 17 Batch 400 Loss 0.6425 Accuracy 0.3291
Epoch 17 Batch 450 Loss 0.6403 Accuracy 0.3298
Epoch 17 Batch 500 Loss 0.6380 Accuracy 0.3297
Epoch 17 Batch 550 Loss 0.6373 Accuracy 0.3302
Epoch 17 Batch 600 Loss 0.6381 Accuracy 0.3302
Epoch 17 Batch 650 Loss 0.6371 Accuracy 0.3303
Epoch 17 Batch 700 Loss 0.6353 Accuracy 0.3307
Epoch 17 Loss 0.6351 Accuracy 0.3307
Time taken for 1 epoch: 61.69080376625061 secs

Epoch 18 Batch 0 Loss 0.6706 Accuracy 0.3317
Epoch 18 Batch 50 Loss 0.6090 Accuracy 0.3282
Epoch 18 Batch 100 Loss 0.6222 Accuracy 0.3294
Epoch 18 Batch 150 Loss 0.6206 Accuracy 0.3304
Epoch 18 Batch 200 Loss 0.6187 Accuracy 0.3305
Epoch 18 Batch 250 Loss 0.6189 Accuracy 0.3309
Epoch 18 Batch 300 Loss 0.6171 Accuracy 0.3318
Epoch 18 Batch 350 Loss 0.6182 Accuracy 0.3327
Epoch 18 Batch 400 Loss 0.6161 Accuracy 0.3329
Epoch 18 Batch 450 Loss 0.6142 Accuracy 0.3336
Epoch 18 Batch 500 Loss 0.6122 Accuracy 0.3336
Epoch 18 Batch 550 Loss 0.6109 Accuracy 0.3343
Epoch 18 Batch 600 Loss 0.6121 Accuracy 0.3342
Epoch 18 Batch 650 Loss 0.6113 Accuracy 0.3342
Epoch 18 Batch 700 Loss 0.6098 Accuracy 0.3346
Epoch 18 Loss 0.6097 Accuracy 0.3346
Time taken for 1 epoch: 60.85848331451416 secs

Epoch 19 Batch 0 Loss 0.6510 Accuracy 0.3321
Epoch 19 Batch 50 Loss 0.5843 Accuracy 0.3323
Epoch 19 Batch 100 Loss 0.5969 Accuracy 0.3333
Epoch 19 Batch 150 Loss 0.5979 Accuracy 0.3338
Epoch 19 Batch 200 Loss 0.5959 Accuracy 0.3339
Epoch 19 Batch 250 Loss 0.5972 Accuracy 0.3343
Epoch 19 Batch 300 Loss 0.5956 Accuracy 0.3352
Epoch 19 Batch 350 Loss 0.5965 Accuracy 0.3362
Epoch 19 Batch 400 Loss 0.5946 Accuracy 0.3363
Epoch 19 Batch 450 Loss 0.5925 Accuracy 0.3370
Epoch 19 Batch 500 Loss 0.5909 Accuracy 0.3368
Epoch 19 Batch 550 Loss 0.5896 Accuracy 0.3375
Epoch 19 Batch 600 Loss 0.5903 Accuracy 0.3374
Epoch 19 Batch 650 Loss 0.5892 Accuracy 0.3375
Epoch 19 Batch 700 Loss 0.5878 Accuracy 0.3379
Epoch 19 Loss 0.5875 Accuracy 0.3379
Time taken for 1 epoch: 61.443421840667725 secs

Epoch 20 Batch 0 Loss 0.6148 Accuracy 0.3349
Epoch 20 Batch 50 Loss 0.5628 Accuracy 0.3359
Epoch 20 Batch 100 Loss 0.5729 Accuracy 0.3382
Epoch 20 Batch 150 Loss 0.5750 Accuracy 0.3384
Epoch 20 Batch 200 Loss 0.5737 Accuracy 0.3382
Epoch 20 Batch 250 Loss 0.5733 Accuracy 0.3386
Epoch 20 Batch 300 Loss 0.5722 Accuracy 0.3393
Epoch 20 Batch 350 Loss 0.5729 Accuracy 0.3402
Epoch 20 Batch 400 Loss 0.5715 Accuracy 0.3402
Epoch 20 Batch 450 Loss 0.5696 Accuracy 0.3409
Epoch 20 Batch 500 Loss 0.5685 Accuracy 0.3407
Epoch 20 Batch 550 Loss 0.5672 Accuracy 0.3413
Epoch 20 Batch 600 Loss 0.5679 Accuracy 0.3413
Epoch 20 Batch 650 Loss 0.5675 Accuracy 0.3412
Epoch 20 Batch 700 Loss 0.5663 Accuracy 0.3415
Saving checkpoint for epoch 20 at ./checkpoints/train/ckpt-4
Epoch 20 Loss 0.5661 Accuracy 0.3415
Time taken for 1 epoch: 61.53413367271423 secs

Evaluate

The following steps are used for evaluation:

  • Encode the input sentence using the Portuguese tokenizer (tokenizer_pt). Also add the start and end token so the input is equivalent to what the model was trained with. This is the encoder input.
  • The decoder input is the start token == tokenizer_en.vocab_size.
  • Calculate the padding masks and the look ahead masks.
  • The decoder then outputs the predictions by looking at the encoder output and its own output (self-attention).
  • Select the prediction at the last position and calculate the argmax of that.
  • Concatenate the predicted word to the decoder input and pass it to the decoder.
  • In this approach, the decoder predicts the next word based on the previous words it predicted.
def evaluate(inp_sentence):
  start_token = [tokenizer_pt.vocab_size]
  end_token = [tokenizer_pt.vocab_size + 1]
  
  # inp sentence is portuguese, hence adding the start and end token
  inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
  encoder_input = tf.expand_dims(inp_sentence, 0)
  
  # as the target is english, the first word to the transformer should be the
  # english start token.
  decoder_input = [tokenizer_en.vocab_size]
  output = tf.expand_dims(decoder_input, 0)
    
  for i in range(MAX_LENGTH):
    enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
        encoder_input, output)
  
    # predictions.shape == (batch_size, seq_len, vocab_size)
    predictions, attention_weights = transformer(encoder_input, 
                                                 output,
                                                 False,
                                                 enc_padding_mask,
                                                 combined_mask,
                                                 dec_padding_mask)
    
    # select the last word from the seq_len dimension
    predictions = predictions[: ,-1:, :]  # (batch_size, 1, vocab_size)

    predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
    
    # return the result if the predicted_id is equal to the end token
    if tf.equal(predicted_id, tokenizer_en.vocab_size+1):
      return tf.squeeze(output, axis=0), attention_weights
    
    # concatenate the predicted_id to the output which is given to the decoder
    # as its input.
    output = tf.concat([output, predicted_id], axis=-1)

  return tf.squeeze(output, axis=0), attention_weights
def plot_attention_weights(attention, sentence, result, layer):
  fig = plt.figure(figsize=(16, 8))
  
  sentence = tokenizer_pt.encode(sentence)
  
  attention = tf.squeeze(attention[layer], axis=0)
  
  for head in range(attention.shape[0]):
    ax = fig.add_subplot(2, 4, head+1)
    
    # plot the attention weights
    ax.matshow(attention[head][:-1, :], cmap='viridis')

    fontdict = {'fontsize': 10}
    
    ax.set_xticks(range(len(sentence)+2))
    ax.set_yticks(range(len(result)))
    
    ax.set_ylim(len(result)-1.5, -0.5)
        
    ax.set_xticklabels(
        ['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'], 
        fontdict=fontdict, rotation=90)
    
    ax.set_yticklabels([tokenizer_en.decode([i]) for i in result 
                        if i < tokenizer_en.vocab_size], 
                       fontdict=fontdict)
    
    ax.set_xlabel('Head {}'.format(head+1))
  
  plt.tight_layout()
  plt.show()
def translate(sentence, plot=''):
  result, attention_weights = evaluate(sentence)
  
  predicted_sentence = tokenizer_en.decode([i for i in result 
                                            if i < tokenizer_en.vocab_size])  

  print('Input: {}'.format(sentence))
  print('Predicted translation: {}'.format(predicted_sentence))
  
  if plot:
    plot_attention_weights(attention_weights, sentence, result, plot)
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
Input: este é um problema que temos que resolver.
Predicted translation: this is a problem we have to solve the united states .
Real translation: this is a problem we have to solve .
translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
Input: os meus vizinhos ouviram sobre esta ideia.
Predicted translation: my neighbors heard about this idea .
Real translation: and my neighboring homes heard about this idea .
translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
Input: vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.
Predicted translation: so i 'll just go back to you a few of little magic stories that happened .
Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .

You can pass different layers and attention blocks of the decoder to the plot parameter.

translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
Input: este é o primeiro livro que eu fiz.
Predicted translation: this is the first book i did .

Attention weight plots for each head in decoder_layer4_block2

Real translation: this is the first book i've ever done.

Summary

In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking and how to create a transformer.

Try using a different dataset to train the transformer. You can also create the base transformer or Transformer XL by changing the hyperparameters above. You can also use the layers defined here to create BERT and train state-of-the-art models. Furthermore, you can implement beam search to get better predictions.
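
For example, the same TED Talks dataset provides other language pairs. A minimal sketch of swapping one in (the 'ru_to_en' config name is an assumption; check the tensorflow_datasets catalog for the configurations available in your version):

# Sketch only: train on Russian-to-English instead of Portuguese-to-English.
# The config name below is an assumption; see the TFDS catalog for valid names.
examples, metadata = tfds.load('ted_hrlr_translate/ru_to_en', with_info=True,
                               as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']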