
Transformer model for language understanding


This tutorial trains a Transformer model to translate Portuguese to English. This is an advanced example that assumes knowledge of text generation and attention.

The core idea behind the Transformer model is self-attention—the ability to attend to different positions of the input sequence to compute a representation of that sequence. Transformer creates stacks of self-attention layers and is explained below in the sections Scaled dot product attention and Multi-head attention.

A transformer model handles variable-sized input using stacks of self-attention layers instead of RNNs or CNNs. This general architecture has a number of advantages:

  • It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, StarCraft units).
  • Layer outputs can be calculated in parallel, instead of in series like an RNN.
  • Distant items can affect each other's output without passing through many RNN-steps, or convolution layers (see Scene Memory Transformer for example).
  • It can learn long-range dependencies. This is a challenge in many sequence tasks.

The downsides of this architecture are:

  • For a time-series, the output for a time-step is calculated from the entire history instead of only the inputs and current hidden-state. This may be less efficient.
  • If the input does have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words.

After training the model in this notebook, you will be able to input a Portuguese sentence and get back its English translation.

Attention heatmap

from __future__ import absolute_import, division, print_function, unicode_literals

try:
  %tensorflow_version 2.x
except Exception:
  pass
import tensorflow_datasets as tfds
import tensorflow as tf

import time
import numpy as np
import matplotlib.pyplot as plt

Setup input pipeline

Use TFDS to load the Portuguese-English translation dataset from the TED Talks Open Translation Project.

This dataset contains approximately 50,000 training examples, 1,100 validation examples, and 2,000 test examples.

examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
                               as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
Downloading and preparing dataset ted_hrlr_translate (124.94 MiB) to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/0.0.1...

Dataset ted_hrlr_translate downloaded and prepared to /home/kbuilder/tensorflow_datasets/ted_hrlr_translate/pt_to_en/0.0.1. Subsequent calls will reuse this data.

Create a custom subwords tokenizer from the training dataset.

tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (en.numpy() for pt, en in train_examples), target_vocab_size=2**13)

tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
    (pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
sample_string = 'Transformer is awesome.'

tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))

original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))

assert original_string == sample_string
Tokenized string is [7915, 1248, 7946, 7194, 13, 2799, 7877]
The original string: Transformer is awesome.

The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.

for ts in tokenized_string:
  print ('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))
7915 ----> T
1248 ----> ran
7946 ----> s
7194 ----> former 
13 ----> is 
2799 ----> awesome
7877 ----> .
BUFFER_SIZE = 20000
BATCH_SIZE = 64

Add a start and end token to the input and target.

def encode(lang1, lang2):
  lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
      lang1.numpy()) + [tokenizer_pt.vocab_size+1]

  lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
      lang2.numpy()) + [tokenizer_en.vocab_size+1]
  
  return lang1, lang2
MAX_LENGTH = 40
def filter_max_length(x, y, max_length=MAX_LENGTH):
  return tf.logical_and(tf.size(x) <= max_length,
                        tf.size(y) <= max_length)

Operations inside .map() run in graph mode and receive a graph tensor that does not have a numpy attribute. The tokenizer expects a string or Unicode symbol to encode into integers. Hence, you need to run the encoding inside a tf.py_function, which receives an eager tensor that has a numpy attribute containing the string value.

def tf_encode(pt, en):
  return tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
# cache the dataset to memory to get a speedup while reading from it.
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(
    BATCH_SIZE, padded_shapes=([-1], [-1]))
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)


val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(
    BATCH_SIZE, padded_shapes=([-1], [-1]))
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
(<tf.Tensor: id=207697, shape=(64, 40), dtype=int64, numpy=
 array([[8214, 1259,    5, ...,    0,    0,    0],
        [8214,  299,   13, ...,    0,    0,    0],
        [8214,   59,    8, ...,    0,    0,    0],
        ...,
        [8214,   95,    3, ...,    0,    0,    0],
        [8214, 5157,    1, ...,    0,    0,    0],
        [8214, 4479, 7990, ...,    0,    0,    0]])>,
 <tf.Tensor: id=207698, shape=(64, 40), dtype=int64, numpy=
 array([[8087,   18,   12, ...,    0,    0,    0],
        [8087,  634,   30, ...,    0,    0,    0],
        [8087,   16,   13, ...,    0,    0,    0],
        ...,
        [8087,   12,   20, ...,    0,    0,    0],
        [8087,   17, 4981, ...,    0,    0,    0],
        [8087,   12, 5453, ...,    0,    0,    0]])>)

Positional encoding

Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence.

The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the similarity of their meaning and their position in the sentence, in the d-dimensional space.

See the notebook on positional encoding to learn more about it. The formula for calculating the positional encoding is as follows:

$$\Large{PE_{(pos, 2i)} = \sin(pos / 10000^{2i / d_{model}})} $$
$$\Large{PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i / d_{model}})} $$
def get_angles(pos, i, d_model):
  angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
  return pos * angle_rates
def positional_encoding(position, d_model):
  angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                          np.arange(d_model)[np.newaxis, :],
                          d_model)
  
  # apply sin to even indices in the array; 2i
  angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
  
  # apply cos to odd indices in the array; 2i+1
  angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
    
  pos_encoding = angle_rads[np.newaxis, ...]
    
  return tf.cast(pos_encoding, dtype=tf.float32)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)

plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
(1, 50, 512)

Positional encoding plot (depth on the x-axis, position on the y-axis)

Masking

Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value 0 is present: it outputs a 1 at those locations, and a 0 otherwise.

def create_padding_mask(seq):
  seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
  
  # add extra dimensions to add the padding
  # to the attention logits.
  return seq[:, tf.newaxis, tf.newaxis, :]  # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
<tf.Tensor: id=207712, shape=(3, 1, 1, 5), dtype=float32, numpy=
array([[[[0., 0., 1., 1., 0.]]],


       [[[0., 0., 0., 1., 1.]]],


       [[[1., 1., 1., 0., 0.]]]], dtype=float32)>

The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.

This means that to predict the third word, only the first and second words will be used. Similarly, to predict the fourth word, only the first, second, and third words will be used, and so on.

def create_look_ahead_mask(size):
  mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
  return mask  # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
<tf.Tensor: id=207727, shape=(3, 3), dtype=float32, numpy=
array([[0., 1., 1.],
       [0., 0., 1.],
       [0., 0., 0.]], dtype=float32)>

Scaled dot product attention

scaled_dot_product_attention

The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:

$$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k}}) V} $$

The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into regions where it has very small gradients and produces a very hard (near one-hot) distribution.

For example, consider that the entries of Q and K have a mean of 0 and a variance of 1. Their matrix multiplication then has a mean of 0 and a variance of d_k. Hence, the square root of d_k is used for scaling (and not any other number) so that the matmul of Q and K keeps a mean of 0 and a variance of 1, and you get a gentler softmax.
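As a quick numerical check (a minimal sketch, not part of the original notebook), you can sample random queries and keys with unit variance and watch the dot-product variance grow with the depth:

# Illustrative only: with unit-variance entries, each q·k dot product is a
# sum of d_k terms, so its variance is roughly d_k. Dividing by sqrt(d_k)
# brings the variance back to roughly 1.
d_k = 512
q = tf.random.normal((1000, d_k))
k = tf.random.normal((1000, d_k))
dots = tf.reduce_sum(q * k, axis=-1)
print (tf.math.reduce_variance(dots))                             # ~ 512
print (tf.math.reduce_variance(dots / tf.math.sqrt(float(d_k))))  # ~ 1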

The mask is multiplied by -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K and is applied immediately before a softmax. The goal is to zero out the masked cells, and large negative inputs to the softmax come out near zero.
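To see the masking effect directly (again an illustrative sketch, not from the original notebook), apply the softmax to some masked logits:

# Positions that receive the -1e9 mask get effectively zero weight
# after the softmax.
logits = tf.constant([[2.0, 1.0, 0.5, 0.1]])
mask = tf.constant([[0.0, 0.0, 1.0, 1.0]])  # mask the last two positions
print (tf.nn.softmax(logits + (mask * -1e9)))
# -> approximately [[0.73, 0.27, 0., 0.]]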

def scaled_dot_product_attention(q, k, v, mask):
  """Calculate the attention weights.
  q, k, v must have matching leading dimensions.
  k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
  The mask has different shapes depending on its type(padding or look ahead) 
  but it must be broadcastable for addition.
  
  Args:
    q: query shape == (..., seq_len_q, depth)
    k: key shape == (..., seq_len_k, depth)
    v: value shape == (..., seq_len_v, depth_v)
    mask: Float tensor with shape broadcastable
          to (..., seq_len_q, seq_len_k), or None.
    
  Returns:
    output, attention_weights
  """

  matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)
  
  # scale matmul_qk
  dk = tf.cast(tf.shape(k)[-1], tf.float32)
  scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

  # add the mask to the scaled tensor.
  if mask is not None:
    scaled_attention_logits += (mask * -1e9)  

  # softmax is normalized on the last axis (seq_len_k) so that the scores
  # add up to 1.
  attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)

  output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)

  return output, attention_weights

As the softmax normalization is done along the key axis, its values decide the amount of importance given to each position of K for a given Q.

The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words you want to focus on are kept as-is and the irrelevant words are flushed out.

def print_out(q, k, v):
  temp_out, temp_attn = scaled_dot_product_attention(
      q, k, v, None)
  print ('Attention weights are:')
  print (temp_attn)
  print ('Output is:')
  print (temp_out)
np.set_printoptions(suppress=True)

temp_k = tf.constant([[10,0,0],
                      [0,10,0],
                      [0,0,10],
                      [0,0,10]], dtype=tf.float32)  # (4, 3)

temp_v = tf.constant([[   1,0],
                      [  10,0],
                      [ 100,5],
                      [1000,6]], dtype=tf.float32)  # (4, 2)

# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0. 1. 0. 0.]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[10.  0.]], shape=(1, 2), dtype=float32)
# This query aligns with a repeated key (third and fourth), 
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.  0.  0.5 0.5]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[550.    5.5]], shape=(1, 2), dtype=float32)
# This query aligns equally with the first and second key, 
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor([[0.5 0.5 0.  0. ]], shape=(1, 4), dtype=float32)
Output is:
tf.Tensor([[5.5 0. ]], shape=(1, 2), dtype=float32)

Pass all the queries together.

temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32)  # (3, 3)
print_out(temp_q, temp_k, temp_v)
Attention weights are:
tf.Tensor(
[[0.  0.  0.5 0.5]
 [0.  1.  0.  0. ]
 [0.5 0.5 0.  0. ]], shape=(3, 4), dtype=float32)
Output is:
tf.Tensor(
[[550.    5.5]
 [ 10.    0. ]
 [  5.5   0. ]], shape=(3, 2), dtype=float32)

Multi-head attention

multi-head attention

Multi-head attention consists of four parts:

  • Linear layers and split into heads.
  • Scaled dot-product attention.
  • Concatenation of heads.
  • Final linear layer.

Each multi-head attention block gets three inputs: Q (query), K (key), V (value). These are put through linear (Dense) layers and split up into multiple heads.

The scaled_dot_product_attention defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using tf.transpose and tf.reshape) and put through a final Dense layer.

Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information at different positions from different representational spaces. After the split each head has a reduced dimensionality (for example, with d_model = 512 and 8 heads, each head works with a depth of 512 / 8 = 64), so the total computation cost is the same as single-head attention with full dimensionality.

class MultiHeadAttention(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads):
    super(MultiHeadAttention, self).__init__()
    self.num_heads = num_heads
    self.d_model = d_model
    
    assert d_model % self.num_heads == 0
    
    self.depth = d_model // self.num_heads
    
    self.wq = tf.keras.layers.Dense(d_model)
    self.wk = tf.keras.layers.Dense(d_model)
    self.wv = tf.keras.layers.Dense(d_model)
    
    self.dense = tf.keras.layers.Dense(d_model)
        
  def split_heads(self, x, batch_size):
    """Split the last dimension into (num_heads, depth).
    Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
    """
    x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
    return tf.transpose(x, perm=[0, 2, 1, 3])
    
  def call(self, v, k, q, mask):
    batch_size = tf.shape(q)[0]
    
    q = self.wq(q)  # (batch_size, seq_len, d_model)
    k = self.wk(k)  # (batch_size, seq_len, d_model)
    v = self.wv(v)  # (batch_size, seq_len, d_model)
    
    q = self.split_heads(q, batch_size)  # (batch_size, num_heads, seq_len_q, depth)
    k = self.split_heads(k, batch_size)  # (batch_size, num_heads, seq_len_k, depth)
    v = self.split_heads(v, batch_size)  # (batch_size, num_heads, seq_len_v, depth)
    
    # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
    # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
    scaled_attention, attention_weights = scaled_dot_product_attention(
        q, k, v, mask)
    
    scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])  # (batch_size, seq_len_q, num_heads, depth)

    concat_attention = tf.reshape(scaled_attention, 
                                  (batch_size, -1, self.d_model))  # (batch_size, seq_len_q, d_model)

    output = self.dense(concat_attention)  # (batch_size, seq_len_q, d_model)
        
    return output, attention_weights

Create a MultiHeadAttention layer to try out. At each location in the sequence, y, the MultiHeadAttention runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.

temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512))  # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
(TensorShape([1, 60, 512]), TensorShape([1, 8, 60, 60]))

Point wise feed forward network

The point wise feed forward network consists of two fully-connected layers with a ReLU activation in between.

def point_wise_feed_forward_network(d_model, dff):
  return tf.keras.Sequential([
      tf.keras.layers.Dense(dff, activation='relu'),  # (batch_size, seq_len, dff)
      tf.keras.layers.Dense(d_model)  # (batch_size, seq_len, d_model)
  ])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
TensorShape([64, 50, 512])

Encoder and decoder

transformer

The transformer model follows the same general pattern as a standard sequence-to-sequence model with attention.

  • The input sentence is passed through N encoder layers that generate an output for each word/token in the sequence.
  • The decoder attends on the encoder's output and its own input (self-attention) to predict the next word.

Encoder layer

Each encoder layer consists of sublayers:

  1. Multi-head attention (with padding mask)
  2. Point wise feed forward networks.

Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.

The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis. There are N encoder layers in the transformer.

class EncoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(EncoderLayer, self).__init__()

    self.mha = MultiHeadAttention(d_model, num_heads)
    self.ffn = point_wise_feed_forward_network(d_model, dff)

    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    
  def call(self, x, training, mask):

    attn_output, _ = self.mha(x, x, x, mask)  # (batch_size, input_seq_len, d_model)
    attn_output = self.dropout1(attn_output, training=training)
    out1 = self.layernorm1(x + attn_output)  # (batch_size, input_seq_len, d_model)
    
    ffn_output = self.ffn(out1)  # (batch_size, input_seq_len, d_model)
    ffn_output = self.dropout2(ffn_output, training=training)
    out2 = self.layernorm2(out1 + ffn_output)  # (batch_size, input_seq_len, d_model)
    
    return out2
sample_encoder_layer = EncoderLayer(512, 8, 2048)

sample_encoder_layer_output = sample_encoder_layer(
    tf.random.uniform((64, 43, 512)), False, None)

sample_encoder_layer_output.shape  # (batch_size, input_seq_len, d_model)
TensorShape([64, 43, 512])

Decoder layer

Each decoder layer consists of sublayers:

  1. Masked multi-head attention (with look ahead mask and padding mask)
  2. Multi-head attention (with padding mask). V (value) and K (key) receive the encoder output as inputs. Q (query) receives the output from the masked multi-head attention sublayer.
  3. Point wise feed forward networks

Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is LayerNorm(x + Sublayer(x)). The normalization is done on the d_model (last) axis.

There are N decoder layers in the transformer.

As Q receives the output from the decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.

class DecoderLayer(tf.keras.layers.Layer):
  def __init__(self, d_model, num_heads, dff, rate=0.1):
    super(DecoderLayer, self).__init__()

    self.mha1 = MultiHeadAttention(d_model, num_heads)
    self.mha2 = MultiHeadAttention(d_model, num_heads)

    self.ffn = point_wise_feed_forward_network(d_model, dff)
 
    self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
    
    self.dropout1 = tf.keras.layers.Dropout(rate)
    self.dropout2 = tf.keras.layers.Dropout(rate)
    self.dropout3 = tf.keras.layers.Dropout(rate)
    
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):
    # enc_output.shape == (batch_size, input_seq_len, d_model)

    attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)  # (batch_size, target_seq_len, d_model)
    attn1 = self.dropout1(attn1, training=training)
    out1 = self.layernorm1(attn1 + x)
    
    attn2, attn_weights_block2 = self.mha2(
        enc_output, enc_output, out1, padding_mask)  # (batch_size, target_seq_len, d_model)
    attn2 = self.dropout2(attn2, training=training)
    out2 = self.layernorm2(attn2 + out1)  # (batch_size, target_seq_len, d_model)
    
    ffn_output = self.ffn(out2)  # (batch_size, target_seq_len, d_model)
    ffn_output = self.dropout3(ffn_output, training=training)
    out3 = self.layernorm3(ffn_output + out2)  # (batch_size, target_seq_len, d_model)
    
    return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(512, 8, 2048)

sample_decoder_layer_output, _, _ = sample_decoder_layer(
    tf.random.uniform((64, 50, 512)), sample_encoder_layer_output, 
    False, None, None)

sample_decoder_layer_output.shape  # (batch_size, target_seq_len, d_model)
TensorShape([64, 50, 512])

Encoder

The Encoder consists of:

  1. Input Embedding
  2. Positional Encoding
  3. N encoder layers

The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.

class Encoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, 
               rate=0.1):
    super(Encoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
    self.pos_encoding = positional_encoding(input_vocab_size, self.d_model)
    
    
    self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
  
    self.dropout = tf.keras.layers.Dropout(rate)
        
  def call(self, x, training, mask):

    seq_len = tf.shape(x)[1]
    
    # adding embedding and position encoding.
    x = self.embedding(x)  # (batch_size, input_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]

    x = self.dropout(x, training=training)
    
    for i in range(self.num_layers):
      x = self.enc_layers[i](x, training, mask)
    
    return x  # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, input_vocab_size=8500)

sample_encoder_output = sample_encoder(tf.random.uniform((64, 62)), 
                                       training=False, mask=None)

print (sample_encoder_output.shape)  # (batch_size, input_seq_len, d_model)
(64, 62, 512)

Decoder

The Decoder consists of:

  1. Output Embedding
  2. Positional Encoding
  3. N decoder layers

The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.

class Decoder(tf.keras.layers.Layer):
  def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size, 
               rate=0.1):
    super(Decoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers
    
    self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
    self.pos_encoding = positional_encoding(target_vocab_size, d_model)
    
    self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate) 
                       for _ in range(num_layers)]
    self.dropout = tf.keras.layers.Dropout(rate)
    
  def call(self, x, enc_output, training, 
           look_ahead_mask, padding_mask):

    seq_len = tf.shape(x)[1]
    attention_weights = {}
    
    x = self.embedding(x)  # (batch_size, target_seq_len, d_model)
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x += self.pos_encoding[:, :seq_len, :]
    
    x = self.dropout(x, training=training)

    for i in range(self.num_layers):
      x, block1, block2 = self.dec_layers[i](x, enc_output, training,
                                             look_ahead_mask, padding_mask)
      
      attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
      attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
    
    # x.shape == (batch_size, target_seq_len, d_model)
    return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8, 
                         dff=2048, target_vocab_size=8000)

output, attn = sample_decoder(tf.random.uniform((64, 26)), 
                              enc_output=sample_encoder_output, 
                              training=False, look_ahead_mask=None, 
                              padding_mask=None)

output.shape, attn['decoder_layer2_block2'].shape
(TensorShape([64, 26, 512]), TensorShape([64, 8, 26, 62]))

Create the Transformer

The Transformer consists of the encoder, the decoder, and a final linear layer. The output of the decoder is the input to the linear layer, and its output is returned.

class Transformer(tf.keras.Model):
  def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, 
               target_vocab_size, rate=0.1):
    super(Transformer, self).__init__()

    self.encoder = Encoder(num_layers, d_model, num_heads, dff, 
                           input_vocab_size, rate)

    self.decoder = Decoder(num_layers, d_model, num_heads, dff, 
                           target_vocab_size, rate)

    self.final_layer = tf.keras.layers.Dense(target_vocab_size)
    
  def call(self, inp, tar, training, enc_padding_mask, 
           look_ahead_mask, dec_padding_mask):

    enc_output = self.encoder(inp, training, enc_padding_mask)  # (batch_size, inp_seq_len, d_model)
    
    # dec_output.shape == (batch_size, tar_seq_len, d_model)
    dec_output, attention_weights = self.decoder(
        tar, enc_output, training, look_ahead_mask, dec_padding_mask)
    
    final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)
    
    return final_output, attention_weights
sample_transformer = Transformer(
    num_layers=2, d_model=512, num_heads=8, dff=2048, 
    input_vocab_size=8500, target_vocab_size=8000)

temp_input = tf.random.uniform((64, 62))
temp_target = tf.random.uniform((64, 26))

fn_out, _ = sample_transformer(temp_input, temp_target, training=False, 
                               enc_padding_mask=None, 
                               look_ahead_mask=None,
                               dec_padding_mask=None)

fn_out.shape  # (batch_size, tar_seq_len, target_vocab_size)
TensorShape([64, 26, 8000])

Set hyperparameters

To keep this example small and relatively fast, the values for num_layers, d_model, and dff have been reduced.

The values used in the base model of the transformer were: num_layers=6, d_model=512, dff=2048. See the paper for all the other versions of the transformer.

num_layers = 4
d_model = 128
dff = 512
num_heads = 8

input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1

Optimizer

Use the Adam optimizer with a custom learning rate scheduler according to the formula in the paper.

$$\Large{lrate = d_{model}^{-0.5} \cdot \min(step\_num^{-0.5},\ step\_num \cdot warmup\_steps^{-1.5})}$$
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
  def __init__(self, d_model, warmup_steps=4000):
    super(CustomSchedule, self).__init__()
    
    self.d_model = d_model
    self.d_model = tf.cast(self.d_model, tf.float32)

    self.warmup_steps = warmup_steps
    
  def __call__(self, step):
    arg1 = tf.math.rsqrt(step)
    arg2 = step * (self.warmup_steps ** -1.5)
    
    return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)

optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, 
                                     epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)

plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
Text(0.5, 0, 'Train Step')

Learning rate schedule plot (learning rate vs. train step)

Loss and metrics

Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction='none')
def loss_function(real, pred):
  mask = tf.math.logical_not(tf.math.equal(real, 0))
  loss_ = loss_object(real, pred)

  mask = tf.cast(mask, dtype=loss_.dtype)
  loss_ *= mask
  
  return tf.reduce_mean(loss_)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    name='train_accuracy')

Training and checkpointing

transformer = Transformer(num_layers, d_model, num_heads, dff,
                          input_vocab_size, target_vocab_size, dropout_rate)
def create_masks(inp, tar):
  # Encoder padding mask
  enc_padding_mask = create_padding_mask(inp)
  
  # Used in the 2nd attention block in the decoder.
  # This padding mask is used to mask the encoder outputs.
  dec_padding_mask = create_padding_mask(inp)
  
  # Used in the 1st attention block in the decoder.
  # It is used to mask the future tokens in the input received by
  # the decoder, as well as the padding tokens.
  look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
  dec_target_padding_mask = create_padding_mask(tar)
  combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
  
  return enc_padding_mask, combined_mask, dec_padding_mask

Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every n epochs.

checkpoint_path = "./checkpoints/train"

ckpt = tf.train.Checkpoint(transformer=transformer,
                           optimizer=optimizer)

ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)

# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
  ckpt.restore(ckpt_manager.latest_checkpoint)
  print ('Latest checkpoint restored!!')

The target is divided into tar_inp and tar_real. tar_inp is passed as an input to the decoder. tar_real is that same input shifted by 1: at each location in tar_inp, tar_real contains the next token that should be predicted.

For example, sentence = "SOS A lion in the jungle is sleeping EOS"

tar_inp = "SOS A lion in the jungle is sleeping"

tar_real = "A lion in the jungle is sleeping EOS"
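A minimal sketch of this shift (the token ids here are made up purely for illustration; 8087 and 8088 stand in for the English start and end tokens):

tar = tf.constant([[8087, 12, 634, 30, 8088]])

tar_inp = tar[:, :-1]   # [[8087, 12, 634, 30]] -> fed to the decoder
tar_real = tar[:, 1:]   # [[12, 634, 30, 8088]] -> labels, shifted by one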

The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.

During training this example uses teacher-forcing (like in the text generation tutorial). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.

As the transformer predicts each word, self-attention allows it to look at the previous words in the input sequence to better predict the next word.

To prevent the model from peeking at the expected output, the model uses a look-ahead mask.

EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.

train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
    tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]

@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
  tar_inp = tar[:, :-1]
  tar_real = tar[:, 1:]
  
  enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
  
  with tf.GradientTape() as tape:
    predictions, _ = transformer(inp, tar_inp, 
                                 True, 
                                 enc_padding_mask, 
                                 combined_mask, 
                                 dec_padding_mask)
    loss = loss_function(tar_real, predictions)

  gradients = tape.gradient(loss, transformer.trainable_variables)    
  optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
  
  train_loss(loss)
  train_accuracy(tar_real, predictions)

Portuguese is used as the input language and English is the target language.

for epoch in range(EPOCHS):
  start = time.time()
  
  train_loss.reset_states()
  train_accuracy.reset_states()
  
  # inp -> portuguese, tar -> english
  for (batch, (inp, tar)) in enumerate(train_dataset):
    train_step(inp, tar)
    
    if batch % 50 == 0:
      print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
          epoch + 1, batch, train_loss.result(), train_accuracy.result()))
      
  if (epoch + 1) % 5 == 0:
    ckpt_save_path = ckpt_manager.save()
    print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
                                                         ckpt_save_path))
    
  print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1, 
                                                train_loss.result(), 
                                                train_accuracy.result()))

  print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))

Epoch 1 Batch 0 Loss 4.7365 Accuracy 0.0000
Epoch 1 Batch 50 Loss 4.3028 Accuracy 0.0033
Epoch 1 Batch 100 Loss 4.1992 Accuracy 0.0140
Epoch 1 Batch 150 Loss 4.1569 Accuracy 0.0182
Epoch 1 Batch 200 Loss 4.0963 Accuracy 0.0204
Epoch 1 Batch 250 Loss 4.0199 Accuracy 0.0217
Epoch 1 Batch 300 Loss 3.9262 Accuracy 0.0242
Epoch 1 Batch 350 Loss 3.8337 Accuracy 0.0278
Epoch 1 Batch 400 Loss 3.7477 Accuracy 0.0305
Epoch 1 Batch 450 Loss 3.6682 Accuracy 0.0332
Epoch 1 Batch 500 Loss 3.6032 Accuracy 0.0367
Epoch 1 Batch 550 Loss 3.5408 Accuracy 0.0405
Epoch 1 Batch 600 Loss 3.4777 Accuracy 0.0443
Epoch 1 Batch 650 Loss 3.4197 Accuracy 0.0479
Epoch 1 Batch 700 Loss 3.3672 Accuracy 0.0514
Epoch 1 Loss 3.3650 Accuracy 0.0515
Time taken for 1 epoch: 576.2345867156982 secs

Epoch 2 Batch 0 Loss 2.4194 Accuracy 0.1030
Epoch 2 Batch 50 Loss 2.5576 Accuracy 0.1030
Epoch 2 Batch 100 Loss 2.5341 Accuracy 0.1051
Epoch 2 Batch 150 Loss 2.5218 Accuracy 0.1076
Epoch 2 Batch 200 Loss 2.4960 Accuracy 0.1095
Epoch 2 Batch 250 Loss 2.4707 Accuracy 0.1115
Epoch 2 Batch 300 Loss 2.4528 Accuracy 0.1133
Epoch 2 Batch 350 Loss 2.4393 Accuracy 0.1150
Epoch 2 Batch 400 Loss 2.4268 Accuracy 0.1165
Epoch 2 Batch 450 Loss 2.4125 Accuracy 0.1182
Epoch 2 Batch 500 Loss 2.4002 Accuracy 0.1196
Epoch 2 Batch 550 Loss 2.3885 Accuracy 0.1209
Epoch 2 Batch 600 Loss 2.3758 Accuracy 0.1222
Epoch 2 Batch 650 Loss 2.3651 Accuracy 0.1235
Epoch 2 Batch 700 Loss 2.3557 Accuracy 0.1247
Epoch 2 Loss 2.3552 Accuracy 0.1247
Time taken for 1 epoch: 341.75365233421326 secs

Epoch 3 Batch 0 Loss 1.8798 Accuracy 0.1347
Epoch 3 Batch 50 Loss 2.1781 Accuracy 0.1438
Epoch 3 Batch 100 Loss 2.1810 Accuracy 0.1444
Epoch 3 Batch 150 Loss 2.1796 Accuracy 0.1452
Epoch 3 Batch 200 Loss 2.1759 Accuracy 0.1462
Epoch 3 Batch 250 Loss 2.1710 Accuracy 0.1471
Epoch 3 Batch 300 Loss 2.1625 Accuracy 0.1473
Epoch 3 Batch 350 Loss 2.1520 Accuracy 0.1476
Epoch 3 Batch 400 Loss 2.1411 Accuracy 0.1481
Epoch 3 Batch 450 Loss 2.1306 Accuracy 0.1484
Epoch 3 Batch 500 Loss 2.1276 Accuracy 0.1490
Epoch 3 Batch 550 Loss 2.1231 Accuracy 0.1497
Epoch 3 Batch 600 Loss 2.1143 Accuracy 0.1500
Epoch 3 Batch 650 Loss 2.1063 Accuracy 0.1508
Epoch 3 Batch 700 Loss 2.1034 Accuracy 0.1519
Epoch 3 Loss 2.1036 Accuracy 0.1519
Time taken for 1 epoch: 328.1187334060669 secs

Epoch 4 Batch 0 Loss 2.0632 Accuracy 0.1622
Epoch 4 Batch 50 Loss 1.9662 Accuracy 0.1642
Epoch 4 Batch 100 Loss 1.9674 Accuracy 0.1656
Epoch 4 Batch 150 Loss 1.9682 Accuracy 0.1667
Epoch 4 Batch 200 Loss 1.9538 Accuracy 0.1679
Epoch 4 Batch 250 Loss 1.9385 Accuracy 0.1683
Epoch 4 Batch 300 Loss 1.9296 Accuracy 0.1694
Epoch 4 Batch 350 Loss 1.9248 Accuracy 0.1705
Epoch 4 Batch 400 Loss 1.9178 Accuracy 0.1716
Epoch 4 Batch 450 Loss 1.9068 Accuracy 0.1724
Epoch 4 Batch 500 Loss 1.8983 Accuracy 0.1735
Epoch 4 Batch 550 Loss 1.8905 Accuracy 0.1745
Epoch 4 Batch 600 Loss 1.8851 Accuracy 0.1757
Epoch 4 Batch 650 Loss 1.8793 Accuracy 0.1768
Epoch 4 Batch 700 Loss 1.8742 Accuracy 0.1779
Epoch 4 Loss 1.8746 Accuracy 0.1780
Time taken for 1 epoch: 326.3032810688019 secs

Epoch 5 Batch 0 Loss 1.9596 Accuracy 0.1979
Epoch 5 Batch 50 Loss 1.7048 Accuracy 0.1961
Epoch 5 Batch 100 Loss 1.6949 Accuracy 0.1969
Epoch 5 Batch 150 Loss 1.6942 Accuracy 0.1986
Epoch 5 Batch 200 Loss 1.6876 Accuracy 0.1992
Epoch 5 Batch 250 Loss 1.6827 Accuracy 0.1994
Epoch 5 Batch 300 Loss 1.6776 Accuracy 0.2006
Epoch 5 Batch 350 Loss 1.6740 Accuracy 0.2013
Epoch 5 Batch 400 Loss 1.6706 Accuracy 0.2019
Epoch 5 Batch 450 Loss 1.6656 Accuracy 0.2028
Epoch 5 Batch 500 Loss 1.6599 Accuracy 0.2035
Epoch 5 Batch 550 Loss 1.6558 Accuracy 0.2040
Epoch 5 Batch 600 Loss 1.6519 Accuracy 0.2047
Epoch 5 Batch 650 Loss 1.6510 Accuracy 0.2053
Epoch 5 Batch 700 Loss 1.6453 Accuracy 0.2058
Saving checkpoint for epoch 5 at ./checkpoints/train/ckpt-1
Epoch 5 Loss 1.6453 Accuracy 0.2058
Time taken for 1 epoch: 307.13636589050293 secs

Epoch 6 Batch 0 Loss 1.5280 Accuracy 0.2127
Epoch 6 Batch 50 Loss 1.5062 Accuracy 0.2214
Epoch 6 Batch 100 Loss 1.5121 Accuracy 0.2225
Epoch 6 Batch 150 Loss 1.5051 Accuracy 0.2216
Epoch 6 Batch 200 Loss 1.5014 Accuracy 0.2219
Epoch 6 Batch 250 Loss 1.4984 Accuracy 0.2222
Epoch 6 Batch 300 Loss 1.4966 Accuracy 0.2232
Epoch 6 Batch 350 Loss 1.4929 Accuracy 0.2231
Epoch 6 Batch 400 Loss 1.4900 Accuracy 0.2234
Epoch 6 Batch 450 Loss 1.4836 Accuracy 0.2237
Epoch 6 Batch 500 Loss 1.4792 Accuracy 0.2241
Epoch 6 Batch 550 Loss 1.4727 Accuracy 0.2245
Epoch 6 Batch 600 Loss 1.4695 Accuracy 0.2251
Epoch 6 Batch 650 Loss 1.4659 Accuracy 0.2256
Epoch 6 Batch 700 Loss 1.4625 Accuracy 0.2262
Epoch 6 Loss 1.4619 Accuracy 0.2262
Time taken for 1 epoch: 303.32839941978455 secs

Epoch 7 Batch 0 Loss 1.1667 Accuracy 0.2262
Epoch 7 Batch 50 Loss 1.3010 Accuracy 0.2407
Epoch 7 Batch 100 Loss 1.3009 Accuracy 0.2400
Epoch 7 Batch 150 Loss 1.2983 Accuracy 0.2414
Epoch 7 Batch 200 Loss 1.2959 Accuracy 0.2428
Epoch 7 Batch 250 Loss 1.2948 Accuracy 0.2436
Epoch 7 Batch 300 Loss 1.2928 Accuracy 0.2439
Epoch 7 Batch 350 Loss 1.2901 Accuracy 0.2442
Epoch 7 Batch 400 Loss 1.2831 Accuracy 0.2448
Epoch 7 Batch 450 Loss 1.2844 Accuracy 0.2458
Epoch 7 Batch 500 Loss 1.2832 Accuracy 0.2463
Epoch 7 Batch 550 Loss 1.2827 Accuracy 0.2469
Epoch 7 Batch 600 Loss 1.2786 Accuracy 0.2470
Epoch 7 Batch 650 Loss 1.2738 Accuracy 0.2473
Epoch 7 Batch 700 Loss 1.2737 Accuracy 0.2480
Epoch 7 Loss 1.2737 Accuracy 0.2480
Time taken for 1 epoch: 314.8111472129822 secs

Epoch 8 Batch 0 Loss 1.1562 Accuracy 0.2611
Epoch 8 Batch 50 Loss 1.1305 Accuracy 0.2637
Epoch 8 Batch 100 Loss 1.1262 Accuracy 0.2644
Epoch 8 Batch 150 Loss 1.1193 Accuracy 0.2639
Epoch 8 Batch 200 Loss 1.1210 Accuracy 0.2645
Epoch 8 Batch 250 Loss 1.1177 Accuracy 0.2651
Epoch 8 Batch 300 Loss 1.1182 Accuracy 0.2648
Epoch 8 Batch 350 Loss 1.1200 Accuracy 0.2653
Epoch 8 Batch 400 Loss 1.1212 Accuracy 0.2655
Epoch 8 Batch 450 Loss 1.1207 Accuracy 0.2653
Epoch 8 Batch 500 Loss 1.1222 Accuracy 0.2660
Epoch 8 Batch 550 Loss 1.1219 Accuracy 0.2664
Epoch 8 Batch 600 Loss 1.1229 Accuracy 0.2663
Epoch 8 Batch 650 Loss 1.1211 Accuracy 0.2664
Epoch 8 Batch 700 Loss 1.1206 Accuracy 0.2668
Epoch 8 Loss 1.1207 Accuracy 0.2668
Time taken for 1 epoch: 301.5652780532837 secs

Epoch 9 Batch 0 Loss 0.8384 Accuracy 0.2751
Epoch 9 Batch 50 Loss 0.9923 Accuracy 0.2793
Epoch 9 Batch 100 Loss 0.9958 Accuracy 0.2796
Epoch 9 Batch 150 Loss 0.9953 Accuracy 0.2787
Epoch 9 Batch 200 Loss 0.9937 Accuracy 0.2790
Epoch 9 Batch 250 Loss 0.9988 Accuracy 0.2800
Epoch 9 Batch 300 Loss 0.9999 Accuracy 0.2801
Epoch 9 Batch 350 Loss 1.0021 Accuracy 0.2800
Epoch 9 Batch 400 Loss 1.0001 Accuracy 0.2800
Epoch 9 Batch 450 Loss 1.0013 Accuracy 0.2800
Epoch 9 Batch 500 Loss 1.0027 Accuracy 0.2805
Epoch 9 Batch 550 Loss 1.0034 Accuracy 0.2804
Epoch 9 Batch 600 Loss 1.0071 Accuracy 0.2810
Epoch 9 Batch 650 Loss 1.0076 Accuracy 0.2810
Epoch 9 Batch 700 Loss 1.0075 Accuracy 0.2806
Epoch 9 Loss 1.0076 Accuracy 0.2806
Time taken for 1 epoch: 304.53144931793213 secs

Epoch 10 Batch 0 Loss 0.9130 Accuracy 0.3057
Epoch 10 Batch 50 Loss 0.8950 Accuracy 0.2966
Epoch 10 Batch 100 Loss 0.9066 Accuracy 0.2967
Epoch 10 Batch 150 Loss 0.9128 Accuracy 0.2958
Epoch 10 Batch 200 Loss 0.9099 Accuracy 0.2943
Epoch 10 Batch 250 Loss 0.9131 Accuracy 0.2935
Epoch 10 Batch 300 Loss 0.9155 Accuracy 0.2930
Epoch 10 Batch 350 Loss 0.9144 Accuracy 0.2922
Epoch 10 Batch 400 Loss 0.9148 Accuracy 0.2922
Epoch 10 Batch 450 Loss 0.9170 Accuracy 0.2916
Epoch 10 Batch 500 Loss 0.9164 Accuracy 0.2910
Epoch 10 Batch 550 Loss 0.9175 Accuracy 0.2908
Epoch 10 Batch 600 Loss 0.9193 Accuracy 0.2908
Epoch 10 Batch 650 Loss 0.9229 Accuracy 0.2907
Epoch 10 Batch 700 Loss 0.9245 Accuracy 0.2910
Saving checkpoint for epoch 10 at ./checkpoints/train/ckpt-2
Epoch 10 Loss 0.9247 Accuracy 0.2910
Time taken for 1 epoch: 308.50231170654297 secs

Epoch 11 Batch 0 Loss 0.8796 Accuracy 0.3030
Epoch 11 Batch 50 Loss 0.8186 Accuracy 0.3025
Epoch 11 Batch 100 Loss 0.8268 Accuracy 0.3020
Epoch 11 Batch 150 Loss 0.8422 Accuracy 0.3026
Epoch 11 Batch 200 Loss 0.8453 Accuracy 0.3023
Epoch 11 Batch 250 Loss 0.8472 Accuracy 0.3020
Epoch 11 Batch 300 Loss 0.8478 Accuracy 0.3019
Epoch 11 Batch 350 Loss 0.8488 Accuracy 0.3018
Epoch 11 Batch 400 Loss 0.8509 Accuracy 0.3017
Epoch 11 Batch 450 Loss 0.8505 Accuracy 0.3012
Epoch 11 Batch 500 Loss 0.8505 Accuracy 0.3009
Epoch 11 Batch 550 Loss 0.8514 Accuracy 0.3005
Epoch 11 Batch 600 Loss 0.8541 Accuracy 0.3001
Epoch 11 Batch 650 Loss 0.8568 Accuracy 0.2998
Epoch 11 Batch 700 Loss 0.8581 Accuracy 0.2995
Epoch 11 Loss 0.8586 Accuracy 0.2996
Time taken for 1 epoch: 326.4959843158722 secs

Epoch 12 Batch 0 Loss 0.8353 Accuracy 0.3318
Epoch 12 Batch 50 Loss 0.7892 Accuracy 0.3161
Epoch 12 Batch 100 Loss 0.7778 Accuracy 0.3134
Epoch 12 Batch 150 Loss 0.7817 Accuracy 0.3132
Epoch 12 Batch 200 Loss 0.7845 Accuracy 0.3132
Epoch 12 Batch 250 Loss 0.7881 Accuracy 0.3124
Epoch 12 Batch 300 Loss 0.7903 Accuracy 0.3122
Epoch 12 Batch 350 Loss 0.7894 Accuracy 0.3107
Epoch 12 Batch 400 Loss 0.7889 Accuracy 0.3097
Epoch 12 Batch 450 Loss 0.7917 Accuracy 0.3089
Epoch 12 Batch 500 Loss 0.7947 Accuracy 0.3089
Epoch 12 Batch 550 Loss 0.7965 Accuracy 0.3087
Epoch 12 Batch 600 Loss 0.7990 Accuracy 0.3082
Epoch 12 Batch 650 Loss 0.8002 Accuracy 0.3077
Epoch 12 Batch 700 Loss 0.8026 Accuracy 0.3076
Epoch 12 Loss 0.8028 Accuracy 0.3076
Time taken for 1 epoch: 306.4404299259186 secs

Epoch 13 Batch 0 Loss 0.7718 Accuracy 0.3059
Epoch 13 Batch 50 Loss 0.7275 Accuracy 0.3206
Epoch 13 Batch 100 Loss 0.7308 Accuracy 0.3206
Epoch 13 Batch 150 Loss 0.7317 Accuracy 0.3186
Epoch 13 Batch 200 Loss 0.7342 Accuracy 0.3174
Epoch 13 Batch 250 Loss 0.7349 Accuracy 0.3171
Epoch 13 Batch 300 Loss 0.7374 Accuracy 0.3167
Epoch 13 Batch 350 Loss 0.7397 Accuracy 0.3166
Epoch 13 Batch 400 Loss 0.7410 Accuracy 0.3163
Epoch 13 Batch 450 Loss 0.7415 Accuracy 0.3154
Epoch 13 Batch 500 Loss 0.7434 Accuracy 0.3150
Epoch 13 Batch 550 Loss 0.7466 Accuracy 0.3148
Epoch 13 Batch 600 Loss 0.7490 Accuracy 0.3142
Epoch 13 Batch 650 Loss 0.7522 Accuracy 0.3142
Epoch 13 Batch 700 Loss 0.7552 Accuracy 0.3142
Epoch 13 Loss 0.7554 Accuracy 0.3142
Time taken for 1 epoch: 299.16382122039795 secs

Epoch 14 Batch 0 Loss 0.6654 Accuracy 0.3193
Epoch 14 Batch 50 Loss 0.6744 Accuracy 0.3277
Epoch 14 Batch 100 Loss 0.6809 Accuracy 0.3237
Epoch 14 Batch 150 Loss 0.6830 Accuracy 0.3238
Epoch 14 Batch 200 Loss 0.6875 Accuracy 0.3235
Epoch 14 Batch 250 Loss 0.6942 Accuracy 0.3238
Epoch 14 Batch 300 Loss 0.6976 Accuracy 0.3231
Epoch 14 Batch 350 Loss 0.7000 Accuracy 0.3230
Epoch 14 Batch 400 Loss 0.7019 Accuracy 0.3222
Epoch 14 Batch 450 Loss 0.7035 Accuracy 0.3212
Epoch 14 Batch 500 Loss 0.7077 Accuracy 0.3207
Epoch 14 Batch 550 Loss 0.7078 Accuracy 0.3201
Epoch 14 Batch 600 Loss 0.7095 Accuracy 0.3196
Epoch 14 Batch 650 Loss 0.7127 Accuracy 0.3197
Epoch 14 Batch 700 Loss 0.7148 Accuracy 0.3193
Epoch 14 Loss 0.7153 Accuracy 0.3194
Time taken for 1 epoch: 294.01167726516724 secs

Epoch 15 Batch 0 Loss 0.6159 Accuracy 0.3546
Epoch 15 Batch 50 Loss 0.6416 Accuracy 0.3339
Epoch 15 Batch 100 Loss 0.6477 Accuracy 0.3323
Epoch 15 Batch 150 Loss 0.6480 Accuracy 0.3300
Epoch 15 Batch 200 Loss 0.6518 Accuracy 0.3286
Epoch 15 Batch 250 Loss 0.6536 Accuracy 0.3283
Epoch 15 Batch 300 Loss 0.6576 Accuracy 0.3276
Epoch 15 Batch 350 Loss 0.6618 Accuracy 0.3274
Epoch 15 Batch 400 Loss 0.6657 Accuracy 0.3272
Epoch 15 Batch 450 Loss 0.6689 Accuracy 0.3269
Epoch 15 Batch 500 Loss 0.6693 Accuracy 0.3263
Epoch 15 Batch 550 Loss 0.6711 Accuracy 0.3255
Epoch 15 Batch 600 Loss 0.6740 Accuracy 0.3249
Epoch 15 Batch 650 Loss 0.6775 Accuracy 0.3250
Epoch 15 Batch 700 Loss 0.6796 Accuracy 0.3247
Saving checkpoint for epoch 15 at ./checkpoints/train/ckpt-3
Epoch 15 Loss 0.6800 Accuracy 0.3247
Time taken for 1 epoch: 296.7416775226593 secs

Epoch 16 Batch 0 Loss 0.6764 Accuracy 0.3298
Epoch 16 Batch 50 Loss 0.6024 Accuracy 0.3335
Epoch 16 Batch 100 Loss 0.6089 Accuracy 0.3345
Epoch 16 Batch 150 Loss 0.6135 Accuracy 0.3315
Epoch 16 Batch 200 Loss 0.6191 Accuracy 0.3323
Epoch 16 Batch 250 Loss 0.6214 Accuracy 0.3324
Epoch 16 Batch 300 Loss 0.6230 Accuracy 0.3315
Epoch 16 Batch 350 Loss 0.6268 Accuracy 0.3313
Epoch 16 Batch 400 Loss 0.6294 Accuracy 0.3309
Epoch 16 Batch 450 Loss 0.6325 Accuracy 0.3306
Epoch 16 Batch 500 Loss 0.6350 Accuracy 0.3300
Epoch 16 Batch 550 Loss 0.6385 Accuracy 0.3298
Epoch 16 Batch 600 Loss 0.6405 Accuracy 0.3293
Epoch 16 Batch 650 Loss 0.6434 Accuracy 0.3291
Epoch 16 Batch 700 Loss 0.6472 Accuracy 0.3289
Epoch 16 Loss 0.6476 Accuracy 0.3290
Time taken for 1 epoch: 302.5653040409088 secs

Epoch 17 Batch 0 Loss 0.7453 Accuracy 0.3696
Epoch 17 Batch 50 Loss 0.5800 Accuracy 0.3427
Epoch 17 Batch 100 Loss 0.5841 Accuracy 0.3422
Epoch 17 Batch 150 Loss 0.5912 Accuracy 0.3409
Epoch 17 Batch 200 Loss 0.5911 Accuracy 0.3384
Epoch 17 Batch 250 Loss 0.5962 Accuracy 0.3389
Epoch 17 Batch 300 Loss 0.5997 Accuracy 0.3389
Epoch 17 Batch 350 Loss 0.6017 Accuracy 0.3383
Epoch 17 Batch 400 Loss 0.6042 Accuracy 0.3376
Epoch 17 Batch 450 Loss 0.6077 Accuracy 0.3375
Epoch 17 Batch 500 Loss 0.6106 Accuracy 0.3369
Epoch 17 Batch 550 Loss 0.6127 Accuracy 0.3361
Epoch 17 Batch 600 Loss 0.6148 Accuracy 0.3352
Epoch 17 Batch 650 Loss 0.6171 Accuracy 0.3346
Epoch 17 Batch 700 Loss 0.6195 Accuracy 0.3339
Epoch 17 Loss 0.6196 Accuracy 0.3339
Time taken for 1 epoch: 303.3943374156952 secs

Epoch 18 Batch 0 Loss 0.4733 Accuracy 0.3313
Epoch 18 Batch 50 Loss 0.5544 Accuracy 0.3395
Epoch 18 Batch 100 Loss 0.5637 Accuracy 0.3435
Epoch 18 Batch 150 Loss 0.5625 Accuracy 0.3421
Epoch 18 Batch 200 Loss 0.5686 Accuracy 0.3421
Epoch 18 Batch 250 Loss 0.5714 Accuracy 0.3413
Epoch 18 Batch 300 Loss 0.5727 Accuracy 0.3407
Epoch 18 Batch 350 Loss 0.5770 Accuracy 0.3406
Epoch 18 Batch 400 Loss 0.5759 Accuracy 0.3394
Epoch 18 Batch 450 Loss 0.5779 Accuracy 0.3390
Epoch 18 Batch 500 Loss 0.5810 Accuracy 0.3392
Epoch 18 Batch 550 Loss 0.5836 Accuracy 0.3388
Epoch 18 Batch 600 Loss 0.5870 Accuracy 0.3379
Epoch 18 Batch 650 Loss 0.5905 Accuracy 0.3378
Epoch 18 Batch 700 Loss 0.5945 Accuracy 0.3376
Epoch 18 Loss 0.5947 Accuracy 0.3376
Time taken for 1 epoch: 298.2541983127594 secs

Epoch 19 Batch 0 Loss 0.5082 Accuracy 0.3261
Epoch 19 Batch 50 Loss 0.5285 Accuracy 0.3451
Epoch 19 Batch 100 Loss 0.5336 Accuracy 0.3472
Epoch 19 Batch 150 Loss 0.5322 Accuracy 0.3440
Epoch 19 Batch 200 Loss 0.5355 Accuracy 0.3439
Epoch 19 Batch 250 Loss 0.5413 Accuracy 0.3441
Epoch 19 Batch 300 Loss 0.5461 Accuracy 0.3443
Epoch 19 Batch 350 Loss 0.5519 Accuracy 0.3441
Epoch 19 Batch 400 Loss 0.5548 Accuracy 0.3436
Epoch 19 Batch 450 Loss 0.5561 Accuracy 0.3427
Epoch 19 Batch 500 Loss 0.5595 Accuracy 0.3423
Epoch 19 Batch 550 Loss 0.5616 Accuracy 0.3416
Epoch 19 Batch 600 Loss 0.5658 Accuracy 0.3412
Epoch 19 Batch 650 Loss 0.5684 Accuracy 0.3407
Epoch 19 Batch 700 Loss 0.5707 Accuracy 0.3405
Epoch 19 Loss 0.5709 Accuracy 0.3406
Time taken for 1 epoch: 297.59109830856323 secs

Epoch 20 Batch 0 Loss 0.6551 Accuracy 0.3720
Epoch 20 Batch 50 Loss 0.5086 Accuracy 0.3527
Epoch 20 Batch 100 Loss 0.5160 Accuracy 0.3495
Epoch 20 Batch 150 Loss 0.5196 Accuracy 0.3495
Epoch 20 Batch 200 Loss 0.5210 Accuracy 0.3490
Epoch 20 Batch 250 Loss 0.5241 Accuracy 0.3487
Epoch 20 Batch 300 Loss 0.5287 Accuracy 0.3486
Epoch 20 Batch 350 Loss 0.5312 Accuracy 0.3477
Epoch 20 Batch 400 Loss 0.5337 Accuracy 0.3475
Epoch 20 Batch 450 Loss 0.5369 Accuracy 0.3469
Epoch 20 Batch 500 Loss 0.5377 Accuracy 0.3458
Epoch 20 Batch 550 Loss 0.5400 Accuracy 0.3453
Epoch 20 Batch 600 Loss 0.5441 Accuracy 0.3450
Epoch 20 Batch 650 Loss 0.5469 Accuracy 0.3445
Epoch 20 Batch 700 Loss 0.5507 Accuracy 0.3440
Saving checkpoint for epoch 20 at ./checkpoints/train/ckpt-4
Epoch 20 Loss 0.5507 Accuracy 0.3440
Time taken for 1 epoch: 303.6011939048767 secs

Evaluate

The following steps are used for evaluation:

  • Encode the input sentence using the Portuguese tokenizer (tokenizer_pt). Also, add the start and end token so the input is equivalent to what the model was trained with. This is the encoder input.
  • The decoder input is the start token == tokenizer_en.vocab_size.
  • Calculate the padding masks and the look ahead masks.
  • The decoder then outputs the predictions by looking at the encoder output and its own output (self-attention).
  • Select the last word and calculate its argmax.
  • Concatenate the predicted word to the decoder input and pass it to the decoder.
  • In this approach, the decoder predicts the next word based on the previous words it predicted.
def evaluate(inp_sentence):
  start_token = [tokenizer_pt.vocab_size]
  end_token = [tokenizer_pt.vocab_size + 1]
  
  # inp sentence is portuguese, hence adding the start and end token
  inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
  encoder_input = tf.expand_dims(inp_sentence, 0)
  
  # as the target is english, the first word to the transformer should be the
  # english start token.
  decoder_input = [tokenizer_en.vocab_size]
  output = tf.expand_dims(decoder_input, 0)
    
  for i in range(MAX_LENGTH):
    enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
        encoder_input, output)
  
    # predictions.shape == (batch_size, seq_len, vocab_size)
    predictions, attention_weights = transformer(encoder_input, 
                                                 output,
                                                 False,
                                                 enc_padding_mask,
                                                 combined_mask,
                                                 dec_padding_mask)
    
    # select the last word from the seq_len dimension
    predictions = predictions[:, -1:, :]  # (batch_size, 1, vocab_size)

    predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
    
    # return the result if the predicted_id is equal to the end token
    if predicted_id == tokenizer_en.vocab_size+1:
      return tf.squeeze(output, axis=0), attention_weights
    
    # concatenate the predicted_id to the output, which is given to the
    # decoder as its input.
    output = tf.concat([output, predicted_id], axis=-1)

  return tf.squeeze(output, axis=0), attention_weights
def plot_attention_weights(attention, sentence, result, layer):
  fig = plt.figure(figsize=(16, 8))
  
  sentence = tokenizer_pt.encode(sentence)
  
  attention = tf.squeeze(attention[layer], axis=0)
  
  for head in range(attention.shape[0]):
    ax = fig.add_subplot(2, 4, head+1)
    
    # plot the attention weights
    ax.matshow(attention[head][:-1, :], cmap='viridis')

    fontdict = {'fontsize': 10}
    
    ax.set_xticks(range(len(sentence)+2))
    ax.set_yticks(range(len(result)))
    
    ax.set_ylim(len(result)-1.5, -0.5)
        
    ax.set_xticklabels(
        ['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'], 
        fontdict=fontdict, rotation=90)
    
    ax.set_yticklabels([tokenizer_en.decode([i]) for i in result 
                        if i < tokenizer_en.vocab_size], 
                       fontdict=fontdict)
    
    ax.set_xlabel('Head {}'.format(head+1))
  
  plt.tight_layout()
  plt.show()
def translate(sentence, plot=''):
  result, attention_weights = evaluate(sentence)
  
  predicted_sentence = tokenizer_en.decode([i for i in result 
                                            if i < tokenizer_en.vocab_size])  

  print('Input: {}'.format(sentence))
  print('Predicted translation: {}'.format(predicted_sentence))
  
  if plot:
    plot_attention_weights(attention_weights, sentence, result, plot)
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
Input: este é um problema que temos que resolver.
Predicted translation: this is a problem that we have to solve ..5 to the world .
Real translation: this is a problem we have to solve .
translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
Input: os meus vizinhos ouviram sobre esta ideia.
Predicted translation: my neighbors have heard about this idea of this idea .
Real translation: and my neighboring homes heard about this idea .
translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
Input: vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.
Predicted translation: so i 'm going to share very quickly with you some stories of some magic things that happened .
Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .

You can pass different layers and attention blocks of the decoder to the plot parameter.

translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
Input: este é o primeiro livro que eu fiz.
Predicted translation: this is the first book i did in the u.s. book .

Attention weights plot for decoder_layer4_block2 (one subplot per head)

Real translation: this is the first book i've ever done.

Summary

In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking, and how to create a transformer.

Try using a different dataset to train the transformer. You can also create the base transformer or Transformer XL by changing the hyperparameters above. You can also use the layers defined here to create BERT and train state-of-the-art models. Furthermore, you can implement beam search to get better predictions; a minimal sketch follows.
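As a starting point, here is a minimal beam search sketch built on the evaluate loop above. It is an assumption-laden illustration rather than a reference implementation: it reuses transformer, tokenizer_pt, tokenizer_en, create_masks, and MAX_LENGTH from this notebook, and the beam_search name and beam_width parameter are hypothetical.

def beam_search(inp_sentence, beam_width=4):
  start_token = [tokenizer_pt.vocab_size]
  end_token = [tokenizer_pt.vocab_size + 1]
  encoder_input = tf.expand_dims(
      start_token + tokenizer_pt.encode(inp_sentence) + end_token, 0)

  # Each beam is a (tokens, cumulative log-probability) pair, starting
  # from the English start token.
  beams = [([tokenizer_en.vocab_size], 0.0)]
  en_end_token = tokenizer_en.vocab_size + 1

  for _ in range(MAX_LENGTH):
    candidates = []
    for tokens, score in beams:
      if tokens[-1] == en_end_token:
        candidates.append((tokens, score))  # finished beams carry over
        continue
      output = tf.expand_dims(tokens, 0)
      enc_mask, combined_mask, dec_mask = create_masks(encoder_input, output)
      predictions, _ = transformer(encoder_input, output, False,
                                   enc_mask, combined_mask, dec_mask)
      log_probs = tf.nn.log_softmax(predictions[0, -1, :])
      top = tf.math.top_k(log_probs, k=beam_width)
      for log_p, idx in zip(top.values.numpy(), top.indices.numpy()):
        candidates.append((tokens + [int(idx)], score + float(log_p)))

    # keep only the best beam_width candidates
    beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    if all(tokens[-1] == en_end_token for tokens, _ in beams):
      break

  best_tokens = beams[0][0]
  return tokenizer_en.decode(
      [i for i in best_tokens if i < tokenizer_en.vocab_size])

Unlike the greedy evaluate above, this keeps several partial translations alive at each step, so a locally suboptimal first word can still lead to the best overall sentence.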