Wiki40B Language Models


Generate Wikipedia-like text using the Wiki40B language models from TensorFlow Hub!

This notebook illustrates how to:

  • Load the 41 monolingual and 2 multilingual language models that are part of the Wiki40b-LM collection on TF-Hub
  • Use the models to obtain perplexity, per layer activations, and word embeddings for a given piece of text
  • Generate text token-by-token from a piece of seed text

The language models are trained on the newly published, cleaned-up Wiki40B dataset available on TensorFlow Datasets. The training setup is based on the paper “Wiki-40B: Multilingual Language Model Dataset”.

Setup

Installing Dependencies
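The install cell itself is not reproduced here; a minimal sketch of what it needs, assuming the module depends on the SentencePiece ops shipped in tensorflow_text (the exact version pin in the original notebook may differ):

# Install tensorflow_text, which registers the tokenization ops the module uses.
pip install --quiet tensorflow_text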

Imports
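The import cell is likewise not shown; a plausible reconstruction, assuming the TF1-style hub.Module API this model was published for:

import numpy as np
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
import tensorflow_text as tf_text  # imported for its op registrations only

tf.disable_eager_execution()       # the wiki40b-lm modules use the TF1 graph/session API
tf.logging.set_verbosity(tf.logging.WARN)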

2023-12-08 13:06:23.563120: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-12-08 13:06:24.329384: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-12-08 13:06:24.329482: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-12-08 13:06:24.329492: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.

Choose Language

Let's choose which language model to load from TF-Hub and the length of text to be generated.
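A minimal sketch of the form cell that produces the line printed below; language and max_gen_len stand in for the notebook's form parameters:

language = "en"   # pick one of the models in the collection, e.g. "en", "de", "zh-cn", ...
max_gen_len = 20  # maximum number of tokens to generate

hub_module = "https://tfhub.dev/google/wiki40b-lm-{}/1".format(language)
print("Using the {} model to generate sequences of max length {}.".format(hub_module, max_gen_len))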

Using the https://tfhub.dev/google/wiki40b-lm-en/1 model to generate sequences of max length 20.

Build the Model

Okay, now that we've chosen which pre-trained model to use, let's configure it to generate text up to max_gen_len. We will need to load the language model from TF-Hub, feed in a piece of starter text, and then iteratively feed in tokens as they are generated.

Load the language model pieces
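A sketch of the loading cell, assuming the module exposes the signatures documented on its TF-Hub page (word_embeddings, tokenization, activations, neg_log_likelihood); the names g, n_layer, and model_dim are reconstructions:

g = tf.Graph()
n_layer = 12     # number of transformer layers in the model
model_dim = 768  # hidden dimension

with g.as_default():
  text = tf.placeholder(dtype=tf.string, shape=(1,))  # the input text

  # Load the pre-trained model from TF-Hub.
  module = hub.Module(hub_module)

  # Word embeddings, token ids, per-layer activations, and perplexity for `text`.
  embeddings = module(dict(text=text), signature="word_embeddings", as_dict=True)["word_embeddings"]
  token_ids = module(dict(text=text), signature="tokenization", as_dict=True)["token_ids"]
  activations = module(dict(text=text), signature="activations", as_dict=True)["activations"]
  neg_log_likelihood = module(dict(text=text), signature="neg_log_likelihood", as_dict=True)["neg_log_likelihood"]
  ppl = tf.exp(tf.reduce_mean(neg_log_likelihood, axis=1))  # perplexity = exp(mean NLL)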

2023-12-08 13:06:32.463390: W tensorflow/core/common_runtime/graph_constructor.cc:1526] Importing a graph with a lower producer version 359 into an existing graph with producer version 1286. Shape inference will have run different parts of the graph with different producer versions.
2023-12-08 13:06:34.423955: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:267] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected

Construct the per-token generation graph
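A sketch of the per-token step, assuming the module's "prediction" signature takes the current input tokens plus one memory tensor per layer and returns logits and updated memories; treat the key names ("input_tokens", "mem_0", ...) as assumptions about the module's interface:

def feedforward_step(module, inputs, mems):
  """Run the model for one step; return logits and updated per-layer memories."""
  inputs = tf.dtypes.cast(inputs, tf.int64)  # the module expects int64 token ids
  generation_input_dict = dict(input_tokens=inputs)
  generation_input_dict.update({"mem_{}".format(i): mems[i] for i in range(n_layer)})

  output_dict = module(generation_input_dict, signature="prediction", as_dict=True)
  return output_dict["logits"], [output_dict["mem_{}".format(i)] for i in range(n_layer)]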

Build the statically unrolled graph for max_gen_len tokens
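A sketch of the unrolled loop: the seed is tokenized once, then feedforward_step is called max_gen_len times, sampling one token per step and feeding it back in. The empty initial memories, the logits layout, and the "detokenization" signature are assumptions based on the module's interface:

with g.as_default():
  # Tokenize the seed text and start with empty memories for every layer.
  input_ids = module(dict(text=text), signature="tokenization", as_dict=True)["token_ids"]
  mems = [tf.zeros([1, 0, model_dim], dtype=tf.float32) for _ in range(n_layer)]

  sampled_ids = []
  inputs = input_ids
  for step in range(max_gen_len):
    logits, mems = feedforward_step(module, inputs, mems)
    # Sample the next token from the logits at the last position.
    sampled_id = tf.random.categorical(logits[:, -1, :], num_samples=1)  # shape [1, 1]
    sampled_ids.append(tf.squeeze(sampled_id))
    inputs = sampled_id  # feed the sampled token back in at the next step

  # Turn the sampled ids back into text.
  sampled_ids = tf.expand_dims(tf.stack(sampled_ids), axis=0)
  generated_text = module(dict(token_ids=sampled_ids), signature="detokenization", as_dict=True)["text"]

  init_op = tf.group([tf.global_variables_initializer(), tf.tables_initializer()])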

Generate some text

Let's generate some text! We'll set a text seed to prompt the language model.

You can use one of the predefined seeds or optionally enter your own. This text will be used as the seed to prompt the language model for what to generate next.

The following special tokens precede special parts of the generated article: use _START_ARTICLE_ to indicate the beginning of the article, _START_SECTION_ to indicate the beginning of a section, and _START_PARAGRAPH_ to generate text in the article.

Predefined Seeds

Enter your own seed (Optional).
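The two seed cells are not reproduced; a minimal sketch, where predefined_seeds is a hypothetical mapping from language to the article-style seed printed below, and a non-empty custom seed takes precedence:

# Hypothetical seed table; the real notebook keys one predefined seed per language.
predefined_seeds = {
    "en": "\n_START_ARTICLE_\n1882 Prince Edward Island general election"
          "\n_START_PARAGRAPH_\nThe 1882 Prince Edward Island election was held on "
          "May 8, 1882 to elect members of the House of Assembly of the province of "
          "Prince Edward Island, Canada.",
}

custom_seed = ""  # optionally enter your own seed here
seed = custom_seed if custom_seed else predefined_seeds[language]
print("Generating text from seed:\n{}".format(seed))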

Generating text from seed:

_START_ARTICLE_
1882 Prince Edward Island general election
_START_PARAGRAPH_
The 1882 Prince Edward Island election was held on May 8, 1882 to elect members of the House of Assembly of the province of Prince Edward Island, Canada.

Initialize session.
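A sketch of the session setup, assuming init_op was created in the graph-building cell above:

# Create a session bound to the graph and run the initializers.
session = tf.Session(graph=g)
session.run(init_op)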

Generate text
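A sketch of the generation cell, fetching the generated text along with the diagnostic tensors inspected further down; the variable names mirror the outputs shown below:

results = session.run(
    [generated_text, token_ids, activations, embeddings, ppl],
    feed_dict={text: [seed]})
generated, token_ids_result, activations_result, embeddings_result, ppl_result = results

# Print the sampled continuation of the seed (sampling is stochastic, so output varies).
print(generated[0].decode("utf-8"))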

This election were also the first time that two members first met. A majority of twelve elected members (

We can also look at the other outputs of the model: the perplexity, the token ids, the intermediate activations, and the embeddings.

ppl_result
array([23.507736], dtype=float32)
token_ids_result
array([[   8,    3, 6794, 1579, 1582,  721,  489,  448,    8,    5,   26,
        6794, 1579, 1582,  721,  448,   17,  245,   22,  166, 2928, 6794,
          16, 7690,  384,   11,    7,  402,   11, 1172,   11,    7, 2115,
          11, 1579, 1582,  721,    9,  646,   10]], dtype=int32)
activations_result.shape
(12, 1, 39, 768)
embeddings_result
array([[[ 0.12262525,  5.548009  ,  1.4743135 , ...,  2.4388404 ,
         -2.2788858 ,  2.172028  ],
        [-2.3905468 , -0.97108954, -1.5513545 , ...,  8.458472  ,
         -2.8723319 ,  0.6534524 ],
        [-0.83790785,  0.41630274, -0.8740793 , ...,  1.6446769 ,
         -0.9074106 ,  0.3339265 ],
        ...,
        [-0.8054745 , -1.2495526 ,  2.6232922 , ...,  2.893288  ,
         -0.91287214, -1.1259722 ],
        [ 0.64944506,  3.3696785 ,  0.09543293, ..., -0.7839227 ,
         -1.3573489 ,  1.862214  ],
        [-1.2970612 ,  0.5961366 ,  3.3531897 , ...,  3.2853985 ,
         -1.6212384 ,  0.30257902]]], dtype=float32)