Load text


This tutorial demonstrates two ways to load and preprocess text.

  • First, you will use Keras utilities and layers. If you are new to TensorFlow, you should start with these.

  • Next, you will use lower-level utilities like tf.data.TextLineDataset to load text files, and tf.text to preprocess the data for finer-grain control.

import collections
import pathlib
import re
import string

import tensorflow as tf

from tensorflow.keras import layers
from tensorflow.keras import losses
from tensorflow.keras import preprocessing
from tensorflow.keras import utils
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization

import tensorflow_datasets as tfds
pip install tensorflow-text
Collecting tensorflow-text
  Downloading https://files.pythonhosted.org/packages/28/b2/2dbd90b93913afd07e6101b8b84327c401c394e60141c1e98590038060b3/tensorflow_text-2.3.0-cp36-cp36m-manylinux1_x86_64.whl (2.6MB)
     |████████████████████████████████| 2.6MB 2.9MB/s
Installing collected packages: tensorflow-text
Successfully installed tensorflow-text-2.3.0

import tensorflow_text as tf_text

Example 1: Predict the tag for a Stack Overflow question

As a first example, you will download a dataset of programming questions from Stack Overflow. Each question ("How do I sort a dictionary by value?") is labeled with exactly one tag (Python, CSharp, JavaScript, or Java). Your task is to develop a model that predicts the tag for a question. This is an example of multi-class classification, an important and widely applicable kind of machine learning problem.

Download and explore the dataset

Next, you will download the dataset and explore the directory structure.

data_url = 'https://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz'
dataset = utils.get_file(
    'stack_overflow_16k.tar.gz',
    data_url,
    untar=True,
    cache_dir='stack_overflow',
    cache_subdir='')
dataset_dir = pathlib.Path(dataset).parent
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz
6053888/6053168 [==============================] - 0s 0us/step

list(dataset_dir.iterdir())
[PosixPath('/tmp/.keras/README.md'),
 PosixPath('/tmp/.keras/test'),
 PosixPath('/tmp/.keras/stack_overflow_16k.tar.gz.tar.gz'),
 PosixPath('/tmp/.keras/train')]
train_dir = dataset_dir/'train'
list(train_dir.iterdir())
[PosixPath('/tmp/.keras/train/csharp'),
 PosixPath('/tmp/.keras/train/python'),
 PosixPath('/tmp/.keras/train/javascript'),
 PosixPath('/tmp/.keras/train/java')]

The train/csharp, train/java, train/python and train/javascript directories contain many text files, each of which is a Stack Overflow question. Print a file and inspect the data.

sample_file = train_dir/'python/1755.txt'
with open(sample_file) as f:
  print(f.read())
why does this blank program print true x=true.def stupid():.    x=false.stupid().print x


Load the dataset

Next, you will load the data off disk and prepare it into a format suitable for training. To do so, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset. If you're new to tf.data, it's a powerful collection of tools for building input pipelines.

The preprocessing.text_dataset_from_directory utility expects a directory structure as follows.

train/
...csharp/
......1.txt
......2.txt
...java/
......1.txt
......2.txt
...javascript/
......1.txt
......2.txt
...python/
......1.txt
......2.txt

When running a machine learning experiment, it is a best practice to divide your dataset into three splits: train, validation, and test. The Stack Overflow dataset has already been divided into train and test, but it lacks a validation set. Create a validation set using an 80:20 split of the training data by using the validation_split argument below.

batch_size = 32
seed = 42

raw_train_ds = preprocessing.text_dataset_from_directory(
    train_dir,
    batch_size=batch_size,
    validation_split=0.2,
    subset='training',
    seed=seed)
Found 8000 files belonging to 4 classes.
Using 6400 files for training.

As you can see above, there are 8,000 examples in the training folder, of which you will use 80% (or 6,400) for training. As you will see in a moment, you can train a model by passing a tf.data.Dataset directly to model.fit. First, iterate over the dataset and print out a few examples, to get a feel for the data.

for text_batch, label_batch in raw_train_ds.take(1):
  for i in range(10):
    print("Question: ", text_batch.numpy()[i][:100], '...')
    print("Label:", label_batch.numpy()[i])
Question:  b'"convert binary to string not works i created a simple program...i create a string and compress it b' ...
Label: 0
Question:  b'"how to calculate sin using system.data.compute() in blank i\'m new here! i\'m a newbie in blank. one ' ...
Label: 0
Question:  b'"how can i execute a js function from the top of the stack? i\'d like to execute a function from the ' ...
Label: 2
Question:  b'"print a reportviewer automatically when you open a form is it possible, when i open a form with rep' ...
Label: 0
Question:  b'"can i use node.childnodes in the if clause? can i do..if (node.childnodes) {.  // do something.}...' ...
Label: 2
Question:  b'"compare strings in array after split in blank i have a string that i use split function on it in or' ...
Label: 1
Question:  b'"adding an array of div ids to a var so i have animate multiple divs across site i found some js on ' ...
Label: 2
Question:  b'"is it necessary for the namespace to be the name of the file in blank? for example, i name my file(' ...
Label: 0
Question:  b'"how can i convert contiguous letters to number separated by dash in blank? i\'d like to convert alph' ...
Label: 1
Question:  b'"ternary operator to minimize code how i can minimize the following code using ternary operator..if ' ...
Label: 0

The labels are 0, 1, 2 or 3. To see which of these correspond to which string label, you can check the class_names property on the dataset.

for i, label in enumerate(raw_train_ds.class_names):
  print("Label", i, "corresponds to", label)
Label 0 corresponds to csharp
Label 1 corresponds to java
Label 2 corresponds to javascript
Label 3 corresponds to python

Next, you will create a validation and test dataset. You will use the remaining 1,600 questions from the training set for validation.

raw_val_ds = preprocessing.text_dataset_from_directory(
    train_dir,
    batch_size=batch_size,
    validation_split=0.2,
    subset='validation',
    seed=seed)
Found 8000 files belonging to 4 classes.
Using 1600 files for validation.

test_dir = dataset_dir/'test'
raw_test_ds = preprocessing.text_dataset_from_directory(
    test_dir, batch_size=batch_size)
Found 8000 files belonging to 4 classes.

Prepare the dataset for training

Next, you will standardize, tokenize, and vectorize the data using the preprocessing.TextVectorization layer.

  • Standardization refers to preprocessing the text, typically to remove punctuation or HTML elements to simplify the dataset.

  • Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words by splitting on whitespace).

  • Vectorization refers to converting tokens into numbers so they can be fed into a neural network.

All of these tasks can be accomplished with this layer, and each of the defaults below can be overridden (a short sketch follows the list). You can learn more about each of these in the API doc.

  • The default standardization converts text to lowercase and removes punctuation.

  • The default tokenizer splits on whitespace.

  • The default vectorization mode is int. This outputs integer indices (one per token). This mode can be used to build models that take word order into account. You can also use other modes, like binary, to build bag-of-words models.
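
As a hedged illustration of overriding these defaults (the function name and regex below are assumptions made for this sketch, and the resulting layer is not used elsewhere in this tutorial), you can pass your own standardization callable to TextVectorization:

def custom_standardization(input_text):
  # Lowercase the text, then strip punctuation characters.
  lowercase = tf.strings.lower(input_text)
  return tf.strings.regex_replace(
      lowercase, '[%s]' % re.escape(string.punctuation), '')

custom_vectorize_layer = TextVectorization(
    standardize=custom_standardization,
    max_tokens=10000,  # matches the VOCAB_SIZE used below
    output_mode='int')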

You will build two models to learn more about these modes. First, you will use the binary mode to build a bag-of-words model. Next, you will use the int mode with a 1D ConvNet.

VOCAB_SIZE = 10000

binary_vectorize_layer = TextVectorization(
    max_tokens=VOCAB_SIZE,
    output_mode='binary')

For int mode, in addition to the maximum vocabulary size, you need to set an explicit maximum sequence length (output_sequence_length), which will cause the layer to pad or truncate sequences to exactly that many tokens.

MAX_SEQUENCE_LENGTH = 250

int_vectorize_layer = TextVectorization(
    max_tokens=VOCAB_SIZE,
    output_mode='int',
    output_sequence_length=MAX_SEQUENCE_LENGTH)

Next, you will call adapt to fit the state of the preprocessing layer to the dataset. This will cause the layer to build an index of strings to integers.

# Make a text-only dataset (without labels), then call adapt
train_text = raw_train_ds.map(lambda text, labels: text)
binary_vectorize_layer.adapt(train_text)
int_vectorize_layer.adapt(train_text)

See the result of using these layers to preprocess data:

def binary_vectorize_text(text, label):
  text = tf.expand_dims(text, -1)
  return binary_vectorize_layer(text), label
def int_vectorize_text(text, label):
  text = tf.expand_dims(text, -1)
  return int_vectorize_layer(text), label
# Retrieve a batch (of 32 questions and labels) from the dataset
text_batch, label_batch = next(iter(raw_train_ds))
first_question, first_label = text_batch[0], label_batch[0]
print("Question", first_question)
print("Label", first_label)
Question tf.Tensor(b'"function expected error in blank for dynamically created check box when it is clicked i want to grab the attribute value.it is working in ie 8,9,10 but not working in ie 11,chrome shows function expected error..&lt;input type=checkbox checked=\'checked\' id=\'symptomfailurecodeid\' tabindex=\'54\' style=\'cursor:pointer;\' onclick=chkclickevt(this);  failurecodeid=""1"" &gt;...function chkclickevt(obj) { .    alert(obj.attributes(""failurecodeid""));.}"\n', shape=(), dtype=string)
Label tf.Tensor(2, shape=(), dtype=int32)

print("'binary' vectorized question:", 
      binary_vectorize_text(first_question, first_label)[0])
'binary' vectorized question: tf.Tensor([[1. 1. 1. ... 0. 0. 0.]], shape=(1, 10000), dtype=float32)

print("'int' vectorized question:",
      int_vectorize_text(first_question, first_label)[0])
'int' vectorized question: tf.Tensor(
[[  38  450   65    7   16   12  892  265  186  451   44   11    6  685
     3   46    4 2062    2  485    1    6  158    7  479    1   26   20
   158    7  479    1  502   38  450    1 1767 1763    1    1    1    1
     1    1    1    1    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0    0    0
     0    0    0    0    0    0    0    0    0    0    0    0]], shape=(1, 250), dtype=int64)

As you can see above, binary mode returns an array denoting which tokens exist at least once in the input, while int mode replaces each token by an integer, thus preserving their order. You can look up the token (string) that each integer corresponds to by calling .get_vocabulary() on the layer.

print("1289 ---> ", int_vectorize_layer.get_vocabulary()[1289])
print("313 ---> ", int_vectorize_layer.get_vocabulary()[313])
print("Vocabulary size: {}".format(len(int_vectorize_layer.get_vocabulary())))
1289 --->  roman
313 --->  source
Vocabulary size: 10000

You are nearly ready to train your model. As a final preprocessing step, you will apply the TextVectorization layers you created earlier to the train, validation, and test dataset.

binary_train_ds = raw_train_ds.map(binary_vectorize_text)
binary_val_ds = raw_val_ds.map(binary_vectorize_text)
binary_test_ds = raw_test_ds.map(binary_vectorize_text)

int_train_ds = raw_train_ds.map(int_vectorize_text)
int_val_ds = raw_val_ds.map(int_vectorize_text)
int_test_ds = raw_test_ds.map(int_vectorize_text)

Configure the dataset for performance

These are two important methods you should use when loading data to make sure that I/O does not become blocking.

.cache() keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.

.prefetch() overlaps data preprocessing and model execution while training.

You can learn more about both methods, as well as how to cache data to disk in the data performance guide.
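
As a hedged aside, the on-disk variant of .cache() mentioned above simply takes a filename. A minimal sketch (the cache path is an illustrative assumption, and this variant is not used in the rest of the tutorial):

# Passing a filename to .cache() writes the cache to disk instead of keeping it
# in memory, which helps when the dataset does not fit in RAM.
disk_cached_ds = binary_train_ds.cache('/tmp/binary_train.cache').prefetch(
    tf.data.experimental.AUTOTUNE)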

AUTOTUNE = tf.data.experimental.AUTOTUNE

def configure_dataset(dataset):
  return dataset.cache().prefetch(buffer_size=AUTOTUNE)
binary_train_ds = configure_dataset(binary_train_ds)
binary_val_ds = configure_dataset(binary_val_ds)
binary_test_ds = configure_dataset(binary_test_ds)

int_train_ds = configure_dataset(int_train_ds)
int_val_ds = configure_dataset(int_val_ds)
int_test_ds = configure_dataset(int_test_ds)

Train the model

It's time to create your neural network. For the binary vectorized data, train a simple bag-of-words linear model:

binary_model = tf.keras.Sequential([layers.Dense(4)])
binary_model.compile(
    loss=losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer='adam',
    metrics=['accuracy'])
history = binary_model.fit(
    binary_train_ds, validation_data=binary_val_ds, epochs=10)
Epoch 1/10
200/200 [==============================] - 2s 12ms/step - loss: 1.1198 - accuracy: 0.6419 - val_loss: 0.9171 - val_accuracy: 0.7669
Epoch 2/10
200/200 [==============================] - 0s 2ms/step - loss: 0.7788 - accuracy: 0.8195 - val_loss: 0.7518 - val_accuracy: 0.7881
Epoch 3/10
200/200 [==============================] - 0s 2ms/step - loss: 0.6274 - accuracy: 0.8634 - val_loss: 0.6655 - val_accuracy: 0.8037
Epoch 4/10
200/200 [==============================] - 1s 3ms/step - loss: 0.5340 - accuracy: 0.8855 - val_loss: 0.6117 - val_accuracy: 0.8169
Epoch 5/10
200/200 [==============================] - 1s 3ms/step - loss: 0.4681 - accuracy: 0.9050 - val_loss: 0.5747 - val_accuracy: 0.8294
Epoch 6/10
200/200 [==============================] - 0s 2ms/step - loss: 0.4178 - accuracy: 0.9189 - val_loss: 0.5479 - val_accuracy: 0.8300
Epoch 7/10
200/200 [==============================] - 1s 3ms/step - loss: 0.3776 - accuracy: 0.9289 - val_loss: 0.5277 - val_accuracy: 0.8338
Epoch 8/10
200/200 [==============================] - 0s 2ms/step - loss: 0.3443 - accuracy: 0.9370 - val_loss: 0.5120 - val_accuracy: 0.8325
Epoch 9/10
200/200 [==============================] - 0s 2ms/step - loss: 0.3161 - accuracy: 0.9427 - val_loss: 0.4997 - val_accuracy: 0.8356
Epoch 10/10
200/200 [==============================] - 0s 2ms/step - loss: 0.2918 - accuracy: 0.9495 - val_loss: 0.4899 - val_accuracy: 0.8394

Next, you will use the int vectorized layer to build a 1D ConvNet.

def create_model(vocab_size, num_labels):
  model = tf.keras.Sequential([
      layers.Embedding(vocab_size, 64, mask_zero=True),
      layers.Conv1D(64, 5, padding="valid", activation="relu", strides=2),
      layers.GlobalMaxPooling1D(),
      layers.Dense(num_labels)
  ])
  return model
# vocab_size is VOCAB_SIZE + 1 since 0 is used additionally for padding.
int_model = create_model(vocab_size=VOCAB_SIZE + 1, num_labels=4)
int_model.compile(
    loss=losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer='adam',
    metrics=['accuracy'])
history = int_model.fit(int_train_ds, validation_data=int_val_ds, epochs=5)
Epoch 1/5
200/200 [==============================] - 4s 18ms/step - loss: 1.1293 - accuracy: 0.5183 - val_loss: 0.7424 - val_accuracy: 0.7100
Epoch 2/5
200/200 [==============================] - 2s 9ms/step - loss: 0.6095 - accuracy: 0.7727 - val_loss: 0.5349 - val_accuracy: 0.8062
Epoch 3/5
200/200 [==============================] - 2s 9ms/step - loss: 0.3565 - accuracy: 0.8892 - val_loss: 0.4734 - val_accuracy: 0.8125
Epoch 4/5
200/200 [==============================] - 2s 9ms/step - loss: 0.1898 - accuracy: 0.9559 - val_loss: 0.4760 - val_accuracy: 0.8194
Epoch 5/5
200/200 [==============================] - 2s 9ms/step - loss: 0.0909 - accuracy: 0.9864 - val_loss: 0.5039 - val_accuracy: 0.8144

Compare the two models:

print("Linear model on binary vectorized data:")
print(binary_model.summary())
Linear model on binary vectorized data:
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 4)                 40004     
=================================================================
Total params: 40,004
Trainable params: 40,004
Non-trainable params: 0
_________________________________________________________________
None

print("ConvNet model on int vectorized data:")
print(int_model.summary())
ConvNet model on int vectorized data:
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, None, 64)          640064    
_________________________________________________________________
conv1d (Conv1D)              (None, None, 64)          20544     
_________________________________________________________________
global_max_pooling1d (Global (None, 64)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 4)                 260       
=================================================================
Total params: 660,868
Trainable params: 660,868
Non-trainable params: 0
_________________________________________________________________
None

Evaluate both models on the test data:

binary_loss, binary_accuracy = binary_model.evaluate(binary_test_ds)
int_loss, int_accuracy = int_model.evaluate(int_test_ds)

print("Binary model accuracy: {:2.2%}".format(binary_accuracy))
print("Int model accuracy: {:2.2%}".format(int_accuracy))
250/250 [==============================] - 2s 8ms/step - loss: 0.5178 - accuracy: 0.8141
250/250 [==============================] - 2s 10ms/step - loss: 0.5194 - accuracy: 0.8091
Binary model accuracy: 81.41%
Int model accuracy: 80.91%

Export the model

In the code above, you applied the TextVectorization layer to the dataset before feeding text to the model. If you want to make your model capable of processing raw strings (for example, to simplify deploying it), you can include the TextVectorization layer inside your model. To do so, you can create a new model using the weights you just trained.

export_model = tf.keras.Sequential(
    [binary_vectorize_layer, binary_model,
     layers.Activation('sigmoid')])

export_model.compile(
    loss=losses.SparseCategoricalCrossentropy(from_logits=False),
    optimizer='adam',
    metrics=['accuracy'])

# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(raw_test_ds)
print("Accuracy: {:2.2%}".format(binary_accuracy))
250/250 [==============================] - 3s 11ms/step - loss: 0.7002 - accuracy: 0.8141
Accuracy: 81.41%

Now your model can take raw strings as input and predict a score for each label using model.predict. Define a function to find the label with the maximum score:

def get_string_labels(predicted_scores_batch):
  predicted_int_labels = tf.argmax(predicted_scores_batch, axis=1)
  predicted_labels = tf.gather(raw_train_ds.class_names, predicted_int_labels)
  return predicted_labels

Run inference on new data

inputs = [
    "how do I extract keys from a dict into a list?",  # python
    "debug public static void main(string[] args) {...}",  # java
]
predicted_scores = export_model.predict(inputs)
predicted_labels = get_string_labels(predicted_scores)
for input, label in zip(inputs, predicted_labels):
  print("Question: ", input)
  print("Predicted label: ", label.numpy())
Question:  how do I extract keys from a dict into a list?
Predicted label:  b'python'
Question:  debug public static void main(string[] args) {...}
Predicted label:  b'java'

Including the text preprocessing logic inside your model enables you to export a model for production that simplifies deployment, and reduces the potential for train/test skew.

There is a performance difference to keep in mind when choosing where to apply your TextVectorization layer. Using it outside of your model enables you to do asynchronous CPU processing and buffering of your data when training on GPU. So, if you're training your model on the GPU, you probably want to go with this option to get the best performance while developing your model, then switch to including the TextVectorization layer inside your model when you're ready to prepare for deployment.
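
To make the contrast concrete, here is a minimal sketch that only reuses names defined earlier in this tutorial (the example input string is illustrative, and the snippet is not needed for later steps):

# Option 1: vectorize inside the tf.data pipeline. The map runs on the CPU and,
# combined with cache/prefetch, overlaps with training on the GPU.
async_train_ds = raw_train_ds.map(binary_vectorize_text).cache().prefetch(
    tf.data.experimental.AUTOTUNE)

# Option 2: vectorize inside the model, as in `export_model` above. Raw strings
# go straight in, which simplifies deployment, but preprocessing then runs on
# the device executing the model.
print(export_model.predict(["how do I append to a list?"]))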

Visit this tutorial to learn more about saving models.
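
As a hedged illustration (the directory name is an assumption, and serializing a TextVectorization layer can have version-specific caveats), saving and reloading the end-to-end model might look like this:

# Save the raw-string model in the SavedModel format, then reload it and run a
# prediction. Behavior may vary across TensorFlow versions.
export_model.save('so_question_classifier')
reloaded_model = tf.keras.models.load_model('so_question_classifier')
print(reloaded_model.predict(["how do I sort a dictionary by value?"]))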

Example 2: Predict the author of Iliad translations

The following provides an example of using tf.data.TextLineDataset to load examples from text files, and tf.text to preprocess the data. In this example, you will use three different English translations of the same work, Homer's Iliad, and train a model to identify the translator given a single line of text.

Download and explore the dataset

The texts of the three translations are by:

  • William Cowper

  • Edward, Earl of Derby

  • Samuel Butler

The text files used in this tutorial have undergone some typical preprocessing tasks like removing document headers and footers, line numbers, and chapter titles. Download these lightly munged files locally.

DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']

for name in FILE_NAMES:
  text_dir = utils.get_file(name, origin=DIRECTORY_URL + name)

parent_dir = pathlib.Path(text_dir).parent
list(parent_dir.iterdir())
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/illiad/cowper.txt
819200/815980 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/illiad/derby.txt
811008/809730 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/illiad/butler.txt
811008/807992 [==============================] - 0s 0us/step

[PosixPath('/root/.keras/datasets/derby.txt'),
 PosixPath('/root/.keras/datasets/cowper.txt'),
 PosixPath('/root/.keras/datasets/butler.txt')]

Load the dataset

You will use TextLineDataset, which is designed to create a tf.data.Dataset from a text file in which each example is a line of text from the original file, whereas text_dataset_from_directory treats all contents of a file as a single example. TextLineDataset is useful for text data that is primarily line-based (for example, poetry or error logs).

Iterate through these files, loading each one into its own dataset. Each example needs to be individually labeled, so use tf.data.Dataset.map to apply a labeler function to each one. This will iterate over every example in the dataset, returning (example, label) pairs.

def labeler(example, index):
  return example, tf.cast(index, tf.int64)
labeled_data_sets = []

for i, file_name in enumerate(FILE_NAMES):
  lines_dataset = tf.data.TextLineDataset(str(parent_dir/file_name))
  labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i))
  labeled_data_sets.append(labeled_dataset)

Next, you'll combine these labeled datasets into a single dataset, and shuffle it.

BUFFER_SIZE = 50000
BATCH_SIZE = 64
VALIDATION_SIZE = 5000
all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
  all_labeled_data = all_labeled_data.concatenate(labeled_dataset)

all_labeled_data = all_labeled_data.shuffle(
    BUFFER_SIZE, reshuffle_each_iteration=False)

Print out a few examples as before. The dataset hasn't been batched yet, hence each entry in all_labeled_data corresponds to one data point:

for text, label in all_labeled_data.take(10):
  print("Sentence: ", text.numpy())
  print("Label:", label.numpy())
Sentence:  b"By the brisk breeze impell'd, and winnower's force;"
Label: 1
Sentence:  b"On this a brazen canister she plac'd,"
Label: 1
Sentence:  b'Trod in his steps, ere settled yet the dust.'
Label: 1
Sentence:  b'And stately \xc3\x86py. Their confederate powers'
Label: 0
Sentence:  b'Ill-fated parents both! nor thou to him,'
Label: 1
Sentence:  b'Let him not seek to terrify with force'
Label: 0
Sentence:  b'the tomb of Aepytus, where the people fight hand to hand; the men of'
Label: 2
Sentence:  b"And Theseus, AEgeus' more than mortal son."
Label: 1
Sentence:  b'To whom in wrath Achilles swift of foot;'
Label: 1
Sentence:  b'The lance of Diomed, behind his neck,'
Label: 1

Prepare the dataset for training

Instead of using the Keras TextVectorization layer to preprocess the text dataset, you will now use the tf.text API to standardize and tokenize the data, build a vocabulary, and use a StaticVocabularyTable to map tokens to integers to feed to the model.

While tf.text provides various tokenizers, you will use the UnicodeScriptTokenizer to tokenize the dataset. Define a function to convert the text to lowercase and tokenize it. You will use tf.data.Dataset.map to apply the tokenization to the dataset.

tokenizer = tf_text.UnicodeScriptTokenizer()
def tokenize(text, unused_label):
  lower_case = tf_text.case_fold_utf8(text)
  return tokenizer.tokenize(lower_case)
tokenized_ds = all_labeled_data.map(tokenize)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:201: batch_gather (from tensorflow.python.ops.array_ops) is deprecated and will be removed after 2017-10-25.
Instructions for updating:
`tf.batch_gather` is deprecated, please use `tf.gather` with `batch_dims=-1` instead.

You can iterate over the dataset and print out a few tokenized examples.

for text_batch in tokenized_ds.take(5):
  print("Tokens: ", text_batch.numpy())
Tokens:  [b'by' b'the' b'brisk' b'breeze' b'impell' b"'" b'd' b',' b'and'
 b'winnower' b"'" b's' b'force' b';']
Tokens:  [b'on' b'this' b'a' b'brazen' b'canister' b'she' b'plac' b"'" b'd' b',']
Tokens:  [b'trod' b'in' b'his' b'steps' b',' b'ere' b'settled' b'yet' b'the'
 b'dust' b'.']
Tokens:  [b'and' b'stately' b'\xc3\xa6py' b'.' b'their' b'confederate' b'powers']
Tokens:  [b'ill' b'-' b'fated' b'parents' b'both' b'!' b'nor' b'thou' b'to' b'him'
 b',']

Next, you will build a vocabulary by sorting tokens by frequency and keeping the top VOCAB_SIZE tokens.

tokenized_ds = configure_dataset(tokenized_ds)

vocab_dict = collections.defaultdict(lambda: 0)
for toks in tokenized_ds.as_numpy_iterator():
  for tok in toks:
    vocab_dict[tok] += 1

vocab = sorted(vocab_dict.items(), key=lambda x: x[1], reverse=True)
vocab = [token for token, count in vocab]
vocab = vocab[:VOCAB_SIZE]
vocab_size = len(vocab)
print("Vocab size: ", vocab_size)
print("First five vocab entries:", vocab[:5])
Vocab size:  10000
First five vocab entries: [b',', b'the', b'and', b"'", b'of']

To convert the tokens into integers, use the vocabulary you just built to create a StaticVocabularyTable. You will map tokens to integers in the range [2, vocab_size + 1]. As with the TextVectorization layer, 0 is reserved to denote padding and 1 is reserved to denote an out-of-vocabulary (OOV) token.

keys = vocab
values = range(2, len(vocab) + 2)  # reserve 0 for padding, 1 for OOV

init = tf.lookup.KeyValueTensorInitializer(
    keys, values, key_dtype=tf.string, value_dtype=tf.int64)

num_oov_buckets = 1
vocab_table = tf.lookup.StaticVocabularyTable(init, num_oov_buckets)

Finally, define a function to standardize, tokenize, and vectorize the dataset using the tokenizer and lookup table:

def preprocess_text(text, label):
  standardized = tf_text.case_fold_utf8(text)
  tokenized = tokenizer.tokenize(standardized)
  vectorized = vocab_table.lookup(tokenized)
  return vectorized, label

You can try this on a single example to see the output:

example_text, example_label = next(iter(all_labeled_data))
print("Sentence: ", example_text.numpy())
vectorized_text, example_label = preprocess_text(example_text, example_label)
print("Vectorized sentence: ", vectorized_text.numpy())
Sentence:  b"By the brisk breeze impell'd, and winnower's force;"
Vectorized sentence:  [  26    3 3286 2263 1959    5    9    2    4 7806    5   29  260   10]

Now run the preprocess function on the dataset using tf.data.Dataset.map.

all_encoded_data = all_labeled_data.map(preprocess_text)

Split the dataset into training and validation sets

The Keras TextVectorization layer also batches and pads the vectorized data. Padding is required because the examples inside of a batch need to be the same size and shape, but the examples in these datasets are not all the same size — each line of text has a different number of words. tf.data.Dataset supports splitting and padded-batching datasets:

train_data = all_encoded_data.skip(VALIDATION_SIZE).shuffle(BUFFER_SIZE)
validation_data = all_encoded_data.take(VALIDATION_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE)
validation_data = validation_data.padded_batch(BATCH_SIZE)

Now, validation_data and train_data are not collections of (example, label) pairs, but collections of batches. Each batch is a pair of (many examples, many labels) represented as arrays. To illustrate:

sample_text, sample_labels = next(iter(validation_data))
print("Text batch shape: ", sample_text.shape)
print("Label batch shape: ", sample_labels.shape)
print("First text example: ", sample_text[0])
print("First label example: ", sample_labels[0])
Text batch shape:  (64, 18)
Label batch shape:  (64,)
First text example:  tf.Tensor(
[  26    3 3286 2263 1959    5    9    2    4 7806    5   29  260   10
    0    0    0    0], shape=(18,), dtype=int64)
First label example:  tf.Tensor(1, shape=(), dtype=int64)

Since you use 0 for padding and 1 for out-of-vocabulary (OOV) tokens, the vocabulary size has increased by two.

vocab_size += 2

Configure the datasets for better performance as before.

train_data = configure_dataset(train_data)
validation_data = configure_dataset(validation_data)

Train the model

You can train a model on this dataset as before.

model = create_model(vocab_size=vocab_size, num_labels=3)
model.compile(
    optimizer='adam',
    loss=losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
history = model.fit(train_data, validation_data=validation_data, epochs=3)
Epoch 1/3
697/697 [==============================] - 9s 13ms/step - loss: 0.5224 - accuracy: 0.7665 - val_loss: 0.3657 - val_accuracy: 0.8460
Epoch 2/3
697/697 [==============================] - 6s 8ms/step - loss: 0.2834 - accuracy: 0.8836 - val_loss: 0.3596 - val_accuracy: 0.8488
Epoch 3/3
697/697 [==============================] - 6s 9ms/step - loss: 0.1914 - accuracy: 0.9262 - val_loss: 0.3951 - val_accuracy: 0.8476

loss, accuracy = model.evaluate(validation_data)

print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
79/79 [==============================] - 0s 2ms/step - loss: 0.3951 - accuracy: 0.8476
Loss:  0.39514628052711487
Accuracy: 84.76%

Export the model

To make your model capable of taking raw strings as input, you will create a TextVectorization layer that performs the same steps as your custom preprocessing function. Since you have already trained a vocabulary, you can use set_vocabulary instead of adapt (which would learn a new vocabulary).

preprocess_layer = TextVectorization(
    max_tokens=vocab_size,
    standardize=tf_text.case_fold_utf8,
    split=tokenizer.tokenize,
    output_mode='int',
    output_sequence_length=MAX_SEQUENCE_LENGTH)
preprocess_layer.set_vocabulary(vocab)
export_model = tf.keras.Sequential(
    [preprocess_layer, model,
     layers.Activation('sigmoid')])

export_model.compile(
    loss=losses.SparseCategoricalCrossentropy(from_logits=False),
    optimizer='adam',
    metrics=['accuracy'])
# Create a test dataset of raw strings
test_ds = all_labeled_data.take(VALIDATION_SIZE).batch(BATCH_SIZE)
test_ds = configure_dataset(test_ds)
loss, accuracy = export_model.evaluate(test_ds)
print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
79/79 [==============================] - 1s 10ms/step - loss: 0.5304 - accuracy: 0.7978
Loss:  0.5304141640663147
Accuracy: 79.78%

The loss and accuracy of the exported model on the raw validation text are close to, but not identical to, those of the original model on the encoded validation set. The small gap arises because the two preprocessing paths are not perfectly equivalent (for example, the padded sequence lengths and out-of-vocabulary handling differ slightly).
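
To see where the two paths agree and differ, here is a quick, hedged check (not part of the original pipeline; it only reuses objects defined above) that compares the ids produced for a single raw example:

# Compare the custom pipeline's ids with the TextVectorization layer's ids for
# one example. In-vocabulary tokens should line up; padding and OOV handling
# account for the small metric gap above.
example_text, example_label = next(iter(all_labeled_data))
manual_ids, _ = preprocess_text(example_text, example_label)
layer_ids = preprocess_layer(tf.expand_dims(example_text, -1))[0]
print("Custom pipeline:   ", manual_ids.numpy())
print("TextVectorization: ", layer_ids.numpy()[:len(manual_ids)])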

Run inference on new data

inputs = [
    "Join'd to th' Ionians with their flowing robes,",  # Label: 1
    "the allies, and his armour flashed about him so that he seemed to all",  # Label: 2
    "And with loud clangor of his arms he fell.",  # Label: 0
]
predicted_scores = export_model.predict(inputs)
predicted_labels = tf.argmax(predicted_scores, axis=1)
for input, label in zip(inputs, predicted_labels):
  print("Sentence: ", input)
  print("Predicted label: ", label.numpy())
Sentence:  Join'd to th' Ionians with their flowing robes,
Predicted label:  1
Sentence:  the allies, and his armour flashed about him so that he seemed to all
Predicted label:  2
Sentence:  And with loud clangor of his arms he fell.
Predicted label:  0

Downloading more datasets using TensorFlow Datasets (TFDS)

You can download many more datasets from TensorFlow Datasets. As an example, you will download the IMDB Large Movie Review dataset, and use it to train a model for sentiment classification.

train_ds = tfds.load(
    'imdb_reviews',
    split='train',
    batch_size=BATCH_SIZE,
    shuffle_files=True,
    as_supervised=True)
Downloading and preparing dataset imdb_reviews/plain_text/1.0.0 (download: 80.23 MiB, generated: Unknown size, total: 80.23 MiB) to /root/tensorflow_datasets/imdb_reviews/plain_text/1.0.0...

Shuffling and writing examples to /root/tensorflow_datasets/imdb_reviews/plain_text/1.0.0.incompleteBKTU7R/imdb_reviews-train.tfrecord
Shuffling and writing examples to /root/tensorflow_datasets/imdb_reviews/plain_text/1.0.0.incompleteBKTU7R/imdb_reviews-test.tfrecord
Shuffling and writing examples to /root/tensorflow_datasets/imdb_reviews/plain_text/1.0.0.incompleteBKTU7R/imdb_reviews-unsupervised.tfrecord
Dataset imdb_reviews downloaded and prepared to /root/tensorflow_datasets/imdb_reviews/plain_text/1.0.0. Subsequent calls will reuse this data.

val_ds = tfds.load(
    'imdb_reviews',
    split='train',
    batch_size=BATCH_SIZE,
    shuffle_files=True,
    as_supervised=True)

Print a few examples.

for review_batch, label_batch in val_ds.take(1):
  for i in range(5):
    print("Review: ", review_batch[i].numpy())
    print("Label: ", label_batch[i].numpy())
Review:  b"This was an absolutely terrible movie. Don't be lured in by Christopher Walken or Michael Ironside. Both are great actors, but this must simply be their worst role in history. Even their great acting could not redeem this movie's ridiculous storyline. This movie is an early nineties US propaganda piece. The most pathetic scenes were those when the Columbian rebels were making their cases for revolutions. Maria Conchita Alonso appeared phony, and her pseudo-love affair with Walken was nothing but a pathetic emotional plug in a movie that was devoid of any real meaning. I am disappointed that there are movies like this, ruining actor's like Christopher Walken's good name. I could barely sit through it."
Label:  0
Review:  b'I have been known to fall asleep during films, but this is usually due to a combination of things including, really tired, being warm and comfortable on the sette and having just eaten a lot. However on this occasion I fell asleep because the film was rubbish. The plot development was constant. Constantly slow and boring. Things seemed to happen, but with no explanation of what was causing them or why. I admit, I may have missed part of the film, but i watched the majority of it and everything just seemed to happen of its own accord without any real concern for anything else. I cant recommend this film at all.'
Label:  0
Review:  b'Mann photographs the Alberta Rocky Mountains in a superb fashion, and Jimmy Stewart and Walter Brennan give enjoyable performances as they always seem to do. <br /><br />But come on Hollywood - a Mountie telling the people of Dawson City, Yukon to elect themselves a marshal (yes a marshal!) and to enforce the law themselves, then gunfighters battling it out on the streets for control of the town? <br /><br />Nothing even remotely resembling that happened on the Canadian side of the border during the Klondike gold rush. Mr. Mann and company appear to have mistaken Dawson City for Deadwood, the Canadian North for the American Wild West.<br /><br />Canadian viewers be prepared for a Reefer Madness type of enjoyable howl with this ludicrous plot, or, to shake your head in disgust.'
Label:  0
Review:  b'This is the kind of film for a snowy Sunday afternoon when the rest of the world can go ahead with its own business as you descend into a big arm-chair and mellow for a couple of hours. Wonderful performances from Cher and Nicolas Cage (as always) gently row the plot along. There are no rapids to cross, no dangerous waters, just a warm and witty paddle through New York life at its best. A family film in every sense and one that deserves the praise it received.'
Label:  1
Review:  b'As others have mentioned, all the women that go nude in this film are mostly absolutely gorgeous. The plot very ably shows the hypocrisy of the female libido. When men are around they want to be pursued, but when no "men" are around, they become the pursuers of a 14 year old boy. And the boy becomes a man really fast (we should all be so lucky at this age!). He then gets up the courage to pursue his true love.'
Label:  1

You can now preprocess the data and train a model as before.

Prepare the dataset for training

vectorize_layer = TextVectorization(
    max_tokens=VOCAB_SIZE,
    output_mode='int',
    output_sequence_length=MAX_SEQUENCE_LENGTH)

# Make a text-only dataset (without labels), then call adapt
train_text = train_ds.map(lambda text, labels: text)
vectorize_layer.adapt(train_text)
def vectorize_text(text, label):
  text = tf.expand_dims(text, -1)
  return vectorize_layer(text), label
train_ds = train_ds.map(vectorize_text)
val_ds = val_ds.map(vectorize_text)
# Configure datasets for performance as before
train_ds = configure_dataset(train_ds)
val_ds = configure_dataset(val_ds)

Train the model

model = create_model(vocab_size=VOCAB_SIZE + 1, num_labels=1)
model.summary()
Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_2 (Embedding)      (None, None, 64)          640064    
_________________________________________________________________
conv1d_2 (Conv1D)            (None, None, 64)          20544     
_________________________________________________________________
global_max_pooling1d_2 (Glob (None, 64)                0         
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 65        
=================================================================
Total params: 660,673
Trainable params: 660,673
Non-trainable params: 0
_________________________________________________________________

model.compile(
    loss=losses.BinaryCrossentropy(from_logits=True),
    optimizer='adam',
    metrics=['accuracy'])
history = model.fit(train_ds, validation_data=val_ds, epochs=3)
Epoch 1/3
391/391 [==============================] - 9s 23ms/step - loss: 0.4976 - accuracy: 0.7044 - val_loss: 0.2976 - val_accuracy: 0.8832
Epoch 2/3
391/391 [==============================] - 5s 14ms/step - loss: 0.2748 - accuracy: 0.8801 - val_loss: 0.1696 - val_accuracy: 0.9458
Epoch 3/3
391/391 [==============================] - 6s 14ms/step - loss: 0.1697 - accuracy: 0.9351 - val_loss: 0.0956 - val_accuracy: 0.9770

loss, accuracy = model.evaluate(val_ds)

print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
391/391 [==============================] - 1s 3ms/step - loss: 0.0956 - accuracy: 0.9770
Loss:  0.09558165073394775
Accuracy: 97.70%

Export the model

export_model = tf.keras.Sequential(
    [vectorize_layer, model,
     layers.Activation('sigmoid')])

export_model.compile(
    loss=losses.BinaryCrossentropy(from_logits=False),
    optimizer='adam',
    metrics=['accuracy'])
# 0 --> negative review
# 1 --> positive review
inputs = [
    "This is a fantastic movie.",
    "This is a bad movie.",
    "This movie was so bad that it was good.",
    "I will never say yes to watching this movie.",
]
predicted_scores = export_model.predict(inputs)
predicted_labels = [int(round(x[0])) for x in predicted_scores]
for input, label in zip(inputs, predicted_labels):
  print("Review: ", input)
  print("Predicted label: ", label)
Review:  This is a fantastic movie.
Predicted label:  1
Review:  This is a bad movie.
Predicted label:  0
Review:  This movie was so bad that it was good.
Predicted label:  0
Review:  I will never say yes to watching this movie.
Predicted label:  1

Conclusion

This tutorial demonstrated several ways to load and preprocess text. As a next step, you can explore additional tutorials on the website, or download new datasets from TensorFlow Datasets.