
Text generation with an RNN


This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks. Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.

This tutorial includes runnable code implemented using tf.keras and eager execution. The following is the sample output when the model in this tutorial was trained for 30 epochs and started with the prompt "Q":

QUEENE:
I had thought thou hadst a Roman; for the oracle,
Thus by All bids the man against the word,
Which are so weak of care, by old care done;
Your children were in your holy love,
And the precipitation through the bleeding throne.

BISHOP OF ELY:
Marry, and will, my lord, to weep in such a one were prettiest;
Yet now I was adopted heir
Of the world's lamentable day,
To watch the next way with his father with his face?

ESCALUS:
The cause why then we are all resolved more sons.

VOLUMNIA:
O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,
And love and pale as any will to that word.

QUEEN ELIZABETH:
But how long have I heard the soul for this world,
And show his hands of life be proved to stand.

PETRUCHIO:
I say he look'd on, if I must be content
To stay him from the fatal of our country's bliss.
His lordship pluck'd from this sentence then for prey,
And then let us twain, being the moon,
were she such a case as fills m

While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:

  • The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.

  • The structure of the output resembles a play: blocks of text generally begin with a speaker name, in all capital letters, similar to the dataset.

  • As demonstrated below, the model is trained on small batches of text (100 characters each), and it is still able to generate a longer sequence of text with coherent structure.

Setup

Import TensorFlow and other libraries

import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

import numpy as np
import os
import time

Download the Shakespeare dataset

Change the following line to run this code on your own data.

path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt
1122304/1115394 [==============================] - 0s 0us/step
1130496/1115394 [==============================] - 0s 0us/step

Read the data

First, look at the text:

# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print(f'Length of text: {len(text)} characters')
Length of text: 1115394 characters
# Take a look at the first 250 characters in text
print(text[:250])
First Citizen:
Before we proceed any further, hear me speak.

All:
Speak, speak.

First Citizen:
You are all resolved rather to die than to famish?

All:
Resolved. resolved.

First Citizen:
First, you know Caius Marcius is chief enemy to the people.
# The unique characters in the file
vocab = sorted(set(text))
print(f'{len(vocab)} unique characters')
65 unique characters

Process the text

Vectorize the text

Before training, you need to convert the strings to a numerical representation.

The preprocessing.StringLookup layer can convert each character into a numeric ID. It just needs the text to be split into tokens first.

example_texts = ['abcdefg', 'xyz']

chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8')
chars
<tf.RaggedTensor [[b'a', b'b', b'c', b'd', b'e', b'f', b'g'], [b'x', b'y', b'z']]>

Now create the preprocessing.StringLookup layer:

ids_from_chars = preprocessing.StringLookup(
    vocabulary=list(vocab), mask_token=None)

It converts from tokens to character IDs:

ids = ids_from_chars(chars)
ids
<tf.RaggedTensor [[40, 41, 42, 43, 44, 45, 46], [63, 64, 65]]>

Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use preprocessing.StringLookup(..., invert=True).

chars_from_ids = tf.keras.layers.experimental.preprocessing.StringLookup(
    vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None)

This layer recovers the characters from the vectors of IDs, and returns them as a tf.RaggedTensor of characters:

chars = chars_from_ids(ids)
chars
<tf.RaggedTensor [[b'a', b'b', b'c', b'd', b'e', b'f', b'g'], [b'x', b'y', b'z']]>

You can use tf.strings.reduce_join to join the characters back into strings.

tf.strings.reduce_join(chars, axis=-1).numpy()
array([b'abcdefg', b'xyz'], dtype=object)
def text_from_ids(ids):
  return tf.strings.reduce_join(chars_from_ids(ids), axis=-1)

The prediction task

Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output: the following character at each time step.

Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed up to this moment, what is the next character?

Create training examples and targets

Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.

For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.

So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".

To do this, first use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.

all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8'))
all_ids
<tf.Tensor: shape=(1115394,), dtype=int64, numpy=array([19, 48, 57, ..., 46,  9,  1])>
ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids)
for ids in ids_dataset.take(10):
    print(chars_from_ids(ids).numpy().decode('utf-8'))
F
i
r
s
t
 
C
i
t
i
seq_length = 100
examples_per_epoch = len(text)//(seq_length+1)

The batch method lets you easily convert these individual characters into sequences of the desired size.

sequences = ids_dataset.batch(seq_length+1, drop_remainder=True)

for seq in sequences.take(1):
  print(chars_from_ids(seq))
tf.Tensor(
[b'F' b'i' b'r' b's' b't' b' ' b'C' b'i' b't' b'i' b'z' b'e' b'n' b':'
 b'\n' b'B' b'e' b'f' b'o' b'r' b'e' b' ' b'w' b'e' b' ' b'p' b'r' b'o'
 b'c' b'e' b'e' b'd' b' ' b'a' b'n' b'y' b' ' b'f' b'u' b'r' b't' b'h'
 b'e' b'r' b',' b' ' b'h' b'e' b'a' b'r' b' ' b'm' b'e' b' ' b's' b'p'
 b'e' b'a' b'k' b'.' b'\n' b'\n' b'A' b'l' b'l' b':' b'\n' b'S' b'p' b'e'
 b'a' b'k' b',' b' ' b's' b'p' b'e' b'a' b'k' b'.' b'\n' b'\n' b'F' b'i'
 b'r' b's' b't' b' ' b'C' b'i' b't' b'i' b'z' b'e' b'n' b':' b'\n' b'Y'
 b'o' b'u' b' '], shape=(101,), dtype=string)

It's easier to see what this is doing if you join the tokens back into strings:

for seq in sequences.take(5):
  print(text_from_ids(seq).numpy())
b'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
b'are all resolved rather to die than to famish?\n\nAll:\nResolved. resolved.\n\nFirst Citizen:\nFirst, you k'
b"now Caius Marcius is chief enemy to the people.\n\nAll:\nWe know't, we know't.\n\nFirst Citizen:\nLet us ki"
b"ll him, and we'll have corn at our own price.\nIs't a verdict?\n\nAll:\nNo more talking on't; let it be d"
b'one: away, away!\n\nSecond Citizen:\nOne word, good citizens.\n\nFirst Citizen:\nWe are accounted poor citi'

For training you'll need a dataset of (input, label) pairs, where input and label are sequences. At each time step the input is the current character and the label is the next character.

Here's a function that takes a sequence as input, duplicates it, and shifts it to align the input and label for each time step:

def split_input_target(sequence):
    input_text = sequence[:-1]
    target_text = sequence[1:]
    return input_text, target_text
split_input_target(list("Tensorflow"))
(['T', 'e', 'n', 's', 'o', 'r', 'f', 'l', 'o'],
 ['e', 'n', 's', 'o', 'r', 'f', 'l', 'o', 'w'])
dataset = sequences.map(split_input_target)
for input_example, target_example in dataset.take(1):
    print("Input :", text_from_ids(input_example).numpy())
    print("Target:", text_from_ids(target_example).numpy())
Input : b'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou'
Target: b'irst Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
2021-08-11 18:24:54.893532: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)

Create training batches

You used tf.data to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches.

# Batch size
BATCH_SIZE = 64

# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000

dataset = (
    dataset
    .shuffle(BUFFER_SIZE)
    .batch(BATCH_SIZE, drop_remainder=True)
    .prefetch(tf.data.experimental.AUTOTUNE))

dataset
<PrefetchDataset shapes: ((64, 100), (64, 100)), types: (tf.int64, tf.int64)>

Build the model

This section defines the model as a keras.Model subclass (for details see Making new Layers and Models via subclassing).

This model has three layers:

  • tf.keras.layers.Embedding: the input layer. A trainable lookup table that will map each character ID to a vector with embedding_dim dimensions;
  • tf.keras.layers.GRU: a type of RNN with size units=rnn_units (you can also use an LSTM layer here);
  • tf.keras.layers.Dense: the output layer, with vocab_size outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihoods of each character according to the model.
# Length of the vocabulary in chars
vocab_size = len(vocab)

# The embedding dimension
embedding_dim = 256

# Number of RNN units
rnn_units = 1024
class MyModel(tf.keras.Model):
  def __init__(self, vocab_size, embedding_dim, rnn_units):
    super().__init__(self)
    self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
    self.gru = tf.keras.layers.GRU(rnn_units,
                                   return_sequences=True,
                                   return_state=True)
    self.dense = tf.keras.layers.Dense(vocab_size)

  def call(self, inputs, states=None, return_state=False, training=False):
    x = inputs
    x = self.embedding(x, training=training)
    if states is None:
      states = self.gru.get_initial_state(x)
    x, states = self.gru(x, initial_state=states, training=training)
    x = self.dense(x, training=training)

    if return_state:
      return x, states
    else:
      return x
model = MyModel(
    # Be sure the vocabulary size matches the `StringLookup` layers.
    vocab_size=len(ids_from_chars.get_vocabulary()),
    embedding_dim=embedding_dim,
    rnn_units=rnn_units)

For each character the model looks up the embedding, runs the GRU one time step with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:

A drawing of the data passing through the model.
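
To make this per-character flow concrete, here is a minimal sketch (not part of the original notebook, and assuming the `model` and `ids_from_chars` objects defined earlier) that runs a single character through the model and keeps the returned GRU state so a later call can continue from where this one left off:

# Hypothetical single-step example: one character ID in, logits for the next character out.
input_ids = ids_from_chars(tf.strings.unicode_split(['R'], 'UTF-8')).to_tensor()  # shape (1, 1)
logits, states = model(input_ids, return_state=True)
print(logits.shape)  # (1, 1, 66): logits over the vocabulary for the next character
# Passing `states` back in advances the RNN from where the previous call stopped.
logits, states = model(input_ids, states=states, return_state=True)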

Try the model

Now run the model to see that it behaves as expected.

First check the shape of the output:

for input_example_batch, target_example_batch in dataset.take(1):
    example_batch_predictions = model(input_example_batch)
    print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
2021-08-11 18:24:57.345541: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8100
(64, 100, 66) # (batch_size, sequence_length, vocab_size)

In the above example the sequence length of the input is 100, but the model can be run on inputs of any length:

model.summary()
Model: "my_model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        multiple                  16896     
_________________________________________________________________
gru (GRU)                    multiple                  3938304   
_________________________________________________________________
dense (Dense)                multiple                  67650     
=================================================================
Total params: 4,022,850
Trainable params: 4,022,850
Non-trainable params: 0
_________________________________________________________________
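
As a quick check of the any-length claim above, the following minimal sketch (not part of the original notebook, and assuming `model` and `input_example_batch` from the cells above) runs the same model on a shorter slice of the batch:

short_batch = input_example_batch[:1, :10]  # one example, ten characters
print(model(short_batch).shape)  # (1, 10, 66): one logit vector per input character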

To get actual predictions from the model you need to sample from the output distribution, to obtain actual character indices. This distribution is defined by the logits over the character vocabulary.

Try it for the first example in the batch:

sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy()

This gives us, at each time step, a prediction of the next character index:

sampled_indices
array([41, 38,  9, 28,  6, 50, 20, 59, 44,  5, 51, 19, 40, 61, 13, 18, 32,
        0, 13,  0, 27, 37, 10, 46, 38, 40, 28, 22, 14, 44, 35, 22, 44, 16,
       17,  8, 55, 17, 39, 47, 47, 23,  3, 32, 30, 15, 10, 32,  8,  8,  3,
       47, 40, 38, 13,  5, 57, 12, 39,  5,  6, 14, 30, 12, 63, 51, 10, 14,
       52,  1, 47, 15, 48, 28, 38, 16, 22,  7, 59, 45, 44, 62, 23, 32, 36,
       40, 28, 65, 60,  7,  8,  0, 19, 28, 32, 62, 61, 20, 64,  6])

Decode these to see the text predicted by this untrained model:

print("Input:\n", text_from_ids(input_example_batch[0]).numpy())
print()
print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy())
Input:
 b'ous, and not valiant, you have shamed me\nIn your condemned seconds.\n\nCOMINIUS:\nIf I should tell thee'

Next Char Predictions:
 b"bY.O'kGte&lFav?ES[UNK]?[UNK]NX3gYaOIAeVIeCD-pDZhhJ!SQB3S--!haY?&r;Z&'AQ;xl3Am\nhBiOYCI,tfewJSWaOzu,-[UNK]FOSwvGy'"

Train the model

At this point the problem can be treated as a standard classification problem. Given the previous RNN state and the input for this time step, predict the class of the next character.

Attach an optimizer and a loss function

The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions.

Because your model returns logits, you need to set the from_logits flag.

loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
mean_loss = example_batch_loss.numpy().mean()
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("Mean loss:        ", mean_loss)
Prediction shape:  (64, 100, 66)  # (batch_size, sequence_length, vocab_size)
Mean loss:         4.191435

A newly initialized model shouldn't be too sure of itself; the output logits should all have similar magnitudes. To confirm this, you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized:

tf.exp(mean_loss).numpy()
66.11759

Configure the training procedure using the tf.keras.Model.compile method. Use tf.keras.optimizers.Adam with default arguments and the loss function.

model.compile(optimizer='adam', loss=loss)

Configure checkpoints

Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training:

# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")

checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_prefix,
    save_weights_only=True)

Execute the training

To keep training time reasonable, use 20 epochs to train the model. In Colab, set the runtime to GPU for faster training.

EPOCHS = 20
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
Epoch 1/20
172/172 [==============================] - 6s 23ms/step - loss: 2.7361
Epoch 2/20
172/172 [==============================] - 5s 23ms/step - loss: 2.0067
Epoch 3/20
172/172 [==============================] - 5s 23ms/step - loss: 1.7364
Epoch 4/20
172/172 [==============================] - 5s 23ms/step - loss: 1.5729
Epoch 5/20
172/172 [==============================] - 5s 23ms/step - loss: 1.4700
Epoch 6/20
172/172 [==============================] - 5s 23ms/step - loss: 1.4000
Epoch 7/20
172/172 [==============================] - 5s 23ms/step - loss: 1.3465
Epoch 8/20
172/172 [==============================] - 5s 23ms/step - loss: 1.3007
Epoch 9/20
172/172 [==============================] - 5s 23ms/step - loss: 1.2610
Epoch 10/20
172/172 [==============================] - 5s 23ms/step - loss: 1.2223
Epoch 11/20
172/172 [==============================] - 5s 23ms/step - loss: 1.1842
Epoch 12/20
172/172 [==============================] - 5s 23ms/step - loss: 1.1460
Epoch 13/20
172/172 [==============================] - 5s 23ms/step - loss: 1.1055
Epoch 14/20
172/172 [==============================] - 5s 23ms/step - loss: 1.0626
Epoch 15/20
172/172 [==============================] - 5s 24ms/step - loss: 1.0170
Epoch 16/20
172/172 [==============================] - 5s 23ms/step - loss: 0.9692
Epoch 17/20
172/172 [==============================] - 5s 23ms/step - loss: 0.9181
Epoch 18/20
172/172 [==============================] - 5s 23ms/step - loss: 0.8670
Epoch 19/20
172/172 [==============================] - 5s 23ms/step - loss: 0.8143
Epoch 20/20
172/172 [==============================] - 5s 23ms/step - loss: 0.7647

Generate text

The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.

To generate text, the model's output is fed back into its input.

Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text.

The following makes a single step prediction:

class OneStep(tf.keras.Model):
  def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0):
    super().__init__()
    self.temperature = temperature
    self.model = model
    self.chars_from_ids = chars_from_ids
    self.ids_from_chars = ids_from_chars

    # Create a mask to prevent "[UNK]" from being generated.
    skip_ids = self.ids_from_chars(['[UNK]'])[:, None]
    sparse_mask = tf.SparseTensor(
        # Put a -inf at each bad index.
        values=[-float('inf')]*len(skip_ids),
        indices=skip_ids,
        # Match the shape to the vocabulary
        dense_shape=[len(ids_from_chars.get_vocabulary())])
    self.prediction_mask = tf.sparse.to_dense(sparse_mask)

  @tf.function
  def generate_one_step(self, inputs, states=None):
    # Convert strings to token IDs.
    input_chars = tf.strings.unicode_split(inputs, 'UTF-8')
    input_ids = self.ids_from_chars(input_chars).to_tensor()

    # Run the model.
    # predicted_logits.shape is [batch, char, next_char_logits]
    predicted_logits, states = self.model(inputs=input_ids, states=states,
                                          return_state=True)
    # Only use the last prediction.
    predicted_logits = predicted_logits[:, -1, :]
    predicted_logits = predicted_logits/self.temperature
    # Apply the prediction mask: prevent "[UNK]" from being generated.
    predicted_logits = predicted_logits + self.prediction_mask

    # Sample the output logits to generate token IDs.
    predicted_ids = tf.random.categorical(predicted_logits, num_samples=1)
    predicted_ids = tf.squeeze(predicted_ids, axis=-1)

    # Convert from token ids to characters
    predicted_chars = self.chars_from_ids(predicted_ids)

    # Return the characters and model state.
    return predicted_chars, states
one_step_model = OneStep(model, chars_from_ids, ids_from_chars)

Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize and make paragraphs, and it imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.

start = time.time()
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]

for n in range(1000):
  next_char, states = one_step_model.generate_one_step(next_char, states=states)
  result.append(next_char)

result = tf.strings.join(result)
end = time.time()
print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80)
print('\nRun time:', end - start)
ROMEO:
It is a very example
Here done to Elcompash of her griefs, wherein Choise,
Without my enemy; you are o'er this scene
Thoughts that sown'd off to have a sufficient mon
hath made it on the people, break our case:
Who inciddst the hour, think you be gone?

MENENIUS:
For what I see, I doubt there was more periol to their friends?

GLOUCESTER:
Have you not hear? the senate pass down forth,
Countenance, prefermants, devised in courtezage,
Of it at punishes, and cry batter King Henry's use!

JULIET:
If they did I but last; I say to thir,
And fly: my vooking in those thing, it brings;
After an act, may stand in my foe instant?

FRIAR LAURENCE:
So much upon the serving-creature.

Second Katharinan,
Save you this young father, news, will kiss
your honour to a covert fance to Farcius' blaze is expiled
till choose and call the foem of cheer himself.
Not so deliver, for this night shall be a cut-out
Yourselfs; as the flowers cannot no: what he pleg-son,
As the pay to her heavy, marches?

MARCIUS:
 

________________________________________________________________________________

Run time: 2.3087921142578125

The easiest thing you can do to improve the results is to train the model for longer (try EPOCHS = 30).

You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions (a short sketch follows below).
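
For example, here is a minimal sketch (not part of the original notebook) of adjusting the temperature: it reuses the OneStep class defined above with a lower temperature, which makes sampling more conservative (and the text more repetitive), while a higher value makes it more random. The value 0.5 is only an illustration.

# Hypothetical example: same generation loop as above, but with a lower sampling temperature.
low_temperature_model = OneStep(model, chars_from_ids, ids_from_chars, temperature=0.5)

states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]

for n in range(100):
  next_char, states = low_temperature_model.generate_one_step(next_char, states=states)
  result.append(next_char)

print(tf.strings.join(result)[0].numpy().decode('utf-8'))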

If you want the model to generate text faster, the easiest thing you can do is to batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above.

start = time.time()
states = None
next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:'])
result = [next_char]

for n in range(1000):
  next_char, states = one_step_model.generate_one_step(next_char, states=states)
  result.append(next_char)

result = tf.strings.join(result)
end = time.time()
print(result, '\n\n' + '_'*80)
print('\nRun time:', end - start)
tf.Tensor(
[b"ROMEO:\nIt is my daughter, whom thou hast, no, no, what many which ho\ncaused for fear. Then?\n\nFirst Citizen:\nCousin of Buckingham, and therefore wast thou thin,\nBy Jove her thunder, not on him.\n\nFLORIZEL:\nMy lord,\nYou never spow him so perform her life;\nBut had thought the wanted counsel on the world,\nThe baid of old tale from him by foes,\nLike all forms, he doth not the duke well for herself.\nThe sons and fam is strucken murder;\nAnd bless he shall not be long.\nWhereto he better nothing, by the east,\nWas factionary against Exeter!\n\nHERMION:\nWhere is your pain? hings in a soldier.\n\nShepherd:\n'Tis south; I will not go by this; he loves' me\nThough noble Contro's shump.\n\nAEdile:\nHe's sudden; tood my friends are too sun\nPat on him an embastiest York by day, my liege,\nProfesses to follow Marcius.\n\nCOMINIUS:\nIt was come to us!\nBut, our queen, those weeping pay the formers any other;\nAnon even he should seem to dry.\n\nHESS OF YORK:\nMy lord, he both be so farther,\nBut 'tis as banish'd from the mind of "
 b"ROMEO:\nIt is spoke for triumphant garly, fis\nFresh out my daughter and the deed-joy\njeasons that I was lost innation and eyes from the\nthy glims.\n\nFROTH:\nHere comes this way, and sellow'd for and\nspeechange; cry 'D; inchance his down and with the or-house,\nWhere indeed the sedicing scholarging disdains\nDrows you.\n\nAlipan:\nWhere's Clifford; we will confess too,\nOr, by this song, nor pray now what I did\nHer uncle Rivers stands you to take away;\nBut in the like known thereof discresed at his\nheart wept humble as a pitch'd any right.\nWhereto I, 'Hill Henry, and you, my lord,\nKnow't again by Angelo, the head maid\nFalse to another scorns thus daring for\nAn angry ay angry. Veriling you\nThan which you are heart, gave war nor none within;\nTell he that first wretched to her dower, though it begin.\n\nDUKE VINCENTIO:\nWhere is Aufidius sister? how much factos loath\nto pride: King Richard in Bianco's singing.\n\nMARIANA:\nWhy art thou harst: for, to retire yourself\nTo County many thousand humble stains.\nSawnt"
 b"ROMEO:\nSatisfy!\nThink'st thou hast thou out of true applace: throw away\nThe rather for incapab-torment.\n\nGLoUCESTER:\nSo Gaunt in Eye wrong'd, belike.\n\nQUEEN:\n'Tis little friend, thou couldst know; mencle, Clifford.\nDid ut up the flesh; the sons and blubter\nTannot countervail the conquest of thyself.\nBut how must be a king, as hideous ass\nShould you go's assural trembling adjer!\nWhy shall deserve you but assuar their\ncoats of such persons to be your castle.\nCondemning soul to him and heir more than\nHer sups, moresely three women\none and a hongy: you have like his curediar,\nAnd chase him in the infirmine breachs.\n\nKING EDWARD IV:\nCansault thou son? She's a word.\n\nSICINIUS:\nThis shows assurance how the house of love\nLidst both our subjects as the senate's death;\nSoce thou consent to bitter, by the way to life\nBut my entity to give I agree:\nHield!\n\nBUCKINGHAM:\nMy lord, this last out with our complexions\nCherish rooted distapsups and call folls.\n\nLADY ANNE:\nWere he that wonders to us all the chan"
 b"ROMEO:\nI pray you, gentlemen.\n\nJULIET:\nMy lord, gath nothing in Padua for a\npiece of cut as a horseman I please;\nI'll follow what we speak again of love,\nIs broke an oath from false for me.\n\nGLOUCESTER:\nWell, jost ignorant of despite of my grief;\nAnd thus I pity three thou wast born.\n\nQUEEN ELIZABETH:\nWhy have you not done, Henry's coming smiles,\n'Tis like one inferious vengeance condemn'd\nBy Heavens and noblence foldying\nto her honour. what he comes long eate?\n\nHASTINGS:\nGo, get thee even to thus, that flies;\nI would adont the royally out of dist;\nAnd thus I turn and much since that make fair\nSun with such finger in quiet wnat, and Sariant\nShould have been either queen.\n\nISABELLA:\nPetruchio! Who is is the supper venge.\n\nSecond Murderer:\nO looken soul!\n\nA Forders, Earl of Clarence,--here is coming him.\n\nHORTENSIO:\nSay, when you saw you shall bectwary.\n\nCOMINIUS:\nYou have fought it the elder, the\nson: xishonour here the soretire passing slaves.\nAnd in his tidly I brought my good deed,\nAre nev"
 b"ROMEO:\nVillanted the blood reign purpose\nnot more and she would quench it. Should Such a\npentinus lipt from worth of charity.\nHow can we fing it, like a drum of me?\nSpeak, tending, O, how can I have seen your\nsaids, lest the hirs weeping earth, one shall\nIn such as you to bitter, but we east for King of\nThe pretties of his officer: yet your bey,\nThe curn'd deputy nexty. Tybalt, that's\nunfortunage, take this poor delivers to a friend,\nAnd grief hath kept in sign of knotking note.\nWelcome! Saint yet Murderer: to this scoldif cares\nThat I have not in my desire.\nNay, what will you such things prevent it, hands.\n\nKING RICHARD II:\nHow now, by thee!\n\nCLAUDIO:\nNo, good father.\n\nDUKE VINCENTIO:\nHow now, is gone to Raptatur, add, took fortune between\nmy life for time put forth parture most straitle queen's.\n\nHENRY BOLINGBROKE:\nUrge in any, unhappy by this news,\nWhilst thou lies She not remain, as if\nher fortune is not so rise report the queen?\n\nGLOUCESTER:\nStand up, Oncring me?\n\nLADYARAN:\n\nHERMIONE:\nN"], shape=(5,), dtype=string) 

________________________________________________________________________________

Run time: 2.1990060806274414

Export the generator

This single-step model can easily be saved and restored, allowing you to use it anywhere a tf.saved_model is accepted.

tf.saved_model.save(one_step_model, 'one_step')
one_step_reloaded = tf.saved_model.load('one_step')
WARNING:tensorflow:Skipping full serialization of Keras layer <__main__.OneStep object at 0x7fdfad429d90>, because it is not built.
2021-08-11 18:26:53.785069: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Found untraced functions such as gru_cell_layer_call_fn, gru_cell_layer_call_and_return_conditional_losses, gru_cell_layer_call_fn, gru_cell_layer_call_and_return_conditional_losses, gru_cell_layer_call_and_return_conditional_losses while saving (showing 5 of 5). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: one_step/assets
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]

for n in range(100):
  next_char, states = one_step_reloaded.generate_one_step(next_char, states=states)
  result.append(next_char)

print(tf.strings.join(result)[0].numpy().decode("utf-8"))
ROMEO:
Be a booqued banish'd: sly us or old
Yeed Margaret: and therefore follow'd there?

BUCKINGHAM:
Why,

Advanced: Customized training

The above training procedure is simple, but does not give you much control. It uses teacher forcing, which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.

So now that you've seen how to run the model manually, next you'll implement the training loop. This gives a starting point if, for example, you want to implement curriculum learning to help stabilize the model's open-loop output.

The most important part of a custom training loop is the train step function.

Use tf.GradientTape to track the gradients. You can learn more about this approach by reading the eager execution guide.

The basic procedure is:

  1. Execute the model and calculate the loss under a tf.GradientTape.
  2. Calculate the updates and apply them to the model using the optimizer.
class CustomTraining(MyModel):
  @tf.function
  def train_step(self, inputs):
      inputs, labels = inputs
      with tf.GradientTape() as tape:
          predictions = self(inputs, training=True)
          loss = self.loss(labels, predictions)
      grads = tape.gradient(loss, self.trainable_variables)
      self.optimizer.apply_gradients(zip(grads, self.trainable_variables))

      return {'loss': loss}

The above implementation of the train_step method follows Keras' train_step conventions. This is optional, but it allows you to change the behavior of the train step while still using Keras' Model.compile and Model.fit methods.

model = CustomTraining(
    vocab_size=len(ids_from_chars.get_vocabulary()),
    embedding_dim=embedding_dim,
    rnn_units=rnn_units)
model.compile(optimizer = tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(dataset, epochs=1)
172/172 [==============================] - 7s 23ms/step - loss: 2.7296
<keras.callbacks.History at 0x7fdfad7bf090>

Or, if you need more control, you can write your own complete custom training loop:

EPOCHS = 10

mean = tf.metrics.Mean()

for epoch in range(EPOCHS):
    start = time.time()

    mean.reset_states()
    for (batch_n, (inp, target)) in enumerate(dataset):
        logs = model.train_step([inp, target])
        mean.update_state(logs['loss'])

        if batch_n % 50 == 0:
            template = f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}"
            print(template)

    # saving (checkpoint) the model every 5 epochs
    if (epoch + 1) % 5 == 0:
        model.save_weights(checkpoint_prefix.format(epoch=epoch))

    print()
    print(f'Epoch {epoch+1} Loss: {mean.result().numpy():.4f}')
    print(f'Time taken for 1 epoch {time.time() - start:.2f} sec')
    print("_"*80)

model.save_weights(checkpoint_prefix.format(epoch=epoch))
Epoch 1 Batch 0 Loss 2.1729
Epoch 1 Batch 50 Loss 2.0531
Epoch 1 Batch 100 Loss 1.9573
Epoch 1 Batch 150 Loss 1.8028

Epoch 1 Loss: 1.9959
Time taken for 1 epoch 5.83 sec
________________________________________________________________________________
Epoch 2 Batch 0 Loss 1.8247
Epoch 2 Batch 50 Loss 1.7950
Epoch 2 Batch 100 Loss 1.7317
Epoch 2 Batch 150 Loss 1.6410

Epoch 2 Loss: 1.7202
Time taken for 1 epoch 5.28 sec
________________________________________________________________________________
Epoch 3 Batch 0 Loss 1.6101
Epoch 3 Batch 50 Loss 1.5863
Epoch 3 Batch 100 Loss 1.5252
Epoch 3 Batch 150 Loss 1.5194

Epoch 3 Loss: 1.5582
Time taken for 1 epoch 5.23 sec
________________________________________________________________________________
Epoch 4 Batch 0 Loss 1.4622
Epoch 4 Batch 50 Loss 1.4623
Epoch 4 Batch 100 Loss 1.4729
Epoch 4 Batch 150 Loss 1.4334

Epoch 4 Loss: 1.4580
Time taken for 1 epoch 5.30 sec
________________________________________________________________________________
Epoch 5 Batch 0 Loss 1.4144
Epoch 5 Batch 50 Loss 1.4157
Epoch 5 Batch 100 Loss 1.3952
Epoch 5 Batch 150 Loss 1.3634

Epoch 5 Loss: 1.3902
Time taken for 1 epoch 5.48 sec
________________________________________________________________________________
Epoch 6 Batch 0 Loss 1.3419
Epoch 6 Batch 50 Loss 1.3228
Epoch 6 Batch 100 Loss 1.3308
Epoch 6 Batch 150 Loss 1.3092

Epoch 6 Loss: 1.3365
Time taken for 1 epoch 5.22 sec
________________________________________________________________________________
Epoch 7 Batch 0 Loss 1.3353
Epoch 7 Batch 50 Loss 1.2958
Epoch 7 Batch 100 Loss 1.2993
Epoch 7 Batch 150 Loss 1.3049

Epoch 7 Loss: 1.2915
Time taken for 1 epoch 5.33 sec
________________________________________________________________________________
Epoch 8 Batch 0 Loss 1.2323
Epoch 8 Batch 50 Loss 1.2712
Epoch 8 Batch 100 Loss 1.2089
Epoch 8 Batch 150 Loss 1.2661

Epoch 8 Loss: 1.2513
Time taken for 1 epoch 5.21 sec
________________________________________________________________________________
Epoch 9 Batch 0 Loss 1.2154
Epoch 9 Batch 50 Loss 1.2268
Epoch 9 Batch 100 Loss 1.2334
Epoch 9 Batch 150 Loss 1.2292

Epoch 9 Loss: 1.2124
Time taken for 1 epoch 5.24 sec
________________________________________________________________________________
Epoch 10 Batch 0 Loss 1.1712
Epoch 10 Batch 50 Loss 1.1542
Epoch 10 Batch 100 Loss 1.1887
Epoch 10 Batch 150 Loss 1.2040

Epoch 10 Loss: 1.1734
Time taken for 1 epoch 5.56 sec
________________________________________________________________________________