TensorFlow Recommenders: Quickstart

In this tutorial, we build a simple matrix factorization model using the MovieLens 100K dataset with TFRS. We can then use this model to recommend movies to a given user.

Import TFRS

First, install and import TFRS:

pip install -q tensorflow-recommenders
pip install -q --upgrade tensorflow-datasets
from typing import Dict, Text

import numpy as np
import tensorflow as tf

import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs

Read the data

# Ratings data.
ratings = tfds.load('movielens/100k-ratings', split="train")
# Features of all the available movies.
movies = tfds.load('movielens/100k-movies', split="train")

# Select the basic features.
ratings = ratings.map(lambda x: {
    "movie_title": x["movie_title"],
    "user_id": x["user_id"]
})
movies = movies.map(lambda x: x["movie_title"])
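
If you want to confirm what the mapped datasets contain, an optional inspection like the following prints one example from each; the values noted in the comments are illustrative.

# Optional: peek at one example from each dataset to see its structure.
for rating in ratings.take(1).as_numpy_iterator():
  print(rating)  # e.g. {'movie_title': b'...', 'user_id': b'...'}

for title in movies.take(1).as_numpy_iterator():
  print(title)   # a single movie title as a byte string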

Build vocabularies to convert user ids and movie titles into integer indices for embedding layers:

user_ids_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
user_ids_vocabulary.adapt(ratings.map(lambda x: x["user_id"]))

movie_titles_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
movie_titles_vocabulary.adapt(movies)
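
As an optional sanity check, the adapted StringLookup layers map raw strings to integer indices; with mask_token=None, index 0 is reserved for out-of-vocabulary values. The movie title below is just an illustrative query.

# Optional: raw strings are mapped to integer indices; unknown strings map to
# the out-of-vocabulary index 0.
print(user_ids_vocabulary(["42"]))
print(movie_titles_vocabulary(["Star Wars (1977)"]))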

Define a model

We can define a TFRS model by inheriting from tfrs.Model and implementing the compute_loss method:

class MovieLensModel(tfrs.Model):
  # We derive from a custom base class to help reduce boilerplate. Under the hood,
  # these are still plain Keras Models.

  def __init__(
      self,
      user_model: tf.keras.Model,
      movie_model: tf.keras.Model,
      task: tfrs.tasks.Retrieval):
    super().__init__()

    # Set up user and movie representations.
    self.user_model = user_model
    self.movie_model = movie_model

    # Set up a retrieval task.
    self.task = task

  def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
    # Define how the loss is computed.

    user_embeddings = self.user_model(features["user_id"])
    movie_embeddings = self.movie_model(features["movie_title"])

    return self.task(user_embeddings, movie_embeddings)
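
To make compute_loss less opaque: conceptually, the retrieval task scores each user in a batch against every movie in the same batch and treats the matching pair as the positive class, with the other in-batch movies acting as negatives. The sketch below is only a rough approximation of that idea, not the actual tfrs.tasks.Retrieval implementation, which also handles metrics, sample weights, and other options.

# Conceptual sketch only (not the library code): an in-batch softmax loss.
def sketch_retrieval_loss(user_embeddings, movie_embeddings):
  scores = tf.matmul(user_embeddings, movie_embeddings, transpose_b=True)
  labels = tf.eye(tf.shape(scores)[0])  # the i-th user matches the i-th movie
  return tf.reduce_sum(
      tf.keras.losses.categorical_crossentropy(labels, scores, from_logits=True))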

Define the two models and the retrieval task.

# Define user and movie models.
user_model = tf.keras.Sequential([
    user_ids_vocabulary,
    tf.keras.layers.Embedding(user_ids_vocabulary.vocabulary_size(), 64)
])
movie_model = tf.keras.Sequential([
    movie_titles_vocabulary,
    tf.keras.layers.Embedding(movie_titles_vocabulary.vocabulary_size(), 64)
])

# Define your objectives.
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
    movies.batch(128).map(movie_model)
  )
)
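
The FactorizedTopK metric measures how often the true movie ranks in the top K when scored against the embeddings of all candidate movies, which is why it is given the mapped movies dataset. Optionally, you can inspect what it receives; the shape noted in the comment assumes the 64-dimensional embeddings defined above.

# Optional: each full batch passed to FactorizedTopK is a (128, 64) tensor of
# candidate movie embeddings.
for candidate_embeddings in movies.batch(128).map(movie_model).take(1):
  print(candidate_embeddings.shape)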

Fit and evaluate it.

Create the model, train it, and generate predictions:

# Create a retrieval model.
model = MovieLensModel(user_model, movie_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))

# Train for 3 epochs.
model.fit(ratings.batch(4096), epochs=3)

# Use brute-force search to set up retrieval using the trained representations.
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index_from_dataset(
    movies.batch(100).map(lambda title: (title, model.movie_model(title))))

# Get some recommendations.
_, titles = index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")
Epoch 1/3
25/25 [==============================] - 6s 194ms/step - factorized_top_k/top_1_categorical_accuracy: 3.0000e-05 - factorized_top_k/top_5_categorical_accuracy: 0.0016 - factorized_top_k/top_10_categorical_accuracy: 0.0052 - factorized_top_k/top_50_categorical_accuracy: 0.0442 - factorized_top_k/top_100_categorical_accuracy: 0.1010 - loss: 33092.9163 - regularization_loss: 0.0000e+00 - total_loss: 33092.9163
Epoch 2/3
25/25 [==============================] - 5s 194ms/step - factorized_top_k/top_1_categorical_accuracy: 1.7000e-04 - factorized_top_k/top_5_categorical_accuracy: 0.0052 - factorized_top_k/top_10_categorical_accuracy: 0.0148 - factorized_top_k/top_50_categorical_accuracy: 0.1054 - factorized_top_k/top_100_categorical_accuracy: 0.2114 - loss: 31008.8447 - regularization_loss: 0.0000e+00 - total_loss: 31008.8447
Epoch 3/3
25/25 [==============================] - 5s 193ms/step - factorized_top_k/top_1_categorical_accuracy: 3.4000e-04 - factorized_top_k/top_5_categorical_accuracy: 0.0086 - factorized_top_k/top_10_categorical_accuracy: 0.0222 - factorized_top_k/top_50_categorical_accuracy: 0.1438 - factorized_top_k/top_100_categorical_accuracy: 0.2694 - loss: 30417.8776 - regularization_loss: 0.0000e+00 - total_loss: 30417.8776
Top 3 recommendations for user 42: [b'Rent-a-Kid (1995)' b'Just Cause (1995)'
 b'Land Before Time III: The Time of the Great Giving (1995) (V)']
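
Two optional follow-ups, sketched under the assumption that you want to reuse the trained model: model.evaluate reports the factorized top-K metrics (ideally on a held-out split rather than the training data used here), and the brute-force index can be exported with the standard SavedModel APIs and loaded back for serving. The temporary path below is just for illustration.

import os
import tempfile

# Optional: report the factorized top-K metrics (on the training data here,
# purely to show the API; use a held-out split in practice).
model.evaluate(ratings.batch(4096), return_dict=True)

# Optional: export the brute-force index and load it back for serving.
with tempfile.TemporaryDirectory() as tmp:
  path = os.path.join(tmp, "index")
  tf.saved_model.save(index, path)
  loaded = tf.saved_model.load(path)
  _, titles = loaded(np.array(["42"]))
  print(f"Top 3 recommendations for user 42: {titles[0, :3]}")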