
TensorFlow Recommenders: Quickstart



In this tutorial, we build a simple matrix factorization model with TFRS on the MovieLens 100K dataset, then use it to recommend movies to a given user.

Import TFRS

First, install and import TFRS:

pip install -q tensorflow-recommenders
pip install -q --upgrade tensorflow-datasets
from typing import Dict, Text

import numpy as np
import tensorflow as tf

import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs

Read the data

# Ratings data.
ratings = tfds.load('movielens/100k-ratings', split="train")
# Features of all the available movies.
movies = tfds.load('movielens/100k-movies', split="train")

# Select the basic features.
ratings = ratings.map(lambda x: {
    "movie_title": x["movie_title"],
    "user_id": x["user_id"]
})
movies = movies.map(lambda x: x["movie_title"])

Build vocabularies to convert user ids and movie titles into integer indices for embedding layers:

user_ids_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
user_ids_vocabulary.adapt(ratings.map(lambda x: x["user_id"]))

movie_titles_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
movie_titles_vocabulary.adapt(movies)
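
After adapt, each StringLookup maps raw strings to integer indices, with index 0 reserved for out-of-vocabulary values (since mask_token=None and the default single OOV bucket). A quick illustration; the exact indices depend on the adapted vocabulary, and the movie title used here is only assumed to be in the catalogue:

# Look up the integer indices of a raw user id and movie title.
print(user_ids_vocabulary(["42"]))
print(movie_titles_vocabulary(["Star Wars (1977)"]))  # title assumed to be in the catalogue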

Define a model

We can define a TFRS model by inheriting from tfrs.Model and implementing the compute_loss method:

class MovieLensModel(tfrs.Model):
  # We derive from a custom base class to help reduce boilerplate. Under the hood,
  # these are still plain Keras Models.

  def __init__(
      self,
      user_model: tf.keras.Model,
      movie_model: tf.keras.Model,
      task: tfrs.tasks.Retrieval):
    super().__init__()

    # Set up user and movie representations.
    self.user_model = user_model
    self.movie_model = movie_model

    # Set up a retrieval task.
    self.task = task

  def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
    # Define how the loss is computed.

    user_embeddings = self.user_model(features["user_id"])
    movie_embeddings = self.movie_model(features["movie_title"])

    return self.task(user_embeddings, movie_embeddings)
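
By default, the Retrieval task scores every user embedding in the batch against every movie embedding, treats the matching movie as the positive and the other movies in the batch as implicit negatives, and applies a softmax loss. A rough sketch of that idea (illustrative only, not the exact TFRS implementation):

# Illustrative sketch of an in-batch softmax retrieval loss.
def sketch_retrieval_loss(user_embeddings: tf.Tensor, movie_embeddings: tf.Tensor) -> tf.Tensor:
  # Similarity between every user and every movie in the batch: shape (batch, batch).
  scores = tf.matmul(user_embeddings, movie_embeddings, transpose_b=True)
  # The i-th user's positive example is the i-th movie in the batch.
  labels = tf.eye(tf.shape(scores)[0])
  return tf.keras.losses.CategoricalCrossentropy(from_logits=True)(labels, scores)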

Define the two models and the retrieval task.

# Define user and movie models.
user_model = tf.keras.Sequential([
    user_ids_vocabulary,
    tf.keras.layers.Embedding(user_ids_vocabulary.vocabulary_size(), 64)
])
movie_model = tf.keras.Sequential([
    movie_titles_vocabulary,
    tf.keras.layers.Embedding(movie_titles_vocabulary.vocabulary_size(), 64)
])

# Define your objectives.
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
    movies.batch(128).map(movie_model)
  )
)
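
As a quick sanity check, each tower maps a batch of raw strings directly to 64-dimensional embeddings (illustrative; the user id and title used here are only assumed to appear in the data):

# Each tower maps raw strings to (batch_size, 64) embeddings.
print(user_model(np.array(["42"])).shape)                 # (1, 64)
print(movie_model(np.array(["Star Wars (1977)"])).shape)  # (1, 64)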

Fit and evaluate it.

Create the model, train it, and generate predictions:

# Create a retrieval model.
model = MovieLensModel(user_model, movie_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))

# Train for 3 epochs.
model.fit(ratings.batch(4096), epochs=3)

# Use brute-force search to set up retrieval using the trained representations.
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index_from_dataset(
    movies.batch(100).map(lambda title: (title, model.movie_model(title))))

# Get some recommendations.
_, titles = index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")
Epoch 1/3
25/25 [==============================] - 6s 181ms/step - factorized_top_k/top_1_categorical_accuracy: 6.0000e-05 - factorized_top_k/top_5_categorical_accuracy: 0.0016 - factorized_top_k/top_10_categorical_accuracy: 0.0048 - factorized_top_k/top_50_categorical_accuracy: 0.0450 - factorized_top_k/top_100_categorical_accuracy: 0.1005 - loss: 33087.5925 - regularization_loss: 0.0000e+00 - total_loss: 33087.5925
Epoch 2/3
25/25 [==============================] - 5s 184ms/step - factorized_top_k/top_1_categorical_accuracy: 1.0000e-04 - factorized_top_k/top_5_categorical_accuracy: 0.0050 - factorized_top_k/top_10_categorical_accuracy: 0.0144 - factorized_top_k/top_50_categorical_accuracy: 0.1046 - factorized_top_k/top_100_categorical_accuracy: 0.2111 - loss: 31013.3981 - regularization_loss: 0.0000e+00 - total_loss: 31013.3981
Epoch 3/3
25/25 [==============================] - 5s 182ms/step - factorized_top_k/top_1_categorical_accuracy: 3.1000e-04 - factorized_top_k/top_5_categorical_accuracy: 0.0082 - factorized_top_k/top_10_categorical_accuracy: 0.0218 - factorized_top_k/top_50_categorical_accuracy: 0.1433 - factorized_top_k/top_100_categorical_accuracy: 0.2680 - loss: 30421.0709 - regularization_loss: 0.0000e+00 - total_loss: 30421.0709
Top 3 recommendations for user 42: [b'Rent-a-Kid (1995)' b'Just Cause (1995)'
 b'Land Before Time III: The Time of the Great Giving (1995) (V)']
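
To serve the model, the BruteForce index can be exported as a SavedModel and reloaded for querying; a minimal sketch, where the temporary export path is just an example:

import os
import tempfile

# Export the retrieval index as a SavedModel (the path here is illustrative).
export_path = os.path.join(tempfile.mkdtemp(), "index")
tf.saved_model.save(index, export_path)

# Reload it and query it like any other SavedModel.
loaded_index = tf.saved_model.load(export_path)
_, titles = loaded_index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")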