Graph regularization for document classification using natural graphs

Overview

Graph regularization is a specific technique under the broader paradigm of Neural Graph Learning (Bui et al., 2018). The core idea is to train neural network models with a graph-regularized objective, harnessing both labeled and unlabeled data.

In this tutorial, we will explore the use of graph regularization to classify documents that form a natural (organic) graph.

The general recipe for creating a graph-regularized model using the Neural Structured Learning (NSL) framework is as follows (a minimal code sketch follows this list):

  1. Generate training data from the input graph and sample features. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples. The resulting training data will contain neighbor features in addition to the original node features.
  2. Create a neural network as a base model using the Keras sequential, functional, or subclass API.
  3. Wrap the base model with the GraphRegularization wrapper class, which is provided by the NSL framework, to create a new graph Keras model. This new model will include a graph regularization loss as the regularization term in its training objective.
  4. Train and evaluate the graph Keras model.
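
As a minimal sketch of steps 2-4 (assuming a train_dataset that already contains the merged neighbor features produced in step 1; the base model below is just a placeholder):

import neural_structured_learning as nsl
import tensorflow as tf

# Step 2: any Keras model can serve as the base model; this one is a
# placeholder for illustration.
base_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(7, activation='softmax'),
])

# Step 3: wrap the base model. The wrapper adds a graph regularization
# term computed from the neighbor features in the training data.
graph_reg_config = nsl.configs.make_graph_reg_config(
    max_neighbors=1, multiplier=0.1)
graph_model = nsl.keras.GraphRegularization(base_model, graph_reg_config)

# Step 4: compile and train like any other Keras model.
graph_model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
# graph_model.fit(train_dataset, epochs=5)  # train_dataset is assumed here.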

Setup

  1. Install TensorFlow 2.0.x to create an interactive development environment with eager execution.
  2. Install the Neural Structured Learning package.
!pip install tensorflow-gpu==2.0.1
Collecting tensorflow-gpu==2.0.1
  Using cached tensorflow_gpu-2.0.1-cp35-cp35m-manylinux2010_x86_64.whl (380.8 MB)
Requirement already satisfied: astor>=0.6.0 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (0.8.1)
Collecting tensorflow-estimator<2.1.0,>=2.0.0
  Using cached tensorflow_estimator-2.0.1-py2.py3-none-any.whl (449 kB)
Requirement already satisfied: absl-py>=0.7.0 in /home/kbuilder/.local/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (0.9.0)
Requirement already satisfied: keras-applications>=1.0.8 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (1.0.8)
Requirement already satisfied: six>=1.10.0 in /home/kbuilder/.local/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (1.14.0)
Requirement already satisfied: termcolor>=1.1.0 in /home/kbuilder/.local/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (1.1.0)
Requirement already satisfied: protobuf>=3.6.1 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (3.11.3)
Requirement already satisfied: grpcio>=1.8.6 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (1.27.2)
Collecting tensorboard<2.1.0,>=2.0.0
  Using cached tensorboard-2.0.2-py3-none-any.whl (3.8 MB)
Requirement already satisfied: numpy<2.0,>=1.16.0 in /home/kbuilder/.local/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (1.18.2)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (0.34.2)
Requirement already satisfied: opt-einsum>=2.3.2 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (3.2.0)
Requirement already satisfied: wrapt>=1.11.1 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (1.12.1)
Requirement already satisfied: google-pasta>=0.1.6 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (0.2.0)
Requirement already satisfied: gast==0.2.2 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (0.2.2)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorflow-gpu==2.0.1) (1.1.0)
Requirement already satisfied: h5py in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from keras-applications>=1.0.8->tensorflow-gpu==2.0.1) (2.10.0)
Requirement already satisfied: setuptools in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from protobuf>=3.6.1->tensorflow-gpu==2.0.1) (46.0.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (1.11.3)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (0.4.1)
Requirement already satisfied: markdown>=2.6.8 in /tmpfs/src/tf_docs_env/lib/python3.5/site-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (3.2.1)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.5/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (0.14.1)
Requirement already satisfied: requests<3,>=2.21.0 in /home/kbuilder/.local/lib/python3.5/site-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (2.23.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.5/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (0.2.3)
Requirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.5/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (4.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.5/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (2.0.1)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.5/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (0.8.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.5/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (1.24.1)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.5/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (2.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.5/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.5/dist-packages (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (2018.11.29)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.1 in /usr/local/lib/python3.5/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (0.4.5)
Requirement already satisfied: oauthlib>=0.6.2 in /usr/local/lib/python3.5/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.1) (3.0.0)
ERROR: tensorflow 2.1.0 has requirement tensorboard<2.2.0,>=2.1.0, but you'll have tensorboard 2.0.2 which is incompatible.
ERROR: tensorflow 2.1.0 has requirement tensorflow-estimator<2.2.0,>=2.1.0rc0, but you'll have tensorflow-estimator 2.0.1 which is incompatible.
Installing collected packages: tensorflow-estimator, tensorboard, tensorflow-gpu
  Attempting uninstall: tensorflow-estimator
    Found existing installation: tensorflow-estimator 2.1.0
    Uninstalling tensorflow-estimator-2.1.0:
      Successfully uninstalled tensorflow-estimator-2.1.0
  Attempting uninstall: tensorboard
    Found existing installation: tensorboard 2.1.1
    Uninstalling tensorboard-2.1.1:
      Successfully uninstalled tensorboard-2.1.1
Successfully installed tensorboard-2.0.2 tensorflow-estimator-2.0.1 tensorflow-gpu-2.0.1

!pip install --quiet neural-structured-learning

Dependencies and imports

from __future__ import absolute_import, division, print_function, unicode_literals

import neural_structured_learning as nsl

import tensorflow as tf

# Resets notebook state
tf.keras.backend.clear_session()

print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
Version:  2.0.1
Eager mode:  True
GPU is NOT AVAILABLE

Cora dataset

The Cora dataset is a citation graph where nodes represent machine learning papers and edges represent citations between pairs of papers. The task is document classification: the goal is to categorize each paper into one of 7 categories. In other words, this is a multi-class classification problem with 7 classes.

Graph

The original graph is directed. However, for the purpose of this example, we consider the undirected version of this graph. So, if paper A cites paper B, we also treat paper B as having cited paper A. Although this is not literally true, in this example we treat citations as a proxy for similarity, which is usually a symmetric relation.
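
To make this concrete, here is a small illustrative snippet (using a hypothetical in-memory edge list rather than the actual Cora files) showing how directed citation edges can be made bidirectional:

# Hypothetical directed citation edges: (citing_paper, cited_paper).
directed_edges = [('A', 'B'), ('B', 'C')]

# Add the reverse of every edge to obtain the undirected version.
undirected_edges = set(directed_edges)
undirected_edges.update((dst, src) for src, dst in directed_edges)

print(sorted(undirected_edges))
# [('A', 'B'), ('B', 'A'), ('B', 'C'), ('C', 'B')]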

Features

Each paper in the input effectively contains 2 features:

  1. Words: A dense, multi-hot bag-of-words representation of the text in the paper. The vocabulary for the Cora dataset contains 1433 unique words. So, the length of this feature is 1433, and the value at position 'i' is 0/1, indicating whether word 'i' in the vocabulary exists in the given paper or not (a small illustration follows this list).

  2. Label: A single integer representing the class ID (category) of the paper.
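
As a small illustration of the multi-hot encoding (using a toy 5-word vocabulary rather than the actual 1433-word Cora vocabulary):

# Toy vocabulary; the real Cora vocabulary contains 1433 unique words.
vocabulary = ['neural', 'network', 'graph', 'kernel', 'bayesian']
paper_words = {'graph', 'neural'}

# Multi-hot encoding: 1 if the vocabulary word appears in the paper, else 0.
multi_hot = [1 if word in paper_words else 0 for word in vocabulary]
print(multi_hot)  # [1, 0, 1, 0, 0]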

Download the Cora dataset

!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
!tar -C /tmp -xvzf /tmp/cora.tgz
cora/
cora/README
cora/cora.content
cora/cora.cites

Convert the Cora data to the NSL format

In order to preprocess the Cora dataset and convert it to the format required by Neural Structured Learning, we will run the 'preprocess_cora_dataset.py' script, which is included in the NSL GitHub repository. This script does the following:

  1. Generate neighbor features using the original node features and the graph.
  2. Generate train and test data splits containing tf.train.Example instances.
  3. Persist the resulting train and test data in the TFRecord format.
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py

!python preprocess_cora_dataset.py \
--input_cora_content=/tmp/cora/cora.content \
--input_cora_graph=/tmp/cora/cora.cites \
--max_nbrs=5 \
--output_train_data=/tmp/cora/train_merged_examples.tfr \
--output_test_data=/tmp/cora/test_examples.tfr
--2020-03-19 04:40:27--  https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11419 (11K) [text/plain]
Saving to: ‘preprocess_cora_dataset.py’

preprocess_cora_dat 100%[===================>]  11.15K  --.-KB/s    in 0s      

2020-03-19 04:40:27 (128 MB/s) - ‘preprocess_cora_dataset.py’ saved [11419/11419]

Reading graph file: /tmp/cora/cora.cites...
Done reading 5429 edges from: /tmp/cora/cora.cites (0.01 seconds).
Making all edges bi-directional...
Done (0.01 seconds). Total graph nodes: 2708
Joining seed and neighbor tf.train.Examples with graph edges...
Done creating and writing 2155 merged tf.train.Examples (1.31 seconds).
Out-degree histogram: [(1, 386), (2, 468), (3, 452), (4, 309), (5, 540)]
Output training data written to TFRecord file: /tmp/cora/train_merged_examples.tfr.
Output test data written to TFRecord file: /tmp/cora/test_examples.tfr.
Total running time: 0.04 minutes.

Global variables

The file paths to the train and test data are based on the command line flag values used to invoke the 'preprocess_cora_dataset.py' script above.

### Experiment dataset
TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'
TEST_DATA_PATH = '/tmp/cora/test_examples.tfr'

### Constants used to identify neighbor features in the input.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'

Hyperparameters

We will use an instance of HParams to hold the various hyperparameters and constants used for training and evaluation. We briefly describe each of them below:

  • num_classes: There are a total of 7 different classes.

  • max_seq_length: This is the size of the vocabulary; each instance in the input has a dense multi-hot, bag-of-words representation of this length. In other words, a value of 1 for a word indicates that the word is present in the input and a value of 0 indicates that it is not.

  • distance_type: This is the distance metric used to regularize the sample with its neighbors.

  • graph_regularization_multiplier: This controls the relative weight of the graph regularization term in the overall loss function (see the loss sketch after this list).

  • num_neighbors: The number of neighbors used for graph regularization. This value has to be less than or equal to the max_nbrs command-line argument used above when running preprocess_cora_dataset.py.

  • num_fc_units: The number of units in each fully connected layer of our neural network.

  • train_epochs: The number of training epochs.

  • batch_size: Batch size used for training and evaluation.

  • dropout_rate: Controls the rate of dropout following each fully connected layer.

  • eval_steps: The number of batches to process before evaluation is deemed complete. If set to None, all instances in the test set are evaluated.
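
Conceptually, the training objective of the graph-regularized model is the supervised loss plus a weighted graph loss computed between each sample and its neighbors. The sketch below is an illustration only (graph_regularized_loss is a hypothetical helper, not the actual NSL implementation); the multiplier plays the role of graph_regularization_multiplier, and the squared-L2 distance stands in for distance_type:

import tensorflow as tf

def graph_regularized_loss(supervised_loss, sample_output, nbr_outputs,
                           nbr_weights, multiplier=0.1):
  """Illustrative graph-regularized loss; not the actual NSL implementation.

  The graph loss is a weighted sum of (squared L2) distances between a
  sample's output and each of its neighbors' outputs. Non-existent
  neighbors carry weight 0.0 and therefore contribute nothing.
  """
  graph_loss = tf.add_n([
      w * tf.reduce_sum(tf.square(sample_output - nbr_output))
      for nbr_output, w in zip(nbr_outputs, nbr_weights)
  ])
  return supervised_loss + multiplier * graph_loss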

class HParams(object):
  """Hyperparameters used for training."""
  def __init__(self):
    ### dataset parameters
    self.num_classes = 7
    self.max_seq_length = 1433
    ### neural graph learning parameters
    self.distance_type = nsl.configs.DistanceType.L2
    self.graph_regularization_multiplier = 0.1
    self.num_neighbors = 1
    ### model architecture
    self.num_fc_units = [50, 50]
    ### training parameters
    self.train_epochs = 100
    self.batch_size = 128
    self.dropout_rate = 0.5
    ### eval parameters
    self.eval_steps = None  # All instances in the test set are evaluated.

HPARAMS = HParams()

Load train and test data

As described earlier in this notebook, the input training and test data have been created by the 'preprocess_cora_dataset.py' script. We will load them into two tf.data.Dataset objects -- one for train and one for test.

In the input layer of our model, we will extract not just the 'words' and the 'label' features from each sample, but also corresponding neighbor features based on the hparams.num_neighbors value. Instances with fewer neighbors than hparams.num_neighbors will be assigned dummy values for those non-existent neighbor features.
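
For example, with 2 neighbors per sample, the neighbor feature keys extracted for each sample would be formed as follows (this loop just prints the key names used by the parsing code below):

# Illustrates the neighbor feature key naming scheme.
for i in range(2):  # e.g., hparams.num_neighbors = 2
  print('{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words'))  # NL_nbr_0_words, NL_nbr_1_words
  print('{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX))  # NL_nbr_0_weight, NL_nbr_1_weight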

def parse_example(example_proto):
  """Extracts relevant fields from the `example_proto`.

  Args:
    example_proto: An instance of `tf.train.Example`.

  Returns:
    A pair whose first value is a dictionary containing relevant features
    and whose second value contains the ground truth label.
  """
  # The 'words' feature is a multi-hot, bag-of-words representation of the
  # original raw text. A default value is required for examples that don't
  # have the feature.
  feature_spec = {
      'words':
          tf.io.FixedLenFeature([HPARAMS.max_seq_length],
                                tf.int64,
                                default_value=tf.constant(
                                    0,
                                    dtype=tf.int64,
                                    shape=[HPARAMS.max_seq_length])),
      'label':
          tf.io.FixedLenFeature((), tf.int64, default_value=-1),
  }
  # We also extract corresponding neighbor features in a similar manner to
  # the features above.
  for i in range(HPARAMS.num_neighbors):
    nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
    nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX)
    feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(
        [HPARAMS.max_seq_length],
        tf.int64,
        default_value=tf.constant(
            0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))

    # We assign a default value of 0.0 for the neighbor weight so that
    # graph regularization is done on samples based on their exact number
    # of neighbors. In other words, non-existent neighbors are discounted.
    feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
        [1], tf.float32, default_value=tf.constant([0.0]))

  features = tf.io.parse_single_example(example_proto, feature_spec)
  label = features.pop('label')
  return features, label


def make_dataset(file_path, training=False):
  """Creates a `tf.data.TFRecordDataset`.

  Args:
    file_path: Name of the file in the `.tfrecord` format containing
      `tf.train.Example` objects.
    training: Boolean indicating if we are in training mode.

  Returns:
    An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
    objects.
  """
  dataset = tf.data.TFRecordDataset([file_path])
  if training:
    dataset = dataset.shuffle(10000)
  dataset = dataset.map(parse_example)
  dataset = dataset.batch(HPARAMS.batch_size)
  return dataset


train_dataset = make_dataset(TRAIN_DATA_PATH, training=True)
test_dataset = make_dataset(TEST_DATA_PATH)

Let's peek into the train dataset to look at its contents.

for feature_batch, label_batch in train_dataset.take(1):
  print('Feature list:', list(feature_batch.keys()))
  print('Batch of inputs:', feature_batch['words'])
  nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
  nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
  print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
  print('Batch of neighbor weights:',
        tf.reshape(feature_batch[nbr_weight_key], [-1]))
  print('Batch of labels:', label_batch)
Feature list: ['words', 'NL_nbr_0_weight', 'NL_nbr_0_words']
Batch of inputs: tf.Tensor(
[[0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 ...
 [0 0 0 ... 0 0 0]
 [0 0 1 ... 0 0 0]
 [0 0 0 ... 0 0 0]], shape=(128, 1433), dtype=int64)
Batch of neighbor inputs: tf.Tensor(
[[0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 ...
 [0 0 0 ... 0 0 0]
 [0 0 1 ... 0 0 0]
 [0 0 0 ... 0 0 0]], shape=(128, 1433), dtype=int64)
Batch of neighbor weights: tf.Tensor(
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1.], shape=(128,), dtype=float32)
Batch of labels: tf.Tensor(
[2 2 1 3 1 6 3 1 6 2 2 2 4 1 3 4 1 3 0 0 3 0 6 1 0 3 6 2 5 6 4 0 2 5 3 2 6
 5 1 2 6 0 3 3 2 0 2 2 4 1 2 4 2 6 4 3 3 4 3 2 3 4 6 0 3 1 1 4 0 5 2 3 2 2
 1 3 3 1 6 2 2 2 5 6 6 1 2 3 2 2 6 1 2 3 3 3 2 4 3 3 2 3 4 1 2 0 1 2 3 1 1
 2 2 0 2 1 1 6 1 1 5 2 3 2 6 2 2 2], shape=(128,), dtype=int64)

Let's peek into the test dataset to look at its contents.

for feature_batch, label_batch in test_dataset.take(1):
  print('Feature list:', list(feature_batch.keys()))
  print('Batch of inputs:', feature_batch['words'])
  nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
  nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
  print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
  print('Batch of neighbor weights:',
        tf.reshape(feature_batch[nbr_weight_key], [-1]))
  print('Batch of labels:', label_batch)
Feature list: ['words', 'NL_nbr_0_weight', 'NL_nbr_0_words']
Batch of inputs: tf.Tensor(
[[0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 ...
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]], shape=(128, 1433), dtype=int64)
Batch of neighbor inputs: tf.Tensor(
[[0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 ...
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]], shape=(128, 1433), dtype=int64)
Batch of neighbor weights: tf.Tensor(
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0.], shape=(128,), dtype=float32)
Batch of labels: tf.Tensor(
[3 2 2 6 1 2 4 2 3 3 6 4 3 2 6 2 3 5 2 1 3 2 4 2 2 2 1 2 5 6 2 4 1 5 1 3 4
 6 6 2 5 1 2 0 0 1 2 0 1 1 6 0 6 2 4 6 6 3 0 2 1 6 1 1 6 0 2 2 2 2 2 0 2 3
 1 2 2 1 4 2 3 1 1 4 2 2 3 2 2 2 2 0 6 0 3 2 6 0 6 6 0 0 0 2 2 0 2 1 5 2 5
 1 3 3 3 2 0 6 1 0 2 6 5 2 6 2 1 3], shape=(128,), dtype=int64)

Model definition

In order to demonstrate the use of graph regularization, we first build a base model for this problem. We will use a simple feed-forward neural network with 2 hidden layers and dropout in between. We illustrate the creation of the base model using all model types supported by the tf.keras framework -- sequential, functional, and subclass.

Sequential base model

def make_mlp_sequential_model(hparams):
  """Creates a sequential multi-layer perceptron model."""
  model = tf.keras.Sequential()
  model.add(
      tf.keras.layers.InputLayer(
          input_shape=(hparams.max_seq_length,), name='words'))
  # Input is already multi-hot encoded in the integer format. We cast it to
  # floating point format here.
  model.add(
      tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))
  for num_units in hparams.num_fc_units:
    model.add(tf.keras.layers.Dense(num_units, activation='relu'))
    # For sequential models, by default, Keras ensures that the 'dropout' layer
    # is invoked only during training.
    model.add(tf.keras.layers.Dropout(hparams.dropout_rate))
  model.add(tf.keras.layers.Dense(hparams.num_classes, activation='softmax'))
  return model

Functional base model

def make_mlp_functional_model(hparams):
  """Creates a functional API-based multi-layer perceptron model."""
  inputs = tf.keras.Input(
      shape=(hparams.max_seq_length,), dtype='int64', name='words')

  # Input is already multi-hot encoded in the integer format. We cast it to
  # floating point format here.
  cur_layer = tf.keras.layers.Lambda(
      lambda x: tf.keras.backend.cast(x, tf.float32))(
          inputs)

  for num_units in hparams.num_fc_units:
    cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)
    # For functional models, by default, Keras ensures that the 'dropout' layer
    # is invoked only during training.
    cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)

  outputs = tf.keras.layers.Dense(
      hparams.num_classes, activation='softmax')(
          cur_layer)

  model = tf.keras.Model(inputs, outputs=outputs)
  return model

Subclass base model

def make_mlp_subclass_model(hparams):
  """Creates a multi-layer perceptron subclass model in Keras."""

  class MLP(tf.keras.Model):
    """Subclass model defining a multi-layer perceptron."""

    def __init__(self):
      super(MLP, self).__init__()
      # Input is already multi-hot encoded in the integer format. We create a
      # layer to cast it to floating point format here.
      self.cast_to_float_layer = tf.keras.layers.Lambda(
          lambda x: tf.keras.backend.cast(x, tf.float32))
      self.dense_layers = [
          tf.keras.layers.Dense(num_units, activation='relu')
          for num_units in hparams.num_fc_units
      ]
      self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)
      self.output_layer = tf.keras.layers.Dense(
          hparams.num_classes, activation='softmax')

    def call(self, inputs, training=False):
      cur_layer = self.cast_to_float_layer(inputs['words'])
      for dense_layer in self.dense_layers:
        cur_layer = dense_layer(cur_layer)
        cur_layer = self.dropout_layer(cur_layer, training=training)

      outputs = self.output_layer(cur_layer)

      return outputs

  return MLP()

Create base model(s)

# Create a base MLP model using the functional API.
# Alternatively, you can also create a sequential or subclass base model using
# the make_mlp_sequential_model() or make_mlp_subclass_model() functions
# respectively, defined above. Note that if a subclass model is used, its
# summary cannot be generated until it is built.
base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)
base_model.summary()
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
words (InputLayer)           [(None, 1433)]            0         
_________________________________________________________________
lambda (Lambda)              (None, 1433)              0         
_________________________________________________________________
dense (Dense)                (None, 50)                71700     
_________________________________________________________________
dropout (Dropout)            (None, 50)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 50)                2550      
_________________________________________________________________
dropout_1 (Dropout)          (None, 50)                0         
_________________________________________________________________
dense_2 (Dense)              (None, 7)                 357       
=================================================================
Total params: 74,607
Trainable params: 74,607
Non-trainable params: 0
_________________________________________________________________

Train base MLP model

# Compile and train the base MLP model
base_model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
Epoch 1/100
17/17 [==============================] - 1s 54ms/step - loss: 1.9274 - accuracy: 0.1958
Epoch 2/100
17/17 [==============================] - 0s 15ms/step - loss: 1.8350 - accuracy: 0.3039
Epoch 3/100
17/17 [==============================] - 0s 14ms/step - loss: 1.7418 - accuracy: 0.3299
Epoch 4/100
17/17 [==============================] - 0s 15ms/step - loss: 1.6426 - accuracy: 0.3633
Epoch 5/100
17/17 [==============================] - 0s 15ms/step - loss: 1.5119 - accuracy: 0.4292
Epoch 6/100
17/17 [==============================] - 0s 15ms/step - loss: 1.3784 - accuracy: 0.4896
Epoch 7/100
17/17 [==============================] - 0s 14ms/step - loss: 1.2378 - accuracy: 0.5592
Epoch 8/100
17/17 [==============================] - 0s 14ms/step - loss: 1.1237 - accuracy: 0.6209
Epoch 9/100
17/17 [==============================] - 0s 14ms/step - loss: 1.0121 - accuracy: 0.6677
Epoch 10/100
17/17 [==============================] - 0s 14ms/step - loss: 0.9100 - accuracy: 0.7002
Epoch 11/100
17/17 [==============================] - 0s 14ms/step - loss: 0.8355 - accuracy: 0.7281
Epoch 12/100
17/17 [==============================] - 0s 15ms/step - loss: 0.7524 - accuracy: 0.7638
Epoch 13/100
17/17 [==============================] - 0s 14ms/step - loss: 0.6760 - accuracy: 0.7879
Epoch 14/100
17/17 [==============================] - 0s 14ms/step - loss: 0.6464 - accuracy: 0.8111
Epoch 15/100
17/17 [==============================] - 0s 14ms/step - loss: 0.5673 - accuracy: 0.8320
Epoch 16/100
17/17 [==============================] - 0s 15ms/step - loss: 0.5369 - accuracy: 0.8385
Epoch 17/100
17/17 [==============================] - 0s 14ms/step - loss: 0.4737 - accuracy: 0.8589
Epoch 18/100
17/17 [==============================] - 0s 14ms/step - loss: 0.4356 - accuracy: 0.8710
Epoch 19/100
17/17 [==============================] - 0s 15ms/step - loss: 0.4140 - accuracy: 0.8733
Epoch 20/100
17/17 [==============================] - 0s 15ms/step - loss: 0.3701 - accuracy: 0.8947
Epoch 21/100
17/17 [==============================] - 0s 15ms/step - loss: 0.3635 - accuracy: 0.8858
Epoch 22/100
17/17 [==============================] - 0s 14ms/step - loss: 0.3354 - accuracy: 0.8993
Epoch 23/100
17/17 [==============================] - 0s 14ms/step - loss: 0.3212 - accuracy: 0.9063
Epoch 24/100
17/17 [==============================] - 0s 14ms/step - loss: 0.3068 - accuracy: 0.9077
Epoch 25/100
17/17 [==============================] - 0s 14ms/step - loss: 0.2833 - accuracy: 0.9179
Epoch 26/100
17/17 [==============================] - 0s 14ms/step - loss: 0.2589 - accuracy: 0.9299
Epoch 27/100
17/17 [==============================] - 0s 15ms/step - loss: 0.2658 - accuracy: 0.9234
Epoch 28/100
17/17 [==============================] - 0s 15ms/step - loss: 0.2465 - accuracy: 0.9276
Epoch 29/100
17/17 [==============================] - 0s 14ms/step - loss: 0.2163 - accuracy: 0.9452
Epoch 30/100
17/17 [==============================] - 0s 14ms/step - loss: 0.2140 - accuracy: 0.9383
Epoch 31/100
17/17 [==============================] - 0s 14ms/step - loss: 0.2205 - accuracy: 0.9415
Epoch 32/100
17/17 [==============================] - 0s 13ms/step - loss: 0.1969 - accuracy: 0.9406
Epoch 33/100
17/17 [==============================] - 0s 15ms/step - loss: 0.2002 - accuracy: 0.9415
Epoch 34/100
17/17 [==============================] - 0s 14ms/step - loss: 0.1804 - accuracy: 0.9476
Epoch 35/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1617 - accuracy: 0.9559
Epoch 36/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1603 - accuracy: 0.9564
Epoch 37/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1652 - accuracy: 0.9508
Epoch 38/100
17/17 [==============================] - 0s 14ms/step - loss: 0.1562 - accuracy: 0.9531
Epoch 39/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1545 - accuracy: 0.9545
Epoch 40/100
17/17 [==============================] - 0s 14ms/step - loss: 0.1395 - accuracy: 0.9619
Epoch 41/100
17/17 [==============================] - 0s 14ms/step - loss: 0.1381 - accuracy: 0.9652
Epoch 42/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1383 - accuracy: 0.9606
Epoch 43/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1396 - accuracy: 0.9652
Epoch 44/100
17/17 [==============================] - 0s 14ms/step - loss: 0.1176 - accuracy: 0.9675
Epoch 45/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1218 - accuracy: 0.9689
Epoch 46/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1265 - accuracy: 0.9592
Epoch 47/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1082 - accuracy: 0.9726
Epoch 48/100
17/17 [==============================] - 0s 14ms/step - loss: 0.1107 - accuracy: 0.9684
Epoch 49/100
17/17 [==============================] - 0s 14ms/step - loss: 0.1069 - accuracy: 0.9722
Epoch 50/100
17/17 [==============================] - 0s 14ms/step - loss: 0.1031 - accuracy: 0.9703
Epoch 51/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1004 - accuracy: 0.9731
Epoch 52/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0952 - accuracy: 0.9712
Epoch 53/100
17/17 [==============================] - 0s 14ms/step - loss: 0.1086 - accuracy: 0.9675
Epoch 54/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1051 - accuracy: 0.9712
Epoch 55/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1000 - accuracy: 0.9763
Epoch 56/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0937 - accuracy: 0.9735
Epoch 57/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0880 - accuracy: 0.9763
Epoch 58/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0805 - accuracy: 0.9768
Epoch 59/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0977 - accuracy: 0.9703
Epoch 60/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0910 - accuracy: 0.9740
Epoch 61/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0941 - accuracy: 0.9675
Epoch 62/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0811 - accuracy: 0.9796
Epoch 63/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0840 - accuracy: 0.9768
Epoch 64/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0890 - accuracy: 0.9773
Epoch 65/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0723 - accuracy: 0.9824
Epoch 66/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0734 - accuracy: 0.9777
Epoch 67/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0744 - accuracy: 0.9777
Epoch 68/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0662 - accuracy: 0.9828
Epoch 69/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0774 - accuracy: 0.9777
Epoch 70/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0794 - accuracy: 0.9782
Epoch 71/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0687 - accuracy: 0.9819
Epoch 72/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0664 - accuracy: 0.9810
Epoch 73/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0768 - accuracy: 0.9787
Epoch 74/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0654 - accuracy: 0.9800
Epoch 75/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0661 - accuracy: 0.9814
Epoch 76/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0546 - accuracy: 0.9856
Epoch 77/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0565 - accuracy: 0.9865
Epoch 78/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0519 - accuracy: 0.9861
Epoch 79/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0688 - accuracy: 0.9773
Epoch 80/100
17/17 [==============================] - 0s 13ms/step - loss: 0.0627 - accuracy: 0.9824
Epoch 81/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0563 - accuracy: 0.9847
Epoch 82/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0504 - accuracy: 0.9879
Epoch 83/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0596 - accuracy: 0.9861
Epoch 84/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0555 - accuracy: 0.9861
Epoch 85/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0521 - accuracy: 0.9856
Epoch 86/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0588 - accuracy: 0.9838
Epoch 87/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0539 - accuracy: 0.9847
Epoch 88/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0513 - accuracy: 0.9861
Epoch 89/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0623 - accuracy: 0.9796
Epoch 90/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0514 - accuracy: 0.9879
Epoch 91/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0546 - accuracy: 0.9842
Epoch 92/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0442 - accuracy: 0.9879
Epoch 93/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0480 - accuracy: 0.9856
Epoch 94/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0521 - accuracy: 0.9842
Epoch 95/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0467 - accuracy: 0.9879
Epoch 96/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0432 - accuracy: 0.9861
Epoch 97/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0438 - accuracy: 0.9898
Epoch 98/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0459 - accuracy: 0.9847
Epoch 99/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0405 - accuracy: 0.9903
Epoch 100/100
17/17 [==============================] - 0s 14ms/step - loss: 0.0456 - accuracy: 0.9865

<tensorflow.python.keras.callbacks.History at 0x7f048021dba8>

Evaluate base MLP model

# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
  """Prints evaluation metrics.

  Args:
    model_desc: A description of the model.
    eval_metrics: A dictionary mapping metric names to corresponding values. It
      must contain the loss and accuracy metrics.
  """
  print('\n')
  print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
  print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
  if 'graph_loss' in eval_metrics:
    print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])
eval_results = dict(
    zip(base_model.metrics_names,
        base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', eval_results)
5/5 [==============================] - 0s 27ms/step - loss: 1.3428 - accuracy: 0.7884


Eval accuracy for  Base MLP model :  0.78842676
Eval loss for  Base MLP model :  1.34275164604187

Train MLP model with graph regularization

Incorporating graph regularization into the loss term of an existing tf.keras.Model requires just a few lines of code. The base model is wrapped to create a new tf.keras subclass model, whose loss includes graph regularization.

To assess the incremental benefit of graph regularization, we will create a new base model instance. This is because base_model has already been trained, and reusing that trained model to create a graph-regularized model would not be a fair comparison for base_model.

# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
    HPARAMS)
# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
    max_neighbors=HPARAMS.num_neighbors,
    multiplier=HPARAMS.graph_regularization_multiplier,
    distance_type=HPARAMS.distance_type,
    sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
                                                graph_reg_config)
graph_reg_model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
Epoch 1/100

/tmpfs/src/tf_docs_env/lib/python3.5/site-packages/tensorflow_core/python/framework/indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
/tmpfs/src/tf_docs_env/lib/python3.5/site-packages/tensorflow_core/python/framework/indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "

17/17 [==============================] - 1s 81ms/step - loss: 1.9075 - accuracy: 0.2241 - graph_loss: 0.0098
Epoch 2/100
17/17 [==============================] - 0s 16ms/step - loss: 1.8243 - accuracy: 0.3063 - graph_loss: 0.0168
Epoch 3/100
17/17 [==============================] - 0s 15ms/step - loss: 1.7141 - accuracy: 0.3434 - graph_loss: 0.0328
Epoch 4/100
17/17 [==============================] - 0s 16ms/step - loss: 1.6096 - accuracy: 0.3916 - graph_loss: 0.0528
Epoch 5/100
17/17 [==============================] - 0s 15ms/step - loss: 1.5003 - accuracy: 0.4445 - graph_loss: 0.0770
Epoch 6/100
17/17 [==============================] - 0s 16ms/step - loss: 1.3946 - accuracy: 0.5104 - graph_loss: 0.1044
Epoch 7/100
17/17 [==============================] - 0s 16ms/step - loss: 1.2514 - accuracy: 0.5652 - graph_loss: 0.1379
Epoch 8/100
17/17 [==============================] - 0s 16ms/step - loss: 1.1101 - accuracy: 0.6320 - graph_loss: 0.1793
Epoch 9/100
17/17 [==============================] - 0s 15ms/step - loss: 0.9925 - accuracy: 0.6817 - graph_loss: 0.2056
Epoch 10/100
17/17 [==============================] - 0s 16ms/step - loss: 0.8959 - accuracy: 0.7114 - graph_loss: 0.2309
Epoch 11/100
17/17 [==============================] - 0s 16ms/step - loss: 0.7889 - accuracy: 0.7680 - graph_loss: 0.2502
Epoch 12/100
17/17 [==============================] - 0s 16ms/step - loss: 0.7401 - accuracy: 0.7740 - graph_loss: 0.2777
Epoch 13/100
17/17 [==============================] - 0s 15ms/step - loss: 0.6633 - accuracy: 0.8028 - graph_loss: 0.2861
Epoch 14/100
17/17 [==============================] - 0s 16ms/step - loss: 0.6270 - accuracy: 0.8116 - graph_loss: 0.2959
Epoch 15/100
17/17 [==============================] - 0s 15ms/step - loss: 0.5759 - accuracy: 0.8316 - graph_loss: 0.3046
Epoch 16/100
17/17 [==============================] - 0s 16ms/step - loss: 0.5505 - accuracy: 0.8357 - graph_loss: 0.3073
Epoch 17/100
17/17 [==============================] - 0s 15ms/step - loss: 0.4891 - accuracy: 0.8659 - graph_loss: 0.3041
Epoch 18/100
17/17 [==============================] - 0s 15ms/step - loss: 0.4377 - accuracy: 0.8789 - graph_loss: 0.3110
Epoch 19/100
17/17 [==============================] - 0s 15ms/step - loss: 0.4268 - accuracy: 0.8845 - graph_loss: 0.3092
Epoch 20/100
17/17 [==============================] - 0s 16ms/step - loss: 0.3927 - accuracy: 0.8956 - graph_loss: 0.3224
Epoch 21/100
17/17 [==============================] - 0s 15ms/step - loss: 0.3908 - accuracy: 0.8923 - graph_loss: 0.3253
Epoch 22/100
17/17 [==============================] - 0s 16ms/step - loss: 0.3585 - accuracy: 0.9081 - graph_loss: 0.3237
Epoch 23/100
17/17 [==============================] - 0s 16ms/step - loss: 0.3365 - accuracy: 0.9146 - graph_loss: 0.3299
Epoch 24/100
17/17 [==============================] - 0s 16ms/step - loss: 0.3261 - accuracy: 0.9114 - graph_loss: 0.3244
Epoch 25/100
17/17 [==============================] - 0s 15ms/step - loss: 0.3060 - accuracy: 0.9169 - graph_loss: 0.3312
Epoch 26/100
17/17 [==============================] - 0s 15ms/step - loss: 0.3111 - accuracy: 0.9197 - graph_loss: 0.3314
Epoch 27/100
17/17 [==============================] - 0s 16ms/step - loss: 0.2898 - accuracy: 0.9290 - graph_loss: 0.3323
Epoch 28/100
17/17 [==============================] - 0s 16ms/step - loss: 0.2683 - accuracy: 0.9411 - graph_loss: 0.3382
Epoch 29/100
17/17 [==============================] - 0s 16ms/step - loss: 0.2589 - accuracy: 0.9360 - graph_loss: 0.3420
Epoch 30/100
17/17 [==============================] - 0s 15ms/step - loss: 0.2502 - accuracy: 0.9327 - graph_loss: 0.3381
Epoch 31/100
17/17 [==============================] - 0s 16ms/step - loss: 0.2291 - accuracy: 0.9401 - graph_loss: 0.3284
Epoch 32/100
17/17 [==============================] - 0s 16ms/step - loss: 0.2129 - accuracy: 0.9536 - graph_loss: 0.3393
Epoch 33/100
17/17 [==============================] - 0s 16ms/step - loss: 0.2220 - accuracy: 0.9476 - graph_loss: 0.3534
Epoch 34/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1990 - accuracy: 0.9550 - graph_loss: 0.3378
Epoch 35/100
17/17 [==============================] - 0s 16ms/step - loss: 0.2006 - accuracy: 0.9559 - graph_loss: 0.3506
Epoch 36/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1975 - accuracy: 0.9531 - graph_loss: 0.3349
Epoch 37/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1942 - accuracy: 0.9582 - graph_loss: 0.3417
Epoch 38/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1885 - accuracy: 0.9555 - graph_loss: 0.3382
Epoch 39/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1819 - accuracy: 0.9647 - graph_loss: 0.3335
Epoch 40/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1655 - accuracy: 0.9647 - graph_loss: 0.3347
Epoch 41/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1787 - accuracy: 0.9610 - graph_loss: 0.3340
Epoch 42/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1680 - accuracy: 0.9592 - graph_loss: 0.3412
Epoch 43/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1523 - accuracy: 0.9708 - graph_loss: 0.3470
Epoch 44/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1568 - accuracy: 0.9712 - graph_loss: 0.3370
Epoch 45/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1469 - accuracy: 0.9698 - graph_loss: 0.3484
Epoch 46/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1653 - accuracy: 0.9638 - graph_loss: 0.3426
Epoch 47/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1413 - accuracy: 0.9712 - graph_loss: 0.3387
Epoch 48/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1425 - accuracy: 0.9722 - graph_loss: 0.3455
Epoch 49/100
17/17 [==============================] - 0s 17ms/step - loss: 0.1426 - accuracy: 0.9712 - graph_loss: 0.3369
Epoch 50/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1344 - accuracy: 0.9759 - graph_loss: 0.3477
Epoch 51/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1323 - accuracy: 0.9735 - graph_loss: 0.3422
Epoch 52/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1352 - accuracy: 0.9740 - graph_loss: 0.3367
Epoch 53/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1315 - accuracy: 0.9731 - graph_loss: 0.3480
Epoch 54/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1165 - accuracy: 0.9810 - graph_loss: 0.3344
Epoch 55/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1253 - accuracy: 0.9745 - graph_loss: 0.3548
Epoch 56/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1233 - accuracy: 0.9773 - graph_loss: 0.3379
Epoch 57/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1323 - accuracy: 0.9708 - graph_loss: 0.3450
Epoch 58/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1253 - accuracy: 0.9759 - graph_loss: 0.3480
Epoch 59/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1226 - accuracy: 0.9726 - graph_loss: 0.3373
Epoch 60/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1165 - accuracy: 0.9796 - graph_loss: 0.3409
Epoch 61/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1152 - accuracy: 0.9819 - graph_loss: 0.3394
Epoch 62/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1113 - accuracy: 0.9810 - graph_loss: 0.3418
Epoch 63/100
17/17 [==============================] - 0s 15ms/step - loss: 0.1154 - accuracy: 0.9800 - graph_loss: 0.3387
Epoch 64/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1107 - accuracy: 0.9800 - graph_loss: 0.3337
Epoch 65/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1178 - accuracy: 0.9777 - graph_loss: 0.3495
Epoch 66/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1036 - accuracy: 0.9800 - graph_loss: 0.3394
Epoch 67/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1011 - accuracy: 0.9828 - graph_loss: 0.3518
Epoch 68/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1021 - accuracy: 0.9787 - graph_loss: 0.3441
Epoch 69/100
17/17 [==============================] - 0s 17ms/step - loss: 0.1151 - accuracy: 0.9768 - graph_loss: 0.3484
Epoch 70/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0993 - accuracy: 0.9824 - graph_loss: 0.3408
Epoch 71/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1004 - accuracy: 0.9814 - graph_loss: 0.3405
Epoch 72/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0991 - accuracy: 0.9838 - graph_loss: 0.3472
Epoch 73/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0935 - accuracy: 0.9828 - graph_loss: 0.3346
Epoch 74/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0897 - accuracy: 0.9842 - graph_loss: 0.3305
Epoch 75/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0961 - accuracy: 0.9814 - graph_loss: 0.3486
Epoch 76/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0893 - accuracy: 0.9875 - graph_loss: 0.3347
Epoch 77/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0908 - accuracy: 0.9856 - graph_loss: 0.3310
Epoch 78/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0994 - accuracy: 0.9828 - graph_loss: 0.3357
Epoch 79/100
17/17 [==============================] - 0s 16ms/step - loss: 0.1019 - accuracy: 0.9782 - graph_loss: 0.3353
Epoch 80/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0868 - accuracy: 0.9875 - graph_loss: 0.3370
Epoch 81/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0920 - accuracy: 0.9842 - graph_loss: 0.3424
Epoch 82/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0874 - accuracy: 0.9870 - graph_loss: 0.3385
Epoch 83/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0837 - accuracy: 0.9884 - graph_loss: 0.3337
Epoch 84/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0825 - accuracy: 0.9870 - graph_loss: 0.3366
Epoch 85/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0917 - accuracy: 0.9842 - graph_loss: 0.3409
Epoch 86/100
17/17 [==============================] - 0s 15ms/step - loss: 0.0889 - accuracy: 0.9833 - graph_loss: 0.3369
Epoch 87/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0846 - accuracy: 0.9833 - graph_loss: 0.3348
Epoch 88/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0842 - accuracy: 0.9861 - graph_loss: 0.3337
Epoch 89/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0847 - accuracy: 0.9884 - graph_loss: 0.3426
Epoch 90/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0822 - accuracy: 0.9847 - graph_loss: 0.3387
Epoch 91/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0810 - accuracy: 0.9856 - graph_loss: 0.3351
Epoch 92/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0896 - accuracy: 0.9814 - graph_loss: 0.3490
Epoch 93/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0791 - accuracy: 0.9870 - graph_loss: 0.3363
Epoch 94/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0893 - accuracy: 0.9847 - graph_loss: 0.3391
Epoch 95/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0791 - accuracy: 0.9865 - graph_loss: 0.3467
Epoch 96/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0829 - accuracy: 0.9893 - graph_loss: 0.3366
Epoch 97/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0850 - accuracy: 0.9856 - graph_loss: 0.3367
Epoch 98/100
17/17 [==============================] - 0s 17ms/step - loss: 0.0774 - accuracy: 0.9898 - graph_loss: 0.3363
Epoch 99/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0816 - accuracy: 0.9865 - graph_loss: 0.3454
Epoch 100/100
17/17 [==============================] - 0s 16ms/step - loss: 0.0741 - accuracy: 0.9870 - graph_loss: 0.3410

<tensorflow.python.keras.callbacks.History at 0x7f045862fef0>

Evaluate MLP model with graph regularization

eval_results = dict(
    zip(graph_reg_model.metrics_names,
        graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
5/5 [==============================] - 0s 28ms/step - loss: 1.2247 - accuracy: 0.8192


Eval accuracy for  MLP + graph regularization :  0.81916815
Eval loss for  MLP + graph regularization :  1.2247399926185607

The graph-regularized model's accuracy is about 3% higher (~81.9% vs. ~78.8%) than that of the base model (base_model).
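
If both evaluation dictionaries are retained (base_eval_results and graph_eval_results below are hypothetical names, since this notebook reuses the eval_results variable), the improvement can be computed directly:

# Hypothetical: assumes both evaluation result dicts were kept separately,
# populated with the accuracy values reported above.
base_eval_results = {'accuracy': 0.78842676}
graph_eval_results = {'accuracy': 0.81916815}

delta = graph_eval_results['accuracy'] - base_eval_results['accuracy']
print('Accuracy improvement: {:.2%}'.format(delta))  # ~3 percentage points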

Conclusion

We have demonstrated the use of graph regularization for document classification on a natural citation graph (Cora) using the Neural Structured Learning (NSL) framework. Our advanced tutorial involves synthesizing graphs based on sample embeddings before training a neural network with graph regularization. This approach is useful if the input does not contain an explicit graph.

We encourage users to experiment further by varying the amount of supervision as well as trying different neural architectures for graph regularization.
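
For instance, a simple sweep over the graph regularization multiplier (reusing the functions and datasets defined in this notebook; the multiplier values below are examples only) could look like:

# Example sweep over the graph regularization multiplier.
for multiplier in [0.01, 0.1, 1.0]:
  config = nsl.configs.make_graph_reg_config(
      max_neighbors=HPARAMS.num_neighbors,
      multiplier=multiplier,
      distance_type=HPARAMS.distance_type,
      sum_over_axis=-1)
  model = nsl.keras.GraphRegularization(
      make_mlp_functional_model(HPARAMS), config)
  model.compile(
      optimizer='adam',
      loss='sparse_categorical_crossentropy',
      metrics=['accuracy'])
  model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=0)
  results = dict(
      zip(model.metrics_names,
          model.evaluate(test_dataset, steps=HPARAMS.eval_steps, verbose=0)))
  print('multiplier={}: accuracy={:.4f}'.format(multiplier,
                                                results['accuracy']))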