
Graph regularization for document classification using natural graphs


Overview

Graph regularization is a specific technique under the broader paradigm of Neural Graph Learning (Bui et al., 2018). The core idea is to train neural network models with a graph-regularized objective, harnessing both labeled and unlabeled data.
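As an illustrative sketch (not the NSL implementation), the graph-regularized objective can be thought of as the supervised loss plus a weighted penalty on the distance between a sample's prediction and its neighbors' predictions. All numbers below are made up for illustration:

```python
def l2_sq(a, b):
    """Squared L2 distance between two prediction vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def graph_regularized_loss(supervised_loss, sample_pred, nbr_preds,
                           nbr_weights, multiplier=0.1):
    """Total loss = supervised loss + multiplier * weighted neighbor distance."""
    graph_loss = sum(w * l2_sq(sample_pred, p)
                     for p, w in zip(nbr_preds, nbr_weights))
    return supervised_loss + multiplier * graph_loss

# A sample whose prediction agrees with its neighbor incurs a smaller penalty.
pred = [0.8, 0.1, 0.1]
close_nbr = [0.7, 0.2, 0.1]
far_nbr = [0.1, 0.1, 0.8]
print(graph_regularized_loss(0.5, pred, [close_nbr], [1.0]))  # small penalty
print(graph_regularized_loss(0.5, pred, [far_nbr], [1.0]))    # larger penalty
```

Minimizing this objective encourages neighboring samples to receive similar predictions, which is how unlabeled neighbors can still influence training.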

In this tutorial, we explore the use of graph regularization to classify documents that form a natural (organic) graph.

The general recipe for creating a graph-regularized model using the Neural Structured Learning (NSL) framework is as follows:

  1. Generate training data from the input graph and sample features. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples. The resulting training data will contain neighbor features in addition to the original node features.
  2. Create a neural network as a base model using the Keras sequential, functional, or subclass API.
  3. Wrap the base model with the GraphRegularization wrapper class, provided by the NSL framework, to create a new graph Keras model. This new model will include a graph regularization loss as the regularization term in its training objective.
  4. Train and evaluate the graph Keras model.

Setup

Install the Neural Structured Learning package.

pip install --quiet neural-structured-learning

Dependencies and imports

import neural_structured_learning as nsl

import tensorflow as tf

# Resets notebook state
tf.keras.backend.clear_session()

print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print(
    "GPU is",
    "available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
Version:  2.2.0
Eager mode:  True
GPU is NOT AVAILABLE

The Cora dataset

The Cora dataset is a citation graph where nodes represent machine learning papers and edges represent citations between pairs of papers. The task is document classification, where the goal is to categorize each paper into one of 7 categories. In other words, this is a multi-class classification problem with 7 classes.

The graph

The original graph is directed. However, for the purpose of this example, we consider the undirected version of this graph. So, if paper A cites paper B, we also consider paper B to have cited A. Although this is not necessarily true, in this example we consider citations as a proxy for similarity, which is usually a commutative property.
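Treating a directed citation graph as undirected amounts to adding the reverse of every edge, which is what the preprocessing script later does when "making all edges bi-directional". A minimal sketch, with made-up paper IDs:

```python
# Directed citation edges: (citing paper, cited paper). IDs are made up.
cites = [(101, 202), (202, 303)]

# Add the reverse of each edge to obtain the undirected version.
undirected = set(cites) | {(dst, src) for src, dst in cites}
print(sorted(undirected))  # [(101, 202), (202, 101), (202, 303), (303, 202)]
```

Each citation now counts in both directions, so similarity between a citing and a cited paper is symmetric.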

Features

Each paper in the input effectively contains 2 features:

  1. Words: A dense multi-hot bag-of-words representation of the text in the paper. The vocabulary for the Cora dataset contains 1433 unique words. So, the length of this feature is 1433, and the value at position 'i' is 0/1, indicating whether word 'i' in the vocabulary exists in the given paper or not.

  2. Label: A single integer representing the class ID (category) of the paper.
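To make the 'words' feature concrete, here is a toy multi-hot encoding over a made-up 6-word vocabulary (the real Cora vocabulary has 1433 words):

```python
# Made-up vocabulary and paper contents, for illustration only.
vocabulary = ['bayesian', 'network', 'neural', 'learning', 'markov', 'kernel']
paper_words = {'neural', 'learning'}

# Position i is 1 if vocabulary word i occurs in the paper, else 0.
words_feature = [1 if w in paper_words else 0 for w in vocabulary]
print(words_feature)  # [0, 0, 1, 1, 0, 0]
```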

Download the Cora dataset

wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
tar -C /tmp -xvzf /tmp/cora.tgz
cora/
cora/README
cora/cora.cites
cora/cora.content

Convert the Cora data to the NSL format

In order to preprocess the Cora dataset and convert it to the format required by Neural Structured Learning, we will run the 'preprocess_cora_dataset.py' script, which is included in the NSL github repository. This script does the following:

  1. Generate neighbor features using the original node features and the graph.
  2. Generate train and test data splits containing tf.train.Example instances.
  3. Persist the resulting train and test data in the TFRecord format.
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py

!python preprocess_cora_dataset.py \
--input_cora_content=/tmp/cora/cora.content \
--input_cora_graph=/tmp/cora/cora.cites \
--max_nbrs=5 \
--output_train_data=/tmp/cora/train_merged_examples.tfr \
--output_test_data=/tmp/cora/test_examples.tfr
--2020-07-01 11:15:33--  https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.192.133, 151.101.128.133, 151.101.64.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.192.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11640 (11K) [text/plain]
Saving to: ‘preprocess_cora_dataset.py’

preprocess_cora_dat 100%[===================>]  11.37K  --.-KB/s    in 0s      

2020-07-01 11:15:33 (84.9 MB/s) - ‘preprocess_cora_dataset.py’ saved [11640/11640]

Reading graph file: /tmp/cora/cora.cites...
Done reading 5429 edges from: /tmp/cora/cora.cites (0.01 seconds).
Making all edges bi-directional...
Done (0.06 seconds). Total graph nodes: 2708
Joining seed and neighbor tf.train.Examples with graph edges...
Done creating and writing 2155 merged tf.train.Examples (1.38 seconds).
Out-degree histogram: [(1, 386), (2, 468), (3, 452), (4, 309), (5, 540)]
Output training data written to TFRecord file: /tmp/cora/train_merged_examples.tfr.
Output test data written to TFRecord file: /tmp/cora/test_examples.tfr.
Total running time: 0.04 minutes.

Global variables

The file paths to the train and test data are based on the command line flag values used to invoke the 'preprocess_cora_dataset.py' script above.

### Experiment dataset
TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'
TEST_DATA_PATH = '/tmp/cora/test_examples.tfr'

### Constants used to identify neighbor features in the input.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
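As a quick illustration of how these constants are used later in this notebook, neighbor i's word feature and edge weight are named by string concatenation:

```python
# Same constants as defined above.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'

def nbr_keys(i):
    """Returns the (feature, weight) key names for neighbor i."""
    return ('{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words'),
            '{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX))

print(nbr_keys(0))  # ('NL_nbr_0_words', 'NL_nbr_0_weight')
```

These are exactly the keys that appear in the feature list when we peek into the training dataset below.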

Hyperparameters

We will use an instance of HParams to hold various hyperparameters and constants used for training and evaluation. We briefly describe each of them below:

  • num_classes: There are a total of 7 classes.

  • max_seq_length: This is the size of the vocabulary; all instances in the input have a dense multi-hot, bag-of-words representation. In other words, a value of 1 for a word indicates that the word is present in the input and a value of 0 indicates that it is not.

  • distance_type: This is the distance metric used to regularize the sample with its neighbors.

  • graph_regularization_multiplier: This controls the relative weight of the graph regularization term in the overall loss function.

  • num_neighbors: The number of neighbors used for graph regularization. This value has to be less than or equal to the max_nbrs command-line argument used above when running preprocess_cora_dataset.py.

  • num_fc_units: A list giving the number of units in each fully connected layer of the neural network.

  • train_epochs: The number of training epochs.

  • batch_size: The batch size used for training and evaluation.

  • dropout_rate: Controls the rate of dropout following each fully connected layer.

  • eval_steps: The number of batches to process before deeming evaluation complete. If set to None, all instances in the test set are evaluated.

class HParams(object):
  """Hyperparameters used for training."""
  def __init__(self):
    ### dataset parameters
    self.num_classes = 7
    self.max_seq_length = 1433
    ### neural graph learning parameters
    self.distance_type = nsl.configs.DistanceType.L2
    self.graph_regularization_multiplier = 0.1
    self.num_neighbors = 1
    ### model architecture
    self.num_fc_units = [50, 50]
    ### training parameters
    self.train_epochs = 100
    self.batch_size = 128
    self.dropout_rate = 0.5
    ### eval parameters
    self.eval_steps = None  # All instances in the test set are evaluated.

HPARAMS = HParams()

Load train and test data

As described earlier in this notebook, the input training and test data have been created by 'preprocess_cora_dataset.py'. We will load them into two tf.data.Dataset objects -- one for training and one for testing.

In the input layer of our model, we will extract not just the 'words' and 'label' features from each sample, but also the corresponding neighbor features based on the hparams.num_neighbors value. Instances with fewer neighbors than hparams.num_neighbors will be assigned dummy values for the non-existent neighbor features.
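The padding behavior described above can be sketched as follows; the toy 3-word features and weights are made up for illustration. A weight of 0.0 for a dummy neighbor means it contributes nothing to the graph loss:

```python
# An instance with fewer real neighbors than num_neighbors gets padded with
# dummy (all-zero) neighbor features carrying a 0.0 edge weight.
num_neighbors = 2
real_nbrs = [([0, 1, 1], 1.0)]  # one real neighbor: (words, weight)

padded = real_nbrs + [([0, 0, 0], 0.0)] * (num_neighbors - len(real_nbrs))
print(padded)  # [([0, 1, 1], 1.0), ([0, 0, 0], 0.0)]
```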

def make_dataset(file_path, training=False):
  """Creates a `tf.data.TFRecordDataset`.

  Args:
    file_path: Name of the file in the `.tfrecord` format containing
      `tf.train.Example` objects.
    training: Boolean indicating if we are in training mode.

  Returns:
    An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
    objects.
  """

  def parse_example(example_proto):
    """Extracts relevant fields from the `example_proto`.

    Args:
      example_proto: An instance of `tf.train.Example`.

    Returns:
      A pair whose first value is a dictionary containing relevant features
      and whose second value contains the ground truth label.
    """
    # The 'words' feature is a multi-hot, bag-of-words representation of the
    # original raw text. A default value is required for examples that don't
    # have the feature.
    feature_spec = {
        'words':
            tf.io.FixedLenFeature([HPARAMS.max_seq_length],
                                  tf.int64,
                                  default_value=tf.constant(
                                      0,
                                      dtype=tf.int64,
                                      shape=[HPARAMS.max_seq_length])),
        'label':
            tf.io.FixedLenFeature((), tf.int64, default_value=-1),
    }
    # We also extract corresponding neighbor features in a similar manner to
    # the features above during training.
    if training:
      for i in range(HPARAMS.num_neighbors):
        nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
        nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i,
                                         NBR_WEIGHT_SUFFIX)
        feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(
            [HPARAMS.max_seq_length],
            tf.int64,
            default_value=tf.constant(
                0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))

        # We assign a default value of 0.0 for the neighbor weight so that
        # graph regularization is done on samples based on their exact number
        # of neighbors. In other words, non-existent neighbors are discounted.
        feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
            [1], tf.float32, default_value=tf.constant([0.0]))

    features = tf.io.parse_single_example(example_proto, feature_spec)

    label = features.pop('label')
    return features, label

  dataset = tf.data.TFRecordDataset([file_path])
  if training:
    dataset = dataset.shuffle(10000)
  dataset = dataset.map(parse_example)
  dataset = dataset.batch(HPARAMS.batch_size)
  return dataset


train_dataset = make_dataset(TRAIN_DATA_PATH, training=True)
test_dataset = make_dataset(TEST_DATA_PATH)

Let's peek into the train dataset to look at its contents.

for feature_batch, label_batch in train_dataset.take(1):
  print('Feature list:', list(feature_batch.keys()))
  print('Batch of inputs:', feature_batch['words'])
  nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
  nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
  print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
  print('Batch of neighbor weights:',
        tf.reshape(feature_batch[nbr_weight_key], [-1]))
  print('Batch of labels:', label_batch)
Feature list: ['NL_nbr_0_weight', 'NL_nbr_0_words', 'words']
Batch of inputs: tf.Tensor(
[[0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 ...
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]], shape=(128, 1433), dtype=int64)
Batch of neighbor inputs: tf.Tensor(
[[0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 ...
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]], shape=(128, 1433), dtype=int64)
Batch of neighbor weights: tf.Tensor(
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.

 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
 1. 1. 1. 1. 1. 1. 1. 1.], shape=(128,), dtype=float32)
Batch of labels: tf.Tensor(
[4 3 1 2 1 6 2 5 6 2 2 6 5 0 2 2 1 6 2 2 2 2 5 4 2 0 2 1 1 2 0 5 2 2 2 0 2
 2 0 6 1 1 0 2 1 2 3 2 0 0 0 4 1 3 3 1 2 5 3 3 1 1 6 0 0 4 6 5 6 0 3 4 2 2
 2 3 3 2 4 0 2 3 2 2 3 1 2 2 1 0 6 1 2 1 6 2 1 0 4 3 2 5 2 3 1 0 3 4 3 4 1
 0 5 6 4 2 1 1 2 5 3 4 3 1 3 2 6 3], shape=(128,), dtype=int64)

Let's peek into the test dataset to look at its contents.

for feature_batch, label_batch in test_dataset.take(1):
  print('Feature list:', list(feature_batch.keys()))
  print('Batch of inputs:', feature_batch['words'])
  print('Batch of labels:', label_batch)
Feature list: ['words']
Batch of inputs: tf.Tensor(
[[0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 ...
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]
 [0 0 0 ... 0 0 0]], shape=(128, 1433), dtype=int64)
Batch of labels: tf.Tensor(
[5 2 2 2 1 2 6 3 2 3 6 1 3 6 4 4 2 3 3 0 2 0 5 2 1 0 6 3 6 4 2 2 3 0 4 2 2
 2 2 3 2 2 2 0 2 2 2 2 4 2 3 4 0 2 6 2 1 4 2 0 0 1 4 2 6 0 5 2 2 3 2 5 2 5
 2 3 2 2 2 2 2 6 6 3 2 4 2 6 3 2 2 6 2 4 2 2 1 3 4 6 0 0 2 4 2 1 3 6 6 2 6
 6 6 1 4 6 4 3 6 6 0 0 2 6 2 4 0 0], shape=(128,), dtype=int64)

Model definition

To demonstrate the use of graph regularization, we first build a base model for this problem. We will use a simple feed-forward neural network with 2 hidden layers and dropout in between. We illustrate the creation of the base model using all model types supported by the tf.Keras framework -- sequential, functional, and subclass.

Sequential base model

def make_mlp_sequential_model(hparams):
  """Creates a sequential multi-layer perceptron model."""
  model = tf.keras.Sequential()
  model.add(
      tf.keras.layers.InputLayer(
          input_shape=(hparams.max_seq_length,), name='words'))
  # Input is already one-hot encoded in the integer format. We cast it to
  # floating point format here.
  model.add(
      tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))
  for num_units in hparams.num_fc_units:
    model.add(tf.keras.layers.Dense(num_units, activation='relu'))
    # For sequential models, by default, Keras ensures that the 'dropout' layer
    # is invoked only during training.
    model.add(tf.keras.layers.Dropout(hparams.dropout_rate))
  model.add(tf.keras.layers.Dense(hparams.num_classes, activation='softmax'))
  return model

Functional base model

def make_mlp_functional_model(hparams):
  """Creates a functional API-based multi-layer perceptron model."""
  inputs = tf.keras.Input(
      shape=(hparams.max_seq_length,), dtype='int64', name='words')

  # Input is already one-hot encoded in the integer format. We cast it to
  # floating point format here.
  cur_layer = tf.keras.layers.Lambda(
      lambda x: tf.keras.backend.cast(x, tf.float32))(
          inputs)

  for num_units in hparams.num_fc_units:
    cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)
    # For functional models, by default, Keras ensures that the 'dropout' layer
    # is invoked only during training.
    cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)

  outputs = tf.keras.layers.Dense(
      hparams.num_classes, activation='softmax')(
          cur_layer)

  model = tf.keras.Model(inputs, outputs=outputs)
  return model

Subclass base model

def make_mlp_subclass_model(hparams):
  """Creates a multi-layer perceptron subclass model in Keras."""

  class MLP(tf.keras.Model):
    """Subclass model defining a multi-layer perceptron."""

    def __init__(self):
      super(MLP, self).__init__()
      # Input is already one-hot encoded in the integer format. We create a
      # layer to cast it to floating point format here.
      self.cast_to_float_layer = tf.keras.layers.Lambda(
          lambda x: tf.keras.backend.cast(x, tf.float32))
      self.dense_layers = [
          tf.keras.layers.Dense(num_units, activation='relu')
          for num_units in hparams.num_fc_units
      ]
      self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)
      self.output_layer = tf.keras.layers.Dense(
          hparams.num_classes, activation='softmax')

    def call(self, inputs, training=False):
      cur_layer = self.cast_to_float_layer(inputs['words'])
      for dense_layer in self.dense_layers:
        cur_layer = dense_layer(cur_layer)
        cur_layer = self.dropout_layer(cur_layer, training=training)

      outputs = self.output_layer(cur_layer)

      return outputs

  return MLP()

Create the base model

# Create a base MLP model using the functional API.
# Alternatively, you can also create a sequential or subclass base model using
# the make_mlp_sequential_model() or make_mlp_subclass_model() functions
# respectively, defined above. Note that if a subclass model is used, its
# summary cannot be generated until it is built.
base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)
base_model.summary()
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
words (InputLayer)           [(None, 1433)]            0         
_________________________________________________________________
lambda (Lambda)              (None, 1433)              0         
_________________________________________________________________
dense (Dense)                (None, 50)                71700     
_________________________________________________________________
dropout (Dropout)            (None, 50)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 50)                2550      
_________________________________________________________________
dropout_1 (Dropout)          (None, 50)                0         
_________________________________________________________________
dense_2 (Dense)              (None, 7)                 357       
=================================================================
Total params: 74,607
Trainable params: 74,607
Non-trainable params: 0
_________________________________________________________________

Train the base MLP model

# Compile and train the base MLP model
base_model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
Epoch 1/100
17/17 [==============================] - 0s 11ms/step - loss: 1.9256 - accuracy: 0.1870
Epoch 2/100
17/17 [==============================] - 0s 10ms/step - loss: 1.8410 - accuracy: 0.2835
Epoch 3/100
17/17 [==============================] - 0s 9ms/step - loss: 1.7479 - accuracy: 0.3374
Epoch 4/100
17/17 [==============================] - 0s 10ms/step - loss: 1.6384 - accuracy: 0.3884
Epoch 5/100
17/17 [==============================] - 0s 9ms/step - loss: 1.5086 - accuracy: 0.4390
Epoch 6/100
17/17 [==============================] - 0s 10ms/step - loss: 1.3606 - accuracy: 0.5016
Epoch 7/100
17/17 [==============================] - 0s 9ms/step - loss: 1.2165 - accuracy: 0.5791
Epoch 8/100
17/17 [==============================] - 0s 10ms/step - loss: 1.0783 - accuracy: 0.6311
Epoch 9/100
17/17 [==============================] - 0s 9ms/step - loss: 0.9552 - accuracy: 0.6947
Epoch 10/100
17/17 [==============================] - 0s 9ms/step - loss: 0.8680 - accuracy: 0.7090
Epoch 11/100
17/17 [==============================] - 0s 9ms/step - loss: 0.7915 - accuracy: 0.7425
Epoch 12/100
17/17 [==============================] - 0s 9ms/step - loss: 0.7124 - accuracy: 0.7773
Epoch 13/100
17/17 [==============================] - 0s 9ms/step - loss: 0.6582 - accuracy: 0.7907
Epoch 14/100
17/17 [==============================] - 0s 10ms/step - loss: 0.6021 - accuracy: 0.8065
Epoch 15/100
17/17 [==============================] - 0s 10ms/step - loss: 0.5416 - accuracy: 0.8325
Epoch 16/100
17/17 [==============================] - 0s 10ms/step - loss: 0.5042 - accuracy: 0.8473
Epoch 17/100
17/17 [==============================] - 0s 10ms/step - loss: 0.4433 - accuracy: 0.8761
Epoch 18/100
17/17 [==============================] - 0s 10ms/step - loss: 0.4310 - accuracy: 0.8640
Epoch 19/100
17/17 [==============================] - 0s 9ms/step - loss: 0.3894 - accuracy: 0.8840
Epoch 20/100
17/17 [==============================] - 0s 9ms/step - loss: 0.3676 - accuracy: 0.8891
Epoch 21/100
17/17 [==============================] - 0s 10ms/step - loss: 0.3576 - accuracy: 0.8812
Epoch 22/100
17/17 [==============================] - 0s 9ms/step - loss: 0.3132 - accuracy: 0.9067
Epoch 23/100
17/17 [==============================] - 0s 9ms/step - loss: 0.3058 - accuracy: 0.9142
Epoch 24/100
17/17 [==============================] - 0s 9ms/step - loss: 0.2924 - accuracy: 0.9155
Epoch 25/100
17/17 [==============================] - 0s 9ms/step - loss: 0.2769 - accuracy: 0.9197
Epoch 26/100
17/17 [==============================] - 0s 9ms/step - loss: 0.2636 - accuracy: 0.9244
Epoch 27/100
17/17 [==============================] - 0s 9ms/step - loss: 0.2429 - accuracy: 0.9313
Epoch 28/100
17/17 [==============================] - 0s 9ms/step - loss: 0.2324 - accuracy: 0.9323
Epoch 29/100
17/17 [==============================] - 0s 9ms/step - loss: 0.2285 - accuracy: 0.9346
Epoch 30/100
17/17 [==============================] - 0s 9ms/step - loss: 0.2039 - accuracy: 0.9374
Epoch 31/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1943 - accuracy: 0.9471
Epoch 32/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1898 - accuracy: 0.9439
Epoch 33/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1879 - accuracy: 0.9425
Epoch 34/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1828 - accuracy: 0.9443
Epoch 35/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1635 - accuracy: 0.9541
Epoch 36/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1648 - accuracy: 0.9476
Epoch 37/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1603 - accuracy: 0.9499
Epoch 38/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1428 - accuracy: 0.9624
Epoch 39/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1483 - accuracy: 0.9601
Epoch 40/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1352 - accuracy: 0.9582
Epoch 41/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1379 - accuracy: 0.9555
Epoch 42/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1410 - accuracy: 0.9582
Epoch 43/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1198 - accuracy: 0.9684
Epoch 44/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1148 - accuracy: 0.9731
Epoch 45/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1228 - accuracy: 0.9657
Epoch 46/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1135 - accuracy: 0.9703
Epoch 47/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1134 - accuracy: 0.9661
Epoch 48/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1175 - accuracy: 0.9619
Epoch 49/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1002 - accuracy: 0.9703
Epoch 50/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1143 - accuracy: 0.9671
Epoch 51/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0923 - accuracy: 0.9777
Epoch 52/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1068 - accuracy: 0.9731
Epoch 53/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0972 - accuracy: 0.9712
Epoch 54/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0828 - accuracy: 0.9796
Epoch 55/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1036 - accuracy: 0.9703
Epoch 56/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0954 - accuracy: 0.9745
Epoch 57/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0883 - accuracy: 0.9768
Epoch 58/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0859 - accuracy: 0.9777
Epoch 59/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0856 - accuracy: 0.9759
Epoch 60/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0858 - accuracy: 0.9754
Epoch 61/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0848 - accuracy: 0.9726
Epoch 62/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0840 - accuracy: 0.9763
Epoch 63/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0770 - accuracy: 0.9805
Epoch 64/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0823 - accuracy: 0.9745
Epoch 65/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0665 - accuracy: 0.9828
Epoch 66/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0788 - accuracy: 0.9777
Epoch 67/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0690 - accuracy: 0.9800
Epoch 68/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0683 - accuracy: 0.9805
Epoch 69/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0615 - accuracy: 0.9838
Epoch 70/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0618 - accuracy: 0.9833
Epoch 71/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0659 - accuracy: 0.9810
Epoch 72/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0704 - accuracy: 0.9800
Epoch 73/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0645 - accuracy: 0.9814
Epoch 74/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0645 - accuracy: 0.9791
Epoch 75/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0638 - accuracy: 0.9791
Epoch 76/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0648 - accuracy: 0.9814
Epoch 77/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0591 - accuracy: 0.9838
Epoch 78/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0606 - accuracy: 0.9861
Epoch 79/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0699 - accuracy: 0.9814
Epoch 80/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0603 - accuracy: 0.9828
Epoch 81/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0629 - accuracy: 0.9828
Epoch 82/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0596 - accuracy: 0.9828
Epoch 83/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0542 - accuracy: 0.9828
Epoch 84/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0452 - accuracy: 0.9893
Epoch 85/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0551 - accuracy: 0.9838
Epoch 86/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0555 - accuracy: 0.9842
Epoch 87/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0514 - accuracy: 0.9824
Epoch 88/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0553 - accuracy: 0.9847
Epoch 89/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0475 - accuracy: 0.9884
Epoch 90/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0476 - accuracy: 0.9893
Epoch 91/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0427 - accuracy: 0.9903
Epoch 92/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0475 - accuracy: 0.9847
Epoch 93/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0423 - accuracy: 0.9893
Epoch 94/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0473 - accuracy: 0.9865
Epoch 95/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0560 - accuracy: 0.9819
Epoch 96/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0547 - accuracy: 0.9810
Epoch 97/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0576 - accuracy: 0.9814
Epoch 98/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0429 - accuracy: 0.9893
Epoch 99/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0440 - accuracy: 0.9875
Epoch 100/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0513 - accuracy: 0.9838

<tensorflow.python.keras.callbacks.History at 0x7fc47a3c78d0>

Evaluate the base MLP model

# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
  """Prints evaluation metrics.

  Args:
    model_desc: A description of the model.
    eval_metrics: A dictionary mapping metric names to corresponding values. It
      must contain the loss and accuracy metrics.
  """
  print('\n')
  print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
  print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
  if 'graph_loss' in eval_metrics:
    print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])
eval_results = dict(
    zip(base_model.metrics_names,
        base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', eval_results)
5/5 [==============================] - 0s 5ms/step - loss: 1.3380 - accuracy: 0.7740


Eval accuracy for  Base MLP model :  0.7739602327346802
Eval loss for  Base MLP model :  1.3379606008529663

Train the MLP model with graph regularization

Incorporating graph regularization into the loss term of an existing tf.Keras.Model requires just a few lines of code. The base model is wrapped to create a new tf.Keras subclass model, whose loss includes graph regularization.

To assess the incremental benefit of graph regularization, we will create a new base model instance. This is because base_model has already been trained for a few iterations, and reusing this trained model to create a graph-regularized model would not be a fair comparison against base_model.

# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
    HPARAMS)
# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
    max_neighbors=HPARAMS.num_neighbors,
    multiplier=HPARAMS.graph_regularization_multiplier,
    distance_type=HPARAMS.distance_type,
    sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
                                                graph_reg_config)
graph_reg_model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
Epoch 1/100

/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/framework/indexed_slices.py:434: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "

17/17 [==============================] - 0s 10ms/step - loss: 1.9454 - accuracy: 0.1652 - graph_loss: 0.0076
Epoch 2/100
17/17 [==============================] - 0s 10ms/step - loss: 1.8517 - accuracy: 0.2956 - graph_loss: 0.0117
Epoch 3/100
17/17 [==============================] - 0s 10ms/step - loss: 1.7589 - accuracy: 0.3151 - graph_loss: 0.0261
Epoch 4/100
17/17 [==============================] - 0s 10ms/step - loss: 1.6714 - accuracy: 0.3392 - graph_loss: 0.0476
Epoch 5/100
17/17 [==============================] - 0s 9ms/step - loss: 1.5607 - accuracy: 0.4037 - graph_loss: 0.0622
Epoch 6/100
17/17 [==============================] - 0s 10ms/step - loss: 1.4486 - accuracy: 0.4807 - graph_loss: 0.0921
Epoch 7/100
17/17 [==============================] - 0s 10ms/step - loss: 1.3135 - accuracy: 0.5383 - graph_loss: 0.1236
Epoch 8/100
17/17 [==============================] - 0s 10ms/step - loss: 1.1902 - accuracy: 0.5912 - graph_loss: 0.1616
Epoch 9/100
17/17 [==============================] - 0s 10ms/step - loss: 1.0647 - accuracy: 0.6575 - graph_loss: 0.1920
Epoch 10/100
17/17 [==============================] - 0s 9ms/step - loss: 0.9416 - accuracy: 0.7067 - graph_loss: 0.2181
Epoch 11/100
17/17 [==============================] - 0s 10ms/step - loss: 0.8601 - accuracy: 0.7378 - graph_loss: 0.2470
Epoch 12/100
17/17 [==============================] - 0s 9ms/step - loss: 0.7968 - accuracy: 0.7462 - graph_loss: 0.2565
Epoch 13/100
17/17 [==============================] - 0s 10ms/step - loss: 0.6881 - accuracy: 0.7912 - graph_loss: 0.2681
Epoch 14/100
17/17 [==============================] - 0s 10ms/step - loss: 0.6548 - accuracy: 0.8139 - graph_loss: 0.2941
Epoch 15/100
17/17 [==============================] - 0s 10ms/step - loss: 0.5874 - accuracy: 0.8376 - graph_loss: 0.3010
Epoch 16/100
17/17 [==============================] - 0s 9ms/step - loss: 0.5537 - accuracy: 0.8348 - graph_loss: 0.3014
Epoch 17/100
17/17 [==============================] - 0s 10ms/step - loss: 0.5123 - accuracy: 0.8529 - graph_loss: 0.3097
Epoch 18/100
17/17 [==============================] - 0s 10ms/step - loss: 0.4771 - accuracy: 0.8640 - graph_loss: 0.3192
Epoch 19/100
17/17 [==============================] - 0s 10ms/step - loss: 0.4294 - accuracy: 0.8826 - graph_loss: 0.3182
Epoch 20/100
17/17 [==============================] - 0s 10ms/step - loss: 0.4109 - accuracy: 0.8854 - graph_loss: 0.3169
Epoch 21/100
17/17 [==============================] - 0s 9ms/step - loss: 0.3901 - accuracy: 0.8965 - graph_loss: 0.3250
Epoch 22/100
17/17 [==============================] - 0s 9ms/step - loss: 0.3700 - accuracy: 0.8956 - graph_loss: 0.3349
Epoch 23/100
17/17 [==============================] - 0s 10ms/step - loss: 0.3716 - accuracy: 0.8974 - graph_loss: 0.3408
Epoch 24/100
17/17 [==============================] - 0s 10ms/step - loss: 0.3258 - accuracy: 0.9202 - graph_loss: 0.3361
Epoch 25/100
17/17 [==============================] - 0s 10ms/step - loss: 0.3043 - accuracy: 0.9253 - graph_loss: 0.3351
Epoch 26/100
17/17 [==============================] - 0s 10ms/step - loss: 0.2919 - accuracy: 0.9253 - graph_loss: 0.3361
Epoch 27/100
17/17 [==============================] - 0s 10ms/step - loss: 0.3005 - accuracy: 0.9202 - graph_loss: 0.3249
Epoch 28/100
17/17 [==============================] - 0s 10ms/step - loss: 0.2629 - accuracy: 0.9336 - graph_loss: 0.3442
Epoch 29/100
17/17 [==============================] - 0s 10ms/step - loss: 0.2617 - accuracy: 0.9401 - graph_loss: 0.3302
Epoch 30/100
17/17 [==============================] - 0s 10ms/step - loss: 0.2510 - accuracy: 0.9383 - graph_loss: 0.3436
Epoch 31/100
17/17 [==============================] - 0s 10ms/step - loss: 0.2452 - accuracy: 0.9411 - graph_loss: 0.3364
Epoch 32/100
17/17 [==============================] - 0s 10ms/step - loss: 0.2397 - accuracy: 0.9466 - graph_loss: 0.3333
Epoch 33/100
17/17 [==============================] - 0s 10ms/step - loss: 0.2239 - accuracy: 0.9466 - graph_loss: 0.3373
Epoch 34/100
17/17 [==============================] - 0s 9ms/step - loss: 0.2084 - accuracy: 0.9513 - graph_loss: 0.3330
Epoch 35/100
17/17 [==============================] - 0s 10ms/step - loss: 0.2075 - accuracy: 0.9499 - graph_loss: 0.3383
Epoch 36/100
17/17 [==============================] - 0s 10ms/step - loss: 0.2064 - accuracy: 0.9513 - graph_loss: 0.3394
Epoch 37/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1857 - accuracy: 0.9568 - graph_loss: 0.3371
Epoch 38/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1799 - accuracy: 0.9601 - graph_loss: 0.3477
Epoch 39/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1844 - accuracy: 0.9573 - graph_loss: 0.3385
Epoch 40/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1823 - accuracy: 0.9592 - graph_loss: 0.3445
Epoch 41/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1713 - accuracy: 0.9615 - graph_loss: 0.3451
Epoch 42/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1669 - accuracy: 0.9624 - graph_loss: 0.3398
Epoch 43/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1692 - accuracy: 0.9671 - graph_loss: 0.3483
Epoch 44/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1605 - accuracy: 0.9647 - graph_loss: 0.3437
Epoch 45/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1485 - accuracy: 0.9703 - graph_loss: 0.3338
Epoch 46/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1467 - accuracy: 0.9717 - graph_loss: 0.3405
Epoch 47/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1492 - accuracy: 0.9694 - graph_loss: 0.3466
Epoch 48/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1577 - accuracy: 0.9666 - graph_loss: 0.3338
Epoch 49/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1363 - accuracy: 0.9773 - graph_loss: 0.3424
Epoch 50/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1511 - accuracy: 0.9694 - graph_loss: 0.3402
Epoch 51/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1366 - accuracy: 0.9759 - graph_loss: 0.3385
Epoch 52/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1254 - accuracy: 0.9777 - graph_loss: 0.3474
Epoch 53/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1289 - accuracy: 0.9740 - graph_loss: 0.3469
Epoch 54/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1410 - accuracy: 0.9689 - graph_loss: 0.3475
Epoch 55/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1356 - accuracy: 0.9703 - graph_loss: 0.3483
Epoch 56/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1283 - accuracy: 0.9773 - graph_loss: 0.3412
Epoch 57/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1264 - accuracy: 0.9745 - graph_loss: 0.3473
Epoch 58/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1242 - accuracy: 0.9740 - graph_loss: 0.3443
Epoch 59/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1144 - accuracy: 0.9782 - graph_loss: 0.3440
Epoch 60/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1250 - accuracy: 0.9735 - graph_loss: 0.3357
Epoch 61/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1190 - accuracy: 0.9787 - graph_loss: 0.3400
Epoch 62/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1141 - accuracy: 0.9814 - graph_loss: 0.3419
Epoch 63/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1085 - accuracy: 0.9787 - graph_loss: 0.3395
Epoch 64/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1148 - accuracy: 0.9768 - graph_loss: 0.3504
Epoch 65/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1137 - accuracy: 0.9791 - graph_loss: 0.3360
Epoch 66/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1121 - accuracy: 0.9745 - graph_loss: 0.3469
Epoch 67/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1046 - accuracy: 0.9810 - graph_loss: 0.3476
Epoch 68/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1112 - accuracy: 0.9791 - graph_loss: 0.3431
Epoch 69/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1075 - accuracy: 0.9787 - graph_loss: 0.3455
Epoch 70/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0986 - accuracy: 0.9875 - graph_loss: 0.3403
Epoch 71/100
17/17 [==============================] - 0s 9ms/step - loss: 0.1141 - accuracy: 0.9782 - graph_loss: 0.3508
Epoch 72/100
17/17 [==============================] - 0s 10ms/step - loss: 0.1012 - accuracy: 0.9814 - graph_loss: 0.3453
Epoch 73/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0958 - accuracy: 0.9833 - graph_loss: 0.3430
Epoch 74/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0958 - accuracy: 0.9842 - graph_loss: 0.3447
Epoch 75/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0988 - accuracy: 0.9842 - graph_loss: 0.3430
Epoch 76/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0915 - accuracy: 0.9856 - graph_loss: 0.3475
Epoch 77/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0960 - accuracy: 0.9833 - graph_loss: 0.3353
Epoch 78/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0916 - accuracy: 0.9838 - graph_loss: 0.3441
Epoch 79/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0979 - accuracy: 0.9800 - graph_loss: 0.3476
Epoch 80/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0994 - accuracy: 0.9782 - graph_loss: 0.3400
Epoch 81/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0978 - accuracy: 0.9838 - graph_loss: 0.3386
Epoch 82/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0994 - accuracy: 0.9805 - graph_loss: 0.3416
Epoch 83/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0957 - accuracy: 0.9838 - graph_loss: 0.3398
Epoch 84/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0896 - accuracy: 0.9879 - graph_loss: 0.3379
Epoch 85/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0891 - accuracy: 0.9838 - graph_loss: 0.3441
Epoch 86/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0906 - accuracy: 0.9847 - graph_loss: 0.3445
Epoch 87/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0891 - accuracy: 0.9852 - graph_loss: 0.3506
Epoch 88/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0821 - accuracy: 0.9898 - graph_loss: 0.3448
Epoch 89/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0803 - accuracy: 0.9865 - graph_loss: 0.3370
Epoch 90/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0896 - accuracy: 0.9828 - graph_loss: 0.3428
Epoch 91/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0887 - accuracy: 0.9852 - graph_loss: 0.3505
Epoch 92/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0882 - accuracy: 0.9847 - graph_loss: 0.3396
Epoch 93/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0807 - accuracy: 0.9879 - graph_loss: 0.3473
Epoch 94/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0820 - accuracy: 0.9861 - graph_loss: 0.3367
Epoch 95/100
17/17 [==============================] - 0s 9ms/step - loss: 0.0864 - accuracy: 0.9838 - graph_loss: 0.3353
Epoch 96/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0786 - accuracy: 0.9889 - graph_loss: 0.3392
Epoch 97/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0735 - accuracy: 0.9912 - graph_loss: 0.3443
Epoch 98/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0861 - accuracy: 0.9842 - graph_loss: 0.3381
Epoch 99/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0850 - accuracy: 0.9833 - graph_loss: 0.3376
Epoch 100/100
17/17 [==============================] - 0s 10ms/step - loss: 0.0841 - accuracy: 0.9879 - graph_loss: 0.3510

<tensorflow.python.keras.callbacks.History at 0x7fc3d853ce10>

Evaluate the MLP model with graph regularization

eval_results = dict(
    zip(graph_reg_model.metrics_names,
        graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
5/5 [==============================] - 0s 6ms/step - loss: 1.2475 - accuracy: 0.8192


Eval accuracy for  MLP + graph regularization :  0.8191681504249573
Eval loss for  MLP + graph regularization :  1.2474583387374878

The accuracy of the graph-regularized model is about 2-3% higher than that of the base model (base_model).

Conclusion

We have demonstrated the use of graph regularization for document classification on a natural citation graph (Cora) using the Neural Structured Learning (NSL) framework. Our advanced tutorial involves synthesizing a graph based on sample embeddings before training a neural network with graph regularization. This approach is useful when the input does not contain an explicit graph.
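The graph-synthesis idea mentioned above can be illustrated with a small, library-free sketch: connect every pair of samples whose embedding cosine similarity exceeds a threshold. This is only a minimal illustration (the function name, the toy embeddings, and the threshold value are all hypothetical); the NSL framework provides its own graph-building utilities that operate on embedding files.

```python
import numpy as np

def build_similarity_graph(embeddings, threshold=0.8):
    """Connect every pair of samples whose cosine similarity exceeds
    the threshold. Returns undirected edges as (i, j, weight) tuples."""
    # Normalize rows so that dot products become cosine similarities.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms
    sims = unit @ unit.T
    edges = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if sims[i, j] >= threshold:
                edges.append((i, j, float(sims[i, j])))
    return edges

# Toy example: the first two embeddings point in nearly the same
# direction, the third is orthogonal to both.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(build_similarity_graph(emb, threshold=0.8))
```

With the toy embeddings above, only the pair (0, 1) clears the 0.8 threshold, so the synthesized graph contains a single edge.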

We encourage users to experiment further by varying the amount of supervision and by trying different neural architectures for graph regularization.
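As a starting point for such experiments, it helps to see the objective being minimized written out directly: the graph-regularized loss is the supervised loss plus a multiplier times the distance between a sample's prediction and its neighbors' predictions. The sketch below (plain NumPy; the function name and the toy values are hypothetical, and squared Euclidean distance stands in for whatever distance the configuration selects) computes this combined loss for one sample:

```python
import numpy as np

def graph_regularized_loss(supervised_loss, sample_logits,
                           neighbor_logits, multiplier=0.1):
    """Total loss = supervised loss + multiplier * mean squared
    distance between a sample's logits and its neighbors' logits."""
    diffs = neighbor_logits - sample_logits  # broadcast over neighbors
    graph_loss = np.mean(np.sum(diffs ** 2, axis=1))
    return supervised_loss + multiplier * graph_loss

sample = np.array([0.2, 0.8])
neighbors = np.array([[0.2, 0.8],   # identical neighbor: no penalty
                      [0.4, 0.6]])  # divergent neighbor: penalized
print(graph_regularized_loss(1.0, sample, neighbors, multiplier=0.5))
```

Raising the multiplier (or the number of neighbors drawn from the graph) strengthens the pull toward neighbor agreement, which is exactly the knob to turn when experimenting with less labeled supervision.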