
Fine-tuning a BERT model


In this example, we will work through fine-tuning a BERT model using the tensorflow-models pip package.

The pre-trained BERT model this tutorial is based on is also available on TensorFlow Hub; to see how to use it, refer to the Hub appendix.

Setup

Install the TensorFlow Model Garden pip package

  • tf-models-nightly is the nightly Model Garden package, created automatically every day.
  • pip will install all models and their dependencies automatically.
pip install -q tf-nightly
pip install -q tf-models-nightly

Imports

 import os

import numpy as np
import matplotlib.pyplot as plt

import tensorflow as tf

import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()

from official.modeling import tf_utils
from official import nlp
from official.nlp import bert

# Load the required submodules
import official.nlp.optimization
import official.nlp.bert.bert_models
import official.nlp.bert.configs
import official.nlp.bert.run_classifier
import official.nlp.bert.tokenization
import official.nlp.data.classifier_data_lib
import official.nlp.modeling.losses
import official.nlp.modeling.models
import official.nlp.modeling.networks
 
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_addons/utils/ensure_tf_install.py:44: UserWarning: You are currently using a nightly version of TensorFlow (2.3.0-dev20200623). 
TensorFlow Addons offers no support for the nightly versions of TensorFlow. Some things might work, some other might not. 
If you encounter a bug, do not file an issue on GitHub.
  UserWarning,

Resources

This directory contains the configuration, vocabulary, and a pre-trained checkpoint used in this tutorial:

 gs_folder_bert = "gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-12_H-768_A-12"
tf.io.gfile.listdir(gs_folder_bert)
 
['bert_config.json',
 'bert_model.ckpt.data-00000-of-00001',
 'bert_model.ckpt.index',
 'vocab.txt']

You can get a pre-trained BERT encoder from TensorFlow Hub here:

 hub_url_bert = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2"
 

The data

In this example we use the GLUE MRPC dataset from TFDS.

This dataset is not set up so that it can be fed directly into the BERT model, so this section also handles the necessary preprocessing.

Get the dataset from TensorFlow Datasets

The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in each pair are semantically equivalent.

  • Number of labels: 2.
  • Size of training dataset: 3668.
  • Size of evaluation dataset: 408.
  • Maximum sequence length of training and evaluation dataset: 128.
 glue, info = tfds.load('glue/mrpc', with_info=True,
                       # It's small, load the whole dataset
                       batch_size=-1)
 
Downloading and preparing dataset glue/mrpc/1.0.0 (download: 1.43 MiB, generated: Unknown size, total: 1.43 MiB) to /home/kbuilder/tensorflow_datasets/glue/mrpc/1.0.0...

/usr/lib/python3/dist-packages/urllib3/connectionpool.py:860: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)

Shuffling and writing examples to /home/kbuilder/tensorflow_datasets/glue/mrpc/1.0.0.incomplete1RTRDK/glue-train.tfrecord
Shuffling and writing examples to /home/kbuilder/tensorflow_datasets/glue/mrpc/1.0.0.incomplete1RTRDK/glue-validation.tfrecord
Shuffling and writing examples to /home/kbuilder/tensorflow_datasets/glue/mrpc/1.0.0.incomplete1RTRDK/glue-test.tfrecord
Dataset glue downloaded and prepared to /home/kbuilder/tensorflow_datasets/glue/mrpc/1.0.0. Subsequent calls will reuse this data.

 list(glue.keys())
 
['test', 'train', 'validation']

The info object describes the dataset and its features:

 info.features
 
FeaturesDict({
    'idx': tf.int32,
    'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=2),
    'sentence1': Text(shape=(), dtype=tf.string),
    'sentence2': Text(shape=(), dtype=tf.string),
})

The two classes are:

 info.features['label'].names
 
['not_equivalent', 'equivalent']

Here is one example from the training set:

 glue_train = glue['train']

for key, value in glue_train.items():
  print(f"{key:9s}: {value[0].numpy()}")
 
idx      : 1680
label    : 0
sentence1: b'The identical rovers will act as robotic geologists , searching for evidence of past water .'
sentence2: b'The rovers act as robotic geologists , moving on six wheels .'

The BERT tokenizer

To fine-tune a pre-trained model you need to be sure that you're using exactly the same tokenization, vocabulary, and index mapping as were used during training.

The BERT tokenizer used in this tutorial is written in pure Python (it isn't built out of TensorFlow ops). So you can't just plug it into your model as a keras.layer the way you can with preprocessing.TextVectorization.

The following code rebuilds the tokenizer that was used by the base model:

 # Set up tokenizer to generate Tensorflow dataset
tokenizer = bert.tokenization.FullTokenizer(
    vocab_file=os.path.join(gs_folder_bert, "vocab.txt"),
     do_lower_case=True)

print("Vocab size:", len(tokenizer.vocab))
 
Vocab size: 30522

Tokenize a sentence:

 tokens = tokenizer.tokenize("Hello TensorFlow!")
print(tokens)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
 
['hello', 'tensor', '##flow', '!']
[7592, 23435, 12314, 999]
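
Because the tokenizer runs in plain Python, one option (not used in this tutorial) for calling it from a tf.data pipeline is to wrap it in tf.py_function. The helper below is only a minimal, hypothetical sketch of that pattern:

 def tf_encode(s):
  # Inside the py_function the tensor arrives eagerly, so `.numpy()` works,
  # just as in the `encode_sentence` helpers later in this tutorial.
  def py_encode(s):
    tokens = tokenizer.tokenize(s.numpy())
    return tf.constant(tokenizer.convert_tokens_to_ids(tokens), dtype=tf.int32)
  return tf.py_function(py_encode, inp=[s], Tout=tf.int32)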

Preprocess the data

This section manually preprocesses the dataset into the format expected by the model.

This dataset is small, so preprocessing can be done quickly and easily in memory. For larger datasets the tf_models library includes some tools for preprocessing and re-serializing a dataset. See the appendix, Re-encoding a large dataset, for details.

Encode the sentences

The model expects its two input sentences to be concatenated together. This input is expected to start with a [CLS] "This is a classification problem" token, and each sentence should end with a [SEP] "Separator" token:

 tokenizer.convert_tokens_to_ids(['[CLS]', '[SEP]'])
 
[101, 102]
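
For example, the combined layout for one (hypothetical) sentence pair looks like this:

 # Hypothetical pair, showing the expected layout:
#   [CLS] <sentence1 tokens> [SEP] <sentence2 tokens> [SEP]
s1 = tokenizer.tokenize("The rovers act as robotic geologists.")
s2 = tokenizer.tokenize("The rovers search for evidence of past water.")
combined = ['[CLS]'] + s1 + ['[SEP]'] + s2 + ['[SEP]']
print(combined)
print(tokenizer.convert_tokens_to_ids(combined))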

Start by encoding all the sentences, appending a [SEP] token to each and packing them into ragged tensors:

 def encode_sentence(s):
   tokens = list(tokenizer.tokenize(s.numpy()))
   tokens.append('[SEP]')
   return tokenizer.convert_tokens_to_ids(tokens)

sentence1 = tf.ragged.constant([
    encode_sentence(s) for s in glue_train["sentence1"]])
sentence2 = tf.ragged.constant([
    encode_sentence(s) for s in glue_train["sentence2"]])
 
 print("Sentence1 shape:", sentence1.shape.as_list())
print("Sentence2 shape:", sentence2.shape.as_list())
 
Sentence1 shape: [3668, None]
Sentence2 shape: [3668, None]

Now prepend a [CLS] token and concatenate the ragged tensors to form a single input_word_ids tensor for each example. RaggedTensor.to_tensor() zero-pads to the longest sequence.

 cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]
input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)
_ = plt.pcolormesh(input_word_ids.to_tensor())
 


Mask and input type

The model expects two additional inputs:

  • The input mask
  • The input type

The mask allows the model to cleanly differentiate between the content and the padding. The mask has the same shape as the input_word_ids, and contains a 1 anywhere the input_word_ids are not padding.

 input_mask = tf.ones_like(input_word_ids).to_tensor()

plt.pcolormesh(input_mask)
 
<matplotlib.collections.QuadMesh at 0x7f82246c0cf8>


The "input type" also has the same shape, but inside the non-padded region it contains a 0 or a 1 indicating which sentence the token is a part of.

 type_cls = tf.zeros_like(cls)
type_s1 = tf.zeros_like(sentence1)
type_s2 = tf.ones_like(sentence2)
input_type_ids = tf.concat([type_cls, type_s1, type_s2], axis=-1).to_tensor()

plt.pcolormesh(input_type_ids)
 
<matplotlib.collections.QuadMesh at 0x7f8224668438>


Put it all together

Collect the text-parsing code above into a single function, and apply it to each split of the glue/mrpc dataset.

 def encode_sentence(s, tokenizer):
   tokens = list(tokenizer.tokenize(s))
   tokens.append('[SEP]')
   return tokenizer.convert_tokens_to_ids(tokens)

def bert_encode(glue_dict, tokenizer):
  num_examples = len(glue_dict["sentence1"])
  
  sentence1 = tf.ragged.constant([
      encode_sentence(s, tokenizer)
      for s in np.array(glue_dict["sentence1"])])
  sentence2 = tf.ragged.constant([
      encode_sentence(s, tokenizer)
       for s in np.array(glue_dict["sentence2"])])

  cls = [tokenizer.convert_tokens_to_ids(['[CLS]'])]*sentence1.shape[0]
  input_word_ids = tf.concat([cls, sentence1, sentence2], axis=-1)

  input_mask = tf.ones_like(input_word_ids).to_tensor()

  type_cls = tf.zeros_like(cls)
  type_s1 = tf.zeros_like(sentence1)
  type_s2 = tf.ones_like(sentence2)
  input_type_ids = tf.concat(
      [type_cls, type_s1, type_s2], axis=-1).to_tensor()

  inputs = {
      'input_word_ids': input_word_ids.to_tensor(),
      'input_mask': input_mask,
      'input_type_ids': input_type_ids}

  return inputs
 
 glue_train = bert_encode(glue['train'], tokenizer)
glue_train_labels = glue['train']['label']

glue_validation = bert_encode(glue['validation'], tokenizer)
glue_validation_labels = glue['validation']['label']

glue_test = bert_encode(glue['test'], tokenizer)
glue_test_labels  = glue['test']['label']
 

Each subset of the data has been converted to a dictionary of features and a set of labels. Each feature in the input dictionary has the same shape, and the number of labels should match:

 for key, value in glue_train.items():
  print(f'{key:15s} shape: {value.shape}')

print(f'glue_train_labels shape: {glue_train_labels.shape}')
 
input_word_ids  shape: (3668, 103)
input_mask      shape: (3668, 103)
input_type_ids  shape: (3668, 103)
glue_train_labels shape: (3668,)

The model

Build the model

The first step is to download the configuration for the pre-trained model.

 import json

bert_config_file = os.path.join(gs_folder_bert, "bert_config.json")
config_dict = json.loads(tf.io.gfile.GFile(bert_config_file).read())

bert_config = bert.configs.BertConfig.from_dict(config_dict)

config_dict
 
{'attention_probs_dropout_prob': 0.1,
 'hidden_act': 'gelu',
 'hidden_dropout_prob': 0.1,
 'hidden_size': 768,
 'initializer_range': 0.02,
 'intermediate_size': 3072,
 'max_position_embeddings': 512,
 'num_attention_heads': 12,
 'num_hidden_layers': 12,
 'type_vocab_size': 2,
 'vocab_size': 30522}

The config defines the core BERT model, which is a Keras model that predicts outputs for num_classes from inputs with a maximum sequence length of max_seq_length.

This function returns both the encoder and the classifier.

 bert_classifier, bert_encoder = bert.bert_models.classifier_model(
    bert_config, num_labels=2)
 

The classifier has three inputs and one output:

 tf.keras.utils.plot_model(bert_classifier, show_shapes=True, dpi=48)
 


Run it on a test batch of 10 examples from the training set. The output is the logits for the two classes:

 glue_batch = {key: val[:10] for key, val in glue_train.items()}

bert_classifier(
    glue_batch, training=True
).numpy()
 
array([[ 0.05488977, -0.26042116],
       [ 0.11358108, -0.09727937],
       [ 0.14350253, -0.2465629 ],
       [ 0.2775127 , -0.09028438],
       [ 0.3606584 , -0.17138724],
       [ 0.3287397 , -0.14672714],
       [ 0.18621178, -0.13080403],
       [ 0.21898738,  0.10716071],
       [ 0.18413854, -0.13491377],
       [ 0.20307963, -0.05396855]], dtype=float32)
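
If you want per-class probabilities rather than raw logits, you can run the logits through a softmax (an optional step, not part of the original cell):

 # Convert the logits for the test batch above into probabilities.
probs = tf.nn.softmax(bert_classifier(glue_batch, training=False), axis=-1)
print(probs.numpy())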

The TransformerEncoder in the center of the classifier above is the bert_encoder.

Inspecting the encoder, we see its stack of Transformer layers connected to those same three inputs:

 tf.keras.utils.plot_model(bert_encoder, show_shapes=True, dpi=48)
 


Restore the encoder weights

When built, the encoder is randomly initialized. Restore the encoder's weights from the checkpoint:

 checkpoint = tf.train.Checkpoint(model=bert_encoder)
checkpoint.restore(
    os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()
 
<tensorflow.python.training.tracking.util.CheckpointLoadStatus at 0x7f8242dadc88>

Set up the optimizer

BERT adopts the Adam optimizer with weight decay (aka "AdamW"). It also employs a learning rate schedule that first warms up from 0 and then decays to 0.

 # Set up epochs and steps
epochs = 3
batch_size = 32
eval_batch_size = 32

train_data_size = len(glue_train_labels)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
warmup_steps = int(epochs * train_data_size * 0.1 / batch_size)

# creates an optimizer with learning rate schedule
optimizer = nlp.optimization.create_optimizer(
    2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
 

This returns an AdamWeightDecay optimizer with the learning rate schedule set:

 type(optimizer)
 
official.nlp.optimization.AdamWeightDecay

To see an example of how to customize the optimizer and its schedule, see the Optimizer schedule appendix.

Train the model

The metric is accuracy, and we use sparse categorical cross-entropy as the loss.

 metrics = [tf.keras.metrics.SparseCategoricalAccuracy('accuracy', dtype=tf.float32)]
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

bert_classifier.compile(
    optimizer=optimizer,
    loss=loss,
    metrics=metrics)

bert_classifier.fit(
      glue_train, glue_train_labels,
      validation_data=(glue_validation, glue_validation_labels),
      batch_size=32,
      epochs=epochs)
 
Epoch 1/3
115/115 [==============================] - 25s 218ms/step - loss: 0.7047 - accuracy: 0.6101 - val_loss: 0.5219 - val_accuracy: 0.7181
Epoch 2/3
115/115 [==============================] - 24s 210ms/step - loss: 0.5068 - accuracy: 0.7560 - val_loss: 0.5047 - val_accuracy: 0.7794
Epoch 3/3
115/115 [==============================] - 24s 209ms/step - loss: 0.3812 - accuracy: 0.8332 - val_loss: 0.4839 - val_accuracy: 0.8137

<tensorflow.python.keras.callbacks.History at 0x7f82107c8cf8>
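
If you want an explicit score on the held-out validation split, Model.evaluate works with the same encoded dictionaries (an optional check, not part of the original notebook):

 # Returns the loss and accuracy on the validation set.
bert_classifier.evaluate(glue_validation, glue_validation_labels,
                         batch_size=eval_batch_size)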

Now run the fine-tuned model on a custom example to see that it works.

Start by encoding some sentence pairs:

 my_examples = bert_encode(
    glue_dict = {
        'sentence1':[
            'The rain in Spain falls mainly on the plain.',
            'Look I fine tuned BERT.'],
        'sentence2':[
            'It mostly rains on the flat lands of Spain.',
            'Is it working? This does not match.']
    },
    tokenizer=tokenizer)
 

The model should report class 1 "match" for the first example and class 0 "no match" for the second:

 result = bert_classifier(my_examples, training=False)

result = tf.argmax(result).numpy()
result
 
array([1, 0])
 np.array(info.features['label'].names)[result]
 
array(['equivalent', 'not_equivalent'], dtype='<U14')

Save the model

Often the goal of training a model is to use it for something, so export the model and then restore it to make sure that it works.

 export_dir='./saved_model'
tf.saved_model.save(bert_classifier, export_dir=export_dir)
 
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.

WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.

INFO:tensorflow:Assets written to: ./saved_model/assets

 reloaded = tf.saved_model.load(export_dir)
reloaded_result = reloaded([my_examples['input_word_ids'],
                            my_examples['input_mask'],
                            my_examples['input_type_ids']], training=False)

original_result = bert_classifier(my_examples, training=False)

# The results are (nearly) identical:
print(original_result.numpy())
print()
print(reloaded_result.numpy())
 
[[-1.1238481   0.92107666]
 [ 0.35722053 -0.4061358 ]]

[[-1.1238478   0.9210764 ]
 [ 0.35722044 -0.40613574]]

Appendix

Re-encoding a large dataset

For clarity, this tutorial re-encoded the dataset in memory.

That was only possible because glue/mrpc is a very small dataset. To deal with larger datasets, the tf_models library includes some tools for processing and re-encoding a dataset for efficient training.

The first step is to describe which features of the dataset should be transformed:

 processor = nlp.data.classifier_data_lib.TfdsProcessor(
    tfds_params="dataset=glue/mrpc,text_key=sentence1,text_b_key=sentence2",
    process_text_fn=bert.tokenization.convert_to_unicode)
 

Then apply the transformation to generate new TFRecord files.

 # Set up output of training and evaluation Tensorflow dataset
train_data_output_path="./mrpc_train.tf_record"
eval_data_output_path="./mrpc_eval.tf_record"

max_seq_length = 128
batch_size = 32
eval_batch_size = 32

# Generate and save training data into a tf record file
input_meta_data = (
    nlp.data.classifier_data_lib.generate_tf_record_from_data_file(
      processor=processor,
      data_dir=None,  # It is `None` because data is from tfds, not local dir.
      tokenizer=tokenizer,
      train_data_output_path=train_data_output_path,
      eval_data_output_path=eval_data_output_path,
      max_seq_length=max_seq_length))
 

Finally, create tf.data input pipelines from those TFRecord files:

 training_dataset = bert.run_classifier.get_dataset_fn(
    train_data_output_path,
    max_seq_length,
    batch_size,
    is_training=True)()

evaluation_dataset = bert.run_classifier.get_dataset_fn(
    eval_data_output_path,
    max_seq_length,
    eval_batch_size,
    is_training=False)()

 

The resulting tf.data.Datasets return (features, labels) pairs, as expected by keras.Model.fit:

 training_dataset.element_spec
 
({'input_word_ids': TensorSpec(shape=(32, 128), dtype=tf.int32, name=None),
  'input_mask': TensorSpec(shape=(32, 128), dtype=tf.int32, name=None),
  'input_type_ids': TensorSpec(shape=(32, 128), dtype=tf.int32, name=None)},
 TensorSpec(shape=(32,), dtype=tf.int32, name=None))
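
These datasets can be passed straight to Model.fit. A minimal sketch (not part of the original notebook), reusing the compiled bert_classifier and the step counts computed earlier; steps_per_epoch bounds each epoch because the training dataset is typically set up to repeat when is_training=True:

 bert_classifier.fit(
    training_dataset,
    validation_data=evaluation_dataset,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs)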

Create tf.data.Dataset for training and evaluation

If you need to modify the data loading, here is some code to get you started:

 def create_classifier_dataset(file_path, seq_length, batch_size, is_training):
  """Creates input dataset from (tf)records files for train/eval."""
  dataset = tf.data.TFRecordDataset(file_path)
  if is_training:
    dataset = dataset.shuffle(100)
    dataset = dataset.repeat()

  def decode_record(record):
    name_to_features = {
      'input_ids': tf.io.FixedLenFeature([seq_length], tf.int64),
      'input_mask': tf.io.FixedLenFeature([seq_length], tf.int64),
      'segment_ids': tf.io.FixedLenFeature([seq_length], tf.int64),
      'label_ids': tf.io.FixedLenFeature([], tf.int64),
    }
    return tf.io.parse_single_example(record, name_to_features)

  def _select_data_from_record(record):
    x = {
        'input_word_ids': record['input_ids'],
        'input_mask': record['input_mask'],
        'input_type_ids': record['segment_ids']
    }
    y = record['label_ids']
    return (x, y)

  dataset = dataset.map(decode_record,
                        num_parallel_calls=tf.data.experimental.AUTOTUNE)
  dataset = dataset.map(
      _select_data_from_record,
      num_parallel_calls=tf.data.experimental.AUTOTUNE)
  dataset = dataset.batch(batch_size, drop_remainder=is_training)
  dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
  return dataset
 
 # Set up batch sizes
batch_size = 32
eval_batch_size = 32

# Return Tensorflow dataset
training_dataset = create_classifier_dataset(
    train_data_output_path,
    input_meta_data['max_seq_length'],
    batch_size,
    is_training=True)

evaluation_dataset = create_classifier_dataset(
    eval_data_output_path,
    input_meta_data['max_seq_length'],
    eval_batch_size,
    is_training=False)
 
 training_dataset.element_spec
 
({'input_word_ids': TensorSpec(shape=(32, 128), dtype=tf.int64, name=None),
  'input_mask': TensorSpec(shape=(32, 128), dtype=tf.int64, name=None),
  'input_type_ids': TensorSpec(shape=(32, 128), dtype=tf.int64, name=None)},
 TensorSpec(shape=(32,), dtype=tf.int64, name=None))

TFModels BERT on TFHub

You can get the BERT model off the shelf from TFHub. It would not be hard to add a classification head on top of this hub.KerasLayer.

 # Note: 350MB download.
import tensorflow_hub as hub
hub_encoder = hub.KerasLayer(hub_url_bert, trainable=True)

print(f"The Hub encoder has {len(hub_encoder.trainable_variables)} trainable variables")
 
The Hub encoder has 199 trainable variables

Do a test run on a batch of data:

 result = hub_encoder(
    inputs=[glue_train['input_word_ids'][:10],
            glue_train['input_mask'][:10],
            glue_train['input_type_ids'][:10],],
    training=False,
)

print("Pooled output shape:", result[0].shape)
print("Sequence output shape:", result[1].shape)
 
Pooled output shape: (10, 768)
Sequence output shape: (10, 103, 768)

At this point it would be simple to add a classification head yourself.
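
For example, here is a minimal sketch (hypothetical names, not the approach used in the rest of this section) that puts a dropout and dense layer on the hub encoder's pooled output:

 word_ids = tf.keras.Input(shape=(None,), dtype=tf.int32, name='input_word_ids')
mask = tf.keras.Input(shape=(None,), dtype=tf.int32, name='input_mask')
type_ids = tf.keras.Input(shape=(None,), dtype=tf.int32, name='input_type_ids')

# The hub encoder returns [pooled_output, sequence_output], as shown above.
pooled_output, sequence_output = hub_encoder([word_ids, mask, type_ids])

output = tf.keras.layers.Dropout(0.1)(pooled_output)
output = tf.keras.layers.Dense(2)(output)

my_hub_classifier = tf.keras.Model(
    inputs=[word_ids, mask, type_ids], outputs=output)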

The bert_models.classifier_model function can also build a classifier onto the encoder from TensorFlow Hub:

 hub_classifier, hub_encoder = bert.bert_models.classifier_model(
    # Caution: Most of `bert_config` is ignored if you pass a hub url.
    bert_config=bert_config, hub_module_url=hub_url_bert, num_labels=2)
 

One downside of loading this model from TFHub is that the structure of the internal Keras layers is not restored, so it's more difficult to inspect or modify the model. The TransformerEncoder model is now a single layer:

 tf.keras.utils.plot_model(hub_classifier, show_shapes=True, dpi=64)
 


 try:
  tf.keras.utils.plot_model(hub_encoder, show_shapes=True, dpi=64)
  assert False
except Exception as e:
  print(f"{type(e).__name__}: {e}")
 
AttributeError: 'KerasLayer' object has no attribute 'layers'

Low level model building

If you need more control over the construction of the model, it's worth noting that the classifier_model function used earlier is really just a thin wrapper over the nlp.modeling.networks.TransformerEncoder and nlp.modeling.models.BertClassifier classes. Just remember that if you start modifying the architecture it may not be correct or possible to reload the pre-trained checkpoint, so you'll need to retrain from scratch.

Build the encoder:

 transformer_config = config_dict.copy()

# You need to rename a few fields to make this work:
transformer_config['attention_dropout_rate'] = transformer_config.pop('attention_probs_dropout_prob')
transformer_config['activation'] = tf_utils.get_activation(transformer_config.pop('hidden_act'))
transformer_config['dropout_rate'] = transformer_config.pop('hidden_dropout_prob')
transformer_config['initializer'] = tf.keras.initializers.TruncatedNormal(
          stddev=transformer_config.pop('initializer_range'))
transformer_config['max_sequence_length'] = transformer_config.pop('max_position_embeddings')
transformer_config['num_layers'] = transformer_config.pop('num_hidden_layers')

transformer_config
 
{'hidden_size': 768,
 'intermediate_size': 3072,
 'num_attention_heads': 12,
 'type_vocab_size': 2,
 'vocab_size': 30522,
 'attention_dropout_rate': 0.1,
 'activation': <function official.modeling.activations.gelu.gelu(x)>,
 'dropout_rate': 0.1,
 'initializer': <tensorflow.python.keras.initializers.initializers_v2.TruncatedNormal at 0x7f81145cb3c8>,
 'max_sequence_length': 512,
 'num_layers': 12}
 manual_encoder = nlp.modeling.networks.TransformerEncoder(**transformer_config)
 

Restore the weights:

 checkpoint = tf.train.Checkpoint(model=manual_encoder)
checkpoint.restore(
    os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()
 
<tensorflow.python.training.tracking.util.CheckpointLoadStatus at 0x7f813c336fd0>

Test run it:

 result = manual_encoder(my_examples, training=True)

print("Sequence output shape:", result[0].shape)
print("Pooled output shape:", result[1].shape)
 
Sequence output shape: (2, 23, 768)
Pooled output shape: (2, 768)

Wrap it in a classifier:

 manual_classifier = nlp.modeling.models.BertClassifier(
        bert_encoder,
        num_classes=2,
        dropout_rate=transformer_config['dropout_rate'],
        initializer=tf.keras.initializers.TruncatedNormal(
          stddev=bert_config.initializer_range))
 
 manual_classifier(my_examples, training=True).numpy()
 
array([[-0.22512403,  0.07213479],
       [-0.21233292,  0.1311737 ]], dtype=float32)

Optimizers and schedules

The optimizer used to train the model was created using the nlp.optimization.create_optimizer function:

 optimizer = nlp.optimization.create_optimizer(
    2e-5, num_train_steps=num_train_steps, num_warmup_steps=warmup_steps)
 

That high-level wrapper sets up the learning rate schedule and the optimizer.

The base learning rate schedule used here is a linear decay to zero over the training run:

 epochs = 3
batch_size = 32
eval_batch_size = 32

train_data_size = len(glue_train_labels)
steps_per_epoch = int(train_data_size / batch_size)
num_train_steps = steps_per_epoch * epochs
 
 decay_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
      initial_learning_rate=2e-5,
      decay_steps=num_train_steps,
      end_learning_rate=0)

plt.plot([decay_schedule(n) for n in range(num_train_steps)])
 
[<matplotlib.lines.Line2D at 0x7f8115ab5320>]


This, in turn, is wrapped in a WarmUp schedule that linearly increases the learning rate to the target value over the first 10% of training:

 warmup_steps = num_train_steps * 0.1

warmup_schedule = nlp.optimization.WarmUp(
        initial_learning_rate=2e-5,
        decay_schedule_fn=decay_schedule,
        warmup_steps=warmup_steps)

# The warmup overshoots, because it warms up to the `initial_learning_rate`
# following the original implementation. You can set
# `initial_learning_rate=decay_schedule(warmup_steps)` if you don't like the
# overshoot.
plt.plot([warmup_schedule(n) for n in range(num_train_steps)])
 
[<matplotlib.lines.Line2D at 0x7f81150c27f0>]


Then create the nlp.optimization.AdamWeightDecay using that schedule, configured for the BERT model:

 optimizer = nlp.optimization.AdamWeightDecay(
        learning_rate=warmup_schedule,
        weight_decay_rate=0.01,
        epsilon=1e-6,
        exclude_from_weight_decay=['LayerNorm', 'layer_norm', 'bias'])
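
This optimizer can then be passed to Model.compile exactly like the one returned by create_optimizer in the training section above, for example:

 bert_classifier.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy('accuracy', dtype=tf.float32)])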