# Overview

## Setup

```
import tensorflow as tf
import tensorflow_addons as tfa  # provides the Conditional Gradient (CG) optimizer
from matplotlib import pyplot as plt
```
```
TensorFlow 2.x selected.
```
```
# Hyperparameters
batch_size = 64
epochs = 10
```

# Build the model

```
model_1 = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
    tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
    tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
```

# Prepare the data

```
# Load the MNIST dataset as NumPy arrays
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Preprocess the data: flatten each 28x28 image and scale pixels to [0, 1]
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
```
```
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
```
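The reshape-and-rescale step above can be sketched in plain Python on a toy 2x2 "image" (the shapes here are illustrative, not MNIST's 28x28): flattening turns each image into one row of pixels, and dividing by 255 maps uint8 values into [0, 1].

```python
# A toy stand-in for the reshape + rescale above: flatten one 2x2 "image"
# of uint8 pixel values into a single list and scale each pixel to [0, 1].
images = [[[0, 255], [128, 64]]]  # a batch containing one 2x2 image
flat = [px / 255 for img in images for row in img for px in row]
print(flat)  # [0.0, 1.0, 0.5019..., 0.2509...]
```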

# Define a custom callback

```
def frobenius_norm(m):
    """Compute the Frobenius norm over a list of weight tensors.

    Args:
        m: a list of weight tensors, one per layer.
    """
    total_reduce_sum = 0
    for i in range(len(m)):
        total_reduce_sum = total_reduce_sum + tf.math.reduce_sum(m[i]**2)
    norm = total_reduce_sum**0.5
    return norm
```
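As a quick sanity check on the definition, the same quantity can be computed in plain Python (no TensorFlow) on toy flattened weight lists: square every entry across every layer, sum, and take the square root.

```python
import math

def frobenius_norm_py(weights):
    # Sum of squared entries across every weight list, then square root --
    # the same scalar the TensorFlow helper above produces.
    total = sum(x * x for w in weights for x in w)
    return math.sqrt(total)

# Two toy "layers": squared sums are 9 and 16, so the norm is sqrt(25) = 5.
w1 = [1.0, 2.0, 2.0]
w2 = [2.0, 2.0, 2.0, 2.0]
print(frobenius_norm_py([w1, w2]))  # 5.0
```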
```
CG_frobenius_norm_of_weight = []
CG_get_weight_norm = tf.keras.callbacks.LambdaCallback(
    on_epoch_end=lambda epoch, logs: CG_frobenius_norm_of_weight.append(
        frobenius_norm(model_1.trainable_weights).numpy()))
```

# Train and evaluate: using CG as the optimizer

```
# Compile the model
model_1.compile(
    optimizer=tfa.optimizers.ConditionalGradient(
        learning_rate=0.99949, lambda_=203),  # Utilize TFA optimizer
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['accuracy'])

history_cg = model_1.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    validation_data=(x_test, y_test),
    epochs=epochs,
    callbacks=[CG_get_weight_norm])
```
```
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 5s 85us/sample - loss: 0.3745 - accuracy: 0.8894 - val_loss: 0.2323 - val_accuracy: 0.9275
Epoch 2/10
60000/60000 [==============================] - 3s 50us/sample - loss: 0.1908 - accuracy: 0.9430 - val_loss: 0.1538 - val_accuracy: 0.9547
Epoch 3/10
60000/60000 [==============================] - 3s 49us/sample - loss: 0.1497 - accuracy: 0.9548 - val_loss: 0.1473 - val_accuracy: 0.9560
Epoch 4/10
60000/60000 [==============================] - 3s 49us/sample - loss: 0.1306 - accuracy: 0.9612 - val_loss: 0.1215 - val_accuracy: 0.9609
Epoch 5/10
60000/60000 [==============================] - 3s 49us/sample - loss: 0.1211 - accuracy: 0.9636 - val_loss: 0.1114 - val_accuracy: 0.9660
Epoch 6/10
60000/60000 [==============================] - 3s 48us/sample - loss: 0.1125 - accuracy: 0.9663 - val_loss: 0.1260 - val_accuracy: 0.9640
Epoch 7/10
60000/60000 [==============================] - 3s 50us/sample - loss: 0.1108 - accuracy: 0.9665 - val_loss: 0.1009 - val_accuracy: 0.9697
Epoch 8/10
60000/60000 [==============================] - 3s 51us/sample - loss: 0.1081 - accuracy: 0.9676 - val_loss: 0.1129 - val_accuracy: 0.9647
Epoch 9/10
60000/60000 [==============================] - 3s 50us/sample - loss: 0.1065 - accuracy: 0.9675 - val_loss: 0.1058 - val_accuracy: 0.9683
Epoch 10/10
60000/60000 [==============================] - 3s 51us/sample - loss: 0.1039 - accuracy: 0.9683 - val_loss: 0.1126 - val_accuracy: 0.9646
```

# Train and evaluate: using SGD as the optimizer

```
model_2 = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
    tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
    tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
```
```
SGD_frobenius_norm_of_weight = []
SGD_get_weight_norm = tf.keras.callbacks.LambdaCallback(
    on_epoch_end=lambda epoch, logs: SGD_frobenius_norm_of_weight.append(
        frobenius_norm(model_2.trainable_weights).numpy()))
```
```
# Compile the model
model_2.compile(
    optimizer=tf.keras.optimizers.SGD(0.01),  # Utilize SGD optimizer
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['accuracy'])

history_sgd = model_2.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    validation_data=(x_test, y_test),
    epochs=epochs,
    callbacks=[SGD_get_weight_norm])
```
```
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 3s 46us/sample - loss: 0.9498 - accuracy: 0.7523 - val_loss: 0.4306 - val_accuracy: 0.8844
Epoch 2/10
60000/60000 [==============================] - 2s 41us/sample - loss: 0.3851 - accuracy: 0.8916 - val_loss: 0.3298 - val_accuracy: 0.9068
Epoch 3/10
60000/60000 [==============================] - 3s 42us/sample - loss: 0.3230 - accuracy: 0.9064 - val_loss: 0.2917 - val_accuracy: 0.9150
Epoch 4/10
60000/60000 [==============================] - 2s 41us/sample - loss: 0.2897 - accuracy: 0.9169 - val_loss: 0.2676 - val_accuracy: 0.9241
Epoch 5/10
60000/60000 [==============================] - 3s 43us/sample - loss: 0.2658 - accuracy: 0.9237 - val_loss: 0.2485 - val_accuracy: 0.9288
Epoch 6/10
60000/60000 [==============================] - 2s 41us/sample - loss: 0.2467 - accuracy: 0.9301 - val_loss: 0.2374 - val_accuracy: 0.9285
Epoch 7/10
60000/60000 [==============================] - 3s 42us/sample - loss: 0.2308 - accuracy: 0.9343 - val_loss: 0.2201 - val_accuracy: 0.9358
Epoch 8/10
60000/60000 [==============================] - 2s 41us/sample - loss: 0.2169 - accuracy: 0.9388 - val_loss: 0.2096 - val_accuracy: 0.9388
Epoch 9/10
60000/60000 [==============================] - 2s 42us/sample - loss: 0.2046 - accuracy: 0.9421 - val_loss: 0.2009 - val_accuracy: 0.9404
Epoch 10/10
60000/60000 [==============================] - 2s 41us/sample - loss: 0.1939 - accuracy: 0.9448 - val_loss: 0.1900 - val_accuracy: 0.9442
```

# Frobenius norm of weights: CG vs. SGD

The current implementation of the CG optimizer is based on the Frobenius norm, treating the Frobenius norm as a regularizer in the objective function. We therefore compare the regularization effect of CG against the SGD optimizer, which applies no Frobenius-norm regularization.
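To build intuition for why CG keeps the weight norm bounded, here is a rough sketch of one update in plain Python. This is an illustrative simplification, not the exact `tfa.optimizers.ConditionalGradient` code path: each step shrinks the weights multiplicatively by `lr` and adds a correction whose Frobenius norm is `(1 - lr) * lambda_`, so the weight norm can never drift much beyond `lambda_`.

```python
import math

def cg_step(w, g, lr=0.99949, lam=203.0, eps=1e-7):
    # Simplified conditional-gradient update: shrink the weights by lr,
    # then take a step of Frobenius norm (1 - lr) * lam in the
    # normalized negative-gradient direction.
    g_norm = math.sqrt(sum(x * x for x in g)) + eps
    return [lr * wi - (1.0 - lr) * lam * gi / g_norm for wi, gi in zip(w, g)]

# Starting from zero weights, a single step has norm (1 - lr) * lam,
# roughly 0.104 with the hyperparameters used above.
w = cg_step([0.0, 0.0], [3.0, 4.0])
print(math.sqrt(sum(x * x for x in w)))
```

At a fixed point, `||w|| = lr * ||w|| + (1 - lr) * lam` implies `||w|| = lam`, which is the bounded-norm behavior the plot below illustrates empirically.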

```
plt.plot(
    CG_frobenius_norm_of_weight,
    color='r',
    label='CG_frobenius_norm_of_weights')
plt.plot(
    SGD_frobenius_norm_of_weight,
    color='b',
    label='SGD_frobenius_norm_of_weights')
plt.xlabel('Epoch')
plt.ylabel('Frobenius norm of weights')
plt.legend(loc=1)
```
```
<matplotlib.legend.Legend at 0x7f2cd4f5f6d8>
```

# Training and validation accuracy: CG vs. SGD

```
plt.plot(history_cg.history['accuracy'], color='r', label='CG_train')
plt.plot(history_cg.history['val_accuracy'], color='g', label='CG_test')
plt.plot(history_sgd.history['accuracy'], color='pink', label='SGD_train')
plt.plot(history_sgd.history['val_accuracy'], color='b', label='SGD_test')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc=4)
```
```
<matplotlib.legend.Legend at 0x7f2cd4a3f668>
```
