GitHub Repository: tensorflow/docs-l10n
Path: blob/master/site/zh-cn/addons/tutorials/optimizers_conditionalgradient.ipynb
Kernel: Python 3
#@title Licensed under the Apache License, Version 2.0
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Overview

This notebook demonstrates how to use the Conditional Gradient optimizer from the Addons package.

ConditionalGradient

Constraining the parameters of a neural network has been shown to be beneficial in training because of the underlying regularization effects. Often, parameters are constrained via a soft penalty (which never guarantees constraint satisfaction) or via a projection operation (which is computationally expensive). The conditional gradient (CG) optimizer, on the other hand, enforces the constraints strictly without the need for an expensive projection step. It works by minimizing a linear approximation of the objective within the constraint set. In this notebook, we demonstrate the application of a Frobenius norm constraint via the CG optimizer on the MNIST dataset. CG is now available as a TensorFlow API. More details of the optimizer are available at https://arxiv.org/pdf/1803.06453.pdf
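To make the update concrete: over a Frobenius-norm ball of radius lambda, the linear subproblem that CG solves each step has a closed-form answer, s = -lambda * g / ||g||_F, and the iterate moves toward it by a convex combination. Below is a minimal NumPy sketch of one such Frank-Wolfe-style step; the names cg_step and gamma are illustrative rather than TFA's internal API, and gamma here plays the role of 1 - learning_rate under the parameterization used later in this notebook (so learning_rate=0.99949 corresponds to a small step of about 0.00051).

import numpy as np

def cg_step(w, g, lam=203.0, gamma=0.00051):
    """One conditional-gradient (Frank-Wolfe) step under ||w||_F <= lam.

    Args:
      w: current weight matrix.
      g: gradient of the loss at w.
      lam: radius of the Frobenius-norm ball (the constraint).
      gamma: convex-combination weight (effective step size).
    """
    # Linear-minimization oracle: the minimizer of <g, s> over the
    # Frobenius ball of radius lam is s = -lam * g / ||g||_F.
    s = -lam * g / (np.linalg.norm(g) + 1e-7)
    # The convex combination keeps the iterate inside the constraint set,
    # assuming w already satisfies it.
    return (1.0 - gamma) * w + gamma * s

w = np.random.randn(4, 4)
g = np.random.randn(4, 4)
w_next = cg_step(w, g)
print(np.linalg.norm(w_next))  # remains within the lam-ball across iterations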

Setup

!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
from matplotlib import pyplot as plt
# Hyperparameters
batch_size = 64
epochs = 10

Build the Model

model_1 = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
    tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
    tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])

Prepare the Data

# Load MNIST dataset as NumPy arrays
dataset = {}
num_validation = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Preprocess the data
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
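Note that num_validation is defined above but never used; the tutorial validates on the test set instead. If a held-out validation split were desired, one way to carve it out of the training data (a sketch, not part of the original flow):

# Hypothetical split: reserve the last num_validation training examples.
x_val, y_val = x_train[-num_validation:], y_train[-num_validation:]
x_train_part, y_train_part = x_train[:-num_validation], y_train[:-num_validation]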

Define a Custom Callback Function

def frobenius_norm(m):
    """Compute the Frobenius norm of a model's weights, taken jointly.

    Args:
      m: a list of weight tensors, one per layer.
    """
    total_reduce_sum = 0
    for i in range(len(m)):
        total_reduce_sum = total_reduce_sum + tf.math.reduce_sum(m[i]**2)
    norm = total_reduce_sum**0.5
    return norm
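As a quick sanity check (not in the original notebook), frobenius_norm should agree with stacking every entry of every tensor into one vector and taking its 2-norm; the small tensors below are chosen so the expected value is exactly 13.0:

# Sum of squares: (9 + 16) + (0 + 144) = 169, and sqrt(169) = 13.
weights = [tf.constant([[3.0, 4.0]]), tf.constant([[0.0], [12.0]])]
print(frobenius_norm(weights).numpy())  # expected: 13.0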
CG_frobenius_norm_of_weight = []
CG_get_weight_norm = tf.keras.callbacks.LambdaCallback(
    on_epoch_end=lambda epoch, logs: CG_frobenius_norm_of_weight.append(
        frobenius_norm(model_1.trainable_weights).numpy()))

Train and Evaluate: Using CG as the Optimizer

Simply replace typical Keras optimizers with the new TFA optimizer.

# Compile the model
model_1.compile(
    optimizer=tfa.optimizers.ConditionalGradient(
        learning_rate=0.99949, lambda_=203),  # Utilize TFA optimizer
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['accuracy'])

history_cg = model_1.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    validation_data=(x_test, y_test),
    epochs=epochs,
    callbacks=[CG_get_weight_norm])

Train and Evaluate: Using SGD as the Optimizer

model_2 = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
    tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
    tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
SGD_frobenius_norm_of_weight = []
SGD_get_weight_norm = tf.keras.callbacks.LambdaCallback(
    on_epoch_end=lambda epoch, logs: SGD_frobenius_norm_of_weight.append(
        frobenius_norm(model_2.trainable_weights).numpy()))
# Compile the model
model_2.compile(
    optimizer=tf.keras.optimizers.SGD(0.01),  # Utilize SGD optimizer
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['accuracy'])

history_sgd = model_2.fit(
    x_train,
    y_train,
    batch_size=batch_size,
    validation_data=(x_test, y_test),
    epochs=epochs,
    callbacks=[SGD_get_weight_norm])

Frobenius Norm of Weights: CG vs. SGD

The current implementation of the CG optimizer is based on the Frobenius norm, treating the Frobenius norm as a regularizer in the objective function. Therefore, we compare CG's regularization effect with that of the SGD optimizer, which imposes no Frobenius norm regularizer.

plt.plot(
    CG_frobenius_norm_of_weight,
    color='r',
    label='CG_frobenius_norm_of_weights')
plt.plot(
    SGD_frobenius_norm_of_weight,
    color='b',
    label='SGD_frobenius_norm_of_weights')
plt.xlabel('Epoch')
plt.ylabel('Frobenius norm of weights')
plt.legend(loc=1)
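To read off the endpoint values behind the plot (an optional check, not in the original notebook):

print('Final Frobenius norm (CG): ', CG_frobenius_norm_of_weight[-1])
print('Final Frobenius norm (SGD):', SGD_frobenius_norm_of_weight[-1])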

Train and Validation Accuracy: CG vs. SGD

plt.plot(history_cg.history['accuracy'], color='r', label='CG_train')
plt.plot(history_cg.history['val_accuracy'], color='g', label='CG_test')
plt.plot(history_sgd.history['accuracy'], color='pink', label='SGD_train')
plt.plot(history_sgd.history['val_accuracy'], color='b', label='SGD_test')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc=4)