GitHub Repository: tensorflow/docs-l10n
Path: blob/master/site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
Kernel: Python 3
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Overview

This notebook demonstrates how to use the Moving Average optimizer and the Model Average Checkpoint from the TensorFlow Addons package.

Moving Averaging

The advantage of Moving Averaging is that the averaged weights are less prone to large loss swings or irregular data representations in the most recent batch. They give a smoother, more general picture of the model's training up to a given point.
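To make the idea concrete, here is a minimal NumPy sketch of the exponential-moving-average update such a wrapper maintains alongside the trained weights; the decay value and the weight vectors below are purely illustrative, not taken from this tutorial:

import numpy as np

decay = 0.99                       # illustrative averaging decay
weights = np.array([0.5, -1.2])    # current (noisy) trained weights
average = np.array([0.4, -1.0])    # running average kept by the wrapper

# After every optimizer step, nudge the average toward the new weights.
average = decay * average + (1.0 - decay) * weights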

Stochastic Averaging

Stochastic Weight Averaging (SWA) converges to wider optima and, in doing so, resembles geometric ensembling. Used as a wrapper around another optimizer that averages weights from different points along the inner optimizer's trajectory, SWA is a simple way to improve model performance.
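As a rough sketch of how the wrapper can be configured (the start_averaging and average_period values below are illustrative, not this tutorial's settings), SWA starts averaging after a warm-up period and then folds in a snapshot of the weights at a fixed interval:

import tensorflow as tf
import tensorflow_addons as tfa

base_opt = tf.keras.optimizers.SGD(0.01)
# Begin averaging after 100 steps, then average a weight snapshot every 10 steps.
swa_opt = tfa.optimizers.SWA(base_opt, start_averaging=100, average_period=10)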

Model Average Checkpoint

callbacks.ModelCheckpoint does not give you the option to save the moving-average weights in the middle of training, which is why model-averaging optimizers require a custom callback. Using the update_weights parameter, AverageModelCheckpoint lets you either (see the sketch after this list):

  1. Assign the moving-average weights to the model, and then save them.

  2. Keep the old non-averaged weights, while the saved model uses the averaged weights.
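A minimal sketch of the two behaviours (the filepath below is hypothetical; the tutorial's own callback setup appears later):

import tensorflow_addons as tfa

# Option 1: write the averaged weights back into the model, then save them.
avg_into_model = tfa.callbacks.AverageModelCheckpoint(
    filepath='./avg_ckpt', update_weights=True)

# Option 2: keep training on the non-averaged weights; only the saved
# checkpoint receives the averaged weights.
avg_only_saved = tfa.callbacks.AverageModelCheckpoint(
    filepath='./avg_ckpt', update_weights=False)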

Setup

!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
import os

Build Model

def create_model(opt):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])

    model.compile(optimizer=opt,
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    return model

Prepare Dataset

# Load Fashion MNIST dataset
train, test = tf.keras.datasets.fashion_mnist.load_data()

images, labels = train
images = images / 255.0
labels = labels.astype(np.int32)

fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)

test_images, test_labels = test
# Normalize the test images the same way as the training images.
test_images = test_images / 255.0

Here we compare three optimizers:

  • Unwrapped SGD

  • SGD with Moving Average

  • SGD with Stochastic Weight Averaging

And see how they perform with the same model.

# Optimizers
sgd = tf.keras.optimizers.SGD(0.01)
moving_avg_sgd = tfa.optimizers.MovingAverage(sgd)
stocastic_avg_sgd = tfa.optimizers.SWA(sgd)

Both the MovingAverage and SWA (stochastic weight averaging) optimizers use the AverageModelCheckpoint callback; the unwrapped tf.keras.optimizers.SGD baseline uses the standard ModelCheckpoint for comparison.

# Callback
checkpoint_path = "./training/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir,
                                                 save_weights_only=True,
                                                 verbose=1)
avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir,
                                                    update_weights=True)

Train Model

Vanilla SGD Optimizer

# Build Model
model = create_model(sgd)

# Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])
# Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)

Moving Average SGD

# Build Model
model = create_model(moving_avg_sgd)

# Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
# Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)

Stochastic Weight Average SGD

# Build Model
model = create_model(stocastic_avg_sgd)

# Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
# Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
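As an aside, both averaging wrappers also expose an assign_average_vars method, which copies the averaged values into the model's variables. A minimal sketch (reusing model, stocastic_avg_sgd, and the test data defined above; this step is not part of the original tutorial flow):

# Copy the averaged weights into the model just before evaluation or export.
stocastic_avg_sgd.assign_average_vars(model.variables)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)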