#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

This notebook demonstrates how to set up model training with early stopping: first in TensorFlow 1 with tf.estimator.Estimator and an early stopping hook, and then in TensorFlow 2 with the Keras APIs or a custom training loop. Early stopping is a regularization technique that stops training when, for example, the validation loss reaches a certain threshold.

In TensorFlow 2, there are three ways to implement early stopping:

  • Use the built-in Keras callback tf.keras.callbacks.EarlyStopping and pass it to Model.fit.

  • Define a custom callback and pass it to Keras Model.fit.

  • Write a custom early stopping rule in a custom training loop (with tf.GradientTape).

Setup

import time

import numpy as np
import tensorflow as tf
import tensorflow.compat.v1 as tf1
import tensorflow_datasets as tfds

TensorFlow 1: Early stopping with an early stopping hook and tf.estimator

First, define functions for MNIST dataset loading and preprocessing, and the model definition to be used with tf.estimator.Estimator:

def normalize_img(image, label):
  return tf.cast(image, tf.float32) / 255., label

def _input_fn():
  ds_train = tfds.load(
    name='mnist',
    split='train',
    shuffle_files=True,
    as_supervised=True)

  ds_train = ds_train.map(
    normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
  ds_train = ds_train.batch(128)
  ds_train = ds_train.repeat(100)
  return ds_train

def _eval_input_fn():
  ds_test = tfds.load(
    name='mnist',
    split='test',
    shuffle_files=True,
    as_supervised=True)

  ds_test = ds_test.map(
    normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
  ds_test = ds_test.batch(128)
  return ds_test

def _model_fn(features, labels, mode):
  flatten = tf1.layers.Flatten()(features)
  features = tf1.layers.Dense(128, 'relu')(flatten)
  logits = tf1.layers.Dense(10)(features)

  loss = tf1.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  optimizer = tf1.train.AdagradOptimizer(0.005)
  train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())

  return tf1.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

In TensorFlow 1, early stopping works by setting up an early stopping hook with tf.estimator.experimental.make_early_stopping_hook. You pass a function that takes no arguments to make_early_stopping_hook through its should_stop_fn parameter. Training stops once should_stop_fn returns True.

The following example demonstrates how to implement an early stopping technique that limits training time to a maximum of 20 seconds:

estimator = tf1.estimator.Estimator(model_fn=_model_fn)

start_time = time.time()
max_train_seconds = 20

def should_stop_fn():
  return time.time() - start_time > max_train_seconds

early_stopping_hook = tf1.estimator.experimental.make_early_stopping_hook(
    estimator=estimator,
    should_stop_fn=should_stop_fn,
    run_every_secs=1,
    run_every_steps=None)

train_spec = tf1.estimator.TrainSpec(
    input_fn=_input_fn,
    hooks=[early_stopping_hook])

eval_spec = tf1.estimator.EvalSpec(input_fn=_eval_input_fn)

tf1.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
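Besides a hand-written should_stop_fn, tf.estimator.experimental also provides convenience hooks that stop training based on an evaluation metric. The sketch below is not part of the original notebook and the step budget is illustrative; it uses stop_if_no_decrease_hook to stop once the eval loss plateaus:

# A sketch: stop training when the eval `loss` metric has not decreased
# for 1,000 training steps (the threshold is chosen for illustration).
early_stopping_hook = tf1.estimator.experimental.stop_if_no_decrease_hook(
    estimator,
    metric_name='loss',
    max_steps_without_decrease=1000)

train_spec = tf1.estimator.TrainSpec(
    input_fn=_input_fn,
    hooks=[early_stopping_hook])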

TensorFlow 2: Early stopping with a built-in callback and Model.fit

Prepare the MNIST dataset and a simple Keras model:

(ds_train, ds_test), ds_info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)

ds_train = ds_train.map(
    normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_train = ds_train.batch(128)

ds_test = ds_test.map(
    normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(128)

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10)
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(0.005),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)

In TensorFlow 2, when you use the built-in Keras Model.fit (or Model.evaluate), you can configure early stopping by passing the built-in callback tf.keras.callbacks.EarlyStopping to the callbacks parameter of Model.fit.

The EarlyStopping callback monitors a user-specified metric and ends training when it stops improving. (Check out the Training and evaluation with the built-in methods guide and the API docs for more details.)

Below is an example of an early stopping callback that monitors the loss and stops training once the number of epochs without improvement reaches 3 (patience):

callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)

# Only around 25 epochs are run during training, instead of 100.
history = model.fit(
    ds_train,
    epochs=100,
    validation_data=ds_test,
    callbacks=[callback]
)

len(history.history['loss'])
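The callback above monitors the training loss. If you would rather monitor the validation loss and keep the weights from the best epoch, EarlyStopping supports this through its monitor, min_delta, and restore_best_weights arguments; a minimal sketch with illustrative values:

# Monitor the validation loss instead of the training loss. `min_delta`
# defines the smallest change that counts as an improvement, and
# `restore_best_weights=True` rolls the model back to its best epoch
# when training stops.
callback = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    min_delta=1e-4,
    patience=3,
    restore_best_weights=True)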

TensorFlow 2: Early stopping with a custom callback and Model.fit

You can also implement a custom early stopping callback, which can likewise be passed to the callbacks parameter of Model.fit (or Model.evaluate).

In this example, the training process is stopped once self.model.stop_training is set to True:

class LimitTrainingTime(tf.keras.callbacks.Callback):
  def __init__(self, max_time_s):
    super().__init__()
    self.max_time_s = max_time_s
    self.start_time = None

  def on_train_begin(self, logs):
    self.start_time = time.time()

  def on_train_batch_end(self, batch, logs):
    now = time.time()
    if now - self.start_time > self.max_time_s:
      self.model.stop_training = True
# Limit the training time to 30 seconds.
callback = LimitTrainingTime(30)
history = model.fit(
    ds_train,
    epochs=100,
    validation_data=ds_test,
    callbacks=[callback]
)

len(history.history['loss'])
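A custom callback can also reproduce patience-based early stopping at the epoch level. The following sketch is not from the original notebook: it reads val_loss from the logs dict in on_epoch_end (populated because validation_data is passed to Model.fit) and restores the best weights once it stops training:

class EarlyStoppingAtMinValLoss(tf.keras.callbacks.Callback):
  """Stop training when `val_loss` has not improved for `patience` epochs."""

  def __init__(self, patience=3):
    super().__init__()
    self.patience = patience

  def on_train_begin(self, logs=None):
    self.wait = 0
    self.best = float('inf')
    self.best_weights = None

  def on_epoch_end(self, epoch, logs=None):
    current = logs.get('val_loss')
    if current < self.best:
      self.best = current
      self.wait = 0
      # Remember the weights from the best epoch seen so far.
      self.best_weights = self.model.get_weights()
    else:
      self.wait += 1
      if self.wait >= self.patience:
        self.model.stop_training = True
        self.model.set_weights(self.best_weights)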

TensorFlow 2: Early stopping with a custom training loop

In TensorFlow 2, if you don't use the built-in Keras methods for training and evaluation, you can implement early stopping in a custom training loop.

First, use the Keras APIs to define another simple model, an optimizer, a loss function, and metrics:

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10)
])

optimizer = tf.keras.optimizers.Adam(0.005)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

train_acc_metric = tf.keras.metrics.SparseCategoricalAccuracy()
train_loss_metric = tf.keras.metrics.SparseCategoricalCrossentropy()
val_acc_metric = tf.keras.metrics.SparseCategoricalAccuracy()
val_loss_metric = tf.keras.metrics.SparseCategoricalCrossentropy()

Define the parameter update functions with tf.GradientTape, and decorate them with @tf.function for a speedup:

@tf.function
def train_step(x, y):
  with tf.GradientTape() as tape:
    logits = model(x, training=True)
    loss_value = loss_fn(y, logits)
  grads = tape.gradient(loss_value, model.trainable_weights)
  optimizer.apply_gradients(zip(grads, model.trainable_weights))
  train_acc_metric.update_state(y, logits)
  train_loss_metric.update_state(y, logits)
  return loss_value

@tf.function
def test_step(x, y):
  logits = model(x, training=False)
  val_acc_metric.update_state(y, logits)
  val_loss_metric.update_state(y, logits)

Next, write a custom training loop, where you can implement your early stopping rule manually.

The example below shows how to stop training when the validation loss doesn't improve over a certain number of epochs:

epochs = 100
patience = 5
wait = 0
best = float('inf')

for epoch in range(epochs):
  print("\nStart of epoch %d" % (epoch,))
  start_time = time.time()

  for step, (x_batch_train, y_batch_train) in enumerate(ds_train):
    loss_value = train_step(x_batch_train, y_batch_train)
    if step % 200 == 0:
      print("Training loss at step %d: %.4f" % (step, loss_value.numpy()))
      print("Seen so far: %s samples" % ((step + 1) * 128))
  train_acc = train_acc_metric.result()
  train_loss = train_loss_metric.result()
  train_acc_metric.reset_states()
  train_loss_metric.reset_states()
  print("Training acc over epoch: %.4f" % (train_acc.numpy()))

  for x_batch_val, y_batch_val in ds_test:
    test_step(x_batch_val, y_batch_val)
  val_acc = val_acc_metric.result()
  val_loss = val_loss_metric.result()
  val_acc_metric.reset_states()
  val_loss_metric.reset_states()
  print("Validation acc: %.4f" % (float(val_acc),))
  print("Time taken: %.2fs" % (time.time() - start_time))

  # The early stopping strategy: stop the training if `val_loss` does not
  # decrease over a certain number of epochs.
  wait += 1
  if val_loss < best:
    best = val_loss
    wait = 0
  if wait >= patience:
    break
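If the final model should keep the weights from its best epoch rather than the last one, you can snapshot them inside the loop with model.get_weights() and restore them after stopping. A sketch of the adjusted bookkeeping (an extension, not part of the original notebook):

# Track the best weights alongside the best validation loss
# (initialize this before the `for epoch in range(epochs):` loop).
best_weights = None

# Inside the loop, after computing `val_loss`, extend the bookkeeping:
wait += 1
if val_loss < best:
  best = val_loss
  wait = 0
  best_weights = model.get_weights()  # snapshot the best epoch
if wait >= patience:
  model.set_weights(best_weights)  # roll back before stopping
  break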

Next steps