GitHub Repository: tensorflow/docs-l10n
Path: blob/master/site/zh-cn/lite/performance/post_training_quant.ipynb
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Post-training dynamic range quantization

Overview

TensorFlow Lite now supports converting weights to 8-bit precision as part of model conversion from TensorFlow GraphDef to TensorFlow Lite's FlatBuffer format. Dynamic range quantization achieves a 4x reduction in model size. In addition, TFLite supports on-the-fly quantization and dequantization of activations to allow for:

  1. Using quantized kernels for faster implementation when available.

  2. Mixing of floating-point kernels with quantized kernels for different parts of the graph.

Activations are always stored in floating point. For ops that support quantized kernels, the activations are quantized to 8 bits of precision dynamically prior to processing and are dequantized to float precision after processing. Depending on the model being converted, this can give a speedup over pure floating-point computation.
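
As a rough mental model (a simplified sketch, not the converter's exact implementation), dynamic range quantization stores each weight tensor as 8-bit integers plus a per-tensor scale, and dequantizes back to float where needed. A minimal NumPy illustration, assuming simple symmetric per-tensor quantization:

import numpy as np

def quantize_symmetric(weights):
  # Map float weights to int8 with a single per-tensor scale.
  scale = np.abs(weights).max() / 127.0
  q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
  return q, scale

def dequantize(q, scale):
  # Recover an approximation of the original float weights.
  return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_symmetric(w)
print("max abs error:", np.abs(w - dequantize(q, scale)).max())

The reconstruction error stays small relative to the weight range, which is why accuracy usually degrades only slightly.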

In contrast to quantization aware training, with this method the weights are quantized after training and the activations are quantized dynamically at inference. Therefore, the model weights are not retrained to compensate for quantization-induced errors. Be sure to check the accuracy of the quantized model to verify that any degradation is acceptable.

This tutorial trains an MNIST model from scratch, checks its accuracy in TensorFlow, and then converts the model into a TensorFlow Lite FlatBuffer with dynamic range quantization. Finally, it checks the accuracy of the converted model and compares it to the original float model.

Build an MNIST model

Setup

import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)

import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib

Train a TensorFlow model

# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model architecture
model = keras.Sequential([
  keras.layers.InputLayer(input_shape=(28, 28)),
  keras.layers.Reshape(target_shape=(28, 28, 1)),
  keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
  keras.layers.MaxPooling2D(pool_size=(2, 2)),
  keras.layers.Flatten(),
  keras.layers.Dense(10)
])

# Train the digit classification model
model.compile(optimizer='adam',
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(
  train_images,
  train_labels,
  epochs=1,
  validation_data=(test_images, test_labels)
)

For this example, since you trained the model for just a single epoch, it only trains to ~96% accuracy.
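
If you want to re-check the float model's test accuracy directly (a convenience step, not part of the original notebook), Keras' evaluate returns the loss together with the accuracy metric configured above:

# Evaluate the trained float model on the test set; the second value is accuracy.
_, baseline_accuracy = model.evaluate(test_images, test_labels, verbose=0)
print("Float Keras model test accuracy:", baseline_accuracy)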

Convert to a TensorFlow Lite model

You can now convert the trained model to TensorFlow Lite format using the TensorFlow Lite Converter.

Now load the model using the TFLiteConverter:

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

Write it out to a .tflite file:

tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)

tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)

To quantize the model on export, set the optimizations flag to optimize for size:

converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)

Note how the resulting file is approximately 1/4 the size.

!ls -lh {tflite_models_dir}
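
If you prefer to check from Python instead of the shell, a small illustrative comparison using the pathlib paths created above could look like this:

# Compare on-disk sizes of the float and dynamic-range quantized models.
float_size = tflite_model_file.stat().st_size
quant_size = tflite_model_quant_file.stat().st_size
print(f"Float model:     {float_size / 1024:.1f} KiB")
print(f"Quantized model: {quant_size / 1024:.1f} KiB")
print(f"Size ratio:      {quant_size / float_size:.2f}x")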

Run the TFLite models

Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.

Load the models into the interpreters

interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()

interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()

Test the models on one image

test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)

input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)

import matplotlib.pylab as plt

plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true=str(test_labels[0]),
                              predict=str(np.argmax(predictions[0]))))
plt.grid(False)

Evaluate the models

# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on every image in the "test" dataset.
  prediction_digits = []
  for test_image in test_images:
    # Pre-processing: add batch dimension and convert to float32 to match with
    # the model's input data format.
    test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
    interpreter.set_tensor(input_index, test_image)

    # Run inference.
    interpreter.invoke()

    # Post-processing: remove batch dimension and find the digit with highest
    # probability.
    output = interpreter.tensor(output_index)
    digit = np.argmax(output()[0])
    prediction_digits.append(digit)

  # Compare prediction results with ground truth labels to calculate accuracy.
  accurate_count = 0
  for index in range(len(prediction_digits)):
    if prediction_digits[index] == test_labels[index]:
      accurate_count += 1
  accuracy = accurate_count * 1.0 / len(prediction_digits)

  return accuracy
print(evaluate_model(interpreter))

Repeat the evaluation on the dynamic range quantized model to obtain results like the following:

print(evaluate_model(interpreter_quant))

In this example, the compressed model has no difference in accuracy.
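
The overview above notes that dynamic range quantization can also speed up inference when quantized kernels are available. As a rough, purely illustrative micro-benchmark (not part of the original notebook, and heavily dependent on your hardware and TFLite build), you could time the two interpreters like this:

import time

def time_inference(interpreter, runs=100):
  # Time repeated single-image invocations; a coarse illustration only.
  input_index = interpreter.get_input_details()[0]["index"]
  image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
  start = time.perf_counter()
  for _ in range(runs):
    interpreter.set_tensor(input_index, image)
    interpreter.invoke()
  return (time.perf_counter() - start) / runs

print("Float model:     %.3f ms/inference" % (time_inference(interpreter) * 1e3))
print("Quantized model: %.3f ms/inference" % (time_inference(interpreter_quant) * 1e3))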

Optimizing an existing model

ResNets with pre-activation layers (ResNet-v2) are widely used for vision applications. A pre-trained frozen graph for ResNet-v2-101 is available on TensorFlow Hub.

You can convert the frozen graph to a TensorFlow Lite FlatBuffer with quantization by running the following code:

import tensorflow_hub as hub

resnet_v2_101 = tf.keras.Sequential([
  keras.layers.InputLayer(input_shape=(224, 224, 3)),
  hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4")
])

converter = tf.lite.TFLiteConverter.from_keras_model(resnet_v2_101)

# Convert to TF Lite without quantization
resnet_tflite_file = tflite_models_dir/"resnet_v2_101.tflite"
resnet_tflite_file.write_bytes(converter.convert())

# Convert to TF Lite with quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
resnet_quantized_tflite_file = tflite_models_dir/"resnet_v2_101_quantized.tflite"
resnet_quantized_tflite_file.write_bytes(converter.convert())
!ls -lh {tflite_models_dir}/*.tflite

The model size reduces from 171 MB to 43 MB. The model's accuracy on ImageNet can be evaluated using the scripts provided for TFLite accuracy measurement.

The optimized model's top-1 accuracy is 76.8, the same as the floating point model.