GitHub Repository: tensorflow/docs-l10n
Path: blob/master/site/zh-cn/model_optimization/guide/combine/pcqat_example.ipynb
Kernel: Python 3

Copyright 2021 The TensorFlow Authors.

#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Sparsity and cluster preserving quantization aware training (PCQAT) Keras example


This is an end-to-end example showing the usage of the sparsity and cluster preserving quantization aware training (PCQAT) API, part of the collaborative optimization pipeline of the TensorFlow Model Optimization Toolkit.

Other pages

For an introduction to the pipeline and other techniques available, see the collaborative optimization overview page.

Contents

In the tutorial, you will:

  1. Train a tf.keras model for the MNIST dataset from scratch.

  2. Fine-tune the model with pruning, check the accuracy, and observe that the model was successfully pruned.

  3. Apply sparsity-preserving clustering on the pruned model and observe that the sparsity applied earlier has been preserved.

  4. Apply QAT and observe the loss of sparsity and clusters.

  5. Apply PCQAT and observe that the sparsity and clustering applied earlier have been preserved.

  6. Generate a TFLite model and observe the effects of applying PCQAT on it.

  7. Compare the sizes of the different models to observe the compression benefits of applying sparsity followed by the collaborative optimization techniques of sparsity-preserving clustering and PCQAT.

  8. Compare the accuracy of the fully optimized model with the un-optimized baseline model accuracy.

Setup

You can run this Jupyter notebook in your local virtualenv or in Colab. For details of setting up dependencies, please refer to the installation guide.

! pip install -q tensorflow-model-optimization
import tensorflow as tf

import numpy as np
import tempfile
import zipfile
import os

Train a tf.keras model for MNIST to be pruned and clustered

# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

model = tf.keras.Sequential([
  tf.keras.layers.InputLayer(input_shape=(28, 28)),
  tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
  tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
  tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(10)
])

opt = tf.keras.optimizers.Adam(learning_rate=1e-3)

# Train the digit classification model
model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(
    train_images,
    train_labels,
    validation_split=0.1,
    epochs=10
)

Evaluate the baseline model and save it for later usage

_, baseline_model_accuracy = model.evaluate(
    test_images, test_labels, verbose=0)

print('Baseline test accuracy:', baseline_model_accuracy)

_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)

Prune and fine-tune the model to 50% sparsity

Apply the prune_low_magnitude() API to obtain the pruned model that is to be clustered in the next step. Refer to the pruning comprehensive guide for more information on the pruning API.

Define the model and apply the sparsity API

Note that the pre-trained model is used.

import tensorflow_model_optimization as tfmot

prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude

pruning_params = {
      'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0, frequency=100)
  }

callbacks = [
  tfmot.sparsity.keras.UpdatePruningStep()
]

pruned_model = prune_low_magnitude(model, **pruning_params)

# Use smaller learning rate for fine-tuning
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)

pruned_model.compile(
  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
  optimizer=opt,
  metrics=['accuracy'])

Fine-tune the model, check sparsity, and evaluate the accuracy against the baseline

Fine-tune the model with pruning for 3 epochs.

# Fine-tune model
pruned_model.fit(
  train_images,
  train_labels,
  epochs=3,
  validation_split=0.1,
  callbacks=callbacks)

Define helper functions to calculate and print the sparsity and clusters of the model.

def print_model_weights_sparsity(model):
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.Wrapper):
            weights = layer.trainable_weights
        else:
            weights = layer.weights
        for weight in weights:
            if "kernel" not in weight.name or "centroid" in weight.name:
                continue
            weight_size = weight.numpy().size
            zero_num = np.count_nonzero(weight == 0)
            print(
                f"{weight.name}: {zero_num/weight_size:.2%} sparsity ",
                f"({zero_num}/{weight_size})",
            )

def print_model_weight_clusters(model):
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.Wrapper):
            weights = layer.trainable_weights
        else:
            weights = layer.weights
        for weight in weights:
            # ignore auxiliary quantization weights
            if "quantize_layer" in weight.name:
                continue
            if "kernel" in weight.name:
                unique_count = len(np.unique(weight))
                print(
                    f"{layer.name}/{weight.name}: {unique_count} clusters "
                )

Let's strip the pruning wrapper first, then check that the model kernels were correctly pruned.

stripped_pruned_model = tfmot.sparsity.keras.strip_pruning(pruned_model)

print_model_weights_sparsity(stripped_pruned_model)

Apply sparsity-preserving clustering and check its effect on model sparsity in both cases

Next, apply sparsity-preserving clustering on the pruned model, observe the number of clusters, and check that the sparsity is preserved.

import tensorflow_model_optimization as tfmot
from tensorflow_model_optimization.python.core.clustering.keras.experimental import (
    cluster,
)

cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization

cluster_weights = cluster.cluster_weights

clustering_params = {
  'number_of_clusters': 8,
  'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS,
  'preserve_sparsity': True
}

sparsity_clustered_model = cluster_weights(stripped_pruned_model, **clustering_params)

sparsity_clustered_model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

print('Train sparsity preserving clustering model:')
sparsity_clustered_model.fit(train_images, train_labels, epochs=3, validation_split=0.1)

Strip the clustering wrapper first, then check that the model is correctly pruned and clustered.

stripped_clustered_model = tfmot.clustering.keras.strip_clustering(sparsity_clustered_model)

print("Model sparsity:\n")
print_model_weights_sparsity(stripped_clustered_model)

print("\nModel clusters:\n")
print_model_weight_clusters(stripped_clustered_model)
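The invariant checked above (zeros stay zero while the non-zero weights collapse onto a small set of centroids) can be sketched on a toy NumPy weight array. This is a hypothetical quantile-binning stand-in for illustration only, not the tfmot k-means implementation:

```python
import numpy as np

# Toy "pruned" weight matrix: roughly half of the entries are already zero.
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8)).astype(np.float32)
weights[rng.random(weights.shape) < 0.5] = 0.0

# Sparsity-preserving clustering, illustrated with quantile binning:
# only the non-zero weights are assigned to centroids.
nonzero = weights[weights != 0]
centroids = np.quantile(nonzero, np.linspace(0.05, 0.95, 7)).astype(np.float32)
assignments = np.abs(nonzero[:, None] - centroids[None, :]).argmin(axis=1)

clustered = weights.copy()
clustered[weights != 0] = centroids[assignments]

# The zero mask is untouched, so the sparsity from pruning survives,
# and the layer now holds at most 8 unique values (7 centroids + zero).
print(np.array_equal(clustered == 0, weights == 0))
print(len(np.unique(clustered)))
```

With number_of_clusters=8 and preserve_sparsity=True, the real API similarly keeps a fixed zero value among the cluster values, which is why the pruned mask survives clustering.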

Apply QAT and PCQAT and check the effect on model clusters and sparsity

Next, apply both QAT and PCQAT on the sparse clustered model and observe that PCQAT preserves the weight sparsity and clusters in your model. Note that the stripped model is passed to the QAT and PCQAT APIs.

# QAT
qat_model = tfmot.quantization.keras.quantize_model(stripped_clustered_model)

qat_model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
print('Train qat model:')
qat_model.fit(train_images, train_labels, batch_size=128, epochs=1, validation_split=0.1)

# PCQAT
quant_aware_annotate_model = tfmot.quantization.keras.quantize_annotate_model(
    stripped_clustered_model)

pcqat_model = tfmot.quantization.keras.quantize_apply(
    quant_aware_annotate_model,
    tfmot.experimental.combine.Default8BitClusterPreserveQuantizeScheme(preserve_sparsity=True))

pcqat_model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
print('Train pcqat model:')
pcqat_model.fit(train_images, train_labels, batch_size=128, epochs=1, validation_split=0.1)
print("QAT Model clusters:")
print_model_weight_clusters(qat_model)
print("\nQAT Model sparsity:")
print_model_weights_sparsity(qat_model)
print("\nPCQAT Model clusters:")
print_model_weight_clusters(pcqat_model)
print("\nPCQAT Model sparsity:")
print_model_weights_sparsity(pcqat_model)

See compression benefits of the PCQAT model

Define a helper function to get the zipped model file.

def get_gzipped_model_size(file):
  # It returns the size of the gzipped model in kilobytes.

  _, zipped_file = tempfile.mkstemp('.zip')
  with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
    f.write(file)

  return os.path.getsize(zipped_file)/1000

Observe that applying sparsity, clustering, and PCQAT to a model yields significant compression benefits.

# QAT model
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
qat_tflite_model = converter.convert()
qat_model_file = 'qat_model.tflite'
# Save the model.
with open(qat_model_file, 'wb') as f:
    f.write(qat_tflite_model)

# PCQAT model
converter = tf.lite.TFLiteConverter.from_keras_model(pcqat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
pcqat_tflite_model = converter.convert()
pcqat_model_file = 'pcqat_model.tflite'
# Save the model.
with open(pcqat_model_file, 'wb') as f:
    f.write(pcqat_tflite_model)

print("QAT model size: ", get_gzipped_model_size(qat_model_file), ' KB')
print("PCQAT model size: ", get_gzipped_model_size(pcqat_model_file), ' KB')
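The reason a generic compressor rewards these techniques is entropy: a weight buffer that is half zeros and otherwise limited to 8 centroid values repeats itself far more than dense random weights do. A self-contained sketch with Python's gzip module, independent of the notebook's models and helper (the arrays here are synthetic stand-ins, not real model weights):

```python
import gzip
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Dense random float32 weights: high entropy, little for gzip to exploit.
dense = rng.normal(size=n).astype(np.float32)

# PCQAT-style weights: ~50% zeros, the rest drawn from just 8 centroids.
centroids = rng.normal(size=8).astype(np.float32)
pcqat_like = rng.choice(centroids, size=n)
pcqat_like[rng.random(n) < 0.5] = 0.0

dense_kb = len(gzip.compress(dense.tobytes())) / 1000
pcqat_kb = len(gzip.compress(pcqat_like.tobytes())) / 1000
print(f"dense: {dense_kb:.1f} KB, sparse+clustered: {pcqat_kb:.1f} KB")
```

The sparse, clustered buffer compresses to a fraction of the dense one, which mirrors the QAT vs. PCQAT file-size gap printed above.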

See the persistence of accuracy from TF to TFLite

Define a helper function to evaluate the TFLite model on the test dataset.

def eval_model(interpreter):
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  # Run predictions on every image in the "test" dataset.
  prediction_digits = []
  for i, test_image in enumerate(test_images):
    if i % 1000 == 0:
      print(f"Evaluated on {i} results so far.")
    # Pre-processing: add batch dimension and convert to float32 to match with
    # the model's input data format.
    test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
    interpreter.set_tensor(input_index, test_image)

    # Run inference.
    interpreter.invoke()

    # Post-processing: remove batch dimension and find the digit with highest
    # probability.
    output = interpreter.tensor(output_index)
    digit = np.argmax(output()[0])
    prediction_digits.append(digit)

  print('\n')
  # Compare prediction results with ground truth labels to calculate accuracy.
  prediction_digits = np.array(prediction_digits)
  accuracy = (prediction_digits == test_labels).mean()
  return accuracy

Evaluate the model, which has been pruned, clustered and quantized, and then see that the accuracy from TensorFlow persists in the TFLite backend.

interpreter = tf.lite.Interpreter(pcqat_model_file)
interpreter.allocate_tensors()

pcqat_test_accuracy = eval_model(interpreter)

print('Pruned, clustered and quantized TFLite test_accuracy:', pcqat_test_accuracy)
print('Baseline TF test accuracy:', baseline_model_accuracy)

Conclusion

In this tutorial, you learned how to create a model, prune it using the prune_low_magnitude() API, and apply sparsity-preserving clustering using the cluster_weights() API to preserve sparsity while clustering the weights.

Next, sparsity and cluster preserving quantization aware training (PCQAT) was applied to preserve the model's sparsity and clusters while using QAT. The final PCQAT model was compared to the QAT one to show that sparsity and clusters are preserved in the former and lost in the latter. Note that the stripped model was passed to both APIs.

Next, the models were converted to TFLite to show the compression benefits of chaining the sparsity, clustering, and PCQAT model optimization techniques, and the TFLite model was evaluated to ensure that the accuracy persists in the TFLite backend.

Finally, the PCQAT TFLite model accuracy was compared to the pre-optimization baseline model accuracy to show that the collaborative optimization techniques achieved their compression benefits while maintaining accuracy similar to the original model.