Path: blob/master/site/en-snapshot/lite/performance/post_training_quant.ipynb
Copyright 2019 The TensorFlow Authors.
Post-training dynamic range quantization
Overview
TensorFlow Lite now supports converting weights to 8-bit precision as part of model conversion from TensorFlow GraphDefs to TensorFlow Lite's flatbuffer format. Dynamic range quantization achieves a 4x reduction in model size. In addition, TFLite supports on-the-fly quantization and dequantization of activations to allow for:
Using quantized kernels for faster implementation when available.
Mixing of floating-point kernels with quantized kernels for different parts of the graph.
The activations are always stored in floating point. For ops that support quantized kernels, the activations are quantized to 8 bits of precision dynamically prior to processing and are de-quantized to float precision after processing. Depending on the model being converted, this can give a speedup over pure floating point computation.
In contrast to quantization-aware training, with this method the weights are quantized post-training and the activations are quantized dynamically at inference. Therefore, the model weights are not retrained to compensate for quantization-induced errors. It is important to check the accuracy of the quantized model to ensure that the degradation is acceptable.
This tutorial trains an MNIST model from scratch, checks its accuracy in TensorFlow, and then converts the model into a TensorFlow Lite flatbuffer with dynamic range quantization. Finally, it checks the accuracy of the converted model and compares it to the original float model.
Build an MNIST model
Setup
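The imports below are one plausible setup for the rest of this tutorial:

```python
import numpy as np
import pathlib
import tensorflow as tf
from tensorflow import keras
```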
Train a TensorFlow model
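As a sketch, a small Keras convolutional network trained for one epoch on MNIST might look like the following; the exact layer sizes are illustrative rather than a canonical architecture:

```python
# Load the MNIST dataset and scale pixel values to [0, 1].
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define a small convolutional classifier (layer sizes are illustrative).
model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(28, 28)),
    keras.layers.Reshape(target_shape=(28, 28, 1)),
    keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
    keras.layers.MaxPooling2D(pool_size=(2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(10)
])

# Train for a single epoch.
model.compile(optimizer='adam',
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=1,
          validation_data=(test_images, test_labels))
```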
Since the model is trained for only a single epoch in this example, it only reaches about 96% accuracy.
Convert to a TensorFlow Lite model
Using the TensorFlow Lite Converter, you can now convert the trained model into a TensorFlow Lite model.
Now load the model using the TFLiteConverter:
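A minimal conversion, assuming the `model` object from the training step above:

```python
# Create a converter from the trained Keras model and convert it.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
```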
Write it out to a tflite file:
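For example, using `pathlib` (the directory and file names here are arbitrary choices):

```python
# Directory and file names are placeholders; pick any writable location.
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)

tflite_model_file = tflite_models_dir / "mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
```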
To quantize the model on export, set the optimizations flag to optimize for size:
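A sketch using the `tf.lite.Optimize.DEFAULT` flag (older releases exposed a separate size-oriented variant of this enum):

```python
# Enable default optimizations, which include dynamic range quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()

tflite_model_quant_file = tflite_models_dir / "mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
```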
Note how the resulting file is approximately 1/4 the size.
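You can verify the reduction by comparing the two files on disk, for example:

```python
import os

# Compare on-disk sizes of the float and quantized models.
print("Float model:     %d bytes" % os.path.getsize(tflite_model_file))
print("Quantized model: %d bytes" % os.path.getsize(tflite_model_quant_file))
```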
Run the TFLite models
Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.
Load the model into an interpreter
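Assuming the file paths written above, loading both models could look like:

```python
# Load the float model.
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()

# Load the dynamic range quantized model.
interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
```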
Test the model on one image
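A single-image sanity check on the float interpreter might look like this, using the MNIST test set loaded earlier:

```python
# Add a batch dimension and make sure the dtype matches the model input.
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)

input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)

print("Predicted digit:", np.argmax(predictions[0]), "- true label:", test_labels[0])
```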
Evaluate the models
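One way to measure accuracy over the full test set is a small helper such as the hypothetical `evaluate_model` below:

```python
def evaluate_model(interpreter):
  """Runs a TFLite interpreter over the MNIST test set and returns accuracy."""
  input_index = interpreter.get_input_details()[0]["index"]
  output_index = interpreter.get_output_details()[0]["index"]

  correct = 0
  for image, label in zip(test_images, test_labels):
    image = np.expand_dims(image, axis=0).astype(np.float32)
    interpreter.set_tensor(input_index, image)
    interpreter.invoke()
    if np.argmax(interpreter.get_tensor(output_index)[0]) == label:
      correct += 1
  return correct / len(test_images)

print("Float model accuracy:", evaluate_model(interpreter))
```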
Repeat the evaluation on the dynamic range quantized model to obtain:
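Reusing the same helper on the quantized interpreter:

```python
print("Quantized model accuracy:", evaluate_model(interpreter_quant))
```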
In this example, the compressed model shows no loss in accuracy.
Optimizing an existing model
ResNets with pre-activation layers (ResNet-v2) are widely used for vision applications. A pre-trained frozen graph for ResNet-v2-101 is available on TensorFlow Hub.
You can convert the frozen graph to a TensorFlow Lite flatbuffer with quantization by:
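A sketch using the TF1-compatible frozen-graph converter; the graph file name, tensor names, and input shape below are assumptions that depend on the actual export downloaded from TensorFlow Hub:

```python
# All names below are placeholders for the downloaded frozen graph.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="resnet_v2_101_299_frozen.pb",
    input_arrays=["input"],
    output_arrays=["output"],
    input_shapes={"input": [1, 299, 299, 3]})
converter.optimizations = [tf.lite.Optimize.DEFAULT]

resnet_quant_model = converter.convert()
pathlib.Path("resnet_v2_101_quantized.tflite").write_bytes(resnet_quant_model)
```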
The model size is reduced from 171 MB to 43 MB. The accuracy of this model on ImageNet can be evaluated using the scripts provided for TFLite accuracy measurement.
The optimized model's top-1 accuracy is 76.8, the same as the floating-point model.