Copyright 2021 The TensorFlow Authors.
Migrating your TFLite code to TF2
TensorFlow Lite (TFLite) is a set of tools that helps developers run ML inference on-device (mobile, embedded, and IoT devices). The TFLite converter is one such tool that converts existing TF models into an optimized TFLite model format that can be efficiently run on-device.
In this doc, you'll learn what changes you need to make to your TF-to-TFLite conversion code, followed by a few examples that demonstrate them.
Changes to your TF to TFLite conversion code
1. If you're using a legacy TF1 model format (such as a Keras file, frozen GraphDef, checkpoints, or `tf.Session`), update it to a TF1/TF2 SavedModel and use the TF2 converter API `tf.lite.TFLiteConverter.from_saved_model(...)` to convert it to a TFLite model (refer to Table 1).
2. Update the converter API flags (refer to Table 2).
3. Remove legacy APIs such as `tf.lite.constants`. (e.g., replace `tf.lite.constants.INT8` with `tf.int8`.)
// Table 1 // TFLite Python Converter API Update

| TF1 API | TF2 API |
|---|---|
| `tf.lite.TFLiteConverter.from_saved_model('saved_model/',..)` | supported |
| `tf.lite.TFLiteConverter.from_keras_model_file('model.h5',..)` | removed (update to the SavedModel format) |
| `tf.lite.TFLiteConverter.from_frozen_graph('model.pb',..)` | removed (update to the SavedModel format) |
| `tf.lite.TFLiteConverter.from_session(sess,...)` | removed (update to the SavedModel format) |
// Table 2 // TFLite Python Converter API Flags Update

| TF1 API | TF2 API |
|---|---|
| `allow_custom_ops`, `optimizations`, `representative_dataset`, `target_spec`, `inference_input_type`, `inference_output_type`, `experimental_new_converter`, `experimental_new_quantizer` | supported |
| `input_tensors`, `output_tensors`, `input_arrays_with_shape`, `output_arrays`, `experimental_debug_info_func` | removed (unsupported converter API arguments) |
| `change_concat_input_ranges`, `default_ranges_stats`, `get_input_arrays()`, `inference_type`, `quantized_input_stats`, `reorder_across_fake_quant` | removed (unsupported quantization workflows) |
| `conversion_summary_dir`, `dump_graphviz_dir`, `dump_graphviz_video` | removed (instead, visualize models using Netron or visualize.py) |
| `output_format`, `drop_control_dependency` | removed (unsupported features in TF2) |
Examples
You'll now walk through some examples of converting legacy TF1 models to TF1/TF2 SavedModels and then converting them to TF2 TFLite models.
Setup
Start with the necessary TensorFlow imports.
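For instance, the examples below assume imports along these lines (the `tf1` alias for the compat module is just a convention used here):

```python
import tensorflow as tf
import tensorflow.compat.v1 as tf1  # TF1-style APIs available inside TF2
```
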
Create all the necessary TF1 model formats.
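As a minimal sketch (the model structure, variable names, and file paths here are illustrative, not from the original notebook), the three legacy formats can be produced like this:

```python
import tensorflow as tf
import tensorflow.compat.v1 as tf1

SAVED_MODEL_DIR = 'tf1_saved_model/'   # hypothetical output paths
KERAS_H5_PATH = 'tf1_model.h5'
FROZEN_GRAPH_PATH = 'tf1_frozen.pb'

def build_graph():
  """Builds a tiny y = x @ w + b graph and returns its input/output tensors."""
  inp = tf1.placeholder(tf.float32, shape=(1, 3), name='input')
  w = tf1.get_variable('w', shape=(3, 2))
  b = tf1.get_variable('b', shape=(2,))
  out = tf1.identity(tf1.matmul(inp, w) + b, name='output')
  return inp, out

# 1. A TF1 SavedModel.
with tf1.Graph().as_default() as g:
  inp, out = build_graph()
  with tf1.Session(graph=g) as sess:
    sess.run(tf1.global_variables_initializer())
    tf1.saved_model.simple_save(
        sess, SAVED_MODEL_DIR, inputs={'input': inp}, outputs={'output': out})

# 2. A Keras model in the legacy HDF5 (.h5) format.
keras_model = tf.keras.Sequential(
    [tf.keras.Input(shape=(3,)), tf.keras.layers.Dense(2)])
keras_model.save(KERAS_H5_PATH)

# 3. A frozen GraphDef (.pb), with variables folded into constants.
with tf1.Graph().as_default() as g:
  inp, out = build_graph()
  with tf1.Session(graph=g) as sess:
    sess.run(tf1.global_variables_initializer())
    frozen = tf1.graph_util.convert_variables_to_constants(
        sess, g.as_graph_def(), ['output'])
with tf1.gfile.GFile(FROZEN_GRAPH_PATH, 'wb') as f:
  f.write(frozen.SerializeToString())
```
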
1. Convert a TF1 SavedModel to a TFLite model
Before: Converting with TF1
This is typical code for TF1-style TFLite conversion.
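A sketch of the TF1-style conversion (it builds a throwaway TF1 SavedModel first so the snippet is self-contained; paths and tensor names are illustrative):

```python
import tensorflow as tf
import tensorflow.compat.v1 as tf1

# Build a tiny TF1 SavedModel to convert (illustrative, not from the notebook).
with tf1.Graph().as_default() as g:
  inp = tf1.placeholder(tf.float32, shape=(1, 3), name='input')
  w = tf1.get_variable('w', shape=(3, 2))
  out = tf1.identity(tf1.matmul(inp, w), name='output')
  with tf1.Session(graph=g) as sess:
    sess.run(tf1.global_variables_initializer())
    tf1.saved_model.simple_save(
        sess, 'tf1_sm_before/', inputs={'input': inp}, outputs={'output': out})

# TF1 converter API: input/output arrays are spelled out by tensor name.
converter = tf1.lite.TFLiteConverter.from_saved_model(
    saved_model_dir='tf1_sm_before/',
    input_arrays=['input'],
    input_shapes={'input': [1, 3]},
    output_arrays=['output'])
tflite_model = converter.convert()
```
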
After: Converting with TF2
Directly convert the TF1 SavedModel to a TFLite model, with a smaller set of v2 converter flags.
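A sketch of the TF2 conversion (the setup that builds the SavedModel is illustrative, so the snippet stands alone):

```python
import tensorflow as tf
import tensorflow.compat.v1 as tf1

# Build the same tiny TF1 SavedModel (illustrative paths/names).
with tf1.Graph().as_default() as g:
  inp = tf1.placeholder(tf.float32, shape=(1, 3), name='input')
  w = tf1.get_variable('w', shape=(3, 2))
  out = tf1.identity(tf1.matmul(inp, w), name='output')
  with tf1.Session(graph=g) as sess:
    sess.run(tf1.global_variables_initializer())
    tf1.saved_model.simple_save(
        sess, 'tf1_sm_after/', inputs={'input': inp}, outputs={'output': out})

# TF2 converter API: only the SavedModel directory is needed; input and
# output tensors are taken from the model's signature.
converter = tf.lite.TFLiteConverter.from_saved_model('tf1_sm_after/')
tflite_model = converter.convert()
```
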
2. Convert a TF1 Keras model file to a TFLite model
Before: Converting with TF1
This is typical code for TF1-style TFLite conversion.
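A sketch of the TF1-style conversion from an `.h5` file (the tiny Keras model and file name are illustrative; this assumes a TF 2.x runtime where the legacy HDF5 format is still supported):

```python
import tensorflow as tf
import tensorflow.compat.v1 as tf1

# Save a tiny Keras model in the legacy HDF5 format (illustrative).
model = tf.keras.Sequential(
    [tf.keras.Input(shape=(3,)), tf.keras.layers.Dense(2)])
model.save('keras_before.h5')

# TF1 converter API loads the .h5 file directly.
converter = tf1.lite.TFLiteConverter.from_keras_model_file('keras_before.h5')
tflite_model = converter.convert()
```
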
After: Converting with TF2
First, convert the TF1 Keras model file to a TF2 SavedModel and then convert it to a TFLite model, with a smaller set of v2 converter flags.
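A sketch of those two steps (the model, paths, and the use of `tf.saved_model.save` for the re-export are illustrative assumptions for a TF 2.x runtime):

```python
import tensorflow as tf

# Save a tiny Keras model in the legacy HDF5 format (illustrative).
model = tf.keras.Sequential(
    [tf.keras.Input(shape=(3,)), tf.keras.layers.Dense(2)])
model.save('keras_after.h5')

# Step 1: re-export the legacy .h5 file as a TF2 SavedModel.
loaded = tf.keras.models.load_model('keras_after.h5')
tf.saved_model.save(loaded, 'keras_after_sm/')

# Step 2: convert the SavedModel with the TF2 converter.
converter = tf.lite.TFLiteConverter.from_saved_model('keras_after_sm/')
tflite_model = converter.convert()
```
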
3. Convert a TF1 frozen GraphDef to a TFLite model
Before: Converting with TF1
This is typical code for TF1-style TFLite conversion.
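A sketch of the TF1-style conversion from a frozen GraphDef (the tiny graph, file name, and tensor names are illustrative):

```python
import tensorflow as tf
import tensorflow.compat.v1 as tf1

# Write a tiny frozen GraphDef to convert (illustrative).
with tf1.Graph().as_default() as g:
  inp = tf1.placeholder(tf.float32, shape=(1, 3), name='input')
  w = tf1.get_variable('w', shape=(3, 2))
  out = tf1.identity(tf1.matmul(inp, w), name='output')
  with tf1.Session(graph=g) as sess:
    sess.run(tf1.global_variables_initializer())
    frozen = tf1.graph_util.convert_variables_to_constants(
        sess, g.as_graph_def(), ['output'])
with tf1.gfile.GFile('frozen_before.pb', 'wb') as f:
  f.write(frozen.SerializeToString())

# TF1 converter API reads the .pb file plus explicit tensor names.
converter = tf1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='frozen_before.pb',
    input_arrays=['input'],
    output_arrays=['output'],
    input_shapes={'input': [1, 3]})
tflite_model = converter.convert()
```
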
After: Converting with TF2
First, convert the TF1 frozen GraphDef to a TF1 SavedModel and then convert it to a TFLite model, with a smaller set of v2 converter flags.
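A sketch of those two steps (graph, paths, and tensor names are illustrative; `simple_save` is one way to wrap an imported GraphDef as a SavedModel):

```python
import tensorflow as tf
import tensorflow.compat.v1 as tf1

# Build a tiny frozen GraphDef (illustrative), as in the "before" example.
with tf1.Graph().as_default() as g:
  inp = tf1.placeholder(tf.float32, shape=(1, 3), name='input')
  w = tf1.get_variable('w', shape=(3, 2))
  out = tf1.identity(tf1.matmul(inp, w), name='output')
  with tf1.Session(graph=g) as sess:
    sess.run(tf1.global_variables_initializer())
    frozen = tf1.graph_util.convert_variables_to_constants(
        sess, g.as_graph_def(), ['output'])

# Step 1: import the GraphDef into a fresh graph and re-export it as a
# TF1 SavedModel with an explicit input/output signature.
with tf1.Graph().as_default() as g2:
  tf1.import_graph_def(frozen, name='')
  inp2 = g2.get_tensor_by_name('input:0')
  out2 = g2.get_tensor_by_name('output:0')
  with tf1.Session(graph=g2) as sess:
    tf1.saved_model.simple_save(
        sess, 'frozen_after_sm/',
        inputs={'input': inp2}, outputs={'output': out2})

# Step 2: convert the SavedModel with the TF2 converter.
converter = tf.lite.TFLiteConverter.from_saved_model('frozen_after_sm/')
tflite_model = converter.convert()
```
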
Further reading
Refer to the TFLite Guide to learn more about the workflows and latest features.
If you're using TF1 code or legacy TF1 model formats (Keras `.h5` files, frozen GraphDef `.pb` files, etc.), please update your code and migrate your models to the TF2 SavedModel format.