TensorFlow Lite operator versions
This document describes TensorFlow Lite's op versioning schema. Op versioning enables developers to add new functionalities and parameters into existing ops. In addition, it guarantees the following:
* Backward compatibility: A new TensorFlow Lite implementation should handle an old model file.
* Forward compatibility: An old TensorFlow Lite implementation should handle a new model file produced by a new version of the converter, as long as no new features are used.
* Forward incompatibility detection: If an old TensorFlow Lite implementation reads a new model that contains a new, unsupported version of an op, it should report the error.
Example: Adding dilation into depthwise convolution
The remainder of this document explains op versioning in TFLite by showing how to add dilation parameters to the depthwise convolution operation.
Knowledge of dilation is not required to understand this document. Note that:
* 2 new integer parameters will be added: dilation_width_factor and dilation_height_factor.
* Old depthwise convolution kernels that don't support dilation are equivalent to setting the dilation factors to 1.
Change FlatBuffer schema
To add new parameters into an op, change the options table in lite/schema/schema.fbs.
For example, the options table of depthwise convolution looks like this:
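The snippet below is a sketch of the pre-dilation options table; the field names follow lite/schema/schema.fbs, but treat the exact contents as illustrative.

```
table DepthwiseConv2DOptions {
  padding:Padding;
  stride_w:int;
  stride_h:int;
  depth_multiplier:int;
  fused_activation_function:ActivationFunctionType;
}
```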
When adding new parameters:
* Add comments indicating which parameters are supported by which version.
* When the new implementation gets the default values for newly added parameters, it should work exactly the same as the old implementation.
The table will be like this after the new parameters are added:
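Below is a sketch of the updated table. In the schema the new fields are conventionally abbreviated (dilation_w_factor and dilation_h_factor; they correspond to the dilation_width_factor and dilation_height_factor parameters in the C struct), and their default values of 1 make old models behave exactly like version 1.

```
table DepthwiseConv2DOptions {
  // Parameters for DepthwiseConv version 1 or above.
  padding:Padding;
  stride_w:int;
  stride_h:int;
  depth_multiplier:int;
  fused_activation_function:ActivationFunctionType;
  // Parameters for DepthwiseConv version 2 or above.
  dilation_w_factor:int = 1;
  dilation_h_factor:int = 1;
}
```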
The file lite/schema/schema_generated.h should be re-generated for the new schema.
Change C structures and kernel implementation
In TensorFlow Lite, the kernel implementation is decoupled from the FlatBuffer definition. The kernels read the parameters from C structures defined in lite/c/builtin_op_data.h.
The original depthwise convolution parameter is as follows:
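A sketch of the original struct, following TfLiteDepthwiseConvParams in lite/c/builtin_op_data.h:

```c
typedef struct {
  TfLitePadding padding;
  int stride_width;
  int stride_height;
  int depth_multiplier;
  TfLiteFusedActivation activation;
} TfLiteDepthwiseConvParams;
```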
As with the FlatBuffer schema, add comments indicating which parameters are supported starting from which version. The result is seen below:
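A sketch of the updated struct, with version comments and the two new dilation fields:

```c
typedef struct {
  // Parameters for DepthwiseConv version 1 or above.
  TfLitePadding padding;
  int stride_width;
  int stride_height;
  int depth_multiplier;
  TfLiteFusedActivation activation;
  // Parameters for DepthwiseConv version 2 or above.
  int dilation_width_factor;
  int dilation_height_factor;
} TfLiteDepthwiseConvParams;
```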
Please also change the kernel implementation to read the newly added parameters from the C structures. The details are omitted here.
Change the FlatBuffer reading code
The logic to read the FlatBuffer and produce the C structure is in lite/core/api/flatbuffer_conversions.cc.
Update the file to handle the new parameters, as shown below:
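The snippet below is a simplified sketch of the DepthwiseConv2D parsing case; AllocateParams, ConvertPadding, and ConvertActivation are hypothetical stand-ins for the allocation and enum-conversion utilities that file actually uses.

```c++
case BuiltinOperator_DEPTHWISE_CONV_2D: {
  // AllocateParams/ConvertPadding/ConvertActivation are illustrative helpers;
  // the real file has its own allocation and conversion utilities.
  TfLiteDepthwiseConvParams* params =
      AllocateParams<TfLiteDepthwiseConvParams>();
  if (const auto* conv_params =
          op->builtin_options_as_DepthwiseConv2DOptions()) {
    params->padding = ConvertPadding(conv_params->padding());
    params->stride_width = conv_params->stride_w();
    params->stride_height = conv_params->stride_h();
    params->depth_multiplier = conv_params->depth_multiplier();
    params->activation =
        ConvertActivation(conv_params->fused_activation_function());
    // New in version 2. Old models omit these fields, so the schema defaults
    // of 1 are returned and the new kernel behaves like the old one.
    params->dilation_width_factor = conv_params->dilation_w_factor();
    params->dilation_height_factor = conv_params->dilation_h_factor();
  }
  *builtin_data = reinterpret_cast<void*>(params);
  break;
}
```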
It's not required to check the op version here. When the new implementation reads an old model file where dilation factors are missing, it will use 1 as the default value, and the new kernel will work consistently with the old kernel.
Change kernel registration
The MutableOpResolver (defined in lite/mutable_op_resolver.h) provides a few functions to register op kernels. The minimum and maximum versions are 1 by default:
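A sketch of the relevant registration functions (exact const-qualifiers and overloads vary between releases):

```c++
void AddBuiltin(tflite::BuiltinOperator op, TfLiteRegistration* registration,
                int min_version = 1, int max_version = 1);
void AddCustom(const char* name, TfLiteRegistration* registration,
               int min_version = 1, int max_version = 1);
```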
The built-in ops are registered in lite/kernels/register.cc. In this example, we implemented a new op kernel which can handle DepthwiseConv2D versions 1 and 2, so we need to change this line:
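Both registration calls below are sketches of the before and after lines in lite/kernels/register.cc.

```c++
AddBuiltin(BuiltinOperator_DEPTHWISE_CONV_2D, Register_DEPTHWISE_CONV_2D());
```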
to:
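```c++
// Sketch of the updated registration call, with the version range extended.
AddBuiltin(BuiltinOperator_DEPTHWISE_CONV_2D, Register_DEPTHWISE_CONV_2D(),
           /* min_version = */ 1,
           /* max_version = */ 2);
```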
Change TFLite op version
The next step is to make TFLite populate the minimum version that's required to execute the op. In this example, it means:
* Populate version=1 when dilation factors are all 1.
* Populate version=2 otherwise.
Modify the GetBuiltinOperatorVersion function for the operator in lite/tools/versioning/op_version.cc by adding the new version to the case of DepthwiseConv2D:
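A sketch of the updated case (the way builtin_data reaches the function through the op signature may differ between releases):

```c++
case BuiltinOperator_DEPTHWISE_CONV_2D: {
  auto* depthwise_conv_params =
      reinterpret_cast<TfLiteDepthwiseConvParams*>(op_sig.builtin_data);
  TFLITE_DCHECK(depthwise_conv_params != nullptr);
  // Any non-default dilation factor requires the version 2 kernel.
  if (depthwise_conv_params->dilation_width_factor != 1 ||
      depthwise_conv_params->dilation_height_factor != 1) {
    return 2;
  }
  return 1;
}
```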
Update the operator version map
The last step is to add the new version info into the operator version map. This step is required because we need to generate the model's minimum required runtime version based on this version map.
To do this, you need to add a new map entry in lite/tools/versioning/runtime_version.cc.
In this example, you need to add the following entry into op_version_map:
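A sketch of the new entry:

```c++
{{BuiltinOperator_DEPTHWISE_CONV_2D, 2}, %CURRENT_RUNTIME_VERSION%}
```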
where %CURRENT_RUNTIME_VERSION% corresponds to the current runtime version defined in tensorflow/core/public/version.h.
Delegation implementation
TensorFlow Lite provides a delegation API which enables delegating ops to hardware backends. In the delegate's Prepare function, check whether the version is supported for every node in the delegation code. This is required even if the delegation only supports version 1 ops, so it can detect incompatibility when it encounters an op with a higher version.
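As a rough sketch (not the only way to structure a delegate), a Prepare implementation can walk the execution plan and only claim nodes whose registered version it supports; the DelegatePrepare name and the version bound of 2 below are illustrative.

```c++
#include <vector>

#include "tensorflow/lite/builtin_ops.h"
#include "tensorflow/lite/c/common.h"

TfLiteStatus DelegatePrepare(TfLiteContext* context, TfLiteDelegate* delegate) {
  TfLiteIntArray* plan = nullptr;
  if (context->GetExecutionPlan(context, &plan) != kTfLiteOk) {
    return kTfLiteError;
  }

  std::vector<int> supported_nodes;
  for (int i = 0; i < plan->size; ++i) {
    const int node_index = plan->data[i];
    TfLiteNode* node = nullptr;
    TfLiteRegistration* registration = nullptr;
    if (context->GetNodeAndRegistration(context, node_index, &node,
                                        &registration) != kTfLiteOk) {
      return kTfLiteError;
    }
    // Reject any op version this delegate does not implement, even if the
    // delegate otherwise only handles version 1 ops.
    if (registration->builtin_code == kTfLiteBuiltinDepthwiseConv2d &&
        registration->version <= 2) {
      supported_nodes.push_back(node_index);
    }
  }

  // Hand supported_nodes to
  // context->ReplaceNodeSubsetsWithDelegateKernels(...) as usual.
  return kTfLiteOk;
}
```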