Path: blob/master/site/en-snapshot/mlir/_includes/tf_passes.md
-cluster-tf-ops-by-host
Cluster the TensorFlow ops by host so that each function only contains ops placed on the same host
-constant-op-device-assignment
Assign device for tf.Const ops
-convert-tf-control-flow-to-scf
Convert TensorFlow control flow to SCF.
This pass can be used for all direct control flow lowerings from the TensorFlow dialect to the SCF dialect.
-prepare-tpu-computation-for-tf-export
Prepare TPU computation to be legal for export to TensorFlow
Prepares TPU computation module attached to _TPUCompileMlir op for TensorFlow graph export by making transformation such as replacing or removing MLIR or XLA specific attributes that are not legal in TensorFlow graph.
-tf-batch-matmul-to-tf-einsum
Replace TF BatchMatMul op by TF Einsum op.
-tf-broadcast-fold
Fold explicit broadcasts into the following operations if they support implicit broadcasting on their operand.
-tf-canonicalize-compile-and-replicate-attributes
Canonicalize compilation and replication attributes.
A pass that converts existing compilation and replication attributes into unified attributes. For example, `_tpu_replicate="cluster"` in the following code will be replaced by `_replication_info="cluster"` and `_xla_compile_device_type="TPU"`.
`_XlaMustCompile=true` in the following code will be replaced by `_xla_compile_device_type`, with its value set to the value of `device`.
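The code listings referenced above are not included in this snapshot. As a rough sketch (the op name `tf.OpA` and the tensor types are illustrative, not taken from the original example), the attribute rewrite on a single op looks roughly like:

```mlir
// Legacy form: the op carries the legacy replication attribute.
%0 = "tf.OpA"(%arg0) {_tpu_replicate = "cluster"} : (tensor<f32>) -> tensor<f32>

// Unified form produced by this pass.
%0 = "tf.OpA"(%arg0) {_replication_info = "cluster",
                      _xla_compile_device_type = "TPU"} : (tensor<f32>) -> tensor<f32>
```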
-tf-convert-to-legacy-compile-and-replicate-attributes
Convert unified compilation and replication attributes back to legacy attributes.
This transformation pass converts unified compilation and replication attributes (`_replication_info` and `_xla_compile_device_type`) into legacy attributes. This ensures the unified attributes do not get exposed outside of the MLIR bridge with the V1 pipeline in some cases. The pass expects to have either none or both of the unified attributes present in an op for the conversion to happen. Otherwise it will fail.
For example, `_replication_info="cluster"` and `_xla_compile_device_type="TPU"` in the following code will be replaced by `_tpu_replicate="cluster"` as follows:
-tf-data-optimization
Performs tf.data optimizations
-tf-decompose-reduce-dataset
Decomposes ReduceDataset op into dataset operations.
Decomposes ReduceDataset op into a while loop that iterates the dataset and calls into the reduction function. This decomposition is only done if the ReduceDataset op is marked for compilation with the _xla_compile_device_type attribute.
For example, for the following function the ReduceDataset op:
with the following reduction function:
will be transformed into:
-tf-device-assignment-by-func-attr
Device assignment in TF dialect using the device specified in the function attribute.
-tf-device-cluster-formation
Form clusters from instructions assigned to same device
Clusters operations with the same device assignment id. For each cluster, creates a "tf_device.device_launch" op with a Region containing the ops in each cluster and replaces the ops with the new launch op.
For example, given the following program:
After the pass, we will have:
-tf-device-cluster-outlining
Outlines regions of tf_device.cluster operations
This pass outlines the body of a `tf_device.cluster` into a function and replaces the `tf_device.cluster` op with an equivalent `tf_device.cluster_func` op. Implicit operands will be captured and materialized as explicit arguments to the newly created functions and associated `tf_device.cluster_func` ops.
For example, the following:
will be transformed into:
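The before/after listings are missing from this snapshot; a minimal sketch of the outlining, assuming a single captured value and simplified types (`@computation` and `@cluster_body` are hypothetical names), might look like:

```mlir
// Before: the cluster body is inline and implicitly captures %arg0.
func.func @computation(%arg0: tensor<i32>) -> tensor<i32> {
  %0 = "tf_device.cluster"() ({
    %1 = "tf.Identity"(%arg0) : (tensor<i32>) -> tensor<i32>
    tf_device.return %1 : tensor<i32>
  }) : () -> tensor<i32>
  return %0 : tensor<i32>
}

// After: the body is outlined into @cluster_body and the cluster is replaced
// by a tf_device.cluster_func whose operands make the capture explicit.
func.func @computation(%arg0: tensor<i32>) -> tensor<i32> {
  %0 = "tf_device.cluster_func"(%arg0) {func = @cluster_body} : (tensor<i32>) -> tensor<i32>
  return %0 : tensor<i32>
}

func.func @cluster_body(%arg0: tensor<i32>) -> tensor<i32> {
  %0 = "tf.Identity"(%arg0) : (tensor<i32>) -> tensor<i32>
  return %0 : tensor<i32>
}
```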
-tf-device-constant-sinking
Sinks constants implicitly captured in a tf_device.cluster region.
This pass sinks implicitly captured constants (`tf.Const` ops) used by and into a `tf_device.cluster` region. Performing this prior to outlining will reduce the number of arguments of the outlined function.
For example, the following:
will be transformed into:
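The listings are missing here as well; a minimal sketch under the same simplifying assumptions (illustrative op and types):

```mlir
// Before: the constant is defined outside the cluster and implicitly captured.
%cst = "tf.Const"() {value = dense<0> : tensor<i64>} : () -> tensor<i64>
%0 = "tf_device.cluster"() ({
  %1 = "tf.AddV2"(%arg0, %cst) : (tensor<i64>, tensor<i64>) -> tensor<i64>
  tf_device.return %1 : tensor<i64>
}) : () -> tensor<i64>

// After: the constant is sunk into the cluster region, so a later outlining
// pass needs one fewer explicit argument.
%0 = "tf_device.cluster"() ({
  %cst = "tf.Const"() {value = dense<0> : tensor<i64>} : () -> tensor<i64>
  %1 = "tf.AddV2"(%arg0, %cst) : (tensor<i64>, tensor<i64>) -> tensor<i64>
  tf_device.return %1 : tensor<i64>
}) : () -> tensor<i64>
```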
-tf-device-convert-launch-func-to-tf-call
Rewrites tf_device::LaunchFuncOp to TF::PartitionedCallOp
This pass converts tf_device::LaunchFuncOp into an equivalent TF::PartitionedCallOp so that it can be exported to TensorFlow GraphDef.
-tf-device-index-selector
Fold tf.DeviceIndex to constant.
-tf-device-launch-outlining
Outlines regions of tf_device.launch operations
This pass outlines the body of a `tf_device.launch` into a function and replaces the `tf_device.launch` op with an equivalent `tf_device.launch_func` op. Implicit operands will be captured and materialized as explicit arguments to the newly created functions and associated `tf_device.launch_func` ops. The `device` attribute from the `launch` op is transferred to `launch_func`.
For example, the following:
will be transformed into:
Options
-tf-device-mark-input-output-aliases
Marks device cluster inputs-output pairs that read/write to the same variable as aliases
This pass analyzes the inputs and outputs to device cluster and marks those input-output pairs as aliases (using the `tf.aliasing_output` attribute) which read and write to the same resource. This aliasing information can then be propagated to XLA compiler for input/output buffer space optimizations.
-tf-drop-while-shape-invariant
Drop `shape_invariant` attribute from While/WhileRegion ops.
Drop `shape_invariant` attribute from tf.While and tf.WhileRegion op. This would allow shape inference pass to further refine operand/result shapes of these ops. This is only safe to do when compiling to XLA.
-tf-drop-while-shape-invariant-in-device-cluster
Drop `shape_invariant` attribute from While/WhileRegion ops inside device cluster.
Drop `shape_invariant` attribute from tf.While and tf.WhileRegion op only inside device cluster. This would allow shape inference pass to further refine operand/result shapes of these ops. This is only safe to do when compiling to XLA.
-tf-einsum
Transform Einsum to other TF Ops for the supported variants
-tf-embedding-pipelining
Rewrite graph for embedding pipelining
For architectures that support accelerated embedding lookups, this pass will rewrite the graph to use pipelining for better device utilization.
-tf-embedding-program-key
Sets the program key for embedding ops.
Passes in the program key to embedding ops. Will move the embedding ops after a _TPUCompileMlir op if there is no predecessor _TPUCompileMlir op. Both the embedding op and compile op are assumed to be wrapped in separate tf_device.launch() ops. This is because the embedding op is head outside compiled and the compile op is wrapped in launch to execute on host during TPURewritePass.
For example, the tf.OpA with the `mini_batch_splits` attribute will be moved after `_TPUCompileMlir` and the first input will use the `_TPUCompileMlir` program output:
becomes:
-tf-embedding-sequencing
Rewrite graph for sequential execution of embeddings
This is a strictly sequential and formally correct fallback option for the embedding pipelining pass intended for debugging during pipelining development.
-tf-executor-break-up-islands
Transform from TF control dialect to TF executor dialect.
-tf-executor-check-control-dependencies
Checks control dependencies
This pass analyzes control dependencies between islands and warns about dependencies that are not explainable by side effects of the involved ops. More precisely, for every minimal unexplainable control dependency path we emit op warnings for all involved ops. The pass does not report intermediate dummy ops for grouping control dependencies (Identity, NoOp), unless they are part of an unexplainable path between other ops. This pass is useful to understand control dependency conservatism for a given MLIR module.
For example, the following function
produces the following warnings
because the first and last `AssignVariableOp`s access different resources and therefore should be independent. Note that the `NoOp`s are considered as intermediate ops for control dependency grouping.
-tf-executor-convert-control-to-data-outputs
Chain control outputs of while loop body
This pass converts the control outputs of a while loop body function to data outputs. Thus, inter-iteration control dependencies are transformed into data dependencies. Since data dependencies can express which particular operations in the while loop body are dependent on which inputs, this captures inter-iteration parallelism in the while loop. Control dependencies, on the other hand, create a barrier at the end of the while loop body, thus blocking any parallelism across iterations.
For example, the following while loop body has a `%barrier` at the end. Although there is no data/control dependency between `tf.AssignVariableOp` for `%arg0` and `tf.AssignVariableOp` for `%arg1` across any iteration, the while loop body has a control barrier (`%barrier`) at the end which forces a dependency, and the two assign variable ops must wait for each other to complete before starting the next iteration. Transforming these control outputs to data outputs removes the dependency between the two assign variable ops, thus allowing them to run in parallel across iterations.
Before:
After:
-tf-executor-graph-pruning
Prunes unreachable ops in a tf_executor.graph
This pass removes ops from a `tf_executor.graph` that are not transitively, via data or control dependencies, connected to the associated `tf_executor.fetch` op. The order of ops will be preserved. Functions named `main` with no `tf.entry_function` attribute will not be pruned, as such graphs/functions may have been imported from a V1 TensorFlow graph, where feeds/fetches/targets are not provided at certain stages of IR transformation (e.g. pre-placement).
Option `ops-to-preserve` allows specifying ops that should not be pruned, regardless of their reachability.
For example, the following:
will be transformed into:
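The example IR is not present in this snapshot; a minimal sketch (illustrative constants and types) of what the pruning does:

```mlir
// Before: the first island is not connected to tf_executor.fetch.
func.func @main() -> tensor<i32> {
  %0 = tf_executor.graph {
    %dead, %dead_ctl = tf_executor.island {
      %a = "tf.Const"() {value = dense<1> : tensor<i32>} : () -> tensor<i32>
      tf_executor.yield %a : tensor<i32>
    }
    %live, %live_ctl = tf_executor.island {
      %b = "tf.Const"() {value = dense<2> : tensor<i32>} : () -> tensor<i32>
      tf_executor.yield %b : tensor<i32>
    }
    tf_executor.fetch %live : tensor<i32>
  }
  return %0 : tensor<i32>
}

// After: only ops reachable from the fetch remain.
func.func @main() -> tensor<i32> {
  %0 = tf_executor.graph {
    %live, %live_ctl = tf_executor.island {
      %b = "tf.Const"() {value = dense<2> : tensor<i32>} : () -> tensor<i32>
      tf_executor.yield %b : tensor<i32>
    }
    tf_executor.fetch %live : tensor<i32>
  }
  return %0 : tensor<i32>
}
```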
Options
-tf-executor-island-coarsening
Walks tf_executor::GraphOp and merges individual tf_executor::IslandOps.
This pass performs whole graph analysis for a graph encapsulated into tf_executor::GraphOp. The analysis identifies all IslandOps within the graph which could be merged together. The goal is to merge as many islands as possible. Once analysis is completed, the pass merges all IslandOps in a single scan.
For example, given the following program with two disjoint islands:
After running this pass, the two islands are merged:
-tf-executor-split-into-island-per-op
Transform from TF control dialect to TF executor dialect.
Splits an island with multiple ops into multiple islands (one per op). Does not create any control dependencies between new islands, and does not propagate control dependencies that potentially existed between the old islands into the new islands. Maintains existing data dependencies between ops wrapped by the new islands.
Example: original program:
will be converted by this pass into:
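The original listings are absent; a hedged sketch of the splitting (illustrative ops and types):

```mlir
// Original: one island wraps two ops.
%0 = tf_executor.graph {
  %out, %ctl = tf_executor.island {
    %a = "tf.Const"() {value = dense<1> : tensor<i32>} : () -> tensor<i32>
    %b = "tf.Identity"(%a) : (tensor<i32>) -> tensor<i32>
    tf_executor.yield %b : tensor<i32>
  }
  tf_executor.fetch %out : tensor<i32>
}

// After: one island per op; data dependencies are kept, and no new control
// dependencies are introduced between the islands.
%0 = tf_executor.graph {
  %a_out, %a_ctl = tf_executor.island {
    %a = "tf.Const"() {value = dense<1> : tensor<i32>} : () -> tensor<i32>
    tf_executor.yield %a : tensor<i32>
  }
  %b_out, %b_ctl = tf_executor.island {
    %b = "tf.Identity"(%a_out) : (tensor<i32>) -> tensor<i32>
    tf_executor.yield %b : tensor<i32>
  }
  tf_executor.fetch %b_out : tensor<i32>
}
```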
-tf-executor-to-functional-conversion
Lifts tf_executor.island inner ops from a tf_executor.graph
This pass converts tf_executor.graphs consisting of only tf_executor.islands and a tf_executor.fetch into a sea of nodes consisting of TensorFlow Dialect ops by lifting such ops out of a tf_executor.graph's tf_executor.islands. If V1 control flow ops are present in a tf_executor.graph, an error will be returned.
For example, the following:
will be transformed into:
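The listings are missing; a minimal sketch of the lifting (single island, simplified types):

```mlir
// Before: the op lives inside a tf_executor.graph / tf_executor.island.
func.func @f(%arg0: tensor<i32>) -> tensor<i32> {
  %0 = tf_executor.graph {
    %out, %ctl = tf_executor.island {
      %a = "tf.Identity"(%arg0) : (tensor<i32>) -> tensor<i32>
      tf_executor.yield %a : tensor<i32>
    }
    tf_executor.fetch %out : tensor<i32>
  }
  return %0 : tensor<i32>
}

// After: the executor wrappers are gone and the TF op is used directly.
func.func @f(%arg0: tensor<i32>) -> tensor<i32> {
  %0 = "tf.Identity"(%arg0) : (tensor<i32>) -> tensor<i32>
  return %0 : tensor<i32>
}
```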
-tf-executor-tpu-v1-island-coarsening
Merges TPU clusters IslandOps, intended for V1 compatibility mode
This pass is a variant of ExecutorIslandCoarseningPass that is limited to TPU-annotated operations and intended to preserve backward compatibility with TFv1.
-tf-executor-tpu-v1-island-inlining
Inline calls to the nested TPU module.
This pass inlines the islands calling into the nested module that was outlined, thus reversing the effect of the `-tf-executor-tpu-v1-island-outlining` pass.
For example, the following:
will be transformed into:
-tf-executor-tpu-v1-island-outlining
Outline TPU clusters from island into a nested module, so it can be processed like a V2 module, intended for V1 compatibility mode
Extract the islands containing a TPU cluster computation into an outlined function in a nested module. This allows running the usual bridge on this nested module, which now exhibits a more friendly "V2-like" structure. This is only intended for V1 compatibility mode where the bridge runs without feed/fetches on session create/extend.
So given e.g.
This pass will create an additional function containing the code in tf_executor.island:
and will then replace the island with the wrapped call:
-tf-executor-update-control-dependencies
Computes and applies all necessary control dependencies based on side effect analysis.
This pass is intended to run after the split_into_island_per_op pass. That pass splits up multi-op islands into multiple individual islands wrapping a single op without applying any control deps between the new islands. So, this pass is needed in order to make preservation of the semantic ordering relationships between ops as determined by side effect analysis explicit in the IR.
Example: original program:
will be converted by this pass into:
-tf-extract-head-tail-outside-compilation
Extracts head or tail outside compilation to separate host launches before/after device cluster.
This pass extracts a CPU computation cluster with `_xla_outside_compilation` annotation from the head or tail of a Device cluster.
For example:
becomes:
-tf-extract-outside-compilation
Extracts device outside compilation computation to a separate tf_device.parallel_execute region.
This pass extracts a CPU computation cluster with `_xla_outside_compilation` annotation, which denotes ops that should be run on CPU/host, from a device cluster. Each outside compilation cluster is moved to a tf_device.parallel_execute region. The device cluster is also moved to a tf_device.parallel_execute region. Communication ops between device and host are added to pass inputs/outputs to/from the outside compiled region.
For example, the following tf_device.cluster with an op marked for `xla_outside_compilation`:
will become a tf_device.parallel_execute op with a CPU/host region and a tf_device.cluster with communication ops to send data to/from device/host:
-tf-extract-tpu-copy-with-dynamic-shape-op
Extract the TPUCopyWithDynamicShapeOp out of the host launch and place it on device launch
This pass looks for TPUCopyWithDynamicShapeOp ops wrapped in a `tf_device.launch` with a host device attribute. It extracts the ops and wraps them in a `tf_device.launch` with a TPU device attribute so that the ops can run on TPU instead of CPU while still being compiled on host.
-tf-functional-control-flow-to-cfg
Transform functional control flow Ops to MLIR Control Flow Graph (CFG) form
-tf-functional-control-flow-to-regions
Transforms functional control flow operations to their region-based counterparts
This pass transforms functional control flow operations in the TensorFlow dialect to their region-based counterparts, i.e., `tf.If` is transformed to `tf.IfRegion` and `tf.While` is transformed to `tf.WhileRegion`.
For example, this functional operation
will be transformed into this region-based operation
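The example IR is missing from this snapshot; a rough sketch of the `tf.If` to `tf.IfRegion` rewrite, assuming simplified types and placeholder branch functions `@then_fn`/`@else_fn`:

```mlir
// Functional form: the branches are referenced as function symbols.
%0 = "tf.If"(%cond, %x) {then_branch = @then_fn, else_branch = @else_fn,
                         is_stateless = true}
    : (tensor<i1>, tensor<f32>) -> tensor<f32>

// Region-based form: the branches become inline regions terminated by tf.Yield.
%0 = "tf.IfRegion"(%cond) ({
  %t = func.call @then_fn(%x) : (tensor<f32>) -> tensor<f32>
  "tf.Yield"(%t) : (tensor<f32>) -> ()
}, {
  %e = func.call @else_fn(%x) : (tensor<f32>) -> tensor<f32>
  "tf.Yield"(%e) : (tensor<f32>) -> ()
}) {is_stateless = true} : (tensor<i1>) -> tensor<f32>
```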
-tf-functional-to-executor-conversion
Transform from func op to TF executor dialect.
-tf-fused-kernel-matcher
Matches computations corresponding to optimized fused kernels
-tf-gpu-op-fusion
Fusion optimization for GPU targets
This pass performs fusion specific to GPU targets. This is an ad-hoc pass for now, but should be integrated with some notion of "target" in the MLIR pipeline in the future.
-tf-group-by-dialect
Groups ops into functions that only contain one dialect.
Factors operations into subroutines such that all functions only contain a single dialect. Which of the dialects are allowed in the "top" function is configurable.
For example, the code `x.a() x.b() %c = y.c() x.d(%c)` would be transformed into something like `call @x_1() %c = call @y_1() call @x_2(%c)` with `@x_1`, `@x_2` and `@y_1` filled in.
-tf-guarantee-all-funcs-one-use
Guarantee all FuncOp's have only a single use.
-tf-hoist-loop-invariant
Hoists loop invariant ops to the outside of the loop
Hoists loop-invariant ops to the outside of the loop. The pass is similar to the LoopInvariantCodeMotion pass, but it also hoists ReadVariableOps if the variable is read only.
For example, the following pseudo MLIR code (types are left out for brevity)
would be transformed to
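The pseudo-MLIR example is missing; a sketch in the same pseudo style (types omitted, `tf.OpC` and the variable name are illustrative):

```mlir
// Before: the read-only variable read and tf.OpB are recomputed every iteration.
%var = "tf.VarHandleOp"() {shared_name = "v"}
%res = "tf.WhileRegion"(%init) ({
  // condition region
  ...
}, {
  // body region
  %val  = "tf.ReadVariableOp"(%var)   // variable is only ever read
  %inv  = "tf.OpB"(%val)              // depends only on loop-invariant values
  %next = "tf.OpC"(%arg, %inv)
  "tf.Yield"(%next)
})

// After: the read and tf.OpB are hoisted above the loop.
%var = "tf.VarHandleOp"() {shared_name = "v"}
%val = "tf.ReadVariableOp"(%var)
%inv = "tf.OpB"(%val)
%res = "tf.WhileRegion"(%init) ({
  ...
}, {
  %next = "tf.OpC"(%arg, %inv)
  "tf.Yield"(%next)
})
```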
The `tf.ReadVariableOp` and `tf.OpB` can be hoisted to the outside of the loop.
-tf-hoist-replicate-invariant-resource-writes
Hoists writes to replicate invariant resource variables.
This pass hoists replicate invariant resource variable writes outside the tf_device.replicate op. These may have been inserted by other passes such as resource op lifting. However, if the resource variable is not replicated, writes to such variables for each replica are redundant and can be replaced by writing a single value from the first replica.
The benefit of this optimization is a reduced memory requirement on the host. For multiple writes (one from each replica) to such variables, the host would allocate buffer space to receive the device output from all replicas, which is not required. We can use the output of the first replica in such cases.
-tf-init-text-file-to-import
Convert InitializeTableFromTextFileV2 ops to LookupTableImportV2Op to remove the dependency on asset files
Options
-tf-layout-assignment
Layout assignment pass.
Options
-tf-localize-var-handles
Creates VarHandleOps next to the operations that use them.
Creates VarHandleOps right next to the operations that use them, one per operation. This is useful for transformations that only end up with a few small snippets of remaining TF code, and wish for those snippets to be self-contained. For example, this would transform
"tf_saved_model.global_tensor"() { sym_name = "v" ... } func @f(%arg0 {tf_saved_model.bound_input = @v}) { %1 = "tf.ReadVariableOp"(%arg0) ... }
to
`func @f(%arg0 {tf_saved_model.bound_input = @v}) { %0 = "tf.VarHandleOp"(sym_name = "v") %1 = "tf.ReadVariableOp"(%0) ... }`
Note that this pass might leave behind unused values (like e.g. %arg0 in the example above), which can later be pruned using DCE.
-tf-lower-quantized
Lowers ops that require quantized input or output.
This pass rewrites all ops that have at least one input or output that must be a quantized type to ops whose inputs and outputs allow non-quantized types. Examples of quantized types are TF_Qint8 or TF_Quint8.
An example is TF_DequantizeOp, which converts a quantized type to a float. This op is rewritten to generic ops that perform the scale and shift and can operate on non-quantized types.
Currently, TF_DequantizeOp is the only op with a lowering that falls in this category. When more lowerings are added (e.g. QuantizeV2Op), they should be added to this pass.
-tf-mark-ops-for-outside-compilation
Marks ops in device cluster for outside compilation if they are unsupported on device.
This pass marks unsupported ops in a device cluster with the `_xla_outside_compilation` attribute so the operations will run on the host instead of the device. Unsupported ops are ops that can not be code generated to run on the device for the cluster including:
String operations on TPUs.
Operations that don't have a kernel defined for the device.
This pass is conservative in that it will mark all ops for outside compilation that can not be compiled for the device. Exceptions for this are added for ops that will be rewritten or decomposed before compiling on device.
For example, tf_device.cluster op with an unsupported op, tf.UnsupportedOp:
will mark tf.UnsupportedOp with the `_xla_outside_compilation` attribute:
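The listings are not included; a minimal sketch of the marking (the attribute value "auto0" and the surrounding ops are illustrative):

```mlir
// Before: tf.UnsupportedOp cannot be code generated for the device.
%0 = "tf_device.cluster"() ({
  %1 = "tf.UnsupportedOp"() : () -> tensor<i32>
  %2 = "tf.Identity"(%1) : (tensor<i32>) -> tensor<i32>
  tf_device.return %2 : tensor<i32>
}) : () -> tensor<i32>

// After: the unsupported op is marked so it can later be extracted to the host.
%0 = "tf_device.cluster"() ({
  %1 = "tf.UnsupportedOp"() {_xla_outside_compilation = "auto0"} : () -> tensor<i32>
  %2 = "tf.Identity"(%1) : (tensor<i32>) -> tensor<i32>
  tf_device.return %2 : tensor<i32>
}) : () -> tensor<i32>
```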
-tf-materialize-passthrough-op
Materialize the MlirPassthroughOp by replacing it with the MLIR module attached as an attribute
A pass that replaces MlirPassthrough ops with the code they have in their `mlir_module` string attribute.
-tf-merge-control-flow
Merges IfRegion ops together with a common predicate.
This pass merges IfRegion ops together if they have the same predicate and it is safe to do so (there are no intermediate dependencies, they are in the same block, etc).
For example:
Would be transformed to:
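The listings are missing; a hedged sketch of merging two result-less `tf.IfRegion` ops that share a predicate (illustrative ops, no results for brevity):

```mlir
// Before: two IfRegion ops guarded by the same predicate.
"tf.IfRegion"(%pred) ({
  "tf.OpA"() : () -> ()
  "tf.Yield"() : () -> ()
}, {
  "tf.Yield"() : () -> ()
}) {is_stateless = true} : (tensor<i1>) -> ()
"tf.IfRegion"(%pred) ({
  "tf.OpB"() : () -> ()
  "tf.Yield"() : () -> ()
}, {
  "tf.Yield"() : () -> ()
}) {is_stateless = true} : (tensor<i1>) -> ()

// After: the then (and else) regions are merged under a single IfRegion.
"tf.IfRegion"(%pred) ({
  "tf.OpA"() : () -> ()
  "tf.OpB"() : () -> ()
  "tf.Yield"() : () -> ()
}, {
  "tf.Yield"() : () -> ()
}) {is_stateless = true} : (tensor<i1>) -> ()
```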
-tf-move-transposes
Move transposes pass.
Options
-tf-name-anonymous-iterators
Converts anonymous iterators to named iterators
This converts AnonymousIterator ops to Iterator, thus giving them a name. For example, this will convert `%0 = "tf.AnonymousIteratorV3"() {...}` to `%0 = "tf.Iterator"() {shared_name = "_iterator1", ...}`.
-tf-optimize
Optimize TensorFlow module
-tf-order-by-dialect
Reorders ops so ops of the same dialect are next to each other.
Performs a reordering of ops so that (a) ops of the same dialect are next to each other, (b) order within a dialect is preserved. For example, this would transform `%a = "x.f"() %b = "y.f"(%a) %c = "x.f"(%a)` to `%a = "x.f"() %c = "x.f"(%a) %b = "y.f"(%a)` so that the two "x" dialect instructions are next to each other.
-tf-outside-compiled-to-host-launch
Wraps each op with the `_xla_outside_compiled` attribute in a separate tf_device.launch on replicated host device.
This pass wraps ops with the same `_xla_outside_compilation` attribute value in a tf_device.launch op with host device assignment.
A simple example:
Would become the following ops (unimportant attribute, type are omitted):
-tf-parallel-execute-to-islands
Lowers device parallel_execute to executor islands
Options
-tf-promote-resources-to-args
Promote resources reads/writes to function inputs/outputs.
This pass promotes resource accesses in function(s) (by default, the main) to input arguments and outputs of the function(s).
Two types of resources are supported: (1) A function argument of TF::ResourceType type (this pass). (2) A VarHandleOp in the function (tf-promote-var-handles-to-args).
After the pass,
1. The function will have an input argument for each resource that is already provided as an input argument or is read. The type of the input argument will become the shape of the value represented by the resource.
2. The function will have an output for each resource that is written. The type of the output will become the shape of the resource.
The information of variable identification and input-output aliasing is recorded as named attributes of the input argument or output:
1. 'tf.resource_name' matches 'shared_name' of VarHandleOp, which represents the identifier of the corresponding resource. This attribute is added to an input argument if the initial value of the resource is read, or to the output if the initial value is not read.
2. 'tf.aliasing_output' is the index of the function output that is an alias of the input argument. This attribute is added only to the input argument when the initial value of the corresponding resource is read, and the resource is written later.
Assumptions of this pass: 1. Compound resource operations have already been decomposed. 2. Dead functions have already been removed, as resource arguments in dead functions can cause the pass to fail.
Options
-tf-promote-var-handles-to-args
Promote tf.VarHandleOps to function arguments.
See joint description in promote resources to args.
-tf-readonly-references-to-resources
Convert readonly reference variables to resource variables.
-tf-region-control-flow-to-functional
Transforms region-based control flow operations to their functional counterparts
This pass transforms region-based control flow operations in the TensorFlow dialect to their functional counterparts, i.e., `tf.IfRegion` is transformed to `tf.If` and `tf.WhileRegion` is transformed to `tf.While`.
For example, this region-based operation
will be transformed into this functional operation
-tf-remove-unused-arguments
Removes unused args from private functions & their callers.
Removes arguments from functions that aren't used in the function body, outside of returns. Also adjusts the callers of said functions.
For example, the code `func.func @f(%arg0, %arg1) { SomeOpThatUsesArg0(%arg0) return %arg0 } ... call @x_1(x, y)`
would be transformed into `func.func @f(%arg0) { return %arg0 } ... call @x_1(x)`
Note that, in the above example, both args would be removed if there wasn't the "SomeOpThatUsesArg0(%arg0)" line.
-tf-remove-unused-while-results
Removes unused results from tf.WhileRegion ops
Removes unused results from `tf.WhileRegion` ops along with the defining ops in the body, if it is safe to do so. Currently, the pass detects results with the following properties:
* the result is unused outside of the `tf.WhileRegion` op
* the defining op of the result in the body can be safely removed
* the operand corresponding to the result is not used by any other op in the condition or body (in particular, there must not be intermediate pass-through ops like `tf.Identity`)
For example, the following pseudo MLIR code (types are left out for brevity)
would be transformed to
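The pseudo-MLIR listings are missing; a sketch in the same pseudo style (types omitted, `tf.OpA` is an illustrative body op alongside the `tf.OpB` mentioned below):

```mlir
// Before: the first result (produced by tf.OpB) is never used after the loop.
%0, %1 = "tf.WhileRegion"(%a, %b) ({
  // condition region, uses only the second value
  ...
}, {
^bb0(%arg0, %arg1):
  %unused = "tf.OpB"(%arg0)
  %next   = "tf.OpA"(%arg1)
  "tf.Yield"(%unused, %next)
})
return %1

// After: the unused result, its operand, and the defining tf.OpB are removed.
%1 = "tf.WhileRegion"(%b) ({
  ...
}, {
^bb0(%arg1):
  %next = "tf.OpA"(%arg1)
  "tf.Yield"(%next)
})
return %1
```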
(the first result can be removed along with its defining op `tf.OpB`).
-tf-replica-id-to-device-ordinal
Set device ordinal with replica id
This pass sets the device ordinal attribute of the ops using the replica id attribute. It is run immediately after the replica_to_island pass, which sets the replica id attribute of these ops. Note that for the single-chip use case, the pass checks whether there is one op and sets the device ordinal attribute to zero.
-tf-replicate-invariant-op-hoisting
Hoists replicate invariant operations out of replicate
This pass looks for replicate invariant ops in a `tf_device.replicate` op region and hoists them out. It also makes `tf.Shape` ops replicate invariant if possible. This currently updates or replaces `tf.Shape` ops of replicated arguments, either tensors or resources.
The primary benefit of the pass is to hoist `num_replicas` `_TPUCompile`s into a single `_TPUCompile`.
This pass assumes that when a `tf.Shape` op takes its input directly from `replicate` params, then it is the same shape across replicas.
For example, the following
gets converted to
and for resource variables the following
gets converted to
-tf-replicate-tensor-list-init-ops
Replicate TensorList init ops for correct shape assignments in shape inference
If we pass the same TensorList to a while op as multiple arguments, or just use the same TensorList at multiple places and assign different TensorListSetItem ops to elements of the TensorList, then shape inference is unable to identify the shape of these args and thus the input TensorList shape is unidentifiable. All of these args are supposed to be independent and not related to the original creation of the TensorList.
This pass will create multiple instances of the TensorList for each arg of the while op and each use, so that there is no conflict in resolving the shape of these different inputs.
-tf-replicate-to-island
Lowers device replicate to executor islands
Options
-tf-resource-device-inference
Propagates the device attribute on resources from callers to callees.
A pass that propagates device assignment of resources on a module. It performs in-function propagation, as well as cross-function propagation from callers to callees.
This pass changes the module by adding "tf.device" attribute to function arguments and adding "device" attribute to TF ops.
For example, given the function
Observe how the op inside the island obtains a `/TPU:0` device assignment:
-tf-rewrite-tpu-embedding-ops
Rewrites TPU embedding send/recv ops by adding TPU embedding deduplication data
-tf-shape-inference
Shape inference on TF dialect and ops implementing InferTypeOpInterface
Fixed point shape refinement pass that utilizes the shape functions registered on ops using the InferTypeOpInterface as well as by bridging to the TensorFlow op registry's shape functions. This is an interprocedural pass that propagates information across function calls/control flow operations where possible (the GuaranteeAllFuncsOneUsePass is often run before this pass to enable more propagation opportunities). It refines both the outermost element type of tensors as well as the nested component type (e.g., for tensor lists).
During shape refinement this pass may insert additional cast operations as well as fold some constant shape computations to enable more exact shape inference. Therefore it does do some mutation of the graph. Constant folding required to produce more exact shapes is also performed but these values are only kept in the context rather than the ops folded/IR mutated.
Options
-tf-simple-device-assignment
Simple device assignment in TF dialect.
Assigns the default device to all ops that have an empty (or nonexistent) device attribute.
For example, if we have the code
then running this pass with 'default-device=foobar', we get:
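The example IR is missing here; a minimal sketch (the constant op is illustrative):

```mlir
// Before: the op has no device attribute (or an empty one).
%0 = "tf.Const"() {value = dense<-1> : tensor<i32>} : () -> tensor<i32>

// After running with default-device=foobar: the default device is assigned.
%0 = "tf.Const"() {device = "foobar", value = dense<-1> : tensor<i32>} : () -> tensor<i32>
```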
Options
-tf-stack-ops-decomposition
Decompose stack operations into local variable operations. Needs static shapes.
A pass that converts stack operations to tensor operations and read/assign ops on local variables. A later resource lifting pass can further remove the local variables.
This pass requires that the full shape of the stack can be inferred: 1) the maximum size needs to be a constant and 2) a push op can be found with a known shape, and all push ops need to have the same shape.
A stack creation op "tf.StackV2" will be turned in to two zero-initialized variables, for the buffer and current size. Each push will be turned into
and each pop will be turned into
The pass also works across control flow and functional calls.
-tf-strip-noinline-attribute
Strip the `tf._noinline` attribute from top-level functions.
-tf-strip-tf-attributes
Removes TF specific attributes
Removes attributes that are TF specific (start with "tf.") or that have a value from the TF dialect. Useful after legalizing TF graphs to other dialects, to remove any TF remnants.
-tf-tensor-array-ops-decomposition
Decompose tensor array operations into local variable operations.
A pass that converts tensor array operations to tensor operations and read/assign ops on local variables. A later resource lifting pass can further remove the local variables.
This pass requires that the full shape of the tensor array can be inferred: 1) the size needs to be a constant, 2) it specifies the full element shape, or that can be inferred from a later write, and 3) all elements have the same shape.
-tf-tensor-device-copy
Fold the tf.Identity op and the tf.IdentityN op if the op has the same device as its operand
-tf-tensor-list-ops-decomposition
Decomposes TensorList operations into generic operations on tensors.
This pass rewrites TensorList operations into generic and non-mutating operations on tensors. This results in operations that can be legalized to XLA.
The list is converted to a single large tensor that includes all list elements, with a new first dimension for the list index. List update operations are converted to operations that create a new tensor representing the list.
In the current implementation, the resulting operations are statically shaped, which means it must be possible to infer a bound on the full shape of the TensorList. That is, the `element_shape` and `num_elements` arguments to a tensor list creation op are constant.
A tensor list creation op `tf.EmptyTensorList`/`tf.TensorListReserve` will be turned into a zero-initialized buffer, and the size is initialized to 0 for `tf.EmptyTensorList` or the specified size for `tf.TensorListReserve`. Each push will be turned into `tf.XlaDynamicUpdateSlice` with the incremented size, and each pop will be turned into a `tf.Slice` and a copy of the buffer with decremented size. Each `tf.TensorListSetItem` will be turned into a `tf.XlaDynamicUpdateSlice` with unchanged size, and each `tf.TensorListGetItem` will be rewritten to a `tf.Slice`.
The pass also works across control flow and functional calls.
For example, the TensorList ops in the following function:
will be transformed to:
-tf-tpu-annotate-dynamic-shape-inputs
Annotate the inputs returned by TPUCopyWithDynamicShapeOp with dynamic shape
This pass looks for the usage of the result of TPUCopyWithDynamicShapeOp and sets the shape of these inputs to be dynamic shaped. This will ensure that the generated HLO program is correctly reflecting the dynamic shape.
-tf-tpu-cleanup-cluster-attributes
Eliminate `_replication_info` and other attributes from ops in a cluster
This pass eliminates the `_replication_info` and `device` attributes on operations that are contained in a tf_device.cluster op.
-tf-tpu-cluster-formation
Forms clusters from operations assigned to the same TPU computation
TPU computations from the frontend are composed of a `tf.TPUReplicateMetadata` op, a subgraph of ops (TensorFlow Dialect) each with a matching `_replication_info` attribute relative to the associated `tf.TPUReplicateMetadata` op, and optionally `tf.TPUReplicatedInput` and `tf.TPUReplicatedOutput` ops feeding in inputs and outputs to and from a replicated TPU computation. The number of times a TPU computation is replicated is defined in the `tf.TPUReplicateMetadata` op (`num_replicas` attribute) and operand and result sizes of `tf.TPUReplicatedInput` and `tf.TPUReplicatedOutput` respectively must match, excluding packed tensors. It is also assumed ops of the same TPU computation do not have ops outside of the TPU computation that are both inputs and outputs to the same TPU computation. Furthermore, we assume that every node has either none or both of `_replication_info` and `_xla_compile_device_type` attributes defined.
This pass takes the TPU computation subgraph, moves them into a `tf_device.cluster`, and copies over attributes from the associated `tf.TPUReplicateMetadata` op to the newly created `tf_device.cluster`. If the computation is replicated (`num_replicas` > 1), the `num_replicas` attribute is not copied over but instead the `tf_device.cluster` is further wrapped with a `tf_device.replicate`, and associated `tf.TPUReplicatedInput` and `tf.TPUReplicatedOutput` ops are replaced as the `tf_device.replicate` operands and results. Otherwise, the single operands and results of the associated `tf.TPUReplicatedInput` and `tf.TPUReplicatedOutput` ops are simply forwarded to the `tf_device.cluster`.
For example, the following non replicated computation:
will be transformed into:
The following replicated computation:
will be transformed into:
-tf-tpu-colocate-composite-resource-ops
Colocate resource with composite device assignment to TPU device.
Pass that co-locates resource ops that use composite device resources (packed tensors) with the underlying physical TPU device.
So for example, if we have a function that does (inside a `tf_device.replicate`):
Then said `ReadVariableOp` is going to get replaced by:
-tf-tpu-colocate-splits
Colocates each Split op with its predecessor
It is beneficial for performance to assign a `Split` op to the same device as its predecessor. This is because the weight of cut edges is always minimized when the `Split` is with its predecessor. This colocation constraint will be used by the placer graph optimization to assign a device to the op.
This pass should run in the export pipeline after tf-replicate-to-island so each replica has its own distinct (predecessor, Split) pair.
The colocation class (`_class`) of the `Split` is set to the same class as its predecessor:
-tf-tpu-device-propagation
Propagates TPU devices from ops to users
-tf-tpu-dynamic-layout-pass
Inserts TPU layout ops to determine layout at run time.
A pass that allows TPU input layout to be determined after JIT compilation. This is done by adding run-time ops that interpret compilation result and copy the input to device with that layout.
Example: original program:
Without this pass, later TF graph partitioning passes will insert send/recv between %input and %execute and data will be copied to device in a fixed layout. With this pass, the program will be transformed into:
This way, %compile will determine the layout, which will be respected by %copy_to_device. There will not be send/recv ops added by later passes, because tf.TPUCopyWithLayout accepts a host input and produces a device output.
-tf-tpu-host-computation-expansion
Expands host computation before and after TPU computation.
This pass expands outside compilation attributes to Identity/Cast ops at the head of TPU computation if they are only used by outside compiled ops.
-tf-tpu-identity-pruning
Removes Identity/IdentityN ops from the TPU computation
-tf-tpu-merge-variables-with-execute
Merges device variable reads and updates into TPU execute ops
This pass finds on-device resource variable reads and updates surrounding a `tf.TPUExecute` op and merges them into a `tf.TPUExecuteAndUpdateVariables` op. This allows the TPU execution to perform more efficient in-place variable updates.
For example,
will be transformed into
The transformation happens only for on-device variables. The above transformation requires `%arg0`, `%arg1` to have the same device assignment as the `TPUExecute` op.
-tf-tpu-parallel-execute-sink-resource-write
Moves tf.AssignVariableOp consumers of tf_device.parallel_execute into tf_device.parallel_execute regions
-tf-tpu-partitioned-op-conversion
Rewrite all TPU Partitioned ops into their V2 counterparts.
-tf-tpu-reorder-replicate-partitioned-inputs
Reorder replicated and partitioned input ops.
This pass rewrites how data parallelism and model parallelism are expressed for inputs. It reorders `tf.TPUPartitionedInput` (model parallelism) and `tf.TPUReplicatedInput` (data parallelism) ops. It transforms a DAG where multiple `tf.TPUPartitionedInput` ops are feeding into a single `tf.TPUReplicatedInput` into a DAG where multiple `tf.TPUReplicatedInput` ops are feeding into a single `tf.TPUPartitionedInput`. Transforming the IR in such a manner will allow the subsequent cluster formation pass to handle IR with both data and model parallelism in an easier manner.
For example, the following:
will be transformed into:
-tf-tpu-resource-partition
Partitions unpartitioned resource read/write to partitioned resource variables.
This pass creates individual resource reads/writes from the unpartitioned resource variable (from `tf.TPUPartitionedInput`) to individual partitioned resource variables (`tf.TPUPartitionedInput` operands). As resource op decomposition/lifting occurs with the unpartitioned resource variables, transforming the IR in such a manner will allow for subsequent passes to operate on individual resource variable handles per core/device.
For example, the following:
will be transformed into:
-tf-tpu-resource-read-for-write
Inserts tf.ReadVariableOp inputs to a TPU cluster for resource writes with no reads
This pass materializes `tf.ReadVariableOp` inputs to an outlined TPU computation for resource variables where only writes are present, so later in the pipeline such resource variables can be fused with generated `tf.TPUExecute` ops, which only support resource variable read or read + write. For all TPU computations, resource variables are required to be initialized prior to execution. Write-only resource variable uses can currently be generated via packed tensor uses.
For example, the following:
will be transformed into:
-tf-tpu-rewrite
Rewrites a `tf_device.cluster_func` on TPUs into TPU runtime operations.
This pass rewrites a `tf_device.cluster_func` operation into a sequence of `tf._TPUCompileMlir` and `tf.TPUExecute` operations. `tf._TPUCompileMlir` contains an MLIR module that is functionally equivalent to the function referenced by `tf_device.cluster_func`. This allows the module to be JIT-compiled and executed on TPU. If it is not possible to rewrite the operation or device assignment fails, a failure will be returned.
Note, many parameters to the `tf_device.cluster_func` are omitted in this and the following examples. For example, a non-replicated `tf_device.cluster_func`:
will be rewritten as:
A replicated `tf_device.cluster_func`:
will be rewritten as:
A non-replicated `tf_device.cluster_func` with model parallelism:
will be rewritten as:
Options
-tf-tpu-sharding-identification
Identifies and handles inputs/outputs of TPU computation that is sharded across logical cores.
Bubbles up sharding configuration from `cluster_func` regions into the attributes of `cluster_func`. This is done by parsing the `XlaSharding` / `TPUPartitionedOutput` / `TPUPartitionedInput` ops inside `cluster_func`.
For example, given the following `cluster_func` wrapping `func`:
Now, `cluster_func` receives the following `*_sharding_configuration` attributes, and `func` receives the `mhlo.sharding` attribute:
-tf-tpu-space-to-depth-pass
Applies automatic space to depth transform for the first or frontier convolutions that consume host inputs on TPU.
Automatic space to depth transform is done by adding a space to depth transform op after the host input and applying the space to depth transform to the first convolution and its backprop filter on TPU.
For example, original program:
The program will be transformed into:
This way, the first convolution with 3 feature dimensions will be transformed to 12 feature dimensions, which has better performance on TPU.
-tf-tpu-update-embedding-enqueue-op-inputs
Updates inputs to TPU embedding enqueue ops depending on whether graph is in training mode or in evaluation mode.
Updates inputs to TPU embedding enqueue ops depending on whether graph is in training mode or in evaluation mode.
-tf-tpu-validate-inputs
Validates inputs to the TPU TF/XLA bridge
This pass checks that the IR has valid input to TPU TF/XLA bridge. It checks the relations of multiple ops. Properties of single ops are checked by the 'verify' method of ops.
-tf-tpu-variable-runtime-reformatting
Adds device variable formatting op to allow compilation-guided variable formatting.
A pass that takes advantage of a loop to add ops that allow the execution to avoid repeatedly formatting variables back and forth. The desired formatting is determined by TPU program compilation, so this pass does not include how to reformat the variables, but only inserts general TPUReshardVariablesOps in proper places, and TPUReshardVariablesOps interpret the compilation.
The core idea of this optimization is to keep track of the formatting state of variables, and when the next desired state does not change, it can avoid reformatting. We associate a set of variables on a device with a formatting state, and TPUReshardVariablesOps compares the current state with a desired state (which can be the compilation result). If they mismatch, TPUReshardVariablesOp reformats the variables to the desired state; if they match, TPUReshardVariablesOp is a no-op.
A major use of this pass is weight-update sharding in data parallelism, so we require there is a tf_device.replicate in the loop.
For example, suppose we have a training loop (for simplicity we write the loop body inline):
This pass will transform it into
-tf-unroll-batch-matmul
Unroll TF BatchMatMul op into Reshape, Slice, MatMul, Pack ops.
-tf-verify-for-export
Verify module is suitable for export back to TF Graph
Verifies that all functions in the module consist of a single tf_executor.graph and that each tf_executor.island in the tf_executor.graph has only a single op.
-tf-xla-call-module-deserialization
Deserializes StableHLO functions embedded in `tf.XlaCallModule` to top level module
This pass deserializes the StableHLO bytecodes embedded in tf.XlaCallModule, then outlines the functions in the deserialized StableHLO module to the top level MLIR module, with function renamings to avoid naming conflicts.
After the outlining, it updates tf.XlaCallModule's `module` attribute to be empty and adds an `_entry_function` attribute referring to the entry function. It also adds a `_from_xla_call_module: true` attribute to each lifted StableHLO function.
-tf-xla-call-module-serialization
Serializes StableHLO functions from top-level module into `tf.XlaCallModule`'s `module` attribute
This pass collects StableHLO functions referenced from `tf.XlaCallModule`'s `_entry_function` attribute into a module, serializes the module into MLIR bytecode, and embeds the bytecode in `tf.XlaCallModule`'s `module` attribute.
After serialization, this pass removes the `_entry_function` attribute from `tf.XlaCallModule`, and removes all the serialized StableHLO functions from the top-level module.
-tfe-legalize-tfg
Legalize from TFG to the TFE dialect