Copyright 2020 The TensorFlow Authors.
Creating Keras Models with TFL Layers
## Overview
You can use TFL Keras layers to construct Keras models with monotonicity and other shape constraints. This example builds and trains a calibrated lattice model for the UCI heart dataset using TFL layers.
In a calibrated lattice model, each feature is transformed by a `tfl.layers.PWLCalibration` or a `tfl.layers.CategoricalCalibration` layer and the results are nonlinearly fused using a `tfl.layers.Lattice` layer.
Setup
Installing TF Lattice package:
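A minimal sketch of the installation step, assuming a pip-based notebook environment:

```python
# Install the TensorFlow Lattice package (run in a notebook cell).
!pip install --quiet tensorflow-lattice
```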
Importing required packages:
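Typical imports for this guide; NumPy and pandas handle the data, and `tensorflow_lattice` provides the TFL Keras layers:

```python
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_lattice as tfl
```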
Downloading the UCI Statlog (Heart) dataset:
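A sketch of the download step. The URL below is an assumption (a CSV copy of the UCI Statlog Heart data hosted for TensorFlow tutorials); substitute whichever mirror you use:

```python
# Download the CSV (assumed URL) and load it into a shuffled DataFrame.
csv_file = tf.keras.utils.get_file(
    'heart.csv', 'http://storage.googleapis.com/applied-dl/heart.csv')
training_data_df = pd.read_csv(csv_file).sample(
    frac=1.0, random_state=41).reset_index(drop=True)
```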
Setting the default values used for training in this guide:
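The specific values below are illustrative defaults, not values dictated by the dataset:

```python
# Training hyperparameters used throughout this guide (illustrative values).
LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 100
```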
Sequential Keras Model
This example creates a Sequential Keras model and only uses TFL layers.
Lattice layers expect `input[i]` to be within `[0, lattice_sizes[i] - 1.0]`, so we need to define the lattice sizes ahead of the calibration layers in order to properly specify the output range of the calibration layers.
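As a sketch, assume a two-feature model over `'age'` (numeric) and `'sex'` (categorical); the feature choice and vertex counts are assumptions for illustration:

```python
# One lattice vertex count per feature ('age', 'sex'); the calibrators below
# must map each feature into [0, lattice_sizes[i] - 1].
lattice_sizes = [3, 2]
```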
We use a `tfl.layers.ParallelCombination` layer to group together the calibration layers, which have to run in parallel, so that they can be used inside a Sequential model.
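A minimal sketch of the combination layer:

```python
# Splits its input column-wise, runs each appended calibrator on its own
# slice, and concatenates the outputs into a single tensor.
combined_calibrators = tfl.layers.ParallelCombination()
```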
We create a calibration layer for each feature and add it to the parallel combination layer. For numeric features we use `tfl.layers.PWLCalibration`, and for categorical features we use `tfl.layers.CategoricalCalibration`.
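A sketch for the two assumed features; the keypoint grid for `'age'` and the two buckets for `'sex'` are illustrative choices:

```python
# 'age': piecewise-linear calibration into [0, lattice_sizes[0] - 1],
# constrained to be increasing in the raw feature value.
calibrator = tfl.layers.PWLCalibration(
    input_keypoints=np.linspace(
        training_data_df['age'].min(), training_data_df['age'].max(), num=5),
    output_min=0.0,
    output_max=lattice_sizes[0] - 1.0,
    monotonicity='increasing')
combined_calibrators.append(calibrator)

# 'sex': one learned output value per category, mapped into
# [0, lattice_sizes[1] - 1].
calibrator = tfl.layers.CategoricalCalibration(
    num_buckets=2,
    output_min=0.0,
    output_max=lattice_sizes[1] - 1.0)
combined_calibrators.append(calibrator)
```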
We then create a lattice layer to nonlinearly fuse the outputs of the calibrators.
Note that we need to specify the lattice monotonicity to be increasing for the required dimensions. Composed with the direction of the monotonicity in the calibration, this results in the correct end-to-end direction of monotonicity; this also applies to the partial monotonicity of the CategoricalCalibration layer.
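A sketch of the lattice layer for the two-feature example; making only the `'age'` dimension monotonic is an illustrative choice:

```python
# Fuse the calibrated features. The 'increasing' entry, combined with the
# increasing 'age' calibrator, yields end-to-end monotonicity in raw 'age'.
lattice = tfl.layers.Lattice(
    lattice_sizes=lattice_sizes,
    monotonicities=['increasing', 'none'],
    output_min=0.0,
    output_max=1.0)
```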
We can then create a sequential model using the combined calibrators and lattice layers.
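Putting the pieces together:

```python
# Calibration runs first, then the lattice fuses the calibrated features.
model = tf.keras.models.Sequential()
model.add(combined_calibrators)
model.add(lattice)
```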
Training works the same as for any other Keras model.
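A minimal training sketch; the `'age'`, `'sex'`, and `'target'` column names are assumptions about the downloaded CSV:

```python
# All features go into one matrix whose columns match the calibrator order.
features = training_data_df[['age', 'sex']].values.astype(np.float32)
target = training_data_df[['target']].values.astype(np.float32)

model.compile(
    loss=tf.keras.losses.BinaryCrossentropy(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE))
model.fit(
    features, target,
    batch_size=BATCH_SIZE, epochs=NUM_EPOCHS, verbose=0)
```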
Functional Keras Model
This example uses the Keras functional API for model construction.
As mentioned in the previous section, lattice layers expect `input[i]` to be within `[0, lattice_sizes[i] - 1.0]`, so we need to define the lattice sizes ahead of the calibration layers in order to properly specify the output range of the calibration layers.
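Continuing the same assumed two-feature sketch, we also collect the functional-API input tensors and calibrator outputs as we go:

```python
# Same two-feature example ('age', 'sex') as in the Sequential section.
lattice_sizes = [3, 2]
model_inputs = []
lattice_inputs = []
```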
For each feature, we need to create an input layer followed by a calibration layer. For numeric features we use `tfl.layers.PWLCalibration`, and for categorical features we use `tfl.layers.CategoricalCalibration`.
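A sketch for the two assumed features; layer names and keypoints are illustrative:

```python
# 'age': a scalar input followed by a monotonic piecewise-linear calibrator.
age_input = tf.keras.layers.Input(shape=[1], name='age')
model_inputs.append(age_input)
age_calibrator = tfl.layers.PWLCalibration(
    input_keypoints=np.linspace(
        training_data_df['age'].min(), training_data_df['age'].max(), num=5),
    output_min=0.0,
    output_max=lattice_sizes[0] - 1.0,
    monotonicity='increasing',
    name='age_calib')(age_input)
lattice_inputs.append(age_calibrator)

# 'sex': a scalar input followed by a categorical calibrator with 2 buckets.
sex_input = tf.keras.layers.Input(shape=[1], name='sex')
model_inputs.append(sex_input)
sex_calibrator = tfl.layers.CategoricalCalibration(
    num_buckets=2,
    output_min=0.0,
    output_max=lattice_sizes[1] - 1.0,
    name='sex_calib')(sex_input)
lattice_inputs.append(sex_calibrator)
```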
We then create a lattice layer to nonlinearly fuse the outputs of the calibrators.
Note that we need to specify the lattice monotonicity to be increasing for the required dimensions. Composed with the direction of the monotonicity in the calibration, this results in the correct end-to-end direction of monotonicity; this also applies to the partial monotonicity of the `tfl.layers.CategoricalCalibration` layer.
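A sketch of the lattice layer under the same assumptions as above:

```python
# Fuse the calibrated features; only the 'age' dimension is made monotonic.
lattice_output = tfl.layers.Lattice(
    lattice_sizes=lattice_sizes,
    monotonicities=['increasing', 'none'],
    output_min=0.0,
    output_max=1.0,
    name='lattice')(lattice_inputs)
```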
To add more flexibility to the model, we add an output calibration layer.
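A sketch of the output calibration, a 1-D piecewise-linear calibrator over the lattice output range:

```python
# Keypoints span [0, 1] because the lattice output is constrained to [0, 1].
model_output = tfl.layers.PWLCalibration(
    input_keypoints=np.linspace(0.0, 1.0, num=5),
    output_min=0.0,
    output_max=1.0,
    name='output_calib')(lattice_output)
```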
We can now create a model using the inputs and outputs.
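Assembling the functional model:

```python
model = tf.keras.models.Model(
    inputs=model_inputs,
    outputs=model_output)
model.summary()
```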
Training works the same as for any other Keras model. Note that, with our setup, the input features are passed as separate tensors.
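A minimal training sketch; as before, the `'age'`, `'sex'`, and `'target'` column names are assumptions about the downloaded CSV:

```python
# One array per model input, in the same order as `model_inputs`.
feature_names = ['age', 'sex']
features = np.split(
    training_data_df[feature_names].values.astype(np.float32),
    indices_or_sections=len(feature_names),
    axis=1)
target = training_data_df[['target']].values.astype(np.float32)

model.compile(
    loss=tf.keras.losses.BinaryCrossentropy(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE))
model.fit(
    features, target,
    batch_size=BATCH_SIZE, epochs=NUM_EPOCHS, verbose=0)
```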