# Working with preprocessing layers

**Authors:** Francois Chollet, Mark Omernick<br>
**Date created:** 2020/07/25<br>
**Last modified:** 2021/04/23<br>
**Description:** Overview of how to leverage preprocessing layers to create end-to-end models.
## Keras preprocessing
The Keras preprocessing layers API allows developers to build Keras-native input processing pipelines. These input processing pipelines can be used as independent preprocessing code in non-Keras workflows, combined directly with Keras models, and exported as part of a Keras SavedModel.
With Keras preprocessing layers, you can build and export models that are truly end-to-end: models that accept raw images or raw structured data as input; models that handle feature normalization or feature value indexing on their own.
## Available preprocessing
### Text preprocessing

- `tf.keras.layers.TextVectorization`: turns raw strings into an encoded representation that can be read by an `Embedding` layer or `Dense` layer.
### Numerical features preprocessing

- `tf.keras.layers.Normalization`: performs feature-wise normalization of input features.
- `tf.keras.layers.Discretization`: turns continuous numerical features into integer categorical features (see the short sketch after this list).
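For instance, here is a minimal sketch of `Discretization` with explicitly provided bucket boundaries (the feature values and boundaries below are illustrative assumptions):

```python
import numpy as np
from tensorflow.keras import layers

# Bucketize a toy "age" feature using explicit boundaries (illustrative values)
ages = np.array([[12.0], [35.0], [67.0], [8.0]])
bucketizer = layers.Discretization(bin_boundaries=[18.0, 40.0, 65.0])
print(bucketizer(ages))  # bucket indices, e.g. 12.0 -> 0, 35.0 -> 1, 67.0 -> 3
```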
### Categorical features preprocessing
- `tf.keras.layers.CategoryEncoding`: turns integer categorical features into one-hot, multi-hot, or count dense representations (a short sketch follows this list).
- `tf.keras.layers.Hashing`: performs categorical feature hashing, also known as the "hashing trick".
- `tf.keras.layers.StringLookup`: turns string categorical values into an encoded representation that can be read by an `Embedding` layer or `Dense` layer.
- `tf.keras.layers.IntegerLookup`: turns integer categorical values into an encoded representation that can be read by an `Embedding` layer or `Dense` layer.
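As a quick illustration, here is a minimal sketch of `CategoryEncoding` in `"count"` mode (the toy integer inputs are assumptions for the example):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Count how many times each of 4 known integer categories appears in each sample
layer = layers.CategoryEncoding(num_tokens=4, output_mode="count")
print(layer(tf.constant([[0, 1], [0, 0], [1, 2], [3, 3]])))
# Each row holds one count per category, e.g. [0, 0] -> [2., 0., 0., 0.]
```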
### Image preprocessing

These layers are for standardizing the inputs of an image model; a short sketch follows the list below.

- `tf.keras.layers.Resizing`: resizes a batch of images to a target size.
- `tf.keras.layers.Rescaling`: rescales and offsets the values of a batch of images (e.g. going from inputs in the `[0, 255]` range to inputs in the `[0, 1]` range).
- `tf.keras.layers.CenterCrop`: returns a center crop of a batch of images.
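Here is a minimal sketch of these three layers applied to a random toy batch (the image sizes and pixel values are illustrative assumptions):

```python
import numpy as np
from tensorflow.keras import layers

# A toy batch of 2 RGB "images" of size 300x300, with values in [0, 255]
images = np.random.randint(0, 256, size=(2, 300, 300, 3)).astype("float32")

print(layers.Resizing(height=224, width=224)(images).shape)    # (2, 224, 224, 3)
print(layers.CenterCrop(height=150, width=150)(images).shape)  # (2, 150, 150, 3)
print(layers.Rescaling(scale=1.0 / 255)(images).numpy().max()) # values now in [0, 1]
```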
### Image data augmentation

These layers apply random augmentation transforms to a batch of images. They are only active during training (see the short sketch after this list).

- `tf.keras.layers.RandomCrop`
- `tf.keras.layers.RandomFlip`
- `tf.keras.layers.RandomTranslation`
- `tf.keras.layers.RandomRotation`
- `tf.keras.layers.RandomZoom`
- `tf.keras.layers.RandomContrast`
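To illustrate the training-only behavior, here is a minimal sketch with `RandomFlip` on a random toy batch (the data is an assumption for the example):

```python
import numpy as np
from tensorflow.keras import layers

flip = layers.RandomFlip("horizontal")
images = np.random.rand(4, 32, 32, 3).astype("float32")

# Inference call (default): augmentation layers are inactive, images pass through unchanged
print(np.allclose(flip(images), images))  # True
# Training call: random horizontal flips are applied
print(flip(images, training=True).shape)  # same shape, possibly flipped content
```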
## The `adapt()` method
Some preprocessing layers have an internal state that can be computed based on a sample of the training data. The list of stateful preprocessing layers is:

- `TextVectorization`: holds a mapping between string tokens and integer indices.
- `StringLookup` and `IntegerLookup`: hold a mapping between input values and integer indices.
- `Normalization`: holds the mean and standard deviation of the features.
- `Discretization`: holds information about value bucket boundaries.
Crucially, these layers are non-trainable. Their state is not set during training; it must be set before training, either by initializing them from a precomputed constant, or by "adapting" them on data.
You set the state of a preprocessing layer by exposing it to training data, via the `adapt()` method:
```python
import numpy as np
import tensorflow as tf
import keras
from keras import layers

data = np.array(
    [
        [0.1, 0.2, 0.3],
        [0.8, 0.9, 1.0],
        [1.5, 1.6, 1.7],
    ]
)
layer = layers.Normalization()
layer.adapt(data)
normalized_data = layer(data)

print("Features mean: %.2f" % (normalized_data.numpy().mean()))
print("Features std: %.2f" % (normalized_data.numpy().std()))
```

```
Features mean: -0.00
Features std: 1.00
```
The `adapt()` method takes either a Numpy array or a `tf.data.Dataset` object. In the
case of `StringLookup` and `TextVectorization`, you can also pass a list of strings:
```python
data = [
"ξεῖν᾽, ἦ τοι μὲν ὄνειροι ἀμήχανοι ἀκριτόμυθοι",
"γίγνοντ᾽, οὐδέ τι πάντα τελείεται ἀνθρώποισι.",
"δοιαὶ γάρ τε πύλαι ἀμενηνῶν εἰσὶν ὀνείρων:",
"αἱ μὲν γὰρ κεράεσσι τετεύχαται, αἱ δ᾽ ἐλέφαντι:",
"τῶν οἳ μέν κ᾽ ἔλθωσι διὰ πριστοῦ ἐλέφαντος,",
"οἵ ῥ᾽ ἐλεφαίρονται, ἔπε᾽ ἀκράαντα φέροντες:",
"οἱ δὲ διὰ ξεστῶν κεράων ἔλθωσι θύραζε,",
"οἵ ῥ᾽ ἔτυμα κραίνουσι, βροτῶν ὅτε κέν τις ἴδηται.",
]
layer = layers.TextVectorization()
layer.adapt(data)
vectorized_text = layer(data)
print(vectorized_text)
```
```
tf.Tensor(
[[37 12 25  5  9 20 21  0  0]
 [51 34 27 33 29 18  0  0  0]
 [49 52 30 31 19 46 10  0  0]
 [ 7  5 50 43 28  7 47 17  0]
 [24 35 39 40  3  6 32 16  0]
 [ 4  2 15 14 22 23  0  0  0]
 [36 48  6 38 42  3 45  0  0]
 [ 4  2 13 41 53  8 44 26 11]], shape=(8, 9), dtype=int64)
```
In addition, adaptable layers always expose an option to directly set state via
constructor arguments or weight assignment. If the intended state values are known at
layer construction time, or are calculated outside of the `adapt()` call, they can be set
without relying on the layer's internal computation. For instance, if external vocabulary
files for the `TextVectorization`, `StringLookup`, or `IntegerLookup` layers already
exist, those can be loaded directly into the lookup tables by passing a path to the
vocabulary file in the layer's constructor arguments.
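For example, here is a minimal sketch of setting the state of a `Normalization` layer directly from precomputed statistics instead of calling `adapt()` (the mean and variance values below are assumptions for the example):

```python
import numpy as np
from tensorflow.keras import layers

# Per-feature statistics computed ahead of time (e.g. offline over the full dataset)
feature_means = [0.8, 0.9, 1.0]
feature_variances = [0.33, 0.33, 0.33]

normalizer = layers.Normalization(mean=feature_means, variance=feature_variances)
print(normalizer(np.array([[0.1, 0.2, 0.3], [1.5, 1.6, 1.7]])))
```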
Here's an example where you instantiate a `StringLookup` layer with precomputed vocabulary:
```python
vocab = ["a", "b", "c", "d"]
data = tf.constant([["a", "c", "d"], ["d", "z", "b"]])
layer = layers.StringLookup(vocabulary=vocab)
vectorized_data = layer(data)
print(vectorized_data)
```
```
tf.Tensor(
[[1 3 4]
 [4 0 2]], shape=(2, 3), dtype=int64)
```
---
## Preprocessing data before the model or inside the model
There are two ways you could be using preprocessing layers:
**Option 1:** Make them part of the model, like this:
```python
inputs = keras.Input(shape=input_shape)
x = preprocessing_layer(inputs)
outputs = rest_of_the_model(x)
model = keras.Model(inputs, outputs)
```
With this option, preprocessing will happen on device, synchronously with the rest of the model execution, meaning that it will benefit from GPU acceleration. If you're training on a GPU, this is the best option for the `Normalization` layer, and for all image preprocessing and data augmentation layers.
**Option 2:** Apply it to your `tf.data.Dataset`, so as to obtain a dataset that yields batches of preprocessed data, like this:
```python
dataset = dataset.map(lambda x, y: (preprocessing_layer(x), y))
```
With this option, your preprocessing will happen on a CPU, asynchronously, and will be buffered before going into the model. In addition, if you call `dataset.prefetch(tf.data.AUTOTUNE)` on your dataset, the preprocessing will happen efficiently in parallel with training:
```python
dataset = dataset.map(lambda x, y: (preprocessing_layer(x), y))
dataset = dataset.prefetch(tf.data.AUTOTUNE)
model.fit(dataset, ...)
```
This is the best option for `TextVectorization`, and all structured data preprocessing layers. It can also be a good option if you're training on a CPU and you use image preprocessing layers.

Note that the `TextVectorization` layer can only be executed on a CPU, as it is mostly a dictionary lookup operation. Therefore, if you are training your model on a GPU or a TPU, you should put the `TextVectorization` layer in the `tf.data` pipeline to get the best performance.

When running on a TPU, you should always place preprocessing layers in the `tf.data` pipeline (with the exception of `Normalization` and `Rescaling`, which run fine on a TPU and are commonly used as the first layer in an image model).
---

## Benefits of doing preprocessing inside the model at inference time
Even if you go with option 2, you may later want to export an inference-only end-to-end model that will include the preprocessing layers. The key benefit to doing this is that it makes your model portable and it helps reduce the training/serving skew.
When all data preprocessing is part of the model, other people can load and use your model without having to be aware of how each feature is expected to be encoded & normalized. Your inference model will be able to process raw images or raw structured data, and will not require users of the model to be aware of the details of e.g. the tokenization scheme used for text, the indexing scheme used for categorical features, whether image pixel values are normalized to `[-1, +1]` or to `[0, 1]`, etc. This is especially powerful if you're exporting your model to another runtime, such as TensorFlow.js: you won't have to reimplement your preprocessing pipeline in JavaScript.
If you initially put your preprocessing layers in your `tf.data` pipeline, you can export an inference model that packages the preprocessing. Simply instantiate a new model that chains your preprocessing layers and your training model:
```python
inputs = keras.Input(shape=input_shape)
x = preprocessing_layer(inputs)
outputs = training_model(x)
inference_model = keras.Model(inputs, outputs)
```
---

## Preprocessing during multi-worker training
Preprocessing layers are compatible with the `tf.distribute` API for running training across multiple machines. In general, preprocessing layers should be placed inside a `tf.distribute.Strategy.scope()` and called either inside or before the model as discussed above.
```python
with strategy.scope():
    inputs = keras.Input(shape=input_shape)
    preprocessing_layer = tf.keras.layers.Hashing(10)
    dense_layer = tf.keras.layers.Dense(16)
```
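For illustration, here is a minimal single-machine sketch of this pattern, using `tf.distribute.MirroredStrategy` as a stand-in for a multi-worker strategy; the toy data and layer sizes are assumptions for the example:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

strategy = tf.distribute.MirroredStrategy()  # stand-in for a multi-worker strategy

# Toy integer feature and binary labels
features = tf.constant([[123], [456], [789], [1011]], dtype=tf.int64)
labels = tf.constant([[0], [1], [0], [1]])
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

with strategy.scope():
    inputs = keras.Input(shape=(1,), dtype="int64")
    x = layers.Hashing(num_bins=10)(inputs)  # hash each value into one of 10 bins
    x = layers.CategoryEncoding(num_tokens=10, output_mode="one_hot")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")

model.fit(dataset, epochs=1)
```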
For more details, refer to the Data preprocessing section of the Distributed input tutorial.
---

## Quick recipes

### Image data augmentation
Note that image data augmentation layers are only active during training (similarly to the `Dropout` layer).
```python
from tensorflow import keras
from tensorflow.keras import layers

# Create a data augmentation stage with horizontal flipping, rotations, zooms
data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ]
)

# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
input_shape = x_train.shape[1:]
classes = 10

# Create a tf.data pipeline of augmented images (and their labels)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.batch(16).map(lambda x, y: (data_augmentation(x), y))

# Create a model and train it on the augmented image data
inputs = keras.Input(shape=input_shape)
x = layers.Rescaling(1.0 / 255)(inputs)
outputs = keras.applications.ResNet50(
    weights=None, input_shape=input_shape, classes=classes
)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
model.fit(train_dataset, steps_per_epoch=5)
```
```
5/5 [==============================] - 9s 124ms/step - loss: 9.9572

<keras.src.callbacks.History at 0x7f749c4f5010>
```
You can see a similar setup in action in the example
[image classification from scratch](https://keras.io/examples/vision/image_classification_from_scratch/).
### Normalizing numerical features
```python
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
x_train = x_train.reshape((len(x_train), -1))
input_shape = x_train.shape[1:]
classes = 10
# Create a Normalization layer and set its internal state using the training data
normalizer = layers.Normalization()
normalizer.adapt(x_train)
# Create a model that includes the normalization layer
inputs = keras.Input(shape=input_shape)
x = normalizer(inputs)
outputs = layers.Dense(classes, activation="softmax")(x)
model = keras.Model(inputs, outputs)
# Train the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train)
```
```
1563/1563 [==============================] - 3s 2ms/step - loss: 2.1200

<keras.src.callbacks.History at 0x7f749c3bd790>
```
### Encoding string categorical features via one-hot encoding

```python
# Define some toy data
data = tf.constant([["a"], ["b"], ["c"], ["b"], ["c"], ["a"]])

# Use StringLookup to build an index of the feature values and encode output.
lookup = layers.StringLookup(output_mode="one_hot")
lookup.adapt(data)

# Convert new test data (which includes unknown feature values)
test_data = tf.constant([["a"], ["b"], ["c"], ["d"], ["e"], [""]])
encoded_data = lookup(test_data)
print(encoded_data)
```
```
tf.Tensor(
[[0. 0. 0. 1.]
 [0. 0. 1. 0.]
 [0. 1. 0. 0.]
 [1. 0. 0. 0.]
 [1. 0. 0. 0.]
 [1. 0. 0. 0.]], shape=(6, 4), dtype=float32)
```
Note that, here, index 0 is reserved for out-of-vocabulary values
(values that were not seen during `adapt()`).
You can see the `StringLookup` in action in the
[Structured data classification from scratch](https://keras.io/examples/structured_data/structured_data_classification_from_scratch/)
example.
### Encoding integer categorical features via one-hot encoding
```python
# Define some toy data
data = tf.constant([[10], [20], [20], [10], [30], [0]])
# Use IntegerLookup to build an index of the feature values and encode output.
lookup = layers.IntegerLookup(output_mode="one_hot")
lookup.adapt(data)
# Convert new test data (which includes unknown feature values)
test_data = tf.constant([[10], [10], [20], [50], [60], [0]])
encoded_data = lookup(test_data)
print(encoded_data)
```
```
tf.Tensor(
[[0. 0. 1. 0. 0.]
 [0. 0. 1. 0. 0.]
 [0. 1. 0. 0. 0.]
 [1. 0. 0. 0. 0.]
 [1. 0. 0. 0. 0.]
 [0. 0. 0. 0. 1.]], shape=(6, 5), dtype=float32)
```
Note that, here, index 0 is reserved for out-of-vocabulary values (values that were not
seen during `adapt()`). You can control how out-of-vocabulary and missing values are
handled via the `num_oov_indices`, `oov_token`, and `mask_token` constructor arguments
of `IntegerLookup`.
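For instance, here is a minimal sketch that reserves index 0 for a mask value, assuming the value 0 marks missing entries in your data:

```python
import tensorflow as tf
from tensorflow.keras import layers

lookup = layers.IntegerLookup(mask_token=0)
lookup.adapt(tf.constant([[10], [20], [20], [10], [30], [0]]))
print(lookup(tf.constant([[10], [0], [50]])))
# 0 maps to index 0 (the mask), unseen values such as 50 map to index 1 (OOV)
```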
You can see the `IntegerLookup` in action in the example
[structured data classification from scratch](https://keras.io/examples/structured_data/structured_data_classification_from_scratch/).
### Applying the hashing trick to an integer categorical feature

If you have a categorical feature that can take many different values (on the order of
10e3 or higher), where each value only appears a few times in the data,
it becomes impractical and ineffective to index and one-hot encode the feature values.
Instead, it can be a good idea to apply the "hashing trick": hash the values to a vector
of fixed size. This keeps the size of the feature space manageable, and removes the need
for explicit indexing.
```python
# Sample data: 10,000 random integers with values between 0 and 100,000
data = np.random.randint(0, 100000, size=(10000, 1))

# Use the Hashing layer to hash the values to the range [0, 64)
hasher = layers.Hashing(num_bins=64, salt=1337)

# Use the CategoryEncoding layer to multi-hot encode the hashed values
encoder = layers.CategoryEncoding(num_tokens=64, output_mode="multi_hot")
encoded_data = encoder(hasher(data))
print(encoded_data.shape)
```
### Encoding text as a sequence of token indices

This is how you should preprocess text to be passed to an `Embedding` layer.
```python
adapt_data = tf.constant(
    [
        "The Brain is wider than the Sky",
        "For put them side by side",
        "The one the other will contain",
        "With ease and You beside",
    ]
)
text_vectorizer = layers.TextVectorization(output_mode="int")
text_vectorizer.adapt(adapt_data)
print(
"Encoded text:\n",
text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(input_dim=text_vectorizer.vocabulary_size(), output_dim=16)(inputs)
x = layers.GRU(8)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
train_dataset = tf.data.Dataset.from_tensor_slices(
    (["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
```
Encoded text:
 [[ 2 19 14  1  9  2  1]]
```

```
Training model...
1/1 [==============================] - 2s 2s/step - loss: 0.5227
```

```
Calling end-to-end model on test string...
Model output: tf.Tensor([[-0.00107805]], shape=(1, 1), dtype=float32)
```
You can see the `TextVectorization` layer in action, combined with an `Embedding` layer,
in the example
[text classification from scratch](https://keras.io/examples/nlp/text_classification_from_scratch/).
Note that when training such a model, for best performance, you should always
use the `TextVectorization` layer as part of the input pipeline.
### Encoding text as a dense matrix of ngrams with multi-hot encoding

This is how you should preprocess text to be passed to a `Dense` layer.
```python
adapt_data = tf.constant(
    [
        "The Brain is wider than the Sky",
        "For put them side by side",
        "The one the other will contain",
        "With ease and You beside",
    ]
)
text_vectorizer = layers.TextVectorization(output_mode="multi_hot", ngrams=2)
text_vectorizer.adapt(adapt_data)
print(
"Encoded text:\n",
text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
train_dataset = tf.data.Dataset.from_tensor_slices(
    (["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
```
Encoded text:
 [[1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0. 0. 0. 0. 0.
  0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0.]]
```

```
Training model...
1/1 [==============================] - 0s 204ms/step - loss: 1.1430
```

```
Calling end-to-end model on test string...
Model output: tf.Tensor([[0.64093614]], shape=(1, 1), dtype=float32)
```
### Encoding text as a dense matrix of ngrams with TF-IDF weighting

This is an alternative way of preprocessing text before passing it to a `Dense` layer.
```python
adapt_data = tf.constant(
    [
        "The Brain is wider than the Sky",
        "For put them side by side",
        "The one the other will contain",
        "With ease and You beside",
    ]
)
text_vectorizer = layers.TextVectorization(output_mode="tf-idf", ngrams=2)
text_vectorizer.adapt(adapt_data)
print(
"Encoded text:\n",
text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
)
inputs = keras.Input(shape=(text_vectorizer.vocabulary_size(),))
outputs = layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
train_dataset = tf.data.Dataset.from_tensor_slices(
    (["The Brain is deeper than the sea", "for if they are held Blue to Blue"], [1, 0])
)
train_dataset = train_dataset.batch(2).map(lambda x, y: (text_vectorizer(x), y))
print("\nTraining model...")
model.compile(optimizer="rmsprop", loss="mse")
model.fit(train_dataset)
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = model(x)
end_to_end_model = keras.Model(inputs, outputs)
print("\nCalling end-to-end model on test string...")
test_data = tf.constant(["The one the other will absorb"])
test_output = end_to_end_model(test_data)
print("Model output:", test_output)
```
```
Encoded text:
 [[5.461647  1.6945957 0.        0.        0.        0.        0.
  0.        0.        0.        0.        0.        0.        0.
  0.        0.        1.0986123 1.0986123 1.0986123 0.        0.
  0.        0.        0.        0.        0.        0.        0.
  1.0986123 0.        0.        0.        0.        0.        0.
  0.        1.0986123 1.0986123 0.        0.        0.       ]]
```

```
Training model...
1/1 [==============================] - 1s 567ms/step - loss: 16.3522
```

```
Calling end-to-end model on test string...
Model output: tf.Tensor([[-0.20062147]], shape=(1, 1), dtype=float32)
```
---
## Working with lookup layers with very large vocabularies

You may find yourself working with a very large vocabulary in a `TextVectorization`, a `StringLookup` layer,
or an `IntegerLookup` layer. Typically, a vocabulary larger than 500MB would be considered "very large".
In such a case, for best performance, you should avoid using `adapt()`.
Instead, pre-compute your vocabulary in advance
(you could use Apache Beam or TF Transform for this)
and store it in a file. Then load the vocabulary into the layer at construction
time by passing the file path as the `vocabulary` argument.
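As a minimal sketch of this pattern, the snippet below writes a tiny vocabulary file and loads it at construction time (the file path and the toy vocabulary are illustrative assumptions; a real vocabulary would be precomputed offline):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Write a tiny vocabulary file, one token per line (stand-in for a precomputed vocabulary)
vocab_path = "vocabulary.txt"
with open(vocab_path, "w") as f:
    f.write("\n".join(["the", "brain", "is", "wider", "than", "sky"]))

# Load the vocabulary at construction time instead of calling adapt()
text_vectorizer = layers.TextVectorization(vocabulary=vocab_path, output_mode="int")
print(text_vectorizer(["The Brain is wider than the Sky"]))
```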