Path: blob/master/site/en-snapshot/lite/examples/style_transfer/overview.ipynb
Copyright 2019 The TensorFlow Authors.
Artistic Style Transfer with TensorFlow Lite
One of the most exciting developments in deep learning to come out recently is artistic style transfer, or the ability to create a new image, known as a pastiche, based on two input images: one representing the artistic style and one representing the content.
Using this technique, we can generate beautiful new artworks in a range of styles.
If you are new to TensorFlow Lite and are working with Android, we recommend exploring the following example applications that can help you get started.
If you are using a platform other than Android or iOS, or you are already familiar with the TensorFlow Lite APIs, you can follow this tutorial to learn how to apply style transfer to any pair of content and style images with a pre-trained TensorFlow Lite model. You can use the model to add style transfer to your own mobile applications.
The model is open-sourced on GitHub. You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image).
Understand the model architecture
This Artistic Style Transfer model consists of two submodels:
Style Prediction Model: A MobileNetV2-based neural network that maps an input style image to a 100-dimension style bottleneck vector.
Style Transform Model: A neural network that applies a style bottleneck vector to a content image and creates a stylized image.
If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary.
Setup
Import dependencies.
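A typical import cell for this tutorial might look like the following (matplotlib and NumPy are used later for visualization and blending):

```python
import numpy as np
import tensorflow as tf

import matplotlib as mpl
import matplotlib.pyplot as plt

# Larger default figures make the stylized output easier to inspect.
mpl.rcParams['figure.figsize'] = (12, 12)
mpl.rcParams['axes.grid'] = False

print("TensorFlow version:", tf.__version__)
```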
Download the content and style images, and the pre-trained TensorFlow Lite models.
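A sketch of the download step using `tf.keras.utils.get_file`. The file names and URLs below follow the published version of this notebook; treat them as assumptions and substitute your own images or model locations as needed.

```python
import tensorflow as tf

# Content and style images (URLs as used in the published notebook; adjust as needed).
content_path = tf.keras.utils.get_file(
    'belfry.jpg',
    'https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file(
    'style23.jpg',
    'https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')

# Pre-trained int8 TensorFlow Lite models from TensorFlow Hub.
style_predict_path = tf.keras.utils.get_file(
    'style_predict.tflite',
    'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/prediction/1?lite-format=tflite')
style_transform_path = tf.keras.utils.get_file(
    'style_transform.tflite',
    'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/transfer/1?lite-format=tflite')
```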
Pre-process the inputs
The content image and the style image must be RGB images with float32 pixel values in the range [0..1].
The style image size must be (1, 256, 256, 3). We center-crop the image and resize it.
The content image size must be (1, 384, 384, 3). We center-crop the image and resize it.
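The two steps above (load as float32 in [0..1], then center-crop and resize) can be sketched as below; the helper names `load_img` and `preprocess_image` are illustrative:

```python
import tensorflow as tf

def load_img(path_to_img):
  """Load an image file into a float32 tensor with a batch dimension."""
  img = tf.io.read_file(path_to_img)
  img = tf.io.decode_image(img, channels=3)
  img = tf.image.convert_image_dtype(img, tf.float32)  # scales pixels to [0..1]
  return img[tf.newaxis, :]

def preprocess_image(image, target_dim):
  """Resize so the shorter side equals target_dim, then center-crop to a square."""
  shape = tf.cast(tf.shape(image)[1:-1], tf.float32)
  short_dim = min(shape)
  scale = target_dim / short_dim
  new_shape = tf.cast(shape * scale, tf.int32)
  image = tf.image.resize(image, new_shape)
  image = tf.image.resize_with_crop_or_pad(image, target_dim, target_dim)
  return image

# Style input is 256x256, content input is 384x384:
# preprocessed_style_image = preprocess_image(load_img(style_path), 256)
# preprocessed_content_image = preprocess_image(load_img(content_path), 384)
```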
Visualize the inputs
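A minimal way to display the two preprocessed inputs side by side with matplotlib; random tensors stand in for the preprocessed images here so the snippet is self-contained:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; drop this line in a notebook
import matplotlib.pyplot as plt
import tensorflow as tf

def imshow(image, title=None):
  # Drop the batch dimension before plotting.
  if len(image.shape) > 3:
    image = tf.squeeze(image, axis=0)
  plt.imshow(image)
  plt.axis('off')
  if title:
    plt.title(title)

# Stand-ins for preprocessed_content_image / preprocessed_style_image.
preprocessed_content_image = tf.random.uniform((1, 384, 384, 3))
preprocessed_style_image = tf.random.uniform((1, 256, 256, 3))

plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
```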
Run style transfer with TensorFlow Lite
Style prediction
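A sketch of running the style prediction model with the `tf.lite.Interpreter` Python API. The function name and the `model_path` parameter are assumptions; the model takes the (1, 256, 256, 3) style image and returns the 100-dimension style bottleneck vector:

```python
import numpy as np
import tensorflow as tf

def run_style_predict(preprocessed_style_image, model_path):
  """Run the style prediction model and return the style bottleneck vector."""
  interpreter = tf.lite.Interpreter(model_path=model_path)
  interpreter.allocate_tensors()
  input_details = interpreter.get_input_details()
  # Feed the (1, 256, 256, 3) float32 style image.
  interpreter.set_tensor(input_details[0]['index'],
                         np.asarray(preprocessed_style_image, dtype=np.float32))
  interpreter.invoke()
  # The output is the 100-dimension style bottleneck vector.
  style_bottleneck = interpreter.tensor(
      interpreter.get_output_details()[0]['index'])()
  return style_bottleneck

# style_bottleneck = run_style_predict(preprocessed_style_image, style_predict_path)
```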
Style transform
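The transform model takes two inputs (the content image and the style bottleneck vector) whose order in the TFLite model is not guaranteed, so the sketch below matches them by shape. The function name and `model_path` parameter are assumptions:

```python
import numpy as np
import tensorflow as tf

def run_style_transform(style_bottleneck, preprocessed_content_image, model_path):
  """Run the style transform model and return the stylized image."""
  interpreter = tf.lite.Interpreter(model_path=model_path)
  interpreter.allocate_tensors()
  # Match inputs by shape: the 4-D, 3-channel input is the content image,
  # the other input is the style bottleneck vector.
  for detail in interpreter.get_input_details():
    if len(detail['shape']) == 4 and detail['shape'][-1] == 3:
      interpreter.set_tensor(detail['index'],
                             np.asarray(preprocessed_content_image, np.float32))
    else:
      interpreter.set_tensor(detail['index'],
                             np.asarray(style_bottleneck, np.float32))
  interpreter.invoke()
  stylized_image = interpreter.tensor(
      interpreter.get_output_details()[0]['index'])()
  return stylized_image

# stylized_image = run_style_transform(
#     style_bottleneck, preprocessed_content_image, style_transform_path)
```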
Style blending
We can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
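Blending is a weighted average of two style bottleneck vectors: the one computed from the content image itself and the one computed from the style image. The function name below is illustrative:

```python
import numpy as np

def blend_style(style_bottleneck, style_bottleneck_content, content_blending_ratio):
  """Blend the content image's own style into the target style bottleneck.

  content_blending_ratio is in [0..1]: 0 keeps the full target style,
  1 reproduces the content image's own style.
  """
  return (content_blending_ratio * style_bottleneck_content
          + (1 - content_blending_ratio) * style_bottleneck)
```

The blended bottleneck is then passed to the style transform step in place of the original style bottleneck.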
Performance Benchmarks
Performance benchmark numbers are generated with the tool described here.
| Model name | Model size | Device | NNAPI | CPU | GPU |
|---|---|---|---|---|---|
| Style prediction model (int8) | 2.8 Mb | Pixel 3 (Android 10) | 142ms | 14ms* | |
| | | Pixel 4 (Android 10) | 5.2ms | 6.7ms* | |
| | | iPhone XS (iOS 12.4.1) | | | 10.7ms** |
| Style transform model (int8) | 0.2 Mb | Pixel 3 (Android 10) | | 540ms* | |
| | | Pixel 4 (Android 10) | | 405ms* | |
| | | iPhone XS (iOS 12.4.1) | | | 251ms** |
* 4 threads used.
** 2 threads on iPhone for the best performance.