Copyright 2020 The TensorFlow Authors.
Train and serve a TensorFlow model with TensorFlow Serving
Warning: This notebook is designed to be run in a Google Colab only. It installs packages on the system and requires root access. If you want to run it in a local Jupyter notebook, please proceed with caution.
Note: You can run this example right now in a Jupyter-style notebook with no setup required! Just click "Run in Google Colab".
This guide trains a neural network model to classify images of clothing, like sneakers and shirts, saves the trained model, and then serves it with TensorFlow Serving. The focus is on TensorFlow Serving, rather than the modeling and training in TensorFlow, so for a complete example that focuses on modeling and training, see the Basic Classification example.
This guide uses tf.keras, a high-level API to build and train models in TensorFlow.
Create your model
Import the Fashion MNIST dataset
This guide uses the Fashion MNIST dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
Figure 1. Fashion-MNIST samples (by Zalando, MIT License).
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset, which is often used as the "Hello, World" of machine learning programs for computer vision. You can access Fashion MNIST directly from TensorFlow; just import and load the data.
Note: Although these are really images, they are loaded as NumPy arrays and not binary image objects.
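A minimal sketch of loading and preparing the data with tf.keras.datasets; the preprocessing choices here (scaling pixels to [0, 1] and adding a channel dimension for the CNN) are one reasonable option rather than a requirement:

```python
import tensorflow as tf
import numpy as np

# Load Fashion MNIST as NumPy arrays.
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Scale pixel values to [0, 1] and add a channel dimension for the CNN.
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1) / 255.0
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1) / 255.0

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

print('train_images.shape: {}, test_images.shape: {}'.format(
    train_images.shape, test_images.shape))
```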
Train and evaluate your model
Let's use the simplest possible CNN, since we're not focused on the modeling part.
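One possible minimal architecture is sketched below; the layer sizes and the 5 training epochs are arbitrary choices, not something the model or TensorFlow Serving requires:

```python
# A small CNN: one conv layer, then a dense softmax classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(input_shape=(28, 28, 1), filters=8, kernel_size=3,
                           strides=2, activation='relu', name='Conv1'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax', name='Dense')
])
model.summary()

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)

test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy: {}'.format(test_acc))
```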
Save your model
To load our trained model into TensorFlow Serving, we first need to save it in the SavedModel format. This will create a protobuf file in a well-defined directory hierarchy, and will include a version number. TensorFlow Serving allows us to select which version of a model, or "servable," we want to use when we make inference requests. Each version will be exported to a different sub-directory under the given path.
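A sketch of the export step, assuming a TF 2.x environment where saving to a directory path writes the SavedModel format (newer Keras versions may use model.export instead, so check your version):

```python
import os
import tempfile

# Export under a numbered sub-directory: <MODEL_DIR>/1
MODEL_DIR = tempfile.mkdtemp()
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
print('export_path = {}'.format(export_path))

# Saving to a directory path produces the SavedModel format in TF 2.x.
tf.keras.models.save_model(model, export_path)
```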
Examine your saved model
We'll use the command line utility saved_model_cli to look at the MetaGraphDefs (the models) and SignatureDefs (the methods you can call) in our SavedModel. See this discussion of the SavedModel CLI in the TensorFlow Guide.
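In a notebook this is typically a single shell cell; a sketch, assuming export_path holds the directory from the save step (the braces interpolate the Python variable in Colab):

```
!saved_model_cli show --dir {export_path} --all
```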
That tells us a lot about our model! In this case we just trained our model, so we already know the inputs and outputs, but if we didn't this would be important information. It doesn't tell us everything, like the fact that this is grayscale image data for example, but it's a great start.
Serve your model with TensorFlow Serving
Warning: If you are NOT running this in a Google Colab, the following cells will install packages on the system with root access. If you want to run it in a local Jupyter notebook, please proceed with caution.
Add TensorFlow Serving distribution URI as a package source:
We're preparing to install TensorFlow Serving using Aptitude since this Colab runs in a Debian environment. We'll add the tensorflow-model-server package to the list of packages that Aptitude knows about. Note that we're running as root.
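A sketch of that step as Colab shell cells, assuming root access; the repository URI and key URL below are the ones published in the TensorFlow Serving install instructions, so verify them against the current documentation:

```
# Add the TensorFlow Serving apt repository and its signing key.
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list
!curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
!apt update
```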
Note: This example is running TensorFlow Serving natively, but you can also run it in a Docker container, which is one of the easiest ways to get started using TensorFlow Serving.
Install TensorFlow Serving
This is all you need: a single command!
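As a Colab cell, that command might look like this, using the package name from the repository added above:

```
!apt-get install tensorflow-model-server
```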
Start running TensorFlow Serving
This is where we start running TensorFlow Serving and load our model. After it loads, we can start making inference requests using REST. There are some important parameters, and a launch sketch follows the list:
rest_api_port: The port that you'll use for REST requests.
model_name: You'll use this in the URL of REST requests. It can be anything.
model_base_path: This is the path to the directory where you've saved your model.
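A minimal launch sketch, assuming MODEL_DIR is the export directory created above and using fashion_model as an arbitrary model name (any name works, as noted above). First, make the export directory visible to the shell:

```python
import os
os.environ["MODEL_DIR"] = MODEL_DIR
```

Then, in a separate cell, start the server in the background, logging to server.log:

```
%%bash --bg
nohup tensorflow_model_server \
  --rest_api_port=8501 \
  --model_name=fashion_model \
  --model_base_path="${MODEL_DIR}" >server.log 2>&1
```

You can check that it started cleanly with `!tail server.log`.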
Make a request to your model in TensorFlow Serving
First, let's take a look at a random example from our test data.
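For example, with a small (hypothetical) show helper built on matplotlib:

```python
import random
import matplotlib.pyplot as plt

def show(idx, title):
  # Display one 28x28 grayscale test image with a title.
  plt.figure()
  plt.imshow(test_images[idx].reshape(28, 28), cmap='gray')
  plt.axis('off')
  plt.title(title, fontdict={'size': 16})

rando = random.randint(0, len(test_images) - 1)
show(rando, 'An example image: {}'.format(class_names[test_labels[rando]]))
```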
Ok, that looks interesting. How hard is that for you to recognize? Now let's create the JSON object for a batch of three inference requests, and see how well our model recognizes things:
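A sketch of the request body, using the serving_default signature and the first three test images:

```python
import json

# The REST predict API expects a JSON object with an "instances" list.
data = json.dumps({"signature_name": "serving_default",
                   "instances": test_images[0:3].tolist()})
print('Data: {} ... {}'.format(data[:50], data[len(data)-52:]))
```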
Make REST requests
Newest version of the servable
We'll send a predict request as a POST to our server's REST endpoint, and pass it three examples. We'll ask our server to give us the latest version of our servable by not specifying a particular version.
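A sketch using the requests library, assuming the server was started with model_name=fashion_model on port 8501 as in the launch sketch above:

```python
import requests

headers = {"content-type": "application/json"}
# No version in the URL, so TensorFlow Serving picks the latest servable.
json_response = requests.post(
    'http://localhost:8501/v1/models/fashion_model:predict',
    data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']

show(0, 'The model thought this was a {}, and it was actually a {}'.format(
    class_names[np.argmax(predictions[0])], class_names[test_labels[0]]))
```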
A particular version of the servable
Now let's specify a particular version of our servable. Since we only have one, let's select version 1. We'll also look at all three results.
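Under the same assumptions, the only change is adding the version to the URL path:

```python
# Pin the request to version 1 of the servable.
json_response = requests.post(
    'http://localhost:8501/v1/models/fashion_model/versions/1:predict',
    data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']

for i in range(3):
  print('Prediction: {}   Actual: {}'.format(
      class_names[np.argmax(predictions[i])],
      class_names[test_labels[i]]))
```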