Serverless Inference with Hugging Face's Transformers & Amazon SageMaker
Welcome to this getting started guide. We will use the Hugging Face Inference DLCs and the Amazon SageMaker Python SDK to create a Serverless Inference endpoint. Amazon SageMaker Serverless Inference is a new capability in SageMaker that enables you to deploy and scale ML models in a serverless fashion. Serverless endpoints automatically launch compute resources and scale them in and out depending on traffic, similar to AWS Lambda. Serverless Inference is ideal for workloads that have idle periods between traffic spurts and can tolerate cold starts. With a pay-per-use model, Serverless Inference is a cost-effective option if you have an infrequent or unpredictable traffic pattern.
How it works
The following diagram shows the workflow of Serverless Inference and the benefits of using a serverless endpoint.
When you create a serverless endpoint, SageMaker provisions and manages the compute resources for you. Then, you can make inference requests to the endpoint and receive model predictions in response. SageMaker scales the compute resources up and down as needed to handle your request traffic, and you only pay for what you use.
Limitations
Memory size: 1024 MB, 2048 MB, 3072 MB, 4096 MB, 5120 MB, or 6144 MB
Concurrent invocations: 50 per region
Cold starts: ms to seconds. Can be monitored with the ModelSetupTime CloudWatch metric.
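These limits map directly onto the ServerlessInferenceConfig object in the SageMaker Python SDK, which we will pass to .deploy() later. A minimal sketch, where the memory size and concurrency values are example choices, not requirements:

```python
from sagemaker.serverless import ServerlessInferenceConfig

# Serverless endpoint configuration: memory_size_in_mb must be one of
# the supported sizes listed above; max_concurrency caps concurrent
# invocations for this endpoint (both values here are examples).
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=4096,
    max_concurrency=10,
)
```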
NOTE: You can run this demo in SageMaker Studio, on your local machine, or in SageMaker Notebook Instances.
Development Environment and Permissions
Installation
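A minimal sketch of the install step, assuming you only need the SageMaker Python SDK (Serverless Inference requires a recent release; no exact minimum version is pinned here):

```python
# Install or upgrade the SageMaker Python SDK.
%pip install sagemaker --upgrade
```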
Permissions
If you are going to use SageMaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for SageMaker. You can find more about it here.
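A minimal sketch of the session and role setup; the fallback role name sagemaker_execution_role is an assumption and should match an IAM role in your account:

```python
import boto3
import sagemaker

sess = sagemaker.Session()

try:
    # Works inside SageMaker Studio / Notebook Instances.
    role = sagemaker.get_execution_role()
except ValueError:
    # Local environment: resolve the role by name (assumed name).
    iam = boto3.client("iam")
    role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]

print(f"sagemaker role arn: {role}")
```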
Create Inference HuggingFaceModel for the Serverless Inference Endpoint
We use the distilbert-base-uncased-finetuned-sst-2-english model to run our serverless endpoint. This model is a fine-tuned checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2. It reaches an accuracy of 91.3 on the dev set (for comparison, bert-base-uncased reaches an accuracy of 92.7).
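A minimal sketch of creating the model and deploying it with the serverless configuration from above; the container versions (transformers_version, pytorch_version, py_version) are assumptions and should be set to a combination supported by the Hugging Face Inference DLCs:

```python
from sagemaker.huggingface.model import HuggingFaceModel

# Hub configuration: model ID and task for the Hugging Face Inference Toolkit.
hub = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
    "HF_TASK": "text-classification",
}

# Create the Hugging Face model class.
huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,                    # IAM role from the permissions step
    transformers_version="4.17",  # assumed DLC version
    pytorch_version="1.10",       # assumed DLC version
    py_version="py38",            # assumed DLC version
)

# Deploy to a serverless endpoint; SageMaker provisions and scales
# the compute resources automatically.
predictor = huggingface_model.deploy(
    serverless_inference_config=serverless_config,
)
```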
Request Serverless Inference Endpoint using the HuggingFacePredictor
The .deploy() method returns a HuggingFacePredictor object which can be used to request inference. The HuggingFacePredictor makes it easy to send requests to your endpoint and get the results back.
The first request might incur a cold start (2-5s).
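A minimal request sketch; the input sentence is an arbitrary example, and the cleanup calls at the end delete the model and endpoint when you are done:

```python
# Send a text-classification request to the serverless endpoint.
data = {
    "inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
}

res = predictor.predict(data=data)
print(res)  # a list with one {'label': ..., 'score': ...} dict per input

# Clean up: delete the model and the serverless endpoint.
predictor.delete_model()
predictor.delete_endpoint()
```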