Real-time collaboration for Jupyter Notebooks, Linux Terminals, LaTeX, VS Code, R IDE, and more,
all in one place. Commercial Alternative to JupyterHub.
Path: blob/main/sagemaker/21_image_segmantation/sagemaker-notebook.ipynb
Semantic Segmentation with Hugging Face's Transformers & Amazon SageMaker
Transformer models are changing the world of machine learning, starting with natural language processing, and now with audio and computer vision. Hugging Face's mission is to democratize good machine learning and give anyone the opportunity to use these new state-of-the-art machine learning models. Together with Amazon SageMaker and AWS, we have been working on extending the functionality of the Hugging Face Inference DLC and the Python SageMaker SDK to make it easier to use speech and vision models together with transformers. You can now use the Hugging Face Inference DLC to do automatic speech recognition using Meta AI's wav2vec2 model or Microsoft's WavLM, or use NVIDIA's SegFormer for image segmentation.
This guide will walk you through how to do image segmentation using SegFormer and the new DataSerializer.
In this example you will learn how to:
Set up a development environment and permissions for deploying Amazon SageMaker Inference Endpoints.
Deploy a SegFormer model to Amazon SageMaker for image segmentation.
Send requests to the endpoint to do image segmentation.
Let's get started! 🚀
If you are going to use SageMaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for SageMaker. You can find more about it here.
1. Set up a development environment and permissions for deploying Amazon SageMaker Inference Endpoints.
Setting up the development environment and permissions needs to be done for both the automatic-speech-recognition example and the semantic-segmentation example. First, we update the sagemaker SDK to make sure we have the new DataSerializer.
After we have updated the SDK, we can set the permissions.
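The setup described above can be sketched as follows. Real AWS credentials are required to actually run it; the SDK version pin and the fallback role name are illustrative assumptions, not values from this guide.

```python
# Upgrade the SDK first so the new DataSerializer is available, e.g.:
#   pip install --upgrade "sagemaker>=2.48.0"   (version pin is an assumption)


def setup_sagemaker():
    """Create a SageMaker session and resolve the execution role.

    Requires AWS credentials. The fallback role name below is a
    hypothetical placeholder for local (non-Studio) environments.
    """
    import boto3
    import sagemaker

    sess = sagemaker.Session()
    try:
        # works inside SageMaker Studio / Notebook Instances
        role = sagemaker.get_execution_role()
    except ValueError:
        # local environment: look up an IAM role by name (placeholder name)
        iam = boto3.client("iam")
        role = iam.get_role(RoleName="sagemaker_execution_role")["Role"]["Arn"]
    return sess, role
```

In a local environment the `except` branch runs, which is why the IAM role with SageMaker permissions mentioned above is needed.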
2. Deploy a SegFormer model to Amazon SageMaker for image segmentation
Image segmentation divides an image into segments where each pixel in the image is mapped to an object. This task has multiple variants, such as instance segmentation, panoptic segmentation, and semantic segmentation.
We use the nvidia/segformer-b0-finetuned-ade-512-512 model to run our segmentation endpoint. This model is fine-tuned on ADE20k (a scene-centric image dataset) at resolution 512x512.
Before we are able to deploy our HuggingFaceModel class, we need to create a new serializer which supports our image data. Serializers are used in the Predictor and in the predict method to serialize our data to a specific mime-type, which is sent to the endpoint. The default serializer for the HuggingFacePredictor is a JSON serializer, but since we are not going to send text data to the endpoint, we will use the DataSerializer.
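The deployment can be sketched as below. The model ID and task come from this guide; the framework versions and instance type are assumptions and should be matched to a DLC your account supports.

```python
# Hub configuration for the model we deploy (values from the model card)
HUB_CONFIG = {
    "HF_MODEL_ID": "nvidia/segformer-b0-finetuned-ade-512-512",
    "HF_TASK": "image-segmentation",
}


def deploy_segmentation_endpoint():
    """Deploy the SegFormer model behind a SageMaker endpoint.

    Requires AWS credentials and the `sagemaker` SDK. The framework
    versions and instance type below are illustrative assumptions.
    """
    import sagemaker
    from sagemaker.huggingface import HuggingFaceModel
    from sagemaker.serializers import DataSerializer

    huggingface_model = HuggingFaceModel(
        env=HUB_CONFIG,
        role=sagemaker.get_execution_role(),
        transformers_version="4.26",  # assumed; pick a DLC-supported version
        pytorch_version="1.13",       # assumed
        py_version="py39",            # assumed
    )
    # send raw image bytes instead of JSON
    image_serializer = DataSerializer(content_type="image/x-image")
    return huggingface_model.deploy(
        initial_instance_count=1,
        instance_type="ml.g4dn.xlarge",  # assumed GPU instance type
        serializer=image_serializer,
    )
```

Note how the DataSerializer is passed to `deploy()`, replacing the default JSON serializer for this predictor.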
3. Send requests to the endpoint to do image segmentation.
The .deploy() method returns a HuggingFacePredictor object with our DataSerializer, which can be used to request inference. This HuggingFacePredictor makes it easy to send requests to your endpoint and get the results back.
We will use 2 different methods to send requests to the endpoint:
a. Provide an image file via path to the predictor
b. Provide a binary image data object to the predictor
a. Provide an image file via path to the predictor
Using an image file as input is as easy as providing the path to its location. The DataSerializer will then read it and send the bytes to the endpoint.
We can use a sample image hosted on huggingface.co.
Before we send our request, let's create a helper function to display our segmentation results.
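A helper along these lines can decode and summarize the endpoint's output. It assumes the response format of the Hugging Face image-segmentation pipeline (a list of dicts with a base64-encoded PNG "mask", a class "label", and a "score"); the function name is our own.

```python
import base64
import io

from PIL import Image


def show_predictions(predictions):
    """Decode each base64 PNG mask and print a per-class summary.

    Assumes `predictions` is a list of dicts with "mask" (base64 PNG),
    "label", and "score" keys, as returned by the image-segmentation
    pipeline. Returns the decoded masks as PIL images.
    """
    masks = []
    for pred in predictions:
        mask = Image.open(io.BytesIO(base64.b64decode(pred["mask"])))
        masks.append(mask)
        print(f'{pred["label"]:<20} score={pred["score"]:.3f} mask={mask.size}')
    return masks
```

The returned PIL images can then be displayed or overlaid on the input image in the notebook.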
To send a request with the path to our image file, we can use the following code:
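A minimal sketch of the path-based request (the wrapper function is our own; the file name is illustrative):

```python
def predict_from_path(predictor, image_path="ADE_val_00000001.jpg"):
    """Send an image to the endpoint by file path.

    The DataSerializer attached at deploy time reads the file at
    `image_path` and sends its raw bytes with the configured
    content type (e.g. "image/x-image").
    """
    return predictor.predict(data=image_path)
```

The predictor returns the parsed segmentation results, which can be passed straight to the display helper above.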
b. Provide a binary image data object to the predictor
Instead of providing a path to the image file, we can also directly provide its bytes by reading the file in Python.
Make sure ADE_val_00000001.jpg is in the directory.
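The binary variant can be sketched as below (the wrapper function is our own):

```python
def predict_from_bytes(predictor, image_path="ADE_val_00000001.jpg"):
    """Read the image file ourselves and send its raw bytes.

    Equivalent to the path-based call, except we do the file read in
    Python and hand the bytes object directly to the predictor.
    """
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    return predictor.predict(data=image_bytes)
```

Both variants send the same payload to the endpoint; the bytes form is useful when the image comes from somewhere other than the local filesystem, such as a download or an in-memory buffer.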