
GitHub Repository: huggingface/notebooks
Path: blob/main/sagemaker/12_batch_transform_inference/sagemaker-notebook.ipynb
Kernel: conda_pytorch_p39

Huggingface Sagemaker-sdk - Run a batch transform inference job with 🤗 Transformers

  1. Introduction

  2. Run Batch Transform after training a model

  3. Run Batch Transform Inference Job with a fine-tuned model using jsonl

Welcome to this getting started guide. We will use the new Hugging Face Inference DLCs and the Amazon SageMaker Python SDK to deploy two transformer models for inference. In the first example, we deploy a trained Hugging Face Transformer model to SageMaker for inference. In the second example, we deploy one of the 10,000+ Hugging Face Transformers models from the Hub directly to Amazon SageMaker for inference.

Run Batch Transform after training a model

not included in the notebook

After you train a model, you can use Amazon SageMaker Batch Transform to perform inference with the model. In Batch Transform you provide your inference data as an S3 URI, and SageMaker takes care of downloading the data, running the prediction, and uploading the results back to S3. You can find more documentation for Batch Transform here.

If you trained the model using the HuggingFace estimator, you can invoke the transformer() method to create a transform job for the model based on the training job.

batch_job = huggingface_estimator.transformer(
    instance_count=1,
    instance_type='ml.c5.2xlarge',
    strategy='SingleRecord')

batch_job.transform(
    data='s3://s3-uri-to-batch-data',
    content_type='application/json',
    split_type='Line')

For more details about what can be specified here, see the API docs.

!pip install "sagemaker>=2.48.0" "datasets==1.11" --upgrade

Run Batch Transform Inference Job with a fine-tuned model using jsonl

Data Pre-Processing

In this example we use the provided tweet_data.csv as our dataset. The CSV contains ~1800 tweets about different airlines in a single column "inputs". To use this CSV we need to convert it into a JSONL file and upload it to S3. Due to the complex structure of text data, only JSONL files are supported for batch transform. As pre-processing, we remove the leading @ from each tweet so the names/identities are read correctly.

NOTE: While preprocessing, you need to make sure that your inputs fit the model's max_length.
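For instance (not part of the original notebook), you could truncate each tweet with the model's tokenizer before writing the JSONL file. A minimal sketch, assuming the transformers library is installed locally and using the cardiffnlp/twitter-roberta-base-sentiment model deployed later in this notebook (512 is assumed as the RoBERTa max sequence length):

from transformers import AutoTokenizer

# tokenizer for the model used later in this notebook (assumption: downloadable locally)
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment")

def truncate_to_max_length(text, max_length=512):
    # encode with truncation, then decode back to plain text so the JSONL stays human-readable
    ids = tokenizer.encode(text, truncation=True, max_length=max_length)
    return tokenizer.decode(ids, skip_special_tokens=True)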

import sagemaker
import boto3

sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
sagemaker role arn: arn:aws:iam::558105141721:role/sagemaker_execution_role
sagemaker bucket: sagemaker-us-east-1-558105141721
sagemaker session region: us-east-1
import csv
import json
from sagemaker.s3 import S3Uploader, s3_path_join

# dataset files
dataset_csv_file = "tweet_data.csv"
dataset_jsonl_file = "tweet_data.jsonl"

# convert the csv to jsonl, removing the leading @ from each tweet
with open(dataset_csv_file, "r+") as infile, open(dataset_jsonl_file, "w+") as outfile:
    reader = csv.DictReader(infile)
    for row in reader:
        # remove @
        row["inputs"] = row["inputs"].replace("@", "")
        json.dump(row, outfile)
        outfile.write('\n')

# upload the jsonl file to S3
input_s3_path = s3_path_join("s3://", sagemaker_session_bucket, "batch_transform/input")
output_s3_path = s3_path_join("s3://", sagemaker_session_bucket, "batch_transform/output")
s3_file_uri = S3Uploader.upload(dataset_jsonl_file, input_s3_path)

print(f"{dataset_jsonl_file} uploaded to {s3_file_uri}")
tweet_data.jsonl uploaded to s3://sagemaker-us-east-1-558105141721/batch_transform/input/tweet_data.jsonl

The created file looks like this:

{"inputs": "VirginAmerica What dhepburn said."} {"inputs": "VirginAmerica plus you've added commercials to the experience... tacky."} {"inputs": "VirginAmerica I didn't today... Must mean I need to take another trip!"} {"inputs": "VirginAmerica it's really aggressive to blast obnoxious \"entertainment\"...."} {"inputs": "VirginAmerica and it's a really big bad thing about it"} {"inputs": "VirginAmerica seriously would pay $30 a flight for seats that didn't h...."} {"inputs": "VirginAmerica yes, nearly every time I fly VX this \u201cear worm\u201d won\u2019t go away :)"} {"inputs": "VirginAmerica Really missed a prime opportunity for Men Without ..."} {"inputs": "virginamerica Well, I didn't\u2026but NOW I DO! :-D"} {"inputs": "VirginAmerica it was amazing, and arrived an hour early. You're too good to me."} {"inputs": "VirginAmerica did you know that suicide is the second leading cause of death among teens 10-24"} {"inputs": "VirginAmerica I &lt;3 pretty graphics. so much better than minimal iconography. :D"} {"inputs": "VirginAmerica This is such a great deal! Already thinking about my 2nd trip ..."} ....

Create Inference Transformer to run the batch job

We use the twitter-roberta-base-sentiment model to run our batch transform job. This is a RoBERTa-base model trained on ~58M tweets and fine-tuned for sentiment analysis with the TweetEval benchmark.

from sagemaker.huggingface.model import HuggingFaceModel

# Hub Model configuration. <https://huggingface.co/models>
hub = {
    'HF_MODEL_ID': 'cardiffnlp/twitter-roberta-base-sentiment',
    'HF_TASK': 'text-classification'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    env=hub,                      # configuration for loading model from Hub
    role=role,                    # iam role with permissions to create an Endpoint
    transformers_version="4.26",  # transformers version used
    pytorch_version="1.13",       # pytorch version used
    py_version='py39',            # python version used
)

# create Transformer to run our batch job
batch_job = huggingface_model.transformer(
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    output_path=output_s3_path,   # we use the same s3 prefix for the output as for the input
    strategy='SingleRecord')

# starts batch transform job and uses s3 data as input
batch_job.transform(
    data=s3_file_uri,
    content_type='application/json',
    split_type='Line')
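Note that transform() blocks until the batch job completes. If you prefer to start the job and come back to it later, the SageMaker Python SDK's Transformer.transform() also accepts wait and logs flags (verify against the API docs for your SDK version). A minimal sketch, assuming the batch_job object created above:

# start the batch transform job without blocking on completion
# (wait/logs parameters per the SageMaker Python SDK; check your SDK version)
batch_job.transform(
    data=s3_file_uri,
    content_type='application/json',
    split_type='Line',
    wait=False)

# ... do other work while the job runs ...

# block until the transform job has finished and stream its logs
batch_job.wait(logs=True)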
import json
from sagemaker.s3 import S3Downloader
from ast import literal_eval

# creating s3 uri for result file -> input file + .out
output_file = f"{dataset_jsonl_file}.out"
output_path = s3_path_join(output_s3_path, output_file)

# download file
S3Downloader.download(output_path, '.')

batch_transform_result = []
with open(output_file) as f:
    for line in f:
        # converts jsonline array to normal array
        line = "[" + line.replace("[", "").replace("]", ",") + "]"
        batch_transform_result = literal_eval(line)

# print results
print(batch_transform_result[:3])
INFO:botocore.credentials:Found credentials from IAM Role: BaseNotebookInstanceEc2InstanceRole
[{'label': 'LABEL_1', 'score': 0.766870379447937}, {'label': 'LABEL_0', 'score': 0.8912611603736877}, {'label': 'LABEL_1', 'score': 0.5760677456855774}]
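The job returns the generic LABEL_* classes of the fine-tuned model. According to the model card for cardiffnlp/twitter-roberta-base-sentiment, these map to negative/neutral/positive; a small post-processing sketch (not part of the original notebook, double-check the mapping for the model version you deploy):

# map generic labels to sentiment names (mapping taken from the model card)
label_names = {"LABEL_0": "negative", "LABEL_1": "neutral", "LABEL_2": "positive"}

readable_results = [
    {"sentiment": label_names[r["label"]], "score": r["score"]}
    for r in batch_transform_result
]
print(readable_results[:3])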