# Hugging Face Transformers Amazon SageMaker Examples

Example Jupyter notebooks that demonstrate how to build, train, and deploy Hugging Face Transformers using Amazon SageMaker and the Amazon SageMaker Python SDK.

## 🛠️ Setup

The quickest setup to run the example notebooks includes:

- An AWS account
- An IAM user and role with permissions for Amazon SageMaker and Amazon S3
- An Amazon SageMaker Notebook Instance or SageMaker Studio environment
- An Amazon S3 bucket for training data and model artifacts
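
For reference, a minimal sketch of the boilerplate most of these notebooks start with, assuming you are running inside a SageMaker Notebook Instance or Studio with the `sagemaker` Python SDK installed (not the exact code of any single notebook):

```python
# pip install --upgrade sagemaker   # if the SDK is not already installed
import sagemaker

# Create a SageMaker session and resolve the IAM execution role.
# get_execution_role() only works inside SageMaker (Notebook Instance / Studio);
# on a local machine you would pass your role ARN explicitly instead.
sess = sagemaker.Session()
role = sagemaker.get_execution_role()

print(f"Execution role ARN: {role}")
print(f"Default S3 bucket:  {sess.default_bucket()}")
```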

## 📓 Examples

| Notebook | Type | Description |
|----------|------|-------------|
| 01 Getting started with PyTorch | Training | Getting started end-to-end example on how to fine-tune a pre-trained Hugging Face Transformer for Text-Classification using PyTorch |
| 02 Getting started with TensorFlow | Training | Getting started end-to-end example on how to fine-tune a pre-trained Hugging Face Transformer for Text-Classification using TensorFlow |
| 03 Distributed Training: Data Parallelism | Training | End-to-end example on how to use distributed training with a data-parallelism strategy for fine-tuning a pre-trained Hugging Face Transformer for Question-Answering using Amazon SageMaker Data Parallelism |
| 04 Distributed Training: Model Parallelism | Training | End-to-end example on how to use distributed training with a model-parallelism strategy to pre-train a Hugging Face Transformer using Amazon SageMaker Model Parallelism |
| 05 How to use Spot Instances & Checkpointing | Training | End-to-end example on how to use Spot Instances and Checkpointing to reduce training cost |
| 06 Experiment Tracking with SageMaker Metrics | Training | End-to-end example on how to use SageMaker Metrics to track your experiments and training jobs |
| 07 Distributed Training: Data Parallelism | Training | End-to-end example on how to use Amazon SageMaker Data Parallelism with TensorFlow |
| 08 Distributed Training: Summarization with T5/BART | Training | End-to-end example on how to fine-tune BART/T5 for Summarization using Amazon SageMaker Data Parallelism |
| 09 Vision: Fine-tune ViT | Training | End-to-end example on how to fine-tune a Vision Transformer for Image-Classification |
| 10 Deploy HF Transformer from Amazon S3 | Inference | End-to-end example on how to deploy a model from Amazon S3 |
| 11 Deploy HF Transformer from Hugging Face Hub | Inference | End-to-end example on how to deploy a model from the Hugging Face Hub |
| 12 Batch Processing with Amazon SageMaker Batch Transform | Inference | End-to-end example on how to do batch processing with Amazon SageMaker Batch Transform |
| 13 Autoscaling SageMaker Endpoints | Inference | End-to-end example on how to use autoscaling for a HF Endpoint |
| 14 Fine-tune and push to Hub | Training | End-to-end example on how to use the Hugging Face Hub as an MLOps backend for saving checkpoints during training |
| 15 Training Compiler | Training | End-to-end example on how to use Amazon SageMaker Training Compiler to speed up training time |
| 16 Asynchronous Inference | Inference | End-to-end example on how to use Amazon SageMaker Asynchronous Inference endpoints with Hugging Face Transformers |
| 17 Custom inference.py script | Inference | End-to-end example on how to create a custom inference.py for Sentence Transformers and sentence embeddings |
| 18 AWS Inferentia | Inference | End-to-end example on how to use AWS Inferentia to speed up inference time |
| 19 Serverless Inference | Inference | Serverless Inference example to save costs |
| 20 Automatic Speech Recognition | Inference | Example of how to do speech recognition with wav2vec2 |
| 21 Image Segmentation | Inference | Example of how to do image segmentation with SegFormer |
| 22 Accelerate AWS SageMaker Integration examples | Training | End-to-end examples on how to use the AWS SageMaker integration of Accelerate |
| 23 Stable Diffusion | Inference | Example of how to generate images with Stable Diffusion |
| 24 Train BLOOM with PEFT | Training | Example of how to train BLOOM on a single GPU using PEFT & LoRA |
| 25 PyTorch FSDP model parallelism | Training | Example of how to train LLMs on multiple nodes and multiple GPUs with PyTorch FSDP |
| 26 Document AI Donut | Training | Learn how to fine-tune and deploy Donut-base for document understanding/document parsing using Hugging Face Transformers and Amazon SageMaker |
| 27 Deploy Large Language Models | Inference | Learn how to deploy LLMs with the Hugging Face LLM DLC |
| 28 Train LLMs with QLoRA | Training | Example of how to fine-tune LLMs using QLoRA |
| 29 Deploy LLMs with Inferentia2 | Inference | Learn how to deploy LLMs using AWS Inferentia2 |
| 30 Evaluate LLMs with lighteval | Inference | Learn how to evaluate LLMs using Hugging Face LightEval |
| 31 Deploy Embedding Models with TEI | Inference | Learn how to deploy embedding models for RAG applications with Hugging Face TEI |
| 32 Train and deploy Embedding Models | Train & Inference | Learn how to train and deploy embedding models with Sentence Transformers and TEI |
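
The training and inference notebooks above build on the same two SDK entry points: the `HuggingFace` estimator for training jobs and `HuggingFaceModel` for endpoints. The sketch below is illustrative only; the `entry_point`, instance types, hyperparameters, framework versions, model ID, and S3 paths are placeholders, and each notebook pins its own supported combination:

```python
from sagemaker.huggingface import HuggingFace, HuggingFaceModel

# --- Training: submit a fine-tuning script as a SageMaker training job ---
# entry_point/source_dir, instance type, and framework versions are placeholders.
huggingface_estimator = HuggingFace(
    entry_point="train.py",
    source_dir="./scripts",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role=role,  # execution role from the setup step above
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
    hyperparameters={"epochs": 3, "model_name_or_path": "distilbert-base-uncased"},
)
huggingface_estimator.fit({"train": "s3://my-bucket/train", "test": "s3://my-bucket/test"})

# --- Inference: deploy a model straight from the Hugging Face Hub ---
huggingface_model = HuggingFaceModel(
    env={
        "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
        "HF_TASK": "text-classification",
    },
    role=role,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)
predictor = huggingface_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "SageMaker makes it easy to deploy Transformers."}))

# Clean up the endpoint when you are done to avoid ongoing charges.
predictor.delete_endpoint()
```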