
Putting it all together (PyTorch)

Install the Transformers, Datasets, and Evaluate libraries to run this notebook.

!pip install datasets evaluate transformers[sentencepiece]
from transformers import AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

sequence = "I've been waiting for a HuggingFace course my whole life."

model_inputs = tokenizer(sequence)
sequence = "I've been waiting for a HuggingFace course my whole life." model_inputs = tokenizer(sequence)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"] model_inputs = tokenizer(sequences)
# Pads the sequences up to the length of the longest sequence in the batch
model_inputs = tokenizer(sequences, padding="longest")

# Pads up to the model's maximum length
# (512 for BERT or DistilBERT)
model_inputs = tokenizer(sequences, padding="max_length")

# Pads up to the specified maximum length
model_inputs = tokenizer(sequences, padding="max_length", max_length=8)
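To see what padding actually produces, here is a minimal sketch (assuming the tokenizer and sequences defined above) that compares the padded lengths and the accompanying attention mask; padded positions are marked with 0 so the model ignores them:

# Sketch: padding makes all sequences the same length and masks the padded positions
batch = tokenizer(sequences, padding="longest")
for ids, mask in zip(batch["input_ids"], batch["attention_mask"]):
    print(len(ids), mask)  # the shorter sequence is padded; its extra positions get mask 0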
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"] # Tronca le sequenze più lunghe della lunghezza massima del modello. # (512 per BERT o DistilBERT) model_inputs = tokenizer(sequences, truncation=True) # Tronca le sequenze più lunghe della lunghezza massima specificata. model_inputs = tokenizer(sequences, max_length=8, truncation=True)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"] # Ritorna tensori PyTorch model_inputs = tokenizer(sequences, padding=True, return_tensors="pt") # Ritorna tensori TensorFlow model_inputs = tokenizer(sequences, padding=True, return_tensors="tf") # Ritorna NumPy arrays model_inputs = tokenizer(sequences, padding=True, return_tensors="np")
sequence = "I've been waiting for a HuggingFace course my whole life." model_inputs = tokenizer(sequence) print(model_inputs["input_ids"]) tokens = tokenizer.tokenize(sequence) ids = tokenizer.convert_tokens_to_ids(tokens) print(ids)
[101, 1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012, 102]
[1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012]
print(tokenizer.decode(model_inputs["input_ids"]))
print(tokenizer.decode(ids))
"[CLS] i've been waiting for a huggingface course my whole life. [SEP]" "i've been waiting for a huggingface course my whole life."
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
sequences = ["I've been waiting for a HuggingFace course my whole life.", "So have I!"]

tokens = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")
output = model(**tokens)
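To turn the raw logits into readable predictions, here is a minimal follow-up sketch; the id2label mapping comes from the checkpoint's configuration:

# Sketch: convert logits to probabilities and read off the class names
predictions = torch.nn.functional.softmax(output.logits, dim=-1)
print(predictions)
print(model.config.id2label)  # {0: 'NEGATIVE', 1: 'POSITIVE'} for this checkpoint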