
GitHub Repository: huggingface/notebooks
Path: blob/main/course/fr/chapter2/section5_pt.ipynb
Kernel: Python 3

Handling multiple sequences (PyTorch)

Install the 🤗 Transformers library to run this notebook.

!pip install transformers[sentencepiece]
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "tblard/tf-allocine"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, from_tf=True)

sequence = "J'ai attendu un cours d’HuggingFace toute ma vie."

tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.tensor(ids)

# This line will fail: the model expects a batch of inputs, not a single sequence
model(input_ids)
tokenized_inputs = tokenizer(sequence, return_tensors="pt")
print(tokenized_inputs["input_ids"])
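The tokenizer output already carries a batch dimension (and the checkpoint's special tokens), so it can be passed straight to the model. A minimal sketch, assuming the model loaded in the cell above:

# The tokenizer output already has a batch dimension, so the model accepts it directly
output = model(**tokenized_inputs)
print(output.logits)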
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "tblard/tf-allocine"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, from_tf=True)

sequence = "J'ai attendu un cours d’HuggingFace toute ma vie."

tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)

# Wrap the ids in a list to add the batch dimension the model expects
input_ids = torch.tensor([ids])
print("Input IDs:", input_ids)

output = model(input_ids)
print("Logits:", output.logits)
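Note that these logits can differ slightly from the ones obtained via tokenizer(sequence, return_tensors="pt"), because convert_tokens_to_ids does not add the checkpoint's special tokens. A quick check, assuming the cells above have been run in order:

# Decode both id sequences: only the tokenizer() output contains the special tokens
print(tokenizer.decode(ids))
print(tokenizer.decode(tokenized_inputs["input_ids"][0]))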
batched_ids = [
    [200, 200, 200],
    [200, 200],
]
padding_id = 100

batched_ids = [
    [200, 200, 200],
    [200, 200, padding_id],
]
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, from_tf=True)

sequence1_ids = [[200, 200, 200]]
sequence2_ids = [[200, 200]]
batched_ids = [
    [200, 200, 200],
    [200, 200, tokenizer.pad_token_id],
]

# Without an attention mask, the pad token is attended to, so the logits for the
# padded row do not match the logits of sequence2 on its own
print(model(torch.tensor(sequence1_ids)).logits)
print(model(torch.tensor(sequence2_ids)).logits)
print(model(torch.tensor(batched_ids)).logits)
batched_ids = [
    [200, 200, 200],
    [200, 200, tokenizer.pad_token_id],
]

attention_mask = [
    [1, 1, 1],
    [1, 1, 0],
]

# With the attention mask, the padded row now yields the same logits as sequence2
outputs = model(torch.tensor(batched_ids), attention_mask=torch.tensor(attention_mask))
print(outputs.logits)
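In practice you rarely build the padding and the attention mask by hand: calling the tokenizer on a list of sentences with padding=True returns both. A minimal sketch with the same checkpoint (the second sentence is only an illustration):

# Let the tokenizer pad the batch and build the attention mask itself
sequences = [
    "J'ai attendu un cours d’HuggingFace toute ma vie.",
    "Je déteste les cours trop longs.",
]
batch = tokenizer(sequences, padding=True, return_tensors="pt")
outputs = model(**batch)
print(outputs.logits)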
# Truncate the sequence so it fits within the model's maximum length
max_sequence_length = 512
sequence = sequence[:max_sequence_length]
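Slicing the string truncates characters, not tokens; the tokenizer can instead truncate at the token level and knows the limit declared by the checkpoint. A sketch, assuming the tokenizer loaded above:

# Truncate at the token level; model_max_length is the limit declared by the checkpoint
print(tokenizer.model_max_length)
truncated = tokenizer(sequence, truncation=True, max_length=512, return_tensors="pt")
print(truncated["input_ids"].shape)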