
GitHub Repository: huggingface/notebooks
Path: blob/main/course/videos/qa_pipeline_pt.ipynb

This notebook contains the code samples from the video below, which is part of the Hugging Face course.

#@title
from IPython.display import HTML

HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/_wxyB3j3mk4?rel=0&amp;controls=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe>')

Install the Transformers and Datasets libraries to run this notebook.

! pip install datasets transformers[sentencepiece]
from transformers import pipeline

question_answerer = pipeline("question-answering")
context = """
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question = "Which deep learning libraries back 🤗 Transformers?"
question_answerer(question=question, context=context)
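The pipeline returns a dictionary with the answer text, a confidence score, and the character offsets of the answer span inside the context. A minimal usage sketch (the variable name is illustrative):

result = question_answerer(question=question, context=context)
print(result["answer"])                # extracted answer text
print(round(result["score"], 4))       # model confidence
print(result["start"], result["end"])  # character offsets into `context`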
long_context = """
🤗 Transformers: State of the Art NLP

🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction,
question answering, summarization, translation, text generation and more in over 100 languages.
Its aim is to make cutting-edge NLP easier to use for everyone.

🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and
then share them with the community on our model hub. At the same time, each python module defining an architecture is fully standalone and
can be modified to enable quick research experiments.

Why should I use transformers?

1. Easy-to-use state-of-the-art models:
  - High performance on NLU and NLG tasks.
  - Low barrier to entry for educators and practitioners.
  - Few user-facing abstractions with just three classes to learn.
  - A unified API for using all our pretrained models.
  - Lower compute costs, smaller carbon footprint:

2. Researchers can share trained models instead of always retraining.
  - Practitioners can reduce compute time and production costs.
  - Dozens of architectures with over 10,000 pretrained models, some in more than 100 languages.

3. Choose the right framework for every part of a model's lifetime:
  - Train state-of-the-art models in 3 lines of code.
  - Move a single model between TF2.0/PyTorch frameworks at will.
  - Seamlessly pick the right framework for training, evaluation and production.

4. Easily customize a model or an example to your needs:
  - We provide examples for each architecture to reproduce the results published by its original authors.
  - Model internals are exposed as consistently as possible.
  - Model files can be used independently of the library for quick experiments.

🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question_answerer(question=question, context=long_context)
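Even on the long context the pipeline returns a single best answer, and its start and end fields are character offsets into the text that was passed in, so the span can be recovered directly. A small sketch, not part of the original notebook:

result = question_answerer(question=question, context=long_context)
print(long_context[result["start"] : result["end"]])  # should match result["answer"]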
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_checkpoint = "distilbert-base-cased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)

inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)

start_logits = outputs.start_logits
end_logits = outputs.end_logits
print(start_logits.shape, end_logits.shape)
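Each logits tensor has shape (batch_size, sequence_length): one score per token for being the start or the end of the answer. Taking a plain argmax can land on question or special tokens, which is why the next cell masks everything outside the context. A quick illustrative check, not in the original notebook:

start_index = start_logits.argmax(dim=-1).item()
end_index = end_logits.argmax(dim=-1).item()
print(start_index, end_index)  # token positions in the full question + context sequence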
import torch

sequence_ids = inputs.sequence_ids()
# Mask everything apart from the tokens of the context
mask = [i != 1 for i in sequence_ids]
# Unmask the [CLS] token
mask[0] = False
mask = torch.tensor(mask)[None]

start_logits[mask] = -10000
end_logits[mask] = -10000

start_probabilities = torch.nn.functional.softmax(start_logits, dim=-1)[0]
end_probabilities = torch.nn.functional.softmax(end_logits, dim=-1)[0]
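The next cell works with a `scores` matrix that is not defined in this excerpt. Following the approach used here, it is the outer product of the start and end probabilities, so that `scores[i, j]` estimates the probability that the answer starts at token `i` and ends at token `j` (a reconstruction of the missing step):

scores = start_probabilities[:, None] * end_probabilities[None, :]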
# Keep only pairs where the end token does not come before the start token
scores = torch.triu(scores)

max_index = scores.argmax().item()
start_index = max_index // scores.shape[1]
end_index = max_index % scores.shape[1]
score = scores[start_index, end_index].item()

# Map the token indices back to character positions in the original context
inputs_with_offsets = tokenizer(question, context, return_offsets_mapping=True)
offsets = inputs_with_offsets["offset_mapping"]

start_char, _ = offsets[start_index]
_, end_char = offsets[end_index]
answer = context[start_char:end_char]

print(f"Answer: '{answer}', score: {score:.4f}")
inputs = tokenizer(
    question,
    long_context,
    stride=128,
    max_length=384,
    padding="longest",
    truncation="only_second",
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
)
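With return_overflowing_tokens=True, the long context is split into overlapping chunks of at most 384 tokens (with 128 tokens of overlap between chunks), and the offset and overflow mappings come back alongside the input IDs. The excerpt stops here; below is a hedged sketch of one way to continue, mirroring the short-context cells above (the popped keys are the fast tokenizer's standard output names):

# The mapping fields are Python lists and must be removed before converting to tensors
offsets = inputs.pop("offset_mapping")
_ = inputs.pop("overflow_to_sample_mapping")
inputs = inputs.convert_to_tensors("pt")
print(inputs["input_ids"].shape)  # (number of chunks, padded sequence length)

# Run the model on every chunk at once; the best span can then be picked per chunk
outputs = model(**inputs)
print(outputs.start_logits.shape, outputs.end_logits.shape)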