
GitHub Repository: huggingface/notebooks
Path: blob/main/course/videos/tf_predictions.ipynb

This notebook contains the code samples from the video below, which is part of the Hugging Face course.

#@title
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/nx10eh4CoOs?rel=0&amp;controls=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe>')

Install the Transformers and Datasets libraries to run this notebook.

! pip install datasets transformers[sentencepiece]
from datasets import load_dataset
from transformers import AutoTokenizer
import numpy as np

raw_datasets = load_dataset("glue", "mrpc")
# Use the same checkpoint for the tokenizer and the model below,
# so their vocabularies match
checkpoint = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def tokenize_dataset(dataset):
    # Tokenize both sentences together, padding to the longest example
    # and returning NumPy arrays that Keras can consume directly
    encoded = tokenizer(
        dataset["sentence1"],
        dataset["sentence2"],
        padding=True,
        truncation=True,
        return_tensors='np',
    )
    return encoded.data

tokenized_datasets = {
    split: tokenize_dataset(raw_datasets[split]) for split in raw_datasets.keys()
}
train_tokens = tokenized_datasets['train']['input_ids']
Reusing dataset glue (/home/sgugger/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
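Because `return_tensors='np'` makes the tokenizer return padded NumPy arrays, each split is now a plain dict of arrays that Keras can consume directly. A quick sanity check (just an illustration, not part of the original cell):

# Every feature is a (num_examples, padded_length) NumPy array
print(tokenized_datasets['train'].keys())  # input_ids, token_type_ids, attention_mask
print(train_tokens.shape)                  # first dim is the number of MRPC training pairs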
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Reuse the checkpoint defined with the tokenizer above
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
# The model outputs raw logits, so the loss needs from_logits=True
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
All model checkpoint layers were used when initializing TFBertForSequenceClassification.
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
from tensorflow.keras.optimizers.schedules import PolynomialDecay

batch_size = 8
num_epochs = 3
num_train_steps = (len(train_tokens) // batch_size) * num_epochs
lr_scheduler = PolynomialDecay(
    initial_learning_rate=5e-5,
    end_learning_rate=0.,
    decay_steps=num_train_steps
)
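PolynomialDecay is a callable schedule: Keras passes it the current optimizer step and it returns that step's learning rate, decaying linearly (power=1 by default) from 5e-5 down to 0 over the whole run. You can probe the endpoints directly to confirm the schedule (a quick check, not from the video):

print(float(lr_scheduler(0)))                # 5e-05 at the first step
print(float(lr_scheduler(num_train_steps)))  # ~0.0 at the final step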
from tensorflow.keras.optimizers import Adam

opt = Adam(learning_rate=lr_scheduler)
model.compile(loss=loss, optimizer=opt)
model.fit(
    tokenized_datasets['train'],
    np.array(raw_datasets['train']['label']),
    validation_data=(
        tokenized_datasets['validation'],
        np.array(raw_datasets['validation']['label']),
    ),
    batch_size=batch_size,
    epochs=num_epochs,
)
Epoch 1/3
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model. They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f5b279ce050>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:From /home/sgugger/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:5049: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version. Instructions for updating: The `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU.
459/459 [==============================] - 49s 81ms/step - loss: 0.6243 - val_loss: 0.5969
Epoch 2/3
459/459 [==============================] - 36s 79ms/step - loss: 0.5407 - val_loss: 0.5179
Epoch 3/3
459/459 [==============================] - 36s 79ms/step - loss: 0.3405 - val_loss: 0.5317
<tensorflow.python.keras.callbacks.History at 0x7f58507d3410>
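If you want to keep the fine-tuned weights, the usual Transformers round-trip is `save_pretrained` / `from_pretrained`; the directory name below is just an example, not from the video:

model.save_pretrained('bert-finetuned-mrpc')  # writes config.json and the TF weights
# model = TFAutoModelForSequenceClassification.from_pretrained('bert-finetuned-mrpc')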
preds = model.predict(tokenized_datasets['validation'])['logits']
probabilities = tf.nn.softmax(preds)
class_preds = np.argmax(probabilities, axis=1)
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model. They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
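`class_preds` holds integer class ids; the dataset's ClassLabel feature can translate them back to human-readable names. A small sketch using the objects defined above:

label_names = raw_datasets['validation'].features['label'].names  # ['not_equivalent', 'equivalent']
print([label_names[i] for i in class_preds[:5]])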
from datasets import load_metric

metric = load_metric("glue", "mrpc")
metric.compute(predictions=class_preds, references=raw_datasets['validation']['label'])
{'accuracy': 0.7549019607843137, 'f1': 0.8371335504885994}
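The reported accuracy is just the fraction of predictions that match the references, so you can cross-check it with plain NumPy:

references = np.array(raw_datasets['validation']['label'])
print((class_preds == references).mean())  # should reproduce the 'accuracy' value above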