If you're opening this notebook on Colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. Uncomment the following cell and run it.
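A minimal install cell along these lines should do it:

```python
# Uncomment and run if 🤗 Transformers and 🤗 Datasets are not installed yet.
#! pip install datasets transformers
```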
If you're opening this notebook locally, make sure your environment has the latest versions of those libraries installed.
To be able to share your model with the community, there are a few more steps to follow.
First you have to store your authentication token from the Hugging Face website (sign up here if you haven't already!) then uncomment the following cell and input your token:
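A sketch of that cell, using the notebook_login helper from huggingface_hub:

```python
# Uncomment to log in to the Hugging Face Hub with your access token.
# from huggingface_hub import notebook_login
# notebook_login()
```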
Then you need to install Git-LFS and set up Git if you haven't already. Uncomment the following instructions and adapt them with your name and email:
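For example (the email and name below are placeholders to replace with your own):

```python
# Uncomment on Colab or a Debian/Ubuntu machine; adapt to your package manager otherwise.
# !apt install git-lfs
# !git config --global user.email "you@example.com"
# !git config --global user.name "Your Name"
```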
Make sure your version of Transformers is at least 4.16.0 since some of the functionality we use was introduced in that version:
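A quick way to check the installed version:

```python
import transformers

print(transformers.__version__)
```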
We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely.
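The corresponding cell looks roughly like this (the exact example name string is an assumption):

```python
from transformers.utils import send_example_telemetry

# Reports which example notebook is being run; delete this cell to opt out.
send_example_telemetry("multiple_choice_notebook", framework="tensorflow")
```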
Fine-tuning a model on a multiple choice task
In this notebook, we will see how to fine-tune one of the 🤗 Transformers models on a multiple-choice task. In a multiple-choice task, multiple answers or continuations are provided for each input, and the model must pick the most plausible one. The dataset used here is SWAG, but you can adapt the pre-processing to any other multiple choice dataset you like, or to your own data. SWAG is a dataset about commonsense reasoning, where each example describes a situation and proposes four continuations that could follow it.
This notebook is built to run with any model checkpoint from the Model Hub as long as that model has a version with a multiple choice head. Depending on your model and the GPU you are using, you might need to adjust the batch size to avoid out-of-memory errors. Set those two parameters, then the rest of the notebook should run smoothly.
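For instance, assuming a bert-base-uncased checkpoint and a modest batch size:

```python
# Any checkpoint with a multiple choice head works here; adjust batch_size to your GPU memory.
model_checkpoint = "bert-base-uncased"
batch_size = 16
```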
Loading the dataset
We will use the 🤗 Datasets library to download the data. This can be easily done with the load_dataset function. load_dataset will cache the dataset to avoid downloading it again the next time you run this cell.
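A sketch of the download cell, assuming the "regular" configuration of SWAG:

```python
from datasets import load_dataset

datasets = load_dataset("swag", "regular")
```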
The dataset object itself is a DatasetDict, which contains one key each for the training, validation and test sets.
To access an actual element, you need to select a split first, then give an index:
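For example:

```python
datasets["train"][0]
```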
To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset.
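A sketch of such a helper (the function name and the use of pandas/IPython display are choices made here, not requirements):

```python
import random

import pandas as pd
from datasets import ClassLabel
from IPython.display import HTML, display


def show_random_elements(dataset, num_examples=5):
    """Display a few randomly picked rows of a 🤗 Dataset as an HTML table."""
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = random.sample(range(len(dataset)), num_examples)
    df = pd.DataFrame(dataset[picks])
    # Replace label ids with their human-readable names when the column is a ClassLabel.
    for column, typ in dataset.features.items():
        if isinstance(typ, ClassLabel):
            df[column] = df[column].transform(lambda i: typ.names[i])
    display(HTML(df.to_html()))


show_random_elements(datasets["train"])
```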
Each example in the dataset has a context composed of a first sentence (sent1) and an introduction to the second sentence (sent2). Then four possible endings are given (ending0, ending1, ending2 and ending3) and the model must pick the right one (label). The following function lets us visualize a given example a bit better:
Preprocessing the data
Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers Tokenizer, which will (as the name indicates) tokenize the inputs, convert the tokens to their corresponding IDs in the pretrained vocabulary, and put them in a format the model expects, as well as generate the other inputs the model requires.
To do all of this, we instantiate our tokenizer with the AutoTokenizer.from_pretrained method, which will ensure:
we get a tokenizer that corresponds to the model architecture we want to use,
we download the vocabulary used when pretraining this specific checkpoint.
That vocabulary will be cached, so it's not downloaded again the next time we run the cell.
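A sketch of the tokenizer cell (use_fast=True is the default, spelled out here for clarity):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
```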
You can directly call this tokenizer on one sentence or a pair of sentences:
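For example:

```python
tokenizer("Hello, this is a sentence!", "And this sentence goes with it.")
```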
Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later). You can learn more about them in this tutorial if you're interested.
We can now write the function that will preprocess our samples. The tricky part is to put all the possible pairs of sentences into two big lists before passing them to the tokenizer, then un-flatten the result so that each example has four input IDs, attention masks, etc.
When calling the tokenizer, we use the argument truncation=True. This will ensure that an input longer than what the selected model can handle is truncated to the maximum length accepted by the model.
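A sketch of that preprocessing function, using the SWAG column names described earlier (the function name is a choice made here):

```python
ending_names = ["ending0", "ending1", "ending2", "ending3"]


def preprocess_function(examples):
    # Repeat each first sentence four times so it can be paired with each candidate ending.
    first_sentences = [[context] * 4 for context in examples["sent1"]]
    # Build the four candidate second sentences for each example.
    question_headers = examples["sent2"]
    second_sentences = [
        [f"{header} {examples[end][i]}" for end in ending_names]
        for i, header in enumerate(question_headers)
    ]

    # Flatten both lists so the tokenizer sees simple lists of sentence pairs.
    first_sentences = sum(first_sentences, [])
    second_sentences = sum(second_sentences, [])

    tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
    # Un-flatten: regroup the outputs so each example gets a list of four encodings.
    return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
```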
This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists of lists for each key: a list of all examples (here 5), then a list of all choices (4) and a list of input IDs (length varying here since we did not apply any padding):
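For example:

```python
examples = datasets["train"][:5]
features = preprocess_function(examples)
print(len(features["input_ids"]), len(features["input_ids"][0]), [len(x) for x in features["input_ids"][0]])
```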
To check we didn't do anything wrong when grouping all possibilities and unflattening them, let's have a look at the decoded inputs for a given example:
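For instance, for the example at index 3 of the training set:

```python
idx = 3
[tokenizer.decode(features["input_ids"][idx][i]) for i in range(4)]
```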
We can compare it to the ground truth:
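Using the show_one helper defined earlier, with the same index:

```python
show_one(datasets["train"][3])
```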
This seems alright, so we can apply this function to all the examples in our dataset. All we need to do is use the map method of the dataset object we created earlier. This will apply the function to all the elements of all the splits in dataset, so our training, validation and testing data will be preprocessed in one single command.

Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus not to use the cached data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass load_from_cache_file=False in the call to map to not use the cached files and force the preprocessing to be applied again.
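A sketch of the map call:

```python
encoded_datasets = datasets.map(preprocess_function, batched=True)
```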
Note that we passed batched=True to encode the texts in batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to handle the texts in a batch concurrently.
Fine-tuning the model
Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is about multiple choice, we use the TFAutoModelForMultipleChoice class. Like with the tokenizer, the from_pretrained method will download and cache the model for us.
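A sketch of the model cell:

```python
from transformers import TFAutoModelForMultipleChoice

model = TFAutoModelForMultipleChoice.from_pretrained(model_checkpoint)
```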
The warning is telling us we are throwing away some weights (the vocab_transform and vocab_layer_norm layers) and randomly initializing some others (the pre_classifier and classifier layers). This is absolutely normal in this case, because we are removing the head used to pretrain the model on a masked language modeling objective and replacing it with a new head for which we don't have pretrained weights. The library warns us that we should fine-tune this model before using it for inference, which is exactly what we are going to do.
Next, we set some names and hyperparameters for the model. The first two variables are used so we can push the model to the Hub at the end of training. Remove them if you didn't follow the installation steps at the top of the notebook; otherwise you can change the value of push_to_hub_model_id to something you would prefer.
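For example (the hyperparameter values below are reasonable defaults, not the only possible choices):

```python
model_name = model_checkpoint.split("/")[-1]
push_to_hub_model_id = f"{model_name}-finetuned-swag"

learning_rate = 5e-5
num_train_epochs = 2
weight_decay = 0.01
```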
Next we need to tell our Dataset how to form batches from the pre-processed inputs. We haven't done any padding yet because we will pad each batch to the maximum length inside the batch (instead of padding to the maximum length of the whole dataset). This will be the job of the data collator. A data collator takes a list of examples and converts them to a batch (in our case, by applying padding). Since there is no data collator in the library that works on our specific problem, we will write one, adapted from DataCollatorWithPadding:
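A sketch of such a collator for TensorFlow, following the description below (the class name mirrors that description):

```python
from dataclasses import dataclass
from typing import Optional, Union

import tensorflow as tf
from transformers.tokenization_utils_base import PaddingStrategy, PreTrainedTokenizerBase


@dataclass
class DataCollatorForMultipleChoice:
    """Dynamically pad a batch of multiple choice examples."""

    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None

    def __call__(self, features):
        label_name = "label" if "label" in features[0].keys() else "labels"
        labels = [feature.pop(label_name) for feature in features]
        batch_size = len(features)
        num_choices = len(features[0]["input_ids"])

        # Flatten: one entry per (example, choice) pair so tokenizer.pad sees plain encodings.
        flattened_features = [
            [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
        ]
        flattened_features = sum(flattened_features, [])

        batch = self.tokenizer.pad(
            flattened_features,
            padding=self.padding,
            max_length=self.max_length,
            return_tensors="np",
        )

        # Un-flatten back to (batch_size, num_choices, seq_length) and add the labels back.
        batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
        batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
        return batch
```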
When called on a list of examples, it will flatten all the inputs/attention masks etc. into big lists that it will pass to the tokenizer.pad method. This will return a dictionary with big tensors (of shape (batch_size * 4) x seq_length) that we then un-flatten.
To check this data collator works on a list of features, we just have to make sure to remove all features that are not inputs accepted by our model:
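A quick check, keeping only the model inputs and the label:

```python
accepted_keys = ["input_ids", "attention_mask", "label"]
features = [
    {k: v for k, v in encoded_datasets["train"][i].items() if k in accepted_keys} for i in range(10)
]
batch = DataCollatorForMultipleChoice(tokenizer)(features)
```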
Again, all those flatten/un-flattens are sources of potential errors so let's make another sanity check on our inputs:
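For example, decoding the inputs of one element of the batch and comparing them with the raw example:

```python
[tokenizer.decode(batch["input_ids"][8][i].numpy().tolist()) for i in range(4)]
show_one(datasets["train"][8])
```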
All good! Now we can use this collator as a collation function for our dataset.
Next, we convert our datasets to tf.data.Dataset, which Keras understands natively. There are two ways to do this - we can use the slightly more low-level Dataset.to_tf_dataset() method, or we can use Model.prepare_tf_dataset(). The main difference between these two is that the Model method can inspect the model to determine which column names it can use as input, which means you don't need to specify them yourself.
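A sketch of the conversion, using prepare_tf_dataset together with the collator defined above:

```python
data_collator = DataCollatorForMultipleChoice(tokenizer)

tf_train_dataset = model.prepare_tf_dataset(
    encoded_datasets["train"],
    shuffle=True,
    batch_size=batch_size,
    collate_fn=data_collator,
)

tf_validation_dataset = model.prepare_tf_dataset(
    encoded_datasets["validation"],
    shuffle=False,
    batch_size=batch_size,
    collate_fn=data_collator,
)
```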
As we can see, our dataset will output a 2-tuple where the first element is a dict containing input_ids, token_type_ids and attention_mask, and the second element is the label. This is exactly what we want for our model!
Now we can compile our model. First, we specify an optimizer. Using the create_optimizer function we can get a nice AdamW optimizer with weight decay and a learning rate decay schedule set up for free - but to compute that schedule, it needs to know how long training will take.
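A sketch of the optimizer setup, reusing the hyperparameters assumed above:

```python
from transformers import create_optimizer

# The schedule needs the total number of training steps: batches per epoch times epochs.
total_train_steps = (len(encoded_datasets["train"]) // batch_size) * num_train_epochs

optimizer, schedule = create_optimizer(
    init_lr=learning_rate,
    num_warmup_steps=0,
    num_train_steps=total_train_steps,
    weight_decay_rate=weight_decay,
)
```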
Note that most Transformers models compute loss internally, so we actually don't have to specify anything there! You can of course set your own loss function if you want, but by default our models will choose the 'obvious' loss that matches their task, such as cross-entropy in the case of language modelling. The built-in loss will also correctly handle things like masking the loss on padding tokens, or unlabelled tokens in the case of masked language modelling, so we recommend using it unless you're an advanced user!
In addition, because the outputs and loss for this model class are quite straightforward, we can use built-in Keras metrics - these are liable to misbehave in other contexts (for example, they don't know about the masking in masked language modelling) but work well here.
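Putting that together, we compile with the built-in loss and a plain accuracy metric:

```python
model.compile(optimizer=optimizer, metrics=["accuracy"])
```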
In some of our other examples, we use jit_compile to compile the model with XLA. In this case, we should be careful about that - because our inputs have variable sequence lengths, we may end up having to do a new XLA compilation for each possible length, because XLA compilation expects a static input shape! For small datasets, this will probably result in spending more time on XLA compilation than actually training, which isn't very helpful.
If you really want to use XLA without these problems (for example, if you're training on TPU), you can create a tokenizer with padding="max_length". This will pad all of your samples to the same length, ensuring that a single XLA compilation will suffice for your entire dataset. Note that depending on the nature of your dataset, this may result in a lot of wasted computation on padding tokens!
Now we can train our model. We can also add a callback to sync up our model with the Hub - this allows us to resume training from other machines and even test the model's inference quality midway through training! Make sure to change the username if you do. If you don't want to do this, simply remove the callbacks argument in the call to fit().
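A sketch of the training cell, assuming a PushToHubCallback (the username and output directory are placeholders to adapt):

```python
from transformers.keras_callbacks import PushToHubCallback

username = "your-username"  # replace with your Hub username

push_to_hub_callback = PushToHubCallback(
    output_dir="./mc_model_save",
    tokenizer=tokenizer,
    hub_model_id=f"{username}/{push_to_hub_model_id}",
)

model.fit(
    tf_train_dataset,
    validation_data=tf_validation_dataset,
    epochs=num_train_epochs,
    callbacks=[push_to_hub_callback],
)
```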
If you used the callback above, you can now share this model with all your friends, family or favorite pets: they can all load it with the identifier "your-username/the-name-you-picked", so for instance:
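```python
from transformers import TFAutoModelForMultipleChoice

# Placeholder identifier; use the username and model name you actually pushed.
model = TFAutoModelForMultipleChoice.from_pretrained("your-username/the-name-you-picked")
```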
Inference
Now we've trained our model, let's see how we could load it and use it to answer questions in future! First, let's load it from the hub. This means we can resume the code from here without needing to rerun everything above every time.
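A sketch of the loading cell, with a placeholder checkpoint name to replace with the one you pushed above:

```python
from transformers import AutoTokenizer, TFAutoModelForMultipleChoice

checkpoint = "your-username/bert-base-uncased-finetuned-swag"  # placeholder
model = TFAutoModelForMultipleChoice.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```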
Now let's see how to use this model for inference. The SWAG task we trained on is a commonsense inference benchmark, where we ask the model to indicate which of four completions of a sentence is realistic and makes sense in context. Let's use a sample input from SWAG and see how we can get predictions for it.
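Rather than hard-coding a sentence, we can simply grab one example from the validation split we loaded earlier:

```python
# One SWAG example: a context plus four candidate continuations.
example = datasets["validation"][0]
context = example["sent1"]
endings = [example["sent2"] + " " + example[f"ending{i}"] for i in range(4)]
print(context)
print(endings)
```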
Now we tokenize this input. Note that our inputs need to be reshaped a little - multiple choice models expect inputs to have the shape (num_samples, num_choices, num_tokens) - this means we will need to add a sample/batch dimension of length 1.
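A sketch, pairing the context with each candidate ending and then adding the batch dimension:

```python
import numpy as np

tokenized = tokenizer([context] * 4, endings, padding=True, return_tensors="np")
# (num_choices, seq_len) -> (1, num_choices, seq_len): one sample with four choices.
inputs = {k: np.expand_dims(v, 0) for k, v in tokenized.items()}
```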
And now we run these inputs through our model and see what it guesses!
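For example:

```python
import tensorflow as tf

outputs = model(inputs)
logits = outputs.logits[0]  # one logit per choice
predicted_idx = int(tf.argmax(logits))
print(f"Predicted ending: {endings[predicted_idx]}")
```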