
Fine-tuning for Audio Classification with 🤗 Transformers

This notebook shows how to fine-tune pretrained speech models for audio classification.

This notebook is built to run on the Keyword Spotting subset of the SUPERB dataset with any speech model checkpoint from the Model Hub as long as that model has a version with a Sequence Classification head (e.g. Wav2Vec2ForSequenceClassification).

Depending on the model and the GPU you are using, you might need to adjust the batch size to avoid out-of-memory errors. Set those two parameters, then the rest of the notebook should run smoothly:

model_checkpoint = "facebook/wav2vec2-base"
batch_size = 32

Before we start, let's install the datasets and transformers libraries. We also need the librosa package to load audio files.

%%capture
!pip install datasets==1.14
!pip install transformers==4.11.3
!pip install librosa

If you're opening this notebook locally, make sure your environment has the latest version of those libraries installed.
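
As an optional sanity check (just a sketch, not part of the original workflow), you can print the installed versions to confirm they match the ones pinned above:

import datasets
import transformers

# Optional: confirm the installed versions match the pinned ones above.
print(datasets.__version__, transformers.__version__)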

To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.

First you have to store your authentication token from the Hugging Face website (sign up here if you haven't already!), then execute the following cell and input your username and password:

from huggingface_hub import notebook_login

notebook_login()

Then you need to install Git-LFS to upload your model checkpoints:

%%capture
!apt install git-lfs

We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely.

from transformers.utils import send_example_telemetry

send_example_telemetry("audio_classification_notebook", framework="pytorch")

Fine-tuning a model on an audio classification task

In this notebook, we will see how to fine-tune one of the 🤗 Transformers acoustic models on the Keyword Spotting task of the SUPERB benchmark.

Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of words. SUPERB uses the widely used Speech Commands dataset v1.0 for this task. The dataset consists of ten keyword classes, a class for silence, and an unknown class to include false positives.


Loading the dataset

We will use the 🤗 Datasets library to download the data and get the Accuracy metric we need to use for evaluation. This can be easily done with the functions load_dataset and load_metric.

from datasets import load_dataset, load_metric
dataset = load_dataset("superb", "ks")
metric = load_metric("accuracy")
Downloading and preparing dataset superb/ks (download: 1.45 GiB, generated: 9.64 MiB, post-processed: Unknown size, total: 1.46 GiB) to /root/.cache/huggingface/datasets/superb/ks/1.9.0/ce836692657f82230c16b3bbcb93eaacdbfd7de4def3be90016f112d68683481...
Dataset superb downloaded and prepared to /root/.cache/huggingface/datasets/superb/ks/1.9.0/ce836692657f82230c16b3bbcb93eaacdbfd7de4def3be90016f112d68683481. Subsequent calls will reuse this data.

The dataset object itself is a DatasetDict, which contains one key each for the training, validation, and test sets.

dataset
DatasetDict({
    train: Dataset({
        features: ['file', 'audio', 'label'],
        num_rows: 51094
    })
    validation: Dataset({
        features: ['file', 'audio', 'label'],
        num_rows: 6798
    })
    test: Dataset({
        features: ['file', 'audio', 'label'],
        num_rows: 3081
    })
})

To access an actual element, you need to select a split first, then give an index:

dataset["test"][1000]
{'audio': {'array': array([-1.2207031e-04, 3.0517578e-05, 1.8310547e-04, ..., -4.8828125e-04, -5.4931641e-04, -3.3569336e-04], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/94c69f12cc18539f842ccec832347d0f85a3424c14bc47bd63902105ba1b2391/go/e41a903b_nohash_4.wav', 'sampling_rate': 16000}, 'file': '/root/.cache/huggingface/datasets/downloads/extracted/94c69f12cc18539f842ccec832347d0f85a3424c14bc47bd63902105ba1b2391/go/e41a903b_nohash_4.wav', 'label': 9}

As you can see, the label field is not an actual string label. By default the ClassLabel fields are encoded into integers for convenience:

dataset["train"].features["label"]
ClassLabel(num_classes=12, names=['yes', 'no', 'up', 'down', 'left', 'right', 'on', 'off', 'stop', 'go', '_silence_', '_unknown_'], names_file=None, id=None)

Let's create an id2label dictionary to decode them back to strings and see what they are. The inverse label2id will be useful too, when we load the model later.

labels = dataset["train"].features["label"].names
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
    label2id[label] = str(i)
    id2label[str(i)] = label

id2label["9"]
'go'

Wav2Vec2 expects its input as a 1-dimensional array sampled at 16 kHz. This means that the audio file has to be loaded and resampled.

Thankfully, 🤗 Datasets does this automatically when the audio column is accessed. Let's try it out.

dataset["test"][1000]["audio"]
{'array': array([-1.2207031e-04, 3.0517578e-05, 1.8310547e-04, ..., -4.8828125e-04, -5.4931641e-04, -3.3569336e-04], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/94c69f12cc18539f842ccec832347d0f85a3424c14bc47bd63902105ba1b2391/go/e41a903b_nohash_4.wav', 'sampling_rate': 16000}

We can see that the audio file has automatically been loaded. This is thanks to the new "Audio" feature introduced in datasets == 1.13.3, which loads and resamples audio files on-the-fly when they are accessed.

The sampling rate is set to 16 kHz, which is what Wav2Vec2 expects as input.
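
If you bring your own dataset recorded at a different sampling rate, a minimal sketch like the following (not needed for SUPERB, which is already stored at 16 kHz, and assuming a recent version of 🤗 Datasets) would re-cast the audio column so that it is resampled to 16 kHz on the fly:

from datasets import Audio

# Hypothetical example: resample an audio column to the 16 kHz Wav2Vec2 expects.
# The SUPERB "ks" data used in this notebook is already at 16 kHz.
resampled_dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))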

To get a sense of what the commands sound like, the following snippet will render some audio examples picked randomly from the dataset.

Note: You can run the following cell a couple of times to listen to different audio samples.

import random
from IPython.display import Audio, display

for _ in range(5):
    rand_idx = random.randint(0, len(dataset["train"]) - 1)
    example = dataset["train"][rand_idx]
    audio = example["audio"]

    print(f'Label: {id2label[str(example["label"])]}')
    print(f'Shape: {audio["array"].shape}, sampling rate: {audio["sampling_rate"]}')
    display(Audio(audio["array"], rate=audio["sampling_rate"]))
    print()
Label: go Shape: (16000,), sampling rate: 16000
Label: down Shape: (16000,), sampling rate: 16000
Label: _unknown_ Shape: (16000,), sampling rate: 16000
Label: go Shape: (16000,), sampling rate: 16000
Label: _unknown_ Shape: (15604,), sampling rate: 16000

If you run the cell a couple of times, you'll see that despite slight variations in length, most of the samples are about 1 second long (duration = audio_length / sampling_rate). So we can safely truncate and pad the samples to 16,000 values (1 second at 16 kHz).
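
To double-check that rule of thumb, a small sketch like this one (illustrative only) applies the duration formula to a few random training samples:

import random

# Spot-check the duration formula on a few random training samples.
for _ in range(3):
    example = dataset["train"][random.randrange(len(dataset["train"]))]
    audio = example["audio"]
    duration = len(audio["array"]) / audio["sampling_rate"]
    print(f"{duration:.2f} seconds")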

Preprocessing the data

Before we can feed those audio clips to our model, we need to preprocess them. This is done by a 🤗 Transformers FeatureExtractor which will normalize the inputs and put them in a format the model expects, as well as generate the other inputs that the model requires.

To do all of this, we instantiate our feature extractor with the AutoFeatureExtractor.from_pretrained method, which will ensure that we get a preprocessor that corresponds to the model architecture we want to use.

from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained(model_checkpoint)
feature_extractor
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py:337: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`. "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 "
Wav2Vec2FeatureExtractor {
  "do_normalize": true,
  "feature_extractor_type": "Wav2Vec2FeatureExtractor",
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": false,
  "sampling_rate": 16000
}

As we noticed earlier, the samples are roughly 1 second long, so let's set the maximum duration here:

max_duration = 1.0 # seconds

We can then write the function that will preprocess our samples. We just feed them to the feature_extractor with the argument truncation=True, as well as the maximum sample length. This will ensure that very long inputs like the ones in the _silence_ class can be safely batched.

def preprocess_function(examples):
    audio_arrays = [x["array"] for x in examples["audio"]]
    inputs = feature_extractor(
        audio_arrays,
        sampling_rate=feature_extractor.sampling_rate,
        max_length=int(feature_extractor.sampling_rate * max_duration),
        truncation=True,
    )
    return inputs

The feature extractor will return a list of NumPy arrays, one for each example:

preprocess_function(dataset['train'][:5])
{'input_values': [array([-9.1631009e-05, -9.1631009e-05, -9.1631009e-05, ..., -4.6719767e-02, -8.0353022e-01, -1.3182331e+00], dtype=float32), array([0.01049979, 0.01049979, 0.01049979, ..., 0.6454253 , 0.43378347, 0.25741526], dtype=float32), array([ 9.0340059e-04, 9.0340059e-04, 9.0340059e-04, ..., -1.7281245e-01, 2.2313449e-01, 1.9931581e+00], dtype=float32), array([ 1.5586768 , 0.3870289 , 0.74101615, ..., -0.8897349 , -0.7703889 , -0.09471782], dtype=float32), array([-0.01518929, -0.01518929, -0.01518929, ..., -0.84138 , 0.22227868, -0.02409434], dtype=float32)]}

To apply this function to all utterances in our dataset, we just use the map method of the dataset object we created earlier. This will apply the function to all the elements of all the splits in dataset, so our training, validation, and testing data will be preprocessed in a single command.

encoded_dataset = dataset.map(preprocess_function, remove_columns=["audio", "file"], batched=True)
encoded_dataset
/usr/local/lib/python3.7/dist-packages/numpy/core/_asarray.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray return array(a, dtype, copy=False, order=order)
DatasetDict({
    train: Dataset({
        features: ['input_values', 'label'],
        num_rows: 51094
    })
    validation: Dataset({
        features: ['input_values', 'label'],
        num_rows: 6798
    })
    test: Dataset({
        features: ['input_values', 'label'],
        num_rows: 3081
    })
})

Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus that the cached data should not be reused). It also warns you when it uses cached files; you can pass load_from_cache_file=False in the call to map to ignore the cache and force the preprocessing to be applied again.
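
For instance, if you tweak preprocess_function and want to be certain the cache is bypassed, you could force a fresh run like this (a sketch only; not needed in a normal pass through the notebook):

# Force re-preprocessing instead of reusing cached results.
encoded_dataset = dataset.map(
    preprocess_function,
    remove_columns=["audio", "file"],
    batched=True,
    load_from_cache_file=False,
)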

Training the model

Now that our data is ready, we can download the pretrained model and fine-tune it. For classification we use the AutoModelForAudioClassification class. Like with the feature extractor, the from_pretrained method will download and cache the model for us. As the label ids and the number of labels are dataset dependent, we pass num_labels, label2id, and id2label alongside the model_checkpoint here:

from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer

num_labels = len(id2label)
model = AutoModelForAudioClassification.from_pretrained(
    model_checkpoint,
    num_labels=num_labels,
    label2id=label2id,
    id2label=id2label,
)
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py:337: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`. "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 "
Some weights of the model checkpoint at facebook/wav2vec2-base were not used when initializing Wav2Vec2ForSequenceClassification: ['project_q.weight', 'quantizer.weight_proj.bias', 'project_hid.weight', 'project_q.bias', 'project_hid.bias', 'quantizer.codevectors', 'quantizer.weight_proj.weight'] - This IS expected if you are initializing Wav2Vec2ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing Wav2Vec2ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of Wav2Vec2ForSequenceClassification were not initialized from the model checkpoint at facebook/wav2vec2-base and are newly initialized: ['projector.bias', 'classifier.weight', 'projector.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

The warning is telling us we are throwing away some weights (the quantizer and project_q layers) and randomly initializing some others (the projector and classifier layers). This is expected here, because we are removing the head used to pretrain the model on an unsupervised Vector Quantization objective and replacing it with a new head for which we don't have pretrained weights. The library therefore warns us that we should fine-tune this model before using it for inference, which is exactly what we are going to do.

To instantiate a Trainer, we will need to define the training configuration and the evaluation metric. The most important piece is TrainingArguments, a class that contains all the attributes to customize the training. It requires one folder name, which will be used to save the checkpoints of the model; all other arguments are optional:

model_name = model_checkpoint.split("/")[-1]

args = TrainingArguments(
    f"{model_name}-finetuned-ks",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=3e-5,
    per_device_train_batch_size=batch_size,
    gradient_accumulation_steps=4,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=5,
    warmup_ratio=0.1,
    logging_steps=10,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    push_to_hub=True,
)

Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the batch_size defined at the top of the notebook, and customize the number of training epochs and the warmup ratio. Since the best model might not be the one at the end of training, we ask the Trainer to load the best model it saved (according to metric_for_best_model) at the end of training.

The last argument, push_to_hub, allows the Trainer to push the model to the Hub regularly during training. Remove it if you didn't follow the installation steps at the top of the notebook. If you want to save your model locally with a name that is different from the name of the repository, or if you want to push your model under an organization and not your namespace, use the hub_model_id argument to set the repo name (it needs to be the full name, including your namespace: for instance "anton-l/wav2vec2-finetuned-ks" or "huggingface/wav2vec2-finetuned-ks").
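
As an illustration only (the repository name is a placeholder, and this assumes a transformers version that supports hub_model_id), the argument would be added to the TrainingArguments like so:

# Hypothetical sketch: push to an organization repository instead of your
# personal namespace (assumes your transformers version supports hub_model_id).
args = TrainingArguments(
    f"{model_name}-finetuned-ks",
    push_to_hub=True,
    hub_model_id="my-org/wav2vec2-base-finetuned-ks",
    # ... plus the rest of the arguments from the cell above ...
)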

Next, we need to define a function for how to compute the metrics from the predictions, which will just use the metric we loaded earlier. The only preprocessing we have to do is to take the argmax of our predicted logits:

import numpy as np

def compute_metrics(eval_pred):
    """Computes accuracy on a batch of predictions"""
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return metric.compute(predictions=predictions, references=eval_pred.label_ids)

Then we just need to pass all of this along with our datasets to the Trainer:

trainer = Trainer(
    model,
    args,
    train_dataset=encoded_dataset["train"],
    eval_dataset=encoded_dataset["validation"],
    tokenizer=feature_extractor,
    compute_metrics=compute_metrics
)

You might wonder why we pass along the feature_extractor as a tokenizer when we already preprocessed our data. This is because we will use it one last time to make all the samples we gather the same length by applying padding, which requires knowing the model's preferences regarding padding (to the left or right? with which value?). The feature_extractor has a pad method that will do all of this for us, and the Trainer will use it. You can customize this part by defining and passing your own data_collator, which will receive the samples as dictionaries like the ones seen above and will need to return a dictionary of tensors.
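
For illustration, here is a minimal sketch of such a collator (not used in this notebook; it relies on the feature extractor's pad method and assumes batches shaped like the encoded examples above):

import torch

def collate_fn(batch):
    # Pad the raw input_values to the longest sample in the batch and
    # stack the integer labels into a tensor, as the model expects.
    features = [{"input_values": example["input_values"]} for example in batch]
    labels = torch.tensor([example["label"] for example in batch], dtype=torch.long)
    padded = feature_extractor.pad(features, padding=True, return_tensors="pt")
    padded["labels"] = labels
    return padded

# It could then be passed to the Trainer via data_collator=collate_fn.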

Now we can fine-tune our model by calling the train method:

trainer.train()
***** Running training *****
  Num examples = 51094
  Num Epochs = 5
  Instantaneous batch size per device = 32
  Total train batch size (w. parallel, distributed & accumulation) = 128
  Gradient Accumulation steps = 4
  Total optimization steps = 1995
***** Running Evaluation *****
  Num examples = 6798
  Batch size = 32
Saving model checkpoint to wav2vec2-base-finetuned-ks/checkpoint-399
Configuration saved in wav2vec2-base-finetuned-ks/checkpoint-399/config.json
Model weights saved in wav2vec2-base-finetuned-ks/checkpoint-399/pytorch_model.bin
Configuration saved in wav2vec2-base-finetuned-ks/checkpoint-399/preprocessor_config.json
Configuration saved in wav2vec2-base-finetuned-ks/preprocessor_config.json
***** Running Evaluation *****
  Num examples = 6798
  Batch size = 32
Saving model checkpoint to wav2vec2-base-finetuned-ks/checkpoint-798
Configuration saved in wav2vec2-base-finetuned-ks/checkpoint-798/config.json
Model weights saved in wav2vec2-base-finetuned-ks/checkpoint-798/pytorch_model.bin
Configuration saved in wav2vec2-base-finetuned-ks/checkpoint-798/preprocessor_config.json
***** Running Evaluation *****
  Num examples = 6798
  Batch size = 32
Saving model checkpoint to wav2vec2-base-finetuned-ks/checkpoint-1197
Configuration saved in wav2vec2-base-finetuned-ks/checkpoint-1197/config.json
Model weights saved in wav2vec2-base-finetuned-ks/checkpoint-1197/pytorch_model.bin
Configuration saved in wav2vec2-base-finetuned-ks/checkpoint-1197/preprocessor_config.json
***** Running Evaluation *****
  Num examples = 6798
  Batch size = 32
Saving model checkpoint to wav2vec2-base-finetuned-ks/checkpoint-1596
Configuration saved in wav2vec2-base-finetuned-ks/checkpoint-1596/config.json
Model weights saved in wav2vec2-base-finetuned-ks/checkpoint-1596/pytorch_model.bin
Configuration saved in wav2vec2-base-finetuned-ks/checkpoint-1596/preprocessor_config.json
***** Running Evaluation *****
  Num examples = 6798
  Batch size = 32
Saving model checkpoint to wav2vec2-base-finetuned-ks/checkpoint-1995
Configuration saved in wav2vec2-base-finetuned-ks/checkpoint-1995/config.json
Model weights saved in wav2vec2-base-finetuned-ks/checkpoint-1995/pytorch_model.bin
Configuration saved in wav2vec2-base-finetuned-ks/checkpoint-1995/preprocessor_config.json
Training completed. Do not forget to share your model on huggingface.co/models =)
Loading best model from wav2vec2-base-finetuned-ks/checkpoint-1995 (score: 0.9823477493380406).
TrainOutput(global_step=1995, training_loss=0.5248636143846919, metrics={'train_runtime': 4281.3853, 'train_samples_per_second': 59.67, 'train_steps_per_second': 0.466, 'total_flos': 2.31918157475328e+18, 'train_loss': 0.5248636143846919, 'epoch': 5.0})

We can check with the evaluate method that our Trainer did reload the best model properly (if it was not the last one):

trainer.evaluate()
***** Running Evaluation *****
  Num examples = 6798
  Batch size = 32
{'epoch': 5.0, 'eval_accuracy': 0.9823477493380406, 'eval_loss': 0.09516120702028275, 'eval_runtime': 68.0133, 'eval_samples_per_second': 99.951, 'eval_steps_per_second': 3.132}

You can now upload the result of the training to the Hub, just execute this instruction:

trainer.push_to_hub()

You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier "your-username/the-name-you-picked" so for instance:

from transformers import AutoModelForAudioClassification, AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/my-awesome-model")
model = AutoModelForAudioClassification.from_pretrained("anton-l/my-awesome-model")
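
Once uploaded, you can also try the model out locally with the audio-classification pipeline (a quick sketch; the repository name and audio path below are placeholders):

from transformers import pipeline

# Placeholder names: replace with your own repo id and an audio file on disk.
classifier = pipeline("audio-classification", model="anton-l/my-awesome-model")
classifier("path/to/some_keyword.wav")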