
GitHub Repository: huggingface/notebooks
Path: blob/main/examples/speech_recognition.ipynb
Kernel: Python 3 (ipykernel)


Fine-tuning Speech Model with 🤗 Transformers

This notebook shows how to fine-tune multi-lingual pretrained speech models for Automatic Speech Recognition.

This notebook is built to run on the TIMIT dataset with any speech model checkpoint from the Model Hub as long as that model has a version with a Connectionist Temporal Classification (CTC) head. Depending on the model and the GPU you are using, you might need to adjust the batch size to avoid out-of-memory errors. Set those two parameters, then the rest of the notebook should run smoothly:

model_checkpoint = "facebook/wav2vec2-base" batch_size = 32

For a more detailed explanation of how multi-lingual pretrained speech models work, please take a look at the 🤗 Blog.

Before we start, let's install both datasets and transformers. We also need the librosa package to load audio files and jiwer to evaluate our fine-tuned model using the word error rate (WER) metric ${}^1$.

%%capture
!pip install datasets==1.14
!pip install transformers==4.11.3
!pip install librosa
!pip install jiwer

Next, we strongly suggest uploading your training checkpoints directly to the 🤗 Hub while training. The 🤗 Hub has integrated version control, so you can be sure that no model checkpoint gets lost during training.

To do so, you have to store your authentication token from the Hugging Face website (sign up here if you haven't already!):

from huggingface_hub import notebook_login

notebook_login()

Then you need to install Git-LFS to upload your model checkpoints:

%%capture
!apt install git-lfs

${}^1$ Timit is usually evaluated using the phoneme error rate (PER), but by far the most common metric in ASR is the word error rate (WER). To keep this notebook as general as possible we decided to evaluate the model using WER.

We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely.

from transformers.utils import send_example_telemetry

send_example_telemetry("speech_recognition_notebook", framework="pytorch")

Prepare Data, Tokenizer, Feature Extractor

ASR models transcribe speech to text, which means that we need both a feature extractor that converts the speech signal into the model's input format, e.g. a feature vector, and a tokenizer that converts the model's output format into text.

In 🤗 Transformers, speech recognition models are thus accompanied by both a tokenizer and a feature extractor.

Let's start by creating the tokenizer responsible for decoding the model's predictions.

Create Wav2Vec2CTCTokenizer

Let's start by loading the TIMIT dataset and taking a look at its structure.

If you wish to fine-tune the model on a different speech dataset feel free to adapt this part.

from datasets import load_dataset, load_metric

timit = load_dataset("timit_asr")
Downloading and preparing dataset timit_asr/clean (download: 828.75 MiB, generated: 7.90 MiB, post-processed: Unknown size, total: 836.65 MiB) to /root/.cache/huggingface/datasets/timit_asr/clean/2.0.1/5bebea6cd9df0fc2c8c871250de23293a94c1dc49324182b330b6759ae6718f8...
Dataset timit_asr downloaded and prepared to /root/.cache/huggingface/datasets/timit_asr/clean/2.0.1/5bebea6cd9df0fc2c8c871250de23293a94c1dc49324182b330b6759ae6718f8. Subsequent calls will reuse this data.
timit
DatasetDict({
    train: Dataset({
        features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
        num_rows: 4620
    })
    test: Dataset({
        features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
        num_rows: 1680
    })
})

Many ASR datasets only provide the target text ('text') for each audio file ('audio' and 'file'). Timit actually provides much more information about each audio file, such as 'phonetic_detail', which is why many researchers choose to evaluate their models on phoneme classification instead of speech recognition when working with Timit. However, we want to keep the notebook as general as possible, so we will only consider the transcribed text for fine-tuning.

timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"])

Let's write a short function to display some random samples of the dataset and run it a couple of times to get a feeling for the transcriptions.

from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML

def show_random_elements(dataset, num_examples=10):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset)-1)
        while pick in picks:
            pick = random.randint(0, len(dataset)-1)
        picks.append(pick)

    df = pd.DataFrame(dataset[picks])
    display(HTML(df.to_html()))
show_random_elements(timit["train"].remove_columns(["audio", "file"]), num_examples=10)

Alright! The transcriptions look very clean and the language seems to correspond more to written text than dialogue. This makes sense taking into account that Timit is a read speech corpus.

We can see that the transcriptions contain some special characters, such as ,.?!;:. Without a language model, it is much harder to classify speech chunks into such special characters because they don't really correspond to a characteristic sound unit. E.g., the letter "s" has a more or less clear sound, whereas the special character "." does not. Also, in order to understand the meaning of a speech signal, it is usually not necessary to include special characters in the transcription.

In addition, we normalize the text to only have lower case letters and append a word separator token at the end.

import re

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]'

def remove_special_characters(batch):
    batch["text"] = re.sub(chars_to_ignore_regex, '', batch["text"]).lower() + " "
    return batch
timit = timit.map(remove_special_characters)
show_random_elements(timit["train"].remove_columns(["audio", "file"]))

Good! This looks better. We have removed most special characters from transcriptions and normalized them to lower-case only.

In CTC, it is common to classify speech chunks into letters, so we will do the same here. Let's extract all distinct letters of the training and test data and build our vocabulary from this set of letters.

We write a mapping function that concatenates all transcriptions into one long transcription and then transforms the string into a set of chars. It is important to pass the argument batched=True to the map(...) function so that the mapping function has access to all transcriptions at once.

def extract_all_chars(batch):
    all_text = " ".join(batch["text"])
    vocab = list(set(all_text))
    return {"vocab": [vocab], "all_text": [all_text]}
vocabs = timit.map(
    extract_all_chars,
    batched=True,
    batch_size=-1,
    keep_in_memory=True,
    remove_columns=timit.column_names["train"]
)

Now, we create the union of all distinct letters in the training dataset and test dataset and convert the resulting list into an enumerated dictionary.

vocab_list = list(set(vocabs["train"]["vocab"][0]) | set(vocabs["test"]["vocab"][0]))
vocab_dict = {v: k for k, v in enumerate(vocab_list)}
vocab_dict
{' ': 19, "'": 21, 'a': 1, 'b': 0, 'c': 4, 'd': 13, 'e': 15, 'f': 3, 'g': 26, 'h': 24, 'i': 10, 'j': 11, 'k': 17, 'l': 6, 'm': 5, 'n': 14, 'o': 18, 'p': 7, 'q': 16, 'r': 20, 's': 2, 't': 8, 'u': 9, 'v': 27, 'w': 25, 'x': 22, 'y': 12, 'z': 23}

Cool, we see that all letters of the alphabet occur in the dataset (which is not really surprising) and we also extracted the special characters " " and '. Note that we did not exclude those special characters because:

  • The model has to learn to predict when a word is finished, or else the model prediction would always be a sequence of characters, which would make it impossible to separate words from each other.

  • In English, we need to keep the ' character to differentiate between words, e.g., "it's" and "its" which have very different meanings.

To make it clearer that " " has its own token class, we give it a more visible character |. In addition, we also add an "unknown" token so that the model can later deal with characters not encountered in Timit's training set.

Finally, we also add a padding token that corresponds to CTC's "blank token". The "blank token" is a core component of the CTC algorithm. For more information, please take a look at the "Alignment" section here.

vocab_dict["|"] = vocab_dict[" "] del vocab_dict[" "]
vocab_dict["[UNK]"] = len(vocab_dict) vocab_dict["[PAD]"] = len(vocab_dict) len(vocab_dict)
30

Cool, now our vocabulary is complete and consists of 30 tokens, which means that the linear layer that we will add on top of the pretrained speech checkpoint will have an output dimension of 30.

Let's now save the vocabulary as a json file.

import json

with open('vocab.json', 'w') as vocab_file:
    json.dump(vocab_dict, vocab_file)

In a final step, we use the json file to instantiate a tokenizer object with the vocabulary file we just created. The correct tokenizer_type can be retrieved from the model configuration. If a tokenizer_class is defined in the config, we can use it; else we assume the tokenizer_type corresponds to the model_type.

from transformers import AutoConfig

config = AutoConfig.from_pretrained(model_checkpoint)

tokenizer_type = config.model_type if config.tokenizer_class is None else None
config = config if config.tokenizer_class is not None else None
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py:337: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`. "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 "

Now we can instantiate a tokenizer using AutoTokenizer. Additionally, we set the tokenizer's special tokens.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "./",
    config=config,
    tokenizer_type=tokenizer_type,
    unk_token="[UNK]",
    pad_token="[PAD]",
    word_delimiter_token="|",
)
file ./config.json not found Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

If you want to reuse the tokenizer we just created with the fine-tuned model of this notebook, it is strongly advised to upload the tokenizer to the 🤗 Hub. Let's call the repo to which we will upload the files "wav2vec2-base-timit-demo-colab":

model_checkpoint_name = model_checkpoint.split("/")[-1]
repo_name = f"{model_checkpoint_name}-demo-colab"

and upload the tokenizer to the 🤗 Hub.

tokenizer.push_to_hub(repo_name)
Cloning https://huggingface.co/patrickvonplaten/wav2vec2-base-timit-demo-colab into local empty directory. tokenizer config file saved in wav2vec2-base-timit-demo-colab/tokenizer_config.json Special tokens file saved in wav2vec2-base-timit-demo-colab/special_tokens_map.json To https://huggingface.co/patrickvonplaten/wav2vec2-base-timit-demo-colab 6aaf3f9..870c48e main -> main
'https://huggingface.co/patrickvonplaten/wav2vec2-base-timit-demo-colab/commit/870c48e622b77a3b27d61c094165c51d4aef9283'

Great, you can see the just created repository under https://huggingface.co/<your-username>/wav2vec2-base-timit-demo-colab

Preprocess Data

So far, we have not looked at the actual values of the speech signal but just the transcription. In addition to 'text', our dataset includes two more columns, 'file' and 'audio'. 'file' states the absolute path of the audio file. Let's take a look.

timit["train"][0]["file"]
'/root/.cache/huggingface/datasets/downloads/extracted/404950a46da14eac65eb4e2a8317b1372fb3971d980d91d5d5b221275b1fd7e0/data/TRAIN/DR4/MMDM0/SI681.WAV'

Wav2Vec2 expects the input in the format of a 1-dimensional array sampled at 16 kHz. This means that the audio file has to be loaded and resampled.

Thankfully, datasets does this automatically when accessing the audio column. Let's try it out.

timit["train"][0]["audio"]
{'array': array([-2.1362305e-04, 6.1035156e-05, 3.0517578e-05, ..., -3.0517578e-05, -9.1552734e-05, -6.1035156e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/404950a46da14eac65eb4e2a8317b1372fb3971d980d91d5d5b221275b1fd7e0/data/TRAIN/DR4/MMDM0/SI681.WAV', 'sampling_rate': 16000}

We can see that the audio file has automatically been loaded. This is thanks to the new "Audio" feature introduced in datasets == 1.13.3, which loads and resamples audio files on-the-fly upon calling.

The sampling rate is set to 16 kHz, which is what Wav2Vec2 expects as an input.
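
If you adapt this notebook to a dataset whose audio is stored at a different sampling rate, you could ask datasets to resample on the fly by casting the column to an Audio feature with the target rate. This is only an illustrative sketch, assuming the cast_column/Audio API of this datasets version (TIMIT is already at 16 kHz, so it is a no-op here):

# Illustrative only: TIMIT is already stored at 16 kHz. For a dataset sampled at,
# e.g., 48 kHz, this would make `datasets` resample to 16 kHz on the fly.
from datasets import Audio

timit["train"] = timit["train"].cast_column("audio", Audio(sampling_rate=16_000))
timit["test"] = timit["test"].cast_column("audio", Audio(sampling_rate=16_000))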

Great, let's listen to a couple of audio files to better understand the dataset and verify that the audio was correctly loaded.

Note: You can click the following cell a couple of times to listen to different speech samples.

import IPython.display as ipd
import numpy as np
import random

rand_int = random.randint(0, len(timit["train"]) - 1)

print(timit["train"][rand_int]["text"])
ipd.Audio(data=np.asarray(timit["train"][rand_int]["audio"]["array"]), autoplay=True, rate=16000)
the triumphant warrior exhibited naive heroism

It can be heard that the speakers change, along with their speaking rate, accent, etc. Overall, the recordings sound relatively clear though, which is to be expected from a read speech corpus.

Let's do a final check that the data is correctly prepared, by printing the shape of the speech input, its transcription, and the corresponding sampling rate.

Note: You can click the following cell a couple of times to verify multiple samples.

rand_int = random.randint(0, len(timit["train"]) - 1)

print("Target text:", timit["train"][rand_int]["text"])
print("Input array shape:", np.asarray(timit["train"][rand_int]["audio"]["array"]).shape)
print("Sampling rate:", timit["train"][rand_int]["audio"]["sampling_rate"])
Target text: she had your dark suit in greasy wash water all year Input array shape: (57242,) Sampling rate: 16000

Good! Everything looks fine - the data is a 1-dimensional array, the sampling rate always corresponds to 16kHz, and the target text is normalized.

Next, we should process the data with the model's feature extractor. Let's load the feature extractor

from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained(model_checkpoint)

and wrap it into a Wav2Vec2Processor together with the tokenizer.

from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

Finally, we can leverage Wav2Vec2Processor to process the data to the format expected by the model for training. To do so let's make use of Dataset's map(...) function.

First, we load and resample the audio data, simply by calling batch["audio"]. Second, we extract the input_values from the loaded audio file. In our case, the Wav2Vec2Processor only normalizes the data. For other speech models, however, this step can include more complex feature extraction, such as Log-Mel feature extraction. Third, we encode the transcriptions to label ids.

def prepare_dataset(batch):
    audio = batch["audio"]

    # batched output is "un-batched" to ensure mapping is correct
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    batch["input_length"] = len(batch["input_values"])

    with processor.as_target_processor():
        batch["labels"] = processor(batch["text"]).input_ids
    return batch

Let's apply the data preparation function to all examples.

timit = timit.map(prepare_dataset, remove_columns=timit.column_names["train"], num_proc=4)

Note: Currently datasets makes use of torchaudio and librosa for audio loading and resampling. If you wish to implement your own customized data loading/sampling, feel free to just make use of the "path" column instead and disregard the "audio" column, as sketched below.
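
As a hedged sketch of what such customized loading could look like, the cell below loads and resamples the audio with librosa from the 'file' column (which is only available before the map(...) call above); the function name is illustrative and not used elsewhere in this notebook.

# Sketch only: an alternative to prepare_dataset that loads audio manually with
# librosa instead of relying on the "audio" column. It assumes the original
# "file" and "text" columns are still present (i.e., run it before the map above).
import librosa

def prepare_dataset_manual(batch):
    # load the file and resample it to the 16 kHz expected by Wav2Vec2
    speech_array, sampling_rate = librosa.load(batch["file"], sr=16_000)
    batch["input_values"] = processor(speech_array, sampling_rate=sampling_rate).input_values[0]
    batch["input_length"] = len(batch["input_values"])
    with processor.as_target_processor():
        batch["labels"] = processor(batch["text"]).input_ids
    return batch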

Long input sequences require a lot of memory. Since Wav2Vec2 is based on self-attention the memory requirement scales quadratically with the input length for long input sequences (cf. with this reddit post). For this demo, let's filter all sequences that are longer than 4 seconds out of the training dataset.

max_input_length_in_sec = 4.0
timit["train"] = timit["train"].filter(
    lambda x: x < max_input_length_in_sec * processor.feature_extractor.sampling_rate,
    input_columns=["input_length"],
)

Awesome, now we are ready to start training!

Training

The data is processed so that we are ready to start setting up the training pipeline. We will make use of 🤗's Trainer for which we essentially need to do the following:

  • Define a data collator. In contrast to most NLP models, speech models usually have a much larger input length than output length. E.g., a sample of input length 50000 for Wav2Vec2 has an output length of no more than 100. Given the large input sizes, it is much more efficient to pad the training batches dynamically, meaning that all training samples should only be padded to the longest sample in their batch and not the overall longest sample. Therefore, fine-tuning speech models requires a special padding data collator, which we will define below.

  • Evaluation metric. During training, the model should be evaluated on the word error rate. We should define a compute_metrics function accordingly.

  • Load a pretrained checkpoint. We need to load a pretrained checkpoint and configure it correctly for training.

  • Define the training configuration.

After having fine-tuned the model, we will evaluate it on the test data and verify that it has indeed learned to correctly transcribe speech.

Set-up Trainer

Let's start by defining the data collator. The code for the data collator was copied from this example.

Without going into too many details, in contrast to the common data collators, this data collator treats the input_values and labels differently and thus applies two separate padding functions to them. This is necessary because, in speech, input and output are of different modalities, meaning that they should not be treated by the same padding function. Analogous to the common data collators, the padding tokens in the labels are replaced with -100 so that those tokens are not taken into account when computing the loss.

import torch

from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Union

@dataclass
class DataCollatorCTCWithPadding:
    """
    Data collator that will dynamically pad the inputs received.
    Args:
        processor (:class:`~transformers.Wav2Vec2Processor`)
            The processor used for processing the data.
        padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
            Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
            among:
            * :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
              sequence is provided).
            * :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
              maximum acceptable input length for the model if that argument is not provided.
            * :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
              different lengths).
        max_length (:obj:`int`, `optional`):
            Maximum length of the ``input_values`` of the returned list and optionally padding length (see above).
        max_length_labels (:obj:`int`, `optional`):
            Maximum length of the ``labels`` returned list and optionally padding length (see above).
        pad_to_multiple_of (:obj:`int`, `optional`):
            If set will pad the sequence to a multiple of the provided value.
            This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
    """

    processor: Wav2Vec2Processor
    padding: Union[bool, str] = True
    max_length: Optional[int] = None
    max_length_labels: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None
    pad_to_multiple_of_labels: Optional[int] = None

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # split inputs and labels since they have to be of different lengths and need
        # different padding methods
        input_features = [{"input_values": feature["input_values"]} for feature in features]
        label_features = [{"input_ids": feature["labels"]} for feature in features]

        batch = self.processor.pad(
            input_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors="pt",
        )
        with self.processor.as_target_processor():
            labels_batch = self.processor.pad(
                label_features,
                padding=self.padding,
                max_length=self.max_length_labels,
                pad_to_multiple_of=self.pad_to_multiple_of_labels,
                return_tensors="pt",
            )

        # replace padding with -100 to ignore loss correctly
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)

        batch["labels"] = labels

        return batch
data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)

Next, the evaluation metric is defined. As mentioned earlier, the predominant metric in ASR is the word error rate (WER), hence we will use it in this notebook as well.

wer_metric = load_metric("wer")
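
As a quick, hedged sanity check of what the metric measures, one substituted word out of four reference words should give a WER of 0.25:

# One substitution ("the" instead of "a") over a four-word reference -> WER = 0.25
wer_metric.compute(predictions=["this is the test"], references=["this is a test"])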

The model will return a sequence of logit vectors $\mathbf{y}_1, \ldots, \mathbf{y}_m$ with $\mathbf{y}_1 = f_{\theta}(x_1, \ldots, x_n)[0]$ and $n >> m$.

A logit vector $\mathbf{y}_1$ contains the log-odds for each word in the vocabulary we defined earlier, thus $\text{len}(\mathbf{y}_i) =$ config.vocab_size. We are interested in the most likely prediction of the model and thus take the argmax(...) of the logits. Also, we transform the encoded labels back to the original string by replacing -100 with the pad_token_id and decoding the ids while making sure that consecutive tokens are not grouped to the same token in CTC style ${}^1$.

def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)

    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id

    pred_str = processor.batch_decode(pred_ids)
    # we do not want to group tokens when computing the metrics
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)

    wer = wer_metric.compute(predictions=pred_str, references=label_str)

    return {"wer": wer}

Now, we can load the pretrained Wav2Vec2 checkpoint. The tokenizer's pad_token_id must be used to define the model's pad_token_id, or, in the case of a CTC speech model, also CTC's blank token ${}^2$.

from transformers import AutoModelForCTC

model = AutoModelForCTC.from_pretrained(
    model_checkpoint,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
)
loading configuration file https://huggingface.co/facebook/wav2vec2-base/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/c7746642f045322fd01afa31271dd490e677ea11999e68660a92619ec7c892b4.02212753c42f07ecd65bbe35175ac4866badb735f9dae5bf2ae455c57db4dbb7 /usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py:337: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`. "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 " Model config Wav2Vec2Config { "activation_dropout": 0.0, "apply_spec_augment": true, "architectures": [ "Wav2Vec2ForPreTraining" ], "attention_dropout": 0.1, "bos_token_id": 1, "classifier_proj_size": 256, "codevector_dim": 256, "contrastive_logits_temperature": 0.1, "conv_bias": false, "conv_dim": [ 512, 512, 512, 512, 512, 512, 512 ], "conv_kernel": [ 10, 3, 3, 3, 3, 2, 2 ], "conv_stride": [ 5, 2, 2, 2, 2, 2, 2 ], "ctc_loss_reduction": "mean", "ctc_zero_infinity": false, "diversity_loss_weight": 0.1, "do_stable_layer_norm": false, "eos_token_id": 2, "feat_extract_activation": "gelu", "feat_extract_norm": "group", "feat_proj_dropout": 0.1, "feat_quantizer_dropout": 0.0, "final_dropout": 0.0, "freeze_feat_extract_train": true, "gradient_checkpointing": true, "hidden_act": "gelu", "hidden_dropout": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "layerdrop": 0.05, "mask_channel_length": 10, "mask_channel_min_space": 1, "mask_channel_other": 0.0, "mask_channel_prob": 0.0, "mask_channel_selection": "static", "mask_feature_length": 10, "mask_feature_prob": 0.0, "mask_time_length": 10, "mask_time_min_space": 1, "mask_time_other": 0.0, "mask_time_prob": 0.05, "mask_time_selection": "static", "model_type": "wav2vec2", "no_mask_channel_overlap": false, "no_mask_time_overlap": false, "num_attention_heads": 12, "num_codevector_groups": 2, "num_codevectors_per_group": 320, "num_conv_pos_embedding_groups": 16, "num_conv_pos_embeddings": 128, "num_feat_extract_layers": 7, "num_hidden_layers": 12, "num_negatives": 100, "pad_token_id": 29, "proj_codevector_dim": 256, "transformers_version": "4.11.3", "use_weighted_layer_sum": false, "vocab_size": 32 } loading weights file https://huggingface.co/facebook/wav2vec2-base/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/ef45231897ce572a660ebc5a63d3702f1a6041c4c5fb78cbec330708531939b3.fcae05302a685f7904c551c8ea571e8bc2a2c4a1777ea81ad66e47f7883a650a Some weights of the model checkpoint at facebook/wav2vec2-base were not used when initializing Wav2Vec2ForCTC: ['quantizer.codevectors', 'project_hid.weight', 'quantizer.weight_proj.bias', 'project_q.weight', 'project_q.bias', 'project_hid.bias', 'quantizer.weight_proj.weight'] - This IS expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base and are newly initialized: ['lm_head.bias', 'lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

The first component of most transformer-based speech models consists of a stack of CNN layers that are used to extract acoustically meaningful - but contextually independent - features from the raw speech signal. This part of the model has already been sufficiently trained during pretraining and, as stated in the paper, does not need to be fine-tuned anymore. Thus, we can set requires_grad to False for all parameters of the feature extraction part, as shown below.
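
A minimal sketch of this freezing step, assuming the freeze_feature_extractor() helper that Wav2Vec2 models expose in this version of transformers:

# sets requires_grad=False for all parameters of the CNN feature extraction layers
model.freeze_feature_extractor()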

In a final step, we define all parameters related to training. To give more explanation on some of the parameters:

  • group_by_length makes training more efficient by grouping training samples of similar input length into one batch. This can significantly speed up training time by heavily reducing the overall number of useless padding tokens that are passed through the model.

  • learning_rate and weight_decay were heuristically tuned until fine-tuning became stable. Note that those parameters strongly depend on the Timit dataset and might be suboptimal for other speech datasets.

For more explanations on other parameters, one can take a look at the docs.

During training, a checkpoint will be uploaded asynchronously to the Hub every save_steps training steps (500 below). This allows you to play around with the demo widget even while your model is still training.

Note: If one does not want to upload the model checkpoints to the hub, simply set push_to_hub=False.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir=repo_name,
    group_by_length=True,
    per_device_train_batch_size=batch_size,
    evaluation_strategy="steps",
    num_train_epochs=30,
    fp16=True,
    gradient_checkpointing=True,
    save_steps=500,
    eval_steps=500,
    logging_steps=500,
    learning_rate=1e-4,
    weight_decay=0.005,
    warmup_steps=1000,
    save_total_limit=2,
    push_to_hub=True,
)
PyTorch: setting up devices The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).

Now, all instances can be passed to Trainer and we are ready to start training!

from transformers import Trainer

trainer = Trainer(
    model=model,
    data_collator=data_collator,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=timit["train"],
    eval_dataset=timit["test"],
    tokenizer=processor.feature_extractor,
)
/content/wav2vec2-base-timit-demo-colab is already a clone of https://huggingface.co/patrickvonplaten/wav2vec2-base-timit-demo-colab. Make sure you pull the latest changes with `repo.git_pull()`. Using amp fp16 backend

${}^1$ To make the model independent of the speaking rate, in CTC, consecutive tokens that are identical are simply grouped into a single token. However, the encoded labels should not be grouped when decoding since they don't correspond to the predicted tokens of the model, which is why the group_tokens=False parameter has to be passed. If we didn't pass this parameter, a word like "hello" would incorrectly be encoded and decoded as "helo" (see the small example after footnote 2).

${}^2$ The blank token allows the model to predict a word such as "hello" by forcing it to insert the blank token between the two l's. A CTC-conform prediction of "hello" of our model would be [PAD] [PAD] "h" "e" "e" "l" "l" [PAD] "l" "o" "o" [PAD].
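
To see the effect described in footnote 1, here is a small sketch using the tokenizer created above: decoding the character ids of "hello" with the default CTC grouping collapses the repeated "l", while group_tokens=False keeps it.

# Illustration of footnote 1 (assumes the tokenizer created earlier in this notebook)
ids = tokenizer("hello").input_ids
print(tokenizer.decode(ids))                      # CTC-style grouping -> "helo"
print(tokenizer.decode(ids, group_tokens=False))  # no grouping -> "hello"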

Training

Training will take a couple of hours depending on the GPU allocated to this notebook.

trainer.train()
The following columns in the training set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length. ***** Running training ***** Num examples = 3978 Num Epochs = 30 Instantaneous batch size per device = 32 Total train batch size (w. parallel, distributed & accumulation) = 32 Gradient Accumulation steps = 1 Total optimization steps = 3750
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py:1357: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior. args.max_grad_norm, The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length. ***** Running Evaluation ***** Num examples = 1680 Batch size = 8 Saving model checkpoint to wav2vec2-base-timit-demo-colab/checkpoint-500 Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-500/config.json Model weights saved in wav2vec2-base-timit-demo-colab/checkpoint-500/pytorch_model.bin Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-500/preprocessor_config.json Configuration saved in wav2vec2-base-timit-demo-colab/preprocessor_config.json /usr/local/lib/python3.7/dist-packages/transformers/trainer.py:1357: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior. args.max_grad_norm, The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length. ***** Running Evaluation ***** Num examples = 1680 Batch size = 8 Saving model checkpoint to wav2vec2-base-timit-demo-colab/checkpoint-1000 Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-1000/config.json Model weights saved in wav2vec2-base-timit-demo-colab/checkpoint-1000/pytorch_model.bin Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-1000/preprocessor_config.json The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length. ***** Running Evaluation ***** Num examples = 1680 Batch size = 8 Saving model checkpoint to wav2vec2-base-timit-demo-colab/checkpoint-1500 Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-1500/config.json Model weights saved in wav2vec2-base-timit-demo-colab/checkpoint-1500/pytorch_model.bin Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-1500/preprocessor_config.json Deleting older checkpoint [wav2vec2-base-timit-demo-colab/checkpoint-500] due to args.save_total_limit The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length. ***** Running Evaluation ***** Num examples = 1680 Batch size = 8 Saving model checkpoint to wav2vec2-base-timit-demo-colab/checkpoint-2000 Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-2000/config.json Model weights saved in wav2vec2-base-timit-demo-colab/checkpoint-2000/pytorch_model.bin Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-2000/preprocessor_config.json Deleting older checkpoint [wav2vec2-base-timit-demo-colab/checkpoint-1000] due to args.save_total_limit The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length. 
***** Running Evaluation ***** Num examples = 1680 Batch size = 8 Saving model checkpoint to wav2vec2-base-timit-demo-colab/checkpoint-2500 Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-2500/config.json Model weights saved in wav2vec2-base-timit-demo-colab/checkpoint-2500/pytorch_model.bin Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-2500/preprocessor_config.json Deleting older checkpoint [wav2vec2-base-timit-demo-colab/checkpoint-1500] due to args.save_total_limit The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length. ***** Running Evaluation ***** Num examples = 1680 Batch size = 8 Saving model checkpoint to wav2vec2-base-timit-demo-colab/checkpoint-3000 Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-3000/config.json Model weights saved in wav2vec2-base-timit-demo-colab/checkpoint-3000/pytorch_model.bin Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-3000/preprocessor_config.json Deleting older checkpoint [wav2vec2-base-timit-demo-colab/checkpoint-2000] due to args.save_total_limit The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length. ***** Running Evaluation ***** Num examples = 1680 Batch size = 8 Saving model checkpoint to wav2vec2-base-timit-demo-colab/checkpoint-3500 Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-3500/config.json Model weights saved in wav2vec2-base-timit-demo-colab/checkpoint-3500/pytorch_model.bin Configuration saved in wav2vec2-base-timit-demo-colab/checkpoint-3500/preprocessor_config.json Deleting older checkpoint [wav2vec2-base-timit-demo-colab/checkpoint-2500] due to args.save_total_limit Training completed. Do not forget to share your model on huggingface.co/models =)
TrainOutput(global_step=3750, training_loss=0.0719044921875, metrics={'train_runtime': 6384.5833, 'train_samples_per_second': 18.692, 'train_steps_per_second': 0.587, 'total_flos': 3.098741829539622e+18, 'train_loss': 0.0719044921875, 'epoch': 30.0})

The final WER should be around 0.3, which is reasonable given that state-of-the-art phoneme error rates (PER) are just below 0.1 (see leaderboard) and that WER is usually worse than PER.
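
To check the final WER yourself, a simple option is to run evaluation on the test split; the returned dictionary should contain an eval_wer entry computed by our compute_metrics function (a sketch, not part of the original run):

# Evaluate the fine-tuned model on timit["test"] and report the WER
trainer.evaluate()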

You can now upload the result of the training to the Hub, just execute this instruction:

trainer.push_to_hub()
Saving model checkpoint to wav2vec2-base-timit-demo-colab Configuration saved in wav2vec2-base-timit-demo-colab/config.json Model weights saved in wav2vec2-base-timit-demo-colab/pytorch_model.bin Configuration saved in wav2vec2-base-timit-demo-colab/preprocessor_config.json Several commits (2) will be pushed upstream. The progress bars may be unreliable.
To https://huggingface.co/patrickvonplaten/wav2vec2-base-timit-demo-colab 870c48e..2998ea6 main -> main Dropping the following result as it does not have all the necessary field: {} To https://huggingface.co/patrickvonplaten/wav2vec2-base-timit-demo-colab 2998ea6..431ef82 main -> main
'https://huggingface.co/patrickvonplaten/wav2vec2-base-timit-demo-colab/commit/2998ea690ec1ba32370f856fb558cf22dcb0e119'

You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier "your-username/the-name-you-picked" so for instance:

from transformers import AutoModelForCTC, Wav2Vec2Processor

model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-timit-demo-colab")
processor = Wav2Vec2Processor.from_pretrained("patrickvonplaten/wav2vec2-base-timit-demo-colab")

To fine-tune larger models on larger datasets using CTC loss, one should take a look at the official speech-recognition examples here 🤗.