Byte Pair Encoding (BPE) - Handling Rare Words with Subword Tokenization
NLP techniques, be it word embeddings or tf-idf, often work with a fixed vocabulary. Due to this, rare words in the corpus are all considered out of vocabulary and are typically replaced with a default unknown token, <unk>. When it comes to feature representation, these unknown tokens then receive some global default value, e.g. the mean of all the word embeddings.
Character-level and n-gram representations aside, one seminal way of dealing with this rare word issue is Byte Pair Encoding. At a high level, it works by encoding rare or unknown words as sequences of subword units. e.g. Imagine the model sees an out-of-vocabulary word talking. Instead of directly falling back to the default unknown token, <unk>, we may notice that words such as talked and talks appear in the corpus. If these words get segmented as talk ed, talk s, etc., they all have talk in common, and the model can pick up more information from that than from an unknown token.
The gist is that once we break our corpus down into subword units, any word that doesn't appear in the training corpus can still be represented as a sequence of those units. How to derive the subword units is the next question we'll tackle.
Byte Pair Encoding is originally a data compression algorithm that iteratively replaces the most frequent pair of bytes in a sequence with a single, unused byte. e.g.
aaabdaaabac. aa is the most frequent pair of bytes and we replace it with an unused byte Z.
ZabdZabac. ab is now the most frequent pair of bytes, so we replace it with Y, yielding ZYdZYac.
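As a rough sketch of this compression step (the helper names here are my own), one round of counting and replacing the most frequent pair on the toy string could look like:

```python
from collections import Counter

def most_frequent_pair(sequence):
    """Count every adjacent pair of symbols and return the most frequent one."""
    pairs = Counter(zip(sequence, sequence[1:]))
    return max(pairs, key=pairs.get)

def replace_pair(sequence, pair, new_symbol):
    """Replace every occurrence of `pair` in the sequence with `new_symbol`."""
    merged, i = [], 0
    while i < len(sequence):
        if i < len(sequence) - 1 and (sequence[i], sequence[i + 1]) == pair:
            merged.append(new_symbol)
            i += 2
        else:
            merged.append(sequence[i])
            i += 1
    return merged

sequence = list("aaabdaaabac")
pair = most_frequent_pair(sequence)           # ('a', 'a')
sequence = replace_pair(sequence, pair, "Z")  # Z a b d Z a b a c
print("".join(sequence))                      # ZabdZabac
```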
To adapt this idea for word segmentation, instead of replacing frequent pairs of bytes, we merge subword pairs that frequently occur. To elaborate:
We initialize the vocabulary with the character vocabulary, and represent each word as a sequence of characters, plus a special end-of-word symbol '/w', which allows us to restore the original tokenization after translation. More on this below.
Next, we iteratively count all symbol pairs and merge each occurrence of the most frequent pair (a, b) into ab. Each merge operation produces a new symbol which represents a character n-gram.
We'll keep repeating the previous process until the target vocabulary size is reached.
The special end-of-word symbol is important. Without it, a token est could belong to the word est ern or to the word small est, and the meanings of the two are quite different. With the end-of-word symbol, a token est/w immediately tells the model that this is the token ending small est rather than the one starting est ern.
An example use case is neural machine translation (NMT), where subword tokenization was shown to have superior performance compared to using a larger vocabulary or a back-off dictionary.
Implementation From Scratch
Start by initializing the vocabulary with the character vocabulary plus a special end-of-word symbol.
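As a sketch, assuming a toy word-frequency dictionary along the lines of the example in the original subword paper (the counts here are made up), the initial vocabulary might look like this, with each word stored as a space-separated sequence of characters plus the end-of-word symbol '/w':

```python
# hypothetical toy corpus counts
vocab = {
    'l o w /w': 5,
    'l o w e r /w': 2,
    'n e w e s t /w': 6,
    'w i d e s t /w': 3,
}
```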
We then count the frequency of each consecutive character pair.
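A minimal sketch of that counting step, assuming the `vocab` dictionary above:

```python
from collections import Counter

def get_stats(vocab):
    """Count how often each consecutive symbol pair occurs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs

pairs = get_stats(vocab)
print(pairs.most_common(3))  # ('e', 's') comes out on top for this toy vocab
```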
We find the most frequent pair and merge it into a new token wherever it occurs in the vocabulary.
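A sketch of that merge step, again assuming the toy `vocab` and the `pairs` counter from above; the regular expression ensures we only merge the pair when its two symbols stand on their own:

```python
import re

def merge_vocab(pair, vocab):
    """Merge every occurrence of `pair`, e.g. ('e', 's'), into a single symbol 'es'."""
    bigram = re.escape(' '.join(pair))
    # only merge when the two symbols are bounded by whitespace or string edges,
    # so 'e s' never matches the tail of a longer symbol such as 'ne s'
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

best = max(pairs, key=pairs.get)        # ('e', 's') for the toy vocab above
merged_vocab = merge_vocab(best, vocab)
print(merged_vocab)                     # 'n e w es t /w', 'w i d es t /w', ...
```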
In this particular round, the consecutive pair e,s was identified as the most frequent, so in the new vocabulary every occurrence of e,s was merged into a single token.
This was only one iteration of the merging process; we keep performing this merge step until we reach the number of merges, or the number of tokens, we would like to have.
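Putting the two steps into a loop, here is a sketch that records the merge rules in the order they were learned, starting again from the character-level `vocab` defined earlier:

```python
def learn_bpe(vocab, num_merges):
    """Repeat the count-and-merge step `num_merges` times, returning the merges in order."""
    merges = []
    for _ in range(num_merges):
        pairs = get_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_vocab(best, vocab)
        merges.append(best)
    return merges, vocab

bpe_codes, final_vocab = learn_bpe(vocab, num_merges=10)
print(bpe_codes)  # [('e', 's'), ('es', 't'), ('est', '/w'), ('l', 'o'), ('lo', 'w'), ...]
```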
Applying Encodings
Now that we've seen the process of "learning" the byte pair encodings, we turn our attention to the process of "applying" them to new words.
While possible:
Get all the bigram symbols for the word that we wish to encode.
Among these bigrams, find the pair that was learned earliest in our byte pair codes, i.e. the one with the lowest merge rank.
Apply the merge on the word.
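A sketch of one iteration of this process, assuming the merge rules learned above are stored in order in `bpe_codes`, so "learned earliest" can be read as "has the lowest merge rank":

```python
def get_pairs(word):
    """Return the set of adjacent symbol pairs in a word, given as a tuple of symbols."""
    return set(zip(word, word[1:]))

# rank each learned merge by the order it was learned: earlier merges win
bpe_ranks = {pair: rank for rank, pair in enumerate(bpe_codes)}

word = ('l', 'o', 'w', 'e', 's', 't', '/w')
pairs = get_pairs(word)
# among the bigrams present in the word, pick the one that was learned earliest
best = min(pairs, key=lambda pair: bpe_ranks.get(pair, float('inf')))
print(best)  # ('e', 's'), the very first merge that was learned
```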
The previous couple of code chunks show one iteration of applying the byte pair codes to encode a new word. The next one puts everything together into a single function.
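A sketch of what that single function might look like, reusing `get_pairs` and `bpe_ranks` from above; `merge_word` is a small helper of my own for applying one merge inside a word:

```python
def merge_word(symbols, pair):
    """Apply a single merge: fuse every occurrence of `pair` inside the symbol tuple."""
    merged, i = [], 0
    while i < len(symbols):
        if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
            merged.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            merged.append(symbols[i])
            i += 1
    return tuple(merged)

def encode(word, bpe_ranks):
    """Encode a raw string into subword units using the learned merge rules."""
    symbols = tuple(word) + ('/w',)   # characters plus the end-of-word symbol
    while len(symbols) > 1:
        pairs = get_pairs(symbols)
        # the bigram that was learned earliest among those present in the word
        best = min(pairs, key=lambda pair: bpe_ranks.get(pair, float('inf')))
        if best not in bpe_ranks:
            break                     # none of the remaining bigrams were ever learned
        symbols = merge_word(symbols, best)
    return symbols

print(encode('lowest', bpe_ranks))    # ('low', 'est/w'), even though 'lowest' was never seen
```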
Judging from the result, we can see that even though the word lowest did not appear in our "training" data, because "low" and the ending "est" are both byte pair codes that were learned, the word got encoded into low est.
Sentencepiece
Having seen an educational implementation, we'll now give a more efficient one, sentencepiece, a spin. Consider reading the overview section of its documentation for a high-level introduction to the package.
To train the subword units, we call the SentencePieceTrainer.train method, passing in our parameters. I've specified some of the commonly used ones in the example below. As for the full set of available options, I find it helpful to search the source code of the parameter parser.
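A sketch of what such a call might look like; the file name, model prefix, and parameter values below are placeholders rather than recommendations:

```python
import sentencepiece as spm

# train a BPE model on a plain-text file with one sentence per line
spm.SentencePieceTrainer.train(
    input='corpus.txt',          # raw .txt file on disk (placeholder name)
    model_prefix='bpe_example',  # produces bpe_example.model and bpe_example.vocab
    vocab_size=8000,             # target subword vocabulary size
    model_type='bpe',            # 'bpe', 'unigram', 'char' or 'word'
    pad_id=0, unk_id=1, bos_id=2, eos_id=3,  # make the special token ids explicit
)
```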
Some comments regarding the parameters:
input expects a .txt file on disk. Hence, if we have a python variable, e.g. a list of sentences, that we wish to learn the subword units from with sentencepiece, we would have to take the additional step of writing it to a file on disk.
model_type can take bpe, unigram, char, or word, allowing us to experiment with different tokenization schemes. The unigram method is not introduced in this notebook.
pad_id, and the other special token ids, can be important to specify when we're using sentencepiece in conjunction with other libraries, e.g. one library may by default reserve token id 0 for the padding character; in that case, we can explicitly tell sentencepiece to use that padding id to stay consistent.
After training, we can load the model; it resides in model_prefix.model, where model_prefix is a parameter that we also get to specify.
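For instance, continuing the placeholder model_prefix='bpe_example' from above:

```python
import sentencepiece as spm

# with model_prefix='bpe_example', the trained model file is bpe_example.model
sp = spm.SentencePieceProcessor()
sp.load('bpe_example.model')
```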
Showcasing some common operations.
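A few of the operations the processor exposes (the example sentence is made up):

```python
text = 'talking about byte pair encoding'  # made-up example sentence

pieces = sp.encode_as_pieces(text)   # list of subword strings
ids = sp.encode_as_ids(text)         # list of integer token ids
print(pieces)
print(ids)

# round-trip back to the original string
print(sp.decode_pieces(pieces))
print(sp.decode_ids(ids))

# vocabulary-level lookups
print(sp.get_piece_size())                                # size of the learned vocabulary
print(sp.id_to_piece(ids[0]), sp.piece_to_id(pieces[0]))  # id <-> piece conversions
```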
This concludes our introduction to one of the subword tokenization methods, Byte Pair Encoding, and to one of the open-source packages that implements it, sentencepiece.
Some other options out there at the time of writing include YouTokenToMe and fastBPE. YouTokenToMe claims to be faster than both sentencepiece and fastBPE, while sentencepiece supports additional subword tokenization methods.
Subword tokenization is a commonly used technique in modern NLP pipelines, and it's definitely worth understanding and adding to our toolkit.