Word Embeddings First Steps: Data Preparation
In this series of ungraded notebooks, you'll try out all the individual techniques that you learned about in the lectures. Practicing on small examples will prepare you for the graded assignment, where you will combine the techniques in more advanced ways to create word embeddings from a real-life corpus.
This notebook focuses on data preparation, the first step in any machine learning pipeline. It is a very important step: models are only as good as the data they are trained on, and the models used here require that data to have a particular structure before they can process it properly.
To get started, import and initialize all the libraries you will need.
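A minimal sketch of the imports this notebook likely relies on (the exact set in the original may differ):

```python
import re           # regular expressions, used to replace punctuation
import nltk         # tokenization engine
import emoji        # detecting emoji tokens
import numpy as np  # vector math

nltk.download('punkt')  # NLTK's tokenizer models, only needed once
```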
Data preparation
In the data preparation phase, starting with a corpus of text, you will:
Clean and tokenize the corpus.
Extract the pairs of context words and center word that will make up the training data set for the CBOW model. The context words are the features that will be fed into the model, and the center words are the target values that the model will learn to predict.
Create simple vector representations of the context words (features) and center words (targets) that can be used by the neural network of the CBOW model.
Cleaning and tokenization
To demonstrate the cleaning and tokenization process, consider a corpus that contains emojis and various punctuation marks.
First, replace all interrupting punctuation marks, such as commas and exclamation marks, with periods.
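One way to do this with a regular expression (the sample corpus below is illustrative):

```python
corpus = 'Who ❤️ "word embeddings" in 2020? I do!!!'
data = re.sub(r'[,!?;-]+', '.', corpus)  # interrupting punctuation -> periods
print(data)  # Who ❤️ "word embeddings" in 2020. I do.
```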
Next, use NLTK's tokenization engine to split the corpus into individual tokens.
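For example:

```python
data = nltk.word_tokenize(data)  # split the string into individual tokens
print(data)
# something like: ['Who', '❤️', '``', 'word', 'embeddings', "''", 'in', '2020', '.', 'I', 'do', '.']
```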
Finally, as you saw in the lecture, get rid of numbers and punctuation other than periods, and convert all the remaining tokens to lowercase.
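One way to write this filter, assuming a recent version of the emoji package (which provides emoji.is_emoji; older versions exposed a get_emoji_regexp helper instead):

```python
data = [ch.lower() for ch in data
        if ch.isalpha()          # keep alphabetic tokens, lowercased
        or ch == '.'             # keep periods
        or emoji.is_emoji(ch)]   # keep emoji tokens
print(data)  # ['who', '❤️', 'word', 'embeddings', 'in', '.', 'i', 'do', '.']
```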
Note that the heart emoji is treated as a token just like any ordinary word.
Now let's streamline the cleaning and tokenization process by wrapping the previous steps in a function.
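A sketch that simply chains the three steps above:

```python
def tokenize(corpus):
    data = re.sub(r'[,!?;-]+', '.', corpus)   # punctuation -> periods
    data = nltk.word_tokenize(data)           # string -> list of tokens
    data = [ch.lower() for ch in data         # lowercase; drop numbers and
            if ch.isalpha()                   # punctuation other than periods
            or ch == '.'
            or emoji.is_emoji(ch)]
    return data
```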
Apply this function to the corpus that you'll be working on in the rest of this notebook: "I am happy because I am learning"
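For example:

```python
corpus = 'I am happy because I am learning'
words = tokenize(corpus)
print(words)  # ['i', 'am', 'happy', 'because', 'i', 'am', 'learning']
```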
Now try it out yourself with your own sentence.
Sliding window of words
Now that you have transformed the corpus into a list of clean tokens, you can slide a window of words across this list. For each window you can extract a center word and the context words.
The get_windows function in the next cell was introduced in the lecture.
The first argument of this function is a list of words (or tokens). The second argument, C, is the context half-size. Recall that for a given center word, the context words are made of C words to the left and C words to the right of the center word.
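A sketch consistent with that description (the lecture's exact implementation may differ in detail):

```python
def get_windows(words, C):
    i = C                            # index of the first possible center word
    while i < len(words) - C:
        center_word = words[i]
        context_words = words[(i - C):i] + words[(i + 1):(i + C + 1)]
        yield context_words, center_word
        i += 1
```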
Here is how you can use this function to extract context words and center words from a list of tokens. These context and center words will make up the training set that you will use to train the CBOW model.
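For example, with the tokenized corpus from above and a context half-size of 2:

```python
for x, y in get_windows(['i', 'am', 'happy', 'because', 'i', 'am', 'learning'], 2):
    print(f'{x}\t{y}')
```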
The first example of the training set is made of:
the context words "i", "am", "because", "i",
and the center word to be predicted: "happy".
Now try it out yourself. In the next cell, you can change both the sentence and the context half-size.
Transforming words into vectors for the training set
To finish preparing the training set, you need to transform the context words and center words into vectors.
Mapping words to indices and indices to words
The center words will be represented as one-hot vectors, and the vectors that represent the context words will also be built from one-hot vectors.
To create one-hot word vectors, you can start by mapping each unique word to a unique integer (or index). We have provided a helper function, get_dict, that creates a Python dictionary that maps words to integers and back.
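Here's one plausible implementation of such a helper, assuming words are indexed in sorted (alphabetical) order; the course's provided get_dict may differ in detail:

```python
def get_dict(data):
    words = sorted(set(data))                             # unique words, alphabetical
    word2Ind = {word: i for i, word in enumerate(words)}  # word -> index
    Ind2word = {i: word for i, word in enumerate(words)}  # index -> word
    return word2Ind, Ind2word

word2Ind, Ind2word = get_dict(words)  # 'words' is the tokenized corpus from above
```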
Here's the dictionary that maps words to numeric indices.
You can use this dictionary to get the index of a word.
And conversely, here's the dictionary that maps indices to words.
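With the alphabetical indexing assumed above, the toy corpus yields:

```python
print(word2Ind)           # {'am': 0, 'because': 1, 'happy': 2, 'i': 3, 'learning': 4}
print(word2Ind['happy'])  # 2
print(Ind2word)           # {0: 'am', 1: 'because', 2: 'happy', 3: 'i', 4: 'learning'}
```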
Finally, take the length of either of these dictionaries to get the size of the vocabulary of your corpus, that is, the number of distinct words that make it up.
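For the toy corpus:

```python
V = len(word2Ind)
print(V)  # 5
```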
Getting one-hot word vectors
Recall from the lecture that you can easily convert an integer, $n$, into a one-hot vector.
Consider the word "happy". First, retrieve its numeric index.
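For example:

```python
n = word2Ind['happy']
print(n)  # 2, given the indexing assumed above
```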
Now create a vector with the size of the vocabulary, and fill it with zeros.
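For instance:

```python
center_word_vector = np.zeros(V)  # a vector of V zeros
```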
You can confirm that the vector has the right size.
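A quick check:

```python
len(center_word_vector) == V  # True
```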
Next, replace the 0 of the $n$-th element with a 1.
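Continuing the sketch:

```python
center_word_vector[n] = 1  # flip the n-th element to 1
```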
And you have your one-hot word vector.
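For the toy corpus:

```python
print(center_word_vector)  # [0. 0. 1. 0. 0.]
```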
You can now group all of these steps in a convenient function, which takes as parameters: a word to be encoded, a dictionary that maps words to indices, and the size of the vocabulary.
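A sketch of such a function:

```python
def word_to_one_hot_vector(word, word2Ind, V):
    one_hot_vector = np.zeros(V)        # start from all zeros
    one_hot_vector[word2Ind[word]] = 1  # set the word's position to 1
    return one_hot_vector
```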
Check that it works as intended.
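For example:

```python
word_to_one_hot_vector('happy', word2Ind, V)
# array([0., 0., 1., 0., 0.])
```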
What is the word vector for "learning"?
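One way to find out:

```python
word_to_one_hot_vector('learning', word2Ind, V)
```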
Expected output:

array([0., 0., 0., 0., 1.])
Getting context word vectors
To create the vectors that represent context words, you will calculate the average of the one-hot vectors representing the individual words.
Let's start with a list of context words.
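For example, the context words of the first training example above:

```python
context_words = ['i', 'am', 'because', 'i']
```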
Using Python's list comprehension construct and the word_to_one_hot_vector function that you created in the previous section, you can create a list of one-hot vectors representing each of the context words.
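For instance:

```python
context_words_vectors = [word_to_one_hot_vector(w, word2Ind, V)
                         for w in context_words]
```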
And you can now simply get the average of these vectors using numpy's mean function, to get the vector representation of the context words.
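With the toy vocabulary this gives:

```python
np.mean(context_words_vectors, axis=0)
# array([0.25, 0.25, 0.  , 0.5 , 0.  ])
```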
Note the axis=0 parameter, which tells mean to average across the rows, producing the element-wise mean of the stacked one-hot vectors (averaging across the columns would be axis=1).
Now create the context_words_to_vector function that takes in a list of context words, a word-to-index dictionary, and a vocabulary size, and outputs the vector representation of the context words.
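A sketch of such a function, reusing word_to_one_hot_vector from above:

```python
def context_words_to_vector(context_words, word2Ind, V):
    context_words_vectors = [word_to_one_hot_vector(w, word2Ind, V)
                             for w in context_words]
    return np.mean(context_words_vectors, axis=0)  # element-wise average
```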
And check that you obtain the same output as the manual approach above.
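For example:

```python
context_words_to_vector(['i', 'am', 'because', 'i'], word2Ind, V)
# array([0.25, 0.25, 0.  , 0.5 , 0.  ])
```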
What is the vector representation of the context words "am happy i am"?
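One way to find out:

```python
context_words_to_vector(['am', 'happy', 'i', 'am'], word2Ind, V)
```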
Expected output:

array([0.5 , 0.  , 0.25, 0.25, 0.  ])
Building the training set
You can now combine the functions created in the previous sections to build a training set for the CBOW model, starting from the following tokenized corpus.
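The tokenized corpus:

```python
words = ['i', 'am', 'happy', 'because', 'i', 'am', 'learning']
```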
To do this, use the sliding-window function (get_windows) to extract the context words and center word from each window, then convert these sets of words into basic vector representations using word_to_one_hot_vector and context_words_to_vector.
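Putting it together for every window (here with C = 2):

```python
for context_words, center_word in get_windows(words, 2):
    print(f'Context words:  {context_words} -> '
          f'{context_words_to_vector(context_words, word2Ind, V)}')
    print(f'Center word:    {center_word} -> '
          f'{word_to_one_hot_vector(center_word, word2Ind, V)}')
```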
In this practice notebook you'll perform a single iteration of training using a single example, but in this week's assignment you'll train the CBOW model over several iterations and batches of examples. Here is how you could use a Python generator function (remember the yield keyword from the lecture?) to make it easier to iterate over the set of examples.
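A sketch of such a generator (the name get_training_example is one plausible choice):

```python
def get_training_example(words, C, word2Ind, V):
    for context_words, center_word in get_windows(words, C):
        yield (context_words_to_vector(context_words, word2Ind, V),
               word_to_one_hot_vector(center_word, word2Ind, V))
```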
The output of this function can be iterated over to get successive context word vectors and center word vectors, as demonstrated in the next cell.
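For example:

```python
for context_words_vector, center_word_vector in get_training_example(words, 2, word2Ind, V):
    print(f'Context words vector:  {context_words_vector}')
    print(f'Center word vector:    {center_word_vector}')
```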
Your training set is ready; you can now move on to the CBOW model itself, which will be covered in the next lecture notebook.
Congratulations on finishing this lecture notebook! Hopefully you now have a better understanding of how to prepare your data before feeding it to a continuous bag-of-words model.
Keep it up!