Path: blob/main/C3/W1/assignment/C3W1_Assignment.ipynb
Week 1: Explore the BBC News archive
Welcome! In this assignment you will be working with a variation of the BBC News Classification Dataset, which contains 2225 examples of news articles with their respective categories.
TIPS FOR SUCCESSFUL GRADING OF YOUR ASSIGNMENT:
All cells are frozen except for the ones where you need to submit your solutions, or those explicitly marked as cells you can interact with.
You can add new cells to experiment, but these will be omitted by the grader, so don't rely on newly created cells to host your solution code; use the provided places for it.
You can add the comment # grade-up-to-here in any graded cell to signal the grader that it must only evaluate up to that point. This is helpful if you want to check whether you are on the right track even if you are not done with the whole assignment. Remember to delete the comment afterwards!
Avoid using global variables unless you absolutely have to. The grader tests your code in an isolated environment without running all cells from the top. As a result, global variables may be unavailable when scoring your submission. Global variables that are meant to be used will be defined in UPPERCASE.
To submit your notebook, save it and then click on the blue submit button at the beginning of the page.
Let's get started!
Begin by looking at the structure of the CSV file that contains the data:
As you can see, each data point is composed of the category of the news article followed by a comma and then the actual text of the article.
Exercise 1: parse_data_from_file
CSV is a very common format for storing data, and you will probably encounter it many times, so it is good to be comfortable with it. Your first exercise is to read the data from the raw CSV file so you can analyze it and build models around it. To do so, complete the parse_data_from_file function below.
Since this format is so common, there are many ways to handle these files in Python, using either the standard library or third-party libraries such as pandas. Because of this, the implementation details are entirely up to you; the only requirement is that your function returns the sentences and labels as regular Python lists.
Hints:
Remember the file contains a header row, so take this into consideration.
If you are unfamiliar with libraries such as pandas or numpy and prefer to use Python's standard library, take a look at csv.reader, which lets you iterate over the lines of a CSV file.
You can use the read_csv function from the pandas library.
You can use the loadtxt function from the numpy library.
If you use either of the latter two approaches, remember that you still need to convert sentences and labels to regular Python lists, so take a look at the docs to see how it can be done.
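For reference, here is a minimal sketch of parse_data_from_file using csv.reader from the standard library. It assumes a header row and two columns (the category first, then the article text), matching the structure shown above; double-check the column layout in your copy of the file.

```python
import csv

def parse_data_from_file(filename):
    """Return (sentences, labels) as regular Python lists.

    A minimal sketch: assumes a header row and two columns,
    the category followed by the article text.
    """
    sentences = []
    labels = []
    with open(filename, "r") as csvfile:
        reader = csv.reader(csvfile, delimiter=",")
        next(reader)  # skip the header row
        for row in reader:
            labels.append(row[0])
            sentences.append(row[1])
    return sentences, labels
```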
Expected Output:
An important note:
At this point you would typically convert your data into a tf.data.Dataset (alternatively, you could have used tf.data.experimental.CsvDataset to do this directly, but since this is an experimental feature it is better to avoid it when possible). For this assignment, however, you will keep working with the data as regular Python lists.
The reason is that using a tf.data.Dataset would make some parts of this assignment much more difficult (in particular the next exercise), because tensors require additional considerations that plain lists do not, and since this is the first assignment of the course it is best to keep things simple. In next week's assignment you will see what this process looks like, but for now carry on with the data in this format and worry not: TensorFlow is still compatible with it!
Exercise 2: standardize_func
One important step when working with text data is to standardize it so it is easier to extract information out of it. For instance, you probably want to convert it all to lower-case (so the same word doesn't have different representations such as "hello" and "Hello") and to remove the stopwords from it. These are the most common words in the language and they rarely provide useful information for the classification process. The next cell provides a list of common stopwords which you can use in the exercise:
To achieve this, complete the standardize_func below. This function should receive a string and return another string that is converted to lower-case and excludes all of the provided stopwords.
Hints:
You only need to account for whitespace as the separation mechanism between words in the sentence.
The list of stopwords is already provided for you as a global variable you can safely use.
Check out the lower method for python strings.
The returned sentence should not include extra whitespace so the string "hello       again   FRIENDS" should be standardized to "hello friends".
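A minimal sketch of one possible implementation is shown below. It assumes the provided stopword list is available as a global named STOPWORDS; a placeholder with a few entries is defined here so the snippet is self-contained, but the notebook provides the full list for you.

```python
# Placeholder: the notebook provides the full stopword list for you.
STOPWORDS = ["a", "about", "above", "after", "again", "against", "all", "am"]

def standardize_func(sentence):
    """Lowercase a sentence and remove stopwords from it."""
    # str.split() with no arguments splits on runs of whitespace,
    # so no extra spaces survive when the words are rejoined.
    words = sentence.lower().split()
    filtered = [word for word in words if word not in STOPWORDS]
    return " ".join(filtered)

print(standardize_func("hello       again   FRIENDS"))  # hello friends
```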
Expected Output:
With the dataset standardized you could go ahead and convert it to a tf.data.Dataset, which you will NOT be doing for this assignment. However, if you are curious, this can be achieved like this:
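In case the code cell is not rendered here, a sketch of what that conversion might look like with tf.data.Dataset.from_tensor_slices (dummy data is used so the snippet stands on its own):

```python
import tensorflow as tf

# Stand-ins for the standardized sentences and labels
# produced by the earlier exercises.
standardized_sentences = ["example sentence one", "example sentence two"]
labels = ["tech", "sport"]

# Pair each sentence with its label inside a tf.data.Dataset.
dataset = tf.data.Dataset.from_tensor_slices((standardized_sentences, labels))

for sentence, label in dataset.take(1):
    print(sentence.numpy(), label.numpy())
```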
Exercise 3: fit_vectorizer
Now that your data is standardized, it is time to vectorize the sentences of the dataset. For this, complete the fit_vectorizer below.
This function should receive the list of sentences as input and return a tf.keras.layers.TextVectorization that has been adapted to those sentences.
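A minimal sketch of such a function is shown below; any extra hyperparameters (such as max_tokens or output_sequence_length) are omitted here, since the assignment's exact settings may differ.

```python
import tensorflow as tf

def fit_vectorizer(sentences):
    """Return a TextVectorization layer adapted to the given sentences."""
    vectorizer = tf.keras.layers.TextVectorization()
    # adapt builds the vocabulary from the sentences.
    vectorizer.adapt(sentences)
    return vectorizer
```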
Expected Output:
Next, you can use the adapted vectorizer to vectorize the sentences in your dataset. Notice that, by default, tf.keras.layers.TextVectorization pads the sequences so all of them have the same length (if no truncation is defined, the length of the longest sentence is typically used). This matters because neural networks expect their inputs to have the same size.
Notice that now the variable refers to sequences rather than sentences. This is because all your text data is now encoded as a sequence of integers.
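Continuing from the sketches above, the adapted layer can be called directly on a batch of strings and returns a 2-D integer tensor padded to a common length:

```python
# Vectorize the standardized sentences with the adapted layer.
vectorizer = fit_vectorizer(standardized_sentences)
sequences = vectorizer(standardized_sentences)

# Every row is a padded sequence of token ids of the same length.
print(sequences.shape)
```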
Exercise 4: fit_label_encoder
With the sentences already vectorized, it is time to encode the labels so they can also be fed into a neural network. For this, complete the fit_label_encoder below.
This function should receive the list of labels as input and return a tf.keras.layers.StringLookup that has been adapted to those labels. In theory you could also use a tf.keras.layers.TextVectorization layer here, but it provides a lot of extra functionality that is not required, so it ends up being overkill. tf.keras.layers.StringLookup performs the job just fine and is much simpler.
Hints:
Since all of the texts have their corresponding labels, you need to ensure that the vocabulary does not include the out-of-vocabulary (OOV) token, since that is not a valid label.
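A minimal sketch, using num_oov_indices=0 to keep the OOV token out of the vocabulary as the hint suggests:

```python
import tensorflow as tf

def fit_label_encoder(labels):
    """Return a StringLookup layer adapted to the given labels."""
    # num_oov_indices=0 drops the OOV token, since every label
    # in the dataset is known in advance.
    label_encoder = tf.keras.layers.StringLookup(num_oov_indices=0)
    label_encoder.adapt(labels)
    return label_encoder
```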
Expected Output:
You should see that each encoded label corresponds to the index of its corresponding label in the vocabulary!
Great job! You have now performed all the steps necessary to train a neural network capable of processing text. That is all for now, but in next week's assignment you will train a model that classifies the texts in this same dataset!
Congratulations on finishing this week's assignment!
You have successfully implemented functions covering several stages of text data processing, from reading raw files and pre-processing to vectorizing text.
Keep it up!