
CS224N Assignment 1: Exploring Word Vectors (25 Points)

Due 4:30pm, Tue Jan 17

Welcome to CS224N!

Before you start, make sure you read the README.txt in the same directory as this notebook for important setup information. A lot of code is provided in this notebook, and we highly encourage you to read and understand it as part of the learning 😃

If you aren't super familiar with Python, Numpy, or Matplotlib, we recommend you check out the review session on Friday. The session will be recorded and the material will be made available on our website. The CS231N Python/Numpy tutorial is also a great resource.

Assignment Notes: Please make sure to save the notebook as you go along. Submission Instructions are located at the bottom of the notebook.

# All Import Statements Defined Here
# Note: Do not add to this list.
# ----------------

import sys
assert sys.version_info[0]==3
assert sys.version_info[1] >= 5

from platform import python_version
assert int(python_version().split(".")[1]) >= 5, "Please upgrade your Python version following the instructions in \
the README.txt file found in the same directory as this notebook. Your Python version is " + python_version()

from gensim.models import KeyedVectors
from gensim.test.utils import datapath
import pprint
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 5]
import nltk
nltk.download('reuters')  # to specify download location, optionally add the argument: download_dir='/specify/desired/path/'
from nltk.corpus import reuters
import numpy as np
import random
import scipy as sp
from sklearn.decomposition import TruncatedSVD
from sklearn.decomposition import PCA

START_TOKEN = '<START>'
END_TOKEN = '<END>'

np.random.seed(0)
random.seed(0)
# ----------------
[nltk_data] Downloading package reuters to [nltk_data] /Users/yimingwang/nltk_data... [nltk_data] Package reuters is already up-to-date!

Word Vectors

Word Vectors are often used as a fundamental component for downstream NLP tasks, e.g. question answering, text generation, translation, etc., so it is important to build some intuitions as to their strengths and weaknesses. Here, you will explore two types of word vectors: those derived from co-occurrence matrices, and those derived via GloVe.

Note on Terminology: The terms "word vectors" and "word embeddings" are often used interchangeably. The term "embedding" refers to the fact that we are encoding aspects of a word's meaning in a lower dimensional space. As Wikipedia states, "conceptually it involves a mathematical embedding from a space with one dimension per word to a continuous vector space with a much lower dimension".

Part 1: Count-Based Word Vectors (10 points)

Most word vector models start from the following idea:

You shall know a word by the company it keeps (Firth, J. R. 1957:11)

Many word vector implementations are driven by the idea that similar words, i.e., (near) synonyms, will be used in similar contexts. As a result, similar words will often be spoken or written along with a shared subset of words, i.e., contexts. By examining these contexts, we can try to develop embeddings for our words. With this intuition in mind, many "old school" approaches to constructing word vectors relied on word counts. Here we elaborate upon one of those strategies, co-occurrence matrices (for more information, see here or here).

Co-Occurrence

A co-occurrence matrix counts how often things co-occur in some environment. Given some word $w_i$ occurring in the document, we consider the context window surrounding $w_i$. Supposing our fixed window size is $n$, then this is the $n$ preceding and $n$ subsequent words in that document, i.e. words $w_{i-n} \dots w_{i-1}$ and $w_{i+1} \dots w_{i+n}$. We build a co-occurrence matrix $M$, which is a symmetric word-by-word matrix in which $M_{ij}$ is the number of times $w_j$ appears inside $w_i$'s window among all documents.

Example: Co-Occurrence with Fixed Window of n=1:

Document 1: "all that glitters is not gold"

Document 2: "all is well that ends well"

| * | <START> | all | that | glitters | is | not | gold | well | ends | <END> |
|---|---------|-----|------|----------|----|-----|------|------|------|-------|
| <START> | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| all | 2 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| that | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 |
| glitters | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| is | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 |
| not | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 |
| gold | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
| well | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 |
| ends | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| <END> | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |

Note: In NLP, we often add <START> and <END> tokens to represent the beginning and end of sentences, paragraphs or documents. In this case we imagine <START> and <END> tokens encapsulating each document, e.g., "<START> All that glitters is not gold <END>", and include these tokens in our co-occurrence counts.
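To make the window definition concrete, here is a minimal brute-force sketch (illustrative only, not part of the assignment code) that tallies the window-1 co-occurrence counts for the two documents above; it reproduces entries of the table, e.g. a count of 2 for (<START>, all):

from collections import Counter

toy_docs = [["<START>", "all", "that", "glitters", "is", "not", "gold", "<END>"],
            ["<START>", "all", "is", "well", "that", "ends", "well", "<END>"]]

counts = Counter()
n = 1  # fixed window size
for doc in toy_docs:
    for i, center in enumerate(doc):
        # context = up to n words before and n words after the center word
        window = doc[max(0, i - n):i] + doc[i + 1:i + n + 1]
        for context in window:
            counts[(center, context)] += 1

print(counts[("<START>", "all")])  # 2, matching the table
print(counts[("well", "<END>")])   # 1, matching the table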

The rows (or columns) of this matrix provide one type of word vectors (those based on word-word co-occurrence), but the vectors will be large in general (linear in the number of distinct words in a corpus). Thus, our next step is to run dimensionality reduction. In particular, we will run SVD (Singular Value Decomposition), which is a kind of generalized PCA (Principal Components Analysis), to select the top $k$ principal components. Here's a visualization of dimensionality reduction with SVD. In this picture our co-occurrence matrix is $A$ with $n$ rows corresponding to $n$ words. We obtain a full matrix decomposition, with the singular values ordered along the diagonal of the $S$ matrix, and our new, shorter length-$k$ word vectors in $U_k$.

Picture of an SVD

This reduced-dimensionality co-occurrence representation preserves semantic relationships between words, e.g. doctor and hospital will be closer than doctor and dog.

Notes: If you can barely remember what an eigenvalue is, here's a slow, friendly introduction to SVD. If you want to learn more thoroughly about PCA or SVD, feel free to check out lectures 7, 8, and 9 of CS168. These course notes provide a great high-level treatment of these general purpose algorithms. Though, for the purpose of this class, you only need to know how to extract the k-dimensional embeddings by utilizing pre-programmed implementations of these algorithms from the numpy, scipy, or sklearn python packages. In practice, it is challenging to apply full SVD to large corpora because of the memory needed to perform PCA or SVD. However, if you only want the top $k$ vector components for relatively small $k$ — known as Truncated SVD — then there are reasonably scalable techniques to compute those iteratively.
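To see concretely what "keeping the top $k$ components" means, here is a rough sketch on a small random stand-in matrix (not assignment code): the length-$k$ embeddings $U_k S_k$ taken from a full SVD match what sklearn's TruncatedSVD computes directly, up to the sign of each component.

import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
A = rng.random((8, 8))
A = A + A.T                       # symmetric, like a real co-occurrence matrix
k = 2

# Full SVD: A = U @ np.diag(S) @ Vt; keep only the top-k components.
U, S, Vt = np.linalg.svd(A)
embeddings_full = U[:, :k] * S[:k]          # length-k word vectors (U_k S_k)

# TruncatedSVD computes the same top-k projection without the full decomposition.
embeddings_trunc = TruncatedSVD(n_components=k, random_state=0).fit_transform(A)

# Should print True: the two agree up to the sign of each column.
print(np.allclose(np.abs(embeddings_full), np.abs(embeddings_trunc), atol=1e-4))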

Plotting Co-Occurrence Word Embeddings

Here, we will be using the Reuters (business and financial news) corpus. If you haven't run the import cell at the top of this page, please run it now (click it and press SHIFT-RETURN). The corpus consists of 10,788 news documents totaling 1.3 million words. These documents span 90 categories and are split into train and test. For more details, please see https://www.nltk.org/book/ch02.html. We provide a read_corpus function below that pulls out only articles from the "gold" (i.e. news articles about gold, mining, etc.) category. The function also adds <START> and <END> tokens to each of the documents, and lowercases words. You do not have to perform any other kind of pre-processing.

def read_corpus(category="gold"):
    """ Read files from the specified Reuter's category.
        Params:
            category (string): category name
        Return:
            list of lists, with words from each of the processed files
    """
    files = reuters.fileids(category)
    return [[START_TOKEN] + [w.lower() for w in list(reuters.words(f))] + [END_TOKEN] for f in files]

Let's have a look at what these documents are like….

reuters_corpus = read_corpus()
pprint.pprint(reuters_corpus[:3], compact=True, width=100)
# print(len(reuters_corpus))
[['<START>', 'western', 'mining', 'to', 'open', 'new', 'gold', 'mine', 'in', 'australia', 'western', 'mining', 'corp', 'holdings', 'ltd', '&', 'lt', ';', 'wmng', '.', 's', '>', '(', 'wmc', ')', 'said', 'it', 'will', 'establish', 'a', 'new', 'joint', 'venture', 'gold', 'mine', 'in', 'the', 'northern', 'territory', 'at', 'a', 'cost', 'of', 'about', '21', 'mln', 'dlrs', '.', 'the', 'mine', ',', 'to', 'be', 'known', 'as', 'the', 'goodall', 'project', ',', 'will', 'be', 'owned', '60', 'pct', 'by', 'wmc', 'and', '40', 'pct', 'by', 'a', 'local', 'w', '.', 'r', '.', 'grace', 'and', 'co', '&', 'lt', ';', 'gra', '>', 'unit', '.', 'it', 'is', 'located', '30', 'kms', 'east', 'of', 'the', 'adelaide', 'river', 'at', 'mt', '.', 'bundey', ',', 'wmc', 'said', 'in', 'a', 'statement', 'it', 'said', 'the', 'open', '-', 'pit', 'mine', ',', 'with', 'a', 'conventional', 'leach', 'treatment', 'plant', ',', 'is', 'expected', 'to', 'produce', 'about', '50', ',', '000', 'ounces', 'of', 'gold', 'in', 'its', 'first', 'year', 'of', 'production', 'from', 'mid', '-', '1988', '.', 'annual', 'ore', 'capacity', 'will', 'be', 'about', '750', ',', '000', 'tonnes', '.', '<END>'], ['<START>', 'belgium', 'to', 'issue', 'gold', 'warrants', ',', 'sources', 'say', 'belgium', 'plans', 'to', 'issue', 'swiss', 'franc', 'warrants', 'to', 'buy', 'gold', ',', 'with', 'credit', 'suisse', 'as', 'lead', 'manager', ',', 'market', 'sources', 'said', '.', 'no', 'confirmation', 'or', 'further', 'details', 'were', 'immediately', 'available', '.', '<END>'], ['<START>', 'belgium', 'launches', 'bonds', 'with', 'gold', 'warrants', 'the', 'kingdom', 'of', 'belgium', 'is', 'launching', '100', 'mln', 'swiss', 'francs', 'of', 'seven', 'year', 'notes', 'with', 'warrants', 'attached', 'to', 'buy', 'gold', ',', 'lead', 'mananger', 'credit', 'suisse', 'said', '.', 'the', 'notes', 'themselves', 'have', 'a', '3', '-', '3', '/', '8', 'pct', 'coupon', 'and', 'are', 'priced', 'at', 'par', '.', 'payment', 'is', 'due', 'april', '30', ',', '1987', 'and', 'final', 'maturity', 'april', '30', ',', '1994', '.', 'each', '50', ',', '000', 'franc', 'note', 'carries', '15', 'warrants', '.', 'two', 'warrants', 'are', 'required', 'to', 'allow', 'the', 'holder', 'to', 'buy', '100', 'grammes', 'of', 'gold', 'at', 'a', 'price', 'of', '2', ',', '450', 'francs', ',', 'during', 'the', 'entire', 'life', 'of', 'the', 'bond', '.', 'the', 'latest', 'gold', 'price', 'in', 'zurich', 'was', '2', ',', '045', '/', '2', ',', '070', 'francs', 'per', '100', 'grammes', '.', '<END>']] 124

Question 1.1: Implement distinct_words [code] (2 points)

Write a method to work out the distinct words (word types) that occur in the corpus. You can do this with for loops, but it's more efficient to do it with Python list comprehensions. In particular, this may be useful to flatten a list of lists. If you're not familiar with Python list comprehensions in general, here's more information.

Your returned corpus_words should be sorted. You can use python's sorted function for this.

You may find it useful to use Python sets to remove duplicate words.
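For instance, the flatten / deduplicate / sort pattern looks like this on a tiny made-up corpus (illustrative only, not the assignment corpus):

docs = [["all", "that", "glitters"], ["all", "is", "well"]]
flattened = [w for doc in docs for w in doc]   # list comprehension over a list of lists
unique_sorted = sorted(set(flattened))         # a set drops duplicates; sorted returns a list
print(unique_sorted)                           # ['all', 'glitters', 'is', 'that', 'well']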

def distinct_words(corpus):
    """ Determine a list of distinct words for the corpus.
        Params:
            corpus (list of list of strings): corpus of documents
        Return:
            corpus_words (list of strings): sorted list of distinct words across the corpus
            n_corpus_words (integer): number of distinct words across the corpus
    """
    corpus_words = []
    n_corpus_words = -1

    ### SOLUTION BEGIN
    flattened_list = [word for document in corpus for word in document]
    corpus_words = sorted(set(flattened_list))
    n_corpus_words = len(corpus_words)
    ### SOLUTION END

    return corpus_words, n_corpus_words
# ---------------------
# Run this sanity check
# Note that this is not an exhaustive check for correctness.
# ---------------------

# Define toy corpus
test_corpus = ["{} All that glitters isn't gold {}".format(START_TOKEN, END_TOKEN).split(" "),
               "{} All's well that ends well {}".format(START_TOKEN, END_TOKEN).split(" ")]
test_corpus_words, num_corpus_words = distinct_words(test_corpus)

# Correct answers
ans_test_corpus_words = sorted([START_TOKEN, "All", "ends", "that", "gold", "All's", "glitters", "isn't", "well", END_TOKEN])
ans_num_corpus_words = len(ans_test_corpus_words)

# Test correct number of words
assert(num_corpus_words == ans_num_corpus_words), "Incorrect number of distinct words. Correct: {}. Yours: {}".format(ans_num_corpus_words, num_corpus_words)

# Test correct words
assert (test_corpus_words == ans_test_corpus_words), "Incorrect corpus_words.\nCorrect: {}\nYours: {}".format(str(ans_test_corpus_words), str(test_corpus_words))

# Print Success
print ("-" * 80)
print("Passed All Tests!")
print ("-" * 80)
-------------------------------------------------------------------------------- Passed All Tests! --------------------------------------------------------------------------------

Question 1.2: Implement compute_co_occurrence_matrix [code] (3 points)

Write a method that constructs a co-occurrence matrix for a certain window-size $n$ (with a default of 4), considering words $n$ before and $n$ after the word in the center of the window. Here, we start to use numpy (np) to represent vectors, matrices, and tensors. If you're not familiar with NumPy, there's a NumPy tutorial in the second half of this cs231n Python NumPy tutorial.

def compute_co_occurrence_matrix(corpus, window_size=4):
    """ Compute co-occurrence matrix for the given corpus and window_size (default of 4).

        Note: Each word in a document should be at the center of a window. Words near edges will have a smaller
              number of co-occurring words.

              For example, if we take the document "<START> All that glitters is not gold <END>" with window size of 4,
              "All" will co-occur with "<START>", "that", "glitters", "is", and "not".

        Params:
            corpus (list of list of strings): corpus of documents
            window_size (int): size of context window
        Return:
            M (a symmetric numpy matrix of shape (number of unique words in the corpus, number of unique words in the corpus)):
                Co-occurrence matrix of word counts.
                The ordering of the words in the rows/columns should be the same as the ordering of the words
                given by the distinct_words function.
            word2ind (dict): dictionary that maps word to index (i.e. row/column number) for matrix M.
    """
    words, n_words = distinct_words(corpus)
    M = None
    word2ind = {}

    ### SOLUTION BEGIN
    word2ind = {word: i for i, word in enumerate(words)}
    M = np.zeros((n_words, n_words))

    # Iterate through all the documents in the corpus, treating each word as the center of a window.
    for document in corpus:
        for i, center_word in enumerate(document):
            row = word2ind[center_word]
            # Context window: up to `window_size` words before and after the center word,
            # clipped at the document boundaries; the center word itself is excluded.
            window = document[max(0, i - window_size):i] + document[i + 1:i + window_size + 1]
            for context_word in window:
                M[row, word2ind[context_word]] += 1
    ### SOLUTION END

    return M, word2ind

Question 1.3: Implement reduce_to_k_dim [code] (1 point)

Construct a method that performs dimensionality reduction on the matrix to produce k-dimensional embeddings. Use SVD to take the top k components and produce a new matrix of k-dimensional embeddings.

Note: All of numpy, scipy, and scikit-learn (sklearn) provide some implementation of SVD, but only scipy and sklearn provide an implementation of Truncated SVD, and only sklearn provides an efficient randomized algorithm for calculating large-scale Truncated SVD. So please use sklearn.decomposition.TruncatedSVD.

def reduce_to_k_dim(M, k=2):
    """ Reduce a co-occurrence count matrix of dimensionality (num_corpus_words, num_corpus_words)
        to a matrix of dimensionality (num_corpus_words, k) using the following SVD function from Scikit-Learn:
            - http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html

        Params:
            M (numpy matrix of shape (number of unique words in the corpus, number of unique words in the corpus)): co-occurrence matrix of word counts
            k (int): embedding size of each word after dimension reduction
        Return:
            M_reduced (numpy matrix of shape (number of corpus words, k)): matrix of k-dimensional word embeddings.
                    In terms of the SVD from math class, this actually returns U * S
    """
    n_iters = 10     # Use this parameter in your call to `TruncatedSVD`
    M_reduced = None
    print("Running Truncated SVD over %i words..." % (M.shape[0]))

    ### SOLUTION BEGIN
    svd = TruncatedSVD(n_components=k, n_iter=n_iters, random_state=42)
    M_reduced = svd.fit_transform(M)
    ### SOLUTION END

    print("Done.")
    return M_reduced
# ---------------------
# Run this sanity check
# Note that this is not an exhaustive check for correctness
# In fact we only check that your M_reduced has the right dimensions.
# ---------------------

# Define toy corpus and run student code
test_corpus = ["{} All that glitters isn't gold {}".format(START_TOKEN, END_TOKEN).split(" "),
               "{} All's well that ends well {}".format(START_TOKEN, END_TOKEN).split(" ")]
M_test, word2ind_test = compute_co_occurrence_matrix(test_corpus, window_size=1)
M_test_reduced = reduce_to_k_dim(M_test, k=2)

# Test proper dimensions
assert (M_test_reduced.shape[0] == 10), "M_reduced has {} rows; should have {}".format(M_test_reduced.shape[0], 10)
assert (M_test_reduced.shape[1] == 2), "M_reduced has {} columns; should have {}".format(M_test_reduced.shape[1], 2)

# Print Success
print ("-" * 80)
print("Passed All Tests!")
print ("-" * 80)
Running Truncated SVD over 10 words... Done. -------------------------------------------------------------------------------- Passed All Tests! --------------------------------------------------------------------------------

Question 1.4: Implement plot_embeddings [code] (1 point)

Here you will write a function to plot a set of 2D vectors in 2D space. For graphs, we will use Matplotlib (plt).

For this example, you may find it useful to adapt this code. In the future, a good way to make a plot is to look at the Matplotlib gallery, find a plot that looks somewhat like what you want, and adapt the code they give.

def plot_embeddings(M_reduced, word2ind, words):
    """ Plot in a scatterplot the embeddings of the words specified in the list "words".
        NOTE: do not plot all the words listed in M_reduced / word2ind.
        Include a label next to each point.

        Params:
            M_reduced (numpy matrix of shape (number of unique words in the corpus, 2)): matrix of 2-dimensional word embeddings
            word2ind (dict): dictionary that maps word to indices for matrix M
            words (list of strings): words whose embeddings we want to visualize
    """

    ### SOLUTION BEGIN
    plt.style.use('seaborn-whitegrid')
    for i, word in enumerate(words):
        [x, y] = M_reduced[word2ind[word], :]
        plt.scatter(x, y, marker='x')
        plt.annotate(word, (x, y), xytext=(x, y + 0.05))
    ### SOLUTION END
# ---------------------
# Run this sanity check
# Note that this is not an exhaustive check for correctness.
# The plot produced should look like the "test solution plot" depicted below.
# ---------------------

print ("-" * 80)
print ("Outputted Plot:")

M_reduced_plot_test = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1], [0, 0]])
word2ind_plot_test = {'test1': 0, 'test2': 1, 'test3': 2, 'test4': 3, 'test5': 4}
words = ['test1', 'test2', 'test3', 'test4', 'test5']
plot_embeddings(M_reduced_plot_test, word2ind_plot_test, words)

print ("-" * 80)
-------------------------------------------------------------------------------- Outputted Plot: --------------------------------------------------------------------------------
/var/folders/z1/nkz03qk14mz_p4v8rr8p7ybc0000gn/T/ipykernel_16867/2887674924.py:13: MatplotlibDeprecationWarning: The seaborn styles shipped by Matplotlib are deprecated since 3.6, as they no longer correspond to the styles shipped by seaborn. However, they will remain available as 'seaborn-v0_8-<style>'. Alternatively, directly use the seaborn API instead. plt.style.use('seaborn-whitegrid')
Image in a Jupyter notebook

Question 1.5: Co-Occurrence Plot Analysis [written] (3 points)

Now we will put together all the parts you have written! We will compute the co-occurrence matrix with a fixed window of 4 (the default window size) over the Reuters "gold" corpus. Then we will use TruncatedSVD to compute 2-dimensional embeddings of each word. TruncatedSVD returns U*S, so we need to normalize the returned vectors so that they all lie near the unit circle (closeness is then directional closeness). Note: The line of code below that does the normalizing uses the NumPy concept of broadcasting. If you don't know about broadcasting, check out Computation on Arrays: Broadcasting by Jake VanderPlas.
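If broadcasting is new to you, here is a tiny standalone sketch (illustrative values only) of the row-normalization trick used in the cell below:

import numpy as np

X = np.array([[3.0, 4.0], [1.0, 0.0], [0.0, 2.0]])
row_lengths = np.linalg.norm(X, axis=1)        # shape (3,): [5., 1., 2.]
X_normalized = X / row_lengths[:, np.newaxis]  # (3, 2) / (3, 1) broadcasts across columns
print(np.linalg.norm(X_normalized, axis=1))    # [1. 1. 1.]: every row now has unit length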

Run the below cell to produce the plot. It'll probably take a few seconds to run.

# -----------------------------
# Run This Cell to Produce Your Plot
# ------------------------------
reuters_corpus = read_corpus()
M_co_occurrence, word2ind_co_occurrence = compute_co_occurrence_matrix(reuters_corpus)
M_reduced_co_occurrence = reduce_to_k_dim(M_co_occurrence, k=2)

# Rescale (normalize) the rows to make them each of unit-length
M_lengths = np.linalg.norm(M_reduced_co_occurrence, axis=1)
M_normalized = M_reduced_co_occurrence / M_lengths[:, np.newaxis]  # broadcasting

words = ['value', 'gold', 'platinum', 'reserves', 'silver', 'metals', 'copper', 'belgium', 'australia', 'china', 'grammes', "mine"]

plot_embeddings(M_normalized, word2ind_co_occurrence, words)
Running Truncated SVD over 2830 words... Done.
/var/folders/z1/nkz03qk14mz_p4v8rr8p7ybc0000gn/T/ipykernel_16867/2887674924.py:13: MatplotlibDeprecationWarning: The seaborn styles shipped by Matplotlib are deprecated since 3.6, as they no longer correspond to the styles shipped by seaborn. However, they will remain available as 'seaborn-v0_8-<style>'. Alternatively, directly use the seaborn API instead. plt.style.use('seaborn-whitegrid')
Image in a Jupyter notebook

Verify that your figure matches "question_1.5.png" in the assignment zip. If not, use that figure to answer the next two questions.

a. Find at least two groups of words that cluster together in 2-dimensional embedding space. Give an explanation for each cluster you observe.

SOLUTION BEGIN

  1. For example, "silver" and "reserves" cluster together, because both refer to valuable resources that are held or stored.

  2. "platinum" and "metals" cluster together, because both refer to metals or related materials.

SOLUTION END

b. What doesn't cluster together that you might think should have? Describe at least two examples.

SOLUTION BEGIN

  1. "china" and "australia" might be expected to cluster together, since both are countries, but they do not.

  2. "copper" is a kind of metal, yet it is not clustered with "metals".

SOLUTION END

Part 2: Prediction-Based Word Vectors (15 points)

As discussed in class, more recently prediction-based word vectors have demonstrated better performance, such as word2vec and GloVe (which also utilizes the benefit of counts). Here, we shall explore the embeddings produced by GloVe. Please revisit the class notes and lecture slides for more details on the word2vec and GloVe algorithms. If you're feeling adventurous, challenge yourself and try reading GloVe's original paper.

Then run the following cells to load the GloVe vectors into memory. Note: If this is your first time running these cells (i.e., downloading the embedding model), it will take a couple of minutes. If you've run these cells before, rerunning them will load the model without redownloading it, which will take about 1 to 2 minutes.

def load_embedding_model():
    """ Load GloVe Vectors
        Return:
            wv_from_bin: All 400000 embeddings, each of length 200
    """
    import gensim.downloader as api
    wv_from_bin = api.load("glove-wiki-gigaword-200")
    print("Loaded vocab size %i" % len(list(wv_from_bin.index_to_key)))
    return wv_from_bin
# -----------------------------------
# Run Cell to Load Word Vectors
# Note: This will take a couple minutes
# -----------------------------------
wv_from_bin = load_embedding_model()
from gensim.test.utils import datapath, get_tmpfile
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec

glove_file = datapath('/Users/yimingwang/glove.6B/glove.6B.200d.txt')
word2vec_glove_file = get_tmpfile("glove.6B.200d.word2vec.txt")
glove2word2vec(glove_file, word2vec_glove_file)
/var/folders/z1/nkz03qk14mz_p4v8rr8p7ybc0000gn/T/ipykernel_16867/3166754261.py:6: DeprecationWarning: Call to deprecated `glove2word2vec` (KeyedVectors.load_word2vec_format(.., binary=False, no_header=True) loads GLoVE text vectors.). glove2word2vec(glove_file, word2vec_glove_file)
(400000, 200)
wv_from_bin = KeyedVectors.load_word2vec_format(word2vec_glove_file)

Note: If you are receiving a "reset by peer" error, rerun the cell to restart the download. If you run into an "attribute" error, you may need to update to the most recent version of gensim and numpy. You can upgrade them inline by uncommenting and running the below cell:

# !pip install gensim --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
# !pip install numpy --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple

Reducing dimensionality of Word Embeddings

Let's directly compare the GloVe embeddings to those of the co-occurrence matrix. In order to avoid running out of memory, we will work with a sample of 10000 GloVe vectors instead. Run the following cells to:

  1. Put 10000 GloVe vectors into a matrix M

  2. Run reduce_to_k_dim (your Truncated SVD function) to reduce the vectors from 200-dimensional to 2-dimensional.

def get_matrix_of_vectors(wv_from_bin, required_words):
    """ Put the GloVe vectors into a matrix M.
        Param:
            wv_from_bin: KeyedVectors object; the 400000 GloVe vectors loaded from file
        Return:
            M: numpy matrix shape (num words, 200) containing the vectors
            word2ind: dictionary mapping each word to its row number in M
    """
    import random
    words = list(wv_from_bin.index_to_key)
    # print(words)
    print("Shuffling words ...")
    random.seed(225)
    random.shuffle(words)
    words = words[:10000]
    print("Putting %i words into word2ind and matrix M..." % len(words))
    word2ind = {}
    M = []
    curInd = 0
    for w in words:
        try:
            M.append(wv_from_bin.get_vector(w))
            word2ind[w] = curInd
            curInd += 1
        except KeyError:
            continue
    for w in required_words:
        if w in words:
            continue
        try:
            M.append(wv_from_bin.get_vector(w))
            word2ind[w] = curInd
            curInd += 1
        except KeyError:
            continue
    M = np.stack(M)
    print("Done.")
    return M, word2ind
# -----------------------------------------------------------------
# Run Cell to Reduce 200-Dimensional Word Embeddings to k Dimensions
# Note: This should be quick to run
# -----------------------------------------------------------------
M, word2ind = get_matrix_of_vectors(wv_from_bin, words)
M_reduced = reduce_to_k_dim(M, k=2)

# Rescale (normalize) the rows to make them each of unit-length
M_lengths = np.linalg.norm(M_reduced, axis=1)
M_reduced_normalized = M_reduced / M_lengths[:, np.newaxis]  # broadcasting
Shuffling words ... Putting 10000 words into word2ind and matrix M... Done. Running Truncated SVD over 10012 words... Done.

Note: If you are receiving out of memory issues on your local machine, try closing other applications to free more memory on your device. You may want to try restarting your machine so that you can free up extra memory. Then immediately run the jupyter notebook and see if you can load the word vectors properly. If you still have problems with loading the embeddings onto your local machine after this, please go to office hours or contact course staff.

Question 2.1: GloVe Plot Analysis [written] (3 points)

Run the cell below to plot the 2D GloVe embeddings for ['value', 'gold', 'platinum', 'reserves', 'silver', 'metals', 'copper', 'belgium', 'australia', 'china', 'grammes', "mine"].

words = ['value', 'gold', 'platinum', 'reserves', 'silver', 'metals', 'copper', 'belgium', 'australia', 'china', 'grammes', "mine"]
plot_embeddings(M_reduced_normalized, word2ind, words)
/var/folders/z1/nkz03qk14mz_p4v8rr8p7ybc0000gn/T/ipykernel_16867/2887674924.py:13: MatplotlibDeprecationWarning: The seaborn styles shipped by Matplotlib are deprecated since 3.6, as they no longer correspond to the styles shipped by seaborn. However, they will remain available as 'seaborn-v0_8-<style>'. Alternatively, directly use the seaborn API instead. plt.style.use('seaborn-whitegrid')
Image in a Jupyter notebook

a. What is one way the plot is different from the one generated earlier from the co-occurrence matrix? What is one way it's similar?

SOLUTION BEGIN

In the GloVe plot, the reduced vectors are concentrated along the x-axis but spread out along the y-axis, whereas the co-occurrence embeddings are spread across both dimensions.

SOLUTION END

b. What is a possible cause for the difference?

SOLUTION BEGIN

GloVe is trained on a much larger dataset with a prediction-based objective, so the two reduced dimensions carry different information: the x-axis feature mainly separates "grammes" from the other words, while the y-axis feature captures more of the semantic variation. The words grouped on the "right" share a similar x-value but spread out along the y-axis because they differ in meaning.

SOLUTION END

Cosine Similarity

Now that we have word vectors, we need a way to quantify the similarity between individual words, according to these vectors. One such metric is cosine-similarity. We will be using this to find words that are "close" and "far" from one another.

We can think of n-dimensional vectors as points in n-dimensional space. If we take this perspective, L1 and L2 Distances help quantify the amount of space "we must travel" to get between these two points. Another approach is to examine the angle between two vectors. From trigonometry we know that the dot product of two vectors $p$ and $q$ satisfies $p \cdot q = ||p|| \, ||q|| \cos(\Theta)$, where $\Theta$ is the angle between them.

Instead of computing the actual angle, we can leave the similarity in terms of $similarity = \cos(\Theta)$. Formally the Cosine Similarity $s$ between two vectors $p$ and $q$ is defined as:

$$s = \frac{p \cdot q}{||p|| \, ||q||}, \textrm{ where } s \in [-1, 1]$$
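As a quick sanity check of this definition (a sketch with arbitrary vectors, not assignment code), cosine similarity, and the cosine distance used later in Question 2.3, can be computed directly with NumPy:

import numpy as np

def cosine_similarity(p, q):
    """Cosine of the angle between vectors p and q; always lies in [-1, 1]."""
    return np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))

p = np.array([1.0, 2.0, 3.0])
q = np.array([2.0, 4.0, 6.0])     # same direction as p
r = np.array([-1.0, -2.0, -3.0])  # opposite direction to p

print(cosine_similarity(p, q))      # 1.0 (up to floating point): same direction
print(cosine_similarity(p, r))      # -1.0: opposite direction
print(1 - cosine_similarity(p, q))  # cosine distance, as used in Question 2.3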

Question 2.2: Words with Multiple Meanings (1.5 points) [code + written]

Polysemes and homonyms are words that have more than one meaning (see this wiki page to learn more about the difference between polysemes and homonyms ). Find a word with at least two different meanings such that the top-10 most similar words (according to cosine similarity) contain related words from both meanings. For example, "leaves" has both "go_away" and "a_structure_of_a_plant" meaning in the top 10, and "scoop" has both "handed_waffle_cone" and "lowdown". You will probably need to try several polysemous or homonymic words before you find one.

Please state the word you discover and the multiple meanings that occur in the top 10. Why do you think many of the polysemous or homonymic words you tried didn't work (i.e. the top-10 most similar words only contain one of the meanings of the words)?

Note: You should use the wv_from_bin.most_similar(word) function to get the top 10 similar words. This function ranks all other words in the vocabulary with respect to their cosine similarity to the given word. For further assistance, please check the GenSim documentation.

### SOLUTION BEGIN
wv_from_bin.most_similar('close')
### SOLUTION END
[('up', 0.6898430585861206), ('closed', 0.6854837536811829), ('down', 0.6749922633171082), ('while', 0.6741016507148743), ('just', 0.6566178798675537), ('but', 0.6522998213768005), ('closing', 0.6294093728065491), ('point', 0.6282009482383728), ('far', 0.6234282851219177), ('time', 0.620608389377594)]

SOLUTION BEGIN

For convenience, distinguish the two meanings of "close": meaning 1 is "to shut" and meaning 2 is "near". Among the top-10 most similar words, "closed" relates to meaning 1, while "closing", "up", "down", "far", "time", "while", and so on relate to meaning 2.

SOLUTION END

Question 2.3: Synonyms & Antonyms (2 points) [code + written]

When considering Cosine Similarity, it's often more convenient to think of Cosine Distance, which is simply 1 - Cosine Similarity.

Find three words $(w_1, w_2, w_3)$ where $w_1$ and $w_2$ are synonyms and $w_1$ and $w_3$ are antonyms, but Cosine Distance$(w_1, w_3) <$ Cosine Distance$(w_1, w_2)$.

As an example, $w_1$ = "happy" is closer to $w_3$ = "sad" than to $w_2$ = "cheerful". Please find a different example that satisfies the above. Once you have found your example, please give a possible explanation for why this counter-intuitive result may have happened.

You should use the wv_from_bin.distance(w1, w2) function here in order to compute the cosine distance between two words. Please see the GenSim documentation for further assistance.

### SOLUTION BEGIN
w1 = "like"
w2 = "love"
w3 = "dislike"
w1_w2_dist = wv_from_bin.distance(w1, w2)
w1_w3_dist = wv_from_bin.distance(w1, w3)

print("Synonyms {}, {} have cosine distance: {}".format(w1, w2, w1_w2_dist))
print("Antonyms {}, {} have cosine distance: {}".format(w1, w3, w1_w3_dist))
### SOLUTION END
Synonyms like, love have cosine distance: 0.4338703155517578 Antonyms like, dislike have cosine distance: 0.7098513841629028

SOLUTION BEGIN

We found that "like" and "dislike" have a smaller cosine distance than "like" and "love", likely because both words contain the root "like" and, as antonyms, they tend to appear in very similar contexts.

SOLUTION END

Question 2.4: Analogies with Word Vectors [written] (1.5 points)

Word vectors have been shown to sometimes exhibit the ability to solve analogies.

As an example, for the analogy "man : grandfather :: woman : x" (read: man is to grandfather as woman is to x), what is x?

In the cell below, we show you how to use word vectors to find x using the most_similar function from the GenSim documentation. The function finds words that are most similar to the words in the positive list and most dissimilar from the words in the negative list (while omitting the input words, which are often the most similar; see this paper). The answer to the analogy will have the highest cosine similarity (largest returned numerical value).

# Run this cell to answer the analogy -- man : grandfather :: woman : x
pprint.pprint(wv_from_bin.most_similar(positive=['woman', 'grandfather'], negative=['man']))
[('grandmother', 0.7608445286750793), ('granddaughter', 0.7200808525085449), ('daughter', 0.7168302536010742), ('mother', 0.7151536345481873), ('niece', 0.7005682587623596), ('father', 0.6659887433052063), ('aunt', 0.6623408794403076), ('grandson', 0.6618767976760864), ('grandparents', 0.6446609497070312), ('wife', 0.644535481929779)]

Let $m$, $g$, $w$, and $x$ denote the word vectors for man, grandfather, woman, and the answer, respectively. Using only vectors $m$, $g$, $w$, and the vector arithmetic operators $+$ and $-$ in your answer, to what expression are we maximizing $x$'s cosine similarity?

Hint: Recall that word vectors are simply multi-dimensional vectors that represent a word. It might help to draw out a 2D example using arbitrary locations of each vector. Where would man and woman lie in the coordinate plane relative to grandfather and the answer?

SOLUTION BEGIN

We are maximizing $x$'s cosine similarity to $g - m + w$: for the analogy to hold we want $m - w \approx g - x$, so $x \approx g - m + w$, i.e. the "grandfather" vector shifted along the man-to-woman (gender) direction.

SOLUTION END
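As a quick check of this expression (a sketch, not part of the assignment code; it assumes gensim's KeyedVectors.similar_by_vector, which ranks the vocabulary by cosine similarity to an arbitrary vector), we can build $g - m + w$ by hand and inspect its nearest neighbours:

# Build the analogy vector x ≈ g - m + w directly from the raw word vectors.
m = wv_from_bin.get_vector('man')
g = wv_from_bin.get_vector('grandfather')
w = wv_from_bin.get_vector('woman')

# Unlike most_similar, similar_by_vector does not exclude the input words,
# so 'grandfather' and 'woman' themselves may also rank near the top.
pprint.pprint(wv_from_bin.similar_by_vector(g - m + w, topn=10))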

Question 2.5: Finding Analogies [code + written] (1.5 points)

a. For the previous example, it's clear that "grandmother" completes the analogy. But give an intuitive explanation as to why the most_similar function gives us words like "granddaughter", "daughter", or "mother"?

SOLUTION BEGIN

Because "man" is given as a negative word, terms associated with women score higher. In addition, all of these words are kinship terms, so they appear in contexts very similar to those of "grandmother".

SOLUTION END

b. Find an example of analogy that holds according to these vectors (i.e. the intended word is ranked top). In your solution please state the full analogy in the form x:y :: a:b. If you believe the analogy is complicated, explain why the analogy holds in one or two sentences.

Note: You may have to try many analogies to find one that works!

### SOLUTION BEGIN
x, y, a, b = 'woman', 'man', 'actress', 'actor'
assert wv_from_bin.most_similar(positive=[a, y], negative=[x])[0][0] == b
### SOLUTION END

SOLUTION BEGIN

The analogy holds because the only difference between "actor" and "actress" is gender, just as the only difference between "man" and "woman" is gender.

SOLUTION END

Question 2.6: Incorrect Analogy [code + written] (1.5 points)

a. Below, we expect to see the intended analogy "hand : glove :: foot : sock", but we see an unexpected result instead. Give a potential reason as to why this particular analogy turned out the way it did?

pprint.pprint(wv_from_bin.most_similar(positive=['foot', 'glove'], negative=['hand']))
[('45,000-square', 0.4922032058238983), ('15,000-square', 0.4649604558944702), ('10,000-square', 0.45447564125061035), ('6,000-square', 0.44975775480270386), ('3,500-square', 0.4441334009170532), ('700-square', 0.44257497787475586), ('50,000-square', 0.43563973903656006), ('3,000-square', 0.43486514687538147), ('30,000-square', 0.4330596625804901), ('footed', 0.43236875534057617)]

SOLUTION BEGIN

In the training corpus, "foot" appears overwhelmingly as a unit of measurement (e.g. "a 45,000-square foot building"), so the measurement sense dominates its vector; as a result, the nearest neighbours of foot + glove - hand are these "-square" tokens rather than "sock".

SOLUTION END

b. Find another example of analogy that does not hold according to these vectors. In your solution, state the intended analogy in the form x:y :: a:b, and state the incorrect value of b according to the word vectors (in the previous example, this would be '45,000-square').

### SOLUTION BEGIN
x, y, a, b = 'shoes', 'hat', 'up', 'down'
pprint.pprint(wv_from_bin.most_similar(positive=[a, y], negative=[x]))
### SOLUTION END
[('out', 0.5788853168487549), ('just', 0.5641640424728394), ('put', 0.553745448589325), ('off', 0.5531725287437439), ('second', 0.546393632888794), ('another', 0.5417174100875854), ('trick', 0.541061282157898), ('down', 0.5368677377700806), ('set', 0.5307223200798035), ('got', 0.5275691151618958)]

SOLUTION BEGIN

The intended analogy is shoes : hat :: up : down. With most_similar(positive=['up', 'hat'], negative=['shoes']) the expected answer b is "down", but the top prediction is "out" ("down" only appears lower in the ranking).

SOLUTION END

Question 2.7: Guided Analysis of Bias in Word Vectors [written] (1 point)

It's important to be cognizant of the biases (gender, race, sexual orientation etc.) implicit in our word embeddings. Bias can be dangerous because it can reinforce stereotypes through applications that employ these models.

Run the cell below, to examine (a) which terms are most similar to "woman" and "profession" and most dissimilar to "man", and (b) which terms are most similar to "man" and "profession" and most dissimilar to "woman". Point out the difference between the list of female-associated words and the list of male-associated words, and explain how it is reflecting gender bias.

# Run this cell
# Here `positive` indicates the list of words to be similar to and `negative` indicates the list of words to be
# most dissimilar from.
pprint.pprint(wv_from_bin.most_similar(positive=['man', 'profession'], negative=['woman']))
print()
pprint.pprint(wv_from_bin.most_similar(positive=['woman', 'profession'], negative=['man']))
[('reputation', 0.5250176787376404), ('professions', 0.5178037881851196), ('skill', 0.49046966433525085), ('skills', 0.49005505442619324), ('ethic', 0.4897659420967102), ('business', 0.487585186958313), ('respected', 0.485920250415802), ('practice', 0.4821045696735382), ('regarded', 0.4778572618961334), ('life', 0.4760662019252777)] [('professions', 0.5957458019256592), ('practitioner', 0.4988412857055664), ('teaching', 0.48292139172554016), ('nursing', 0.48211804032325745), ('vocation', 0.4788965880870819), ('teacher', 0.47160351276397705), ('practicing', 0.4693780839443207), ('educator', 0.46524327993392944), ('physicians', 0.4628995656967163), ('professionals', 0.46013936400413513)]

SOLUTION BEGIN

(a) For "man" and "profession", the associated words include "reputation", "skill", "ethic", "business", and "respected". (b) For "woman" and "profession", the associated words include "teaching", "nursing", "educator", and so on. The male-associated list contains generic markers of status and skill, while the female-associated list names specific care-giving and teaching occupations. The model learns these associations from its training data, and that data carries historical bias and stereotypes, which the embeddings reproduce.

SOLUTION END

Question 2.8: Independent Analysis of Bias in Word Vectors [code + written] (1 point)

Use the most_similar function to find another pair of analogies that demonstrates some bias is exhibited by the vectors. Please briefly explain the example of bias that you discover.

### SOLUTION BEGIN
A = 'black'
B = 'white'
word = 'job'
pprint.pprint(wv_from_bin.most_similar(positive=[A, word], negative=[B]))
print()
pprint.pprint(wv_from_bin.most_similar(positive=[B, word], negative=[A]))
### SOLUTION END
[('jobs', 0.6843501925468445), ('work', 0.6047077775001526), ('hiring', 0.5963166356086731), ('doing', 0.5790296196937561), ('better', 0.5735870003700256), ('good', 0.5692481398582458), ('employment', 0.568367600440979), ('because', 0.5603122115135193), ('working', 0.5544289946556091), ('even', 0.5500229001045227)] [('jobs', 0.6257916688919067), ('office', 0.584953248500824), ('administration', 0.583185613155365), ('doing', 0.5742584466934204), ('done', 0.5696188807487488), ('working', 0.5639634728431702), ('clinton', 0.5559402108192444), ('work', 0.5546744465827942), ('staff', 0.5506452322006226), ('hiring', 0.5501008033752441)]

SOLUTION BEGIN

We compare "black" and "white", colours associated with race and racial discrimination. For "black" + "job" the model returns words like "work", "hiring", and "employment", while for "white" + "job" it returns "office", "administration", and "staff". Associating white people with office and administrative work, and black people with being hired for labour, reflects a racial stereotype learned by the model.

SOLUTION END

Question 2.9: Thinking About Bias [written] (2 points)

a. Give one explanation of how bias gets into the word vectors. Briefly describe a real-world example that demonstrates this source of bias.

SOLUTION BEGIN

Bias gets into the word vectors through the training data: the corpora span a long history of text that records many inequalities, and the model learns those associations. For example, there are many more news stories about black people living in slums and white people working in government, and the vectors absorb that imbalance.

SOLUTION END

b. What is one method you can use to mitigate bias exhibited by word vectors? Briefly describe a real-world example that demonstrates this method.

SOLUTION BEGIN

We could retrain (or fine-tune) the vectors on newer text that reflects more current attitudes, for example rebuilding the embeddings from a recent, more balanced corpus rather than decades-old news archives.

SOLUTION END

Submission Instructions

  1. Click the Save button at the top of the Jupyter Notebook.

  2. Select Cell -> All Output -> Clear. This will clear all the outputs from all cells (but will keep the content of all cells).

  3. Select Cell -> Run All. This will run all the cells in order, and will take several minutes.

  4. Once you've rerun everything, select File -> Download as -> PDF via LaTeX (If you have trouble using "PDF via LaTex", you can also save the webpage as pdf. Make sure all your solutions especially the coding parts are displayed in the pdf, it's okay if the provided codes get cut off because lines are not wrapped in code cells).

  5. Look at the PDF file and make sure all your solutions are there, displayed correctly. The PDF is the only thing your graders will see!

  6. Submit your PDF on Gradescope.