GitHub Repository: adashofdata/nlp-in-python-tutorial
Path: blob/master/4-Topic-Modeling.ipynb

Topic Modeling

Introduction

Another popular text analysis technique is called topic modeling. The ultimate goal of topic modeling is to find the various topics that are present in your corpus. Each document in the corpus is modeled as a mixture of one or more topics.

In this notebook, we will walk through the steps for Latent Dirichlet Allocation (LDA), one of many topic modeling techniques, and one that was specifically designed for text data.

To use a topic modeling technique, you need to provide (1) a document-term matrix and (2) the number of topics you would like the algorithm to find.

Once the topic modeling technique is applied, your job as a human is to interpret the results and see if the mix of words in each topic makes sense. If it doesn't, you can try changing the number of topics, the terms in the document-term matrix, the model parameters, or even try a different model.

Topic Modeling - Attempt #1 (All Text)

# Let's read in our document-term matrix
import pandas as pd
import pickle

data = pd.read_pickle('dtm_stop.pkl')
data
# Import the necessary modules for LDA with gensim
# Terminal / Anaconda Navigator: conda install -c conda-forge gensim
from gensim import matutils, models
import scipy.sparse

# import logging
# logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
# One of the required inputs is a term-document matrix
tdm = data.transpose()
tdm.head()
# We're going to put the term-document matrix into a new gensim format,
# from df --> sparse matrix --> gensim corpus
sparse_counts = scipy.sparse.csr_matrix(tdm)
corpus = matutils.Sparse2Corpus(sparse_counts)
# Gensim also requires a dictionary of all the terms and their respective
# location in the term-document matrix
cv = pickle.load(open("cv_stop.pkl", "rb"))
id2word = dict((v, k) for k, v in cv.vocabulary_.items())
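As a quick sanity check (not part of the original notebook), you can peek at how gensim represents a document: each one becomes a list of (term_id, term_count) tuples, and id2word maps the ids back to terms.

# Peek at the first document in the gensim corpus
first_doc = next(iter(corpus))
print(first_doc[:10])

# Map the term ids back to the actual words to eyeball the counts
print([(id2word[term_id], count) for term_id, count in first_doc[:10]])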

Now that we have the corpus (term-document matrix) and id2word (dictionary of location: term), we need to specify two other parameters - the number of topics and the number of passes. Let's start the number of topics at 2, see if the results make sense, and increase the number from there.

# Now that we have the corpus (term-document matrix) and id2word (dictionary of location: term),
# we need to specify two other parameters as well - the number of topics and the number of passes
lda = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=2, passes=10)
lda.print_topics()
# LDA for num_topics = 3
lda = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=3, passes=10)
lda.print_topics()
# LDA for num_topics = 4
lda = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=4, passes=10)
lda.print_topics()
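So far we've been judging the topics by eye. If you want a rough quantitative signal to compare topic counts, gensim's CoherenceModel can score each model. A minimal sketch (not part of the original notebook) using UMass coherence, which works directly from the corpus without needing the tokenized texts; treat the scores as a heuristic, not ground truth:

from gensim.corpora import Dictionary
from gensim.models import CoherenceModel

# CoherenceModel expects a gensim Dictionary, so rebuild one from
# the corpus and the id2word mapping
dictionary = Dictionary.from_corpus(corpus, id2word=id2word)

# Score a model at each topic count; UMass values are negative, and
# values closer to zero generally indicate more coherent topics
for k in [2, 3, 4]:
    lda_k = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=k, passes=10)
    cm = CoherenceModel(model=lda_k, corpus=corpus, dictionary=dictionary, coherence='u_mass')
    print(k, cm.get_coherence())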

These topics aren't looking too great. We've already tried modifying the model parameters, so let's try modifying our terms list as well.

Topic Modeling - Attempt #2 (Nouns Only)

One popular trick is to look only at terms that are from one part of speech (only nouns, only adjectives, etc.). Check out the UPenn tag set: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html.
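Before filtering by part of speech, it helps to see what nltk's tagger actually returns. A quick check (this assumes the punkt and averaged_perceptron_tagger resources have already been downloaded via nltk.download):

from nltk import word_tokenize, pos_tag

# Tag a sample sentence; NN* marks nouns and JJ* marks adjectives
print(pos_tag(word_tokenize('My mom tells funny jokes')))
# Expected output (exact tags can vary by nltk version):
# [('My', 'PRP$'), ('mom', 'NN'), ('tells', 'VBZ'), ('funny', 'JJ'), ('jokes', 'NNS')]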

# Let's create a function to pull out nouns from a string of text
from nltk import word_tokenize, pos_tag

def nouns(text):
    '''Given a string of text, tokenize the text and pull out only the nouns.'''
    is_noun = lambda pos: pos[:2] == 'NN'
    tokenized = word_tokenize(text)
    all_nouns = [word for (word, pos) in pos_tag(tokenized) if is_noun(pos)]
    return ' '.join(all_nouns)
# Read in the cleaned data, before the CountVectorizer step
data_clean = pd.read_pickle('data_clean.pkl')
data_clean
# Apply the nouns function to the transcripts to filter only on nouns
data_nouns = pd.DataFrame(data_clean.transcript.apply(nouns))
data_nouns
# Create a new document-term matrix using only nouns
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import CountVectorizer

# Re-add the additional stop words since we are recreating the document-term matrix
add_stop_words = ['like', 'im', 'know', 'just', 'dont', 'thats', 'right', 'people',
                  'youre', 'got', 'gonna', 'time', 'think', 'yeah', 'said']
stop_words = text.ENGLISH_STOP_WORDS.union(add_stop_words)

# Recreate a document-term matrix with only nouns
cvn = CountVectorizer(stop_words=stop_words)
data_cvn = cvn.fit_transform(data_nouns.transcript)
data_dtmn = pd.DataFrame(data_cvn.toarray(), columns=cvn.get_feature_names())
data_dtmn.index = data_nouns.index
data_dtmn
# Create the gensim corpus
corpusn = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtmn.transpose()))

# Create the vocabulary dictionary
id2wordn = dict((v, k) for k, v in cvn.vocabulary_.items())
# Let's start with 2 topics
ldan = models.LdaModel(corpus=corpusn, num_topics=2, id2word=id2wordn, passes=10)
ldan.print_topics()
# Let's try topics = 3
ldan = models.LdaModel(corpus=corpusn, num_topics=3, id2word=id2wordn, passes=10)
ldan.print_topics()
# Let's try 4 topics
ldan = models.LdaModel(corpus=corpusn, num_topics=4, id2word=id2wordn, passes=10)
ldan.print_topics()

Topic Modeling - Attempt #3 (Nouns and Adjectives)

# Let's create a function to pull out nouns and adjectives from a string of text
def nouns_adj(text):
    '''Given a string of text, tokenize the text and pull out only the nouns and adjectives.'''
    is_noun_adj = lambda pos: pos[:2] == 'NN' or pos[:2] == 'JJ'
    tokenized = word_tokenize(text)
    nouns_adj = [word for (word, pos) in pos_tag(tokenized) if is_noun_adj(pos)]
    return ' '.join(nouns_adj)
# Apply the nouns_adj function to the transcripts to filter on nouns and adjectives
data_nouns_adj = pd.DataFrame(data_clean.transcript.apply(nouns_adj))
data_nouns_adj
# Create a new document-term matrix using only nouns and adjectives,
# also remove common words with max_df
cvna = CountVectorizer(stop_words=stop_words, max_df=.8)
data_cvna = cvna.fit_transform(data_nouns_adj.transcript)
data_dtmna = pd.DataFrame(data_cvna.toarray(), columns=cvna.get_feature_names())
data_dtmna.index = data_nouns_adj.index
data_dtmna
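The new piece here is max_df=.8, which drops any term appearing in more than 80% of the documents. A hypothetical toy example of the behavior (demo_docs and demo_cv are made up for illustration):

# Toy corpus: 'the' appears in 3 of 4 documents (75%)
demo_docs = ['the cat sat', 'the dog ran', 'the bird flew', 'a fish swam']

# With max_df=0.7, any term in more than 70% of documents is dropped
demo_cv = CountVectorizer(max_df=0.7)
demo_cv.fit(demo_docs)
print(sorted(demo_cv.vocabulary_))  # 'the' is gone; it would survive at max_df=0.8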
# Create the gensim corpus
corpusna = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtmna.transpose()))

# Create the vocabulary dictionary
id2wordna = dict((v, k) for k, v in cvna.vocabulary_.items())
# Let's start with 2 topics
ldana = models.LdaModel(corpus=corpusna, num_topics=2, id2word=id2wordna, passes=10)
ldana.print_topics()
# Let's try 3 topics
ldana = models.LdaModel(corpus=corpusna, num_topics=3, id2word=id2wordna, passes=10)
ldana.print_topics()
# Let's try 4 topics
ldana = models.LdaModel(corpus=corpusna, num_topics=4, id2word=id2wordna, passes=10)
ldana.print_topics()

Identify Topics in Each Document

Out of the 9 topic models we looked at, the 4-topic model built on nouns and adjectives made the most sense. So let's pull that one down here and run it through some more iterations to get more fine-tuned topics.

# Our final LDA model (for now)
ldana = models.LdaModel(corpus=corpusna, num_topics=4, id2word=id2wordna, passes=80)
ldana.print_topics()
[(0,
  '0.009*"joke" + 0.005*"mom" + 0.005*"parents" + 0.004*"hasan" + 0.004*"jokes" + 0.004*"anthony" + 0.003*"nuts" + 0.003*"dead" + 0.003*"tit" + 0.003*"twitter"'),
 (1,
  '0.005*"mom" + 0.005*"jenny" + 0.005*"clinton" + 0.004*"friend" + 0.004*"parents" + 0.003*"husband" + 0.003*"cow" + 0.003*"ok" + 0.003*"wife" + 0.003*"john"'),
 (2,
  '0.005*"bo" + 0.005*"gun" + 0.005*"guns" + 0.005*"repeat" + 0.004*"um" + 0.004*"ass" + 0.004*"eye" + 0.004*"contact" + 0.003*"son" + 0.003*"class"'),
 (3,
  '0.006*"ahah" + 0.004*"nigga" + 0.004*"gay" + 0.003*"dick" + 0.003*"door" + 0.003*"young" + 0.003*"motherfucker" + 0.003*"stupid" + 0.003*"bitch" + 0.003*"mad"')]

These four topics look pretty decent. Let's settle on these for now.

  • Topic 0: mom, parents

  • Topic 1: husband, wife

  • Topic 2: guns

  • Topic 3: profanity
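To sanity-check these labels, you can pull a few more of the top words per topic than print_topics shows by default; a quick inspection using gensim's show_topic (not in the original notebook):

# Look at the top 15 words in each topic to double-check the labels above
for topic_id in range(4):
    top_words = [word for word, prob in ldana.show_topic(topic_id, topn=15)]
    print(topic_id, ', '.join(top_words))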

# Let's take a look at which topics each transcript contains
corpus_transformed = ldana[corpusna]
list(zip([a for [(a,b)] in corpus_transformed], data_dtmna.index))
[(1, 'ali'), (0, 'anthony'), (2, 'bill'), (2, 'bo'), (3, 'dave'), (0, 'hasan'), (2, 'jim'), (3, 'joe'), (1, 'john'), (0, 'louis'), (1, 'mike'), (0, 'ricky')]

For a first pass at LDA, these topic assignments kind of make sense to me, so we'll call it a day for now.

  • Topic 0: mom, parents [Anthony, Hasan, Louis, Ricky]

  • Topic 1: husband, wife [Ali, John, Mike]

  • Topic 2: guns [Bill, Bo, Jim]

  • Topic 3: profanity [Dave, Joe]
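If you want more than just the single dominant topic per transcript, you can ask the model for the full topic distribution. A short sketch (setting minimum_probability=0.0 keeps every topic in the output):

# Print the full 4-topic mixture for each transcript
for bow, name in zip(corpusna, data_dtmna.index):
    dist = ldana.get_document_topics(bow, minimum_probability=0.0)
    print(name, [(topic, round(prob, 2)) for topic, prob in dist])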

Additional Exercises

  1. Try further modifying the parameters of the topic models above and see if you can get better topics.

  2. Create a new topic model that includes terms from a different part of speech and see if you can get better topics (one possible starting point is sketched below).
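For exercise 2, here's one possible starting point, a sketch rather than the answer: keep verbs (VB*) along with nouns and adjectives, then rebuild the document-term matrix and rerun LDA as in Attempt #3. The nouns_adj_verbs function and data_nav variable are made up for illustration.

# One possible starting point: keep nouns, adjectives, and verbs
def nouns_adj_verbs(text):
    '''Tokenize text and keep only nouns, adjectives, and verbs.'''
    keep = lambda pos: pos[:2] in ('NN', 'JJ', 'VB')
    tokenized = word_tokenize(text)
    kept = [word for (word, pos) in pos_tag(tokenized) if keep(pos)]
    return ' '.join(kept)

# Rebuild the document-term matrix from these tokens, then rerun the LDA steps above
data_nav = pd.DataFrame(data_clean.transcript.apply(nouns_adj_verbs))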