GitHub Repository: suyashi29/python-su
Path: blob/master/Natural Language Processing using Python/1 NLP Introduction.ipynb
Kernel: Python 3 (ipykernel)

Natural language processing (NLP)

NLP is a branch of artificial intelligence that helps computers understand, interpret, and manipulate human language. NLP draws on many disciplines, including computer science and computational linguistics, in its pursuit of filling the gap between human communication and computer understanding.

NLTK

  • The NLTK module is a massive toolkit aimed at helping you with the entire Natural Language Processing (NLP) methodology.

  • NLTK will aid you with everything from splitting paragraphs into sentences and splitting up words, to recognizing the parts of speech of those words and highlighting the main subjects, and even helping your machine understand what the text is all about.

Installation

  • conda install -c anaconda nltk

Components of NLP

The five main components of Natural Language Processing are described below; a short code sketch follows the list.

Morphological and Lexical Analysis

  • Lexical analysis deals with a language's vocabulary, that is, its words and expressions.

  • It involves analyzing, identifying, and describing the structure of words.

  • It includes dividing a text into paragraphs, sentences, and words.

  • Individual words are analyzed into their components, and non-word tokens such as punctuation marks are separated from the words.

Semantic Analysis:

  • Semantic analysis assigns meanings to the structures created by the syntactic analyzer.

  • This component maps linear sequences of words into structures.

  • It shows how the words are associated with each other.

Pragmatic Analysis:

  • Pragmatic analysis deals with the overall communicative and social context and its effect on interpretation.

  • It means abstracting or deriving the meaningful use of language in situations.

  • In this analysis, the main focus is always on how what was said is reinterpreted in terms of what was actually meant.

Syntax Analysis:

  • Words are commonly accepted as being the smallest units of syntax.

  • Syntax refers to the principles and rules that govern the sentence structure of any individual language.

Discourse Integration:

  • It means making sense of the context.

  • The meaning of any single sentence depends upon the sentences that precede it, and it may also affect the meaning of the sentence that follows.
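To make the first and fourth components concrete, here is a minimal sketch (assuming NLTK and its punkt and averaged_perceptron_tagger data packages are already installed; the sample text is made up) that performs lexical analysis with sent_tokenize()/word_tokenize() and syntax analysis with pos_tag():

import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
sample = "NLP helps computers understand human language. It draws on many disciplines."
sentences = sent_tokenize(sample)   # lexical analysis: split the text into sentences
words = word_tokenize(sample)       # lexical analysis: split the text into word tokens
tagged = nltk.pos_tag(words)        # syntax analysis: label each token with a part-of-speech tag
print(sentences)
print(tagged)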

NLP and writing systems

The kind of writing system used for a language is one of the deciding factors in determining the best approach for text pre-processing. Writing systems can be:

  • Logographic: a large number of individual symbols represent words. Examples: Japanese, Mandarin Chinese

  • Syllabic: individual symbols represent syllables

  • Alphabetic: individual symbols represent sounds

Challenges

  • Extracting meaning (semantics) from a text is a challenge

  • NLP is dependent on the quality of the corpus. If the domain is vast, it's difficult to understand context.

  • There is a dependence on the character set and language

## lists, sets, dictionaries, tuples, strings
import nltk  # nltk.download()
  • First step: conda install -c anaconda nltk (or pip install nltk)

  • Second step: import nltk and then run nltk.download()
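As a small sketch (the exact set of packages is an assumption based on the examples in this notebook), you can also download just the individual data packages used later instead of opening the full nltk.download() chooser:

import nltk
# Download only the resources used later in this notebook (this package list is an
# assumption; calling nltk.download() with no arguments opens the interactive downloader)
for pkg in ["punkt", "stopwords", "wordnet", "averaged_perceptron_tagger",
            "maxent_ne_chunker", "words"]:
    nltk.download(pkg)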

NLP libraries in Python

  • NLTK

  • Gensim (topic modelling, document summarization)

  • CoreNLP (linguistic analysis)

  • spaCy

  • TextBlob

  • Pattern (web mining)

Tokenizing Words & Sentences

Text can be split into sentences and words using the methods sent_tokenize() and word_tokenize() respectively.

from nltk.tokenize import word_tokenize
E_TEXT = "Hello-Hello , - i am Suyashi Raiwani"
a = word_tokenize(E_TEXT)
# type(a)
a
['Hello-Hello', ',', '-', 'i', 'am', 'Suyashi', 'Raiwani']
from nltk.tokenize import sent_tokenize
# sentence boundaries are detected at !, ? and .
S2_TEXT = "Positive thinking! You know is all. A matter of habits If you are? not quite a positive thinker Change Yourself?"
print(sent_tokenize(S2_TEXT))
## type(sent_tokenize(E_TEXT))
['Positive thinking!', 'You know is all.', 'A matter of habits If you are?', 'not quite a positive thinker Change Yourself?']

Quick Practice

  • Do you know customer and target audience reviews can be analyzed? You can use this! to create a roadmap of features and products.

Convert the above paragraph into word tokens and sentence tokens (one possible solution is sketched below).
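A rough solution sketch for the practice, assuming the paragraph above is used verbatim as the input string:

from nltk.tokenize import sent_tokenize, word_tokenize
practice = ("Do you know customer and target audience reviews can be analyzed? "
            "You can use this! to create a roadmap of features and products.")
print(sent_tokenize(practice))   # sentence tokens, split on ?, ! and .
print(word_tokenize(practice))   # word tokens, including punctuation tokens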

a = [1, 2]
import numpy as np
b = np.array(a)
b
b + b
array([2, 4])
a + a
b + b
%%time
a = [1, 2, 3]
b = np.array(a)
b + b
b * b
b // b
b / b
## store the words and sentences and typecast them into an array
from nltk.tokenize import sent_tokenize, word_tokenize
import numpy as np
data = "All work and no play makes jack dull boy. All work and no play makes jack a dull boy."
phrases = sent_tokenize(data)
words = word_tokenize(data)
new_array = np.array(words)
new_array
# print(type(new_array))
array(['All', 'work', 'and', 'no', 'play', 'makes', 'jack', 'dull', 'boy', '.', 'All', 'work', 'and', 'no', 'play', 'makes', 'jack', 'a', 'dull', 'boy', '.'], dtype='<U5')

Stop Words

  • To process text, we need a way to convert words into values such as numbers or signal patterns. The process of converting data to something a computer can understand is referred to as "pre-processing." One of the major forms of pre-processing is filtering out useless data. In natural language processing, useless words (data) are referred to as stop words.

  • For now, we'll consider stop words to be words that carry no meaning on their own, and we want to remove them.

  • We can do this easily by storing a list of words that we consider to be stop words. NLTK ships with a set of words that it considers stop words; you can access it via the NLTK corpus.

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
a = "I think i that Learning DATA Science will bring a big leap in your Carrier Profile. Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from noisy, structured and unstructured data, and apply knowledge from data across a broad range of application domains"
word_tokens = word_tokenize(a)
print(word_tokens)
print("Length of words = ", len(word_tokens))
['I', 'think', 'i', 'that', 'Learning', 'DATA', 'Science', 'will', 'bring', 'a', 'big', 'leap', 'in', 'your', 'Carrier', 'Profile', '.', 'Data', 'science', 'is', 'an', 'interdisciplinary', 'field', 'that', 'uses', 'scientific', 'methods', ',', 'processes', ',', 'algorithms', 'and', 'systems', 'to', 'extract', 'knowledge', 'and', 'insights', 'from', 'noisy', ',', 'structured', 'and', 'unstructured', 'data', ',', 'and', 'apply', 'knowledge', 'from', 'data', 'across', 'a', 'broad', 'range', 'of', 'application', 'domains'] Length of words = 58
stop_words1 = set(stopwords.words('english'))  # the built-in set of English stop words
word_tokens = word_tokenize(a)
filtered_sentence = [w for w in word_tokens if not w in stop_words1]
print(filtered_sentence)
# print(word_tokens)
print("The number of words stopped :", (len(word_tokens) - len(filtered_sentence)))
print("Length of words = ", len(filtered_sentence))
['I', 'think', 'Learning', 'DATA', 'Science', 'bring', 'big', 'leap', 'Carrier', 'Profile', '.', 'Data', 'science', 'interdisciplinary', 'field', 'uses', 'scientific', 'methods', ',', 'processes', ',', 'algorithms', 'systems', 'extract', 'knowledge', 'insights', 'noisy', ',', 'structured', 'unstructured', 'data', ',', 'apply', 'knowledge', 'data', 'across', 'broad', 'range', 'application', 'domains'] The number of words stopped : 18 Length of words = 40
b=["I",".",",",",","?",":"] #Creating your own Stop word list stop_words1=list(stop_words1) stop_words2 = b #downloads the file with english stop words stop_words=stop_words1+stop_words2 word_tokens = word_tokenize(a) filtered_sentence = [w for w in word_tokens if not w in stop_words] print(filtered_sentence) #print(word_tokens) #print(filtered_sentence) print("The number of words stopped :",(len(word_tokens)-len(filtered_sentence))) print ("Lenghth of words filtered sentence = ",len(filtered_sentence))
['think', 'Learning', 'DATA', 'Science', 'bring', 'big', 'leap', 'Carrier', 'Profile', 'Data', 'science', 'interdisciplinary', 'field', 'uses', 'scientific', 'methods', 'processes', 'algorithms', 'systems', 'extract', 'knowledge', 'insights', 'noisy', 'structured', 'unstructured', 'data', 'apply', 'knowledge', 'data', 'across', 'broad', 'range', 'application', 'domains'] The number of words stopped : 24 Length of words filtered sentence = 34

STEMMING

A word stem is the base part of a word. Stemming is a sort of normalization, but a linguistic one.

  • For example, the stem of the word Using is use.

from nltk.stem import PorterStemmer
ps = PorterStemmer()  # defining the stemmer
s_words = ["Aims", "Aims", "Aimed", "Aimmer", "Aiming", "Aim", "Calls", "Caller", "Calling", "Call", "Called"]
for i in s_words:
    print(ps.stem(i))
aim aim aim aimmer aim aim call caller call call call
from nltk.stem import PorterStemmer
ps = PorterStemmer()  # defining the stemmer
s_words = ["Dance", "dances", "Dancing", "dancer", "dances", "danced", "Goods", "Good", "sings", "singings", "that"]
for i in s_words:
    print(ps.stem(i))
danc danc danc dancer danc danc good good sing sing that

help(nltk)

Part of Speech tagging

This means labeling the words in a sentence as nouns, adjectives, verbs, and so on.


import nltk
from nltk.tokenize import PunktSentenceTokenizer
from nltk import pos_tag
document = 'Whether you\'re new to DataScience or an paracetamol , it\'s easy to learn and use Python.Are you Good enough in Prgramming?'
sentences = nltk.sent_tokenize(document)
for sent in sentences:
    print(nltk.pos_tag(nltk.word_tokenize(sent)))
[('Whether', 'IN'), ('you', 'PRP'), ("'re", 'VBP'), ('new', 'JJ'), ('to', 'TO'), ('DataScience', 'NNP'), ('or', 'CC'), ('an', 'DT'), ('paracetamol', 'NN'), (',', ','), ('it', 'PRP'), ("'s", 'VBZ'), ('easy', 'JJ'), ('to', 'TO'), ('learn', 'VB'), ('and', 'CC'), ('use', 'VB'), ('Python.Are', 'NNP'), ('you', 'PRP'), ('Good', 'NNP'), ('enough', 'RB'), ('in', 'IN'), ('Prgramming', 'NNP'), ('?', '.')]
sentences
## from nltk.corpus import state_union
from nltk.tokenize import PunktSentenceTokenizer
document = 'Whether you\'re new to DataScience or an experienced, it\'s easy to learn and use Python.'
sentences = nltk.sent_tokenize(document)
data = []
for sent in sentences:
    data = data + nltk.pos_tag(nltk.word_tokenize(sent))
# print every coordinating conjunction (tag 'CC') found in the tagged words
for word in data:
    if 'CC' in word[1]:
        print(word)

Get synonyms/antonyms using WordNet

  • WordNet’s structure makes it a useful tool for computational linguistics and natural language processing

  • WordNet superficially resembles a thesaurus, in that it groups words together based on their meanings

# First, you're going to need to import wordnet:
from nltk.corpus import wordnet
# Then, we're going to use the term "Work" to find synsets like so:
syns = wordnet.synsets("Work")
# An example of a synset:
print(syns[0].name())
# Just the word:
print(syns[0].lemmas()[0].name())
# Definition of that first synset:
print(syns[0].definition())
# Examples of the word in use in sentences:
print(syns[0].examples())
work.n.01 work activity directed toward making or doing something ['she checked several points needing further work']
# Then, we're going to use the term "Generative" to find synsets like so:
syns = wordnet.synsets("Generative")
# An example of a synset:
print(syns[0].name())
# Just the word:
print(syns[0].lemmas()[0].name())
# Definition of that first synset:
print(syns[0].definition())
# Examples of the word in use in sentences:
print(syns[0].examples())
import nltk
from nltk.corpus import wordnet
synonyms = []
antonyms = []
for syn in wordnet.synsets("Sound"):
    for l in syn.lemmas():
        synonyms.append(l.name())
        if l.antonyms():
            antonyms.append(l.antonyms()[0].name())
print("Similar words =", set(synonyms))
print("opposite =", set(antonyms))
Similar words = {'vocalise', 'wakeless', 'phone', 'good', 'level-headed', 'effectual', 'profound', 'strait', 'audio', 'vocalize', 'sound', 'intelligent', 'auditory_sensation', 'speech_sound', 'levelheaded', 'healthy', 'voice', 'fathom', 'go', 'well-grounded', 'heavy', 'reasoned', 'legal'} opposite = {'unsound', 'silence', 'devoice'}

Filtering Duplicate Words Using Sets

## Sets
s = {1, 2, 33, 33, 44, 0, -5}
s
import nltk
word_data = "The python is a a python data analytics language"
# First, word tokenization
nltk_tokens = nltk.word_tokenize(word_data)
# Applying set to remove duplicates (order is not preserved)
no_order = list(set(nltk_tokens))
print(no_order)
['language', 'a', 'data', 'python', 'analytics', 'is', 'The']
# Remove duplicates while preserving the original word order
ordered_tokens = set()
result = []
for word in nltk_tokens:
    if word not in ordered_tokens:
        ordered_tokens.add(word)
        result.append(word)
print(result)

Lemmatization

Lemmatizing reduces words to their core meaning, but it will give you a complete English word that makes sense on its own instead of just a fragment of a word like 'danc'.

import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
lem = WordNetLemmatizer()
a = "I have yellow and Black scarves. I love to wear scarf"
words = word_tokenize(a)
a
lemmatized_words = [lem.lemmatize(word) for word in words]
lemmatized_words
from nltk.stem import PorterStemmer
ps = PorterStemmer()  # defining the stemmer
s_words = ["Dances", "dances", "Dancing", "dancer", "dances", "danced", "ddd", "Sang", "sings", "singings", "that"]
s_words1 = ["dancess", "dances", "dancing", "dancer", "dances", "danced", "ddd"]
for i in s_words1:
    print(ps.stem(i))
s = "dancess dances dancing dancer dances danced" words = word_tokenize(s) lm = [lem.lemmatize(word) for word in words] lm

The lemmatizer treats words as nouns by default; for other parts of speech, pass the POS tag explicitly.

a="worst" lem.lemmatize(a,pos ="a") #lemmatizer.lemmatize("worst", pos="a") #y
s="these places have many worst wolves. My friends love nicer to visit Zoo.All of Us goods were wearing beautiful dressess" words = word_tokenize(s) lemmatized_words = [lem.lemmatize(word) for word in words] lemmatized_words
from nltk.tokenize import word_tokenize
d = "Good nice , India is place for young people, suyashi are you from Delhi? I love this place"
quote = word_tokenize(d)
quote
# next step is to tag those words by part of speech:
import nltk
# nltk.download("averaged_perceptron_tagger")
pos_tags = nltk.pos_tag(quote)
pos_tags

A chunk grammar is a combination of rules describing how sentences should be chunked. It typically uses regular expressions over part-of-speech tags.

  • According to the rule defined below, your chunks:

  • Start with an optional (?) determiner (DT)

  • Can have any number (*) of adjectives (JJ)

  • End with a proper noun (NNP)

grammar = "JJ:{<DT>?<JJ>*<NNP>}" #NP stands for noun phrase.
chunk_parser = nltk.RegexpParser(grammar)
tree = chunk_parser.parse(pos_tags)
tree
tree.draw()

Using Named Entity Recognition (NER)

  • Named entities are noun phrases that refer to specific locations, people, organizations, and so on.

  • With named entity recognition, you can find the named entities in your texts and also determine what kind of named entity they are

  • You can use nltk.ne_chunk() to recognize named entities.

nltk.download("maxent_ne_chunker") nltk.download("words") tree = nltk.ne_chunk(pos_tags)
quote = """ Suyashi has a cat form United States of America. Delhi bought it from Isha . Suyashi lives in India. """

Now create a function to extract named entities

from nltk.tokenize import word_tokenize

def extract_ne(quote):
    words = word_tokenize(quote, language="english")
    tags = nltk.pos_tag(words)
    tree = nltk.ne_chunk(tags, binary=True)
    return set(
        " ".join(i[0] for i in t)
        for t in tree
        if hasattr(t, "label") and t.label() == "NE"
    )
extract_ne(quote)

Extracting email IDs from data

import re  # the re module is part of the Python standard library, so no separate install is needed
text = "Please contact me at [email protected] for further information." + \
       " You can also give feedback at [email protected], also share your details at [email protected]"
emails = re.findall(r"[a-z0-9\.\-+_]+@[a-z0-9\.\-+_]+\.[a-z]+", text)
print(emails)

The corpora with NLTK

  • The NLTK corpus is a massive dump of all kinds of natural language data sets
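As a quick illustration (assuming the gutenberg data package has been downloaded, e.g. via nltk.download('gutenberg')), the bundled corpora can be read directly; the next cell shows how to read your own text files as a corpus instead:

import nltk
# nltk.download("gutenberg")  # assumed to have been run once already
from nltk.corpus import gutenberg
print(gutenberg.fileids()[:5])                   # a few of the bundled plain-text files
print(gutenberg.words('austen-emma.txt')[:10])   # first ten word tokens of one file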

import nltk
from nltk.corpus import PlaintextCorpusReader
corpus_root = r"C:\Users\suyashi144893\Documents\Python Analytics\Natural Language Processing using Python"
filelists = PlaintextCorpusReader(corpus_root, '.*')
filelists.fileids()
file = filelists.words('Speach2.txt')
file
type(file)