Kernel: Python 2
import pandas as pd
import json

data = pd.read_csv("../../assets/dataset/stumbleupon.tsv", sep='\t')

# The boilerplate column holds a JSON string; pull out its title and body fields
data['title'] = data.boilerplate.map(lambda x: json.loads(x).get('title', ''))
data['body'] = data.boilerplate.map(lambda x: json.loads(x).get('body', ''))
data.head()
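As a quick sanity check on the extraction above, one can peek at a single boilerplate record (a sketch; it assumes the JSON keys used above):

# Each boilerplate entry is a JSON string; expected keys include 'title' and 'body'
sample = json.loads(data.boilerplate[0])
print(sample.keys())
print(sample.get('title'))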

Predicting "Greenness" Of Content

This dataset comes from StumbleUpon, a web page recommender.

A description of the columns is below:

| FieldName | Type | Description |
|---|---|---|
| url | string | URL of the webpage to be classified |
| title | string | Title of the article |
| body | string | Body text of the article |
| urlid | integer | StumbleUpon's unique identifier for each URL |
| boilerplate | json | Boilerplate text |
| alchemy_category | string | Alchemy category (per the publicly available Alchemy API found at www.alchemyapi.com) |
| alchemy_category_score | double | Alchemy category score (per the publicly available Alchemy API found at www.alchemyapi.com) |
| avglinksize | double | Average number of words in each link |
| commonlinkratio_1 | double | # of links sharing at least 1 word with 1 other link / # of links |
| commonlinkratio_2 | double | # of links sharing at least 1 word with 2 other links / # of links |
| commonlinkratio_3 | double | # of links sharing at least 1 word with 3 other links / # of links |
| commonlinkratio_4 | double | # of links sharing at least 1 word with 4 other links / # of links |
| compression_ratio | double | Compression achieved on this page via gzip (measure of redundancy) |
| embed_ratio | double | Count of number of <embed> usages |
| frameBased | integer (0 or 1) | A page is frame-based (1) if it has no body markup but has a <frameset> markup |
| frameTagRatio | double | Ratio of <iframe> markups over total number of markups |
| hasDomainLink | integer (0 or 1) | True (1) if it contains an <a> whose URL contains the page's domain |
| html_ratio | double | Ratio of tags vs text in the page |
| image_ratio | double | Ratio of <img> tags vs text in the page |
| is_news | integer (0 or 1) | True (1) if StumbleUpon's news classifier determines that this webpage is news |
| lengthyLinkDomain | integer (0 or 1) | True (1) if at least 3 <a> tags' text contains more than 30 alphanumeric characters |
| linkwordscore | double | Percentage of words on the page that are in hyperlink text |
| news_front_page | integer (0 or 1) | True (1) if StumbleUpon's news classifier determines that this webpage is front-page news |
| non_markup_alphanum_characters | integer | Number of alphanumeric characters in the page's text |
| numberOfLinks | integer | Number of <a> markups |
| numwords_in_url | double | Number of words in the URL |
| parametrizedLinkRatio | double | A link is parametrized if its URL contains parameters or has an attached onClick event |
| spelling_errors_ratio | double | Ratio of words not found in wiki (considered to be a spelling mistake) |
| label | integer (0 or 1) | User-determined label. Either evergreen (1) or non-evergreen (0); available for train.tsv only |

Let's try extracting some of the text content.

Create a binary feature indicating whether the title contains 'recipe'. Is the % of evergreen websites higher or lower on pages that have 'recipe' in the title?

# Option 1: Create a function to check for this
def has_recipe(text_in):
    try:
        if 'recipe' in str(text_in).lower():
            return 1
        else:
            return 0
    except:
        return 0

data['recipe'] = data['title'].map(has_recipe)

# Option 2: lambda functions
# data['recipe'] = data['title'].map(lambda t: 1 if 'recipe' in str(t).lower() else 0)

# Option 3: pandas string methods
# (case=False also matches 'Recipe'; na=False handles missing titles,
# matching the behavior of options 1 and 2)
data['recipe'] = data['title'].str.contains('recipe', case=False, na=False).astype(int)
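With the flag in place, the posed question can be answered directly: the mean of the 0/1 label within each group is the share of evergreen pages (a minimal sketch; this computation is not in the original cell):

# % evergreen among pages with and without 'recipe' in the title
print(data.groupby('recipe')['label'].mean())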

Demo: Use of the CountVectorizer

titles = data['title'].fillna('')

from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(max_features=1000,
                             ngram_range=(1, 2),
                             stop_words='english',
                             binary=True)

# Use `fit` to learn the vocabulary of the titles
vectorizer.fit(titles)

# Use `transform` to generate the sample X word matrix - one column per feature (word or n-gram)
X = vectorizer.transform(titles)
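It can help to sanity-check what the vectorizer learned before modeling with it (a quick sketch; `get_feature_names` is the accessor in the older scikit-learn this notebook runs, renamed `get_feature_names_out` in recent versions):

# X is a sparse matrix: one row per title, one column per word or bigram
print(X.shape)
print(vectorizer.get_feature_names()[:10])  # a few of the learned features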

Demo: Build a random forest model to predict evergreenness of a website using the title features

from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=20)

# Use `fit` to learn the vocabulary of the titles
vectorizer.fit(titles)

# Use `transform` to generate the sample X word matrix - one column per feature (word or n-gram)
X = vectorizer.transform(titles).toarray()

y = data['label']

# Note: this notebook runs an older scikit-learn; in sklearn >= 0.18
# the same function lives in sklearn.model_selection
from sklearn.cross_validation import cross_val_score

scores = cross_val_score(model, X, y, scoring='roc_auc')
print('CV AUC {}, Average AUC {}'.format(scores, scores.mean()))
CV AUC [ 0.78695201 0.80649177 0.80522998], Average AUC 0.799557921281
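One caveat worth noting: the vectorizer above is fit on all titles before cross-validation, so each fold's vocabulary is partly informed by its own test rows. Wrapping both steps in a `Pipeline` keeps vectorization inside each training fold (a sketch; it assumes a scikit-learn recent enough that random forests accept sparse input, roughly 0.16+, and the scores should come out similar):

from sklearn.pipeline import make_pipeline

# The vectorizer is re-fit on the training split of each fold,
# so no vocabulary information leaks from the test split
pipe = make_pipeline(
    CountVectorizer(max_features=1000, ngram_range=(1, 2),
                    stop_words='english', binary=True),
    RandomForestClassifier(n_estimators=20))

scores = cross_val_score(pipe, titles.values, y, scoring='roc_auc')
print('Pipeline average AUC {}'.format(scores.mean()))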

Exercise: Build a random forest model to predict evergreenness of a website using the title features and quantitative features

# Use `transform` to generate the sample X word matrix - one column per feature (word or n-gram)
X_text_features = vectorizer.transform(titles)

# Identify the features you want from the original dataset
other_features_columns = ['html_ratio', 'image_ratio']
other_features = data[other_features_columns]

# Stack them horizontally together
# This takes all of the word/n-gram columns and appends on two more columns
# for `html_ratio` and `image_ratio`
from scipy.sparse import hstack
X = hstack((X_text_features, other_features)).toarray()

scores = cross_val_score(model, X, y, scoring='roc_auc')
print('CV AUC {}, Average AUC {}'.format(scores, scores.mean()))

# Which of these features are most important?
model.fit(X, y)

all_feature_names = vectorizer.get_feature_names() + other_features_columns
feature_importances = pd.DataFrame({'Features': all_feature_names,
                                    'Importance Score': model.feature_importances_})
feature_importances.sort_values('Importance Score', ascending=False).head()
CV AUC [ 0.78500263 0.79911166 0.79822481], Average AUC 0.794113032767
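A side note on the `.toarray()` call: it densifies the stacked matrix, which gets memory-hungry as the vocabulary grows. On a scikit-learn recent enough that random forests accept sparse CSR input, the conversion can be skipped (a sketch under that assumption):

# Keep the stacked features sparse; newer forests can fit on CSR matrices
X_sparse = hstack((X_text_features, other_features)).tocsr()
scores = cross_val_score(model, X_sparse, y, scoring='roc_auc')
print('Sparse-input average AUC {}'.format(scores.mean()))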

Exercise: Build a random forest model to predict evergreenness of a website using the body features

body_text = data['body'].fillna('')

# Use `fit` to learn the vocabulary
vectorizer.fit(body_text)

# Use `transform` to generate the sample X word matrix - one column per feature (word or n-gram)
X = vectorizer.transform(body_text).toarray()

scores = cross_val_score(model, X, y, scoring='roc_auc')
print('CV AUC {}, Average AUC {}'.format(scores, scores.mean()))
CV AUC [ 0.83658603 0.84479776 0.83873979], Average AUC 0.840041195125
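A natural follow-up, not part of the original exercise: vectorize titles and bodies with separate vocabularies and stack the two blocks, reusing the `hstack` pattern from the previous exercise (a sketch; whether it beats body text alone is an empirical question):

# Separate vectorizers so titles and bodies each get their own vocabulary
title_vec = CountVectorizer(max_features=1000, ngram_range=(1, 2),
                            stop_words='english', binary=True)
body_vec = CountVectorizer(max_features=1000, ngram_range=(1, 2),
                           stop_words='english', binary=True)

X_combined = hstack((title_vec.fit_transform(titles),
                     body_vec.fit_transform(body_text))).toarray()

scores = cross_val_score(model, X_combined, y, scoring='roc_auc')
print('Title + body average AUC {}'.format(scores.mean()))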

Exercise: Use TfidfVectorizer instead of CountVectorizer - is this an improvement?

titles = data['title'].fillna('')

from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(max_features=1000,
                             ngram_range=(1, 2),
                             stop_words='english')

# Use `fit` to learn the vocabulary of the body text, so the score is
# comparable to the CountVectorizer body-text result above
vectorizer.fit(body_text)

# Use `transform` to generate the sample X word matrix - one column per feature (word or n-gram)
X = vectorizer.transform(body_text).toarray()

scores = cross_val_score(model, X, y, scoring='roc_auc')
print('CV AUC {}, Average AUC {}'.format(scores, scores.mean()))
CV AUC [ 0.84268957 0.85274835 0.8393846 ], Average AUC 0.844940841904
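The average AUC edges up from about 0.840 to about 0.845 on the body text, so TF-IDF is a modest improvement here. To build intuition for why down-weighting ubiquitous terms helps, the learned idf weights can be inspected (a quick sketch):

# Low idf = term appears in many documents (less informative);
# high idf = rare term that TF-IDF weights more heavily
idf = pd.Series(vectorizer.idf_, index=vectorizer.get_feature_names())
print(idf.sort_values().head())  # most common terms
print(idf.sort_values().tail())  # rarest terms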