GitHub Repository: suyashi29/python-su
Path: blob/master/SQL for Data Science/Analzing real world data set using Python and SQLpy.ipynb
Kernel: Python 3

Lab: Analyzing a real world data-set with SQL and Python

Introduction

This notebook shows how to store a dataset in a database and analyze it using SQL and Python. In this lab you will:

  1. Understand a dataset of selected socioeconomic indicators in Chicago

  2. Learn how to store data in a Db2 database on an IBM Cloud instance

  3. Solve example problems to practice your SQL skills

Selected Socioeconomic Indicators in Chicago

The city of Chicago released a dataset of socioeconomic data to the Chicago City Portal. This dataset contains a selection of six socioeconomic indicators of public health significance and a “hardship index,” for each Chicago community area, for the years 2008 – 2012.

Scores on the hardship index can range from 1 to 100, with a higher index number representing a greater level of hardship.

A detailed description of the dataset can be found on the city of Chicago's website, but to summarize, the dataset has the following variables:

  • Community Area Number (ca): Used to uniquely identify each row of the dataset

  • Community Area Name (community_area_name): The name of the region in the city of Chicago

  • Percent of Housing Crowded (percent_of_housing_crowded): Percent of occupied housing units with more than one person per room

  • Percent Households Below Poverty (percent_households_below_poverty): Percent of households living below the federal poverty line

  • Percent Aged 16+ Unemployed (percent_aged_16_unemployed): Percent of persons over the age of 16 years that are unemployed

  • Percent Aged 25+ without High School Diploma (percent_aged_25_without_high_school_diploma): Percent of persons over the age of 25 years without a high school education

  • Percent Aged Under 18 or Over 64 (percent_aged_under_18_or_over_64): Percent of population under 18 or over 64 years of age (i.e., dependents)

  • Per Capita Income (per_capita_income_): Community Area per capita income is estimated as the sum of tract-level aggregate incomes divided by the total population

  • Hardship Index (hardship_index): Score that incorporates each of the six selected socioeconomic indicators

In this Lab, we'll take a look at the variables in the socioeconomic indicators dataset and do some basic analysis with Python.

Connect to the database

Let us first load the SQL extension and establish a connection with the database

import sqlalchemy
%load_ext sql
%sql sqlite://
The sql extension is already loaded. To reload it, use: %reload_ext sql
'Connected: @None'
# Remember the connection string is of the format:
# %sql ibm_db_sa://my-username:my-password@my-hostname:my-port/my-db-name
# Enter the connection string for your Db2 on Cloud database instance below,
# i.e. copy everything after db2:// from the URI string in the Service Credentials
# of your Db2 instance. Remove the double quotes at the end.
%sql ibm_db_sa://my-ngl06911:my-t9+gx60xzq31b0mt@dashdb-txn-sbox-yp-dal09-03.services.dal.bluemix.net:50000/BLUDB

Store the dataset in a Table

In many cases the dataset to be analyzed is available as a .CSV (comma separated values) file, perhaps on the internet. To analyze the data using SQL, it first needs to be stored in the database.

We will first read the dataset source .CSV from the internet into a pandas dataframe. Then we need to create a table in our Db2 database to store the dataset. The PERSIST command in SQL "magic" simplifies the process of creating the table and writing the data from a pandas dataframe into it.

import pandas
chicago_socioeconomic_data = pandas.read_csv('https://data.cityofchicago.org/resource/jcxq-k9xf.csv')
%sql PERSIST chicago_socioeconomic_data
* sqlite://
'Persisted chicago_socioeconomic_data'
You can verify that the table creation was successful by making a basic query like:
%sql SELECT * FROM chicago_socioeconomic_data limit 5;
* sqlite:// Done.

Problems

Problem 1

How many rows are in the dataset?
%sql SELECT COUNT(*) FROM chicago_socioeconomic_data;
* sqlite:// Done.

Problem 2

How many community areas in Chicago have a hardship index greater than 50.0?
%sql SELECT COUNT(*) FROM chicago_socioeconomic_data WHERE hardship_index > 50.0;
* sqlite:// Done.

Problem 3

What is the maximum value of hardship index in this dataset?
%sql SELECT MAX(hardship_index) FROM chicago_socioeconomic_data;
* sqlite:// Done.

Problem 4

Which community area has the highest hardship index?

Double-click here for the solution.
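One possible approach, sketched here as standalone Python with an in-memory SQLite table and a few made-up sample rows standing in for the full chicago_socioeconomic_data table (in the notebook itself you would run the same query via the `%sql` magic): a subquery finds the maximum hardship index, and the outer query returns the matching community area.

```python
import sqlite3

# Build a tiny in-memory stand-in for the chicago_socioeconomic_data table.
# The rows below are illustrative sample values, not the real dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chicago_socioeconomic_data "
             "(community_area_name TEXT, hardship_index REAL)")
conn.executemany(
    "INSERT INTO chicago_socioeconomic_data VALUES (?, ?)",
    [("Lincoln Park", 2.0), ("Riverdale", 98.0), ("Loop", 3.0)],
)

# Subquery picks the maximum hardship index; the outer query returns
# the community area with that index.
row = conn.execute(
    "SELECT community_area_name FROM chicago_socioeconomic_data "
    "WHERE hardship_index = (SELECT MAX(hardship_index) "
    "FROM chicago_socioeconomic_data)"
).fetchone()
print(row[0])
```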

Problem 5

Which Chicago community areas have per-capita incomes greater than $60,000?

Double-click here for the solution.
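A sketch of one way to answer this, again using an in-memory SQLite table with illustrative sample rows (not the real data); in the notebook the same WHERE filter would be issued through `%sql` against the persisted table.

```python
import sqlite3

# Sample stand-in table; income values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chicago_socioeconomic_data "
             "(community_area_name TEXT, per_capita_income_ REAL)")
conn.executemany(
    "INSERT INTO chicago_socioeconomic_data VALUES (?, ?)",
    [("Alpha", 75000.0), ("Beta", 61000.0),
     ("Gamma", 25000.0), ("Delta", 9000.0)],
)

# Filter community areas whose per-capita income exceeds $60,000.
rows = conn.execute(
    "SELECT community_area_name FROM chicago_socioeconomic_data "
    "WHERE per_capita_income_ > 60000"
).fetchall()
print([name for (name,) in rows])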

Problem 6

Create a scatter plot using the variables per_capita_income_ and hardship_index. Explain the correlation between the two variables.
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
income_vs_hardship = %sql SELECT per_capita_income_, hardship_index FROM chicago_socioeconomic_data;
plot = sns.jointplot(x='per_capita_income_', y='hardship_index', data=income_vs_hardship.DataFrame())
* sqlite:// Done.
Image in a Jupyter notebook

Insights

You can see that as Per Capita Income rises, the Hardship Index decreases. The points on the scatter plot lie somewhat close to a straight line with a negative slope, so the two variables are negatively correlated.
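The visual impression of a negative relationship can be quantified with a Pearson correlation coefficient. The sketch below uses a small synthetic sample (the values are illustrative, not taken from the dataset); in the notebook you would call `.corr()` on the `income_vs_hardship` dataframe instead.

```python
import pandas as pd

# Synthetic sample: incomes rise while hardship falls (values illustrative).
df = pd.DataFrame({
    "per_capita_income_": [10000, 20000, 40000, 60000, 80000],
    "hardship_index":     [95.0, 70.0, 45.0, 20.0, 6.0],
})

# Pearson correlation; a value near -1 indicates a strong negative
# linear relationship between income and hardship.
r = df["per_capita_income_"].corr(df["hardship_index"])
print(round(r, 2))
```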

Conclusion

You now know how to do basic exploratory data analysis using SQL and Python visualization tools.