
Pandas

Pandas is the most popular Python library for data analysis and data structures.


  • Python with Pandas is used across a wide range of academic and commercial domains, including finance, economics, statistics and analytics.

Key Features of Pandas

  • Fast and efficient DataFrame object with default and customized indexing.

  • Tools for loading data into in-memory data objects from different file formats.

  • Data alignment and integrated handling of missing data.

  • Reshaping and pivoting of data sets.

  • Label-based slicing, indexing and subsetting of large data sets.

  • Columns from a data structure can be deleted or inserted.

  • Group-by functionality for aggregation and transformations.

  • High performance merging and joining of data.

  • Time Series functionality.

Data Analysis

  • Raw data → information → preparation → feature selection → model the data

  • Import data (data acquisition) → data preparation (cleaning, feature engineering) → EDA → model the data

Installation

The standard Python distribution doesn't come bundled with the Pandas module. A lightweight way to get it is to install Pandas using the popular Python package installer, pip.

  • pip install pandas

  • If you install the Anaconda Python distribution, Pandas is installed by default. Anaconda (from https://www.continuum.io) is a free Python distribution for the SciPy stack.

Pandas deals with the following three data structures:

  • Series: a one-dimensional array-like structure with homogeneous data.

  • DataFrame: a two-dimensional structure with heterogeneous data.

  • Panel: a three-dimensional data structure with heterogeneous data. A panel is hard to represent graphically, but it can be illustrated as a container of DataFrames. These data structures are built on top of NumPy arrays. (Note: Panel was removed in pandas 0.25; a MultiIndex DataFrame covers the same use case.)

  • Data acquisition → data preparation (data cleaning, data manipulation, data engineering) → raw data → information → insights → actions

  • Install with pip install pandas, then import pandas in your code

  • Let's work with Series

  1. Series (1D): rows

  2. DataFrames (2D): rows and columns

  3. Panel (3D)

import pandas   # or: conda install pandas
a = (1, 23, 23)
a
b = [1, 2, 3]
b
import pandas as pd
a = (1, 2, 3, 4, 5, 6)
b = ["one", "two", "three", "four", "five", "six"]
x = pd.Series(a, dtype=float, index=b)   # dtype=int would also work here
x
# Series from a list
s = pd.Series([1, 2, 3, 4], index=("I", "II", "III", "IV"), dtype=int)
s

Creating a Series from a dictionary

emp = {"A": 8, "B": 9, "C": 10}
details = pd.Series(emp)
details
  • Note: by default the dictionary values become the Series elements and the keys become the index. Since a dictionary is a mapping type, we cannot manipulate the index order the way we can with lists and tuples.

# Changing the order of the index
age = {'ram': 28, 'bob': 19, 'cam': 22}
s = pd.Series(age, index=['bob', 'ram', 'cam', "Lina"])   # Lina is missing, so the result holds NaN and stays float
s
  • Note: the missing value is filled with NaN, and the index labels are matched against the dictionary keys.

  • Let's work with DataFrames

import pandas as pd
# DataFrame from a 1D list
l = ["ashi", "rom", "sid"]
df = pd.DataFrame(l)
df
# DataFrame from a 2D list
data = [['Nokia', 10000], ['Asus', 12000], ['Samsung', 13000], ['Apple', 33000]]
d = pd.DataFrame(data, columns=['Mobile', 'Price'], index=[1, 2, 3, 4])
d

Create a DataFrame containing details of six students in Chem, Phy and Maths, with roll number/name as the index

Note: if no index or columns are passed, the index defaults to range(n), where n is the array length.

import pandas as pd
data = [["A", 34, 23, 56], ["B", 92, 88, 76], ["C", 76, 87, 25], ["D", 45, 78, 99]]
stu = pd.DataFrame(data, columns=["Name", 'Maths', 'Phy', 'Chem'])   # dtype is inferred; forcing int would clash with the Name strings
stu

Creating a DataFrame from Series

import pandas as pd
# Each column is a Series with its own index; pandas aligns them on the union of the indexes
d = {'Chem': pd.Series([30, 70, 35, 25, 26, 67, 77, 67, 89, 98],
                       index=["Raj", 'Ram', 'Asa', 'Pi', 'Chi', 'Ru', "Sita", "Ria", "Gita", "Sia"], dtype=int),
     'Math': pd.Series([18, 26, 35, 40, 55, 89, 79, 100],
                       index=["Raj", 'Ram', 'Pi', 'Chi', 'Ru', "Sita", "Ria", "Gita"], dtype=int),
     'Phy': pd.Series([31, 42, 83, 34, 80, 78, 67, 98],
                      index=["Asa", 'Ram', 'Pi', 'Ru', "Sita", "Gita", "Chi", "Sia"], dtype=int)}
exam = pd.DataFrame(d)
exam1 = exam.copy()   # keep a copy of the data
exam1

Data Preparation

  • Removing or replacing null values

  • Data description

  • Adding new data fields for analysis

  • Feature selection for machine learning, predictions and decisions

# Adding columns
print("Adding a new column using the existing columns in the DataFrame:")
exam1['Total'] = exam1['Chem'] + exam1['Math'] + exam1['Phy']
exam1

Dealing with missing data

  • Check for Missing Values

  • To make detecting missing values easier (and across different array dtypes), Pandas provides the isnull() and notnull() functions, which are also methods on Series and DataFrame objects
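
A quick sketch of both functions on a small Series:

import pandas as pd
import numpy as np

s = pd.Series([1.0, np.nan, 3.0])
print(s.isnull())    # True where the value is missing
print(s.notnull())   # True where the value is present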

exam
exam.isnull().sum()
  • Delete null values (the whole row is dropped)

  • Replace with an exact value (e.g., 2000)

  • Fill with a statistic (mean, median or mode)

exam.isnull()   # check for null values
# Two approaches: 1. fillna with a scalar  2. fill with the mean/median/mode
# mean   - affected by outliers
# median - not affected by outliers, usually the better choice
# mode   - for categorical data
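
As a sketch, the three fill strategies applied to one column of the exam data above (mode() returns a Series, so we take its first value):

exam["Math"].fillna(exam["Math"].mean())      # mean fill - sensitive to outliers
exam["Math"].fillna(exam["Math"].median())    # median fill - robust to outliers
exam["Math"].fillna(exam["Math"].mode()[0])   # mode fill - suits categorical data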

Calculations with Missing Data

  • When summing data, NA is treated as zero. If the data are all NA, older pandas versions returned NA; modern pandas returns 0 by default (pass min_count=1 to get NA instead), as the short demonstration below shows.
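
A small demonstration of this behaviour (assuming a recent pandas version):

import pandas as pd
import numpy as np

print(pd.Series([1.0, np.nan]).sum())                # 1.0 - NA treated as zero
print(pd.Series([np.nan, np.nan]).sum())             # 0.0 - default for an all-NA sum
print(pd.Series([np.nan, np.nan]).sum(min_count=1))  # nan - when at least one real value is required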

Cleaning / Filling Missing Data

  • Pandas provides various methods for cleaning missing values. The fillna function can "fill in" NA values with non-null data in a couple of ways, illustrated in the following sections.

  • Replace NaN with a scalar value. The following program shows how to replace NaN with 0.

exam1 = exam1.fillna(0)
exam1
exam1["T"]=exam1["Math"]+exam1["Phy"]+exam1["Chem"] exam1
exam
# median(1, 2, 3) = 2; mode(1, 2, 2, 3, 4) = 2
# mode        - suits categorical data
# mean/median - suit continuous data; the median is not affected by outliers:
# 1,1,2,3,4,5,67,89,8,9 sorted -> 1,1,2,3,4,5,8,9,67,89; the large values 67 and 89 pull the mean up but leave the median in place
a=exam["Math"].median() exam["Math"]= exam["Math"].fillna(a) b=exam["Phy"].median() exam["Phy"]=exam["Phy"].fillna(b)
exam

Dropping missing values using "dropna()"

  • Rule of thumb for, say, 10,000 records: a column like A with 999 NAs may be dropped; a column like B with 79 NAs can have its NAs replaced; a column like C with 0 NAs needs no treatment. Common dropna() options are sketched below.
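
A sketch of common dropna() options, using the exam frame from above for illustration:

exam.dropna()                  # drop rows containing any NA (default how='any')
exam.dropna(how='all')         # drop a row only if every value is NA
exam.dropna(axis=1)            # drop columns containing NA instead of rows
exam.dropna(thresh=2)          # keep rows that have at least 2 non-NA values
exam.dropna(subset=['Math'])   # consider only the Math column when deciding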

exam1
exam1 = exam1.dropna()
exam1

Replacing error values with generic values using the replace function

df = pd.DataFrame({'one': [10, 20, 30, 40, 50, "ABC"],
                   'AGE': [-19, 1, 30, 40, 50, 60]})
df = df.replace({"ABC": 60, -19: 19})
df
exam
exam1.describe()
exam1.sum()
exam1.mean()
exam1.std()
exam1["Chem"].min()

Stats: Data Description

import pandas as pd
details = {'Brand': pd.Series(['Nokia', 'Asus', "Nokia", "Nokia", 'Samsung', "ABC", 'Micromax', 'Apple', 'MI', 'Zen', "Apple"]),
           'Price': pd.Series([10000, 8000, 12500, 7000, 40000, 12000, 12999, 13999, 59999]),
           'Rating(10)': pd.Series([7, 6.5, 8.5, 9, 8, 9.5, 7, 9])}
d = pd.DataFrame(details)
d
  • Check missing values in the given data

  • Replace the missing values

  • Describe the data

  • Mean, mode and median, column-wise

d.isnull().sum()
d["Price"].mean()
#d["Rating(10)"].describe() d.describe()

The describe() function computes a summary of statistics pertaining to the DataFrame columns.

d.describe(include="object")
# d[d['Brand'] == 'Apple']
# d.head(2)   # displays rows from the top
# d.tail(1)   # displays rows from the bottom
# For categorical columns, describe reports count, unique, top and freq

Let's Practice

  • Check null values for the given data

  • Replace null values with the mean/median/mode

  • Describe the data

Renaming

  • The rename() method allows you to relabel an axis based on some mapping (a dict or Series) or an arbitrary function.

d = d.rename(columns={'Brand': 'Type', 'Avg.Price': 'Price'},
             index={0: 'S0', 1: 'S1', 2: 'S2'})
d
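
rename() also accepts an arbitrary function as the mapper, for example Python's built-in str.upper (a minimal sketch):

d.rename(columns=str.upper)   # relabel every column with its upper-cased name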

Sorting

import pandas as pd
details = {'Brand': pd.Series(['Nokia', 'Asus', "Nokia", "Nokia", 'Samsung', "ABC", 'Micromax', 'Apple', 'MI', 'Zen', "Apple"]),
           'Avg.Price': pd.Series([10000, 8000, 12500, 7000, 40000, 12000, 12999, 13999, 59999, 8000, 76000]),
           'Rating(10)': pd.Series([7, 6.5, 8.5, 9, 8, 9.5, 7, 9])}
d = pd.DataFrame(details)
d
df2 = d.sort_values(by=['Brand', 'Avg.Price'], ascending=[True, True])   # ascending order on both keys
df2

get_dummies()

  • For a DataFrame, you can pass a prefix list with length equal to the number of encoded columns.

  • Returns the DataFrame with one-hot encoded values.

Example: car price vs. "model" and mileage

Label encoding maps each category to an integer:

Type   Encode
A      0
B      1
C      2
D      3

The dummy (one-hot) method creates one indicator column per category:

Orange  Pink  Blue
1       0     0
0       1     0
0       0     1

A simple mapping can also be applied directly, e.g. map({"F": 1, "M": 0}).
import pandas as pd
s = pd.Series(['Orange', 'Pink', 'Blue'])
s1 = s.str.get_dummies()
s1
pd.get_dummies(d.Brand)   # compare with label encoding, e.g. mapping {"A": 0, "B": 1}
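
get_dummies() can also be applied to a whole DataFrame; a sketch that encodes just the Brand column and drops the first level to avoid a redundant (collinear) indicator:

pd.get_dummies(d, columns=["Brand"], drop_first=True)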

Indexing & Selecting Data

  • loc(): label-based indexing. Takes two single labels/lists/ranges separated by ','; the first selects rows, the second selects columns.

  • iloc(): integer-position-based indexing.

loc()

import pandas as pd
details = {'Brand': pd.Series(['Nokia', 'Asus', "Nokia", "Nokia", 'Samsung', "ABC", 'Micromax', 'Apple', 'MI', 'Zen', "Apple"]),
           'Avg.Price': pd.Series([10000, 8000, 22500, 27000, 40000, 12000, 12999, 13999, 59999, 7899, 8999]),
           'Rating(10)': pd.Series([7, 6.5, 8.5, 9, 8, 9.5, 7, 9, 10, 7, 6.9])}
d = pd.DataFrame(details)
d
d.Brand
d.loc[:,"Brand"]
d.iloc[1:, 0]
# d.loc[:, "Brand"]
# d.loc["Nokia"]   # would require "Nokia" to be an index label

About iloc() :

# Select all rows for a specific column
d.iloc[:, 2]
d.iloc[2:5]
# Integer slicing
d.iloc[2:5, 1:3]
# Slicing through lists of positions
d.iloc[[7], [0]]
# Using the column name directly
d.Brand
d["Brand"]

Correlation

Correlation shows the linear relationship between any two arrays of values (Series). There are multiple methods to compute it: pearson (the default), spearman and kendall. A small sketch of all three follows.
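
A minimal sketch of the three methods on two small Series:

import pandas as pd
x = pd.Series([1, 2, 3, 4, 5])
y = pd.Series([2, 4, 5, 4, 5])
print(x.corr(y))                     # pearson (default) - linear relationship
print(x.corr(y, method="spearman"))  # rank-based
print(x.corr(y, method="kendall"))   # ordinal association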


  • Price increases with years (positive correlation)

  • For company ABC, sales decrease over time (negative correlation)

  • Over the last 5 years the tourism industry has shown constant growth of 0.6 percent

  • 30 marks after 12 hours of study versus 12 marks after 5 hours (marks rise with study hours)


d.corr() returns values between -1 and 1: -1 means that as one variable increases the other decreases, 1 means the variables increase together, and 0 means they are not linearly related.
import numpy as np               # implements multi-dimensional arrays and matrices
import pandas as pd              # for data manipulation and analysis
# import pandas_profiling
import matplotlib.pyplot as plt  # plotting library; provides a high-level interface for statistical graphics
%matplotlib inline

d.corr(numeric_only=True)        # numeric_only is required on pandas >= 2.0 because Brand is a string column
d.plot(x="Avg.Price", y="Rating(10)", style="*")
plt.show()
d.corr(numeric_only=True)
Percent change

Series and DataFrames (and, in older pandas, Panels) have the function pct_change(), which compares every element with its prior element and computes the percentage change.

s = pd.Series([1, 2, 4, 8, 6, 4])
print(s.pct_change())

Creating joins in Pandas

  • Full Outer Join

combines the results of both the left and the right outer joins. The joined DataFrame contains all records from both DataFrames and fills in NaN for missing matches on either side. You can perform a full outer join by specifying the how argument as 'outer' in the merge() function:

  • Inner Join

combines only the records common to both DataFrames

  • There should be relevance between the frames being joined: a common field and an analysis requirement

import pandas as pd
d = {'id': ['1', '2', '3', '4', '5', '6'],
     'Color': ['RED', 'GREEN', 'YELLOW', 'BLUE', 'PINK', 'BLACK'],
     'Fruit': ['APPLE', 'BANANA', 'MANGO', 'BERRY', 'MELON', 'GRAPES']}
d1 = pd.DataFrame(d)
d1
z = {'id': ['1', '2', '3', '4', '5', '7'],
     'rating': ['A', 'B', 'C', 'A', 'B', 'A'],
     'COST': [200, 230, 400, 400, 100, 450],
     'Fruit': ['APPLE', 'BANANA', 'MANGO', 'BERRY', 'MELON', 'KIWI'],
     'BUY': ['Y', 'N', 'Y', 'N', 'Y', 'N']}
d2 = pd.DataFrame(z)
d2
join = pd.merge(d1, d2, on=["Fruit","id"], how='outer') ## similary we can use 'inner/outer/right/left join' join
# Selecting a few fields from the frames
join = pd.merge(d1[["Color", "Fruit"]], d2, on="Fruit", how='left')   # similarly: 'inner', 'outer', 'right'
join
# Joining on the common id field
join = pd.merge(d1, d2, on=["id"], how='outer')   # similarly: 'inner', 'left', 'right'
join

How does a simple merge differ from a join?

A plain merge works as an inner join by default

df_merge = pd.merge(d1, d2, on='Fruit')   # how='inner' by default
df_merge

Concatenation

a = {"id": [12, 14, 15],
     'Color': ['brown', 'yellow', "green"],
     'Fruit': ['Pear', 'muskmellon', "Apricot"]}
d3 = pd.DataFrame(a)
d3
d3.iloc[2]
id            15
Color      green
Fruit    Apricot
Name: 2, dtype: object
df_col = pd.concat([d1, d3], axis=0)
df_col.reset_index()
# df_col
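
Two other common concat patterns, sketched with the frames defined above:

pd.concat([d1, d3], axis=0, ignore_index=True)   # stack rows and rebuild a clean 0..n index
pd.concat([d1, d2], axis=1)                      # place frames side by side, aligned on the index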

Frequency table : Crosstab

d2
my_tab = pd.crosstab(index=d2["rating"],   # make a crosstab
                     columns="c1")         # name the count column
my_tab
c = d2["rating"].value_counts().to_frame().reset_index()   # pd.value_counts() is deprecated; call value_counts on the Series
c
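
crosstab can also tabulate one column against another and normalize the counts into proportions; a sketch using the rating and BUY columns of d2:

pd.crosstab(index=d2["rating"], columns=d2["BUY"], normalize="index")   # row-wise proportions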

Split Data into Groups

import pandas as pd
ipl_data = {'Team': ['Riders', 'Riders', 'Devils', 'Devils', 'Kings', 'Kings', 'Kings', 'Kings',
                     'Riders', 'Royals', "MI", 'Royals', 'Riders', "MI", "MI", "GL", "MI"],
            'Rank': [1, 2, 2, 3, 3, 4, 1, 1, 2, 3, 4, 1, 2, 1, 1, 1, 5],
            'Year': [2014, 2015, 2014, 2015, 2014, 2015, 2016, 2017, 2016, 2016, 2014, 2015, 2017, 2018, 2019, 2022, 2022],
            'Points': [876, 789, 863, 673, 741, 812, 756, 788, 694, 701, 607, 804, 690, 890, 900, 950, 600]}
ipl = pd.DataFrame(ipl_data)
ipl.shape
# print(ipl.groupby('Team'))
ipl
c = ipl.groupby('Team').groups
c
a = ipl.groupby(['Team', 'Year']).groups
a
team_data = ipl.groupby(["Team", 'Year'])   # similarly try Year, Rank and Points
team_data.get_group(("Devils", 2014))
# team_data.first()
c=ipl.groupby(["Team",'Year']) z=c.get_group(("MI",2019)) z
ipl[ipl["Team"]=="Devils"]
my_tab = pd.crosstab(index=ipl["Team"],   # make a crosstab
                     columns="count")     # name the count column
my_tab
# Aggregation on groups
grouped = ipl.groupby('Year')
grouped['Points'].agg("mean")   # string names avoid the deprecated NumPy-callable form agg(np.mean)
# Size of each group (number of rows per team)
grouped = ipl.groupby('Team')
print(grouped.size())
grouped = ipl.groupby('Team')
print(grouped['Points'].agg(["sum", "mean", "std"]))

Filtration

  • Filtration filters the data on a defined criterion and returns a subset of the data. The filter() function is used for this; a GroupBy-based sketch follows.
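
A sketch of GroupBy.filter on the ipl data defined earlier: keep only the rows of teams whose mean points exceed 800.

ipl.groupby('Team').filter(lambda g: g['Points'].mean() > 800)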

Lambda function

  • The lambda keyword is used to create anonymous functions

  • This function can have any number of arguments but only one expression, which is evaluated and returned.

  • One is free to use lambda functions wherever function objects are required.

  • Keep in mind that lambda functions are syntactically restricted to a single expression.

z = lambda x, y: (x * x - 2 * y + 3)
print(z(2, 3))
x = lambda a, b, c: (a * b / c)
print(x(5, 3, 2))
%%time
a = [1, 3, 4, 6, 23, 44, 56, 78, 90, 54, 60]
list_e = list(filter(lambda x: (x % 2 == 0), a))
print(list_e)
# Python code to illustrate filter() with a lambda
final_list = list(filter(lambda x: (x > 800), ipl["Points"]))
print(final_list)
ipl[ipl["Rank"].apply(lambda s: s == 1)] ## by directly Acessing Colums
import pandas as pd
top = lambda x: x == 1
ipl[ipl["Rank"].apply(top)]   # by column name
cap = lambda x: x.upper()
ipl['Team'].apply(cap)

datetime.now() gives you the current date and time.

from datetime import datetime   # pd.datetime was removed in pandas 1.0
print(datetime.now())
%%time
from time import sleep
# sleep(5)   # delay function
3 + 4

Input Output

Reading and writing files

Pandas can write to all of the following formats; each to_*() writer has a matching read_*() reader (for example, read_csv pairs with to_csv). A round-trip example follows the list:

  • df.to_csv(...) # csv file

  • df.to_hdf(...) # HDF5 file

  • df.to_pickle(...) # serialized object

  • df.to_sql(...) # to SQL database

  • df.to_excel(...) # to Excel sheet

  • df.to_json(...) # to JSON string

  • df.to_html(...) # render as HTML table

  • df.to_feather(...) # binary feather-format

  • df.to_latex(...) # tabular environment table

  • df.to_stata(...) # Stata binary data files

  • df.to_msgpack(...) # msgpack (serialize) object

  • df.to_gbq(...) # to a Google BigQuery table.

  • df.to_string(...) # console-friendly tabular output.

  • df.to_clipboard(...) # clipboard that can be pasted into Excel
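
A minimal round-trip sketch (the file name demo.csv is hypothetical):

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
df.to_csv("demo.csv", index=False)   # write
df2 = pd.read_csv("demo.csv")        # read back with the matching read_* function
df2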

import os
print(os.getcwd())   # files are resolved relative to the current working directory
# General pattern: pd.read_<type>(r"path\to\file"), e.g. pd.read_csv, pd.read_excel
%%time
import pandas as pd
# movies = pd.read_excel("movies.xlsx")
viz_data = pd.ExcelFile(r"C:\Users\suyashi144893\Documents\data Sets\Exam_data.xlsx").parse("StudentsPerformance")
viz_data
viz_data.head(2)
CPU times: total: 31.2 ms Wall time: 36.2 ms
movies.head(2)
movies.shape
movies.info()
movies.describe(include="object")
movies.isnull().sum()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 1000 entries, 0 to 999 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Rank 1000 non-null int64 1 Title 1000 non-null object 2 Genre 1000 non-null object 3 Description 1000 non-null object 4 Director 1000 non-null object 5 Actors 1000 non-null object 6 Year 1000 non-null int64 7 Runtime (Minutes) 1000 non-null int64 8 Rating 1000 non-null float64 9 Votes 1000 non-null int64 10 Revenue (Millions) 872 non-null float64 11 Metascore 936 non-null float64 dtypes: float64(3), int64(4), object(5) memory usage: 93.9+ KB
Rank                    0
Title                   0
Genre                   0
Description             0
Director                0
Actors                  0
Year                    0
Runtime (Minutes)       0
Rating                  0
Votes                   0
Revenue (Millions)    128
Metascore              64
dtype: int64
n = pd.read_csv(r"C:\Users\suyashi144893\Documents\data Sets\netflix_data.csv")
n.head(2)
n.shape
n.isnull().sum()
n.drop(columns=["director"],inplace=True)
n
n.describe()
n.describe(include="object")
n.shape
n = n.drop_duplicates()   # drop_duplicates returns a new frame; reassign (or use inplace=True)
n.shape
n["country"]=n["country"].fillna("United States") n.isnull().sum()

This fetches data from the bottom (as tail() does).

Movie Trend 2006 - 2016

  1. Describe the data

  2. Drop columns

  3. Check null values

  4. Replace null values
import pymysql
import pandas as pd

# read_sql_query needs a live connection object, not a folder path (hypothetical credentials shown)
con = pymysql.connect(host="localhost", user="user", password="pwd", database="my_db")
with open('my_data.sql', 'r') as query:
    DF = pd.read_sql_query(query.read(), con)
DF.describe()
  • SQL DATA FROM SERVER

import pandas as pd
from sqlalchemy import create_engine

def getData():
    # Parameters
    ServerName = "my_server"
    Database = "my_db"
    UserPwd = "user:pwd"
    Driver = "driver=SQL Server Native Client 11.0"

    # Create the connection
    engine = create_engine('mssql+pyodbc://' + UserPwd + '@' + ServerName + '/' + Database + "?" + Driver)

    sql = "select * from mytable"
    df = pd.read_sql(sql, engine)
    return df

df2 = getData()
print(df2)

Creating a CSV file

empid = [100, 200, 300, 400]
emprole = ["lead", "Trainer", "Consultant", "Sales"]
details = list(zip(empid, emprole))
details
import pandas as pd
df = pd.DataFrame(data=details, index=["ONE", "TWO", "THREE", "FOUR"])
df
df.to_csv("e.csv",header=False,index=False)

Pandas SQL Operations

We can use the pandas read_sql_query function to read the results of a SQL query directly into a pandas DataFrame. The code below executes the same kind of query but returns a DataFrame. It has several advantages over a raw cursor query:

  • It doesn’t require us to create a Cursor object or call fetchall at the end.

  • It automatically reads in the names of the headers from the table.

  • It creates a DataFrame, so we can quickly explore the data.

# sqlite3 ships with the Python standard library, so no separate install is usually needed
# Program to connect to an existing database.
# If the database does not exist it is created, and a connection object is returned.
import sqlite3
conn = sqlite3.connect('D1.db')
print("My first Connection")
# Program to create a table in the previously created database
conn = sqlite3.connect('D1.db')
print("Opened database successfully")
conn.execute('''CREATE TABLE COMPANY3
                (ID INT PRIMARY KEY NOT NULL,
                 NAME TEXT NOT NULL,
                 AGE INT NOT NULL,
                 ADDRESS CHAR(50),
                 SALARY REAL);''')
print("Table created......")
conn.close()
# Create records in the COMPANY3 table created in the above example
conn = sqlite3.connect('D1.db')
print("Opened database successfully")
conn.execute("INSERT INTO COMPANY3 (ID,NAME,AGE,ADDRESS,SALARY) \
              VALUES (7, 'Poy', 30, 'UP', 30000.00 )")
conn.execute("INSERT INTO COMPANY3 (ID,NAME,AGE,ADDRESS,SALARY) \
              VALUES (8, 'Ram', 33, 'FARIDABAD', 18000.00 )")
conn.execute("INSERT INTO COMPANY3 (ID,NAME,AGE,ADDRESS,SALARY) \
              VALUES (9, 'edd', 42, 'NEW DELHI', 22000.00 )")
conn.commit()
print("Records updated successfully")
conn.close()
import pandas as pd
import sqlite3
conn = sqlite3.connect("D1.db")
Q = pd.read_sql_query('''SELECT id, name, address, salary FROM COMPANY3''', conn)
d = pd.DataFrame(Q)#, columns=['ID','NAME','ADDRESS','SALARY'])
import zipfile
import pandas as pd

df = pd.read_csv("car_price_pred")