UBC-DSCI
GitHub Repository: UBC-DSCI/dsci-100-assets
Path: blob/master/2019-fall/materials/worksheet_02/worksheet_02.ipynb
Kernel: R

Worksheet 2: Introduction to Reading Data

You can read more about course policies on the course website.

Lecture and Tutorial Learning Goals:

After completing this week's lecture and tutorial work, you will be able to:

  • define the following:

    • absolute file path

    • relative file path

    • URL

  • read data into R using a relative path and a URL

  • compare and contrast the following functions:

    • read_csv

    • read_tsv

    • read_csv2

    • read_delim

    • read_excel

  • match the following tidyverse read_* function arguments to their descriptions:

    • file

    • delim

    • col_names

    • skip

  • choose the appropriate tidyverse read_* function and function arguments to load a given plain text tabular data set into R

  • use the readxl library's read_excel function and arguments to load a sheet from an Excel file into R

  • connect to a database using the DBI library's dbConnect function

  • list the tables in a database using the DBI library's dbListTables function

  • create a reference to a database table that is queriable using the tbl function from the dbplyr library

  • retrieve data from a database query and bring it into R using the collect function from the dbplyr library

  • optional: scrape data from the web

    • read/scrape data from an internet URL using the rvest html_nodes and html_text functions

    • compare downloading tabular data from a plain text file (e.g. *.csv) from the web versus scraping data from a .html file

This worksheet covers parts of Chapter 2 of the online textbook. You should read this chapter before attempting the worksheet.

### Run this cell before continuing.
library(tidyverse)
library(repr)
library(readxl)
source("tests_worksheet_02.R")

1. Comparing Absolute Paths, Relative Paths, and URLs

Question 1.1 Multiple Choice:
{points: 1}

If you needed to read a file using an absolute path, what would be the first symbol in your argument (...) when using the read_csv function?

A. read_csv(">...")

B. read_csv(";...")

C. read_csv("...")

D. read_csv("/...")

Assign your answer to an object called answer1.

# Assign your answer to an object called: answer1
# Make sure the correct answer is an uppercase letter.
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_1.1()

Question 1.2 True or False:
{points: 1}

The file argument in the read_csv function that uses an absolute path can never look like that of a relative path.

Assign your answer to an object called answer2.

# Assign your answer to an object called: answer2
# Make sure the correct answer is written in lower-case (true / false)
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_1.2()

Question 1.3 Match the following arguments with the corresponding path that they represent:
{points: 1}

Definitions

A. /Users/my_user/Desktop/UBC/BIOL363/SciaticNerveLab/sn_trial_1.xlsx

B. https://www.ubc.ca

C. file_1.csv

D. /Users/name/Documents/Course_A/homework/my_first_homework.docx

E. homework/my_second_homework.docx

F. https://www.random_website.com

Path Types

  1. absolute

  2. relative

  3. URL

For every argument, create an object using the letter associated with the example and assign it to the corresponding number from the list of path types. For example: B <- 1

# Assign your answer to a letter: A, B, C, D, E, F
# Make sure the correct answer is a number from 1-3
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_1.3()

Question 1.4 Multiple Choice:
{points: 1}

If the absolute path to a data file looks like this: /Users/my_user/Desktop/UBC/BIOL363/SciaticNerveLab/sn_trial_1.xlsx

What would the relative path look like if the working directory (i.e., the folder containing the Jupyter notebook from which you are running your R code) is now the UBC folder?

A. sn_trial_1.xlsx

B. /SciaticNerveLab/sn_trial_1.xlsx

C. BIOL363/SciaticNerveLab/sn_trial_1.xlsx

D. UBC/BIOL363/SciaticNerveLab/sn_trial_1.xlsx

E. /BIOL363/SciaticNerveLab/sn_trial_1.xlsx

Assign your answer to an object called answer4.

# Assign your answer to an object called: answer4
# Make sure the correct answer is an uppercase letter.
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_1.4()

Question 1.5 Match the following paths with the most likely kind of data format they contain.
{points: 1}

Paths:

  1. https://www.ubc.ca/datasets/data.db

  2. /home/user/downloads/data.xlsx

  3. data.tsv

  4. examples/data/data.csv

  5. https://en.wikipedia.org/wiki/Normal_distribution

Dataset Types:

A. Excel Spreadsheet

B. Database

C. HTML file

D. Comma-separated values file

E. Tab-separated values file

For every dataset type, create an object using the letter associated with the example and assign it to the corresponding number from the list of paths. For example: F <- 5

# Assign your answer to a letter: A, B, C, D, E
# Make sure the correct answer is a number from 1-5
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_1.5()

2. Argument Modifications to Read Data

Reading files is one of the first steps in wrangling data, and consequently read_csv is a crucial function. However, despite how effortlessly it has worked so far, it has its limitations: read_csv expects a particular file layout (comma-separated columns) and cannot handle other formats on its own.

Not all data sets come as perfectly organized as the ones you worked with last week. Time and effort were put into ensuring that those files had headers, that columns were separated by commas, and that no metadata appeared at the beginning.

Now that you understand how to read files located outside (or inside) of your working directory, you can begin to learn the tips and tricks necessary to overcome the limitations of read_csv.
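As a preview of the arguments you are about to meet, here is a minimal sketch of how delim, skip, and col_names let read_delim cope with a messier file. The file below is made up on the spot (a throwaway temporary file, not one of the worksheet's data sets):

```r
library(tidyverse)

# Invent a small file: 2 metadata lines, no header, ";" separators.
path <- tempfile(fileext = ".csv")
writeLines(c("# source: made-up example",
             "# downloaded: 2019-09-01",
             "Canada;7.4",
             "Norway;7.5"), path)

# Skip the metadata lines, supply our own column names,
# and declare the delimiter explicitly.
example_data <- read_delim(path, delim = ";", skip = 2,
                           col_names = c("country", "score"))
example_data
```

The same file would confuse a bare read_csv call, which assumes commas, a header row, and no metadata.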

### Run this cell to learn more about the arguments used in read_csv
### Reading over the help file will assist with the next question.
?read_csv

Question 2.1 Match the following definitions with the corresponding arguments used in read_csv:
{points: 1}

Definitions

G. Character that separates columns in your file.

H. Specifies whether or not the first row of data in your file contains the column labels. Also allows you to create a vector that can be used to label columns.

I. This is the file name, path to a file, or URL.

J. Specifies the number of lines which must be ignored because they contain metadata.

Arguments

  1. file

  2. delim

  3. col_names

  4. skip

For every description, create an object using the letter associated with the definition and assign it to the corresponding number from the list of arguments. For example: G <- 1

# Assign your answer to a letter: G, H, I, J
# Make sure the correct answer is a number from 1-4
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_2.1()

Question 2.2 True or False:
{points: 1}

read_csv2 and read_delim can both be used for reading files that have columns separated by ;.

Assign your answer to an object called answer2.2. Make sure to write in all lower-case.

# Assign your answer to an object called: answer2.2
# Make sure the correct answer is written in lower-case (true / false)
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_2.2()

Question 2.3 Multiple Choice:
{points: 1}

read_tsv would be used for files that have columns separated by which of the following:

A. letters

B. tabs

C. numbers

D. commas

Assign your answer to an object called answer2.3.

# Assign your answer to an object called: answer2.3
# Make sure the correct answer is an uppercase letter.
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_2.3()

3. Happiness Report (2017)

This data was taken from Kaggle and ranks countries on happiness based on rationalized factors like economic growth, social support, etc. The data was released by the United Nations at an event celebrating International Day of Happiness. According to the website, the file contains the following information:

  • Country = Name of the country.

  • Region = Region the country belongs to.

  • Happiness Rank = Rank of the country based on the Happiness Score.

  • Happiness Score = A metric measured by asking the sampled people the question: "How would you rate your happiness on a scale of 0 to 10 where 10 is the happiest."

  • Standard Error = The standard error of the happiness score.

  • Economy (GDP per Capita) = The extent to which GDP contributes to the calculation of the Happiness Score.

  • Family = The extent to which Family contributes to the calculation of the Happiness Score.

  • Health (Life Expectancy) = The extent to which Life expectancy contributed to the calculation of the Happiness Score.

  • Freedom = The extent to which Freedom contributed to the calculation of the Happiness Score.

  • Trust (Government Corruption) = The extent to which Perception of Corruption contributes to Happiness Score.

  • Generosity = The extent to which Generosity contributed to the calculation of the Happiness Score.

  • Dystopia Residual = The extent to which Dystopia Residual contributed to the calculation of the Happiness Score.

To clean up the file and make it easier to read, we only kept the country name, happiness score, economy (GDP per capita), life expectancy, and freedom.

Kaggle hosts this information, but it is compiled by the Sustainable Development Solutions Network, which has surveyed these factors nearly every year since 2012 to enable global comparisons that inform political decision making. These landmark surveys are highly recognized and allow countries to learn from one another. One day, they will provide historical insight into the nature of our time.

Question 3.1 Fill in the Blank:
{points: 1}

Trust is the extent to which _______________ contributes to the Happiness Score.

A. Corruption

B. Government Intervention

C. Perception of Corruption

D. Tax Money Designation

Assign your answer to an object called answer3.1.

# Assign your answer to an object called: answer3.1
# Make sure the correct answer is an uppercase letter.
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_3.1()

Question 3.2 Multiple Choice:
{points: 1}

What is the happiness report?

A. Study conducted by the governments of multiple countries.

B. Independent survey of citizens from multiple countries.

C. Study conducted by the UN.

D. Survey given to international students by UBC's psychology department.

Assign your answer to an object called answer3.2.

# Assign your answer to an object called: answer3.2
# Make sure the correct answer is an uppercase letter.
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_3.2()

Question 3.3 Inspecting plain text data
{points: 3}

It is often a good idea to try to "inspect" your data to see what it looks like before trying to load it into R. This will help you figure out the right function to call and what arguments to use. When your data are stored as plain text, you can do this easily with Jupyter (or any text editor).
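If you would rather stay inside R, the base readLines function gives the same kind of raw preview without parsing anything. This sketch inspects a throwaway file written on the spot, not one of the worksheet's files:

```r
# Write a tiny throwaway CSV so the example is self-contained.
path <- tempfile(fileext = ".csv")
writeLines(c("country,score", "Canada,7.4", "Norway,7.5"), path)

# Peek at the first few raw lines: shows the header and delimiter.
readLines(path, n = 2)
```

Seeing the raw lines tells you whether there is a header row, what character separates the columns, and whether any metadata needs skipping.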

Open all the files named happiness_report... in the data folder in your working directory (the worksheet_02 directory) using Jupyter (again, this video shows you how to do this). This will allow you to visualize the files and the organization of your data. Based on your findings, fill in the table below. This table will be very useful to refer back to in the coming weeks.

You'll notice that trying to open one of the files gives you an error of the form Error! ... is not UTF-8 encoded. This means that this data is not stored as human-readable plain text. For this special file, just fill in the File Name and read_* function entries, leaving the others blank.

Double click on this cell to edit and fill out your table! We have filled the first row as an example.

Fill in your answers between the |.

| File Name            | delim | Header (yes/no) | Metadata (yes/no) | # lines to skip | read_* function |
|----------------------|-------|-----------------|-------------------|-----------------|-----------------|
| happiness_report.csv | ,     | yes             | no                | NA              | read_csv        |

YOUR ANSWER HERE

Question 3.4
{points: 1}

Read the file happiness_report.csv in the data folder using the shortest relative path.

Hint: preview the data using Jupyter (as shown in this video) so you know which read_* function and which arguments to use.

Assign the relative path to an object named happiness_report_path, and assign the output of the correct read_* function you call to an object named happiness_report.

# Load happiness_report.csv using read_csv and name it: happiness_report

# your code here
fail() # No Answer - remove if you provide an answer

head(happiness_report, n = 10) # the n = 10 argument tells head to print 10 lines instead of the default 6
test_3.4()
Question 3.5 Multiple Choice:
{points: 1}

If Norway is in "first place" based on the happiness score, at what position is Canada?

A. 3rd

B. 15th

C. 7th

D. 28th

Hint: create a new cell and run happiness_report.

Assign your answer to an object called answer3.5.

# Assign your answer to an object called: answer3.5
# Make sure the correct answer is an uppercase letter.
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_3.5()

Question 3.6.1
{points: 1}

Read in the file happiness_report_semicolon.csv using read_delim and name it happy_semi_df.

For each question in the ranges 3.6.1-5 and 3.7.1-2, fill in the ... in the cells given. Replace fail() with your finished answer. Refer to your table and don't be afraid to ask for help.

# happy_semi_df <- read_delim(file = "data/...", delim = "...")

# your code here
fail() # No Answer - remove if you provide an answer

head(happy_semi_df)
test_3.6.1()

Question 3.6.2
{points: 1}

Read in the file happiness_report_semicolon.csv again, but this time use a different read_* function than read_delim (but that also works). Name it happy_semi_df2.

# happy_semi_df2 <- ...("...")

# your code here
fail() # No Answer - remove if you provide an answer

head(happy_semi_df2)
test_3.6.2()

Question 3.6.3
{points: 1}

Read in the file happiness_report.tsv using the appropriate read_* function and name it happy_tsv.

# happy_tsv <- ...(file = "...")

# your code here
fail() # No Answer - remove if you provide an answer

head(happy_tsv)
test_3.6.3()

Question 3.6.4
{points: 1}

Read in the file happiness_report_metadata.csv using the appropriate read_* function and name it happy_metadata.

# happy_metadata <- ...("data/happiness_report_metadata.csv", skip = ...)

# your code here
fail() # No Answer - remove if you provide an answer

head(happy_metadata)
test_3.6.4()

Question 3.6.5
{points: 1}

Read in the file happiness_report_no_header.csv using the appropriate read_* function and name it happy_header.

# happy_header <- ...("...", col_names = c("country", "happiness_score", "GDP_per_capita", "life_expectancy", "freedom"))

# your code here
fail() # No Answer - remove if you provide an answer

head(happy_header)
test_3.6.5()

Question 3.7
{points: 1}

Earlier when you tried to open happiness_report.xlsx in Jupyter, you received an error of the form Error! /... is not UTF-8 encoded. This happens because Excel spreadsheet files are not stored in plain text, and so Jupyter can't open them with its default text viewing program. This makes them a bit harder to inspect before trying to open in R.

To inspect the data, we will just try to load happiness_report.xlsx using the most basic form of the appropriate read_* function, passing only the filename as an argument. Assign the output to a variable called happy_xlsx.

Note: you can also try to examine .xlsx files with Microsoft Excel or Google Sheets before loading into R.

# happy_xlsx <- ...("...")

# your code here
fail() # No Answer - remove if you provide an answer

head(happy_xlsx)
test_3.7()

Question 3.8
{points: 1}

Opening the data in a text editor showed some clear differences. Do all the data sets look the same once you read them into your R notebook?

yes no

Assign your answer to an object called answer3.8. Make sure to write in all lower-case.

# Assign your answer to an object called: answer3.8
# Make sure the correct answer is written in lower-case (yes / no)
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_3.8()

Question 3.9
{points: 1}

Using the happy_header data set that you read earlier, plot life_expectancy vs. GDP_per_capita.

Assign your answer to an object called header_plot.

Note that the statement "plot A vs. B" usually means to put A on the vertical axis, and B on the horizontal axis.

Make sure to use xlab and ylab to label your axes appropriately.

options(repr.plot.width = 4, repr.plot.height = 3)

# Assign your plot to an object called: header_plot

# your code here
fail() # No Answer - remove if you provide an answer

header_plot
test_3.9()

4. Reading Data from a Database

Investigating the reliability of flights into / out of Boston Logan International Airport

Delays and cancellations seem to be an unavoidable risk of air travel. A missed connection, or hours spent waiting at the departure gate, might make you wonder though: how reliable is air travel, really?

The US Bureau of Transportation Statistics keeps a continually-updated Airline On-Time Performance Dataset that has tracked the scheduled and actual departure / arrival time of flights in the United States from 1987 to the present day. In this section we'll do some exploration of this data to try to answer some of the above questions. The actual data we'll be using was from only the year 2015, and was compiled into the 2015 Kaggle Flight Delays Dataset from the raw Bureau data. But even that dataset is too large to handle in this course (5.8 million flights in just one year!), so the data have been filtered down to flights that either depart or arrive at Logan International Airport (BOS), resulting in around 209,000 flight records.

Our data has the following variables (columns):

  • year

  • month

  • day

  • day of the week (from 1 - 7.999..., with fractional days based on departure time)

  • origin airport code

  • destination airport code

  • flight distance (miles)

  • scheduled departure time (local)

  • departure delay (minutes)

  • scheduled arrival time (local)

  • arrival delay (minutes)

  • diverted? (True/False)

  • cancelled? (True/False)

Question 4.1 True / False
{points: 1}

We can use our dataset to figure out which airline company was the least likely to experience a flight delay in 2015.

Assign your answer to an object called answer4.1

# Make sure the correct answer is written in lower-case (true / false)
# Surround your answer with quotation marks.
# answer4.1 <- ...

# your code here
fail() # No Answer - remove if you provide an answer
test_4.1()

Question 4.2 Multiple Choice
{points: 1}

If we're mostly concerned with getting to our destination on time, which variable in our dataset should we use as the y-axis of a plot?

A. flight distance

B. departure delay

C. origin airport code

D. arrival delay

Assign your answer as a single character to an object called answer4.2. For example, answer4.2 <- 'F'

# answer4.2 <- ...

# your code here
fail() # No Answer - remove if you provide an answer
test_4.2()

Let's start exploring our data. The file is stored in data/flights_filtered.db in your working directory (still the worksheet_02 folder). If you try to open the file in Jupyter to inspect its contents, you'll again run into the Error! ... is not UTF-8 encoded message you got earlier when trying to read an Excel spreadsheet. This is because the file is a database (often denoted by the .db extension), which are usually not stored in plain text.

We'll need more R packages to help us handle this kind of data:

  • the database interface (DBI) package for opening, connecting to, and interfacing with databases

  • the R SQLite (RSQLite) package so that DBI can talk to SQLite databases

    • there are many kinds of databases; the flights_filtered.db database is an SQLite database

  • the dbplyr package for manipulating tables in the database using functions in R

    • without this, in order to retrieve data from the database, we would have to know a whole separate language, Structured Query Language (SQL)

Let's load those now.

# Run this cell
library("DBI")
library("RSQLite")
library("dbplyr")

In order to open a database in R, you need to take the following steps:

  1. Connect to the database using the dbConnect function.

  2. Check what tables (similar to R dataframes, Excel spreadsheets) are in the database using the dbListTables function

  3. Once you've picked a table, create an R object for it using the tbl function

Note: the tbl function returns a reference to a database table, not the actual data itself. This allows R to talk to the database / get subsets of data without loading the entire thing into R!

The next few questions will walk you through this process.
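To see the three steps end to end first, here is a sketch on a throwaway in-memory SQLite database. The table name ("pets") and its contents are invented for illustration only; the worksheet's database lives in a file instead:

```r
library(DBI)
library(RSQLite)
library(dplyr)
library(dbplyr)

# Step 1: connect (":memory:" creates a temporary database for this sketch)
con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "pets", data.frame(name = c("Ada", "Bo"), age = c(3, 5)))

# Step 2: list the tables the database contains
tables <- dbListTables(con)
tables

# Step 3: make a reference to a table -- no data is loaded into R yet
pets <- tbl(con, "pets")
head(pets)
```

In the worksheet you will pass the relative path of the .db file to dbConnect instead of ":memory:", and use whatever table name dbListTables reports.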

Question 4.3.1
{points: 1}

Use the dbConnect function to open and connect to the flights_filtered.db database in the data folder.

Note: we have provided the first argument, RSQLite::SQLite(), to dbConnect for you below. This just tells the dbConnect function that we will be using an SQLite database.

Assign the output to a variable named conn.

# conn <- dbConnect(RSQLite::SQLite(), '...') # replace ... with the database relative path

# your code here
fail() # No Answer - remove if you provide an answer
test_4.3.1()

Question 4.3.2
{points: 1}

Use the dbListTables function to inspect the database to see what tables it contains.

Make a new variable named flights_table_name that stores the name of the table with our data in it

# Use this cell to figure out how to answer the question
# Call the dbListTables function in this cell and take a look at the output
# If you don't know what argument to give dbListTables, use ?dbListTables to find out!

# dbListTables(...) # replace ... with the right argument
# once you've called this and seen the output, insert the output string in the cell below as denoted
# flights_table_name <- '...'

# your code here
fail() # No Answer - remove if you provide an answer
test_4.3.2()

Question 4.3.3
{points: 1}

Use the tbl function to create an R reference to the table so that you can manipulate it with dbplyr functions.

Make a new variable named flight_data based on the output of tbl

# flight_data <- ...

# your code here
fail() # No Answer - remove if you provide an answer
test_4.3.3()

Now that we've connected to the database and created an R table object, we'll take a look at the first few rows and columns of the flight on-time performance data. Even though flight_data isn't a regular R dataframe---it's a database table connection, or specifically a tbl_SQLiteConnection---the functions from the dbplyr package let us treat it like an R dataframe!

So let's try using the head function and see what happens:

# run this code to print the first few rows of flight_data to see what it looks like
head(flight_data)

It works! And---as luck would have it---it also works to use the select and filter functions you've learned about previously.

Note: not all functions that you're familiar with work on database table tbl reference objects. For example, if you try to run nrow (to count the rows) or tail (to get the last rows of the table), you won't get the result you expect.

Question 4.4
{points: 2}

Use the select and filter functions to extract the arrival and departure delay columns for rows where the origin airport is BOS.

Store your answer in a variable called delay_data.

# your code here
fail() # No Answer - remove if you provide an answer
test_4.4()
# Take a look at `delay_data` to make sure it has the two columns we expect.
# run this code
head(delay_data)

You'll notice in the Source: line that the dimension of the table is listed as [?? x 2]. This is because databases do things in the laziest way possible. Since we only asked the database for its head (the first few rows), it didn't bother going through all the rows to figure out how many there are. This sort of laziness can help make things run a lot faster when dealing with large datasets.

Our next task is to visualize our data to see whether there is a difference in delays for arrivals at and departures from BOS. But before we do that, let's figure out just how much data we're working with using the count function.

# run this code to see how many rows there are
count(delay_data)

Yikes---that's a lot of data! If we tried to do a scatter plot of these, we probably wouldn't be able to see anything useful; all the points would be mushed together. Let's try using a histogram instead. A histogram helps us visualize how a particular variable is distributed in a dataset. It does this by separating the data into bins, and then plotting vertical bars showing how many data points fell in each bin.

For example, we could use a histogram to visualize the distribution of waiting times between eruptions of the Old Faithful geyser in Yellowstone National Park, Wyoming with the geom_histogram layer. The bins argument specifies the number of bins to use in the histogram.

ggplot(faithful, aes(x = waiting)) +
    geom_histogram(bins = 40) +
    xlab('Waiting Time (mins)') +
    ylab('Count')

We'll use histograms to visualize the departure delay times and arrival delay times separately.

Question 4.5
{points: 1}

Plot the arrival delay time data as a histogram. You will plot the delay (in hours) separated into 15-minute-wide bins on the x axis. The y axis will show the percentage of flights departing BOS that had that amount of delay during 2015.

You'll do this by finishing the code segment provided below. There are 4 places where ??? appear in the provided code below. Replace each instance of ??? with the correct choice from the following 4 choices:

  • ARRIVAL_DELAY/60

  • 'steelblue'

  • 'Delay (hours)'

  • geom_histogram

Assign the output of ggplot to an object called arrival_delay_plot.

# Below is the code to plot the histogram. Replace each ??? with the correct item in the list above.
# arrival_delay_plot <- ggplot(delay_data, aes(x = ???)) +
#     ???(aes(y = 100 * stat(count) / sum(stat(count))), binwidth = .25, fill = 'lightblue', color = ???) +
#     scale_x_continuous(limits = c(-2, 5)) +
#     ylab('% of Flights') +
#     xlab(???)
# arrival_delay_plot

# your code here
fail() # No Answer - remove if you provide an answer
test_4.5()

Question 4.6
{points: 1}

Plot the departure delay time data as a histogram with the same format as the previous plot.

Hint: copy and paste your code from the previous block! The only thing that will change is the delay data you input.

Assign the output of ggplot to an object called departure_delay_plot.

# your code here
fail() # No Answer - remove if you provide an answer
test_4.6()

Question 4.7
{points: 1}

Look at the two plots you generated. Are departures from or arrivals to BOS more likely to be on time (at most 15 minutes ahead/behind schedule)?

Assign your answer (either 'departures' or 'arrivals' ) to a variable called answer4.7.

# answer4.7 <- ...

# your code here
fail() # No Answer - remove if you provide an answer
test_4.7()

So far, we've done everything using the delay_data database reference object constructed using functions from the dbplyr library. Remember: this isn't the data itself! If we want to save the small data subset that we've constructed to our local machines (perhaps to share it on the web or with collaborators), we'll need to take one last step.

Question 4.8
{points: 1}

Use the collect function to download the arrival / departure times data from the database and store it in a dataframe object called delay_dataframe. Then, use the write_csv function to write the dataframe to a file called delay_data.csv.

Note: there are many possible ways to use write_csv to customize the output. Just use the defaults here!

# If you don't know how to call collect or write_csv, use this cell to
# check the documentation by calling ?collect or ?write_csv
# delay_dataframe <- collect(...)
# write_csv(..., ...)

# your code here
fail() # No Answer - remove if you provide an answer
test_4.8()

5 (Optional). Reading Data from the Internet

How has the World Gross Domestic product changed throughout history?

As defined on Wikipedia, the "Gross world product (GWP) is the combined gross national product of all the countries in the world." Living in our modern age with our roaring (sometimes up and sometimes down) economies, one might wonder how the world economy has changed over history. To answer this question we will scrape data from the Wikipedia Gross world product page.

Your data set will include the following columns:

  • year

  • gwp_value

Specifically we will scrape the 2 columns named "Year" and "Real GWP" in the table under the header "Historical and prehistorical estimates". The end goal of this exercise is to create a line plot with year on the x-axis and GWP value on the y-axis.
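Before tackling the real page, here is the html_nodes / html_text pattern on a miniature inline HTML snippet. The table and its class names below are invented; for the actual Wikipedia page you will find the right CSS selectors with SelectorGadget:

```r
library(rvest)

# A tiny stand-in for a web page, parsed straight from a string.
page <- read_html(
  "<table>
     <tr><td class='year'>1 AD</td><td class='gwp'>0.18</td></tr>
     <tr><td class='year'>1000 AD</td><td class='gwp'>0.12</td></tr>
   </table>")

# Select nodes with a CSS selector, then extract their text.
years <- html_text(html_nodes(page, ".year"))
years
```

The workflow on the real page is identical: read_html on the URL, html_nodes with the selector SelectorGadget gives you, then html_text.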

Question 5.1.0 Multiple Choice:
{points: 1}

Under which of the following headers is the table we will scrape from on the Wikipedia Gross world product page?

A. Gross world product

B. Recent growth

C. Historical and prehistorical estimates

D. See also

Assign your answer to an object called answer5.1.0.

# Assign your answer to an object called: answer5.1.0
# Make sure the correct answer is an uppercase letter.
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_5.1.0()

Question 5.1.1 Multiple Choice:
{points: 1}

What is going to be on the x-axis of the plot we create?

A. compound annual growth rate

B. the value of the gross world product

C. year

Assign your answer to an object called answer5.1.1.

# Assign your answer to an object called: answer5.1.1
# Make sure the correct answer is an uppercase letter.
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_5.1.1()

We need to now load the rvest package to begin our web scraping!

# Run this cell
library(rvest)

Question 5.2
{points: 1}

Use read_html to download information from the URL given in the cell below.

Assign your answer to an object called gwp.

# Assign your answer to an object called: gwp
# Instead of copying the entire URL, you can simply use the object (url) after read_html()
url <- 'https://en.wikipedia.org/wiki/Gross_world_product'

# your code here
fail() # No Answer - remove if you provide an answer

print(gwp)
test_5.2()

Question 5.3

Run the cell below to create the first column of your data set (the year from the table under the "Historical and prehistorical estimates" header). The node was obtained using SelectorGadget.

# your code here
fail() # No Answer - remove if you provide an answer

We can see that although we want numbers for the year, the data we scraped includes the characters AD and \n (a newline character). We will have to do some string manipulation and then convert the years from characters to numbers.

First we use the str_replace_all function to match the string " AD\n" and replace it with nothing "":

# Run this cell.
# Use the stringr library.
library(stringr)

# Replace " AD\n" with nothing.
year <- str_replace_all(string = year, pattern = " AD\n", replacement = "")
print(year)

When we print year, we can see that we were able to remove " AD\n", but we missed that the earliest years also contain " BC\n"! The large BC years also contain commas (",") that we will have to remove. In addition, we need to put a - sign in front of the BC numbers so we don't confuse them with the AD numbers after we convert everything to numbers. We will use a similar strategy to clean all of this up!

This week we provide the code to do this cleaning; next week you will learn to do these kinds of things yourself. After all the string/text manipulation, we use the as.numeric function to convert the text to numbers.

# Run this cell to clean up the year data and convert it to a number.

# Use grepl to select the lines containing " BC\n" and put a - at the beginning of them.
year[grepl(pattern = " BC\n", x = year)] <-
    str_replace_all(string = year[grepl(pattern = " BC\n", x = year)], pattern = "^", replacement = "-")

# Replace all commas with nothing.
year <- str_replace_all(string = year, pattern = ",", replacement = "")

# Extract the minus symbol and the numbers.
year <- as.numeric(str_extract(string = year, pattern = "-?[0-9]+"))
print(year)
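If you are curious how these pieces fit together, here is the same pattern applied to a small made-up toy vector (the values below are for illustration only and are not the worksheet data):

```r
library(stringr)

# Toy vector mimicking the scraped format (made-up values).
toy <- c("500 BC\n", "1,000 AD\n")

# Prefix the BC entries with a minus sign.
toy[grepl(" BC\n", toy)] <- str_replace_all(toy[grepl(" BC\n", toy)], "^", "-")

# Remove commas, then keep only the (possibly negative) number.
toy <- str_replace_all(toy, ",", "")
toy <- as.numeric(str_extract(toy, "-?[0-9]+"))

print(toy)  # [1] -500 1000
```

The regular expression "^" matches the (empty) start of a string, so "replacing" it with "-" prepends a minus sign; "-?[0-9]+" matches an optional minus sign followed by one or more digits.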

Question 5.4
{points: 1}

Create a new vector for the gross world product (GWP) values from the table we are scraping. Don't forget to use SelectorGadget to obtain the CSS selector needed to scrape the GWP values from the table. Assign your answer to an object called gwp_value.

Fill in the ... in the code skeleton in the cell below, and replace the fail() with your finished answer.

Refer to Question 5.3 and don't be afraid to ask for help.

# gwp_value <- ...(html_nodes(gwp, ...))

# your code here
fail() # No Answer - remove if you provide an answer

head(gwp_value)
test_5.4()

Again, looking at the output of head(gwp_value) we see we have some cleaning and type conversions to do. We need to remove the commas, the extraneous trailing information in the first 3 values, and the "\n" character again. We provide the code to do this below:

# Run this cell to clean up the GWP value data and convert it to a number.

# Keep a copy of the raw values in a new variable called gwp_value_clean.
gwp_value_clean <- gwp_value

# Replace all commas with nothing.
gwp_value <- str_replace_all(string = gwp_value, pattern = ",", replacement = "")

# Extract the numbers and decimals.
gwp_value <- as.numeric(str_extract(string = gwp_value, pattern = "[0-9.]+"))
print(gwp_value)
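As a small illustration of what str_extract with the pattern "[0-9.]+" does, here it is applied to a couple of made-up toy strings (not the worksheet data):

```r
library(stringr)

# Toy GWP-style strings (made-up values).
vals <- c("1,234.5 (estimate)\n", "60.71\n")

vals <- str_replace_all(vals, ",", "")             # drop thousands separators
vals <- as.numeric(str_extract(vals, "[0-9.]+"))   # keep first run of digits/dots

print(vals)  # [1] 1234.50 60.71
```

Because str_extract returns only the first match, the trailing "(estimate)" text and the "\n" character are discarded automatically.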

Question 5.5
{points: 1}

Use the tidyverse tibble function to create a data frame named gwp with year and gwp_value as columns. The general form for creating data frames from vectors/lists using the tibble function is as follows:

tibble(COLUMN1_NAME, COLUMN2_NAME, COLUMN3_NAME, ...)
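For example, with two short toy vectors (the vector names here are chosen just for illustration), tibble turns each vector into a column named after the vector:

```r
library(tibble)

# Two toy vectors (made-up names and values).
height <- c(1.2, 3.4)
width  <- c(5.6, 7.8)

# Each vector becomes a column with the same name.
rectangles <- tibble(height, width)
print(rectangles)
```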

# Create a data frame named gwp with columns year and gwp_value.
# Fill in the blanks in the code skeleton provided below.
# ... <- tibble(..., ...)

# your code here
fail() # No Answer - remove if you provide an answer

head(gwp)
test_5.5()

One last piece of data transformation/wrangling before we get to data visualization is to create another column called sqrt_year, which scales the year values so that they are more informative when we plot them (if you look at our year data, we have a lot of years in the recent past, and fewer and fewer as we go back in time). Often you can transform the scale within ggplot (for example, see what we do with gwp_value later on), but the year values are tricky to scale because they contain negative values. So we first make everything positive, then take the square root, and then re-transform the values that should be negative back to negative! We provide the code to do this below.

gwp <- mutate(gwp, sqrt_year = sqrt(abs(year)))
gwp <- mutate(gwp, sqrt_year = if_else(year < 0, sqrt_year * -1, sqrt_year))
head(gwp)
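To see what this two-step recipe does, here it is applied to a toy data frame with one BC (negative) and one AD (positive) year (made-up values, not the worksheet data):

```r
library(dplyr)
library(tibble)

# Toy years: one negative (BC) and one positive (AD).
toy <- tibble(year = c(-1000000, 1600))

# Step 1: square root of the absolute value.
toy <- mutate(toy, sqrt_year = sqrt(abs(year)))

# Step 2: restore the minus sign on the originally negative years.
toy <- mutate(toy, sqrt_year = if_else(year < 0, sqrt_year * -1, sqrt_year))

# sqrt_year is now -1000 and 40.
print(toy)
```

The abs/sqrt pair compresses the huge range of years, and the if_else puts the BC years back on the negative side of the axis.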

Question 5.6
{points: 1}

Create a line plot using the gwp data frame with sqrt_year on the x-axis and gwp_value on the y-axis. We provide the plot code to relabel the x-axis with human-understandable years instead of the transformed ones we plot. Name your plot object gwp_historical. To make a line plot instead of a scatter plot, use the geom_line() function instead of the geom_point() function.

# Assign your answer to an object called: gwp_historical
# Fill in the missing parts of the code below to make the plot
# ... <- ggplot(gwp, aes(x = ..., y = ...)) +
#     geom_line() +
#     scale_y_continuous(trans = 'log10') +
#     scale_x_continuous(breaks = c(-1000, -750, -500, -250, -77.7, 0, 38.7),
#                        labels = c("-1000000", "-562500", "-250000", "-62500", "-5000", "0", "1500")) +
#     ylab("...") +
#     xlab("Year")

options(repr.plot.width = 8, repr.plot.height = 3)

# your code here
fail() # No Answer - remove if you provide an answer

gwp_historical
test_5.6()

Question 5.7
{points: 1}

Looking at the line plot, when does the Gross World Product first start to more rapidly increase (i.e., when does the slope of the line first change)?

A. roughly around year -1,000,000

B. roughly around year -250,000

C. roughly around year -5000

D. roughly around year 1500

Assign your answer to an object called answer5.7.

# Assign your answer to an object called: answer5.7
# Make sure the correct answer is an uppercase letter.
# Surround your answer with quotation marks.
# Replace the fail() with your answer.

# your code here
fail() # No Answer - remove if you provide an answer
test_5.7()