Tutorial 2: Introduction to Reading Data
Lecture and Tutorial Learning Goals:
After completing this week's lecture and tutorial work, you will be able to:
- define the following:
  - absolute file path
  - relative file path
  - URL
- read data into R using a relative path and a URL
- compare and contrast the following functions:
  - `read_csv`
  - `read_tsv`
  - `read_csv2`
  - `read_delim`
  - `read_excel`
- match the following `tidyverse` `read_*` function arguments to their descriptions:
  - `file`
  - `delim`
  - `col_names`
  - `skip`
- choose the appropriate `tidyverse` `read_*` function and function arguments to load a given plain text tabular data set into R
- use the `readxl` library's `read_excel` function and arguments to load a sheet from an Excel file into R
- connect to a database using the `DBI` library's `dbConnect` function
- list the tables in a database using the `DBI` library's `dbListTables` function
- create a reference to a database table that is queriable using `tbl` from the `dbplyr` library
- retrieve data from a database query and bring it into R using the `collect` function from the `dbplyr` library
- use `write_csv` to save a data frame to a `.csv` file
- optional: scrape data from the web
  - read/scrape data from an internet URL using the `rvest` `html_nodes` and `html_text` functions
  - compare downloading tabular data from a plain text file (e.g. `*.csv`) from the web versus scraping data from a `.html` file
Any place you see `...`, you must fill in the function, variable, or data to complete the code. Replace `fail()` with your completed code and run the cell!
1. Happiness Report
As you might remember from `worksheet_02`, we practised loading data from the Sustainable Development Solutions Network's World Happiness Report. That data was the output of their analysis, which calculated each country's happiness score and how much each variable contributed to it. In this tutorial, we are going to look at the data at an earlier stage of the study: the aggregated/averaged values (per country and year) for many different social and health aspects that the researchers anticipated might contribute to happiness (Table 2.1 from this Excel spreadsheet).
The goal for today is to produce a plot of 2017's positive affect scores against healthy life expectancy at birth, with healthy life expectancy at birth on the x-axis and positive affect on the y-axis. For this study, positive affect was defined as the average of three positive affect measures: happiness, laughter and enjoyment. We would also like to convert the positive affect score from a scale of 0 - 1 to a scale from 0 - 10.
- use `filter` to subset the rows where the year is equal to 2017
- use `mutate` to convert the "Positive affect" score from a scale of 0 - 1 to a scale of 0 - 10
- use `select` to choose the "Healthy life expectancy at birth" column and the scaled "Positive affect" column
- use `ggplot` to create our plot of "Healthy life expectancy at birth" (x-axis) and scaled "Positive affect" (y-axis)
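The four steps above can be sketched on a small made-up data frame. This is only an illustration: the column names and values below are hypothetical, not the actual happiness data.

```r
# Sketch of the filter -> mutate -> select -> ggplot pipeline on toy data.
# Column names here are made up; the real data uses different ones.
library(tidyverse)

toy_df <- tibble(
  year = c(2016, 2017, 2017),
  life_expectancy = c(60.1, 62.3, 71.5),
  positive_affect = c(0.55, 0.60, 0.81)   # on a 0 - 1 scale
)

toy_plot_df <- toy_df %>%
  filter(year == 2017) %>%                                   # keep only 2017 rows
  mutate(positive_affect_scaled = positive_affect * 10) %>%  # rescale 0 - 1 to 0 - 10
  select(life_expectancy, positive_affect_scaled)

toy_plot <- ggplot(toy_plot_df, aes(x = life_expectancy, y = positive_affect_scaled)) +
  geom_point() +
  xlab("Healthy life expectancy at birth") +
  ylab("Positive affect (0 - 10 scale)")
```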
Tips for success: Try going through all of the steps on your own, but don't forget to discuss with others (classmates, TAs, or an instructor) if you get stuck. If something is wrong and you can't spot the issue, be sure to read the error message carefully. Since there are a lot of steps involved in working with data and modifying it, feel free to look back at `worksheet_02`.
Question 1.1 Multiple Choice:
{points: 1}
What is the maximum value for the "Positive affect" score (in the original data file that you read into R)?
A. 100
B. 10
C. 1
D. 0.1
E. 5
Assign your answer to an object called `answer1.1`. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. `"F"`).
Question 1.2 Multiple Choice:
{points: 1}
Which column's values will be used to filter the data?
A. countries
B. generosity
C. positive affect
D. year
Assign your answer to an object called `answer1.2`. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. `"F"`).
Question 1.3.0
{points: 1}
Use the appropriate `read_*` function to read in the `WHR2018Chapter2OnlineData` file (look in the `tutorial_02` directory to ensure you use the correct relative path to read it in).

Assign the data frame to an object called `happy_df_csv`.
Question 1.3.1
{points: 1}
Above, you loaded the data from a file we already downloaded and converted to a `.csv` for you. But you can also use the `readxl` R package to load Excel files directly into R. Given that the data we loaded above (`WHR2018Chapter2OnlineData.csv`) was originally sourced from an Excel file on the web, let's now read that Excel file directly into R using the `read_excel` function from that package. This Excel file has multiple sheets; the data we want is on the first one.
Note: `read_excel` does not support a URL as the file path argument, so we need to first download the file and write it to disk using R's `download.file` function, and then we can read that saved Excel file into R using `read_excel`.
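The download-then-read pattern described in the note looks roughly like this sketch. The URL and file name here are placeholders, not the actual spreadsheet's address:

```r
# Sketch: download an Excel file to disk, then read it with readxl.
# The URL below is a placeholder; substitute the real spreadsheet's address.
library(readxl)

url <- "https://example.com/some_spreadsheet.xlsx"                   # hypothetical URL
download.file(url, destfile = "some_spreadsheet.xlsx", mode = "wb")  # wb: write as binary

# read_excel reads the first sheet by default; pass sheet = to choose another
example_df <- read_excel("some_spreadsheet.xlsx", sheet = 1)
```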
To answer the question, fill in the blanks in the code below. If you are unsure, try reading the documentation for the new functions and ask others for help!
Assign the data to an object called `happy_df`.
Look at the column names: they contain spaces! This is not a best practice and will make it difficult to use our tidyverse functions. Run the cell below to use the `make.names` function, which will replace all the spaces with a `.` so we don't have this problem. The `colnames` function is also needed to access the data frame's column names.
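As a sketch of what that cell does, here is `make.names` applied to a tiny made-up data frame:

```r
# Base R sketch: replace spaces in column names with dots using make.names.
df <- data.frame("Positive affect" = c(0.5, 0.7),
                 "Healthy life expectancy" = c(60, 70),
                 check.names = FALSE)        # keep the spaces for the demo
colnames(df) <- make.names(colnames(df))
colnames(df)   # "Positive.affect" "Healthy.life.expectancy"
```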
Question 1.3.2
{points: 1}
Using the scaffolding given in the cell below, `filter`, `mutate`, and `select` the `happy_df` data frame as needed to get it ready to create our desired scatterplot. Recall that we wanted to rescale the "Positive affect" scores so that they fall in the range 0 - 10 instead of 0 - 1. Call the new, re-scaled column `Positive.affect.scaled`.

Assign the data frame containing only the columns we need to create our plot to an object called `reduced_happy_df`.
Question 1.4
{points: 1}
Using the modified data set, `reduced_happy_df`, generate the scatterplot described above, and make sure to label the axes in proper written English.

Assign your plot to an object called `happy_plot`.
Question 1.5
{points: 3}
In one or two sentences, describe what you see in the scatterplot above. Does there appear to be a relationship between healthy life expectancy at birth and positive affect? If so, describe it.
DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.
Question 1.6
{points: 3}
Choose any variable (column) in the data set `happy_df` other than `Positive.affect` to plot against healthy life expectancy at birth. You should NOT scale whichever variable you choose. Ensure that healthy life expectancy at birth is on the x-axis and that you give your axes human-readable labels.

Assign your plot to an object called `happy_plot_2`.
Question 1.7
{points: 3}
In a sentence or two, describe what you see in the scatterplot above. Does there appear to be a relationship between healthy life expectancy at birth and the other variable you plotted? If so, describe it.
DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.
2. Whistler Snow
Skiing and snowboarding are huge in British Columbia, and some of the best slopes for snow sports are quite close by. In fact, the famous mountain town of Whistler is just two hours north of Vancouver. With cold weather and plenty of snowfall, Whistler is an ideal destination for winter sports fanatics.
One thing skiers and snowboarders want is fresh snow! When are they most likely to find this? In the `data` directory, we have two year-long data sets from Environment Canada from the Whistler Roundhouse Station (on Whistler mountain). This weather station is located 1,835 m above sea level.

To answer the question "When are skiers and snowboarders most likely to find fresh snow at Whistler?", you will create a line plot with the date on the x-axis and the total snow per day in centimetres (the column named `Total Snow cm` in the data file) on the y-axis. Given that we have data for two years (2017 & 2018), we will create one plot for each year to see if there is a trend we can observe across the two years.
Question 2.1 Multiple Choice:
{points: 1}
What are we going to plot on the y-axis?
A. total precipitation per day in centimetres
B. total snow on the ground in centimetres
C. total snow per day in centimetres
D. total rain per day in centimetres
Assign your answer to an object called `answer2.1`. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. `"F"`).
Question 2.2.0
{points: 1}
Read in the file named `eng-daily-01012018-12312018.csv` from the `data` directory. Make sure you preview the file to choose the correct `read_*` function and argument values to get the data into R.

Assign your data frame to an object called `whistler_2018`.
Note: You'll see a lot of entries of the form `NA`. This is the symbol R uses to denote missing data. Interestingly, you can do math and make comparisons with `NA`: for example, `NA + 1` is `NA`, `NA * 3` is `NA`, and `NA > 3` is `NA`. Most operations on `NA` return `NA`. This may seem a bit weird, but it makes things much simpler in R since it removes the need to write any special code to handle missing data!
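A few quick base R examples of this behaviour:

```r
# Most operations involving NA return NA.
NA + 1    # NA
NA * 3    # NA
NA > 3    # NA

# Aggregation functions let you drop NAs explicitly with the na.rm argument:
mean(c(1, 2, NA))                 # NA
mean(c(1, 2, NA), na.rm = TRUE)   # 1.5
```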
Question 2.2.1
{points: 1}
Looking at the column names of the `whistler_2018` data frame, you can see we have whitespace in our column names again. Use `make.names` to remove the whitespace to make it easier to use our `tidyverse` functions.
Question 2.3
{points: 1}
Create a line plot with the date on the x-axis and the total snow per day (in cm) on the y-axis by filling in the `...` in the code below. Ensure you give your axes human-readable labels.

Assign your plot to an object called `whistler_2018_plot`.
Question 2.4
{points: 3}
Looking at the line plot above, for 2018, of the months when it snowed, which 2 months had the most fresh snow?
DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.
Question 2.5
{points: 3}
Repeat the data loading and plot creation using the file `eng-daily-01012017-12312017.csv` located in the `data` directory to visualize the same data for the year 2017.

Assign your plot to an object called `whistler_2017_plot`.
Question 2.6
{points: 3}
Looking at the line plot above, for 2017, of the months when it snowed, which 2 months had the most fresh snow?
DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.
Question 2.7
{points: 3}
Are the months with the most fresh snow the same in 2017 as they were in 2018? Hint: you might want to add a code cell where you plot the two plots right after each other so you can easily compare them in one screen view.
You can combine two plots, one atop the other, by using the `plot_grid` function from the `cowplot` package:
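A minimal sketch of `plot_grid`, using two tiny throwaway plots in place of your Whistler plots:

```r
# Sketch: stack two ggplot objects vertically with cowplot's plot_grid.
# p1 and p2 stand in for whistler_2017_plot and whistler_2018_plot.
library(ggplot2)
library(cowplot)

p1 <- ggplot(data.frame(x = 1:3, y = c(1, 3, 2)), aes(x, y)) + geom_line()
p2 <- ggplot(data.frame(x = 1:3, y = c(2, 1, 3)), aes(x, y)) + geom_line()

plot_grid(p1, p2, ncol = 1)   # ncol = 1 puts one plot atop the other
```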
Is there any advantage to looking at 2 years' worth of data? Why or why not?
DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.
3. Reading from a Database
In `worksheet_02`, you'll recall that we opened a database stored in a `.db` file. This involved a lot more effort than just opening a `.csv`, `.tsv`, or any of the other plain text / Excel formats. It was a bit of a pain to use a database in that setting, since we had to use `dbplyr` to translate `tidyverse`-like commands (`filter`, `select`, etc.) into SQL commands that the database understands. We didn't run into this problem in the worksheet, but not all `tidyverse` commands can currently be translated for SQLite databases. For example, with an SQLite database we can compute a mean, but we can't easily compute a median.
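You can see the translation `dbplyr` performs by inspecting a query with `show_query`. Below is a sketch using a throwaway in-memory SQLite database (via the `RSQLite` package) rather than a real remote database:

```r
# Sketch: dbplyr translates dplyr verbs to SQL; show_query reveals the SQL.
library(DBI)
library(dplyr)
library(dbplyr)

con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "mtcars", mtcars)      # load a built-in data set as a table

mtcars_tbl <- tbl(con, "mtcars")         # lazy reference; nothing downloaded yet
mean_query <- mtcars_tbl %>% summarize(avg_mpg = mean(mpg, na.rm = TRUE))
show_query(mean_query)                   # prints the generated AVG(...) SQL
# summarize(med_mpg = median(mpg)) would fail here: SQLite has no median

dbDisconnect(con)
```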
Why should we bother with databases at all then?
Databases become really useful in a large-scale setting:

- they enable storing large datasets across multiple computers with automatic redundancy and backups
- they enable multiple users to access them simultaneously and remotely without conflicts and errors
- they provide mechanisms for ensuring data integrity and validating input
- they provide security to keep data safe
For example: there are around 4 billion Google searches conducted daily as of 2019. Can you imagine if Google stored all of the data from those queries in a single `.csv` file!? Chaos would ensue.
To reap the real benefits of databases, we'll need to move to a more fully-powered one: PostgreSQL. We'll begin by loading the `DBI` and `dbplyr` packages that R uses to talk to databases, as well as the `RPostgres` package that provides the interface between these packages and PostgreSQL databases (note the similarity to the `RSQLite` package from `worksheet_02`).
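The general shape of such a connection looks like the sketch below. Every connection detail here is a placeholder (this code will not actually connect anywhere):

```r
# Sketch: connect to a PostgreSQL database with DBI + RPostgres.
# All connection details below are placeholders, not real credentials.
library(DBI)
library(dbplyr)

con <- dbConnect(RPostgres::Postgres(),
                 host = "db.example.com",   # hypothetical host
                 port = 5432,
                 user = "someuser",
                 password = "somepassword",
                 dbname = "somedb")

dbListTables(con)                    # list the tables available in the database
some_tbl <- tbl(con, "some_table")   # queriable reference; data stays in the database
```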
Investigating Trends in Crowdfunding
Kickstarter is an online crowd-funding site where people can post projects they want to do, but don't have the financial resources required to fund the project on their own. Other users of Kickstarter can pledge money to the project (also called "backing" a project) to help the project become a reality. To persuade people to back a project, the project owner usually offers rewards to the "backers" for their help with funding, which they receive once funding reaches a particular amount.
In this section, we'll investigate how the amount of funding successful projects get has changed over time. We consider a project to be successful if the amount of funds pledged exceeded the goal.
Question 3.0
{points: 1}
Databases are often stored remotely (i.e., not on your computer or on this JupyterHub). Your first task is to load the Kickstarter data from a PostgreSQL database stored remotely on the UBC statistics network.

- URL: `"dsci-100-student.stat.ubc.ca"`
- Port: `5432`
- Username: `"dsci100"`
- Password: `"dsci100"`
- Database Name: `"kickstarter"`
- Table Name: `"projects"`

We've provided the code to do this below. Replace each `...` with one of the items above.
Note 1: Due to the UBC firewall, to get this to work you'll need to be connected to the UBC network or use the UBC VPN. For instructions on how to connect to UBC's VPN service, see this webpage on UBC's IT website.
Note 2: As this database will be used by the entire class, you will only have read access (no write permissions).
Assign the resulting database connection object to `connection` and the project table data to `project_data`.
We can now use the `colnames` function to see what columns are available in the `project_data` table.
Question 3.1
{points: 1}
If we want to compare pledged and goal amounts of funding over time for successful projects in the United States, which columns should we `select` from the table?
A. `id`, `slug`, `pledged`

B. `pledged`, `goal`, `deadline`, `country`

C. `pledged`, `usd_pledged`, `location_id`

D. `currency`, `state`, `country`, `goal`
Assign your answer to an object called `answer3.1`. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. `"F"`).
Question 3.2
{points: 1}
Now we'll visualize the data. In order to do this, we need to take the correct subset of data from the table and use `ggplot` to plot the result. Note that we make the scatter plot slightly transparent (using `alpha = 0.01` in the code below) because there is so much data that it would otherwise be hard to see anything (overplotting).
In the cell below, you'll see some lines of code (currently commented out with `#` characters). Remove the comments and rearrange these lines of code to plot the ratio of pledged to goal funding as a function of project deadline date for all successful (where pledged funding is greater than goal funding) projects in the United States in the dataset.
Note: there is a lot of data to plot here, so give it a moment to display!
Hint: you'll want to put all the data frame manipulation functions first, and then the plotting functions afterward. Also note that some lines have a `+` at the end, meaning they're in the middle of the plotting code!
Question 3.3
{points: 3}
Is there a relationship between the ratio of pledged/goal funding and time? If so, describe it.
Additionally, mention a pattern in the data or a characteristic of it that you may not have expected in advance.
DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.
Question 3.4
{points: 1}
Finally, we'll save the project data to a local file in the `data/` folder called `project_data.csv`. Recall that we don't want to try to download and save the entire dataset (way too much data!) from the database, but only the `tbl` object named `prj`. So you will need to use the `collect` function followed by the appropriate `write_*` function.

Assign the output of `collect` to an object called `project_df`.
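The collect-then-write pattern in general looks like this sketch, which uses a small in-memory SQLite database as a stand-in for the remote PostgreSQL database (table and file names here are made up):

```r
# Sketch: bring a database query into R with collect, then save it as a CSV.
library(DBI)
library(dplyr)
library(readr)

con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "projects_demo",
             data.frame(pledged = c(100, 250), goal = c(80, 300)))

demo_tbl <- tbl(con, "projects_demo")     # lazy reference to the table
demo_df  <- collect(demo_tbl)             # pulls the rows into an R data frame
write_csv(demo_df, "demo_projects.csv")   # saves the data frame as a CSV

dbDisconnect(con)
```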
4 (Optional). Reading Data from the Internet
Question 4.0
{points: 0}
More practice scraping! To keep ourselves out of legal hot water, we will get more practice scraping data using a website that was created for that purpose: http://books.toscrape.com/

Your task here is to scrape the prices of the science fiction novels on this page and determine the maximum, minimum and average price of science fiction novels at this bookstore. Tidy up and nicely present your results by creating a data frame called `sci_fi_stats` that has 2 columns: one called `stats` that contains the words `max`, `min` and `mean`, and one called `value` that contains the calculated value for each of these.
The functions for maximum, minimum and average in R are listed in the table below:

| Calculation to perform | Function in R |
|---|---|
| maximum | `max` |
| minimum | `min` |
| average | `mean` |
Some other helpful hints:

- If you end up scraping some characters other than numbers, you will have to use `str_replace_all` from the `stringr` library to remove them (similar to what we did with the commas in `worksheet_02`).
- Use `as.numeric` to convert your character type numbers to numeric type numbers before you pass them into the `max`, `min` and `mean` functions.
- If you have `NA` values in your objects that you need to pass into the `max`, `min` and `mean` functions, you will need to set the `na.rm` argument in these functions to `TRUE`.
- Use the function `c` to create the vectors that will go in your data frame; for example, to create a vector with the values 10, 16 and 13 named `ages`, we would type `ages <- c(10, 16, 13)`.
- Use the function `tibble` to create the data frame from your vectors.
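Putting the hints together on some hypothetical scraped text (the price strings below are made up and simply stand in for the output of `html_text`):

```r
# Sketch: clean scraped price strings, then summarize with max/min/mean.
# raw_prices stands in for html_text(html_nodes(page, "<some selector>")).
library(stringr)
library(tibble)

raw_prices <- c("£53.74", "£37.59", "£17.23")               # hypothetical values
prices <- as.numeric(str_replace_all(raw_prices, "£", ""))  # strip the currency symbol

sci_fi_stats_demo <- tibble(
  stats = c("max", "min", "mean"),
  value = c(max(prices, na.rm = TRUE),
            min(prices, na.rm = TRUE),
            mean(prices, na.rm = TRUE))
)
```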
Question 4.1
{points: 0}
In `worksheet_02` you had practice scraping data from the web. Now that you have the skills, should you scrape that website you have been dreaming of harvesting data from? Maybe, maybe not... You should check the website's Terms of Service first, and consider the application you have planned for the data after you scrape it.
List 3 websites you might be interested in scraping data from (for fun, profit, or research/education). List their URLs as part of your answer. For each website, search for their Terms of Service page. Take note if such a page exists, and if it does, provide the link to it and tell us whether or not they allow web scraping of their website.
You can list them in this cell! Double click to edit.
Bonus/optional additional readings on legalities of web scraping:
Here are two recent news stories about web scraping and their legal implications: