Worksheet 2: Introduction to Reading Data
You can read more about course policies on the course website.
Lecture and Tutorial Learning Goals:
After completing this week's lecture and tutorial work, you will be able to:
- define the following:
    - absolute file path
    - relative file path
    - url
- read data into R using a relative path and a url
- compare and contrast the following functions:
    - read_csv
    - read_tsv
    - read_csv2
    - read_delim
    - read_excel
- match the following tidyverse read_* function arguments to their descriptions:
    - file
    - delim
    - col_names
    - skip
- choose the appropriate tidyverse read_* function and function arguments to load a given plain text tabular data set into R
- use the readxl library's read_excel function and arguments to load a sheet from an excel file into R
- connect to a database using the DBI library's dbConnect function
- list the tables in a database using the DBI library's dbListTables function
- create a reference to a database table that is queriable using the tbl function from the dbplyr library
- retrieve data from a database query and bring it into R using the collect function from the dbplyr library
- use write_csv to save a data frame to a csv file
- optional: scrape data from the web
    - read/scrape data from an internet URL using the rvest html_nodes and html_text functions
    - compare downloading tabular data from a plain text file (e.g. *.csv) from the web versus scraping data from a .html file
This worksheet covers parts of Chapter 2 of the online textbook. You should read this chapter before attempting the worksheet.
1. Comparing Absolute Paths, Relative Paths, and URLs
Question 1.1 Multiple Choice:
{points: 1}
If you needed to read a file using an absolute path, what would be the first symbol in your argument (...) when using the read_csv function?
A. read_csv(">...")
B. read_csv(";...")
C. read_csv("...")
D. read_csv("/...")
Assign your answer to an object called answer1.1. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Question 1.2 True or False:
{points: 1}
The file argument in the read_csv function that uses an absolute path can never look like that of a relative path.
Assign your answer to an object called answer1.2. Make sure your answer is written in lowercase and is surrounded by quotation marks (e.g. "true" or "false").
Question 1.3 Match the following paths with the correct path type that they represent:
{points: 1}
Example Path
A. /Users/my_user/Desktop/UBC/BIOL363/SciaticNerveLab/sn_trial_1.xlsx
B. https://www.ubc.ca
C. file_1.csv
D. /Users/name/Documents/Course_A/homework/my_first_homework.docx
E. homework/my_second_homework.docx
F. https://www.random_website.com
Path Type
1. absolute
2. relative
3. URL
For every example path, create an object using its letter and assign it the corresponding number from the list of path types. For example: B <- 1.
Question 1.4 Multiple Choice:
{points: 1}
If the absolute path to a data file looks like this: /Users/my_user/Desktop/UBC/BIOL363/SciaticNerveLab/sn_trial_1.xlsx
What would the relative path look like if the working directory (i.e., the folder containing the Jupyter notebook you are running your R code from) is now the UBC folder?
A. sn_trial_1.xlsx
B. /SciaticNerveLab/sn_trial_1.xlsx
C. BIOL363/SciaticNerveLab/sn_trial_1.xlsx
D. UBC/BIOL363/SciaticNerveLab/sn_trial_1.xlsx
E. /BIOL363/SciaticNerveLab/sn_trial_1.xlsx
Assign your answer to an object called answer1.4. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Question 1.5
{points: 1}
Match the following paths with the most likely kind of data format they contain.
Paths:
1. https://www.ubc.ca/datasets/data.db
2. /home/user/downloads/data.xlsx
3. data.tsv
4. examples/data/data.csv
5. https://en.wikipedia.org/wiki/Normal_distribution
Dataset Types:
A. Excel Spreadsheet
B. Database
C. HTML file
D. Comma-separated values file
E. Tab-separated values file
For every dataset type, create an object using the letter associated with the example and assign it the corresponding number from the list of paths. For example: F <- 5
2. Argument Modifications to Read Data
Reading files is one of the first steps in wrangling data, and consequently read_csv is a crucial function. However, despite how effortlessly it has worked so far, it has its limitations: read_csv expects a particular file format and does not accept all variations.
Not all data sets come as perfectly organized as the ones you worked with last week. Time and effort were put into ensuring that those files were arranged with headers, that columns were separated by commas, and that metadata was excluded from the beginning of the file.
Now that you understand how to read files located outside (or inside) of your working directory, you can begin to learn the tips and tricks necessary to overcome the limitations of read_csv.
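As a preview, here is a minimal sketch of how the key read_* arguments fit together. The file name and argument values are hypothetical; the right values always depend on the file you are reading:

```r
library(tidyverse)

# Hypothetical example: a ";"-delimited file with 2 lines of metadata
# at the top and no header row.
example_df <- read_delim("data/example_data.txt",  # file: name, path, or URL
                         delim = ";",              # character separating columns
                         col_names = FALSE,        # first row is not column labels
                         skip = 2)                 # ignore the 2 metadata lines
```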
Question 2.1
{points: 1}
Match the following descriptions with the corresponding arguments used in read_csv:
Descriptions
G. Character that separates columns in your file.
H. Specifies whether or not the first row of data in your file is column labels. Also allows you to supply a character vector to be used as the column labels.
I. This is the file name, path to a file, or URL.
J. Specifies the number of lines which must be ignored because they contain metadata.
Arguments
1. file
2. delim
3. col_names
4. skip
For every description, create an object using the letter associated with the description and assign it the corresponding number from the list of arguments. For example: G <- 1
Question 2.2 True or False:
{points: 1}
read_csv2 and read_delim can both be used for reading files that have columns separated by ;.
Assign your answer to an object called answer2.2. Make sure your answer is in lowercase and is surrounded by quotation marks (e.g. "true" or "false").
Question 2.3 Multiple Choice:
{points: 1}
read_tsv can be used for files that have columns separated by which of the following:
A. letters
B. tabs
C. numbers
D. commas
Assign your answer to an object called answer2.3. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
3. Happiness Report (2017)
This data was taken from Kaggle and ranks countries on happiness based on factors like economic growth, social support, etc. The data was released by the United Nations at an event celebrating the International Day of Happiness. According to the website, the file contains the following information:
Country = Name of the country.
Region = Region the country belongs to.
Happiness Rank = Rank of the country based on the Happiness Score.
Happiness Score = A metric measured by asking the sampled people the question: "How would you rate your happiness on a scale of 0 to 10 where 10 is the happiest?"
Standard Error = The standard error of the happiness score.
Economy (GDP per Capita) = The extent to which GDP contributes to the calculation of the Happiness Score.
Family = The extent to which Family contributes to the calculation of the Happiness Score.
Health (Life Expectancy) = The extent to which Life expectancy contributed to the calculation of the Happiness Score.
Freedom = The extent to which Freedom contributed to the calculation of the Happiness Score.
Trust (Government Corruption) = The extent to which Perception of Corruption contributes to Happiness Score.
Generosity = The extent to which Generosity contributed to the calculation of the Happiness Score.
Dystopia Residual = The extent to which Dystopia Residual contributed to the calculation of the Happiness Score.
To clean up the file and make it easier to read, we only kept the country name, happiness score, economy (GDP per capita), life expectancy, and freedom. The happiness scores and rankings use data from the Gallup World Poll, which surveys citizens in countries from around the world.
Kaggle stores this information, but it is compiled by the Sustainable Development Solutions Network. They have surveyed these factors nearly every year since 2012, allowing global comparisons that can inform political decision making. These landmark surveys are highly recognized and allow countries to learn and grow from one another. One day, they will provide historical insight into the nature of our time.
Question 3.1 Fill in the Blank:
{points: 1}
Trust is the extent to which _______________ contributes to Happiness Score.
A. Corruption
B. Government Intervention
C. Perception of Corruption
D. Tax Money Designation
Assign your answer to an object called answer3.1. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Question 3.2 Multiple Choice:
{points: 1}
What is the happiness report?
A. Study conducted by the governments of multiple countries.
B. Independent survey of citizens from multiple countries.
C. Study conducted by the UN.
D. Survey given to international students by UBC's psychology department.
Assign your answer to an object called answer3.2. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Question 3.3 Fill in the Blanks (of the Table):
{points: 1}
It is often a good idea to "inspect" your data to see what it looks like before trying to load it into R. This will help you figure out the right function to call and what arguments to use. When your data are stored as plain text, you can do this easily with Jupyter (or any text editor).
Open all the files named happiness_report... in the data folder of your working directory (the worksheet_02 directory) with Jupyter's plain text editor (right-click the file -> Open With -> Editor). This will allow you to see the files and the organization of your data. Based on your findings, fill in the missing items in the table below. This table will be very useful to refer back to in the coming weeks.
You'll notice that trying to open one of the files gives you an error (File Load Error ... is not UTF-8 encoded). This means that this data is not stored as human-readable plain text. For this special file, just fill in the read_* function entry; the other columns will be left blank.
File Name | delim | Header | Metadata | skip | read_* |
---|---|---|---|---|---|
_.csv | ";", ",", "\t", or "tab" | "yes" or "no" | "yes" or "no" | NA or # of lines | read_* |
happiness_report.csv | , | A | no | NA | read_csv |
happiness_report_semicolon.csv | ; | yes | no | NA | B |
happiness_report.tsv | C | yes | no | NA | read_tsv |
happiness_report_metadata.csv | , | yes | D | 2 | read_csv |
happiness_report_no_header.csv | , | E | no | NA | read_csv |
happiness_report.xlsx | | | | | F |
For the missing items (labelled A to F) in the table above, create an object using the letter and assign it the corresponding missing value. For example: A <- "yes". The possible options for each column are given in the first row of the table.
Question 3.4
{points: 1}
Read the file happiness_report.csv in the data folder using the shortest relative path. Hint: preview the data using Jupyter (as discussed above) so you know which read_* function and arguments to use.
Assign the relative path (the string) to an object named happiness_report_path, and assign the output of the correct read_* function you call to an object named happiness_report.
Question 3.5 Multiple Choice:
{points: 1}
If Norway is in "first place" based on the happiness score, at what position is Canada?
A. 3rd
B. 15th
C. 7th
D. 28th
Hint: create a new cell and run happiness_report.
Assign your answer to an object called answer3.5. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Question 3.6.1
{points: 1}
For each question in the ranges 3.6.1 to 3.6.5 and 3.7.1 to 3.7.2, fill in the ... in the code given. Replace fail() with your finished answer. Refer to your table above and don't be afraid to ask for help. Remember you can use the ? help operator to access the documentation for a function (e.g. ?read_csv).
Read in the file happiness_report_semicolon.csv using read_delim and name it happy_semi_df.
Take a look at the data type of the GDP_per_capita, life_expectancy, and freedom columns. It says <chr>; that stands for "character", or text data -- not numeric as we would hope! The happiness_score column has <dbl> (which stands for "double-precision floating point number", a numeric type), which is correct. We'd like the other columns to have this type as well... what happened?
If we look closer, we'll see that the decimal point in this data is a comma , rather than a period (a convention common in some European countries).
Instead of read_delim, for this data we'll need another function that can handle commas as decimal points.
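As an aside, readr's general mechanism for regional conventions is the locale argument; a minimal sketch (with a hypothetical file name) might look like the following, though the read_* function this question asks for bundles these conventions as its defaults:

```r
library(tidyverse)

# Hypothetical file: ";"-delimited with "," as the decimal mark.
# locale() tells readr which regional conventions the file uses.
eu_df <- read_delim("data/example_semicolon.csv",
                    delim = ";",
                    locale = locale(decimal_mark = ","))
```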
Question 3.6.2
{points: 1}
Read in the file happiness_report_semicolon.csv again, but this time use a different read_* function than read_delim to ensure that the column types are correct. Remember you can use the ? help operator to access the documentation for a function (e.g. ?read_csv). Hint: take a look at the list of read_* functions at the top of this worksheet under the learning goals section. Name the data frame happy_semi_df2.
Question 3.6.3
{points: 1}
Read in the file happiness_report.tsv using the appropriate read_* function and name it happy_tsv.
Question 3.6.4
{points: 1}
Read in the file happiness_report_metadata.csv using the appropriate read_* function and name it happy_metadata.
Question 3.6.5
{points: 1}
Read in the file happiness_report_no_header.csv using the appropriate read_* function and name it happy_header. Note: if the argument col_names is a character vector, the values will be used as the names of the columns (a small sketch follows below).
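A minimal sketch of that behaviour (the file and column names here are hypothetical):

```r
# With col_names as a character vector, readr treats the first line of the
# file as data and uses these strings as the column names instead.
no_header_df <- read_csv("data/example_no_header.csv",
                         col_names = c("country", "score", "GDP_per_capita"))
```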
Question 3.7
{points: 1}
Earlier when you tried to open happiness_report.xlsx in Jupyter, you received an error message (File Load Error ... is not UTF-8 encoded). This happens because Excel spreadsheet files are not stored as plain text, so Jupyter can't open them with its default text viewer. This makes them a bit harder to inspect before loading into R.
To inspect the data, we will just try to load happiness_report.xlsx using the most basic form of the appropriate read_* function, passing only the filename as an argument. Assign the output to a variable called happy_xlsx.
Note: you can also examine .xlsx files with Microsoft Excel or Google Sheets before loading them into R.
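For reference, the learning goals name readxl's read_excel for this kind of file; a minimal sketch of its most basic form, with a hypothetical file name:

```r
library(readxl)

# Passing only the path reads the first sheet; a sheet argument can
# select another sheet if the workbook has several.
excel_df <- read_excel("data/example.xlsx")
```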
Question 3.8
{points: 1}
Opening the data in a text editor showed some clear differences between the files. Do all the data sets look the same once you read them into your R notebook ("yes" or "no")?
Assign your answer to an object called answer3.8. Make sure your answer is in lowercase and is surrounded by quotation marks (e.g. "yes" or "no").
Question 3.9
{points: 1}
Using the happy_header data set that you read in earlier, plot life_expectancy vs. GDP_per_capita. Note that the statement "plot A vs. B" usually means to plot A on the y-axis and B on the x-axis. Be sure to use xlab and ylab to give your axes human-readable labels.
Assign your answer to an object called header_plot.
4. Reading Data from a Database
Investigating the reliability of flights into and out of Boston Logan International Airport
Delays and cancellations seem to be an unavoidable risk of air travel. A missed connection, or hours spent waiting at the departure gate, might make you wonder though: how reliable is air travel, really?
The US Bureau of Transportation Statistics keeps a continually updated Airline On-Time Performance Dataset that has tracked the scheduled and actual departure/arrival times of flights in the United States from 1987 to the present day. In this section we'll do some exploration of this data to try to answer some of the above questions. The actual data we'll be using is from the year 2015 only, and was compiled into the 2015 Kaggle Flight Delays Dataset from the raw Bureau data. But even that dataset is too large to handle in this course (5.8 million flights in just one year!), so the data have been filtered down to flights that either depart from or arrive at Logan International Airport (BOS), resulting in around 209,000 flight records.
Our data has the following variables (columns):
year
month
day
day of the week (from 1 - 7.999..., with fractional days based on departure time)
origin airport code
destination airport code
flight distance (miles)
scheduled departure time (local)
departure delay (minutes)
scheduled arrival time (local)
arrival delay (minutes)
diverted? (True/False)
cancelled? (True/False)
Question 4.1 True or False:
{points: 1}
We can use our dataset to figure out which airline company was the least likely to experience a flight delay in 2015.
Assign your answer to an object called answer4.1. Make sure your answer is in lowercase and is surrounded by quotation marks (e.g. "true" or "false").
Question 4.2 Multiple Choice:
{points: 1}
If we're mostly concerned with getting to our destination on time, which variable in our dataset should we use as the y-axis of a plot?
A. flight distance
B. departure delay
C. origin airport code
D. arrival delay
Assign your answer as a single character to an object called answer4.2. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Let's start exploring our data. The file is stored in data/flights_filtered.db in your working directory (still the worksheet_02 folder). If you try to open the file in Jupyter to inspect its contents, you'll again run into the File Load Error ... is not UTF-8 encoded message you got earlier when trying to open an Excel spreadsheet in Jupyter. This is because the file is a database (often denoted by the .db extension), and databases are usually not stored in plain text.
We'll need more R packages to help us handle this kind of data:
- the database interface (DBI) package for opening, connecting to, and interfacing with databases
- the R SQLite (RSQLite) package so that DBI can talk to SQLite databases (there are many kinds of databases; the flights_filtered.db database is an SQLite database)
- the dbplyr package for manipulating tables in the database using functions in R (without this, in order to retrieve data from the database, we would have to know a whole separate language: Structured Query Language, or SQL)
Let's load those now.
In order to open a database in R, you need to take the following steps:
1. Connect to the database using the dbConnect function.
2. Check what tables (similar to R data frames or Excel spreadsheets) are in the database using the dbListTables function.
3. Once you've picked a table, create an R object for it using the tbl function.
Note: the tbl function returns a reference to a database table, not the actual data itself. This allows R to talk to the database and get subsets of the data without loading the entire thing into R!
The next few questions will walk you through this process; a minimal sketch of the three steps is shown below.
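Here is that sketch, with a hypothetical database file and table name standing in for the ones this worksheet uses:

```r
library(DBI)
library(RSQLite)
library(dbplyr)

# 1. Connect to the database (hypothetical file name).
conn_example <- dbConnect(RSQLite::SQLite(), "data/example.db")

# 2. List the tables the database contains.
dbListTables(conn_example)

# 3. Create a reference to one table (hypothetical table name).
example_table <- tbl(conn_example, "example_table")
```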
Question 4.3.1
{points: 1}
Use the dbConnect function to open and connect to the flights_filtered.db database in the data folder.
Note: we have provided the first argument, RSQLite::SQLite(), to dbConnect for you below. This just tells the dbConnect function that we will be using an SQLite database.
Assign the output to a variable named conn.
Question 4.3.2
{points: 1}
Use the dbListTables function to inspect the database to see what tables it contains.
Make a new variable named flights_table_name that stores the name of the table with our data in it.
Question 4.3.3
{points: 1}
Use the tbl function to create an R reference to the table so that you can manipulate it with dbplyr functions.
Make a new variable named flight_data based on the output of tbl.
Now that we've connected to the database and created an R table object, we'll take a look at the first few rows and columns of the flight on-time performance data. Even though flight_data isn't a regular R dataframe (it's a database table connection, or specifically a tbl_SQLiteConnection), the functions from the dbplyr package let us treat it like an R dataframe!
So let's try using the head function (which allows us to see the first few rows of a dataset) and see what happens:
It works! And, as luck would have it, the select and filter functions you've learned about previously work too.
Note: not all functions that you're familiar with work on database table (tbl) reference objects. For example, if you try to run nrow (to count the rows) or tail (to get the last rows of the table), you won't get the result you expect; a sketch of what does and doesn't carry over is shown below.
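A minimal sketch, reusing the hypothetical example_table reference from the earlier sketch (the column names here are made up too; your table's columns will differ):

```r
library(tidyverse)

head(example_table)   # works: dbplyr translates this to SQL

example_subset <- example_table %>%
  filter(origin == "BOS") %>%               # hypothetical column name
  select(arrival_delay, departure_delay)    # hypothetical column names

nrow(example_table)   # returns NA: the lazy reference doesn't know its row count
```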
Question 4.4
{points: 1}
Use the select and filter functions to extract the arrival and departure delay columns for rows where the origin airport is BOS.
Store your answer in a variable called delay_data.
You'll notice in the Source: line that the dimension of the table is listed as [?? x 2]. This is because databases do things in the laziest way possible. Since we only asked the database for the head of the table (the first few rows), it didn't bother going through all the rows to figure out how many there are. This sort of laziness can help make things run a lot faster when dealing with large datasets.
Our next task is to visualize our data to see whether there is a difference in delays for arrivals at and departures from BOS. But before we do that, let's figure out just how much data we're working with using the count function.
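A minimal sketch of that call: count, like select and filter, is translated to SQL, so the database does the counting for us.

```r
# Ask the database to count the rows of our query result.
count(delay_data)
```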
Yikes, that's a lot of data! If we tried to make a scatter plot of it, we probably wouldn't be able to see anything useful; all the points would be mushed together. Let's try using a histogram instead. A histogram helps us visualize how a particular variable is distributed in a dataset. It does this by separating the data into bins, and then plotting vertical bars showing how many data points fell in each bin.
For example, we could use a histogram to visualize the distribution of waiting times between eruptions of the Old Faithful geyser in Yellowstone National Park, Wyoming, with the geom_histogram layer. The bins argument specifies the number of bins to use in the histogram; a sketch follows below.
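A minimal sketch using R's built-in faithful dataset (the bin count of 40 is an arbitrary choice here):

```r
library(tidyverse)

# Distribution of waiting times (minutes) between Old Faithful eruptions.
faithful_hist <- ggplot(faithful, aes(x = waiting)) +
  geom_histogram(bins = 40) +
  xlab("Waiting time between eruptions (minutes)") +
  ylab("Count")
faithful_hist
```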
We'll use histograms to visualize the departure delay times and arrival delay times separately.
Question 4.5
{points: 1}
Plot the arrival delay time data as a histogram. You will plot the delay (in hours), separated into 15-minute-wide bins, on the x-axis. The y-axis will show the percentage of flights departing BOS that had that amount of delay during 2015.
You'll do this by finishing the code segment provided below. There are 4 places where ... appears in the provided code. Replace each instance of ... with the correct item from the following list:
ARRIVAL_DELAY/60
'steelblue'
"Delay (hours)"
geom_histogram
Assign the output of ggplot to an object called arrival_delay_plot.
Question 4.6
{points: 1}
Plot the departure delay time data as a histogram with the same format as the previous plot. Hint: copy and paste your code from the previous block! The only thing that will change is the column from delay_data that you use for the x-axis.
Assign the output of ggplot to an object called departure_delay_plot.
Question 4.7
{points: 1}
Look at the two plots you generated. Are departures from or arrivals to BOS more likely to be on time (at most 15 minutes ahead of/behind schedule)?
Assign your answer (either "departures" or "arrivals") to an object called answer4.7.
So far, we've done everything using the delay_data database reference object constructed using functions from the dbplyr library. Remember: this isn't the data itself! If we want to save the small data subset that we've constructed to our local machines (perhaps to share it on the web or with collaborators), we'll need to take one last step.
Question 4.8.1
{points: 1}
We want to download the arrival/departure times data where the origin airport is BOS from the database. We will use the collect function to do this. Which of the following should you use?
A. collect(delay_data)
B. collect(flights_table_name)
C. collect(conn)
D. collect(flight_data)
Assign your answer to an object called answer4.8.1. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Question 4.8.2
{points: 1}
If you input the wrong argument in the collect() function below, your worksheet will time out. Please double-check that you have the correct answer to Question 4.8.1 above and input the correct argument in the collect() function below!
Use the collect function to download the arrival/departure times data where the origin airport is BOS from the database and store it in a dataframe object called delay_dataframe. Then, use the write_csv function to write the dataframe to a file called delay_data.csv. Save the file in the data/ folder.
Note: there are many possible ways to use write_csv to customize the output. Just use the defaults here!
5 (Optional). Reading Data from the Internet
How has the World Gross Domestic product changed throughout history?
As defined on Wikipedia, "Gross world product (GWP) is the combined gross national product of all the countries in the world." Living in our modern age with our roaring (sometimes up and sometimes down) economies, one might wonder how the world economy has changed over history. To answer this question we will scrape data from the Wikipedia Gross world product page.
Your data set will include the following columns:
year
gwp_value
Specifically, we will scrape the two columns named "Year" and "Real GWP" in the table under the header "Historical and prehistorical estimates". The end goal of this exercise is to create a line plot with year on the x-axis and GWP value on the y-axis.
Question 5.1.0 Multiple Choice:
{points: 0}
Under which of the following headers on the Wikipedia Gross world product page is the table we will scrape from?
A. Gross world product
B. Recent growth
C. Historical and prehistorical estimates
D. See also
Assign your answer to an object called answer5.1.0. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
Question 5.1.1 Multiple Choice:
{points: 0}
What is going to be on the x-axis of the plot we create?
A. compound annual growth rate
B. the value of the gross world product
C. year
Assign your answer to an object called answer5.1.1. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").
We now need to load the rvest package to begin our web scraping!
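Before diving in, here is a minimal sketch of the usual rvest workflow. The URL and CSS selector below are hypothetical placeholders; SelectorGadget helps you find the real selector for a page:

```r
library(rvest)

# 1. Download and parse the page (hypothetical URL).
page <- read_html("https://example.com/some_page")

# 2. Select the nodes of interest (hypothetical CSS selector),
#    then extract their text content.
values <- page %>%
  html_nodes(".some-selector") %>%
  html_text()
```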
Question 5.2
{points: 0}
Use read_html to download information from the URL given in the cell below. Instead of copying the entire URL, you can simply use the object (url) after read_html().
Assign your answer to an object called gwp.
Question 5.3
Run the cell below to create the first column of your data set (the year from the table under the "Historical and prehistorical estimates" header). The node was obtained using SelectorGadget.
We can see that although we want numbers for the year, the data we scraped includes the characters AD and \n (a newline character). We will have to do some string manipulation and then convert the years from characters to numbers.
First we use the str_replace_all function to match the string " AD\n" and replace it with nothing (""):
When we print year, we can see that we were able to remove " AD\n", but we missed that there is also " BC\n" on the earliest years! There are also commas (",") in the large BC years that we will have to remove. We also need to put a - sign in front of the BC numbers so we don't confuse them with the AD numbers after we convert everything to numbers. To do this we will need a similar strategy to clean it all up!
This week we will provide you the code to do this cleaning; next week you will learn to do these kinds of things yourself. After we do all the string/text manipulation, we use the as.numeric function to convert the text to numbers. A sketch of the idea is shown below.
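The worksheet supplies the actual cleaning cell; as a rough sketch of the idea (assuming year is the scraped character vector), it might look something like this:

```r
library(tidyverse)  # str_replace_all and str_detect come from stringr

year <- str_replace_all(year, " AD\n", "")  # drop the AD suffix
year <- str_replace_all(year, ",", "")      # drop thousands separators
is_bc <- str_detect(year, " BC\n")          # remember which years are BC
year <- str_replace_all(year, " BC\n", "")  # drop the BC suffix
year <- as.numeric(year)                    # convert text to numbers
year[is_bc] <- -year[is_bc]                 # BC years become negative
```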
Question 5.4
{points: 0}
Create a new column for the gross world product (GWP) from the table we are scraping. Don't forget to use SelectorGadget to obtain the CSS selector needed to scrape the GWP values from the table. Assign your answer to an object called gwp_value.
Fill in the ... in the cell below. Replace the fail() with your finished answer.
Refer to Question 5.3 and don't be afraid to ask for help.
Again, looking at the output of head(gwp_value), we see we have some cleaning and type conversions to do. We need to remove the commas, the extraneous trailing information in the first 3 entries, and the "\n" character again. We provide the code to do this below:
Question 5.5
{points: 0}
Use the tidyverse tibble function to create a data frame named gwp with year and gwp_value as columns. The general form for creating data frames from vectors/lists using the tibble function is as follows:
tibble(COLUMN1_NAME, COLUMN2_NAME, COLUMN3_NAME, ...)
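For instance, a toy example with made-up values, naming each column explicitly with name = vector:

```r
# Two short hypothetical vectors become two named columns.
tibble(year = c(-10000, 0, 2000),
       gwp_value = c(0.1, 25, 41000))
```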
One last piece of data transformation/wrangling we will do before we get to data visualization is to create another column called sqrt_year, which scales the year values so that they will be more informative when we plot them (if you look at our year data, we have a lot of years in the recent past, and fewer and fewer as we go back in time). Oftentimes you can just transform the scale within ggplot (for example, see what we do with gwp_value later on), but the year value is tricky to scale because it contains negative values. So we need to first make everything positive, then take the square root, and then re-transform the values that should be negative back to negative! We provide the code to do this below.
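That provided code might be sketched compactly like this (sign, abs, and sqrt are base R; the one-liner does exactly the positive/square-root/re-negate dance described above):

```r
# sign(year) remembers whether the year was negative (BC);
# sqrt(abs(year)) takes the square root of the magnitude.
gwp <- gwp %>%
  mutate(sqrt_year = sign(year) * sqrt(abs(year)))
```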
Question 5.6
{points: 0}
Create a line plot using the gwp data frame where sqrt_year is on the x-axis and gwp_value is on the y-axis. We provide the plot code to relabel the x-axis with the human-understandable years instead of the transformed ones we plot. Name your plot object gwp_historical. To make a line plot instead of a scatter plot, use the geom_line() function instead of the geom_point() function.
Question 5.7
{points: 0}
Looking at the line plot, when does the Gross World Product first start to increase more rapidly (i.e., when does the slope of the line first change)?
A. roughly around year -1,000,000
B. roughly around year -250,000
C. roughly around year -5000
D. roughly around year 1500
Assign your answer to an object called answer5.7. Make sure your answer is an uppercase letter and is surrounded by quotation marks (e.g. "F").