UBC-DSCI
GitHub Repository: UBC-DSCI/dsci-100-assets
Path: blob/master/2022-spring/materials/tutorial_wrangling/tutorial_wrangling.ipynb
Kernel: R

Tutorial 3: Cleaning and Wrangling Data

Lecture and Tutorial Learning Goals:

After completing this week's lecture and tutorial work, you will be able to:

  • define the term "tidy data"

  • discuss the advantages and disadvantages of storing data in a tidy data format

  • recall and use the following tidyverse functions and operators for their intended data wrangling tasks:

    • select

    • filter

    • %>%

    • map

    • mutate

    • summarize

    • group_by

    • pivot_longer

    • pivot_wider

    • %in%

Any place you see ..., you must fill in the function, variable, or data to complete the code. Replace fail() with your completed code and run the cell!

### Run this cell before continuing.
library(repr)
library(tidyverse)
source("tests.R")
source("cleanup.R")
options(repr.matrix.max.rows = 6) # limits output of dataframes to 6 rows

Question 0.1
{points: 1}

Match the following definitions with the corresponding functions used in R:

A. Transforms the input by applying a function to each element and returning a vector the same length as the input.

B. Reads files that have columns separated by tabs.

C. Most data operations are done on groups defined by variables. This function takes an existing data set and converts it into a grouped data set where operations are performed "by group".

D. Works in an analogous way to mutate, except instead of adding columns to an existing data frame, it creates a new data frame.

E. "lengthens" data, increasing the number of rows and decreasing the number of columns.

F. Labels the x-axis.

Functions

  1. group_by

  2. map

  3. read_tsv

  4. summarise

  5. xlab

  6. pivot_longer

For every description, create an object using the letter associated with the definition and assign it to the corresponding number from the list of functions. For example:

A <- 1
B <- 2
C <- 3
...
F <- 6
# Replace the fail() with your answer.
# your code here
fail() # No Answer - remove if you provide an answer
test_0.1()

1. Historical Data on Avocado Prices

In the tutorial, we will be finishing off our analysis of the avocado data set.

You might recall from the lecture that millennials LOVE avocado toast. However, avocados are expensive, and this is costing millennials a lot more than you think (joking again 😉, well mostly...). To ensure that they can save enough to buy a house, it would be beneficial for an avocado fanatic to move to a city with low avocado prices. From Worksheet 3 we saw that avocado prices are lower in the months between December and May, but we still don't know which region has the cheapest avocados.

image source: https://media.giphy.com/media/8p3ylHVA2ZOIo/giphy.gif

As a reminder, here are some relevant columns in the dataset:

  • average_price - The average price of a single avocado.

  • type - conventional or organic

  • year - The year

  • region - The city or region of the observation

  • small_hass_volume

  • large_hass_volume

  • extra_l_hass_volume

Additionally, the last three columns can be used to calculate total_volume in pounds (lbs). The goal for today is to find the region with the cheapest avocados and then produce a plot of the total number of avocados sold against the average price per avocado (in US dollars) for that region. To do this, you will follow the steps below.

  1. use a tidyverse read_* function to load the csv file into your notebook

  2. use group_by + summarize to find the region with the cheapest avocados.

  3. use filter to specifically look at data from the region of interest.

  4. use mutate to add up the volume for all types of avocados (small, large, and extra large)

  5. use ggplot to create our plot of volume vs average price
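Put together, the five steps above form one pipeline shape. Here is a minimal sketch on an invented toy data frame (the object names, column names, and values below are placeholders for illustration, not the actual solution):

```r
library(tidyverse)

# Toy stand-in for the avocado data (values invented for illustration)
toy <- tibble(
  region        = c("A", "A", "B", "B"),
  average_price = c(1.50, 1.70, 0.90, 1.10),
  small_volume  = c(10, 20, 30, 40),
  large_volume  = c(5, 10, 15, 20)
)

# Step 2: group_by + summarize to find the cheapest region
cheapest_toy <- toy %>%
  group_by(region) %>%
  summarize(avg_price = mean(average_price)) %>%
  arrange(avg_price) %>%
  slice(1)

# Steps 3-4: filter to that region, then mutate a total volume column
plot_data <- toy %>%
  filter(region == cheapest_toy$region) %>%
  mutate(total_volume = small_volume + large_volume)

# Step 5: ggplot scatter of volume vs. price
toy_plot <- ggplot(plot_data, aes(x = total_volume, y = average_price)) +
  geom_point()
```

The same verbs, applied in the same order to the real data set, carry you through Questions 1.1 to 1.3 below.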

Question 1.1
{points: 1}

Read the file avocado_prices.csv found in the tutorial_03 directory using a relative path.

Assign your answer to an object called avocado.

# your code here
fail() # No Answer - remove if you provide an answer
avocado
test_1.1()

Question 1.2
{points: 1}

Now find the region with the cheapest avocados in 2018. To do this, calculate the average price for each region. Your answer should be the row from a data frame with the lowest average price. The data frame you create should have two columns, one named region that has the region, and the other that contains the average price for that region.

Assign your answer to an object called cheapest.

Hint: You can use the function slice to provide a row index (or range of row indices) to numerically subset rows from a data frame. For example, to subset the 5th row of a data frame you would type:

dataframe %>% slice(5)
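Run on a small invented data frame, slice also accepts a range of indices:

```r
library(tidyverse)

# Toy data frame, invented for illustration
df <- tibble(x = 1:10)

fifth  <- df %>% slice(5)    # just the 5th row
first3 <- df %>% slice(1:3)  # rows 1 through 3
```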
#... <- ... %>%
#    ... %>%
#    group_by(...) %>%
#    summarize(...) %>%
#    arrange(...) %>%
#    slice(...)
# your code here
fail() # No Answer - remove if you provide an answer
cheapest
test_1.2()

Question 1.3
{points: 1}

Now we will plot the total volume against average price for the cheapest region and all years. First, you need to mutate the data frame such that total_volume is equal to the addition of all three volume columns. Next, filter the dataset using the cheapest region found in Question 1.2. Finally, you will have the data necessary to create a scatter plot with:

  • x = total_volume

  • y = average_price

Fill in the ... in the cell below. Copy and paste your finished answer and replace the fail(). We will be scaling the axes - the function is added in the scaffolding for you.

Assign your answer to an object called avocado_plot.

Hint: Do not forget units on your data visualization! Here the price is in US dollars (USD) and the volume in pounds (lbs).

options(repr.plot.width = 8, repr.plot.height = 7)
#... <- ... %>%
#    mutate(...) %>%
#    filter(region == ...) %>%
#    ggplot(aes(x = ..., y = ...)) +
#    ... +
#    xlab(...) +
#    ylab(...) +
#    scale_x_log10()
# your code here
fail() # No Answer - remove if you provide an answer
avocado_plot
test_1.3()

Question 1.4

What do you notice? Discuss your plot with the person next to you.

To further investigate this trend, let's colour the data points to see if the type of avocado (either organic or not, which is called conventional in this data set) affects the volume and price of avocados sold in our region of interest.

Run the cell below to colour the data points by avocado type.

# run this cell to set plot width/height
# change the numbers below if the plot doesn't fit on your screen and run again!
options(repr.plot.width = 8, repr.plot.height = 6)
# Run this cell to see if avocado type (the type variable) plays a role in production and price.
avocado_plot <- avocado_plot +
    geom_point(aes(colour = type)) +
    theme(text = element_text(size = 20))
avocado_plot

Question 1.4 (Continued)
{points: 3}

In 2-3 sentences, describe what you see in the graph above. Comment specifically on whether there is any evidence/indication that avocado type might influence price.

Hint: Make sure to include information about volume, average price, and avocado type in your answer.

DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.

2. Historical Data on Avocado Prices (Continued)

Question 2.1
{points: 3}

Now that we know the region that sold the cheapest avocados (on average) in 2018, which region sold the most expensive avocados (on average) in 2018? And for that region, what role might avocado type play in sales? Repeat the analysis you did above, but now apply it to investigate the region which sold the most expensive avocados (on average) in 2018.

Remember: we are finding the region that sold the most expensive avocados in 2018, but then producing a scatter plot of average price versus total volume sold for all years.

Name your plot object priciest_plot.

# your code here
fail() # No Answer - remove if you provide an answer
# check that plot has the correct name
test_that('scatter plot should be named priciest_plot', {
    expect_true(exists('priciest_plot'))
})
print('plot has correct name')
# The remainder of the tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.

Question 2.2
{points: 3}

In 2-3 sentences, describe what you see in the graph above for the region with the most expensive avocados (on average). Comment specifically on whether there is any evidence/indication that avocado type might influence price.

Hint: Make sure to include information about volume, average price, and avocado type in your answer.

DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.

Question 2.3
{points: 3}

Plot the scatterplots for the two regions so that they are in adjacent cells (so it is easier for you to compare them). Compare the price and volume data across the two regions. Then argue for or against the following hypothesis:

"the region that has the cheapest avocados has them because it sells less of the organic (expensive) type of avocados compared to conventional cheaper ones."

DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.

3. Sea Surface Salinity in Departure Bay

As mentioned in this week's Worksheet, Canada's Department of Fisheries and Oceans (DFO) compiled essential environmental data from 1914 to 2018. The data was collected at the Pacific Biological Station (Departure Bay). Daily sea surface temperature (degrees Celsius) and salinity (practical salinity units, PSU) observations have been carried out at several locations on the coast of British Columbia. The number of stations reporting at any given time has varied as sampling has been discontinued at some stations, and started or resumed at others.

In Worksheet 3 we already worked with the temperature observations. Now, we will be focusing on salinity! Specifically, we want to see if the monthly maximum salinity has been changing over the years. We will only be focusing our attention on the winter months December, January and February.

Question 3.1
{points: 1}

To begin working with this data, read the file max_salinity.csv into R. Note, this file (just like the avocado data set) is found within the tutorial_03 folder.

Assign your answer to an object called sea_surface.

# your code here
fail() # No Answer - remove if you provide an answer
sea_surface
test_3.1()

Question 3.2
{points: 3}

Given that ggplot prefers tidy data, we must tidy the data! Use the pivot_longer() function to create a tidy data frame with three columns: Year, Month and Salinity. Remember we only want to look at the winter months (December, January and February) so don't forget to reduce the data to just those three!
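As a reminder of how pivot_longer reshapes wide data, here is a sketch on an invented miniature data frame (not the sea-surface data; the values are made up):

```r
library(tidyverse)

# Toy wide-format data: one column per month (values invented)
wide <- tibble(
  Year = c(2000, 2001),
  Jan  = c(30.1, 29.8),
  Feb  = c(30.5, 30.2)
)

# "Lengthen": every column except Year becomes a (Month, Salinity) pair
long <- wide %>%
  pivot_longer(cols = -Year,
               names_to = "Month",
               values_to = "Salinity")
```

Note how the number of rows doubles (one row per Year-Month combination) while the month columns collapse into a single Month column.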

Assign your answer to an object called max_salinity.

#... <- sea_surface %>%
#    ...(...) %>%
#    ...(cols = -Year,
#        names_to = '...',
#        values_to = '...')
# your code here
fail() # No Answer - remove if you provide an answer
max_salinity
# check that data frame has the correct name
test_that('data frame should be named max_salinity', {
    expect_true(exists('max_salinity'))
})
print('data frame has correct name')
# The tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.

Question 3.3
{points: 3}

Now that we've created new columns, we can finally create our plot that compares the maximum salinity observations to the year they were recorded. As usual, label your axes!

Assign your answer to an object called max_salinity_plot.

Hint: do not forget to add units to your axes! Remember from the data description that salinity is measured in practical salinity units (PSU).

# your code here
fail() # No Answer - remove if you provide an answer
max_salinity_plot
# check that plot has the correct name
test_that('scatter plot should be named max_salinity_plot', {
    expect_true(exists('max_salinity_plot'))
})
print('plot has correct name')
# The tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.

Question 3.4
{points: 3}

In 1-2 sentences, describe what you see in the graph above. Comment specifically on whether there is a change in salinity across time for the winter months and, if there is, whether this indicates a positive or a negative relationship for these variables within this data set. If there is a relationship, also comment on its strength and linearity.

DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.

4. Pollution in Madrid

The goal of this analysis (which we started in worksheet_03) is to see if pollutants are decreasing (i.e., whether air quality is improving) and also to determine which pollutant has decreased the most over the span of 5 years (2001 - 2006). In worksheet_03 we investigated what happened with the maximum values of each pollutant over time; now we will investigate the average values of each pollutant over time. To do this we will:

  1. Calculate the average monthly value for each pollutant for each year.

  2. Create a scatter plot for the average monthly value for each month. Plot these values for each pollutant and each year so that a trend over time for each pollutant can be observed.

  3. Now we will look at which pollutant decreased the most between 2001 - 2006 when we look at the average instead of the maximum values.

Question 4.1
{points: 3}

To begin working with this data, read the file madrid_pollution.csv. Note, this file (just like the other data sets in this tutorial) is found in the tutorial_03 directory.

Assign your answer to an object called madrid.

# your code here
fail() # No Answer - remove if you provide an answer
madrid
# check that data frame has the correct name
test_that('data frame should be named madrid', {
    expect_true(exists('madrid'))
})
print('data frame has correct name')
# The tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.

Given that we are going to be plotting months, which have a natural chronological order, let's tell R how they should be ordered. We can do this by changing the month column from a character vector to a factor vector. Factors in R are useful for categorical data, and their levels have an order.
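A quick illustration of why this matters, on an invented toy vector: left as characters, months sort alphabetically; once the levels are set explicitly, they sort chronologically.

```r
# Toy vector of month names (invented for illustration)
months <- c("March", "January", "February")

# As plain characters, sorting is alphabetical
sort(months)  # "February" "January" "March"

# As a factor with explicit levels, sorting follows the level order
ordered_months <- factor(months, levels = c("January", "February", "March"))
sort(ordered_months)  # January February March
```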

# run this cell to order the column month by month (date) and not alphabetically
madrid <- madrid %>%
    mutate(month = factor(month, levels = c('January', 'February', 'March', 'April',
                                            'May', 'June', 'July', 'August',
                                            'September', 'October', 'November', 'December')))

Question 4.2
{points: 3}

Calculate the average monthly value for each pollutant for each year and store that as a data frame. Your data frame should have the following 4 columns:

  1. year

  2. month

  3. pollutant

  4. monthly_avg

Name your data frame madrid_avg.

madrid
# your code here
fail() # No Answer - remove if you provide an answer
# check that data frame has the correct name
test_that('data frame should be named madrid_avg', {
    expect_true(exists('madrid_avg'))
})
print('data frame has correct name')
# The tests were intentionally hidden so that you can practice deciding
# when you have the correct answer.

Question 4.3
{points: 3}

Create a scatter plot for the average monthly value for each month. Plot these values for each pollutant and each year so that a trend over time for each pollutant can be observed. To do this all in one plot, you will want to use a facet_grid layer (which makes subplots within one plot when data are "related") and a theme layer (to adjust the angle of the text on the x-axis). We provide you with the code for these two layers in the scaffolding for this plot.

options(repr.plot.width = 16, repr.plot.height = 18)
#pollutant_labels <- c(BEN = "Benzene \n(μg/m³)",
#                      CO = "Carbon \nmonoxide \n(mg/m³)",
#                      EBE = "Ethylbenzene \n(μg/m³)",
#                      MXY = "M-xylene \n(μg/m³)",
#                      NMHC = "Non-methane \nhydrocarbons \n(mg/m³)",
#                      NO_2 = "Nitrogen \ndioxide \n(μg/m³)",
#                      NOx = "Nitrous \noxides \n(μg/m³)",
#                      O_3 = "Ozone \n(μg/m³)",
#                      OXY = "O-xylene \n(μg/m³)",
#                      PM10 = "Particles \nsmaller than 10 μm",
#                      PXY = "P-xylene \n(μg/m³)",
#                      SO_2 = "Sulphur \ndioxide \n(μg/m³)",
#                      TCH = "Total \nhydrocarbons \n(mg/m³)",
#                      TOL = "Toluene \n(μg/m³)")
#
#... <- ... %>%
#    ggplot(aes(x = ..., y = ...)) +
#    geom_point() +
#    xlab(...) +
#    ylab(...) +
#    facet_grid(pollutant ~ year, scales = "free",
#               switch = "y",
#               labeller = labeller(pollutant = pollutant_labels)) +
#    theme(axis.text.x = element_text(angle = 90, hjust = 1),
#          strip.text.y.left = element_text(angle = 0),
#          text = element_text(size = 20))
# your code here
fail() # No Answer - remove if you provide an answer
madrid_avg_plot

Question 4.4
{points: 3}

By looking at the plots above, which monthly average pollutant levels appear to have decreased over time? Which appear to have increased?

DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.

Question 4.5
{points: 3}

Now we will look at which pollutant decreased the most between 2001 - 2006 when we look at the average yearly values for each pollutant. Your final result should be a data frame that has at least these two columns: pollutant and yearly_avg_diff and one row (the most decreased pollutant when looking at yearly average between 2001 - 2006). Make sure to use the madrid_avg data frame in your solution.

There are several different ways to solve this problem. My solution included using a function called pivot_wider, which is the inverse of pivot_longer. If you would like to use that function, see the tidyr pivot_wider documentation for more info.
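To see the general shape of that approach, here is a sketch on an invented miniature data frame (the pollutant names, values, and helper column names like y2001 are placeholders, not the actual solution):

```r
library(tidyverse)

# Toy long-format yearly averages (values invented for illustration)
toy_avg <- tibble(
  pollutant  = c("X", "X", "Y", "Y"),
  year       = c(2001, 2006, 2001, 2006),
  yearly_avg = c(10, 4, 8, 7)
)

# Widen so each year becomes its own column, then difference the columns;
# the most negative difference is the largest decrease
most_decreased <- toy_avg %>%
  pivot_wider(names_from = year, values_from = yearly_avg,
              names_prefix = "y") %>%
  mutate(yearly_avg_diff = y2006 - y2001) %>%
  arrange(yearly_avg_diff) %>%
  slice(1)
```

The names_prefix argument is used here only because R column names cannot start with a digit; any equivalent renaming works.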

# your code here
fail() # No Answer - remove if you provide an answer

Question 4.6
{points: 3}

Did using the average to find the most decreased pollutant between 2001 and 2006 give you the same answer as using the maximum in the worksheet? Is your answer to the previous question surprising? Explain.

DOUBLE CLICK TO EDIT THIS CELL AND REPLACE THIS TEXT WITH YOUR ANSWER.

Optional Question
(for fun and does not count for grades):

Consider doing the same analysis as you did for Question 4.5, except this time calculate the difference as a percent or fold difference (as opposed to the absolute difference we used in Question 4.5). The scales for the pollutants are very different, so we might want to take this into consideration when trying to answer the question "Which pollutant decreased the most?"
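On invented numbers, the distinction looks like this: a pollutant falling from 100 to 90 has a larger absolute change than one falling from 2 to 1, yet a much smaller relative change.

```r
# Made-up start/end values for two hypothetical pollutants
p_2001 <- 100; p_2006 <- 90
q_2001 <- 2;   q_2006 <- 1

abs_p <- p_2006 - p_2001  # absolute change: -10
abs_q <- q_2006 - q_2001  # absolute change: -1

pct_p <- (p_2006 - p_2001) / p_2001 * 100  # percent change: -10%
pct_q <- (q_2006 - q_2001) / q_2001 * 100  # percent change: -50%
```

By the absolute measure the first pollutant "decreased more"; by the percent measure the second did.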

# Your optional answer goes here
source("cleanup.R")