Lecture 18 – Permutation Testing, Bootstrapping
DSC 10, Fall 2022
Announcements
Lab 5 is due Saturday 11/5 at 11:59pm.
Homework 5 is due Tuesday 11/8 at 11:59pm.
Agenda
Permutation testing examples.
Are the distributions of weight for babies 👶 born to smoking mothers vs. non-smoking mothers different?
Are the distributions of pressure drops for footballs 🏈 from two different teams different?
Bootstrapping 🥾.
Permutation testing
Purpose
Permutation tests help answer questions of the form:
I have two samples, but no information about any population distributions. Do these samples look like they were drawn from the same population?
Are the distributions of weight for babies 👶 born to smoking mothers vs. non-smoking mothers different?
Are the distributions of pressure drops for footballs 🏈 from two different teams different?
Smoking and birth weight 👶
Setup for the hypothesis test
Null Hypothesis: In the population, birth weights of smokers' babies and non-smokers' babies have the same distribution, and the observed differences in our samples are due to random chance.
Alternative Hypothesis: In the population, smokers' babies have lower birth weights than non-smokers' babies, on average. The observed differences in our samples cannot be explained by random chance alone.
Test statistic: Difference in mean birth weight of non-smokers' babies and smokers' babies.
Strategy and implementation
Strategy:
Create a "population" by pooling data from both samples together.
Randomly divide this "population" into two groups of the same sizes as the original samples.
Repeat this process, calculating the test statistic for each pair of random groups.
Generate an empirical distribution of test statistics and see whether the observed statistic is consistent with it.
Implementation:
To randomly divide the "population" into two groups of the same sizes as the original samples, we'll just shuffle the group labels and use the shuffled group labels to define the two random groups.
Shuffling the labels
The 'Maternal Smoker' column defines the original groups. The 'Shuffled_Labels' column defines the random groups.
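A minimal sketch of the shuffling step, using a hypothetical miniature table (the lecture uses babypandas; plain pandas and numpy work the same way here):

```python
import numpy as np
import pandas as pd

# Hypothetical miniature version of the babies table; the real one
# has thousands of rows, but the column names match the lecture's.
babies = pd.DataFrame({
    'Maternal Smoker': [True, True, False, False, False],
    'Birth Weight': [110, 105, 120, 125, 118],
})

# Shuffle the group labels. The weights stay in place, so each weight
# ends up paired with a random label.
babies = babies.assign(
    Shuffled_Labels=np.random.permutation(babies['Maternal Smoker'])
)
```

Note that shuffling only rearranges the labels: the shuffled column contains the same number of smokers and non-smokers as the original.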
Calculating the test statistic
For the original groups:
For the random groups:
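The test statistic can be computed the same way for both pairs of groups, only the label column changes. A sketch with hypothetical data (one fixed shuffled column included for illustration):

```python
import pandas as pd

# Hypothetical miniature babies table with a shuffled label column attached.
babies = pd.DataFrame({
    'Maternal Smoker': [True, True, False, False, False],
    'Birth Weight': [110, 105, 120, 125, 118],
    'Shuffled_Labels': [False, True, False, True, False],
})

def difference_in_means(df, label_col):
    # Mean birth weight of the non-smoker group minus the smoker group.
    means = df.groupby(label_col)['Birth Weight'].mean()
    return means.loc[False] - means.loc[True]

observed = difference_in_means(babies, 'Maternal Smoker')   # original groups
simulated = difference_in_means(babies, 'Shuffled_Labels')  # one random split
```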
Repeating the process
Note that the empirical distribution of the test statistic (difference in means) is centered around 0.
This matches our intuition – if the null hypothesis is true, there should be no difference in the group means on average.
Comparing the empirical distribution to the observed statistic
Conclusion
Under the null hypothesis, we rarely see differences as large as 9.26 ounces.
Therefore, we reject the null hypothesis: the evidence implies that the groups do not come from the same distribution.
Can we conclude that smoking causes lower birth weight? Why or why not?
No, we cannot. This was an observational study; there may be confounding factors. For instance, maybe smokers are more likely to drink caffeine, and caffeine causes lower birth weight.
Concept Check ✅ – Answer at cc.dsc10.com
Recall, babies has two columns.
To randomly assign weights to groups, we shuffled the 'Maternal Smoker' column. Could we have shuffled the 'Birth Weight' column instead?
A. Yes
B. No
Click here to see the answer to the previous question after you've submitted an answer to it.
Yes, we could have. It doesn't matter which column we shuffle – we could shuffle one or the other, or even both, as long as we shuffle each separately.
Think about it like this – pretend you bring a gift 🎁 to a Christmas party 🎄 for a gift exchange, where everyone must leave the party with a random person's gift. Pretend everyone stands around a circular table and puts the gift they bought in front of them. To randomly assign people to gifts, you could shuffle the gifts on the table and have all the people stay in the same spot, or you could have the people physically shuffle and keep the gifts in the same spots, or you could do both – either way, everyone will end up with a random gift!
Example: Did the New England Patriots cheat? 🏈

On January 18, 2015, the New England Patriots played the Indianapolis Colts for a spot in the Super Bowl.
The Patriots won, 45-7. They went on to win the Super Bowl.
After the game, it was alleged that the Patriots intentionally deflated footballs, making them easier to catch. This scandal was called "Deflategate."
Background
Each team brings 12 footballs to the game. Teams use their own footballs while on offense.
NFL rules stipulate that each ball must be inflated to between 12.5 and 13.5 pounds per square inch (psi).
Before the game, officials found that all of the Patriots' footballs were at about 12.5 psi, and that all of the Colts' footballs were at about 13.0 psi.
This pre-game data was not written down.
In the second quarter, the Colts intercepted a Patriots ball and notified officials that it felt under-inflated.
At halftime, two officials (Clete Blakeman and Dyrol Prioleau) independently measured the pressures of as many of the 24 footballs as they could.
They ran out of time before they could finish.
Note that the relevant quantity is the change in pressure from the start of the game to halftime.
The Patriots' balls started at a lower psi (which is not an issue on its own).
The allegations were that the Patriots deflated their balls during the game.
The measurements
There are only 15 rows (11 for Patriots footballs, 4 for Colts footballs) since the officials weren't able to record the pressures of every ball.
The 'Pressure' column records the average of the two officials' measurements at halftime.
The 'PressureDrop' column records the difference between the estimated starting pressure and the average recorded 'Pressure' of each football.
The question
Did the Patriots' footballs drop in pressure more than the Colts'?
We want to test whether two samples came from the same distribution – this calls for a permutation test.
Null hypothesis: The drop in pressures for both teams came from the same distribution.
By chance, the Patriots' footballs deflated more.
Alternative hypothesis: No, the Patriots' footballs deflated more than one would expect due to random chance alone.
The test statistic
Similar to the baby weights example, our test statistic will be the difference between the teams' average pressure drops. We'll calculate the mean drop for the 'Patriots' minus the mean drop for the 'Colts'.
The average pressure drop for the Patriots was about 0.74 psi more than the Colts.
Creating random groups and calculating one value of the test statistic
We'll run a permutation test to see if 0.74 psi is a significant difference.
To do this, we'll need to repeatedly shuffle either the 'Team' or the 'PressureDrop' column.
We'll shuffle the 'PressureDrop' column.
Tip: It's a good idea to simulate one value of the test statistic before putting everything in a for-loop.
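One simulated value of the test statistic might be computed as follows; the table here is a hypothetical miniature stand-in for the real 15-row dataset:

```python
import numpy as np
import pandas as pd

# Hypothetical miniature version of the footballs table.
footballs = pd.DataFrame({
    'Team': ['Patriots'] * 4 + ['Colts'] * 2,
    'PressureDrop': [1.3, 1.4, 1.1, 1.5, 0.5, 0.6],
})

# Shuffle the pressure drops (equivalently, we could shuffle 'Team').
with_shuffled = footballs.assign(
    Shuffled_Drop=np.random.permutation(footballs['PressureDrop'])
)

# One simulated value of the test statistic:
# Patriots' mean drop minus Colts' mean drop, using the shuffled column.
group_means = with_shuffled.groupby('Team')['Shuffled_Drop'].mean()
one_stat = group_means.loc['Patriots'] - group_means.loc['Colts']
```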
The simulation
Repeat the process many times by wrapping it inside a for-loop.
Keep track of the difference in group means in an array, appending each time.
Optionally, create a function to calculate the difference in group means.
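The steps above can be sketched like this, again with a hypothetical miniature table in place of the real data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(23)

# Hypothetical miniature footballs table.
footballs = pd.DataFrame({
    'Team': ['Patriots'] * 4 + ['Colts'] * 2,
    'PressureDrop': [1.3, 1.4, 1.1, 1.5, 0.5, 0.6],
})

def difference_in_drops(df, drop_col):
    # Patriots' mean pressure drop minus the Colts' mean pressure drop.
    means = df.groupby('Team')[drop_col].mean()
    return means.loc['Patriots'] - means.loc['Colts']

# Shuffle, compute the statistic, and append, many times.
differences = np.array([])
for i in range(1000):
    shuffled = footballs.assign(
        Shuffled_Drop=rng.permutation(footballs['PressureDrop'])
    )
    differences = np.append(differences, difference_in_drops(shuffled, 'Shuffled_Drop'))
```

Since the labels are random, the resulting empirical distribution is centered near 0.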
Conclusion
It doesn't look good for the Patriots. What is the p-value?
Recall, the p-value is the probability, under the null hypothesis, of seeing a result as or more extreme than the observation.
In this case, that's the probability of the difference in mean pressure drops being greater than or equal to 0.74 psi.
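Given an array of simulated differences, the p-value is the proportion of them that are at least as large as the observed statistic. A sketch with a hypothetical array of simulated values:

```python
import numpy as np

observed_difference = 0.74  # observed statistic, in psi

# Hypothetical array of simulated differences from a permutation test.
differences = np.array([0.1, -0.3, 0.8, 0.05, -0.6, 0.2, 0.75, -0.1])

# Proportion of simulated statistics greater than or equal to the observed one.
p_value = np.count_nonzero(differences >= observed_difference) / len(differences)
```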
This p-value is low enough to consider this result to be highly statistically significant (p < 0.01).
Caution! ⚠️
We reject the null hypothesis, as it is unlikely that the difference in mean pressure drops is due to chance alone.
But this doesn't establish causation.
That is, we can't conclude that the Patriots intentionally deflated their footballs.
Aftermath
Quote from an investigative report commissioned by the NFL:
"[T]he average pressure drop of the Patriots game balls exceeded the average pressure drop of the Colts balls by 0.45 to 1.02 psi, depending on various possible assumptions regarding the gauges used, and assuming an initial pressure of 12.5 psi for the Patriots balls and 13.0 for the Colts balls."
Many different methods were used to determine whether the drop in pressures was due to chance, including analyses based on physics.
We computed an observed difference of 0.74, which is in line with the findings of the report.
In the end, Tom Brady (quarterback for the Patriots at the time) was suspended 4 games and the team was fined $1 million.
The Deflategate Wikipedia article is extremely thorough; give it a read if you're curious!
Aside: Establishing causation
To actually establish causation, we need the following two statements to be true:
The data must come from a randomized controlled trial, to mitigate the effects of confounding factors.
A permutation test must show a statistically significant difference in the outcome between the treatment and control group.
If both of these conditions are met, then we can conclude that the treatment causes the outcome.
Bootstrapping 🥾
City of San Diego employee salary data
All City of San Diego employee salary data is public. We are using the latest available data.
When you load in a dataset that has so many columns that you can't see them all, it's a good idea to look at the column names.
We only need the 'TotalWages' column, so let's get just that column.
Concept Check ✅ – Answer at cc.dsc10.com
Consider the question
What is the median salary of all San Diego city employees?
What is the right tool to answer this question?
A. Standard hypothesis testing
B. Permutation testing
C. Either of the above
D. None of the above
The median salary
We can use .median() to find the median salary of all city employees.
This is not a random quantity.
Let's be realistic...
In practice, it is costly and time-consuming to survey all 12,000+ employees.
More generally, we can't expect to survey all members of the population we care about.
Instead, we gather salaries for a random sample of, say, 500 people.
Hopefully, the median of the sample is close to the median of the population.
In the language of statistics
The full DataFrame of salaries is the population.
We observe a sample of 500 salaries from the population.
We want to determine the population median (a parameter), but we don't have the whole population, so instead we use the sample median (a statistic) as an estimate.
Hopefully the sample median is close to the population median.
The sample median
Let's survey 500 employees at random. To do so, we can use the .sample method.
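A sketch of the sampling step, using a hypothetical population of salaries in place of the real dataset (the seed and salary range are made up for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(38)

# Hypothetical population of salaries; the real data has 12,000+ rows.
population = pd.DataFrame({
    'TotalWages': rng.integers(30_000, 200_000, size=12_000)
})

# A simple random sample of 500 salaries, drawn without replacement.
my_sample = population.sample(500, replace=False, random_state=38)
sample_median = my_sample['TotalWages'].median()
```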
We won't reassign my_sample at any point in this notebook, so it will always refer to this particular sample.
How confident are we that this is a good estimate?
Our estimate depended on a random sample.
If our sample was different, our estimate may have been different, too.
How different could our estimate have been?
Our confidence in the estimate depends on the answer to this question.
The sample median is random
The sample median is a random number.
It comes from some distribution, which we don't know.
How different could our estimate have been, if we drew a different sample?
"Narrow" distribution not too different.
"Wide" distribution quite different.
What is the distribution of the sample median?
An impractical approach
One idea: repeatedly collect random samples of 500 from the population and compute its median.
This is what we did in Lecture 14 to compute an empirical distribution of the sample mean of flight delays.
The animation below visualizes the process of repeatedly collecting a sample and computing its median.
This shows an empirical distribution of the sample median. It is an approximation of the true probability distribution of the sample median, based on 246 samples.
The problem
Drawing new samples like this is impractical.
If we were able to do this, why not just collect more data in the first place?
Often, we can't ask for new samples from the population.
Key insight: our original sample, my_sample, looks a lot like the population.
Their distributions are similar.
Note that unlike the previous histogram we saw, this is depicting the distribution of the population and of one particular sample (my_sample), not the distribution of sample medians for 246 samples.
The bootstrap
Shortcut: Use the sample in lieu of the population.
The sample itself looks like the population.
So, resampling from the sample is kind of like sampling from the population.
The act of resampling from a sample is called bootstrapping or "the bootstrap" method.
In our case specifically:
We have a sample of 500 salaries.
We want another sample of 500 salaries, but we can't draw from the population.
However, the original sample looks like the population.
So, let's just resample from the sample!
Resampling with replacement
When bootstrapping, we resample with replacement. Why? 🤔
Resampling with replacement
Our goal when bootstrapping is to create a sample of the same size as our original sample.
If we were to resample without replacement n times from an original sample of size n, our resample would look exactly the same as the original sample.
For instance, if we sample 5 elements without replacement from ['A', 'B', 'C', 'D', 'E'], our sample will contain the same 5 characters, just in a different order.
So, we need to sample with replacement to ensure that our resamples can be different from the original sample.
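The contrast between the two sampling modes is easy to see directly:

```python
import numpy as np

letters = np.array(['A', 'B', 'C', 'D', 'E'])

# Without replacement: always the same 5 characters, just reordered.
no_repl = np.random.choice(letters, 5, replace=False)

# With replacement: characters can repeat, so the resample can differ
# from the original sample.
with_repl = np.random.choice(letters, 5, replace=True)
```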
Running the bootstrap
We can simulate the act of collecting new samples by sampling with replacement from our original sample, my_sample.
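A sketch of the bootstrap loop, with a hypothetical original sample standing in for my_sample (salary range and seeds are made up):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(38)

# Hypothetical original sample of 500 salaries.
my_sample = pd.DataFrame({
    'TotalWages': rng.integers(30_000, 200_000, size=500)
})

# Bootstrap: resample 500 rows *with replacement* from the original
# sample, many times, recording the median of each resample.
boot_medians = np.array([])
for i in range(1000):
    resample = my_sample.sample(500, replace=True, random_state=i)
    boot_medians = np.append(boot_medians, resample['TotalWages'].median())
```

The resulting array of medians is the bootstrap distribution of the sample median.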
Bootstrap distribution of the sample median
The population median (blue dot) is near the middle.
In reality, we'd never get to see this!
What's the point of bootstrapping?
We have a sample median wage:
With it, we can say that the population median wage is approximately $72,016, and not much else.
But by bootstrapping, we can generate an empirical distribution of the sample median:
which allows us to say things like
We think the population median wage is between $67,000 and $77,000.
Next time, we'll talk about how to set this range precisely.
Summary, next time
Summary
Given just one sample, we want to estimate some population parameter.
In real life, you don't get access to the population, only a sample!
One sample gives one estimate of the parameter. To get a sense of how much our estimate might have been different with a different sample, we need more samples.
In real life, sampling is expensive. You only get one sample!
Key idea: The distribution of a sample looks a lot like the distribution of the population it was drawn from. So we can treat it like the population and resample from it.
Each resample yields another estimate of the parameter. Taken together, many estimates give a sense of how much variability exists in our estimates, or how certain we are of any single estimate being accurate.
Next time
We just learned how to approximate the distribution of a sample statistic, which means we now have a sense of how much our estimates can vary.
Bootstrapping lets us quantify uncertainty.
Next time, we'll learn how to give an interval of likely values where we think a population parameter falls, based on data in our sample.
The width of such an interval will reflect our uncertainty about the actual value of the parameter.