Inverse Propensity Weighting
The purpose of using propensity scores is to balance the treatment and control groups on the observed covariates. The advantage of using weighting is that all individuals/records in our dataset can be leveraged, unlike techniques such as matching, where we might discard a large amount of data in the control group.
In this document, we'll be:
Conducting the analysis on the outcome without the use of propensity scores.
Performing the propensity score estimation.
Conducting the outcome analysis with the use of inverse propensity weighting.
Data Preprocessing
We'll be working with data from The National Study of Learning Mindsets. The background behind this data is an attempt to find out whether instilling students with a growth mindset improves their overall academic performance. In this dataset, academic performance is recorded as a standardized achievement_score.
Besides the treatment and outcome variables, the study also recorded some other features:
schoolid: identifier of the student's school
success_expect: self-reported expectations for success in the future, a proxy for prior achievement, measured prior to random assignment
ethnicity: categorical variable for student race/ethnicity
gender: categorical variable for student identified gender
frst_in_family: categorical variable for student first-generation status, i.e. first in family to go to college
school_urbanicity: school-level categorical variable for urbanicity of the school, i.e. rural, suburban, etc
school_mindset: school-level mean of students' fixed mindsets, reported prior to random assignment, standardized
school_achievement: school achievement level, as measured by test scores and college preparation for the previous 4 cohorts of students, standardized
school_ethnic_minority: school racial/ethnic minority composition, i.e., percentage of student body that is Black, Latino, or Native American, standardized
school_poverty: school poverty concentration, i.e., percentage of students who are from families whose incomes fall below the federal poverty line, standardized
school_size: total number of students in all four grade levels in the school, standardized.
Our data will be fed into a logistic regression in the next section, so here we one-hot encode the categorical variables. As the numeric features are already standardized, we leave them as is.
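A minimal sketch of this preprocessing step, assuming the raw table has been loaded into a pandas DataFrame named `data` (a hypothetical name; the column names mirror the feature list above):

```python
import pandas as pd

# one hot encode the categorical features; the numeric features are
# already standardized, so they pass through untouched
categorical_cols = ['ethnicity', 'gender', 'frst_in_family', 'school_urbanicity']
data_encoded = pd.get_dummies(data, columns=categorical_cols, drop_first=True)
```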
Outcome Analysis
It's often useful to establish a baseline. Here, we would like to gauge what the result would look like if we don't use propensity score weighting to control for potential biases in the assignment of individuals to the control and treatment groups.
We can use linear regression from different packages to establish our baseline estimate. The one from statsmodels will give us some additional statistical information. By blindly comparing individuals with and without the intervention, we can see that, on average, those in the treatment group achieved an achievement score 0.4723 higher than the control. Be aware that this score was standardized, i.e. the treated is 0.4723 standard deviations higher than the untreated.
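A sketch of the baseline regression, assuming the treatment indicator column is named `intervention` (a hypothetical name, as is `data_encoded` from the preprocessing step):

```python
import statsmodels.formula.api as smf

# naive estimate: regress the outcome on the treatment indicator alone;
# the coefficient on `intervention` is the raw difference in means
ols = smf.ols('achievement_score ~ intervention', data=data_encoded).fit()
print(ols.summary())
```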
Upon establishing the baseline, our next task is to question these numbers.
Propensity Score Estimation
The idea behind the propensity score is that we don't need to directly control for our confounders $X_i$ to achieve conditional independence, $(Y_{1i}, Y_{0i}) \perp T_i \mid X_i$. Here, $Y_{1i}$ denotes the outcome if the treatment, $T_i = 1$, was applied, whereas $Y_{0i}$ measures the outcome if individual $i$ was under the control group.
Instead, it is sufficient to control for a single variable, the propensity score, $e(x)$, which is the conditional probability of the treatment, $e(x) = P(T_i = 1 \mid X_i = x)$. Or in notation form, with our propensity score, we now have $(Y_{1i}, Y_{0i}) \perp T_i \mid e(X_i)$.
Here, we'll use a logistic regression to estimate our propensity score. Feel free to use other classification techniques, but keep in mind that they might not produce well-calibrated probabilities, and the ultimate goal of propensity score estimation is to make sure we include all the confounding variables, rather than getting carried away with the different kinds of classification models we could potentially use.
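A sketch of the estimation step, reusing the hypothetical column names from before:

```python
from sklearn.linear_model import LogisticRegression

# everything except the treatment and the outcome goes into the propensity model
X = data_encoded.drop(columns=['intervention', 'achievement_score'])
T = data_encoded['intervention']

ps_model = LogisticRegression(max_iter=1000).fit(X, T)

# propensity score = estimated probability of receiving the treatment
data_encoded['propensity_score'] = ps_model.predict_proba(X)[:, 1]
```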
After training our propensity score model, it's important to check for score overlap between the treated and untreated populations. Looking at the distribution plot below, we can find both treated and untreated individuals across the different score regions. In this case, we have a balanced dataset and our positivity assumption holds. Remember, positivity refers to the scenario where every individual has at least some chance of receiving either treatment, and that appears to be the case here. In summary, this is a situation where we would feel comfortable proceeding with our propensity score weighting.
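One way to produce such an overlap check, again under the same column-name assumptions:

```python
import matplotlib.pyplot as plt

# overlay the propensity score distributions of the treated and untreated
# groups to visually check for overlap / positivity
for label, group in data_encoded.groupby('intervention'):
    plt.hist(group['propensity_score'], bins=30, alpha=0.5,
             density=True, label=f'intervention = {label}')
plt.xlabel('propensity score')
plt.legend()
plt.show()
```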
Note that if we encounter a situation where the propensity scores of the treatment and control groups do not overlap, then we should stop and better construct the model that controls for confounding, or in much simpler terms, see if we are missing any important features in our propensity score model. This requires domain knowledge of the data we are working with. Failing to check for positivity and lack of overlap can introduce bias into our estimation, as we would be extrapolating the effect to unknown regions.
Outcome Analysis with Inverse Propensity Score Weighting
The final step in our analysis is to run our outcome model using the propensity score as weights, i.e. fit a weighted regression. In order to use our propensity score as weights, we will need to apply some transformation known as Inverse Propensity Weighting (IPW). For individuals in the treatment group, the weight is $w_i = 1 / e(x_i)$, whereas for individuals in the control group, it is $w_i = 1 / (1 - e(x_i))$.
To understand why applying inverse propensity weighting helps us de-bias our potentially biased data, say the data we collected contains 200 records from group A and 2000 records from group B. To "balance" our dataset, we would like to "up-weight" the records from group A and "down-weight" the records from group B. If we were to weight each record in both groups by the inverse of its group's record count, $1/200$ for group A and $1/2000$ for group B, we would end up with effectively 1 record in each group.
Coming back to our example, we take all the individuals in the treatment group and scale them by the inverse propensity of being treated, $1 / e(x_i)$. What this effectively does is give those with a very low probability of treatment a high weight; in other words, when an individual in the treatment group looks like he/she should belong to the control group, we give that individual a higher weight. This creates a population the same size as the original, but where everyone is treated. We can apply the same reasoning for the control group, using $1 / (1 - e(x_i))$.
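A sketch of the weight computation, reusing the hypothetical `intervention` and `propensity_score` columns from the earlier steps:

```python
import numpy as np

ps = data_encoded['propensity_score']
T = data_encoded['intervention']

# IPW: treated units are weighted by 1 / e(x), control units by 1 / (1 - e(x))
weights = np.where(T == 1, 1 / ps, 1 / (1 - ps))
```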
Once the sample weights are created, we can re-estimate the outcome with a weighted linear regression.
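For example, passing the weights through scikit-learn's `sample_weight` argument:

```python
from sklearn.linear_model import LinearRegression

# weighted outcome regression: with IPW weights, the coefficient on the
# treatment indicator is the estimated average treatment effect
lr = LinearRegression()
lr.fit(data_encoded[['intervention']], data_encoded['achievement_score'],
       sample_weight=weights)
print(lr.coef_[0])
```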
FYI, even though scikit-learn's LinearRegression doesn't give us an estimated standard error by default, we can estimate it via bootstrapping, i.e. by sampling with replacement from the original data and computing the average treatment effect as before. After repeating this step many times, we get a distribution of the outcome estimate. We'll also take this opportunity to organize the overall workflow into a single code cell.
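A sketch of the full workflow with a bootstrap around it, under the same column-name assumptions as before (note we drop the previously added `propensity_score` column inside the helper so it doesn't leak into the propensity model):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def ipw_ate(df):
    """Estimate the propensity score and return the IPW treatment effect."""
    X = df.drop(columns=['intervention', 'achievement_score', 'propensity_score'],
                errors='ignore')
    T, y = df['intervention'], df['achievement_score']
    ps = LogisticRegression(max_iter=1000).fit(X, T).predict_proba(X)[:, 1]
    w = np.where(T == 1, 1 / ps, 1 / (1 - ps))
    return LinearRegression().fit(df[['intervention']], y, sample_weight=w).coef_[0]

# bootstrap: resample rows with replacement and re-run the whole pipeline
np.random.seed(42)
ates = np.array([ipw_ate(data_encoded.sample(frac=1.0, replace=True))
                 for _ in range(500)])
print(f'ATE: {ates.mean():.4f}, 95% CI: {np.percentile(ates, [2.5, 97.5])}')
```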
Ending Notes
In this article, we took a stab at calculating the average treatment effect, $ATE = E[Y_1] - E[Y_0]$, using propensity scores. We saw how inverse propensity weighting helps us create an unbiased dataset from a biased one. With this method, it's important to call out that identifying the features to use for the propensity score model should be treated with care. In general:
Aim to include confounding variables, i.e. variables that affect both the treatment and the outcome.
It's a good idea to add variables that predict the outcome.
It's a bad idea to add variables that only predict the treatment; this correlation with the treatment will make it harder for us to detect the true effect of the treatment.