# code for loading the format for the notebook
import os

# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))

from formats import load_style
load_style(plot_style=False)
os.chdir(path)

# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'

import os
import time
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

# prevent scientific notations
pd.set_option('display.float_format', lambda x: '%.3f' % x)
plt.style.use('fivethirtyeight')

%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn,matplotlib,seaborn,statsmodels
Ethen 2020-11-07 13:28:40

CPython 3.6.4
IPython 7.15.0

numpy 1.18.5
pandas 1.0.5
sklearn 0.23.1
matplotlib 3.1.0
seaborn 0.9.0
statsmodels 0.11.1

Inverse Propensity Weighting

The purpose of using propensity scores is to balance the treatment and control groups on the observed covariates. The advantage of using weighting is that every individual/record in our dataset can be leveraged, unlike techniques such as matching, where we might discard a large amount of data in the control group.

In this document, we'll be:

  • Conducting an analysis of the outcome without the use of propensity scores.

  • Estimating the propensity score.

  • Conducting the outcome analysis using inverse propensity weighting.

Data Preprocessing

We'll be working with data from The National Study of Learning Mindsets. The study is an attempt to find out whether instilling a growth mindset in students improves their overall academic performance. In this dataset, academic performance is recorded as a standardized achievement_score.

Besides the treatment and outcome variables, the study also recorded some other features:

  • schoolid: identifier of the student's school

  • success_expect: self-reported expectations for success in the future, a proxy for prior achievement, measured prior to random assignment

  • ethnicity: categorical variable for student race/ethnicity

  • gender: categorical variable for student identified gender

  • frst_in_family: categorical variable for student first-generation status, i.e. first in family to go to college

  • school_urbanicity: school-level categorical variable for the urbanicity of the school, i.e. rural, suburban, etc.

  • school_mindset: school-level mean of students' fixed mindsets, reported prior to random assignment, standardized

  • school_achievement: school achievement level, as measured by test scores and college preparation for the previous 4 cohorts of students, standardized

  • school_ethnic_minority: school racial/ethnic minority composition, i.e., percentage of student body that is Black, Latino, or Native American, standardized

  • school_poverty: school poverty concentration, i.e., percentage of students who are from families whose incomes fall below the federal poverty line, standardized

  • school_size: total number of students in all four grade levels in the school, standardized.

# we'll only work with a subset of the columns, feel free to experiment with others
cat_cols = ['ethnicity', 'gender', 'school_urbanicity']
num_cols = ['school_mindset', 'school_achievement', 'school_ethnic_minority',
            'school_poverty', 'school_size']
treatment_col = 'intervention'
label_col = 'achievement_score'

use_cols = [label_col, treatment_col] + num_cols + cat_cols
df = pd.read_csv('data/learning_mindset.csv', usecols=use_cols)[use_cols]
print(df.shape)
df.head()
(10391, 10)

Our data will be fed into a logistic regression in the next section, so here we one-hot encode the categorical variables. As the numeric features are already standardized, we leave them as is.

one_hot_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False, dtype=np.int32)
column_transformer = ColumnTransformer(
    transformers=[
        ('one_hot', one_hot_encoder, cat_cols)
    ],
    sparse_threshold=0,
    remainder='passthrough'
)
column_transformer.fit(df)
ColumnTransformer(remainder='passthrough', sparse_threshold=0,
                  transformers=[('one_hot',
                                 OneHotEncoder(dtype=<class 'numpy.int32'>,
                                               handle_unknown='ignore',
                                               sparse=False),
                                 ['ethnicity', 'gender', 'school_urbanicity'])])
one_hot_encoder = column_transformer.named_transformers_['one_hot']
one_hot_encoded_cols = one_hot_encoder.get_feature_names(cat_cols).tolist()

columns = one_hot_encoded_cols + [label_col, treatment_col] + num_cols
df = pd.DataFrame(column_transformer.transform(df), columns=columns)
print(df.shape)
df
(10391, 29)

Outcome Analysis

It's often useful to establish a baseline. Here, we would like to gauge what the result would look like if we didn't use propensity score weighting to control for potential biases in the assignment of individuals to the control and treatment groups.

# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12

# we can check the histogram of our label between the treatment and control
plt.hist(df.loc[df[treatment_col] == 0.0, label_col], bins=20, alpha=0.3, label='control')
plt.hist(df.loc[df[treatment_col] == 1.0, label_col], bins=20, alpha=0.3, label='treatment')
plt.legend()
plt.show()
[Figure: histograms of achievement_score for the control and treatment groups]
# fitting a linear regression to estimate the outcome
linear = LinearRegression()
linear.fit(df[[treatment_col]], df[label_col])
print(linear.intercept_, linear.coef_)
-0.15380303423613792 [0.47227167]
smf.ols(f'{label_col} ~ {treatment_col}', data=df).fit().summary().tables[1]

We can use linear regression from different packages to establish our baseline estimates; the one from statsmodels gives us some additional statistical information. By blindly comparing individuals with and without the intervention, we can see that, on average, those in the treatment group achieved an achievement score 0.4723 higher than the control group. Be aware that in this dataset the score was standardized, meaning the treated are 0.4723 standard deviations higher than the untreated.
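As a quick sanity check (a small sketch, not part of the original notebook), the slope of a regression on a binary treatment is just the raw difference between the two groups' average scores:

# with a binary treatment, the regression coefficient equals the
# difference between the treatment and control group means
group_means = df.groupby(treatment_col)[label_col].mean()
print(group_means.loc[1.0] - group_means.loc[0.0])  # should match the ~0.4723 coefficient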

Upon establishing the baseline, our next task is to question these numbers.

Propensity Score Estimation

The idea behind the propensity score is that we don't need to directly control for our confounders $X$ to achieve conditional independence, $(Y^1, Y^0) \perp T \mid X$. Here, $Y^1$ denotes the outcome if the treatment $T$ was applied, whereas $Y^0$ denotes the outcome if the individual was in the control group.

Instead, it is sufficient to control for a single variable, the propensity score $P(x)$, which is the conditional probability of treatment, $P(T \mid X)$. In notation, with our propensity score we now have $(Y^1, Y^0) \perp T \mid P(X)$.
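For the curious, here is a short sketch (not in the original notebook) of why conditioning on $P(X)$ suffices. Using the law of iterated expectations, plus unconfoundedness given $X$ (which implies $E[T \mid X, Y^1, Y^0] = E[T \mid X] = P(X)$):

$$E[T \mid Y^1, Y^0, P(X)] = E\big[E[T \mid X, Y^1, Y^0] \mid Y^1, Y^0, P(X)\big] = E\big[P(X) \mid Y^1, Y^0, P(X)\big] = P(X)$$

Since the result does not depend on $(Y^1, Y^0)$, treatment assignment is independent of the potential outcomes once we condition on $P(X)$.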

Here, we'll use a logistic regression to estimate our propensity score. Feel free to use other classification techniques, but keep in mind that they might not produce well-calibrated probabilities, and the utmost goal of propensity score estimation is to make sure we include all the confounding variables, rather than getting carried away with the different kinds of classification models we could potentially use.

input_cols = one_hot_encoded_cols + num_cols

logistic = LogisticRegression()
logistic.fit(df[input_cols], df[treatment_col])

propensity_score = 'propensity_score'
df[propensity_score] = logistic.predict_proba(df[input_cols])[:, 1]
print(df.shape)
df.head()
(10391, 30)
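As an aside, if we did swap in a fancier classifier, scikit-learn's calibration_curve gives a quick calibration diagnostic. A minimal sketch (the gradient boosting model is purely an illustrative stand-in, not part of the original analysis):

from sklearn.calibration import calibration_curve
from sklearn.ensemble import GradientBoostingClassifier

# illustrative stand-in model; any classifier can be checked the same way
model = GradientBoostingClassifier()
model.fit(df[input_cols], df[treatment_col])
predicted = model.predict_proba(df[input_cols])[:, 1]

# fraction of actual treatments vs. mean predicted probability per bin;
# a well calibrated model lies close to the diagonal
fraction_positive, mean_predicted = calibration_curve(df[treatment_col], predicted, n_bins=10)
print(np.round(fraction_positive, 3))
print(np.round(mean_predicted, 3))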

After training our propensity score model, it's important to check for score overlap between the treated and untreated populations. Looking at the distribution plot below, we can find both treated and untreated individuals across the different score regions. In this case, we have a balanced dataset and our positivity assumption holds. Remember, positivity refers to the scenario where every individual has at least some chance of receiving either treatment, and that appears to be the case here. In summary, this is a situation where we would feel comfortable proceeding with our propensity score weighting.

Note that if we encounter a situation where the propensity scores of the treatment and control groups do not overlap, then we should stop and build a better model of the confounding, or in much simpler terms, check whether we are missing any important features in our propensity score model. This requires domain knowledge of the data we are working with. Failing to check for positivity and balance can introduce bias into our estimation, as we would be extrapolating the effect to unknown regions.
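One common safeguard when a handful of scores sit very close to 0 or 1 is to clip (or trim) them before inverting, so that no single record receives an enormous weight. A minimal sketch, with thresholds that are our own illustrative assumption rather than part of this notebook:

# hypothetical: clip propensity scores into [0.05, 0.95] before weighting,
# trading a little bias for a large reduction in weight variance
clipped_score = df[propensity_score].clip(lower=0.05, upper=0.95)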

control_score = df.loc[df[treatment_col] == 0.0, propensity_score]
treatment_score = df.loc[df[treatment_col] == 1.0, propensity_score]

sns.distplot(control_score, label='control')
sns.distplot(treatment_score, label='treatment')
plt.title('Propensity Score Distribution of Control vs Treatment')
plt.ylabel('Density')
plt.xlabel('Scores')
plt.legend()
plt.tight_layout()
plt.show()
[Figure: Propensity Score Distribution of Control vs Treatment]
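Beyond eyeballing the plot, a quick numeric companion check (a small sketch reusing the score series above): neither group's scores should be pinned against 0 or 1, and the two ranges should largely coincide.

# numeric check of overlap: compare the score ranges of the two groups
print('control   min/max:', control_score.min(), control_score.max())
print('treatment min/max:', treatment_score.min(), treatment_score.max())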

Outcome Analysis with Inverse Propensity Score Weighting

The final step in our analysis is to run our outcome model using the propensity scores as weights, i.e. fit a weighted regression. In order to use our propensity scores as weights, we need to apply a transformation known as Inverse Propensity Weighting (IPW). For individuals in the treatment group, $w = \frac{1}{P(x)}$, whereas for individuals in the control group, $w = \frac{1}{1 - P(x)}$.
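The two cases can be written as a single weight per individual $i$, and plugging the weights into a simple average gives the standard IPW estimate of the average treatment effect:

$$w_i = \frac{T_i}{P(x_i)} + \frac{1 - T_i}{1 - P(x_i)}, \qquad \widehat{ATE} = \frac{1}{N} \sum_{i=1}^{N} \left( \frac{T_i Y_i}{P(x_i)} - \frac{(1 - T_i) Y_i}{1 - P(x_i)} \right)$$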

To understand why inverse propensity weighting helps us de-bias potentially biased data, say the data we collected contains 200 records from group A and 2000 records from group B. To "balance" our dataset, we would like to up-weight the records from group A and down-weight the records from group B. If we weight each record by the inverse of its group's size, $1/200$ for group A and $1/2000$ for group B, each group ends up carrying an effective total weight of 1 record.
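A toy numeric version of this example:

# 200 records weighted 1/200 and 2000 records weighted 1/2000
# both contribute the same effective total weight
group_a_weight = np.full(200, 1.0 / 200)
group_b_weight = np.full(2000, 1.0 / 2000)
print(group_a_weight.sum(), group_b_weight.sum())  # 1.0 1.0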

Coming back to our example, we take all the individuals in the treatment group and scale them by the inverse propensity of being treated, $w = \frac{1}{P(x)}$. This makes those with a very low probability of treatment carry a high weight: if an individual in the treatment group looks like he/she should belong to the control group, we give that individual a higher weight. The effect is to create a population the same size as the original, but in which everyone is treated. The same reasoning applies to the control group.

treatment_weight = 1.0 / treatment_score
control_weight = 1.0 / (1.0 - control_score)
print('Original Sample Size', df.shape[0])
print('Treated Population Sample Size', treatment_weight.sum())
print('Untreated Population Sample Size', control_weight.sum())
Original Sample Size 10391
Treated Population Sample Size 10390.321766244168
Untreated Population Sample Size 10390.61301069852
sample_weight = 'sample_weight'
df[sample_weight] = np.where(
    df[treatment_col] == 1.0,
    1.0 / df[propensity_score],
    1.0 / (1.0 - df[propensity_score])
)

Once the sample weights are created, we can re-estimate the outcome with a weighted linear regression.

smf.wls(f'{label_col} ~ {treatment_col}', data=df, weights=df[sample_weight]).fit().summary().tables[1]
linear = LinearRegression()
linear.fit(df[[treatment_col]], df[label_col], sample_weight=df[sample_weight])
print(linear.intercept_, linear.coef_)
-0.14633238210898603 [0.44298379]
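As a cross-check (a sketch, not part of the original notebook), we can also compute the Horvitz-Thompson style IPW estimate from the formula above. It won't match the weighted regression exactly, since the regression effectively normalizes by the sum of weights, but the two numbers should be close:

# direct IPW estimate of the average treatment effect
t = df[treatment_col].values
y = df[label_col].values
p = df[propensity_score].values
ate_ipw = np.mean(t * y / p) - np.mean((1.0 - t) * y / (1.0 - p))
print(ate_ipw)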

FYI, even though scikit-learn's LinearRegression doesn't give us an estimated standard error by default, we can estimate one using bootstrapping, i.e. by sampling with replacement from the original data and computing the average treatment effect as before. After repeating this step many times, we get a distribution of the outcome estimate. We'll also use this opportunity to organize the overall workflow into a single code cell.

def run_propensity_score_estimation(df, input_cols, treatment_col, label_col):
    # df is our pre-processed data, bootstrap by sampling with replacement
    df = df.sample(frac=1, replace=True)

    # estimate the propensity score
    logistic = LogisticRegression()
    logistic.fit(df[input_cols], df[treatment_col])
    propensity_score = logistic.predict_proba(df[input_cols])[:, 1]

    # calculate the inverse propensity weight
    sample_weight = np.where(
        df[treatment_col] == 1.0,
        1.0 / propensity_score,
        1.0 / (1.0 - propensity_score)
    )

    # estimate the outcome using weighted regression
    linear = LinearRegression()
    linear.fit(df[[treatment_col]], df[label_col], sample_weight=sample_weight)
    return linear.coef_[0]
from joblib import Parallel, delayed

np.random.seed(88)

# the bootstrap approach of computing standard error can be
# computationally expensive on large datasets
bootstrap_sample = 1000
parallel = Parallel(n_jobs=4)
ates = parallel(delayed(run_propensity_score_estimation)(df, input_cols, treatment_col, label_col)
                for _ in range(bootstrap_sample))
ates = np.array(ates)

print(f"ATE: {ates.mean()}")
print(f"95% C.I.: {(np.percentile(ates, 2.5), np.percentile(ates, 97.5))}")
ATE: 0.44207303186986624
95% C.I.: (0.402353989536924, 0.4803417055971445)

Ending Notes

In this article, we took a stab at calculating the average treatment effect, $E[Y \mid T=1] - E[Y \mid T=0]$, using propensity scores. We saw how inverse propensity weighting helps us create un-biased data from biased data. With this method, it's important to call out that the features used for the propensity score model should be chosen with care. In general:

  • Aim to include confounding variables, i.e. variables that affect both the treatment and the outcome.

  • It's a good idea to add variables that predict the outcome.

  • It's a bad idea to add variables that only predict the treatment; their correlation with the treatment will make it harder for us to detect the treatment's true effect.
