Laboratory 2: Computer Vision
Part 2: Debiasing Facial Detection Systems
In the second portion of the lab, we'll explore two prominent aspects of applied deep learning: facial detection and algorithmic bias.
Deploying fair, unbiased AI systems is critical to their long-term acceptance. Consider the task of facial detection: given an image, is it an image of a face? This seemingly simple, but extremely important, task is subject to significant amounts of algorithmic bias among select demographics.
In this lab, we'll investigate one recently published approach to addressing algorithmic bias. We'll build a facial detection model that learns the latent variables underlying face image datasets and uses this to adaptively re-sample the training data, thus mitigating any biases that may be present in order to train a debiased model.
Run the next code block for a short video from Google that explores how and why it's important to consider bias when thinking about machine learning:
Let's get started by installing the relevant dependencies.
We will be using Comet ML to track our model development and training runs.
Sign up for a Comet account: HERE. This will generate a personal API key, which you can find either on the first 'Get Started with Comet' page, under your account settings, or by pressing the '?' in the top right corner and then 'Quickstart Guide'. Enter this API key as the global variable COMET_API_KEY below.
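A minimal sketch of the setup is below. The package names comet_ml and mitdeeplearning are the real PyPI packages used by this lab series; the exact install cell in your notebook may differ.

```python
# Install the dependencies (uncomment when running in Colab; a sketch, not the exact lab cell)
# !pip install comet_ml mitdeeplearning torch torchvision

import comet_ml

COMET_API_KEY = ""  # paste your personal Comet API key here

assert COMET_API_KEY != "", "Please enter your Comet API key before continuing"
```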
2.1 Datasets
We'll be using three datasets in this lab. In order to train our facial detection models, we'll need a dataset of positive examples (i.e., of faces) and a dataset of negative examples (i.e., of things that are not faces). We'll use these data to train our models to classify images as either faces or not faces. Finally, we'll need a test dataset of face images. Since we're concerned about the potential bias of our learned models against certain demographics, it's important that the test dataset we use has equal representation across the demographics or features of interest. In this lab, we'll consider skin tone and gender.
Positive training data: CelebA Dataset. A large-scale dataset (over 200K images) of celebrity faces.
Negative training data: ImageNet. Many images across many different categories. We'll take negative examples from a variety of non-human categories.
Test data: A balanced test dataset of face images with equal representation across skin tone and gender. Skin tones are annotated according to the Fitzpatrick Scale skin type classification system, with each image labeled as "Lighter" or "Darker".
Let's begin by importing these datasets. We've written a class that does a bit of data pre-processing to import the training data in a usable format.
We can look at the size of the training dataset and grab a batch of size 100:
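A sketch of the import, assuming the mitdeeplearning (mdl) lab utilities and a local HDF5 copy of the training data; the exact loader class and file path in your notebook may differ.

```python
import mitdeeplearning as mdl

# Load the CelebA + ImageNet training data via the lab's pre-processing class
loader = mdl.lab2.TrainingDatasetLoader("train_face.h5")  # assumed local path

# Size of the training set, and a batch of 100 (image, label) pairs
number_of_training_examples = loader.get_train_size()
images, labels = loader.get_batch(100)
```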
Play around with displaying images to get a sense of what the training data actually looks like!
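For example, a quick way to display one face and one non-face side by side (assuming the images and labels batch from above):

```python
import matplotlib.pyplot as plt
import numpy as np

# Indices of the first positive (face) and negative (not-face) examples in the batch
face_idx = np.flatnonzero(labels == 1)[0]
not_face_idx = np.flatnonzero(labels == 0)[0]

fig, axes = plt.subplots(1, 2)
axes[0].imshow(images[face_idx]);     axes[0].set_title("Face")
axes[1].imshow(images[not_face_idx]); axes[1].set_title("Not face")
plt.show()
```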
Thinking about bias
Remember we'll be training our facial detection classifiers on the large, well-curated CelebA dataset (and ImageNet), and then evaluating their accuracy by testing them on an independent test dataset. Our goal is to build a model that trains on CelebA and achieves high classification accuracy on the test dataset across all demographics, and to thus show that this model does not suffer from any hidden bias.
What exactly do we mean when we say a classifier is biased? In order to formalize this, we'll need to think about latent variables, variables that define a dataset but are not strictly observed. As defined in the generative modeling lecture, we'll use the term latent space to refer to the probability distributions of the aforementioned latent variables. Putting these ideas together, we consider a classifier biased if its classification decision changes after it sees some additional latent features. This notion of bias may be helpful to keep in mind throughout the rest of the lab.
2.2 CNN for facial detection
First, we'll define and train a CNN on the facial classification task, and evaluate its accuracy. Later, we'll evaluate the performance of our debiased models against this baseline CNN. The CNN model has a relatively standard architecture consisting of a series of convolutional layers with batch normalization followed by two fully connected layers to flatten the convolution output and generate a class prediction.
Define and train the CNN model
Like we did in the first part of the lab, we'll define our CNN model, and then train it on the CelebA and ImageNet datasets by leveraging PyTorch's automatic differentiation (torch.autograd) via the loss.backward() and optimizer.step() functions.
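As a concrete reference, one way such an architecture might look is sketched below. The filter counts, kernel sizes, and the assumption of 64x64 RGB inputs are ours, not prescribed values.

```python
import torch
import torch.nn as nn

n_filters = 12  # base number of convolutional filters (assumed hyperparameter)

class StandardClassifier(nn.Module):
    """CNN for binary face / not-face classification (a sketch; layer sizes are assumptions)."""
    def __init__(self, n_outputs=1):
        super().__init__()
        # Four conv blocks, each halving spatial resolution: 64 -> 32 -> 16 -> 8 -> 4
        self.conv = nn.Sequential(
            nn.Conv2d(3, n_filters, 5, 2, 2),            nn.BatchNorm2d(n_filters),   nn.ReLU(),
            nn.Conv2d(n_filters, 2*n_filters, 5, 2, 2),  nn.BatchNorm2d(2*n_filters), nn.ReLU(),
            nn.Conv2d(2*n_filters, 4*n_filters, 3, 2, 1),nn.BatchNorm2d(4*n_filters), nn.ReLU(),
            nn.Conv2d(4*n_filters, 6*n_filters, 3, 2, 1),nn.BatchNorm2d(6*n_filters), nn.ReLU(),
        )
        # Two fully connected layers to flatten the conv output and predict the class
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(6 * n_filters * 4 * 4, 512), nn.ReLU(),  # assumes 64x64 inputs
            nn.Linear(512, n_outputs),
        )

    def forward(self, x):
        return self.fc(self.conv(x))
```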
Now let's train the standard CNN!
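A sketch of what that training loop might look like, assuming the StandardClassifier and loader from the sketches above; batch size, learning rate, and step count are placeholder hyperparameters.

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = StandardClassifier()
optimizer = optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()  # binary face / not-face objective

num_steps = 1000  # placeholder
for step in range(num_steps):
    x, y = loader.get_batch(32)
    x = torch.tensor(x, dtype=torch.float32).permute(0, 3, 1, 2)  # HWC -> CHW
    y = torch.tensor(y, dtype=torch.float32).squeeze()

    optimizer.zero_grad()
    logits = model(x).squeeze(1)
    loss = loss_fn(logits, y)
    loss.backward()    # automatic differentiation via torch.autograd
    optimizer.step()   # gradient update
```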
Evaluate performance of the standard CNN
Next, let's evaluate the classification performance of our CelebA-trained standard CNN on the training dataset.
We will also evaluate our networks on an independent test dataset containing faces that were not seen during training. For the test data, we'll look at the classification accuracy across four different demographics, based on the Fitzpatrick skin scale and sex-based labels: dark-skinned male, dark-skinned female, light-skinned male, and light-skinned female.
Let's take a look at some sample faces in the test set.
Now, let's evaluate the probability of each of these face demographics being classified as a face using the standard CNN classifier we've just trained.
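A sketch of this per-demographic evaluation, assuming the trained model from above and the mdl.lab2.get_test_faces() helper, which returns one batch of images per demographic group:

```python
import torch
import numpy as np
import mitdeeplearning as mdl

test_faces = mdl.lab2.get_test_faces()
group_names = ["Dark Male", "Dark Female", "Light Male", "Light Female"]

model.eval()
with torch.no_grad():
    for name, group in zip(group_names, test_faces):
        x = torch.from_numpy(np.asarray(group, dtype=np.float32)).permute(0, 3, 1, 2)
        probs = torch.sigmoid(model(x).squeeze(1))  # P(face) for each image
        print(f"{name}: mean probability of 'face' = {probs.mean().item():.4f}")
```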
Take a look at the accuracies for this first model across these four groups. What do you observe? Would you consider this model biased or unbiased? What are some reasons why a trained model may have biased accuracies?
2.3 Mitigating algorithmic bias
Imbalances in the training data can result in unwanted algorithmic bias. For example, the majority of faces in CelebA (our training set) are those of light-skinned females. As a result, a classifier trained on CelebA will be better suited at recognizing and classifying faces with features similar to these, and will thus be biased.
How could we overcome this? A naive solution -- and one that is being adopted by many companies and organizations -- would be to annotate different subclasses (e.g., light-skinned females, males with hats, etc.) within the training data, and then manually even out the data with respect to these groups.
But this approach has two major disadvantages. First, it requires annotating massive amounts of data, which is not scalable. Second, it requires that we know what potential biases (e.g., race, gender, pose, occlusion, hats, glasses, etc.) to look for in the data. As a result, manual annotation may not capture all the different features that are imbalanced within the training data.
Instead, let's actually learn these features in an unbiased, unsupervised manner, without the need for any annotation, and then train a classifier fairly with respect to these features. In the rest of this lab, we'll do exactly that.
2.4 Variational autoencoder (VAE) for learning latent structure
As you saw, the accuracy of the CNN varies across the four demographics we looked at. To think about why this may be, consider the dataset the model was trained on, CelebA. If certain features, such as dark skin or hats, are rare in CelebA, the model may end up biased against these as a result of training with a biased dataset. That is to say, its classification accuracy will be worse on faces that have under-represented features, such as dark-skinned faces or faces with hats, relative to faces with features well-represented in the training data! This is a problem.
Our goal is to train a debiased version of this classifier -- one that accounts for potential disparities in feature representation within the training data. Specifically, to build a debiased facial classifier, we'll train a model that learns a representation of the underlying latent space of the face training data. The model then uses this information to mitigate unwanted biases by sampling faces with rare features, like dark skin or hats, more frequently during training. The key design requirement for our model is that it can learn an encoding of the latent features in the face data in an entirely unsupervised way. To achieve this, we'll turn to variational autoencoders (VAEs).
As shown in the schematic above and in Lecture 4, VAEs rely on an encoder-decoder structure to learn a latent representation of the input data. In the context of computer vision, the encoder network takes in input images, encodes them into a series of variables defined by a mean and standard deviation, and then draws from the distributions defined by these parameters to generate a set of sampled latent variables. The decoder network then "decodes" these variables to generate a reconstruction of the original image, which is used during training to help the model identify which latent variables are important to learn.
Let's formalize two key aspects of the VAE model and define relevant functions for each.
Understanding VAEs: loss function
In practice, how can we train a VAE? In learning the latent space, we constrain the means and standard deviations to approximately follow a unit Gaussian. Recall that these are learned parameters, and must therefore factor into the loss computation. Additionally, the decoder portion of the VAE uses these parameters to output a reconstruction that should closely match the input image, which must also factor into the loss. This means we'll have two terms in our VAE loss function:
Latent loss ($L_{KL}$): measures how closely the learned latent variables match a unit Gaussian and is defined by the Kullback-Leibler (KL) divergence.
Reconstruction loss ($L_{x}(x,\hat{x})$): measures how accurately the reconstructed outputs match the input and is given by the $L_1$ norm of the difference between the input image and its reconstructed output.
The equation for the latent loss is provided by:
$$L_{KL}(\mu, \sigma) = \frac{1}{2}\sum_{j=0}^{k-1}\left(\sigma_j + \mu_j^2 - 1 - \log\sigma_j\right)$$
The equation for the reconstruction loss is provided by:
$$L_{x}(x, \hat{x}) = \|x - \hat{x}\|_1$$
Thus for the VAE loss we have:
$$L_{VAE} = c \cdot L_{KL} + L_{x}(x, \hat{x})$$
where $c$ is a weighting coefficient used for regularization. Now we're ready to define our VAE loss function:
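A minimal sketch of this loss in PyTorch. Returning the per-example loss (rather than the batch mean) and the kl_weight default are our choices; the per-example form will let the DB-VAE loss reuse this function later.

```python
import torch

def vae_loss_function(x, x_recon, mu, logsigma, kl_weight=0.0005):
    """Per-example VAE loss: c * L_KL + L_x (a sketch; kl_weight is an assumed value)."""
    # Latent loss: KL divergence between the learned latent distribution and a unit Gaussian,
    # with logsigma interpreted as the log-variance
    latent_loss = 0.5 * torch.sum(torch.exp(logsigma) + mu**2 - 1.0 - logsigma, dim=-1)
    # Reconstruction loss: mean absolute (L1) difference between input and reconstruction
    reconstruction_loss = torch.mean(torch.abs(x - x_recon), dim=(1, 2, 3))
    return kl_weight * latent_loss + reconstruction_loss  # shape: (batch,)
```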
Great! Now that we have a more concrete sense of how VAEs work, let's explore how we can leverage this network structure to train a debiased facial classifier.
Understanding VAEs: reparameterization
As you may recall from lecture, VAEs use a "reparameterization trick" for sampling learned latent variables. Instead of the VAE encoder generating a single vector of real numbers for each latent variable, it generates a vector of means and a vector of standard deviations that are constrained to roughly follow Gaussian distributions. We then sample random noise, scale it by the standard deviations, and add back the means to output the sampled latent vector. Formalizing this for a latent variable $z$, where we sample $\epsilon \sim \mathcal{N}(0, I)$, we have:
$$z = \mu + e^{\frac{1}{2}\log\Sigma} \circ \epsilon$$
where $\mu$ is the mean and $\Sigma$ is the covariance matrix. This is useful because it will let us neatly define the loss function for the VAE, generate randomly sampled latent variables, achieve improved network generalization, and make our complete VAE network differentiable so that it can be trained via backpropagation. Quite powerful!
Let's define a function to implement the VAE sampling operation:
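Here's a minimal sketch of that operation, assuming z_logsigma holds the log-variance, consistent with the loss function above:

```python
import torch

def sampling(z_mean, z_logsigma):
    """Reparameterization trick: z = mu + exp(0.5 * log(Sigma)) * epsilon."""
    # epsilon ~ N(0, I), same shape as the latent means
    epsilon = torch.randn_like(z_mean)
    return z_mean + torch.exp(0.5 * z_logsigma) * epsilon
```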
2.5 Debiasing variational autoencoder (DB-VAE)
Now, we'll use the general idea behind the VAE architecture to build a model, termed a debiasing variational autoencoder or DB-VAE, to mitigate (potentially) unknown biases present within the training data. We'll train our DB-VAE model on the facial detection task, run the debiasing operation during training, evaluate it on the independent test dataset, and compare its accuracy to our original, biased CNN model.
The DB-VAE model
The key idea behind this debiasing approach is to use the latent variables learned via a VAE to adaptively re-sample the CelebA data during training. Specifically, we will alter the probability that a given image is used during training based on how often its latent features appear in the dataset. So, faces with rarer features (like dark skin, sunglasses, or hats) should become more likely to be sampled during training, while the sampling probability for faces with features that are over-represented in the training dataset should decrease (relative to uniform random sampling across the training data).
A general schematic of the DB-VAE approach is shown here:
Recall that we want to apply our DB-VAE to a supervised classification problem -- the facial detection task. Importantly, note how the encoder portion in the DB-VAE architecture also outputs a single supervised variable, $z_o$, corresponding to the class prediction -- face or not face. Usually, VAEs are not trained to output any supervised variables (such as a class prediction)! This is another key distinction between the DB-VAE and a traditional VAE.
Keep in mind that we only want to learn the latent representation of faces, as that's what we're ultimately debiasing against, even though we are training a model on a binary classification problem. We'll need to ensure that, for faces, our DB-VAE model both learns a representation of the unsupervised latent variables, captured by the distribution $q_\phi(z|x)$, and outputs a supervised class prediction $z_o$, but that, for negative examples, it only outputs a class prediction $z_o$.
Defining the DB-VAE loss function
This means we'll need to be a bit clever about the loss function for the DB-VAE. The form of the loss will depend on whether it's a face image or a non-face image that's being considered.
For face images, our loss function will have two components:
VAE loss ($L_{VAE}$): consists of the latent loss and the reconstruction loss.
Classification loss ($L_y(y,\hat{y})$): standard cross-entropy loss for a binary classification problem.
In contrast, for images of non-faces, our loss function is solely the classification loss.
We can write a single expression for the loss by defining an indicator variable $\mathcal{I}_f(y)$ which reflects which training data are images of faces ($\mathcal{I}_f(y) = 1$) and which are images of non-faces ($\mathcal{I}_f(y) = 0$). Using this, we obtain:
$$L_{total} = L_y(y,\hat{y}) + \mathcal{I}_f(y)\, L_{VAE}$$
Let's write a function to define the DB-VAE loss function:
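A sketch of this loss, reusing the per-example vae_loss_function defined earlier; the argument order and the squeeze on the logit are our assumptions about the surrounding code:

```python
import torch
import torch.nn.functional as F

def debiasing_loss_function(x, x_recon, y, y_logit, mu, logsigma):
    """DB-VAE loss: classification loss for all examples, VAE loss only for faces."""
    # Per-example VAE loss (latent + reconstruction), defined earlier
    vae_loss = vae_loss_function(x, x_recon, mu, logsigma)
    # Standard binary cross-entropy classification loss, per example
    classification_loss = F.binary_cross_entropy_with_logits(
        y_logit.squeeze(-1), y, reduction="none")
    # Indicator I_f(y): 1 for faces, 0 for non-faces
    face_indicator = (y == 1.0).float()
    # Total loss: L_y + I_f(y) * L_VAE, averaged over the batch
    total_loss = torch.mean(classification_loss + face_indicator * vae_loss)
    return total_loss, classification_loss
```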
DB-VAE architecture
Now we're ready to define the DB-VAE architecture. To build the DB-VAE, we will use the standard CNN classifier from above as our encoder, and then define a decoder network. We will create and initialize the two models, and then construct the end-to-end VAE. We will use a latent space with 100 latent variables.
The decoder network will take as input the sampled latent variables, run them through a series of deconvolutional layers, and output a reconstruction of the original input image.
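A sketch of such a decoder; the layer sizes mirror the encoder sketch above and are assumptions chosen to reconstruct 64x64 RGB images from 100 latent variables.

```python
import torch.nn as nn

n_filters = 12
latent_dim = 100

class Decoder(nn.Module):
    """Decoder sketch: a linear layer followed by transposed convolutions, 4x4 -> 64x64."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 6 * n_filters * 4 * 4)
        # Each transposed conv doubles spatial resolution: 4 -> 8 -> 16 -> 32 -> 64
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(6*n_filters, 4*n_filters, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(4*n_filters, 2*n_filters, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(2*n_filters, n_filters, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(n_filters, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 6 * n_filters, 4, 4)
        return self.deconv(x)
```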
Now, we will put this decoder together with the standard CNN classifier as our encoder to define the DB-VAE. Note that at this point, there is nothing special about how we put the model together that makes it a "debiasing" model -- that will come when we define the training operation. Here, we will define the core VAE architecture by subclassing the nn.Module class; defining encoding, reparameterization, and decoding operations; and calling the network end-to-end.
As stated, the encoder architecture is identical to the CNN from earlier in this lab. Note the outputs of our constructed DB_VAE model in the forward function: y_logit, z_mean, z_logsigma, z. Think carefully about why each of these is outputted and about its significance to the problem at hand.
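Putting the pieces together, a sketch of the DB_VAE module might look like the following. It reuses the StandardClassifier, Decoder, and sampling sketches above; the key design choice is that the encoder's output width is 2 * latent_dim + 1, packing the class logit alongside the latent means and log-variances.

```python
import torch.nn as nn

class DB_VAE(nn.Module):
    """DB-VAE sketch: CNN encoder + decoder tied together with the reparameterization trick."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.latent_dim = latent_dim
        # Encoder outputs: 1 class logit + latent_dim means + latent_dim log-variances
        self.encoder = StandardClassifier(n_outputs=2 * latent_dim + 1)
        self.decoder = Decoder()

    def encode(self, x):
        out = self.encoder(x)
        y_logit = out[:, 0:1]                       # supervised class prediction
        z_mean = out[:, 1:self.latent_dim + 1]      # latent means
        z_logsigma = out[:, self.latent_dim + 1:]   # latent log-variances
        return y_logit, z_mean, z_logsigma

    def reparameterize(self, z_mean, z_logsigma):
        return sampling(z_mean, z_logsigma)         # sampling() defined above

    def decode(self, z):
        return self.decoder(z)                      # reconstruction of the input image

    def forward(self, x):
        y_logit, z_mean, z_logsigma = self.encode(x)
        z = self.reparameterize(z_mean, z_logsigma)
        return y_logit, z_mean, z_logsigma, z
```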
Adaptive resampling for automated debiasing with DB-VAE
So, how can we actually use DB-VAE to train a debiased facial detection classifier?
Recall the DB-VAE architecture: as input images are fed through the network, the encoder learns an estimate $\mathcal{Q}(z|X)$ of the latent space. We want to increase the relative frequency of rare data by increased sampling of under-represented regions of the latent space. We can approximate $\mathcal{Q}(z|X)$ using the frequency distributions of each of the learned latent variables, and then define the probability distribution of selecting a given datapoint based on this approximation. These probability distributions will be used during training to re-sample the data.
You'll write a function to execute this update of the sampling probabilities, and then call this function within the DB-VAE training loop to actually debias the model.
First, we've defined a short helper function get_latent_mu that returns the latent variable means returned by the encoder after a batch of images is inputted to the network:
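A sketch of this helper, assuming the DB_VAE sketch above and images stored channels-last with values already scaled to [0, 1]:

```python
import numpy as np
import torch

def get_latent_mu(images, dbvae, batch_size=64):
    """Return the latent means for a set of images, batch by batch."""
    mus = []
    dbvae.eval()
    with torch.no_grad():
        for i in range(0, len(images), batch_size):
            batch = torch.tensor(images[i:i + batch_size], dtype=torch.float32)
            batch = batch.permute(0, 3, 1, 2)   # HWC -> CHW
            _, z_mean, _ = dbvae.encode(batch)
            mus.append(z_mean.numpy())
    return np.concatenate(mus, axis=0)          # shape: (num_images, latent_dim)
```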
Now, let's define the actual resampling algorithm get_training_sample_probabilities. Importantly, note the argument smoothing_fac. This parameter tunes the degree of debiasing: for smoothing_fac=0, the re-sampled training set will tend towards falling uniformly over the latent space, i.e., the most extreme debiasing.
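A sketch of the algorithm: histogram the latent means along each latent dimension, smooth and invert the density so that rare latent values get high weight, and keep the maximum weight across dimensions. The number of bins is an assumed value.

```python
import numpy as np

def get_training_sample_probabilities(images, dbvae, bins=10, smoothing_fac=0.001):
    """Per-image re-sampling probabilities from histograms of the latent means (a sketch)."""
    mu = get_latent_mu(images, dbvae)              # (num_images, latent_dim)
    training_sample_p = np.zeros(mu.shape[0])

    # Consider the distribution of each latent variable in turn
    for i in range(mu.shape[1]):
        latent_distribution = mu[:, i]
        hist_density, bin_edges = np.histogram(latent_distribution, density=True, bins=bins)

        # Find which histogram bin each image's latent value falls into
        bin_edges[0] = -float("inf"); bin_edges[-1] = float("inf")
        bin_idx = np.digitize(latent_distribution, bin_edges)

        # Smooth the density, then invert it: rare latent values -> high sampling weight
        hist_smoothed_density = hist_density + smoothing_fac
        hist_smoothed_density /= np.sum(hist_smoothed_density)
        p = 1.0 / hist_smoothed_density[bin_idx - 1]
        p = p / np.sum(p)

        # Keep the maximum weight over all latent dimensions
        training_sample_p = np.maximum(p, training_sample_p)

    return training_sample_p / np.sum(training_sample_p)
```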
Now that we've defined the resampling update, we can train our DB-VAE model on the CelebA/ImageNet training data, and run the above operation to re-weight the importance of particular data points as we train the model. Remember again that we only want to debias for features relevant to faces, not the set of negative examples. Complete the code block below to execute the training loop!
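A sketch of what the completed loop might look like, tying together the pieces above. The loader helpers get_all_train_faces() and the p_pos argument to get_batch are assumptions about the lab's loader interface, and the hyperparameters are placeholders.

```python
import torch
import torch.optim as optim

# Placeholder hyperparameters
batch_size = 32
learning_rate = 5e-4
num_epochs = 6

dbvae = DB_VAE(latent_dim=100)
optimizer = optim.Adam(dbvae.parameters(), lr=learning_rate)

all_faces = loader.get_all_train_faces()                 # assumed helper: all positive examples
num_steps_per_epoch = loader.get_train_size() // batch_size

for epoch in range(num_epochs):
    # Recompute the adaptive sampling probabilities over faces before each epoch
    p_faces = get_training_sample_probabilities(all_faces, dbvae)
    dbvae.train()
    for _ in range(num_steps_per_epoch):
        # Faces are drawn according to p_faces; negatives are drawn uniformly
        x, y = loader.get_batch(batch_size, p_pos=p_faces)  # assumed signature
        x = torch.tensor(x, dtype=torch.float32).permute(0, 3, 1, 2)
        y = torch.tensor(y, dtype=torch.float32).squeeze()

        optimizer.zero_grad()
        y_logit, z_mean, z_logsigma, z = dbvae(x)
        x_recon = dbvae.decode(z)
        loss, _ = debiasing_loss_function(x, x_recon, y, y_logit, z_mean, z_logsigma)
        loss.backward()
        optimizer.step()
```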
Wonderful! Now we should have a trained and (hopefully!) debiased facial classification model, ready for evaluation!
2.6 Evaluation of DB-VAE on Test Dataset
Finally, let's test our DB-VAE model on the test dataset, looking specifically at its accuracy on each of the "Dark Male", "Dark Female", "Light Male", and "Light Female" demographics. We will compare the performance of this debiased model against the (potentially biased) standard CNN from earlier in the lab.
2.7 Conclusion and submission information
We encourage you to think about and maybe even address some questions raised by the approach and results outlined here:
How does the accuracy of the DB-VAE across the four demographics compare to that of the standard CNN? Do you find this result surprising in any way?
Can the performance of the DB-VAE classifier be improved even further?
In which applications (either related to facial detection or not!) would debiasing in this way be desired? Are there applications where you may not want to debias your model?
Do you think it should be necessary for companies to demonstrate that their models, particularly in the context of tasks like facial detection, are not biased? If so, do you have thoughts on how this could be standardized and implemented?
Do you have ideas for other ways to address issues of bias, particularly in terms of the training data?
The debiased model may or may not perform well based on the initial hyperparameters. This lab competition will be focused on your answers to the questions above, experiments you tried, and your interpretation and analysis of the results. To enter the competition, please upload the following to the lab submission site for the Debiasing Faces Lab (submission upload link).
a Jupyter notebook with the code you used to generate your results;
a copy of the bar plot from section 2.6 showing the performance of your model;
a written description and/or diagram of the architecture and hyperparameters you used -- if there are any additional or interesting modifications you made to the template code, please include these in your description;
a written discussion of why and how these modifications changed performance.
Name your file in the following format: [FirstName]_[LastName]_Face, followed by the file format (.zip, .ipynb, .pdf, etc.). ZIP files are preferred over individual files. If you submit individual files, you must name the individual files according to the above nomenclature (e.g., [FirstName]_[LastName]_Face_TODO.pdf, [FirstName]_[LastName]_Face_Report.pdf, etc.).
Hopefully this lab has shed some light on a few concepts, from vision-based tasks, to VAEs, to algorithmic bias. We like to think it has, but we're biased 😉.