Use Lale, AIF360 and DisparateImpactRemover to mitigate bias for credit risk AutoAI model
This notebook contains the steps and code to demonstrate support of AutoAI experiments in the watsonx.ai Runtime service. It introduces commands for bias detection and mitigation performed with the lale.lib.aif360 module.
Some familiarity with Python is helpful. This notebook uses Python 3.11.
NOTE: This notebook is a continuation of the sample notebook "Use AutoAI to train fair models".
Contents
This notebook contains the following parts:
1. Set up the environment
If you are not familiar with the watsonx.ai Runtime service and AutoAI experiments, please read more about them in the sample notebook: "Use AutoAI to train fair models".
Install and import the ibm-watsonx-ai, lale and aif360 packages and their dependencies.
Note: ibm-watsonx-ai documentation can be found here.
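A minimal installation sketch (run once per environment; versions are left unpinned for brevity, pin them as your environment requires):

```python
# Install the required packages inside the notebook environment
!pip install -U ibm-watsonx-ai lale aif360 | tail -n 1
```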
Connection to watsonx.ai Runtime
Authenticate the watsonx.ai Runtime service on IBM Cloud. You need to provide your Cloud API key and location.
Tip: Your Cloud API key can be generated by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name and click Create, then copy the created key and paste it below. You can also get a service-specific URL by going to the Endpoint URLs section of the watsonx.ai Runtime docs. You can check your instance location in your watsonx.ai Runtime Service instance details.
You can use the IBM Cloud CLI to retrieve the instance location.
NOTE: You can also get a service-specific apikey by going to the Service IDs section of the Cloud Console. From that page, click Create, then copy the created key and paste it in the following cell.
Action: Enter your api_key and location in the following cell.
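A sketch of the authentication step; the us-south endpoint URL is an assumption, replace it with the endpoint for your location:

```python
import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    api_key=getpass.getpass("Enter your Cloud API key: "),
    url="https://us-south.ml.cloud.ibm.com",  # assumed endpoint; use your location's URL
)
```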
Working with spaces
You need to create a space that will be used for your work. If you do not have a space, you can use Deployment Spaces Dashboard to create one.
Click New Deployment Space
Create an empty space
Select Cloud Object Storage
Select watsonx.ai Runtime instance and press Create
Copy space_id and paste it below
Tip: You can also use the SDK to prepare the space for your work. More information can be found here.
Action: Assign the space ID below.
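A minimal sketch of assigning the space and making it the default for the API client:

```python
from ibm_watsonx_ai import APIClient

space_id = "PASTE YOUR SPACE ID HERE"

# Create an API client and scope all subsequent calls to the space
client = APIClient(credentials)
client.set.default_space(space_id)
```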
Initialize the AutoAI experiment with watsonx.ai Runtime credentials and the space.
List all previous AutoAI experiment runs named 'Credit Risk Prediction and bias detection - AutoAI', which were run in the sample notebook "Use AutoAI to train fair models".
NOTE: If you don't have any experiment listed below, please run the "Use AutoAI to train fair models" notebook first and then continue running the current notebook.
Load the last experiment run into the variable pipeline_optimizer.
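A sketch of these three steps, assuming the run-listing and optimizer-restoring helpers of the AutoAI experiment object; the run ID placeholder must be filled in from the listing:

```python
from ibm_watsonx_ai.experiment import AutoAI

# Initialize the experiment against the default space
experiment = AutoAI(credentials, space_id=space_id)

# List runs of the experiment from the earlier sample notebook
experiment.runs(filter="Credit Risk Prediction and bias detection - AutoAI").list()

# Restore the optimizer from the most recent run
pipeline_optimizer = experiment.runs.get_optimizer(run_id="PASTE RUN ID HERE")
```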
Get selected pipeline model
Download pipeline model object from the AutoAI training job.
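A minimal sketch; requesting the pipeline astype Lale returns an object that the refinement steps below can decompose:

```python
from ibm_watsonx_ai.utils.autoai.enums import PipelineTypes

# Download the selected pipeline as a Lale pipeline object
pipeline_model = pipeline_optimizer.get_pipeline(astype=PipelineTypes.LALE)
```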
Get Credit Risk dataset from experiment configuration.
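A sketch of reading the data back and splitting it for evaluation; the target column name "Risk" and the split parameters are assumptions for illustration:

```python
from sklearn.model_selection import train_test_split

# Read the training data from the experiment's data connection
df = pipeline_optimizer.get_data_connections()[0].read()

train_X = df.drop(["Risk"], axis=1)  # assumed target column of the credit-risk dataset
train_y = df["Risk"]
train_X, test_X, train_y, test_y = train_test_split(
    train_X, train_y, test_size=0.2, random_state=33
)
```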
3. Bias detection and mitigation
The fairness_info dictionary contains some fairness-related metadata. The favorable and unfavorable labels are values of the target column that indicate whether the loan was granted or denied. A protected attribute is a feature that partitions the population into groups whose outcomes should have parity. The credit-risk dataset has two protected attribute columns, sex and age. Each protected attribute has a monitored group and a reference group.
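An illustrative fairness_info in the lale.lib.aif360 format; the favorable label and the group boundaries are assumptions, not values read from the experiment:

```python
fairness_info = {
    "favorable_labels": ["No Risk"],  # assumed favorable value of the target column
    "protected_attributes": [
        {"feature": "Sex", "reference_group": ["male"]},    # monitored group: everyone else
        {"feature": "Age", "reference_group": [[26, 75]]},  # assumed reference age range
    ],
}
```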
Calculate fairness metrics
We will calculate some model metrics. Accuracy describes how accurately the model predicts on the dataset. Disparate impact is defined by comparing outcomes between a privileged group and an unprivileged group, so it needs to check the protected attributes to determine group membership for the record at hand. The closer the disparate impact value is to 1, the less biased the model. The third metric combines disparate impact with accuracy; the best value of this score is 1.0.
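A sketch of computing the three metrics with the scorer factories from lale.lib.aif360 (the combined scorer is reused later as the optimization target):

```python
import sklearn.metrics

from lale.lib.aif360 import accuracy_and_disparate_impact, disparate_impact

accuracy_scorer = sklearn.metrics.make_scorer(sklearn.metrics.accuracy_score)
disparate_impact_scorer = disparate_impact(**fairness_info)
combined_scorer = accuracy_and_disparate_impact(**fairness_info)

print(f"accuracy          {accuracy_scorer(pipeline_model, test_X, test_y):.1%}")
print(f"disparate impact  {disparate_impact_scorer(pipeline_model, test_X, test_y):.2f}")
print(f"combined          {combined_scorer(pipeline_model, test_X, test_y):.2f}")
```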
Refinement with lale
In this section we will use the DisparateImpactRemover algorithm from the lale.lib.aif360 module to mitigate fairness problems. It modifies the features other than the protected attributes in such a way that it is hard to predict the protected attributes from them. This algorithm has a hyperparameter repair_level that we will tune with hyperparameter optimization.
Pipeline decomposition and new definition
Start by removing the last step of the pipeline, i.e., the final estimator.
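A minimal sketch using Lale's remove_last; freezing the prefix keeps its trained transformers fixed during the later search:

```python
# Drop the final estimator, keeping the trained data-preparation prefix
prefix = pipeline_model.remove_last().freeze_trainable()
prefix.visualize()
```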
Initialize the DisparateImpactRemover with the fairness configuration and the pipeline without the final estimator, then add a new final step that consists of a choice of two estimators. In this code, | is the 'or' combinator (algorithmic choice). It defines a search space for another optimizer run.
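A sketch of the new pipeline definition; the two candidate estimators are illustrative choices, not necessarily the ones from the original experiment:

```python
from lale.lib.aif360 import DisparateImpactRemover
from lale.lib.sklearn import LogisticRegression, RandomForestClassifier

# DisparateImpactRemover repairs the prepared features; `|` lets the optimizer
# choose the final estimator, and `>>` pipes one step into the next
dir_pipeline = DisparateImpactRemover(
    **fairness_info, preparation=prefix
) >> (LogisticRegression | RandomForestClassifier)
```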
Fairness metrics can be more unstable than accuracy, because they depend not just on the distribution of labels, but also on the distribution of privileged and unprivileged groups as defined by the protected attributes. In AI automation, k-fold cross validation helps reduce overfitting. To get more stable results, we will stratify these k folds by both labels and groups with the FairStratifiedKFold class.
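A minimal sketch; n_splits=5 is an illustrative choice:

```python
from lale.lib.aif360 import FairStratifiedKFold

# Stratify folds by both the labels and the protected-attribute groups
fair_cv = FairStratifiedKFold(**fairness_info, n_splits=5)
```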
Pipeline training
To automatically select the algorithm and tune its hyperparameters, we use the auto_configure method of the Lale pipeline. The combined metric accuracy_and_disparate_impact is used as the scoring metric during evaluation.
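A sketch of the optimizer run; max_evals is an illustrative budget:

```python
from lale.lib.lale import Hyperopt

# Jointly pick the final estimator and tune repair_level and the estimator
# hyperparameters against the combined fairness/accuracy score
fairer_model = dir_pipeline.auto_configure(
    train_X, train_y,
    optimizer=Hyperopt,
    cv=fair_cv,
    max_evals=10,  # assumed search budget
    scoring=combined_scorer,
)
```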
Results
Visualize the final pipeline and calculate its metrics.
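For example, reusing the scorers from above:

```python
fairer_model.visualize()

print(f"accuracy          {accuracy_scorer(fairer_model, test_X, test_y):.1%}")
print(f"disparate impact  {disparate_impact_scorer(fairer_model, test_X, test_y):.2f}")
```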
Summary: As a result of the described steps, we obtained a pipeline model that is unbiased according to the disparate impact value; however, the accuracy of the model decreased from 70% to 66%.
5. Deploy and Score
In this section you will learn how to deploy and score a Lale pipeline model using a watsonx.ai Runtime instance.
Store the model
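A hedged sketch of storing the trained pipeline in the repository; the model type and software-specification name are assumptions that must match your runtime:

```python
software_spec_id = client.software_specifications.get_id_by_name("runtime-24.1-py3.11")  # assumed spec

model_props = {
    client.repository.ModelMetaNames.NAME: "Fair credit risk model",
    client.repository.ModelMetaNames.TYPE: "scikit-learn_1.3",  # assumed model type
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: software_spec_id,
}
published_model = client.repository.store_model(
    model=fairer_model,
    meta_props=model_props,
    training_data=train_X,
    training_target=train_y,
)
model_id = client.repository.get_model_id(published_model)
```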
Deployment creation
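A minimal sketch of creating an online deployment for the stored model:

```python
deployment_props = {
    client.deployments.ConfigurationMetaNames.NAME: "Fair credit risk deployment",
    client.deployments.ConfigurationMetaNames.ONLINE: {},
}
deployment = client.deployments.create(model_id, meta_props=deployment_props)
deployment_id = client.deployments.get_id(deployment)
```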
Deployment scoring
You need to pass scoring values as input data to the deployed model. Use the client.deployments.score() method to get predictions from the deployed model.
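A sketch of a scoring request; the payload fields must match the training columns:

```python
scoring_payload = {
    "input_data": [{
        "fields": list(test_X.columns),
        "values": test_X.iloc[:5].values.tolist(),  # score the first five test records
    }]
}
predictions = client.deployments.score(deployment_id, scoring_payload)
print(predictions)
```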
If you want to clean up all created assets:
experiments
trainings
pipelines
model definitions
models
functions
deployments
please follow this sample notebook.
You successfully completed this notebook!
Check out the documentation for the packages used:
ibm-watsonx-ai: Online Documentation
aif360: https://aif360.mybluemix.net/
Authors
Dorota Lączak, Software Engineer at watsonx.ai
Mateusz Szewczyk, Software Engineer at watsonx.ai
Copyright © 2022-2025 IBM. This notebook and its source code are released under the terms of the MIT License.