Use Lale AIF360 DisparateImpactRemover to mitigate bias for credit risk AutoAI model
This notebook contains the steps and code to demonstrate support for AutoAI experiments in the Watson Machine Learning service. It introduces commands for bias detection and mitigation performed with the lale.lib.aif360 module.
Some familiarity with Python is helpful. This notebook uses Python 3.10.
NOTE: This notebook is a continuation of the sample notebook "Use AutoAI to train fair models".
Contents
This notebook contains the following parts:
Install and import the ibm-watson-machine-learning, lale, and aif360 packages and their dependencies.
Note: ibm-watson-machine-learning documentation can be found here.
Connection to WML
Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform url, your username, and api_key.
Alternatively, you can use username and password to authenticate WML services.
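For example, a minimal sketch of the credentials dictionary (the url, username, and api_key values are placeholders you must replace; the instance_id and version fields follow the usual Cloud Pak for Data pattern):

```python
# Sketch only: replace the placeholder values with your own Cloud Pak for Data details.
username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'

wml_credentials = {
    "username": username,
    "apikey": api_key,
    "url": url,
    "instance_id": "openshift",
    "version": "4.8"
}
```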
Working with spaces
First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.
Click New Deployment Space
Create an empty space
Go to the space Settings tab
Copy the space_id and paste it below
Tip: You can also use the SDK to prepare the space for your work. More information can be found here.
Action: Assign space ID below
You can use the list method to print all existing spaces.
To be able to interact with all resources available in Watson Machine Learning, you need to set the space which you will be using.
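A minimal sketch of these steps, assuming the wml_credentials dictionary from the authentication section and a space_id copied from the UI:

```python
from ibm_watson_machine_learning import APIClient

client = APIClient(wml_credentials)

space_id = 'PASTE YOUR SPACE ID HERE'

# Print existing spaces and set the one to work in.
client.spaces.list(limit=10)
client.set.default_space(space_id)
```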
Initialize the AutoAI experiment with Watson Machine Learning credentials and the space.
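A sketch of the initialization, assuming the credentials and space_id defined above:

```python
from ibm_watson_machine_learning.experiment import AutoAI

# The experiment object is the entry point to historical AutoAI runs in the space.
experiment = AutoAI(wml_credentials, space_id=space_id)
```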
List all previous AutoAI experiment runs named 'Credit Risk Prediction and bias detection - AutoAI', which were run in the sample notebook "Use AutoAI to train fair models".
NOTE: If you don't have any experiments listed below, please run the "Use AutoAI to train fair models" notebook first and then continue running the current notebook.
Load the last experiment run into the variable pipeline_optimizer.
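A sketch of both steps; the run_id value is a placeholder copied from the listing:

```python
# List historical runs of the experiment from the previous notebook.
experiment.runs(filter='Credit Risk Prediction and bias detection - AutoAI').list()

# Load the optimizer of the chosen run into pipeline_optimizer.
run_id = 'PASTE THE RUN ID HERE'
pipeline_optimizer = experiment.runs.get_optimizer(run_id=run_id)
```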
Get selected pipeline model
Download the pipeline model object from the AutoAI training job.
Get the Credit Risk dataset from the experiment configuration.
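A sketch of retrieving the pipeline and the data; the target column name Risk is an assumption based on the Credit Risk sample dataset:

```python
# Download the selected pipeline as a Lale pipeline object.
pipeline_model = pipeline_optimizer.get_pipeline(astype=AutoAI.PipelineTypes.LALE)

# Read the training data referenced by the experiment configuration.
train_df = pipeline_optimizer.get_data_connections()[0].read()
X = train_df.drop(['Risk'], axis=1)   # assumed target column name
y = train_df['Risk']
```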
Bias detection and mitigation
The fairness_info dictionary contains some fairness-related metadata. The favorable and unfavorable labels are values of the target column that indicate whether the loan was granted or denied. A protected attribute is a feature that partitions the population into groups whose outcomes should have parity. The credit-risk dataset has two protected attribute columns, sex and age. Each protected attribute has a monitored group and a reference group.
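A sketch of what fairness_info may look like for this dataset; the exact label value, column names, and reference groups are assumptions based on the Credit Risk sample:

```python
fairness_info = {
    "favorable_labels": ["No Risk"],  # loan granted
    "protected_attributes": [
        {"feature": "Sex", "reference_group": ["male"]},
        {"feature": "Age", "reference_group": [[26, 75]]},
    ],
}
```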
Calculate fairness metrics
We will calculate some model metrics. Accuracy describes how accurate the model is on the dataset. Disparate impact is defined by comparing outcomes between a privileged group and an unprivileged group, so it needs to check the protected attribute to determine group membership for the sample record at hand. The closer the disparate impact value is to 1, the less biased the model is. The third metric combines disparate impact with accuracy; its best value is 1.0.
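A sketch of the metric computation with the scorer factories from lale.lib.aif360, assuming X, y, pipeline_model, and fairness_info from the previous steps:

```python
from sklearn.metrics import accuracy_score
from lale.lib.aif360 import disparate_impact, accuracy_and_disparate_impact

# Scorers are built from the fairness metadata and then applied to a model.
di_scorer = disparate_impact(**fairness_info)
combined_scorer = accuracy_and_disparate_impact(**fairness_info)

print("accuracy:", accuracy_score(y, pipeline_model.predict(X)))
print("disparate impact:", di_scorer(pipeline_model, X, y))
print("accuracy and disparate impact:", combined_scorer(pipeline_model, X, y))
```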
Refinement with lale
In this section we will use the DisparateImpactRemover algorithm from the lale.lib.aif360 module to mitigate fairness problems. It modifies the features other than the protected attribute in such a way that it becomes hard to predict the protected attribute from them. This algorithm has a hyperparameter repair_level that we will tune with hyperparameter optimization.
Pipeline decomposition and new definition
Start by removing the last step of the pipeline, i.e., the final estimator.
Initialize the DisparateImpactRemover with the fairness configuration and the pipeline without the final estimator, and add a new final step that consists of a choice of two estimators. In this code, | is the or combinator (algorithmic choice). It defines a search space for another optimizer run.
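A sketch of the decomposition and the new pipeline definition; the two candidate estimators are arbitrary choices for illustration:

```python
from lale.lib.aif360 import DisparateImpactRemover
from lale.lib.sklearn import RandomForestClassifier, GradientBoostingClassifier

# Drop the final estimator and keep the (frozen) preprocessing prefix.
prefix = pipeline_model.remove_last().freeze_trainable()

# DisparateImpactRemover repairs the non-protected features; `>>` pipes its
# output into a choice (`|`) between two candidate estimators.
di_remover = DisparateImpactRemover(**fairness_info, preparation=prefix)
planned_pipeline = di_remover >> (RandomForestClassifier | GradientBoostingClassifier)
```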
Fairness metrics can be more unstable than accuracy, because they depend not just on the distribution of labels, but also on the distribution of privileged and unprivileged groups as defined by the protected attributes. In AI automation, k-fold cross validation helps reduce overfitting. To get more stable results, we will stratify these k folds by both labels and groups with the FairStratifiedKFold class.
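For example:

```python
from lale.lib.aif360 import FairStratifiedKFold

# Cross-validation folds stratified by both the label and the protected groups.
fair_cv = FairStratifiedKFold(**fairness_info, n_splits=5)
```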
Pipeline training
To automatically select the algorithm and tune its hyperparameters, we use the auto_configure method of the lale pipeline. The combined metric accuracy_and_disparate_impact is used as the scoring metric in the evaluation process.
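A sketch of the optimization run, assuming the planned pipeline, the fair cross-validation splitter, and the combined scorer defined above; the evaluation budget is an arbitrary value:

```python
from lale.lib.lale import Hyperopt

fair_trained = planned_pipeline.auto_configure(
    X, y,
    optimizer=Hyperopt,
    cv=fair_cv,
    scoring=combined_scorer,
    max_evals=10,  # assumed budget
)
```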
Results
Visualize the final pipeline and calculate its metrics.
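For example:

```python
# Render the selected pipeline graph (requires graphviz) and re-compute the metrics.
fair_trained.visualize()

print("accuracy:", accuracy_score(y, fair_trained.predict(X)))
print("disparate impact:", di_scorer(fair_trained, X, y))
```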
Summary: As a result of the described steps, we obtained a pipeline model that is unbiased according to the disparate impact value; however, the accuracy of the model decreased from 70% to 66%.
Custom software_specification
Created model is AutoAI model refined with Lale. We will create new software specification based on default Python 3.10 environment extended by autoai-libs
package.
The config.yaml file describes the details of the package extension. Now you need to store the new package extension with the APIClient.
Create a new software specification and add the created package extension to it.
You can get details of the created software specification using client.software_specifications.get_details(sw_spec_uid).
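Putting these steps together, a sketch using the package extension and software specification APIs; the base specification name and the display names are assumptions:

```python
# Store the package extension that wraps config.yaml.
pkg_meta = {
    client.package_extensions.ConfigurationMetaNames.NAME: "autoai-libs extension",
    client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml",
}
pkg_details = client.package_extensions.store(meta_props=pkg_meta, file_path="config.yaml")
pkg_uid = client.package_extensions.get_uid(pkg_details)

# Create a software specification based on the default Python 3.10 runtime
# (base specification name assumed) and attach the package extension.
base_id = client.software_specifications.get_uid_by_name("runtime-23.1-py3.10")
sw_meta = {
    client.software_specifications.ConfigurationMetaNames.NAME: "fair credit risk software spec",
    client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {"guid": base_id},
}
sw_details = client.software_specifications.store(meta_props=sw_meta)
sw_spec_uid = client.software_specifications.get_uid(sw_details)
client.software_specifications.add_package_extension(sw_spec_uid, pkg_uid)

client.software_specifications.get_details(sw_spec_uid)
```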
Store the model
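A sketch of storing the refined pipeline, assuming the trained pipeline and software specification from the previous steps; the model name and type string are assumptions:

```python
model_meta = {
    client.repository.ModelMetaNames.NAME: "Fair credit risk AutoAI model",
    client.repository.ModelMetaNames.TYPE: "scikit-learn_1.1",  # assumed runtime type
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid,
}
model_details = client.repository.store_model(
    model=fair_trained,
    meta_props=model_meta,
    training_data=X,
    training_target=y,
)
model_id = client.repository.get_model_id(model_details)
```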
Deployment creation
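A sketch of creating an online deployment for the stored model (the deployment name is an assumption):

```python
deployment_meta = {
    client.deployments.ConfigurationMetaNames.NAME: "Fair credit risk deployment",
    client.deployments.ConfigurationMetaNames.ONLINE: {},
}
deployment_details = client.deployments.create(artifact_uid=model_id, meta_props=deployment_meta)
deployment_id = client.deployments.get_id(deployment_details)
```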
Deployment scoring
You need to pass scoring values as input data to the deployed model. Use the client.deployments.score() method to get predictions from the deployed model.
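A sketch of a scoring request, reusing a couple of rows of the training data as a hypothetical payload:

```python
scoring_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        "fields": X.columns.tolist(),
        "values": X.iloc[:2].values.tolist(),
    }]
}
predictions = client.deployments.score(deployment_id, scoring_payload)
print(predictions)
```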
If you want to clean up all created assets:
experiments
trainings
pipelines
model definitions
models
functions
deployments
please follow this sample notebook.
You successfully completed this notebook!
Check out the documentation for the packages used:
ibm-watson-machine-learning: Online Documentation
aif360: https://aif360.mybluemix.net/
Authors
Dorota Lączak, software engineer in Watson Machine Learning at IBM
Copyright © 2022-2025 IBM. This notebook and its source code are released under the terms of the MIT License.