GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cloud/notebooks/python_sdk/experiments/autoai/fairness/Use Lale AIF360 DisparateImpactRemover to mitigate bias for credit risk AutoAI model.ipynb
Kernel: Python 3 (ipykernel)

Use Lale, AIF360 and DisparateImpactRemover to mitigate bias for credit risk AutoAI model

This notebook contains the steps and code to demonstrate support for AutoAI experiments in the watsonx.ai Runtime service. It introduces commands for bias detection and mitigation performed with the lale.lib.aif360 module.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

NOTE: This notebook is a continuation of the sample notebook "Use AutoAI to train fair models".

1. Set up the environment

If you are not familiar with the watsonx.ai Runtime service and AutoAI experiments, please read more about them in the sample notebook "Use AutoAI to train fair models".

Install and import the ibm-watsonx-ai, lale, aif360 packages and their dependencies.

Note: ibm-watsonx-ai documentation can be found here.

!pip install wget
!pip install "scikit-learn==1.3.0" | tail -n 1
!pip install -U ibm-watsonx-ai | tail -n 1
!pip install -U autoai-libs | tail -n 1
!pip install -U 'lale[fairness]>=0.8.2,<0.9' | tail -n 1

Connection to watsonx.ai Runtime

Authenticate with the watsonx.ai Runtime service on IBM Cloud. You need to provide your Cloud API key and location.

Tip: Your Cloud API key can be generated by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name, click Create, then copy the created key and paste it below. You can also get a service-specific URL from the Endpoint URLs section of the watsonx.ai Runtime docs, and you can check your instance location in your watsonx.ai Runtime service instance details.

You can use the IBM Cloud CLI to retrieve the instance location.

ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance INSTANCE_NAME

NOTE: You can also get a service-specific apikey by going to the Service IDs section of the Cloud Console. From that page, click Create, then copy the created key and paste it in the following cell.

Action: Enter your api_key and location in the following cell.

api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'PASTE YOUR INSTANCE LOCATION HERE'

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    api_key=api_key,
    url='https://' + location + '.ml.cloud.ibm.com'
)

from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Working with spaces

You need to create a space that will be used for your work. If you do not have a space, you can use Deployment Spaces Dashboard to create one.

  • Click New Deployment Space

  • Create an empty space

  • Select Cloud Object Storage

  • Select watsonx.ai Runtime instance and press Create

  • Copy space_id and paste it below

Tip: You can also use the SDK to prepare the space for your work, as sketched below. More information can be found here.
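A minimal sketch of the SDK approach, assuming the spaces API of ibm-watsonx-ai; the name and CRN values below are placeholders you must replace with your own:

space_metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "fairness-demo-space",  # any name you like
    client.spaces.ConfigurationMetaNames.STORAGE: {
        "type": "bmcos_object_storage",
        "resource_crn": "PASTE YOUR COS CRN HERE"  # Cloud Object Storage instance CRN
    },
    client.spaces.ConfigurationMetaNames.COMPUTE: {
        "name": "PASTE YOUR RUNTIME INSTANCE NAME HERE",
        "crn": "PASTE YOUR RUNTIME INSTANCE CRN HERE"  # watsonx.ai Runtime instance CRN
    }
}
space_details = client.spaces.store(meta_props=space_metadata)
space_id = client.spaces.get_id(space_details)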

Action: Assign your space ID below.

space_id = 'PASTE YOUR SPACE ID HERE'
client.spaces.list(limit=10)
client.set.default_space(space_id)
'SUCCESS'

2. Load historical experiment

Initialize the AutoAI experiment with the watsonx.ai Runtime credentials and space ID.

from ibm_watsonx_ai.experiment import AutoAI

experiment = AutoAI(credentials, space_id=space_id)

List all previous AutoAI experiment runs named 'Credit Risk Prediction and bias detection - AutoAI', which were run in the sample notebook "Use AutoAI to train fair models".

NOTE: If you don't see any experiments listed below, please run the "Use AutoAI to train fair models" notebook first and then continue with the current notebook.

autoai_experiment_name = 'Credit Risk Prediction and bias detection - AutoAI'
historical_experiments = experiment.runs(filter=autoai_experiment_name).list()
historical_experiments

Load the last experiment run into the variable pipeline_optimizer.

run_id = historical_experiments.run_id[0]
pipeline_optimizer = experiment.runs.get_optimizer(run_id)

summary = pipeline_optimizer.summary()
summary

Get selected pipeline model

Download the pipeline model object from the AutoAI training job.

best_pipeline = pipeline_optimizer.get_pipeline()

Get the Credit Risk dataset from the experiment configuration.

X_train, X_holdout, y_train, y_holdout = pipeline_optimizer.get_data_connections()[0].read(with_holdout_split=True)
X_holdout.head()

3. Bias detection and mitigation

The fairness_info dictionary contains some fairness-related metadata. The favorable and unfavorable labels are values of the target column that indicate whether the loan was granted or denied. A protected attribute is a feature that partitions the population into groups whose outcomes should have parity. The credit-risk dataset has two protected attribute columns, sex and age. Each protected attribute has a monitored group and a reference group.

fairness_info = pipeline_optimizer.get_params()['fairness_info']
fairness_info
{'favorable_labels': ['No Risk'],
 'protected_attributes': [{'feature': 'Sex',
   'monitored_group': ['female'],
   'reference_group': ['male']},
  {'feature': 'Age',
   'monitored_group': [[18, 25]],
   'reference_group': [[26, 75]]}],
 'unfavorable_labels': ['Risk']}

Calculate fairness metrics

We will calculate some model metrics. Accuracy describes how accurate the model is on the dataset. Disparate impact is defined by comparing outcomes between the privileged and unprivileged groups, so it needs to check the protected attributes to determine group membership for the record at hand. The closer the disparate impact value is to 1, the less biased the model is. The third metric takes disparate impact into account along with accuracy; the best value of that score is 1.0.

import sklearn.metrics
from lale.lib.aif360 import disparate_impact, accuracy_and_disparate_impact

accuracy_scorer = sklearn.metrics.make_scorer(sklearn.metrics.accuracy_score)
print(f'accuracy {accuracy_scorer(best_pipeline, X_holdout, y_holdout):.1%}')

disparate_impact_scorer = disparate_impact(**fairness_info)
print(f'disparate impact {disparate_impact_scorer(best_pipeline, X_holdout, y_holdout):.2f}')

combined_scorer = accuracy_and_disparate_impact(**fairness_info)
print(f'accuracy and disparate impact metric {combined_scorer(best_pipeline, X_holdout, y_holdout):.2f}')
accuracy 81.2%
disparate impact 1.42
accuracy and disparate impact metric 0.76
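To make the disparate impact number concrete, here is an illustrative sketch (not part of the original notebook) that computes it by hand for the 'Sex' attribute alone. Note that the lale scorer combines both protected attributes, so this per-attribute value will generally differ from the one printed above.

import numpy as np

# Disparate impact = rate of favorable outcomes in the monitored group
# divided by the rate in the reference group.
y_pred = np.asarray(best_pipeline.predict(X_holdout))
favorable = y_pred == 'No Risk'
sex = X_holdout['Sex'].to_numpy()

monitored_rate = favorable[sex == 'female'].mean()  # monitored group
reference_rate = favorable[sex == 'male'].mean()    # reference group
print(f'manual disparate impact (Sex only): {monitored_rate / reference_rate:.2f}')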

Refinement with lale

In this section we will use the DisparateImpactRemover algorithm from the lale.lib.aif360 module to mitigate fairness problems. It modifies the features that are not protected attributes in such a way that it is hard to predict the protected attributes from them. The algorithm has a hyperparameter, repair_level, that we will tune with hyperparameter optimization.

from lale.lib.aif360 import DisparateImpactRemover
from lale.pretty_print import ipython_display

ipython_display(DisparateImpactRemover.hyperparam_schema('repair_level'))
{ "description": "Repair amount from 0 = none to 1 = full.", "type": "number", "minimum": 0, "maximum": 1, "default": 1, }

Pipeline decomposition and new definition

Start by removing the last step of the pipeline, i.e., the final estimator.

prefix = best_pipeline.remove_last().freeze_trainable()
prefix.visualize()
[Visualization of the pipeline prefix]

Initialize the DisparateImpactRemover with the fairness configuration and the pipeline prefix (the pipeline without its final estimator), then add a new final step consisting of a choice of two estimators. In this code, | is the "or" combinator (algorithmic choice); it defines a search space for another optimizer run.

from sklearn.linear_model import LogisticRegression as LR
from sklearn.ensemble import RandomForestClassifier as RF
from lale.operator_wrapper import wrap_imported_operators

wrap_imported_operators()

di_remover = DisparateImpactRemover(**fairness_info, preparation=prefix)
planned_fairer = di_remover >> (LR | RF)
planned_fairer.visualize()
[Visualization of the planned fairer pipeline]

Fairness metrics can be more unstable than accuracy because they depend not just on the distribution of labels, but also on the distribution of the privileged and unprivileged groups defined by the protected attributes. In AI automation, k-fold cross-validation helps reduce overfitting. To get more stable results, we stratify the k folds by both labels and groups with the FairStratifiedKFold class.

from lale.lib.aif360 import FairStratifiedKFold

fair_cv = FairStratifiedKFold(**fairness_info, n_splits=3)

Pipeline training

To automatically select the algorithm and tune its hyperparameters, we use the auto_configure method of the lale pipeline. The combined metric accuracy_and_disparate_impact is used as the scoring metric in the evaluation process.

from lale.lib.lale import Hyperopt

trained_fairer = planned_fairer.auto_configure(
    X_train, y_train, optimizer=Hyperopt, cv=fair_cv,
    max_evals=10, scoring=combined_scorer, best_score=1.0)
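To see which estimator and repair_level the optimizer settled on, you can pretty-print the trained pipeline; a quick check using lale's pretty_print:

# Show the winning configuration as Python code, including the chosen
# final estimator and its tuned hyperparameters.
trained_fairer.pretty_print(ipython_display=True)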

Results

Visualize the final pipeline and calculate its metrics.

trained_fairer.visualize()
[Visualization of the trained fairer pipeline]
print(f'accuracy {accuracy_scorer(trained_fairer, X_holdout, y_holdout):.1%}')
print(f'disparate impact {disparate_impact_scorer(trained_fairer, X_holdout, y_holdout):.2f}')
print(f'accuracy and disparate impact metric {combined_scorer(trained_fairer, X_holdout, y_holdout):.2f}')
accuracy 66.5%
disparate impact 1.00
accuracy and disparate impact metric 0.83

Summary: As a result of the described steps, we obtained a pipeline model that is unbiased according to the disparate impact value; however, the accuracy of the model decreased from 81.2% to 66.5%.

4. Deploy and score

In this section you will learn how to deploy and score a Lale pipeline model using a watsonx.ai Runtime instance.

Store the model

model_props = {
    client.repository.ModelMetaNames.NAME: "Fairer AutoAI model"
}
feature_vector = list(X_train.columns)

# Store the fairer pipeline trained above, exported as a scikit-learn pipeline.
published_model = client.repository.store_model(
    model=trained_fairer.export_to_sklearn_pipeline(),
    meta_props=model_props,
    training_id=run_id
)

published_model_id = client.repository.get_model_id(published_model)
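As a sanity check, you can list the stored models in the space to confirm the new asset appears; a sketch, assuming the repository listing call of ibm-watsonx-ai:

# The "Fairer AutoAI model" asset should appear near the top of the list.
client.repository.list_models(limit=5)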

Deployment creation

metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "Deployment of fairer model",
    client.deployments.ConfigurationMetaNames.ONLINE: {}
}

created_deployment = client.deployments.create(published_model_id, meta_props=metadata)
#######################################################################################

Synchronous deployment creation for uid: 'c9748e9e-e2e2-4a69-9f81-71e506d66a03' started

#######################################################################################

initializing
Note: online_url and serving_urls are deprecated and will be removed in a future release. Use inference instead.
..
ready

------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='ce74c4fc-2f81-4aaf-adbe-2fd9683db17e'
------------------------------------------------------------------------------------------------
deployment_id = client.deployments.get_id(created_deployment)

Deployment scoring

You need to pass scoring values as input data to the deployed model. Use the client.deployments.score() method to get predictions from the deployed model.

values = X_holdout.values

scoring_payload = {
    "input_data": [{
        'values': values[:5]
    }]
}
predictions = client.deployments.score(deployment_id, scoring_payload)
predictions
{'predictions': [{'fields': ['prediction', 'probability'],
   'values': [['No Risk', [0.6876538991928101, 0.31234613060951233]],
    ['Risk', [0.3356795907020569, 0.6643204092979431]],
    ['No Risk', [0.9083482027053833, 0.0916517898440361]],
    ['No Risk', [0.7679007649421692, 0.23209922015666962]],
    ['No Risk', [0.7194289565086365, 0.2805710434913635]]]}]}
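A short sketch to pull just the predicted labels out of the nested response and compare them with the corresponding holdout labels:

# Each row in 'values' is [prediction, probability]; take the label only.
predicted_labels = [row[0] for row in predictions['predictions'][0]['values']]
print('predicted:', predicted_labels)
print('actual:   ', list(y_holdout[:5]))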

5. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

please follow this sample notebook. For the assets created in this notebook specifically, a minimal sketch of the cleanup calls appears below.
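# Delete the deployment and the stored model created above.
client.deployments.delete(deployment_id)
client.repository.delete(published_model_id)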

6. Summary and next steps

You successfully completed this notebook!

Check out the documentation for the packages used:

Authors

Dorota Lączak, Software Engineer at watsonx.ai

Mateusz Szewczyk, Software Engineer at watsonx.ai

Copyright © 2022-2025 IBM. This notebook and its source code are released under the terms of the MIT License.