
Use Lale AIF360 DisparateImpactRemover to mitigate bias for credit risk AutoAI model

This notebook contains the steps and code to demonstrate support for AutoAI experiments in the watsonx.ai service. It introduces commands for bias detection and mitigation performed with the lale.lib.aif360 module.

Some familiarity with Python is helpful. This notebook uses Python 3.12.

NOTE: This notebook is a continuation of the sample notebook "Use AutoAI to train fair models".

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask for your account credentials

Install dependencies

Note: ibm-watsonx-ai documentation can be found here.

%pip install -U wget | tail -n 1
%pip install -U ibm-watsonx-ai | tail -n 1
%pip install "scikit-learn==1.6.1" | tail -n 1
%pip install -U "autoai-libs==3.0.3" | tail -n 1
%pip install -U "lale[fairness]" | tail -n 1
Successfully installed wget-3.2
Successfully installed ibm-watsonx-ai-1.3.20
Requirement already satisfied: threadpoolctl>=3.1.0 in /opt/user-env/pyt6/lib64/python3.12/site-packages (from scikit-learn==1.6.1) (3.6.0)
Successfully installed autoai-libs-3.0.3
Successfully installed BlackBoxAuditing-0.1.54 aif360-0.6.1 imbalanced-learn-0.13.0 klepto-0.2.7 liac-arff-2.5.0 mystic-0.4.4 pox-0.3.6 sklearn-compat-0.1.3

Define credentials

Authenticate the watsonx.ai Runtime service on IBM Cloud Pak for Data. You need to provide the admin's username and the platform URL.

username = "PASTE YOUR USERNAME HERE" url = "PASTE THE PLATFORM URL HERE"

Use the admin's api_key to authenticate watsonx.ai Runtime services:

import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=getpass.getpass("Enter your watsonx.ai API key and hit enter: "),
    url=url,
    instance_id="openshift",
    version="5.2",
)

Alternatively you can use the admin's password:

import getpass

from ibm_watsonx_ai import Credentials

if "credentials" not in locals() or not credentials.api_key:
    credentials = Credentials(
        username=username,
        password=getpass.getpass("Enter your watsonx.ai password and hit enter: "),
        url=url,
        instance_id="openshift",
        version="5.2",
    )
Enter your watsonx.ai password and hit enter: ········

Create APIClient instance

from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Working with spaces

First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to space Settings tab

  • Copy space_id and paste it below

Tip: You can also use the SDK to prepare the space for your work, as sketched below. More information can be found here.
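For example, a minimal sketch of creating a space with the SDK (the space name below is just a placeholder; some Cloud Pak for Data installations may require additional metadata):

space_details = client.spaces.store(
    meta_props={
        client.spaces.ConfigurationMetaNames.NAME: "autoai-fairness-space",  # placeholder name
    }
)
space_id = client.spaces.get_id(space_details)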

Action: Assign space ID below

space_id = "PASTE YOUR SPACE ID HERE"

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

To be able to interact with all resources available in watsonx.ai Runtime, you need to set the default space that you will be using.

client.set.default_space(space_id)
'SUCCESS'

2. Load historical experiment

Initialize the AutoAI experiment with your watsonx.ai credentials and space ID.

from ibm_watsonx_ai.experiment import AutoAI

experiment = AutoAI(credentials, space_id=space_id)

List all previous AutoAI experiment runs named 'Credit Risk Prediction and bias detection - AutoAI', which were run in the sample notebook "Use AutoAI to train fair models".

NOTE: If you don't have any experiments listed below, please run the "Use AutoAI to train fair models" notebook first and then continue running the current notebook.

autoai_experiment_name = "Credit Risk Prediction and bias detection - AutoAI"
historical_experiments = experiment.runs(filter=autoai_experiment_name).list()
historical_experiments

Load the last experiment run into the variable pipeline_optimizer.

run_id = historical_experiments.run_id[0]
pipeline_optimizer = experiment.runs.get_optimizer(run_id)

summary = pipeline_optimizer.summary()
summary

Get selected pipeline model

Download the pipeline model object from the AutoAI training job.

best_pipeline = pipeline_optimizer.get_pipeline()
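By default, get_pipeline returns the best pipeline. A sketch of fetching a specific pipeline from the leaderboard instead ("Pipeline_1" below is just a placeholder; take a real name from the summary above):

selected_pipeline = pipeline_optimizer.get_pipeline(
    pipeline_name="Pipeline_1",  # placeholder; use a name from the summary table
    astype=AutoAI.PipelineTypes.LALE,  # return a Lale pipeline rather than plain scikit-learn
)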

Get the Credit Risk dataset from the experiment configuration.

data_connections = pipeline_optimizer.get_data_connections()
X_train, X_holdout, y_train, y_holdout = data_connections[0].read(
    with_holdout_split=True
)
X_holdout.head()

3. Bias detection and mitigation

The fairness_info dictionary contains fairness-related metadata. The favorable and unfavorable labels are values of the target column that indicate whether the loan was granted or denied. A protected attribute is a feature that partitions the population into groups whose outcomes should have parity. The credit-risk dataset has two protected attribute columns, sex and age. Each protected attribute has a monitored group and a reference group.

fairness_info = pipeline_optimizer.get_params()["fairness_info"]
fairness_info
{'favorable_labels': ['No Risk'],
 'protected_attributes': [{'feature': 'Sex',
   'monitored_group': ['female'],
   'reference_group': ['male']},
  {'feature': 'Age',
   'monitored_group': [[18, 25]],
   'reference_group': [[26, 75]]}],
 'unfavorable_labels': ['Risk']}

Calculate fairness metrics

We will calculate some model metrics. Accuracy measures how often the model's predictions match the dataset labels. Disparate impact is defined by comparing outcomes between a privileged group and an unprivileged group, so it needs to check the protected attributes to determine group membership for the sample record at hand. The closer the disparate impact value is to 1, the less biased the model. The third metric combines disparate impact with accuracy; its best value is 1.0.

import sklearn.metrics

from lale.lib.aif360 import disparate_impact, accuracy_and_disparate_impact

accuracy_scorer = sklearn.metrics.make_scorer(sklearn.metrics.accuracy_score)
print(f"accuracy {accuracy_scorer(best_pipeline, X_holdout, y_holdout):.1%}")

disparate_impact_scorer = disparate_impact(**fairness_info)
print(
    f"disparate impact {disparate_impact_scorer(best_pipeline, X_holdout, y_holdout):.2f}"
)

combined_scorer = accuracy_and_disparate_impact(**fairness_info)
print(
    f"accuracy and disparate impact metric {combined_scorer(best_pipeline, X_holdout, y_holdout):.2f}"
)
accuracy 81.2%
disparate impact 1.43
accuracy and disparate impact metric 0.76

Refinement with lale

In this section we will use the DisparateImpactRemover algorithm from the lale.lib.aif360 module to mitigate fairness problems. It modifies the features that are not the protected attributes in such a way that it is hard to predict the protected attributes from them. This algorithm has a hyperparameter repair_level that we will tune with hyperparameter optimization.

from lale.lib.aif360 import DisparateImpactRemover
from lale.pretty_print import ipython_display

ipython_display(DisparateImpactRemover.hyperparam_schema("repair_level"))
{ "description": "Repair amount from 0 = none to 1 = full.", "type": "number", "minimum": 0, "maximum": 1, "default": 1, }

Pipeline decomposition and new definition

Start by removing the last step of the pipeline, i.e., the final estimator.

prefix = best_pipeline.remove_last().freeze_trainable()
prefix.export_to_sklearn_pipeline()

Initialize the DisparateImpactRemover with the fairness configuration and the pipeline prefix (without the final estimator), then add a new final step that consists of a choice of two estimators. In this code, | is the or combinator (algorithmic choice). It defines a search space for another optimizer run.

from sklearn.linear_model import LogisticRegression as LR
from sklearn.ensemble import RandomForestClassifier as RF

from lale.operator_wrapper import wrap_imported_operators

wrap_imported_operators()

di_remover = DisparateImpactRemover(**fairness_info, preparation=prefix)
planned_fairer = di_remover >> (LR | RF)

Fairness metrics can be more unstable than accuracy, because they depend not only on the distribution of labels but also on the distribution of the privileged and unprivileged groups defined by the protected attributes. In AI automation, k-fold cross validation helps reduce overfitting. To get more stable results, we will stratify the k folds by both labels and groups with the FairStratifiedKFold class.

from lale.lib.aif360 import FairStratifiedKFold

fair_cv = FairStratifiedKFold(**fairness_info, n_splits=3)

Pipeline training

To automatically select the algorithm and tune its hyperparameters, we use the auto_configure method of the Lale pipeline. The combined metric accuracy_and_disparate_impact is used as the scoring metric in the evaluation process.

from lale.lib.lale import Hyperopt

trained_fairer = planned_fairer.auto_configure(
    X_train,
    y_train,
    optimizer=Hyperopt,
    cv=fair_cv,
    max_evals=10,
    scoring=combined_scorer,
    best_score=1.0,
)
100%|██████████| 10/10 [00:26<00:00, 2.70s/trial, best loss: 0.1676332233430191]

Results

Visualize the final pipeline and calculate its metrics.

trained_fairer.export_to_sklearn_pipeline()
print(f"accuracy {accuracy_scorer(trained_fairer, X_holdout, y_holdout):.1%}") print( f"disparate impact {disparate_impact_scorer(trained_fairer, X_holdout, y_holdout):.2f}" ) print( f"accuracy and disparate impact metric {combined_scorer(trained_fairer, X_holdout, y_holdout):.2f}" )
accuracy 66.5%
disparate impact 1.00
accuracy and disparate impact metric 0.83

Summary: As a result of the described steps, we obtained a pipeline model that is unbiased according to the disparate impact value (1.00); however, the accuracy of the model decreased from 81.2% to 66.5%.

4. Deploy and score

In this section you will learn how to deploy and score the Lale pipeline model using a watsonx.ai Runtime instance.

Custom software specification

The created model is an AutoAI model refined with Lale. We will create a new software specification based on the default Python 3.12 environment, extended with the autoai-libs package.

base_sw_spec_id = client.software_specifications.get_id_by_name("runtime-25.1-py3.12")
print("ID of default Python 3.12 software specification is: ", base_sw_spec_id)
ID of default Python 3.12 software specification is: f47ae1c3-198e-5718-b59d-2ea471561e9e

Create the file defining the dependencies of the package extension.

with open("requirements.txt", "w") as file: file.write("autoai-libs")

The requirements.txt file describes the details of the package extension. Now you need to store the new package extension with APIClient.

meta_prop_pkg_extn = {
    client.package_extensions.ConfigurationMetaNames.NAME: "Scikit with autoai-libs",
    client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "Package extension for autoai-libs",
    client.package_extensions.ConfigurationMetaNames.TYPE: "requirements_txt",
}

pkg_extn_details = client.package_extensions.store(
    meta_props=meta_prop_pkg_extn, file_path="requirements.txt"
)
pkg_extn_id = client.package_extensions.get_id(pkg_extn_details)
pkg_extn_url = client.package_extensions.get_href(pkg_extn_details)
Creating package extensions
SUCCESS

Create a new software specification and add the created package extension to it.

meta_prop_sw_spec = {
    client.software_specifications.ConfigurationMetaNames.NAME: "Mitigated AutoAI based on scikit specification",
    client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "Software specification for scikit with autoai-libs",
    client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {
        "guid": base_sw_spec_id
    },
}

sw_spec_details = client.software_specifications.store(meta_props=meta_prop_sw_spec)
sw_spec_id = client.software_specifications.get_id(sw_spec_details)

status = client.software_specifications.add_package_extension(sw_spec_id, pkg_extn_id)
SUCCESS

You can get the details of the created software specification using client.software_specifications.get_details:
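client.software_specifications.get_details(sw_spec_id)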

Store the model

model_props = {
    client.repository.ModelMetaNames.NAME: "Fairer AutoAI model",
    client.repository.ModelMetaNames.TYPE: "scikit-learn_1.6",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
}
feature_vector = list(X_train.columns)
published_model = client.repository.store_model(
    model=trained_fairer.export_to_sklearn_pipeline(),  # export the refined, fairer pipeline to scikit-learn format
    meta_props=model_props,
    training_data=X_train.values,
    training_target=y_train.values,
    feature_names=feature_vector,
    label_column_names=["Risk"],
)
published_model_id = client.repository.get_model_id(published_model)

Deployment creation

metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "Deployment of fairer model",
    client.deployments.ConfigurationMetaNames.ONLINE: {},
}

created_deployment = client.deployments.create(published_model_id, meta_props=metadata)
######################################################################################
Synchronous deployment creation for id: '99333010-9333-465f-ab57-bc354ca03efa' started
######################################################################################

initializing
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
......
ready

-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='1e5d83bd-43c0-4c2b-95cf-dfd002687aed'
-----------------------------------------------------------------------------------------------
deployment_id = client.deployments.get_id(created_deployment)

Deployment scoring

You need to pass scoring values as input data to the deployed model. Use the client.deployments.score() method to get predictions from the deployed model.

values = X_holdout.values
scoring_payload = {"input_data": [{"values": values[:5]}]}
predictions = client.deployments.score(deployment_id, scoring_payload)
predictions
{'predictions': [{'fields': ['prediction', 'probability'],
   'values': [['No Risk', [0.7776124477386475, 0.22238758206367493]],
    ['No Risk', [0.7723912596702576, 0.22760875523090363]],
    ['No Risk', [0.7723912596702576, 0.22760875523090363]],
    ['No Risk', [0.7541248202323914, 0.24587517976760864]],
    ['No Risk', [0.7541248202323914, 0.24587517976760864]]]}]}
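As a variant, you can name the input columns explicitly in the payload, which makes the request easier to validate. A sketch, assuming the same holdout rows as above:

scoring_payload = {
    "input_data": [
        {
            "fields": list(X_holdout.columns),  # explicit feature names
            "values": X_holdout.values[:5].tolist(),
        }
    ]
}
predictions = client.deployments.score(deployment_id, scoring_payload)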

5. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

please follow this sample notebook. A minimal cleanup sketch for the assets created in this notebook is shown below.
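For example, assuming the IDs created in the previous sections are still in scope:

client.deployments.delete(deployment_id)          # remove the online deployment
client.repository.delete(published_model_id)      # remove the stored model
client.software_specifications.delete(sw_spec_id) # remove the custom software specification
client.package_extensions.delete(pkg_extn_id)     # remove the package extension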

6. Summary and next steps

You successfully completed this notebook!

Check out the documentation of the libraries used in this notebook.

Authors

Dorota Lączak, Software Engineer at watsonx.ai.

Copyright © 2022-2025 IBM. This notebook and its source code are released under the terms of the MIT License.