GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cpd5.2/notebooks/python_sdk/lifecycle-management/Use Python API to automate AutoAI deployment lifecycle.ipynb
Kernel: Python 3.12

Use Python API to automate AutoAI deployment lifecycle

This notebook contains the steps and code to demonstrate the AI lifecycle features of AutoAI models in the watsonx.ai service. It shows how to work with the ibm-watsonx-ai SDK, available in the PyPI repository, and introduces commands for training, persisting, and deploying a model, scoring it, updating the model, and redeploying it.

Some familiarity with Python is helpful. This notebook uses Python 3.12.

Learning goals

The learning goals of this notebook are:

  • List all deprecated and unsupported deployments.

  • Identify AutoAI models that need to be retrained.

  • Work with watsonx.ai experiments to re-train AutoAI models.

  • Persist an updated AutoAI model in watsonx.ai repository.

  • Redeploy model in-place.

  • Score sample records using client library.

Contents

This notebook contains the following parts:

  1. Setup

  2. Deployments state check

  3. Identification of model requiring retraining

  4. Experiment re-run

  5. Persist trained AutoAI model

  6. Redeploy and score new version of the model

  7. Clean up

  8. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, contact your Cloud Pak for Data administrator and ask for your account credentials.

Install dependencies

Note: ibm-watsonx-ai documentation can be found here.

%pip install -U wget | tail -n 1
%pip install "scikit-learn==1.6.1" | tail -n 1
%pip install -U autoai-libs | tail -n 1
%pip install -U ibm-watsonx-ai | tail -n 1
Successfully installed wget-3.2
Requirement already satisfied: threadpoolctl>=3.1.0 in /opt/user-env/pyt6/lib64/python3.12/site-packages (from scikit-learn==1.6.1) (3.6.0)
Successfully installed autoai-libs-3.0.3
Successfully installed ibm-watsonx-ai-1.3.20

Define credentials

Authenticate the watsonx.ai Runtime service on IBM Cloud Pak for Data. You need to provide the admin's username and the platform url.

username = "PASTE YOUR USERNAME HERE"
url = "PASTE THE PLATFORM URL HERE"

Use the admin's api_key to authenticate watsonx.ai Runtime services:

import getpass
from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=getpass.getpass("Enter your watsonx.ai API key and hit enter: "),
    url=url,
    instance_id="openshift",
    version="5.2",
)

Alternatively you can use the admin's password:

import getpass
from ibm_watsonx_ai import Credentials

if "credentials" not in locals() or not credentials.api_key:
    credentials = Credentials(
        username=username,
        password=getpass.getpass("Enter your watsonx.ai password and hit enter: "),
        url=url,
        instance_id="openshift",
        version="5.2",
    )
Enter your watsonx.ai password and hit enter: ········

Create APIClient instance

from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Working with spaces

First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to space Settings tab

  • Copy space_id and paste it below

Tip: You can also use SDK to prepare the space for your work. More information can be found here.

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

Extract all space IDs

space_ids = [
    space["metadata"]["id"]
    for space in client.spaces.get_details()["resources"]
]
space_ids[:5]
['dca8e5ae-bf7f-4903-b200-65bfa21fd4c5', 'ee2821f4-cb2a-412a-b093-0106204a3e8e', 'f6d27946-2381-4abe-a620-7e4b1afe8a16', '3813019e-e0d3-4b96-92ac-04a30af93744', '5a9af126-1850-4b69-a1ed-cf1d66e73176']

2. Deployments state check

Iterate over spaces and search for deprecated and unsupported deployments. Next, identify models requiring re-training.

from ibm_watsonx_ai.lifecycle import SpecStates

for space_id in space_ids[:5]:
    client.set.default_space(space_id)
    print(f"****** SPACE {space_id} ******")
    print(client.deployments.get_details(spec_state=SpecStates.DEPRECATED))
    print(client.deployments.get_details(spec_state=SpecStates.UNSUPPORTED))
****** SPACE dca8e5ae-bf7f-4903-b200-65bfa21fd4c5 ******
{'resources': []}
{'resources': []}
****** SPACE ee2821f4-cb2a-412a-b093-0106204a3e8e ******
{'resources': []}
{'resources': []}
****** SPACE f6d27946-2381-4abe-a620-7e4b1afe8a16 ******
{'resources': []}
{'resources': []}
****** SPACE 3813019e-e0d3-4b96-92ac-04a30af93744 ******
{'resources': []}
{'resources': []}
****** SPACE 5a9af126-1850-4b69-a1ed-cf1d66e73176 ******
{'resources': []}
{'resources': []}
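The loop above prints the raw details dictionaries; in practice you usually want just the IDs of deployments that need attention. A minimal sketch of that post-processing is below. The helper name `collect_flagged` is hypothetical; the `{"resources": [...]}` shape matches the output shown above.

```python
# Sketch: map space_id -> deployment ids flagged as deprecated/unsupported.
# The input mirrors what client.deployments.get_details(spec_state=...) returns.
def collect_flagged(details_by_space):
    flagged = {}
    for space_id, details in details_by_space.items():
        ids = [d["metadata"]["id"] for d in details.get("resources", [])]
        if ids:
            flagged[space_id] = ids
    return flagged

sample = {
    "space-a": {"resources": []},
    "space-b": {"resources": [{"metadata": {"id": "dep-1"}}]},
}
print(collect_flagged(sample))  # {'space-b': ['dep-1']}
```

Spaces with no flagged deployments are omitted from the result, so an empty dictionary means everything is up to date.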

You can also list the deployments under a particular space. The output contains the SPEC_STATE and SPEC_REPLACEMENT columns. Set the working space.

deployment_space_id = "PASTE YOUR SPACE ID HERE"
client.set.default_space(deployment_space_id)
'SUCCESS'

List deployments under this space.

client.deployments.list()

3. Identification of model requiring retraining

Pick the deployment of the AutoAI model you wish to retrain.

Hint: You can also do that programmatically in the loop over spaces (the Check the state of your deployments cell).

Hint: You can also use the software_specification information (model details) to identify models and deployments that are not yet deprecated but can be retrained (an updated software specification is available).

deployment_id = "PASTE YOUR DEPLOYMENT ID HERE"
deployment_details = client.deployments.get_details(deployment_id)
deployed_model_id = deployment_details["entity"]["asset"]["id"]
deployed_model_id
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
'269f7f9a-920c-4ef4-b021-ef5786ae709a'

Extract the deployed model's details (including the pipeline information).

import json

deployed_model_details = client.repository.get_model_details(deployed_model_id)
deployed_pipeline_id = deployed_model_details["entity"]["pipeline"]["id"]
deployed_pipeline_details = client.repository.get_details(deployed_pipeline_id)
experiment_params = deployed_pipeline_details["entity"]["document"]["pipelines"][0]["nodes"][0]["parameters"]
optimization_params = experiment_params["optimization"]
print("Experiment parameters:", json.dumps(experiment_params, indent=2))
print("Optimization parameters:", json.dumps(optimization_params, indent=2))
Experiment parameters: {
  "drop_duplicates": true,
  "encoding": "utf-8",
  "input_file_separator": ",",
  "optimization": {
    "compute_pipeline_notebooks_flag": true,
    "daub_adaptive_subsampling_max_mem_usage": 9000000000,
    "label": "Risk",
    "learning_type": "binary",
    "retrain_on_holdout": true,
    "run_cognito_flag": true,
    "scorer_for_ranking": "roc_auc"
  },
  "output_logs": true,
  "stage_flag": true
}
Optimization parameters: {
  "compute_pipeline_notebooks_flag": true,
  "daub_adaptive_subsampling_max_mem_usage": 9000000000,
  "label": "Risk",
  "learning_type": "binary",
  "retrain_on_holdout": true,
  "run_cognito_flag": true,
  "scorer_for_ranking": "roc_auc"
}

Find the AutoAI experiment runs matching the extracted pipeline

Extract the project ID where the training took place.

Tip: For more information about using AutoAI with projects, see this sample notebook.

Note: If the training took place in a space, please update accordingly.

try:
    training_project_id = deployed_pipeline_details["metadata"]["tags"][0].split(".")[1]
except LookupError:
    training_project_id = input("Please enter your project_id (hit enter): ")
Please enter your project_id (hit enter): 9cf5ee36-da98-4856-be9f-a04df0d43f7b
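The cell above assumes the pipeline's first tag has the form `<prefix>.<project_id>`. A small standalone sketch of that lookup (the helper name `project_id_from_tags` is hypothetical) shows the happy path and the fallback:

```python
# Sketch: extract the training project id from a pipeline's first tag,
# assumed to look like "<prefix>.<project_id>". Returns None when the
# tag (or the tags list) is missing; KeyError and IndexError are both
# subclasses of LookupError, so one except clause covers both.
def project_id_from_tags(pipeline_details):
    try:
        return pipeline_details["metadata"]["tags"][0].split(".")[1]
    except LookupError:
        return None

details = {"metadata": {"tags": ["dep.9cf5ee36-da98-4856-be9f-a04df0d43f7b"]}}
print(project_id_from_tags(details))        # 9cf5ee36-da98-4856-be9f-a04df0d43f7b
print(project_id_from_tags({"metadata": {}}))  # None
```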

Extract AutoAI experiment training_id

The training_id is available in the model's details.

run_id = deployed_model_details["entity"]["training_id"]
print("AutoAI experiment training_id found in model details:", run_id)
AutoAI experiment training_id found in model details: 69e7991d-eacb-4f51-bbd1-631fa91d17bd

4. Experiment re-run

Set the training project_id (where data asset resides) to retrain AutoAI models.

from ibm_watsonx_ai.experiment import AutoAI

experiment = AutoAI(credentials, project_id=training_project_id)
optimizer = experiment.runs.get_optimizer(run_id=run_id)
from ibm_watsonx_ai.utils.autoai.errors import TestDataNotPresent

training_data_reference = optimizer.get_data_connections()
try:
    test_data_reference = optimizer.get_test_data_connections()
except TestDataNotPresent:
    test_data_reference = None
User defined (test / holdout) data is not present for this AutoAI experiment. Reason: User specified test data was not present in this experiment. Try to use 'with_holdout_split' parameter for original training_data_references to retrieve test data.
train_details = optimizer.fit(
    training_data_references=training_data_reference,
    test_data_references=test_data_reference,
)
Training job edc7616d-a2c9-465a-9dab-d01574fba1a5 completed: 100%|████████| [02:18<00:00, 1.39s/it]

Explore experiment's results

Connect to finished experiment and preview the results.

optimizer.summary()

Evaluate the best model locally

Load the model for test purposes.

Hint: The best model is returned automatically if no pipeline_name provided.

pipeline_name = "Pipeline_4"
pipeline_model = optimizer.get_pipeline(pipeline_name=pipeline_name, astype="sklearn")
pipeline_model

This cell constructs the scorer based on the experiment metadata.

from sklearn.metrics import get_scorer

scorer = get_scorer(optimization_params["scorer_for_ranking"])

Read the train and holdout data.

Hint: You can also use external test dataset.

connection = optimizer.get_data_connections()[0]
train_X, test_X, train_y, test_y = connection.read(with_holdout_split=True)

Calculate the score

score = scorer(pipeline_model, test_X.values, test_y.values)
print(score)
0.845104970781329
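The object returned by scikit-learn's get_scorer is a callable with the signature scorer(estimator, X, y), which is exactly how the pipeline model is scored above. A self-contained toy illustration (using "accuracy" rather than the experiment's "roc_auc", since ROC AUC additionally requires probability estimates):

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import get_scorer

# Toy data: the majority-class baseline predicts 0 for every row.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 0, 1])
baseline = DummyClassifier(strategy="most_frequent").fit(X, y)

# get_scorer returns a callable scorer(estimator, X, y); it handles calling
# predict (or predict_proba / decision_function) internally.
scorer = get_scorer("accuracy")
print(scorer(baseline, X, y))  # 0.75
```

Because the scorer name is read from `optimization_params["scorer_for_ranking"]`, the retrained pipelines are evaluated with the same metric the original experiment used for ranking.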

5. Store the model in the repository

Provide pipeline_name and training_id.

client.set.default_project(training_project_id)
Unsetting the space_id ...
'SUCCESS'
model_metadata = {
    client.repository.ModelMetaNames.NAME: "{0} - {1} - {2}".format(
        deployed_pipeline_details["metadata"]["name"],
        pipeline_name,
        pipeline_model.get_params()["steps"][-1][0],
    )
}
published_model = client.repository.store_model(
    model=pipeline_name,
    meta_props=model_metadata,
    training_id=train_details["metadata"]["id"],
)
updated_model_id = client.repository.get_model_id(published_model)
print("Re-trained model id", updated_model_id)
Re-trained model id 8683b3a7-bb11-46da-b46f-52355acb8c18

List stored models.

client.repository.list_models()

6. Redeploy and score new version of the model

In this section, you'll learn how to redeploy a new version of the model by using the watsonx.ai client.

Hint: As a best practice, consider using a test space before moving to production.

promote(asset_id: str, source_project_id: str, target_space_id: str, rev_id: str = None)

Promote model to deployment space

promoted_model_id = client.spaces.promote(
    asset_id=updated_model_id,
    source_project_id=training_project_id,
    target_space_id=deployment_space_id,
)

Check current deployment details before update.

client.set.default_space(deployment_space_id)
print(json.dumps(client.deployments.get_details(deployment_id), indent=2))

Update the deployment with new model

Note: The update is asynchronous.

metadata = {
    client.deployments.ConfigurationMetaNames.ASSET: {
        "id": promoted_model_id,
    }
}
updated_deployment = client.deployments.update(deployment_id, changes=metadata)
Since ASSET is patched, deployment need to be restarted.
########################################################################
Deployment update for id: '9c6943cd-e804-43b6-9d49-3c094e92146b' started
########################################################################
updating.......
ready
---------------------------------------------------------------------------------------------
Successfully finished deployment update, deployment_id='9c6943cd-e804-43b6-9d49-3c094e92146b'
---------------------------------------------------------------------------------------------

Wait for the deployment update:

import time

status = None
while status not in ("ready", "failed"):
    time.sleep(2)
    deployment_details = client.deployments.get_details(deployment_id)
    status = deployment_details["entity"]["status"].get("state")
    print(".", status, end=" ")
print("\nDeployment update finished with status: ", status)
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
. ready
Deployment update finished with status:  ready

Get updated deployment details

print(json.dumps(client.deployments.get_details(deployment_id), indent=2))

Score updated model

Create sample payload and score the deployed model.

scoring_payload = {"input_data": [{"values": test_X.values[:3].tolist()}]}
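The payload sent to the scoring endpoint must be JSON-serializable, which is why the DataFrame rows are converted to plain Python lists. A minimal standalone sketch of the payload shape (the row values here are made up for illustration):

```python
import json

# Hypothetical rows standing in for test_X.values[:3].tolist():
# plain lists serialize cleanly, unlike numpy arrays or DataFrames.
rows = [[0, "no_checking", 13], [1, "less_0", 24], [2, "0_to_200", 9]]
scoring_payload = {"input_data": [{"values": rows}]}

# Round-trip through JSON to confirm the payload is wire-ready.
encoded = json.dumps(scoring_payload)
print(len(json.loads(encoded)["input_data"][0]["values"]))  # 3
```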

Use client.deployments.score() method to run scoring.

predictions = client.deployments.score(deployment_id, scoring_payload)
print(json.dumps(predictions, indent=2))
{ "predictions": [ { "fields": [ "prediction", "probability" ], "values": [ [ "No Risk", [ 0.5360556244850159, 0.46394437551498413 ] ], [ "No Risk", [ 0.9565750956535339, 0.04342489689588547 ] ], [ "No Risk", [ 0.57205730676651, 0.42794269323349 ] ] ] } ] }

7. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • models

  • deployments

please follow this sample notebook.

8. Summary and next steps

You successfully completed this notebook! You learned how to use scikit-learn as well as watsonx.ai for model creation and deployment.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Lukasz Cmielowski, PhD, is a Senior Technical Staff Member at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge.

Dorota Lączak, Python Software Developer in Watson Machine Learning AutoAI at IBM

Copyright © 2023-2025 IBM. This notebook and its source code are released under the terms of the MIT License.