GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cloud/notebooks/python_sdk/lifecycle-management/Use python API to automate AutoAI deployment lifecycle.ipynb
Kernel: Python 3 (ipykernel)

Re-train and re-deploy AutoAI pipelines with ibm-watsonx-ai

This notebook contains the steps and code to demonstrate support of AI lifecycle features for AutoAI models in the watsonx.ai Runtime service. It shows how to work with the ibm-watsonx-ai library, available in the PyPI repository, and introduces commands for training, persisting, and deploying a model, scoring it, updating the model, and redeploying it.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goals

The learning goals of this notebook are:

  • List all deprecated and unsupported deployments.

  • Identify AutoAI models that need to be retrained.

  • Work with watsonx.ai Runtime experiments to re-train AutoAI models.

  • Persist an updated AutoAI model in watsonx.ai Runtime repository.

  • Redeploy model in-place.

  • Score sample records using client library.

Contents

This notebook contains the following parts:

  1. Setup

  2. Deployments state check

  3. Identification of model requiring retraining

  4. Experiment re-run

  5. Persist trained AutoAI model

  6. Redeploy and score new version of the model

  7. Clean up

  8. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Create a watsonx.ai Runtime Service instance (a free plan is offered and information about how to create the instance can be found here).

  • Create a Cloud Object Storage (COS) instance (a lite plan is offered and information about how to order storage can be found here).
    Note: When using Watson Studio, you already have a COS instance associated with the project you are running the notebook in.

Install and import the ibm-watsonx-ai package and its dependencies

Note: ibm-watsonx-ai documentation can be found here.

!pip install -U ibm-watsonx-ai | tail -n 1

Connection to watsonx.ai Runtime

Authenticate the watsonx.ai Runtime service on IBM Cloud. You need to provide your Cloud API key and location.

Tip: Your Cloud API key can be generated by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name and click Create, then copy the created key and paste it below. You can also get a service specific url by going to the Endpoint URLs section of the watsonx.ai Runtime docs. You can check your instance location in your watsonx.ai Runtime Service instance details.

You can use IBM Cloud CLI to retrieve the instance location.

ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance INSTANCE_NAME

NOTE: You can also get a service specific apikey by going to the Service IDs section of the Cloud Console. From that page, click Create, and then copy the created key and paste it in the following cell.

Action: Enter your api_key and location in the following cell.

api_key = 'PUT_YOUR_KEY_HERE'
location = 'us-south'

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    api_key=api_key,
    url='https://' + location + '.ml.cloud.ibm.com'
)

from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Working with spaces

You need to create a space that will be used for your work. If you do not have a space, you can use Deployment Spaces Dashboard to create one.

  • Click New Deployment Space

  • Create an empty space

  • Select Cloud Object Storage

  • Select watsonx.ai Runtime instance and press Create

  • Copy space_id and paste it below

Tip: You can also use SDK to prepare the space for your work. More information can be found here.

Action: assign space ID below

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

Extract all space ids.

spaces_ids = [space['metadata']['id'] for space in client.spaces.get_details()['resources']]

2. Deployments state check

Iterate over spaces and search for deprecated and unsupported deployments. Next, identify models requiring re-training.

from ibm_watsonx_ai.lifecycle import SpecStates

for space_id in spaces_ids:
    client.set.default_space(space_id)
    print('****** SPACE', space_id, '******')
    print(client.deployments.get_details(spec_state=SpecStates.DEPRECATED))
    print(client.deployments.get_details(spec_state=SpecStates.UNSUPPORTED))

You can also list deployments under a particular space. The output contains SPEC_STATE and SPEC_REPLACEMENT. Set the working space.

deployment_space_id = 'PROVIDE SPACE_ID HERE'
client.set.default_space(deployment_space_id)
'SUCCESS'

List deployments under this space.

client.deployments.list()

3. Identification of model requiring retraining

Pick the deployment of the AutoAI model you wish to retrain.

Hint: You can also do that programmatically in the loop over spaces (the Check the state of your deployments cell).

Hint: You can also use software_specification information (model details) to identify models and deployments that are not yet deprecated but can be retrained (updated software specification is available).

deployment_id = 'PROVIDE DEPLOYMENT_ID HERE'
deployed_model_id = client.deployments.get_details(deployment_id)['entity']['asset']['id']
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.

Extract the deployed model's details (including the pipeline information).

import json

deployed_model_details = client.repository.get_model_details(deployed_model_id)
deployed_pipeline_id = deployed_model_details['entity']['pipeline']['id']
deployed_pipeline_details = client.repository.get_details(deployed_pipeline_id)

experiment_params = deployed_pipeline_details['entity']['document']['pipelines'][0]['nodes'][0]['parameters']
optimization_params = experiment_params['optimization']

print('Experiment parameters:', json.dumps(experiment_params, indent=3))
print('Optimization parameters:', json.dumps(optimization_params, indent=3))
Experiment parameters: { "output_logs": true, "input_file_separator": ",", "stage_flag": true, "optimization": { "compute_pipeline_notebooks_flag": true, "cv_num_folds": 3.0, "daub_adaptive_subsampling_max_mem_usage": 15000000000.0, "holdout_param": 0.1, "label": "Risk", "learning_type": "classification", "max_num_daub_ensembles": 2.0, "positive_label": "No Risk", "run_cognito_flag": true, "scorer_for_ranking": "accuracy" }, "n_parallel_data_connections": 4.0, "one_vs_all_file": true } Optimization parameters: { "compute_pipeline_notebooks_flag": true, "cv_num_folds": 3.0, "daub_adaptive_subsampling_max_mem_usage": 15000000000.0, "holdout_param": 0.1, "label": "Risk", "learning_type": "classification", "max_num_daub_ensembles": 2.0, "positive_label": "No Risk", "run_cognito_flag": true, "scorer_for_ranking": "accuracy" }

Find the AutoAI experiment runs matching the extracted pipeline

Extract the project_id where the training took place.

Note: If the training took place in a space, please update accordingly.

training_project_id = deployed_pipeline_details['metadata']['tags'][0].split('.')[1]
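The extraction above assumes the first pipeline tag has the form `<prefix>.<project_id>`. A small helper (hypothetical name `project_id_from_tag`) makes that assumption explicit and fails loudly when the tag does not match, rather than silently returning a wrong fragment:

```python
def project_id_from_tag(tag: str) -> str:
    """Extract the project id from a pipeline tag of the form '<prefix>.<project_id>'.

    Assumes exactly one '.' separates the prefix from the id; raises otherwise.
    """
    parts = tag.split('.')
    if len(parts) != 2 or not parts[1]:
        raise ValueError(f"Unexpected tag format: {tag!r}")
    return parts[1]
```
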

List all training runs matching this pipeline name.

from ibm_watsonx_ai.experiment import AutoAI

experiment = AutoAI(credentials, project_id=training_project_id)
runs_df = experiment.runs(filter=deployed_pipeline_details['metadata']['name']).list()
runs_df.head()

Extract AutoAI experiment training_id

Check if the training_id is available in the model's details.

If not, we will use the latest run of the AutoAI experiment matching the pipeline name.

Hint: It is also possible to extract the run_id by parsing the model's details (location property):

deployed_model_details['entity']['metrics'][0]['context']['intermediate_model']['location']['model'].split('wml_data/')[1].split('/data')[0]
try:
    run_id = deployed_model_details["entity"]["training_id"]
    print('AutoAI experiment training_id found in model details:', run_id)
except KeyError:
    run_id = runs_df['run_id'].values[0]
    print('AutoAI experiment training_id extracted from historical runs:', run_id)
AutoAI experiment training_id extracted from historical runs: 1736dbe2-e0d0-4eb5-ae88-34bb49934b9a
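The location-based fallback from the hint above can be wrapped in a small helper (hypothetical name `run_id_from_location`) that extracts the run id from a path assumed to contain a `wml_data/<run_id>/data` segment:

```python
def run_id_from_location(model_path: str) -> str:
    """Extract the AutoAI run id from a model location path.

    Assumes the path contains a 'wml_data/<run_id>/data' segment, e.g.
    '.../wml_data/1736dbe2-.../data/automl/model.pickle'.
    """
    try:
        return model_path.split('wml_data/')[1].split('/data')[0]
    except IndexError:
        raise ValueError(f"No 'wml_data/<run_id>/data' segment in: {model_path!r}")
```
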

Get the training definition using the pipeline_id linked with the experiment run.

NOTE: We need to extract the pipeline_id from the same context as the experiment run (the project). The one linked with the deployed model has been promoted to the space and cannot be used for re-training purposes (its scope differs from the training data reference).

pipeline_id = client.training.get_details(run_id)['entity']['pipeline']['id']
print('AutoAI experiment definition id:', pipeline_id)
AutoAI experiment definition id: 17edcc72-6b4f-4e83-b67a-9d71a57ce3eb

4. Experiment re-run

Set the training project_id (where data asset resides) to retrain AutoAI models.

client.set.default_project(training_project_id)

metadata = {
    client.training.ConfigurationMetaNames.NAME: deployed_pipeline_details['metadata']['name'],
    client.training.ConfigurationMetaNames.DESCRIPTION: 'Re-train of ' + run_id,
    client.training.ConfigurationMetaNames.PIPELINE: {"id": pipeline_id},
    client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: deployed_model_details['entity']['training_data_references'],
    client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {'location': {'path': '.'}, 'type': 'container'},
}

train_details = client.training.run(metadata, asynchronous=False)
Unsetting the space_id ... ############################################## Running '638c4f4c-e6d5-4390-b8c7-5def49255ba4' ############################################## pending........ running..................... completed Training of '638c4f4c-e6d5-4390-b8c7-5def49255ba4' finished successfully.

Explore experiment's results

Connect to finished experiment and preview the results.

optimizer = experiment.runs.get_optimizer(run_id=train_details['metadata']['id'])
optimizer.summary()

Evaluate the best model locally

Load the model for test purposes.

Hint: The best model is returned automatically if no pipeline_name is provided.

pipeline_name = 'Pipeline_4'
pipeline_model = optimizer.get_pipeline(pipeline_name=pipeline_name, astype='sklearn')
pipeline_model

This cell constructs the scorer based on the experiment metadata.

from sklearn.metrics import get_scorer

scorer = get_scorer(optimization_params['scorer_for_ranking'])

Read the train and holdout data.

Hint: You can also use an external test dataset.

connection = optimizer.get_data_connections()[0]
train_X, test_X, train_y, test_y = connection.read(with_holdout_split=True)

Calculate the score

score = scorer(pipeline_model, test_X.values, test_y.values)
print(score)
0.7203219315895373

5. Store the model in repository

Provide pipeline_name and training_id.

model_metadata = {
    client.repository.ModelMetaNames.NAME: "{0} - {1} - {2}".format(
        deployed_pipeline_details['metadata']['name'],
        pipeline_name,
        pipeline_model.get_params()['steps'][-1][0]
    )
}

published_model = client.repository.store_model(
    model=pipeline_name,
    meta_props=model_metadata,
    training_id=train_details['metadata']['id']
)
updated_model_id = client.repository.get_model_id(published_model)
print('Re-trained model id', updated_model_id)
Re-trained model id ace63907-95e8-44d8-82c4-953e6d02339f

List stored models.

client.repository.list_models()

6. Redeploy and score new version of the model

In this section, you'll learn how to redeploy a new version of the model by using the watsonx.ai client.

Hint: As a best practice, please consider using a test space before moving to production.

promote(asset_id: str, source_project_id: str, target_space_id: str, rev_id: str = None)

Promote model to deployment space

promoted_model_id = client.spaces.promote(
    asset_id=updated_model_id,
    source_project_id=training_project_id,
    target_space_id=deployment_space_id
)

Check current deployment details before update.

client.set.default_space(deployment_space_id)
print(json.dumps(client.deployments.get_details(deployment_id), indent=3))
Unsetting the project_id ... Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead. { "entity": { "asset": { "id": "ce9c15b6-1480-4ec8-a60c-de7991463cea" }, "custom": {}, "deployed_asset_type": "model", "hybrid_pipeline_hardware_specs": [ { "hardware_spec": { "id": "e7ed1d6c-2e89-42d7-aed5-863b972c1d2b", "name": "S", "num_nodes": 2 }, "node_runtime_id": "auto_ai.kb" } ], "name": "Fairness - credit risk - P4 Snap Random Forest Classifier", "online": {}, "space_id": "efb29ab8-0862-4f0b-aeac-c523ee3d9258", "status": { "message": { "level": "warning", "text": "Successfully patched the asset." }, "online_url": { "url": "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/cd927dbd-b8e5-49f3-80f5-9965fb030348/predictions" }, "serving_urls": [ "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/cd927dbd-b8e5-49f3-80f5-9965fb030348/predictions" ], "state": "ready" } }, "metadata": { "created_at": "2023-02-15T10:31:21.907Z", "id": "cd927dbd-b8e5-49f3-80f5-9965fb030348", "modified_at": "2023-02-15T14:54:55.352Z", "name": "Fairness - credit risk - P4 Snap Random Forest Classifier", "owner": "IBMid-270002BE4G", "space_id": "efb29ab8-0862-4f0b-aeac-c523ee3d9258" }, "system": { "warnings": [ { "id": "Deprecated", "message": "online_url is deprecated and will be removed in a future release. Use serving_urls instead." } ] } }

Update the deployment with new model

Note: The update is asynchronous.

metadata = {
    client.deployments.ConfigurationMetaNames.ASSET: {
        "id": promoted_model_id,
    }
}

updated_deployment = client.deployments.update(deployment_id, changes=metadata)
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead. Since ASSET is patched, deployment with new asset id/rev is being started. Monitor the status using deployments.get_details(deployment_uid) api

Wait for the deployment update:

import time

status = None
while status not in ['ready', 'failed']:
    time.sleep(2)
    deployment_details = client.deployments.get_details(deployment_id)
    status = deployment_details['entity']['status'].get('state')
    print('.', status, end=' ')

print("\nDeployment update finished with status: ", status)
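The polling loop above runs indefinitely if the deployment never reaches a terminal state. A variant with a timeout, sketched as a hypothetical helper that takes an injected status getter (so it works with any zero-argument callable, e.g. a lambda wrapping client.deployments.get_details):

```python
import time

def wait_for_state(get_status, terminal=('ready', 'failed'),
                   poll_seconds=2.0, timeout_seconds=600.0):
    """Poll `get_status()` until it returns a terminal state or time runs out.

    `get_status` is any zero-argument callable returning the current state
    string; raises TimeoutError if no terminal state is seen in time.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = get_status()
        if status in terminal:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"No terminal state within {timeout_seconds}s")
```
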

Get updated deployment details

print(json.dumps(client.deployments.get_details(deployment_id), indent=2))
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead. { "entity": { "asset": { "id": "3b7f0c67-db45-4463-86c6-1dc4e9686787" }, "custom": {}, "deployed_asset_type": "model", "hybrid_pipeline_hardware_specs": [ { "hardware_spec": { "id": "e7ed1d6c-2e89-42d7-aed5-863b972c1d2b", "name": "S", "num_nodes": 2 }, "node_runtime_id": "auto_ai.kb" } ], "name": "Fairness - credit risk - P4 Snap Random Forest Classifier", "online": {}, "space_id": "efb29ab8-0862-4f0b-aeac-c523ee3d9258", "status": { "message": { "level": "warning", "text": "Successfully patched the asset." }, "online_url": { "url": "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/cd927dbd-b8e5-49f3-80f5-9965fb030348/predictions" }, "serving_urls": [ "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/cd927dbd-b8e5-49f3-80f5-9965fb030348/predictions" ], "state": "ready" } }, "metadata": { "created_at": "2023-02-15T10:31:21.907Z", "id": "cd927dbd-b8e5-49f3-80f5-9965fb030348", "modified_at": "2023-02-16T10:29:51.188Z", "name": "Fairness - credit risk - P4 Snap Random Forest Classifier", "owner": "IBMid-270002BE4G", "space_id": "efb29ab8-0862-4f0b-aeac-c523ee3d9258" }, "system": { "warnings": [ { "id": "Deprecated", "message": "online_url is deprecated and will be removed in a future release. Use serving_urls instead." } ] } }

Score updated model

Create sample payload and score the deployed model.

scoring_payload = {"input_data": [{"values": test_X[:3]}]}
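The scoring endpoint ultimately receives JSON, so DataFrame rows should be converted to plain lists before sending. A small sketch (hypothetical helper name; the optional "fields" entry can be dropped if your deployment does not expect it):

```python
import pandas as pd

def build_scoring_payload(df: pd.DataFrame, n: int = 3) -> dict:
    """Build an online-scoring payload from the first `n` rows of a DataFrame.

    Converts values with .tolist() so the payload is JSON-serializable,
    and includes the column names as "fields" for clarity.
    """
    return {
        "input_data": [{
            "fields": df.columns.tolist(),
            "values": df.head(n).values.tolist(),
        }]
    }
```
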

Use client.deployments.score() method to run scoring.

predictions = client.deployments.score(deployment_id, scoring_payload)
print(json.dumps(predictions, indent=2))
{ "predictions": [ { "fields": [ "prediction", "probability" ], "values": [ [ "No Risk", [ 0.5881121028804668, 0.4118878971195331 ] ], [ "Risk", [ 0.029820769605171344, 0.9701792303948288 ] ], [ "Risk", [ 0.29487495440414097, 0.7051250455958591 ] ] ] } ] }

7. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • models

  • deployments

please follow this sample notebook.

8. Summary and next steps

You successfully completed this notebook! You learned how to use scikit-learn machine learning as well as watsonx.ai Runtime for model creation and deployment. Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Lukasz Cmielowski, PhD, is a Senior Technical Staff Member at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge.

Copyright © 2023-2025 IBM. This notebook and its source code are released under the terms of the MIT License.