GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cpd4.7/notebooks/python_sdk/lifecycle-management/Use python API to automate AutoAI deployment lifecycle.ipynb
Kernel: Python 3 (ipykernel)

Re-train and re-deploy AutoAI pipelines with ibm-watson-machine-learning

This notebook contains the steps and code to demonstrate the AI Lifecycle features for AutoAI models in the Watson Machine Learning service. It works with the ibm-watson-machine-learning library available in the PyPI repository, and introduces commands for training, persisting and deploying a model, scoring it, updating the model, and redeploying it.

Some familiarity with Python is helpful. This notebook uses Python 3.10.

Learning goals

The learning goals of this notebook are:

  • List all deprecated and unsupported deployments.

  • Identify AutoAI models that need to be retrained.

  • Work with Watson Machine Learning experiments to re-train AutoAI models.

  • Persist an updated AutoAI model in Watson Machine Learning repository.

  • Redeploy model in-place.

  • Score sample records using client library.

Contents

This notebook contains the following parts:

  1. Setup

  2. Deployments state check

  3. Identification of model requiring retraining

  4. Experiment re-run

  5. Persist trained AutoAI model

  6. Redeploy and score new version of the model

  7. Clean up

  8. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, contact your Cloud Pak for Data administrator and ask for your account credentials.

Connection to WML

Authenticate to the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform url, your username, and your api_key.

username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API_KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'

wml_credentials = {
    "username": username,
    "apikey": api_key,
    "url": url,
    "instance_id": 'openshift',
    "version": '4.7'
}

Alternatively you can use username and password to authenticate WML services.

wml_credentials = {
    "username": ***,
    "password": ***,
    "url": ***,
    "instance_id": 'openshift',
    "version": '4.7'
}

Install and import the ibm-watson-machine-learning

Note: ibm-watson-machine-learning documentation can be found here.

!pip install -U ibm-watson-machine-learning | tail -n 1
import json

from ibm_watson_machine_learning import APIClient

client = APIClient(wml_credentials)

Working with spaces

First of all, you need to create a space that will be used for your work. If you do not already have a space, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to space Settings tab

  • Copy space_id and paste it below

Tip: You can also use SDK to prepare the space for your work. More information can be found here.

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

Extract all space ids.

spaces_ids = [space['metadata']['id'] for space in client.spaces.get_details()['resources']]

2. Deployments state check

Iterate over spaces and search for deprecated and unsupported deployments. Next, identify models requiring re-training.

from ibm_watson_machine_learning.lifecycle import SpecStates

for space_id in spaces_ids:
    client.set.default_space(space_id)
    print('****** SPACE', space_id, '******')
    print(client.deployments.get_details(spec_state=SpecStates.DEPRECATED))
    print(client.deployments.get_details(spec_state=SpecStates.UNSUPPORTED))
****** SPACE 448d413d-34e3-4974-a19a-f6a7488cc975 ******
{'resources': []}
{'resources': []}
****** SPACE 4359771d-33c0-4173-9ed5-cdd22897ebfb ******
{'resources': []}
{'resources': []}
****** SPACE 8413e7c3-7483-4568-8b93-b88120d91042 ******
{'resources': []}
{'resources': []}
****** SPACE 71d17a57-8b0d-483c-a928-6791aa21c703 ******
{'resources': []}
{'resources': []}
****** SPACE aaff80ab-d812-48c9-9113-0d0f298c8abe ******
{'resources': []}
{'resources': []}
****** SPACE ac0d6d39-f567-4640-8807-92ca5a17f745 ******
{'resources': []}
{'resources': []}
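The per-space scan above can be factored into a small helper that works purely on the response dictionaries. This is a sketch, not part of the SDK: the function name is ours, and the response shape (`{'resources': [{'metadata': {'id': ...}}, ...]}`) is assumed from the output above.

```python
def deployments_needing_attention(details_by_space):
    """Collect (space_id, deployment_id) pairs from get_details responses.

    `details_by_space` maps each space id to the dict returned by
    client.deployments.get_details(spec_state=...) for that space.
    """
    flagged = []
    for space_id, details in details_by_space.items():
        for resource in details.get('resources', []):
            flagged.append((space_id, resource['metadata']['id']))
    return flagged
```

Because it only touches plain dictionaries, the same helper works for both the deprecated and the unsupported scan results.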

You can also list the deployments under a particular space. The output contains the SPEC_STATE and SPEC_REPLACEMENT columns. Set the working space.

deployment_space_id = 'PASTE YOUR SPACE ID HERE'
client.set.default_space(deployment_space_id)
'SUCCESS'

List deployments under this space.

client.deployments.list()
------------------------------------  --------------------------------------  -----  ------------------------  -------------  ----------  ----------------
GUID                                  NAME                                    STATE  CREATED                   ARTIFACT_TYPE  SPEC_STATE  SPEC_REPLACEMENT
d4437b42-6e74-49e1-a0d5-f68a8eb03992  AutoAI Credit risk - Online Deployment  ready  2023-05-16T14:05:47.486Z  model          supported
------------------------------------  --------------------------------------  -----  ------------------------  -------------  ----------  ----------------

3. Identification of model requiring retraining

Pick the deployment of the AutoAI model you wish to retrain.

Hint: You can also do that programmatically in the loop over spaces (see the "Check the state of your deployments" cell).

Hint: You can also use the software_specification information (model details) to identify models and deployments that are not yet deprecated but can be retrained (an updated software specification is available).
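A minimal predicate for the programmatic check the hints describe might look like the following sketch. It is not an SDK call: the two arguments mirror the SPEC_STATE and SPEC_REPLACEMENT columns shown by client.deployments.list(), and the criterion (deprecated/unsupported, or a replacement spec already available) follows the hints above.

```python
def needs_retraining(spec_state, spec_replacement):
    """Decide whether a deployment's model should be retrained.

    Deprecated or unsupported specs must be retrained; a non-empty
    replacement means a newer spec is available even while the
    current one is still supported.
    """
    return spec_state in ('deprecated', 'unsupported') or bool(spec_replacement)
```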

deployment_id = 'PASTE YOUR DEPLOYMENT ID HERE'
deployed_model_id = client.deployments.get_details(deployment_id)['entity']['asset']['id']
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.

Extract the deployed model's details (including the pipeline information).

deployed_model_details = client.repository.get_model_details(deployed_model_id)
deployed_pipeline_id = deployed_model_details['entity']['pipeline']['id']
deployed_pipeline_details = client.repository.get_details(deployed_pipeline_id)

experiment_params = deployed_pipeline_details['entity']['document']['pipelines'][0]['nodes'][0]['parameters']
optimization_params = experiment_params['optimization']

print('Experiment parameters:', json.dumps(experiment_params, indent=3))
print('Optimization parameters:', json.dumps(optimization_params, indent=3))
/Users/dorotalaczak/opt/anaconda3/envs/rt_23_1_py310/lib/python3.10/site-packages/urllib3/connectionpool.py:1045: InsecureRequestWarning: Unverified HTTPS request is being made to host 'cpd-zen.apps.ocp412wmlautoaicpd47x1fips.cp.fyre.ibm.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings warnings.warn(
Experiment parameters: {
   "incremental_learning": true,
   "input_file_separator": ",",
   "stage_flag": true,
   "optimization": {
      "compute_pipeline_notebooks_flag": true,
      "cv_num_folds": 3.0,
      "daub_adaptive_subsampling_max_mem_usage": 9000000000.0,
      "global_stage_include_batched_ensemble_estimators": [
         "BatchedTreeEnsembleClassifier(ExtraTreesClassifier)",
         "BatchedTreeEnsembleClassifier(LGBMClassifier)",
         "BatchedTreeEnsembleClassifier(RandomForestClassifier)",
         "BatchedTreeEnsembleClassifier(SnapBoostingMachineClassifier)",
         "BatchedTreeEnsembleClassifier(SnapRandomForestClassifier)",
         "BatchedTreeEnsembleClassifier(XGBClassifier)"
      ],
      "holdout_param": 0.1,
      "label": "Risk",
      "learning_type": "classification",
      "max_num_daub_ensembles": 2.0,
      "positive_label": "No Risk",
      "run_cognito_flag": true,
      "scorer_for_ranking": "accuracy"
   },
   "enable_early_stop": true,
   "n_parallel_data_connections": 4.0,
   "one_vs_all_file": true,
   "output_logs": true,
   "early_stop_window_size": 3.0
}
Optimization parameters: {
   "compute_pipeline_notebooks_flag": true,
   "cv_num_folds": 3.0,
   "daub_adaptive_subsampling_max_mem_usage": 9000000000.0,
   "global_stage_include_batched_ensemble_estimators": [
      "BatchedTreeEnsembleClassifier(ExtraTreesClassifier)",
      "BatchedTreeEnsembleClassifier(LGBMClassifier)",
      "BatchedTreeEnsembleClassifier(RandomForestClassifier)",
      "BatchedTreeEnsembleClassifier(SnapBoostingMachineClassifier)",
      "BatchedTreeEnsembleClassifier(SnapRandomForestClassifier)",
      "BatchedTreeEnsembleClassifier(XGBClassifier)"
   ],
   "holdout_param": 0.1,
   "label": "Risk",
   "learning_type": "classification",
   "max_num_daub_ensembles": 2.0,
   "positive_label": "No Risk",
   "run_cognito_flag": true,
   "scorer_for_ranking": "accuracy"
}

Find the AutoAI experiment runs matching the extracted pipeline

Extract the project_id where the training took place.

Note: If the training took place in a space, update this accordingly.

training_project_id = deployed_pipeline_details['metadata']['tags'][0].split('.')[1]
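The split above assumes the first tag has the form `<prefix>.<project_id>`. The tiny helper below makes that assumption explicit; it is illustrative only, and the sample tag in the test is made up.

```python
def project_id_from_tag(tag):
    """Extract the project id from a tag of the form '<prefix>.<project_id>'.

    Equivalent to tag.split('.')[1] for two-part tags, but partition()
    keeps everything after the first dot intact.
    """
    _prefix, _dot, project_id = tag.partition('.')
    return project_id
```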

Extract AutoAI experiment training_id

The training_id is available in the model's details.

run_id = deployed_model_details['entity']['training_id']
print('AutoAI experiment training_id found in model details:', run_id)
AutoAI experiment training_id found in model details: ce97dafb-fa77-45ed-8603-4f525f6c16b7

4. Experiment re-run

Set the training project_id (where data asset resides) to retrain AutoAI models.

from ibm_watson_machine_learning.experiment import AutoAI

experiment = AutoAI(wml_credentials, project_id=training_project_id)
optimizer = experiment.runs.get_optimizer(run_id=run_id)
from ibm_watson_machine_learning.utils.autoai.errors import TestDataNotPresent

training_data_reference = optimizer.get_data_connections()

try:
    test_data_reference = optimizer.get_test_data_connections()
except TestDataNotPresent:
    test_data_reference = None
User defined (test / holdout) data is not present for this AutoAI experiment. Reason: User specified test data was not present in this experiment. Try to use 'with_holdout_split' parameter for original training_data_references to retrieve test data.
train_details = optimizer.fit(training_data_references=training_data_reference,
                              test_data_references=test_data_reference)
Training job 1578524e-ab25-467d-b305-acfe24057a7f completed: 100%|████████| [02:24<00:00, 1.45s/it]

Explore experiment's results

Connect to finished experiment and preview the results.

optimizer.summary()

Evaluate the best model locally

Load the model for test purposes.

Hint: The best model is returned automatically if no pipeline_name is provided.

pipeline_name = 'Pipeline_4'
pipeline_model = optimizer.get_pipeline(pipeline_name=pipeline_name, astype='sklearn')
pipeline_model

This cell constructs the scorer based on the experiment metadata.

from sklearn.metrics import get_scorer

scorer = get_scorer(optimization_params['scorer_for_ranking'])
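For reference, the 'accuracy' value that scorer_for_ranking carries in this experiment boils down to the fraction of exact matches between predictions and labels. A stdlib-only sketch of that metric (not the sklearn object itself, which also wraps the estimator call):

```python
def accuracy(y_true, y_pred):
    """Fraction of positions where prediction equals the true label."""
    if len(y_true) != len(y_pred):
        raise ValueError('y_true and y_pred must have the same length')
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```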

Read the train and holdout data.

Hint: You can also use external test dataset.

connection = optimizer.get_data_connections()[0]
train_X, test_X, train_y, test_y = connection.read(with_holdout_split=True)

Calculate the score

score = scorer(pipeline_model, test_X.values, test_y.values)
print(score)
0.8

5. Store the model in repository

Provide pipeline_name and training_id.

client.set.default_project(training_project_id)
Unsetting the space_id ...
'SUCCESS'
model_metadata = {
    client.repository.ModelMetaNames.NAME: "{0} - {1} - {2}".format(
        deployed_pipeline_details['metadata']['name'],
        pipeline_name,
        pipeline_model.get_params()['steps'][-1][0])
}

published_model = client.repository.store_model(model=pipeline_name,
                                                meta_props=model_metadata,
                                                training_id=train_details['metadata']['id'])
updated_model_id = client.repository.get_model_id(published_model)

print('Re-trained model id', updated_model_id)
Re-trained model id 225b11ff-1f52-4d35-959f-9101a1db2560

List stored models.

client.repository.list_models()
------------------------------------  ------------------------------------------------------------  ------------------------  --------------  ----------  ----------------
ID                                    NAME                                                          CREATED                   TYPE            SPEC_STATE  SPEC_REPLACEMENT
225b11ff-1f52-4d35-959f-9101a1db2560  AutoAI Credit risk - Pipeline_4 - snaprandomforestclassifier  2023-05-16T14:16:28.002Z  wml-hybrid_0.1  supported
dcee0a44-7f8c-476e-815a-0dcd2befa4df  AutoAI Credit risk - Pipeline_4 - snaprandomforestclassifier  2023-05-16T13:47:31.002Z  wml-hybrid_0.1  supported
d2de1575-ded0-42f7-95f6-9b47322469f0  AutoAI Credit risk - Pipeline_4 - snaprandomforestclassifier  2023-05-16T12:16:25.002Z  wml-hybrid_0.1  supported
f7fc49ae-798e-42f9-b951-32ef9f866f05  AutoAI Credit risk - P9 XGB Classifier                        2023-05-16T11:54:07.002Z  wml-hybrid_0.1  supported
506bf6ed-7ff6-4e02-bc11-9d4b2c640ebc  P10 - Pretrained AutoAI pipeline                              2023-05-15T12:43:59.002Z  wml-hybrid_0.1  supported
------------------------------------  ------------------------------------------------------------  ------------------------  --------------  ----------  ----------------

6. Redeploy and score new version of the model

In this section, you'll learn how to redeploy a new version of the model by using the Watson Machine Learning client.

Hint: As a best practice, consider using a test space before moving to production.

promote(asset_id: str, source_project_id: str, target_space_id: str, rev_id: str = None)

Promote model to deployment space

promoted_model_id = client.spaces.promote(asset_id=updated_model_id, source_project_id=training_project_id, target_space_id=deployment_space_id)

Check current deployment details before update.

client.set.default_space(deployment_space_id)
print(json.dumps(client.deployments.get_details(deployment_id), indent=3))
Unsetting the project_id ...
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
{
   "entity": {
      "asset": {
         "id": "aeb64510-166c-46d2-8b11-229c1740ea8c"
      },
      "custom": {},
      "deployed_asset_type": "model",
      "hybrid_pipeline_hardware_specs": [
         {
            "hardware_spec": {
               "name": "S",
               "num_nodes": 1
            },
            "node_runtime_id": "auto_ai.kb"
         }
      ],
      "name": "AutoAI Credit risk - Online Deployment",
      "online": {},
      "space_id": "ac0d6d39-f567-4640-8807-92ca5a17f745",
      "status": {
         "online_url": {
            "url": "https://cpd-zen.apps.ocp412wmlautoaicpd47x1fips.cp.fyre.ibm.com/ml/v4/deployments/d4437b42-6e74-49e1-a0d5-f68a8eb03992/predictions"
         },
         "serving_urls": [
            "https://cpd-zen.apps.ocp412wmlautoaicpd47x1fips.cp.fyre.ibm.com/ml/v4/deployments/d4437b42-6e74-49e1-a0d5-f68a8eb03992/predictions"
         ],
         "state": "ready"
      }
   },
   "metadata": {
      "created_at": "2023-05-16T14:05:47.486Z",
      "id": "d4437b42-6e74-49e1-a0d5-f68a8eb03992",
      "modified_at": "2023-05-16T14:05:47.486Z",
      "name": "AutoAI Credit risk - Online Deployment",
      "owner": "1000331001",
      "space_id": "ac0d6d39-f567-4640-8807-92ca5a17f745"
   },
   "system": {
      "warnings": [
         {
            "id": "Deprecated",
            "message": "online_url is deprecated and will be removed in a future release. Use serving_urls instead."
         }
      ]
   }
}

Update the deployment with new model

Note: The update is asynchronous.

metadata = {
    client.deployments.ConfigurationMetaNames.ASSET: {
        "id": promoted_model_id,
    }
}

updated_deployment = client.deployments.update(deployment_id, changes=metadata)
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
Since ASSET is patched, deployment with new asset id/rev is being started. Monitor the status using deployments.get_details(deployment_uid) api

Wait for the deployment update:

import time

status = None
while status not in ['ready', 'failed']:
    time.sleep(2)
    deployment_details = client.deployments.get_details(deployment_id)
    status = deployment_details['entity']['status'].get('state')
    print('.', status, end=' ')

print("\nDeployment update finished with status: ", status)
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
. updating . updating . updating . updating . updating . updating . updating . updating . updating . updating . updating . updating . updating . updating . ready
Deployment update finished with status:  ready
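The polling loop above never gives up. A variant with a timeout can be sketched as a generic helper; this is illustrative only, and in a real run `get_state` would wrap the client.deployments.get_details(deployment_id) call shown above.

```python
import time

def wait_for_state(get_state, terminal=('ready', 'failed'),
                   poll_seconds=2, timeout_seconds=600, sleep=time.sleep):
    """Poll `get_state()` until it returns a terminal state or times out.

    `sleep` is injectable so the helper can be tested without waiting.
    """
    waited = 0
    while waited < timeout_seconds:
        state = get_state()
        if state in terminal:
            return state
        sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError('deployment update did not reach a terminal state in time')
```

In the notebook this would be called as `wait_for_state(lambda: client.deployments.get_details(deployment_id)['entity']['status'].get('state'))`.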

Get updated deployment details

print(json.dumps(client.deployments.get_details(deployment_id), indent=2))
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
{
  "entity": {
    "asset": {
      "id": "d78cc303-f277-42ef-85c0-d5d220b60202"
    },
    "custom": {},
    "deployed_asset_type": "model",
    "hybrid_pipeline_hardware_specs": [
      {
        "hardware_spec": {
          "id": "e7ed1d6c-2e89-42d7-aed5-863b972c1d2b",
          "name": "S",
          "num_nodes": 1
        },
        "node_runtime_id": "auto_ai.kb"
      }
    ],
    "name": "AutoAI Credit risk - Online Deployment",
    "online": {},
    "space_id": "ac0d6d39-f567-4640-8807-92ca5a17f745",
    "status": {
      "message": {
        "level": "warning",
        "text": "Successfully patched the asset."
      },
      "online_url": {
        "url": "https://cpd-zen.apps.ocp412wmlautoaicpd47x1fips.cp.fyre.ibm.com/ml/v4/deployments/d4437b42-6e74-49e1-a0d5-f68a8eb03992/predictions"
      },
      "serving_urls": [
        "https://cpd-zen.apps.ocp412wmlautoaicpd47x1fips.cp.fyre.ibm.com/ml/v4/deployments/d4437b42-6e74-49e1-a0d5-f68a8eb03992/predictions"
      ],
      "state": "ready"
    }
  },
  "metadata": {
    "created_at": "2023-05-16T14:05:47.486Z",
    "id": "d4437b42-6e74-49e1-a0d5-f68a8eb03992",
    "modified_at": "2023-05-16T14:17:32.010Z",
    "name": "AutoAI Credit risk - Online Deployment",
    "owner": "1000331001",
    "space_id": "ac0d6d39-f567-4640-8807-92ca5a17f745"
  },
  "system": {
    "warnings": [
      {
        "id": "Deprecated",
        "message": "online_url is deprecated and will be removed in a future release. Use serving_urls instead."
      }
    ]
  }
}

Score updated model

Create sample payload and score the deployed model.

scoring_payload = {"input_data": [{"values": test_X[:3]}]}

Use client.deployments.score() method to run scoring.

predictions = client.deployments.score(deployment_id, scoring_payload)
print(json.dumps(predictions, indent=2))
{
  "predictions": [
    {
      "fields": [
        "prediction",
        "probability"
      ],
      "values": [
        [
          "Risk",
          [
            0.1754017922936416,
            0.8245982077063584
          ]
        ],
        [
          "Risk",
          [
            0.1643842371498666,
            0.8356157628501334
          ]
        ],
        [
          "Risk",
          [
            0.4907570931969619,
            0.5092429068030381
          ]
        ]
      ]
    }
  ]
}
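A small helper can flatten a scoring response like the one above into (label, top probability) pairs. This is a sketch, not an SDK utility: the helper name is ours, and the 'prediction'/'probability' field layout is taken from the output shown.

```python
def top_predictions(response):
    """Turn a WML-style scoring response into (label, max_probability) pairs."""
    out = []
    for block in response['predictions']:
        fields = block['fields']
        pred_idx = fields.index('prediction')
        prob_idx = fields.index('probability')
        for row in block['values']:
            out.append((row[pred_idx], max(row[prob_idx])))
    return out
```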

7. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • models

  • deployments

please follow this sample notebook.

8. Summary and next steps

You successfully completed this notebook! You learned how to use scikit-learn as well as Watson Machine Learning for model creation and deployment. Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Lukasz Cmielowski, PhD, is a Senior Technical Staff Member at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge.

Dorota Laczak, Python Software Developer in Watson Machine Learning AutoAI at IBM

Copyright © 2023-2025 IBM. This notebook and its source code are released under the terms of the MIT License.