GitHub Repository: IBM/watson-machine-learning-samples
Path: blob/master/cloud/notebooks/python_sdk/converters/Use ONNX model converted from AutoAI.ipynb
Kernel: .venv_samples_py_311_x86

Use ONNX model converted from AutoAI with ibm-watsonx-ai

This notebook demonstrates how to use ONNX, AutoAI, and the watsonx.ai Runtime service together. It contains steps and code for working with the ibm-watsonx-ai library, available in the PyPI repository, to convert a model to ONNX format. It also introduces commands for persisting, deploying, and scoring the model.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goals

The learning goals of this notebook are:

  • Train an AutoAI model

  • Convert the native scikit-learn model to ONNX format

  • Deploy the model for online scoring using the client library

  • Score sample records using the client library

Contents

This notebook contains the following parts:

  1. Environment setup

  2. Optimizer definition

  3. Experiment run

  4. Deploy and score

  5. Cleanup

  6. Summary and next steps

1. Environment setup

Before you use the sample code in this notebook, you must perform the following setup tasks:

1.1. Installing the ibm-watsonx-ai library and its dependencies

Note: ibm-watsonx-ai documentation can be found here.

%pip install ibm-watsonx-ai | tail -n 1
%pip install wget | tail -n 1
%pip install "autoai_libs[onnx]<3.0.0" | tail -n 1
%pip install onnxruntime | tail -n 1
Successfully installed anyio-4.12.0 cachetools-6.2.2 certifi-2025.11.12 charset_normalizer-3.4.4 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 ibm-cos-sdk-2.14.3 ibm-cos-sdk-core-2.14.3 ibm-cos-sdk-s3transfer-2.14.3 ibm-watsonx-ai-1.4.11 idna-3.11 jmespath-1.0.1 lomond-0.3.3 numpy-2.3.5 pandas-2.2.3 pytz-2025.2 requests-2.32.5 tabulate-0.9.0 tzdata-2025.2 urllib3-2.6.2 Successfully installed wget-3.2 Successfully installed astunparse-1.6.3 attrs-25.4.0 autoai_libs-2.0.29 black-25.12.0 click-8.3.1 cloudpickle-3.1.2 future-1.0.0 gensim-4.3.3 graphviz-0.21 greenery-3.3.3 hyperopt-0.2.7 joblib-1.5.2 jsonref-1.1.0 jsonschema-4.20.0 jsonschema-specifications-2025.9.1 jsonsubschema-0.0.7 lale-0.8.4 lightgbm-4.2.0 mypy-extensions-1.1.0 networkx-3.6.1 numpy-1.26.4 onnx-1.16.0 onnxconverter-common-1.13.0 onnxmltools-1.14.0 onnxruntime-extensions-0.13.0 pandas-2.1.4 parameterized-0.8.1 pathspec-0.12.1 portion-2.6.1 protobuf-6.33.2 py4j-0.10.9.9 pytokens-0.3.0 referencing-0.37.0 rpds-py-0.30.0 scikit-learn-1.3.2 scipy-1.13.1 skl2onnx-1.18.0 smart-open-7.5.0 sortedcontainers-2.4.0 threadpoolctl-3.6.0 tqdm-4.67.1 wheel-0.45.1 wrapt-2.0.1 xgboost-2.0.3 Successfully installed coloredlogs-15.0.1 flatbuffers-25.9.23 humanfriendly-10.0 mpmath-1.3.0 onnxruntime-1.23.2 sympy-1.14.0

1.2. Connecting to watsonx.ai Runtime

Authenticate with the watsonx.ai Runtime service on IBM Cloud. You need to provide the platform api_key and the instance location.

You can use the IBM Cloud CLI to retrieve the platform API key and instance location.

An API key can be generated as follows:

ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME

Get the value of api_key from the output.

The location of your watsonx.ai Runtime instance can be retrieved as follows:

ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance INSTANCE_NAME

Get the value of location from the output.

Tip: You can generate your Cloud API key by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name and click Create, then copy the created key and paste it below. You can also get a service-specific url by going to the Endpoint URLs section of the watsonx.ai Runtime docs. You can check your instance location in your watsonx.ai Runtime Service instance details.

You can also get the service-specific apikey by going to the Service IDs section of the Cloud Console. From that page, click Create, then copy the created key and paste it below.

Action: Enter your api_key and location in the following cells.

import getpass

api_key = getpass.getpass("Please enter your api key (hit enter): ")
location = "PASTE YOUR LOCATION HERE"

If you are running this notebook on Cloud, you can access the location via:

import os

location = os.environ.get("RUNTIME_ENV_REGION")
from ibm_watsonx_ai import Credentials

credentials = Credentials(api_key=api_key, url=f"https://{location}.ml.cloud.ibm.com")

from ibm_watsonx_ai import APIClient

client = APIClient(credentials=credentials)

1.3. Working with spaces

First, you need to create a space that will be used for your work. If you do not have a space, you can use the Deployment Spaces Dashboard to create one.

  • Click New Deployment Space

  • Create an empty space

  • Select Cloud Object Storage

  • Select watsonx.ai Runtime instance and press Create

  • Copy space_id and paste it below

Tip: You can also use the ibm_watsonx_ai SDK to prepare the space for your work. More information can be found here.

Action: Assign space ID below

space_id = "PASTE YOUR SPACE ID HERE"

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

To interact with all resources available in watsonx.ai Runtime, you need to set the space you will be using.

client.set.default_space(space_id)
'SUCCESS'

Connections to COS

In the next cell, we read the COS credentials from the space.

from ibm_watsonx_ai.utils import get_from_json

space_details = client.spaces.get_details(space_id=space_id)
cos_credentials = get_from_json(space_details, ["entity", "storage", "properties"])

2. Optimizer definition

Training data connection

Define the connection information for the COS bucket and the training data CSV file. This example uses the German Credit Risk dataset.

The code in the following cells downloads the training data and uploads it to the bucket.

filename = "credit_risk_training_light.csv"
datasource_name = "bluemixcloudobjectstorage"
bucket_name = cos_credentials.get("bucket_name")

Download the training data from the git repository.

import os

import wget

url = "https://raw.githubusercontent.com/IBM/watsonx-ai-samples/master/cloud/data/credit_risk/credit_risk_training_light.csv"

if not os.path.isfile(filename):
    wget.download(url)

Create connection

conn_meta_props = {
    client.connections.ConfigurationMetaNames.NAME: f"Connection to Database - {datasource_name} ",
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: client.connections.get_datasource_type_id_by_name(
        datasource_name
    ),
    client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection to external Database",
    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "bucket": bucket_name,
        "access_key": get_from_json(
            cos_credentials, ["credentials", "editor", "access_key_id"]
        ),
        "secret_key": get_from_json(
            cos_credentials, ["credentials", "editor", "secret_access_key"]
        ),
        "iam_url": "https://iam.cloud.ibm.com/identity/token",
        "url": cos_credentials.get("endpoint_url"),
    },
}

conn_details = client.connections.create(meta_props=conn_meta_props)
Creating connections... SUCCESS

Note: The above connection can alternatively be initialized with api_key and resource_instance_id. The above cell can be replaced with:

conn_meta_props = {
    client.connections.ConfigurationMetaNames.NAME: f"Connection to Database - {datasource_name} ",
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: client.connections.get_datasource_type_id_by_name(
        datasource_name
    ),
    client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection to external Database",
    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "bucket": bucket_name,
        "api_key": cos_credentials["apikey"],
        "resource_instance_id": cos_credentials["resource_instance_id"],
        "iam_url": "https://iam.cloud.ibm.com/identity/token",
        "url": "https://s3.us.cloud-object-storage.appdomain.cloud",
    },
}

conn_details = client.connections.create(meta_props=conn_meta_props)
connection_id = client.connections.get_id(conn_details)

Define the connection information for the training data.

from ibm_watsonx_ai.helpers import DataConnection, S3Location

credit_risk_conn = DataConnection(
    connection_asset_id=connection_id,
    location=S3Location(bucket=bucket_name, path=filename),
)

training_data_reference = [credit_risk_conn]

Check the connection information. Upload the data and validate.

credit_risk_conn.set_client(client)
credit_risk_conn.write(data=filename, remote_name=filename)
credit_risk_conn.read()

Optimizer configuration

Provide the input information for the AutoAI optimizer:

  • name - experiment name

  • prediction_type - type of the problem

  • prediction_column - target column name

  • scoring - optimization metric

from ibm_watsonx_ai.experiment import AutoAI

experiment = AutoAI(credentials, space_id=space_id)

pipeline_optimizer = experiment.optimizer(
    name="Credit Risk Prediction - AutoAI",
    prediction_type=AutoAI.PredictionType.BINARY,
    prediction_column="Risk",
    include_only_estimators=["RandomForestClassifier"],
)

Configuration parameters can be retrieved via get_params().

pipeline_optimizer.get_params()
{'name': 'Credit Risk Prediction - AutoAI', 'desc': '', 'prediction_type': 'binary', 'prediction_column': 'Risk', 'prediction_columns': None, 'timestamp_column_name': None, 'scoring': None, 'holdout_size': None, 'max_num_daub_ensembles': None, 't_shirt_size': 'l', 'train_sample_rows_test_size': None, 'include_only_estimators': [<ClassificationAlgorithms.RF: 'RandomForestClassifier'>], 'include_batched_ensemble_estimators': None, 'backtest_num': None, 'lookback_window': None, 'forecast_window': None, 'backtest_gap_length': None, 'cognito_transform_names': None, 'csv_separator': ',', 'excel_sheet': None, 'encoding': 'utf-8', 'positive_label': None, 'drop_duplicates': True, 'outliers_columns': None, 'text_processing': None, 'word2vec_feature_number': None, 'daub_give_priority_to_runtime': None, 'text_columns_names': None, 'sampling_type': None, 'sample_size_limit': None, 'sample_rows_limit': None, 'sample_percentage_limit': None, 'number_of_batch_rows': None, 'n_parallel_data_connections': None, 'test_data_csv_separator': ',', 'test_data_excel_sheet': None, 'test_data_encoding': 'utf-8', 'categorical_imputation_strategy': None, 'numerical_imputation_strategy': None, 'numerical_imputation_value': None, 'imputation_threshold': None, 'retrain_on_holdout': True, 'feature_columns': None, 'pipeline_types': None, 'supporting_features_at_forecast': None, 'numerical_columns': None, 'categorical_columns': None, 'confidence_level': None, 'incremental_learning': None, 'early_stop_enabled': None, 'early_stop_window_size': None, 'time_ordered_data': None, 'feature_selector_mode': None, 'run_id': None}
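Most entries in the output above are None defaults. As a quick, purely illustrative way to see only the explicitly set parameters, you could filter the dictionary with a comprehension (sketched here on a trimmed copy of the output, with the ClassificationAlgorithms enum replaced by its plain string value):

```python
# A trimmed copy of the get_params() output above (hypothetical standalone
# example; in the notebook you would use pipeline_optimizer.get_params()).
params = {
    "name": "Credit Risk Prediction - AutoAI",
    "desc": "",
    "prediction_type": "binary",
    "prediction_column": "Risk",
    "scoring": None,
    "holdout_size": None,
    "t_shirt_size": "l",
    "include_only_estimators": ["RandomForestClassifier"],
    "drop_duplicates": True,
    "retrain_on_holdout": True,
    "run_id": None,
}

# Keep only the parameters that were explicitly set; everything else is None.
set_params = {k: v for k, v in params.items() if v is not None}
print(set_params)
```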

3. Experiment run

Call the fit() method to trigger the AutoAI experiment. You can use either interactive mode (a synchronous job) or background mode (an asynchronous job) by specifying background_mode=True.

run_details = pipeline_optimizer.fit(
    training_data_reference=training_data_reference, background_mode=False
)
Training job c5bb1f49-573e-49d9-8641-2a35adb5aa35 completed: 100%|████████| [03:03<00:00, 1.84s/it]

You can use the get_run_status() method to monitor AutoAI jobs in background mode.

pipeline_optimizer.get_run_status()
'completed'
pipeline_optimizer.summary()
pipeline_name = "Pipeline_1"
pipeline_model = pipeline_optimizer.get_pipeline(
    pipeline_name=pipeline_name, astype=AutoAI.PipelineTypes.ONNX
)

3.1. Model evaluation

X_test = credit_risk_conn.read().drop(["Risk"], axis=1)[:3]
X_test_dict = {col: X_test[col].apply(lambda x: [x]).tolist() for col in X_test.columns}

pipeline_model.run([], X_test_dict)
[array(['No Risk', 'No Risk', 'No Risk'], dtype=object), [{'No Risk': 0.8999999761581421, 'Risk': 0.10000000149011612}, {'No Risk': 1.0, 'Risk': 0.0}, {'No Risk': 0.800000011920929, 'Risk': 0.20000000298023224}]]
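The run() call returns two outputs: an array of predicted labels and a list of per-class probability dictionaries, in the same row order. As a minimal, pure-Python sketch of pairing them up, using the literal values shown above in place of the live ONNX runtime outputs:

```python
# Literal copies of the two outputs shown above (the first is a numpy array
# in the notebook; a plain list behaves the same for this illustration).
labels = ["No Risk", "No Risk", "No Risk"]
probabilities = [
    {"No Risk": 0.8999999761581421, "Risk": 0.10000000149011612},
    {"No Risk": 1.0, "Risk": 0.0},
    {"No Risk": 0.800000011920929, "Risk": 0.20000000298023224},
]

# Pair each predicted label with the probability the model assigned to it.
results = [
    {"prediction": label, "confidence": probs[label]}
    for label, probs in zip(labels, probabilities)
]

for row in results:
    print(row)
```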

4. Deploy and score

In this section you will learn how to deploy and score the pipeline model as a web service using a watsonx.ai Runtime instance.

Online deployment creation

run_id = get_from_json(run_details, ["metadata", "id"])
from ibm_watsonx_ai.deployment import WebService

service = WebService(credentials, source_space_id=space_id)

service.create(
    experiment_run_id=run_id,
    model=pipeline_name,
    deployment_name=f"Credit Risk Deployment AutoAI - ONNX - {pipeline_name}",
)
Preparing an AutoAI Deployment...
Published model uid: 62e3bf7b-37f7-4c38-8e4e-db82ce5a9f79
Deploying model 62e3bf7b-37f7-4c38-8e4e-db82ce5a9f79 using V4 client.

######################################################################################
Synchronous deployment creation for id: '62e3bf7b-37f7-4c38-8e4e-db82ce5a9f79' started
######################################################################################

initializing
Note: online_url and serving_urls are deprecated and will be removed in a future release. Use inference instead.
.....
ready

-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='368c2d8f-cb12-430f-be1d-0ff493b2d3d5'
-----------------------------------------------------------------------------------------------

The deployment object can be printed to show basic information:

print(service)

To show all available information about the deployment use the .get_params() method:

service.get_params()
deployment_id = get_from_json(service.get_params(), ["metadata", "id"])
deployment_id
Note: online_url and serving_urls are deprecated and will be removed in a future release. Use inference instead.
'368c2d8f-cb12-430f-be1d-0ff493b2d3d5'
scoring_payload = {"input_data": [{"values": X_test}]}

Webservice scoring

You can make a scoring request by calling score() on the deployed pipeline.

predictions = client.deployments.score(deployment_id, scoring_payload)
predictions
{'predictions': [{'fields': ['prediction', 'probability'], 'values': [['No Risk', [0.9, 0.1]], ['No Risk', [1.0, 0.0]], ['No Risk', [0.8, 0.2]]]}]}
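The response pairs each prediction with its class probabilities, with the meaning of each position given by fields. A small, pure-Python sketch (using a literal copy of the response shown above in place of a live scoring call) of extracting the predicted labels and their top probabilities:

```python
# A literal copy of the scoring response shown above.
predictions = {
    "predictions": [
        {
            "fields": ["prediction", "probability"],
            "values": [
                ["No Risk", [0.9, 0.1]],
                ["No Risk", [1.0, 0.0]],
                ["No Risk", [0.8, 0.2]],
            ],
        }
    ]
}

first = predictions["predictions"][0]
fields = first["fields"]

# Turn each scored row into a dict keyed by the field names.
rows = [dict(zip(fields, values)) for values in first["values"]]

labels = [row["prediction"] for row in rows]
top_probabilities = [max(row["probability"]) for row in rows]

print(labels)
print(top_probabilities)
```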

If you want to work with the web service in an external Python application you can retrieve the service object by:

  • Initialize the service by service = WebService(credentials)

  • Get deployment_id by service.list() method

  • Get webservice object by service.get('deployment_id') method

After that you can call service.score() method.

Deleting deployment

You can delete the existing deployment by calling the service.delete() command. To list the existing web services you can use service.list().

5. Cleanup

If you want to clean up after running the notebook, i.e. remove any created assets such as:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

follow the steps in this sample notebook.

6. Summary and next steps

You successfully completed this notebook! You learned how to use ONNX and the scikit-learn machine learning library, as well as watsonx.ai Runtime, for model creation and deployment. Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Marta Tomzik, Software Engineer at watsonx.ai

Copyright © 2025-2026 IBM. This notebook and its source code are released under the terms of the MIT License.