Path: blob/master/cloud/notebooks/python_sdk/converters/Use ONNX model converted from AutoAI.ipynb
Use ONNX model converted from AutoAI with ibm-watsonx-ai
This notebook demonstrates how to use ONNX, AutoAI, and the watsonx.ai Runtime service. It contains steps and code for working with the ibm-watsonx-ai library (available on PyPI) to convert an AutoAI model to ONNX format. It also introduces commands for persisting, deploying, and scoring the model.
Some familiarity with Python is helpful. This notebook uses Python 3.11.
Learning goals
The learning goals of this notebook are:
Train an AutoAI model
Convert the native scikit-learn model to ONNX format
Deploy the model for online scoring using client library
Score sample records using the client library
Contents
This notebook contains the following parts:
1. Environment setup
Before you use the sample code in this notebook, you must perform the following setup tasks:
Create a watsonx.ai Runtime instance (information on service plans and further reading can be found here).
1.1. Installing and importing the ibm-watsonx-ai package and dependencies
Note: ibm-watsonx-ai documentation can be found here.
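The install commands themselves are not shown in the output below; a minimal sketch of what typically produces it is the following (exact package pins may differ in your environment):

```shell
pip install -U ibm-watsonx-ai | tail -n 1
pip install -U wget | tail -n 1
pip install -U autoai-libs skl2onnx onnxmltools onnxruntime-extensions | tail -n 1
pip install -U onnxruntime | tail -n 1
```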
Successfully installed anyio-4.12.0 cachetools-6.2.2 certifi-2025.11.12 charset_normalizer-3.4.4 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 ibm-cos-sdk-2.14.3 ibm-cos-sdk-core-2.14.3 ibm-cos-sdk-s3transfer-2.14.3 ibm-watsonx-ai-1.4.11 idna-3.11 jmespath-1.0.1 lomond-0.3.3 numpy-2.3.5 pandas-2.2.3 pytz-2025.2 requests-2.32.5 tabulate-0.9.0 tzdata-2025.2 urllib3-2.6.2
Successfully installed wget-3.2
Successfully installed astunparse-1.6.3 attrs-25.4.0 autoai_libs-2.0.29 black-25.12.0 click-8.3.1 cloudpickle-3.1.2 future-1.0.0 gensim-4.3.3 graphviz-0.21 greenery-3.3.3 hyperopt-0.2.7 joblib-1.5.2 jsonref-1.1.0 jsonschema-4.20.0 jsonschema-specifications-2025.9.1 jsonsubschema-0.0.7 lale-0.8.4 lightgbm-4.2.0 mypy-extensions-1.1.0 networkx-3.6.1 numpy-1.26.4 onnx-1.16.0 onnxconverter-common-1.13.0 onnxmltools-1.14.0 onnxruntime-extensions-0.13.0 pandas-2.1.4 parameterized-0.8.1 pathspec-0.12.1 portion-2.6.1 protobuf-6.33.2 py4j-0.10.9.9 pytokens-0.3.0 referencing-0.37.0 rpds-py-0.30.0 scikit-learn-1.3.2 scipy-1.13.1 skl2onnx-1.18.0 smart-open-7.5.0 sortedcontainers-2.4.0 threadpoolctl-3.6.0 tqdm-4.67.1 wheel-0.45.1 wrapt-2.0.1 xgboost-2.0.3
Successfully installed coloredlogs-15.0.1 flatbuffers-25.9.23 humanfriendly-10.0 mpmath-1.3.0 onnxruntime-1.23.2 sympy-1.14.0
1.2. Connecting to watsonx.ai Runtime
Authenticate with the watsonx.ai Runtime service on IBM Cloud. You need to provide the platform api_key and the instance location.
You can use IBM Cloud CLI to retrieve platform API Key and instance location.
API Key can be generated in the following way:
Get the value of api_key from the output.
Location of your watsonx.ai Runtime instance can be retrieved in the following way:
Get the value of location from the output.
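The CLI commands were stripped from this copy of the notebook; a sketch of the typical IBM Cloud CLI steps, with placeholder names, is:

```shell
# Log in to IBM Cloud (the CLI prompts for your credentials).
ibmcloud login

# Generate a platform API key; API_KEY_NAME is a placeholder of your choice.
ibmcloud iam api-key-create API_KEY_NAME

# Show details (including the region/location) of your watsonx.ai Runtime
# instance; INSTANCE_NAME is a placeholder for your instance name.
ibmcloud resource service-instance INSTANCE_NAME
```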
Tip: You can generate your Cloud API key by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name and click Create, then copy the created key and paste it below. You can also get a service-specific url by going to the Endpoint URLs section of the watsonx.ai Runtime docs. You can check your instance location in your watsonx.ai Runtime Service instance details.
You can also get a service-specific API key by going to the Service IDs section of the Cloud console. From that page, click Create, then copy the created key and paste it below.
Action: Enter your api_key and location in the following cells.
If you are running this notebook on Cloud, you can access the location via:
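A minimal sketch of the credentials cell, assuming the Credentials class from the ibm_watsonx_ai package and the standard regional endpoint URL pattern; the placeholder strings are yours to replace:

```python
from ibm_watsonx_ai import Credentials

# Placeholders -- paste the values retrieved above.
api_key = "PASTE YOUR PLATFORM API KEY HERE"
location = "PASTE YOUR INSTANCE LOCATION HERE"  # e.g. "us-south"

credentials = Credentials(
    api_key=api_key,
    url=f"https://{location}.ml.cloud.ibm.com",
)
```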
1.3. Working with spaces
First of all, you need to create a space that will be used for your work. If you do not have a space, you can use Deployment Spaces Dashboard to create one.
Click New Deployment Space
Create an empty space
Select Cloud Object Storage
Select watsonx.ai Runtime instance and press Create
Copy space_id and paste it below
Tip: You can also use the ibm_watsonx_ai SDK to prepare the space for your work. More information can be found here.
Action: Assign space ID below
You can use the list method to print all existing spaces.
To interact with all the resources available in watsonx.ai Runtime, you need to set the space you will be using as the default.
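A sketch of the client setup, assuming the APIClient class and the credentials defined earlier; these calls require a live watsonx.ai Runtime service, so no output is shown:

```python
from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

space_id = "PASTE YOUR SPACE ID HERE"

# List existing spaces to verify the ID.
client.spaces.list(limit=10)

# Set the default space for all subsequent operations.
client.set.default_space(space_id)
```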
Connections to COS
In the next cell, we read the COS credentials from the space.
Training data connection
Define the connection information for the COS bucket and the training data CSV file. This example uses the German Credit Risk dataset.
The code in the next cell uploads the training data to the bucket.
Download the training data from the git repository.
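A sketch of the download step; the URL and file name are assumed from the IBM watson-machine-learning-samples repository and may need adjusting if the path has moved:

```python
import os
import wget

# Assumed location of the German Credit Risk training data in the
# watson-machine-learning-samples repository.
filename = "german_credit_data_biased_training.csv"
url = ("https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/"
       "master/cloud/data/credit_risk/" + filename)

if not os.path.isfile(filename):
    wget.download(url)
```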
Create connection
Note: The above connection can be initialized alternatively with api_key and resource_instance_id.
The above cell can be replaced with:
Define the connection information for the training data.
Check the connection information. Upload the data and validate.
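A sketch of the training data connection, assuming the DataConnection and S3Location helpers from the SDK; connection_id is the ID of the COS connection asset created above (variable name assumed), and the bucket name is a placeholder:

```python
from ibm_watsonx_ai.helpers import DataConnection, S3Location

# connection_id refers to the COS connection asset created earlier.
data_connection = DataConnection(
    connection_asset_id=connection_id,
    location=S3Location(
        bucket="PASTE YOUR BUCKET NAME HERE",
        path="german_credit_data_biased_training.csv",
    ),
)
```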
Optimizer configuration
Provide the input information for AutoAI optimizer:
name - experiment name
prediction_type - type of the problem
prediction_column - target column name
scoring - optimization metric
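A sketch of the optimizer configuration using the AutoAI experiment class from the SDK; the experiment name, target column, and metric are illustrative choices for the German Credit Risk dataset, and running fit() requires a live service:

```python
from ibm_watsonx_ai.experiment import AutoAI

experiment = AutoAI(credentials, space_id=space_id)

pipeline_optimizer = experiment.optimizer(
    name="ONNX - credit risk prediction",          # experiment name
    prediction_type=AutoAI.PredictionType.BINARY,  # type of the problem
    prediction_column="Risk",                      # target column name
    scoring=AutoAI.Metrics.ROC_AUC_SCORE,          # optimization metric
)

# Start training; data_connection is the training data connection defined
# earlier (variable name assumed). With background_mode=True the call
# returns immediately and the job runs in the background.
run_details = pipeline_optimizer.fit(
    training_data_references=[data_connection],
    background_mode=True,
)
```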
Configuration parameters can be retrieved via get_params().
You can use the get_run_status() method to monitor AutoAI jobs in background mode.
4. Deploy and score
In this section you will learn how to deploy and score a pipeline model as a web service using the watsonx.ai Runtime instance.
Online deployment creation
The deployment object can be printed to show basic information:
To show all available information about the deployment use the .get_params() method:
Webservice scoring
You can make a scoring request by calling score() on the deployed pipeline.
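The payload below sketches the watsonx.ai online-scoring request format; the field names are a hypothetical subset of the German Credit Risk columns, so substitute the fields your trained pipeline actually expects:

```python
# Hypothetical subset of German Credit Risk columns -- use the fields
# your trained pipeline actually expects.
scoring_payload = {
    "input_data": [
        {
            "fields": ["CheckingStatus", "LoanDuration", "LoanAmount"],
            "values": [
                ["no_checking", 13, 1343],
                ["0_to_200", 24, 4567],
            ],
        }
    ]
}

# With a live deployment you would send it with, e.g.:
# predictions = client.deployments.score(deployment_id, scoring_payload)
```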
If you want to work with the web service in an external Python application, you can retrieve the service object as follows:
Initialize the service with service = WebService(credentials)
Get the deployment_id with the service.list() method
Get the web service object with the service.get('deployment_id') method
After that you can call the service.score() method.
Deleting deployment
You can delete the existing deployment by calling the service.delete() command. To list the existing web services you can use service.list().
If you want to clean up after the notebook execution, i.e. remove any created assets like:
experiments
trainings
pipelines
model definitions
models
functions
deployments
please follow this sample notebook.
You successfully completed this notebook! You learned how to use ONNX and the scikit-learn machine learning library, as well as watsonx.ai Runtime, for model creation and deployment. Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.
Authors
Marta Tomzik, Software Engineer at watsonx.ai
Copyright © 2025-2026 IBM. This notebook and its source code are released under the terms of the MIT License.