GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cloud/notebooks/python_sdk/converters/Use ONNX model converted from PyTorch.ipynb
Kernel: Python 3 (ipykernel)

Use ONNX model converted from PyTorch with ibm-watsonx-ai

This notebook demonstrates how to use ONNX, PyTorch and the watsonx.ai Runtime service. It contains steps and code to work with the ibm-watsonx-ai library (available on PyPI) in order to convert a PyTorch model to ONNX format. It also introduces commands for getting the model and training data, persisting the model, deploying the model, and scoring it.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goals

The learning goals of this notebook are:

  • Create a PyTorch model with a dataset.

  • Convert the PyTorch model to ONNX format.

  • Persist the converted model in the watsonx.ai Runtime repository.

  • Deploy the model for online scoring using the client library.

  • Score sample records using the client library.

Contents

This notebook contains the following parts:

  1. Setting up the environment

  2. Creating PyTorch model with dataset

  3. Converting PyTorch model to ONNX format

  4. Persisting converted ONNX model

  5. Deploying and scoring ONNX model

  6. Cleaning up

  7. Summary and next steps

1. Setting up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

1.1. Installing and importing the ibm-watsonx-ai library and dependencies

Note: ibm-watsonx-ai documentation can be found here.

!pip install -U ibm-watsonx-ai | tail -n 1
!pip install torch==2.1 | tail -n 1
!pip install onnx==1.16 | tail -n 1
!pip install onnxruntime==1.16.3 | tail -n 1
import getpass
import json
import os

import torch
import torch.nn as nn
import onnx
import onnxruntime as ort

from ibm_watsonx_ai import Credentials, APIClient

1.2. Connecting to watsonx.ai Runtime

Authenticate to the watsonx.ai Runtime service on IBM Cloud. You need to provide your platform api_key and instance location.

You can use the IBM Cloud CLI to retrieve your platform API key and instance location.

The API key can be generated as follows:

ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME

From the output, copy the value of api_key.

The location of your watsonx.ai Runtime instance can be retrieved as follows:

ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance INSTANCE_NAME

From the output, copy the value of location.

Tip: Your Cloud API key can be generated by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name and click Create, then copy the created key and paste it below. You can also get a service-specific URL by going to the Endpoint URLs section of the watsonx.ai Runtime docs. You can check your instance location in your watsonx.ai Runtime Service instance details.

You can also get a service-specific API key by going to the Service IDs section of the Cloud Console. From that page, click Create, then copy the created key and paste it below.

Action: Enter your api_key and location in the following cells.

api_key = getpass.getpass("Please enter your api key (hit enter): ")
{"name":"stdin","output_type":"stream","text":"Please enter your api key (hit enter):  ········\n"}
location = 'ENTER YOUR LOCATION HERE'

If you are running this notebook on Cloud, you can access the location via:

location = os.environ.get("RUNTIME_ENV_REGION")
credentials = Credentials(
    api_key=api_key,
    url=f'https://{location}.ml.cloud.ibm.com'
)
client = APIClient(credentials=credentials)

1.3. Working with spaces

First of all, you need to create a space that will be used for your work. If you do not already have a space, you can use the Deployment Spaces Dashboard to create one.

  • Click New Deployment Space

  • Create an empty space

  • Select Cloud Object Storage

  • Select watsonx.ai Runtime instance and press Create

  • Copy space_id and paste it below

Tip: You can also use the ibm_watsonx_ai SDK to prepare the space for your work. More information can be found here.
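
For reference, a minimal sketch of creating a space with the SDK; the names and CRN values below are placeholders (assumptions) that you must replace with your own instance details.

# A minimal sketch of programmatic space creation with the ibm_watsonx_ai SDK;
# "<YOUR_COS_CRN>", "<YOUR_RUNTIME_INSTANCE_NAME>" and "<YOUR_RUNTIME_CRN>" are
# hypothetical placeholders for your Cloud Object Storage and Runtime details.
space_metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "pytorch-onnx-space",
    client.spaces.ConfigurationMetaNames.STORAGE: {
        "type": "bmcos_object_storage",
        "resource_crn": "<YOUR_COS_CRN>"
    },
    client.spaces.ConfigurationMetaNames.COMPUTE: {
        "name": "<YOUR_RUNTIME_INSTANCE_NAME>",
        "crn": "<YOUR_RUNTIME_CRN>"
    }
}
space_details = client.spaces.store(meta_props=space_metadata)
space_id = client.spaces.get_id(space_details)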

Action: Assign space ID below

space_id = 'ENTER YOUR SPACE ID HERE'

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

To be able to interact with all resources available in watsonx.ai Runtime, you need to set the space which you will be using.

client.set.default_space(space_id)
'SUCCESS'

2. Creating PyTorch model with dataset

To demonstrate how to convert a PyTorch model to ONNX format, we’ll create a simple neural network and generate a random dataset that will be used to perform inference later on. Feel free to replace these with a model and dataset suited to your specific needs.

class SingleInputModel(nn.Module):
    """Custom neural network model with single input, three linear layers and ReLU activations."""

    def __init__(self, input_size: int, output_size: int) -> None:
        """Initialize model based on the input and output size

        :param input_size: Size of the input features.
        :type input_size: int

        :param output_size: Size of the output features.
        :type output_size: int
        """
        super().__init__()

        self.ll_1 = nn.Linear(input_size, 64, bias=False)
        nn.init.ones_(self.ll_1.weight)

        self.ll_2 = nn.Linear(64, 32, bias=False)
        nn.init.ones_(self.ll_2.weight)

        self.ll_3 = nn.Linear(32, output_size, bias=False)
        nn.init.ones_(self.ll_3.weight)

        self.fc = nn.Sequential(
            self.ll_1,
            nn.ReLU(),
            self.ll_2,
            nn.ReLU(),
            self.ll_3,
            nn.ReLU(),
        )

    def forward(self, x) -> torch.Tensor:
        """Forward pass of the model.

        :param x: Input tensor.
        :type x: torch.Tensor

        :return: Output tensor.
        :rtype: torch.Tensor
        """
        return self.fc(x)
input_size, output_size = 4, 4

model = SingleInputModel(input_size, output_size)
torch_input = torch.rand(10, input_size, generator=torch.manual_seed(777))

2.1. Evaluating the model

model(torch_input)
tensor([[2789.9216, 2789.9216, 2789.9216, 2789.9216],
        [6123.4058, 6123.4058, 6123.4058, 6123.4058],
        [4084.3904, 4084.3904, 4084.3904, 4084.3904],
        [5028.5454, 5028.5454, 5028.5454, 5028.5454],
        [6144.1240, 6144.1240, 6144.1240, 6144.1240],
        [3597.1663, 3597.1663, 3597.1663, 3597.1663],
        [6529.6514, 6529.6514, 6529.6514, 6529.6514],
        [2993.0212, 2993.0212, 2993.0212, 2993.0212],
        [5866.2710, 5866.2710, 5866.2710, 5866.2710],
        [4912.6562, 4912.6562, 4912.6562, 4912.6562]], grad_fn=<ReluBackward0>)

3. Converting PyTorch model to ONNX format

In this section, you will convert the created PyTorch model to an ONNX format model. For multi-input or multi-output models, ensure that input_names, output_names, dynamic_axes, and the example input data are properly adjusted to account for all relevant inputs and outputs (see the sketch at the end of section 3.1). More information can be found here.

onnx_model_name = "pytorch_model.onnx"

3.1. Exporting the Model with torch.onnx.export

Below, we use torch.onnx.export to save the model in ONNX format. This approach is compatible with dynamic batch sizes and sets the model’s input and output names.

torch.onnx.export(model,
                  torch_input,
                  onnx_model_name,
                  verbose=False,
                  export_params=True,
                  keep_initializers_as_inputs=True,
                  input_names=['input'],    # the model's input names
                  output_names=['output'],  # the model's output names
                  dynamic_axes={'input': {0: 'batch_size'},    # enables dynamic input size
                                'output': {0: 'batch_size'}})

print(f"ONNX model has been saved as {onnx_model_name}")
ONNX model has been saved as pytorch_model.onnx
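
For illustration, if your model took two inputs, the export call could be adjusted along these lines. This is a minimal sketch: multi_input_model, example_x1, and example_x2 are hypothetical stand-ins for your own model and example tensors.

# Hypothetical two-input model whose forward signature is forward(self, x1, x2);
# the tuple of example inputs must match that signature.
torch.onnx.export(multi_input_model,
                  (example_x1, example_x2),
                  "multi_input_model.onnx",
                  export_params=True,
                  input_names=['input_1', 'input_2'],
                  output_names=['output'],
                  dynamic_axes={'input_1': {0: 'batch_size'},
                                'input_2': {0: 'batch_size'},
                                'output': {0: 'batch_size'}})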

3.2. (Beta) Exporting with torch.onnx.dynamo_export (PyTorch 2.0+)

For users with PyTorch 2.0 and above, the torch.onnx.dynamo_export method is available. This feature, still in beta, leverages the onnxscript library and provides an alternative method for model export. To use this exporter, change the Markdown cell below to a Code cell and remove the triple backticks (```).

!pip install onnxscript | tail -n 1

onnx_program = torch.onnx.dynamo_export(model, torch_input)
onnx_program.save(onnx_model_name)

3.3. Evaluating the ONNX Model

After exporting the model, we should verify its integrity and ensure that it functions as expected. We will use onnxruntime to load the model and perform inference on the test data. Additionally, we’ll use onnx's checker module to validate the exported ONNX model.

onnx_model = onnx.load(onnx_model_name)
onnx.checker.check_model(onnx_model)
ort.set_default_logger_severity(3)

session = ort.InferenceSession(onnx_model_name)
input_data = {session.get_inputs()[0].name: torch_input.numpy()}
output = session.run([], input_data)
print(output)
[array([[2789.9216, 2789.9216, 2789.9216, 2789.9216],
       [6123.406 , 6123.406 , 6123.406 , 6123.406 ],
       [4084.3904, 4084.3904, 4084.3904, 4084.3904],
       [5028.5454, 5028.5454, 5028.5454, 5028.5454],
       [6144.124 , 6144.124 , 6144.124 , 6144.124 ],
       [3597.1663, 3597.1663, 3597.1663, 3597.1663],
       [6529.6514, 6529.6514, 6529.6514, 6529.6514],
       [2993.0212, 2993.0212, 2993.0212, 2993.0212],
       [5866.271 , 5866.271 , 5866.271 , 5866.271 ],
       [4912.6562, 4912.6562, 4912.6562, 4912.6562]], dtype=float32)]

As you can see, the predicted values are consistent with those calculated in the evaluation section.
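
If you prefer to check this programmatically rather than by eye, here is a minimal sketch using numpy (assuming model, torch_input, and output from the cells above):

import numpy as np

# Compare the PyTorch output with the ONNX Runtime output numerically.
torch_output = model(torch_input).detach().numpy()
np.testing.assert_allclose(torch_output, output[0], rtol=1e-3, atol=1e-5)
print("PyTorch and ONNX Runtime outputs match within tolerance.")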

4. Persisting converted ONNX model

In this section, you will learn how to store your converted ONNX model in watsonx.ai Runtime repository using the IBM watsonx.ai Runtime SDK.

4.1. Publishing model in watsonx.ai Runtime repository

Define model name, type and software spec.

software_spec_id = client.software_specifications.get_id_by_name("onnxruntime_opset_19")
onnx_model_zip = "pytorch_onnx.zip"
!zip {onnx_model_zip} {onnx_model_name}
adding: pytorch_model.onnx (deflated 97%)
metadata = {
    client.repository.ModelMetaNames.NAME: 'PyTorch to ONNX converted model',
    client.repository.ModelMetaNames.TYPE: 'onnxruntime_1.16',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: software_spec_id
}

published_model = client.repository.store_model(
    model=onnx_model_zip,
    meta_props=metadata
)

4.2. Getting model details

published_model_id = client.repository.get_model_id(published_model)
model_details = client.repository.get_details(published_model_id)
print(json.dumps(model_details, indent=2))

5. Deploying and scoring ONNX model

In this section you'll learn how to create an online scoring service and predict on unseen data.

5.1. Creating online deployment for published model

metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "Deployment of PyTorch to ONNX converted model",
    client.deployments.ConfigurationMetaNames.ONLINE: {}
}

created_deployment = client.deployments.create(published_model_id, meta_props=metadata)
######################################################################################
Synchronous deployment creation for id: '3bf730f5-345a-4474-9f39-d3c8ffe7b82a' started
######################################################################################

initializing
Note: online_url and serving_urls are deprecated and will be removed in a future release. Use inference instead.
.
ready

-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='7f6044ad-2aa6-44ea-946b-53c359b4fa12'
-----------------------------------------------------------------------------------------------
deployment_id = client.deployments.get_id(created_deployment)

Now you can print an online scoring endpoint.

client.deployments.get_scoring_href(created_deployment)

5.2. Getting deployment details

client.deployments.get_details(deployment_id)

5.3. Scoring

You can use the method below to make a test scoring request against the deployed model.

scoring_payload = {"input_data": [{"values": torch_input.tolist()}]}

Use client.deployments.score() method to run scoring.

predictions = client.deployments.score(deployment_id, scoring_payload)

Let's print the result of predictions.

print(json.dumps(predictions, indent=2))
{ "predictions": [ { "id": "output", "values": [ [ 2789.921630859375, 2789.921630859375, 2789.921630859375, 2789.921630859375 ], [ 6123.40576171875, 6123.40576171875, 6123.40576171875, 6123.40576171875 ], [ 4084.390380859375, 4084.390380859375, 4084.390380859375, 4084.390380859375 ], [ 5028.54541015625, 5028.54541015625, 5028.54541015625, 5028.54541015625 ], [ 6144.1240234375, 6144.1240234375, 6144.1240234375, 6144.1240234375 ], [ 3597.166259765625, 3597.166259765625, 3597.166259765625, 3597.166259765625 ], [ 6529.6513671875, 6529.6513671875, 6529.6513671875, 6529.6513671875 ], [ 2993.021240234375, 2993.021240234375, 2993.021240234375, 2993.021240234375 ], [ 5866.27099609375, 5866.27099609375, 5866.27099609375, 5866.27099609375 ], [ 4912.65625, 4912.65625, 4912.65625, 4912.65625 ] ] } ] }

As you can see, the predicted values are consistent with those calculated in the evaluation section.
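
The same numeric check from section 3.3 can be applied to the deployment's response (a minimal sketch, reusing output from the local ONNX Runtime run):

import numpy as np

# Compare the deployed model's predictions with the local ONNX Runtime output.
deployed_values = np.array(predictions["predictions"][0]["values"], dtype=np.float32)
np.testing.assert_allclose(deployed_values, output[0], rtol=1e-3, atol=1e-5)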

6. Cleaning up

If you want to clean up after the notebook execution, i.e. remove any created assets like:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

please follow this sample notebook, or use the minimal sketch below for the assets created here.
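
A minimal cleanup sketch for this notebook's assets (assuming the SDK's delete methods):

# Remove only the deployment and model created in this notebook.
client.deployments.delete(deployment_id)
client.repository.delete(published_model_id)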

7. Summary and next steps

You successfully completed this notebook! You learned how to use ONNX and the PyTorch machine learning library, as well as watsonx.ai Runtime for model creation and deployment. Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Michał Koruszowic, Software Engineer

Copyright © 2024-2025 IBM. This notebook and its source code are released under the terms of the MIT License.