
Use PMML to predict iris species with ibm-watsonx-ai

This notebook walks through the steps from storing a sample PMML model to scoring new data using online and batch deployments.

Some familiarity with Python is helpful. This notebook uses Python 3.12.

You will use an Iris data set, which contains measurements of iris perianths (sepals and petals). You will use these measurements to predict the iris species.

Learning goals

The learning goals of this notebook are:

  • Working with the watsonx.ai Runtime instance

  • Online and batch deployment of a PMML model

  • Scoring of the deployed model

Contents

This notebook contains the following parts:

  1. Setup

  2. Model upload

  3. Deployment creation

  4. Scoring

  5. Clean up

  6. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask them for your account credentials

Install dependencies

Note: ibm-watsonx-ai documentation can be found here.

%pip install -U wget | tail -n 1
%pip install -U ibm-watsonx-ai | tail -n 1
Successfully installed wget-3.2
Successfully installed anyio-4.9.0 certifi-2025.4.26 charset-normalizer-3.4.2 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 ibm-cos-sdk-2.14.0 ibm-cos-sdk-core-2.14.0 ibm-cos-sdk-s3transfer-2.14.0 ibm-watsonx-ai-1.3.13 idna-3.10 jmespath-1.0.1 lomond-0.3.3 numpy-2.2.5 pandas-2.2.3 pytz-2025.2 requests-2.32.2 sniffio-1.3.1 tabulate-0.9.0 typing_extensions-4.13.2 tzdata-2025.2 urllib3-2.4.0

Define credentials

Authenticate with the watsonx.ai Runtime service on IBM Cloud Pak for Data. You need to provide the admin's username and the platform URL.

username = "PASTE YOUR USERNAME HERE" url = "PASTE THE PLATFORM URL HERE"

Use the admin's api_key to authenticate watsonx.ai Runtime services:

import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=getpass.getpass("Enter your watsonx.ai API key and hit enter: "),
    url=url,
    instance_id="openshift",
    version="5.2",
)

Alternatively, you can use the admin's password:

import getpass

from ibm_watsonx_ai import Credentials

if "credentials" not in locals() or not credentials.api_key:
    credentials = Credentials(
        username=username,
        password=getpass.getpass("Enter your watsonx.ai password and hit enter: "),
        url=url,
        instance_id="openshift",
        version="5.2",
    )

Create APIClient instance

from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Working with spaces

First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to space Settings tab

  • Copy space_id and paste it below

Tip: You can also use the SDK to prepare the space for your work. More information can be found here.
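For example, a minimal sketch of creating a space through the SDK might look like the following; the space name and description are placeholders, and the exact metadata your cluster requires may differ:

space_metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "pmml_sample_space",  # placeholder name
    client.spaces.ConfigurationMetaNames.DESCRIPTION: "Space for the PMML iris sample",
}

space_details = client.spaces.store(meta_props=space_metadata)
space_id = client.spaces.get_id(space_details)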

Action: Assign space ID below

space_id = "PASTE YOUR SPACE ID HERE"

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

To be able to interact with all of the resources available in watsonx.ai, you need to set the space that you will be using.

client.set.default_space(space_id)
'SUCCESS'

2. Upload model

In this section you will learn how to upload the model to the watsonx.ai repository.

Action: Download the sample PMML model from the git project using wget.

import os

from wget import download

sample_dir = "pmml_sample_model"
if not os.path.isdir(sample_dir):
    os.mkdir(sample_dir)

filename = os.path.join(sample_dir, "iris_chaid.xml")
if not os.path.isfile(filename):
    filename = download(
        "https://raw.githubusercontent.com/IBM/watsonx-ai-samples/master/cpd5.2/models/pmml/iris-species/model/iris_chaid.xml",
        out=sample_dir,
    )

Store the downloaded file in the watsonx.ai repository.

sw_spec_uid = client.software_specifications.get_uid_by_name("pmml-3.0_4.3")

meta_props = {
    client.repository.ModelMetaNames.NAME: "pmmlmodel",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid,
    client.repository.ModelMetaNames.TYPE: "pmml_4.2.1",
}
published_model = client.repository.store_model(model=filename, meta_props=meta_props)

Note: You can see that the model is successfully stored in watsonx.ai.

client.repository.list_models()

3. Create deployments

In this step, we will create both an online deployment and a batch deployment of the PMML model. Depending on your use case, only one of these two deployments might be necessary. You can learn more about batch deployments here.

Online deployment

You can use the command below to create an online deployment for the stored model (web service).

model_id = client.repository.get_model_id(published_model)

online_deployment = client.deployments.create(
    artifact_uid=model_id,
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "Sample PMML Online deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {},
    },
)
######################################################################################
Synchronous deployment creation for id: 'f06faf9c-eb7c-4604-b745-11aa7f3da9d2' started
######################################################################################

initializing
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
......
ready

-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='9eed99e9-0938-4de0-b3f4-766e2c8264ed'
-----------------------------------------------------------------------------------------------

You can now retrieve your online deployment ID.

online_deployment_id = client.deployments.get_id(online_deployment)

You can also list all deployments in your space.

client.deployments.list()

If you want to get additional information about your deployment, you can retrieve it as shown below.

client.deployments.get_details(online_deployment_id)
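The deployment log above mentions serving_urls. Assuming the details dictionary exposes them under entity.status (the exact layout may vary between releases), you could extract the scoring endpoint of the online deployment like this:

deployment_details = client.deployments.get_details(online_deployment_id)

# Scoring endpoint(s) of the online deployment; the field path is an assumption
# based on the deprecation note in the deployment creation log above.
serving_urls = deployment_details["entity"]["status"].get("serving_urls", [])
print(serving_urls)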

Batch deployment

You can use the command below to create a batch deployment for the stored model.

model_id = client.repository.get_model_id(published_model)

batch_deployment = client.deployments.create(
    artifact_uid=model_id,
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "Sample PMML Batch deployment",
        client.deployments.ConfigurationMetaNames.BATCH: {},
        client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
            "name": "S",
            "num_nodes": 1,
        },
    },
)
######################################################################################
Synchronous deployment creation for id: 'f06faf9c-eb7c-4604-b745-11aa7f3da9d2' started
######################################################################################

ready.

-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='0855f464-e1db-4a99-b25f-902460cf298f'
-----------------------------------------------------------------------------------------------

You can now retrieve your batch deployment ID.

batch_deployment_id = client.deployments.get_id(batch_deployment)

You can also list all deployments in your space.

client.deployments.list()

If you want to get additional information about your deployment, you can retrieve it as shown below.

client.deployments.get_details(batch_deployment_id)
{'entity': {'asset': {'id': 'f06faf9c-eb7c-4604-b745-11aa7f3da9d2'}, 'batch': {}, 'chat_enabled': False, 'custom': {}, 'deployed_asset_type': 'model', 'hardware_spec': {'id': 'e7ed1d6c-2e89-42d7-aed5-863b972c1d2b', 'name': 'S', 'num_nodes': 1}, 'name': 'Sample PMML Batch deployment', 'space_id': '8a13841b-df99-4b4d-bf2a-161ad2e33980', 'status': {'state': 'ready'}}, 'metadata': {'created_at': '2025-05-13T12:12:20.581Z', 'id': '0855f464-e1db-4a99-b25f-902460cf298f', 'modified_at': '2025-05-13T12:12:20.581Z', 'name': 'Sample PMML Batch deployment', 'owner': '1000331001', 'space_id': '8a13841b-df99-4b4d-bf2a-161ad2e33980'}}

4. Scoring

In this step, we will score the model through the recently created deployments.

import json

meta_props = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [
        {
            "fields": ["Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width"],
            "values": [[5.1, 3.5, 1.4, 0.2]],
        }
    ]
}

Online deployment scoring

Scoring of online deployments can be performed using the score method.

predictions = client.deployments.score(online_deployment_id, meta_props)

print(json.dumps(predictions, indent=2))
{ "predictions": [ { "fields": [ "$R-Species", "$RC-Species", "$RP-Species", "$RP-setosa", "$RP-versicolor", "$RP-virginica", "$RI-Species" ], "values": [ [ "setosa", 1.0, 1.0, 1.0, 0.0, 0.0, "1" ] ] } ] }

Batch deployment scoring

To score a model through a batch deployment, a job needs to be created.

job = client.deployments.create_job(batch_deployment_id, meta_props=meta_props)

After submitting your job, you can retrieve its ID.

job_id = client.deployments.get_job_id(job)

You can also list all jobs in your space.

client.deployments.list_jobs()

If you want to get additional information about your job, you can retrieve it as shown below.

client.deployments.get_job_details(job_id)
{'entity': {'deployment': {'id': '0855f464-e1db-4a99-b25f-902460cf298f'}, 'platform_job': {'job_id': 'd0d23a0f-f968-42f8-a35a-64dadd01d800', 'run_id': '917b7508-c432-4572-956c-6c3f03080eb7'}, 'scoring': {'input_data': [{'fields': ['Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width'], 'values': [[5.1, 3.5, 1.4, 0.2]]}], 'status': {'completed_at': '', 'running_at': '', 'state': 'queued'}}}, 'metadata': {'created_at': '2025-05-13T12:12:47.216Z', 'id': 'e9c67ed5-2a42-4c29-83a7-49d2107aec7f', 'name': 'name_5c8af886-daed-4f9a-8850-56c48271c167', 'space_id': '8a13841b-df99-4b4d-bf2a-161ad2e33980'}}

Here you can check the status of your batch scoring job.

import time

elapsed_time = 0
while (
    client.deployments.get_job_status(job_id).get("state") != "completed"
    and elapsed_time < 300
):
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    elapsed_time += 10
    time.sleep(10)

if client.deployments.get_job_status(job_id).get("state") == "completed":
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    job_details_do = client.deployments.get_job_details(job_id)
    print(job_details_do)
else:
    print("Job hasn't completed successfully in 5 minutes.")
Current state: queued Current state: queued Current state: completed {'entity': {'deployment': {'id': '0855f464-e1db-4a99-b25f-902460cf298f'}, 'platform_job': {'job_id': 'd0d23a0f-f968-42f8-a35a-64dadd01d800', 'run_id': '917b7508-c432-4572-956c-6c3f03080eb7'}, 'scoring': {'input_data': [{'fields': ['Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width'], 'values': [[5.1, 3.5, 1.4, 0.2]]}], 'predictions': [{'fields': ['$R-Species', '$RC-Species', '$RP-Species', '$RP-setosa', '$RP-versicolor', '$RP-virginica', '$RI-Species'], 'values': [['setosa', 1.0, 1.0, 1.0, 0.0, 0.0, '1']]}], 'status': {'completed_at': '2025-05-13T12:13:15.000Z', 'running_at': '2025-05-13T12:13:15.000Z', 'state': 'completed'}}}, 'metadata': {'created_at': '2025-05-13T12:12:47.216Z', 'id': 'e9c67ed5-2a42-4c29-83a7-49d2107aec7f', 'modified_at': '2025-05-13T12:13:15.701Z', 'name': 'name_5c8af886-daed-4f9a-8850-56c48271c167', 'space_id': '8a13841b-df99-4b4d-bf2a-161ad2e33980'}}

After the job completes, you can retrieve its scoring results.

import json

print(json.dumps(client.deployments.get_job_details(job_id), indent=2))
{ "entity": { "deployment": { "id": "0855f464-e1db-4a99-b25f-902460cf298f" }, "platform_job": { "job_id": "d0d23a0f-f968-42f8-a35a-64dadd01d800", "run_id": "917b7508-c432-4572-956c-6c3f03080eb7" }, "scoring": { "input_data": [ { "fields": [ "Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width" ], "values": [ [ 5.1, 3.5, 1.4, 0.2 ] ] } ], "predictions": [ { "fields": [ "$R-Species", "$RC-Species", "$RP-Species", "$RP-setosa", "$RP-versicolor", "$RP-virginica", "$RI-Species" ], "values": [ [ "setosa", 1.0, 1.0, 1.0, 0.0, 0.0, "1" ] ] } ], "status": { "completed_at": "2025-05-13T12:13:15.000Z", "running_at": "2025-05-13T12:13:15.000Z", "state": "completed" } } }, "metadata": { "created_at": "2025-05-13T12:12:47.216Z", "id": "e9c67ed5-2a42-4c29-83a7-49d2107aec7f", "modified_at": "2025-05-13T12:13:15.701Z", "name": "name_5c8af886-daed-4f9a-8850-56c48271c167", "space_id": "8a13841b-df99-4b4d-bf2a-161ad2e33980" } }

Results examination

As we can see, in both cases the predicted species is Iris setosa.
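If you only need the predicted label, you can pick it out of the batch job details. This sketch assumes the entity.scoring.predictions layout shown in the job output above:

job_details = client.deployments.get_job_details(job_id)

# The first prediction block mirrors the online scoring response.
batch_prediction = job_details["entity"]["scoring"]["predictions"][0]
species_index = batch_prediction["fields"].index("$R-Species")
print(batch_prediction["values"][0][species_index])  # expected: 'setosa'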

5. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

please follow this sample notebook.
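Alternatively, a minimal sketch for removing just the assets created in this notebook (the two deployments and the stored model) could look like this:

# Delete the deployments and the stored model created in this notebook.
client.deployments.delete(online_deployment_id)
client.deployments.delete(batch_deployment_id)
client.repository.delete(model_id)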

6. Summary and next steps

You successfully completed this notebook! You learned how to use watsonx.ai for PMML model deployment and scoring.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Jan Sołtysik, Software Engineer at IBM.

Rafał Chrzanowski, Software Engineer Intern at watsonx.ai.

Copyright © 2020-2025 IBM. This notebook and its source code are released under the terms of the MIT License.