GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cloud/notebooks/python_sdk/deployments/pmml/Use PMML and Batch Deployment to predict iris species.ipynb
Kernel: Python 3 (ipykernel)

Use PMML and Batch Deployments to predict iris species with ibm-watsonx-ai

This notebook walks through the steps from storing a sample PMML model to scoring new data with a batch deployment.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

You will use the Iris data set, which contains measurements of the iris perianth (sepals and petals). You will use these measurements to predict the iris species.
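For orientation, each scoring record in this notebook consists of four measurement fields. A minimal sketch of what one record looks like as plain Python data (field names match the sample PMML model's expected inputs, shown later in the scoring payload):

```python
# One Iris record, as used later in the scoring payload.
record_fields = ['Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width']
record_values = [5.1, 3.5, 1.4, 0.2]  # measurements in centimetres

# Pair field names with values for readability
record = dict(zip(record_fields, record_values))
print(record)
# {'Sepal.Length': 5.1, 'Sepal.Width': 3.5, 'Petal.Length': 1.4, 'Petal.Width': 0.2}
```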

Learning goals

The learning goals of this notebook are:

  • Working with a watsonx.ai Runtime instance

  • Batch deployment of a PMML model

  • Scoring the deployed model

Contents

This notebook contains the following parts:

  1. Setup

  2. Model upload

  3. Batch deployment creation

  4. Scoring

  5. Clean up

  6. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

Install and import the ibm-watsonx-ai package and its dependencies.

Note: ibm-watsonx-ai documentation can be found here.

!pip install wget
!pip install -U ibm-watsonx-ai | tail -n 1

Connection to watsonx.ai Runtime

Authenticate the watsonx.ai Runtime service on IBM Cloud. You need to provide the platform api_key and the instance location.

You can use the IBM Cloud CLI to retrieve the platform API key and the instance location.

API Key can be generated in the following way:

ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME

Copy the value of api_key from the output.

The location of your watsonx.ai Runtime instance can be retrieved as follows:

ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance INSTANCE_NAME

Copy the value of location from the output.

Tip: Your Cloud API key can be generated by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name, click Create, then copy the created key and paste it below. You can also get a service-specific URL by going to the Endpoint URLs section of the watsonx.ai Runtime docs. You can check your instance location in your watsonx.ai Runtime service instance details.

You can also get a service-specific API key by going to the Service IDs section of the Cloud console. From that page, click Create, then copy the created key and paste it below.

Action: Enter your api_key and location in the following cell.

api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'PASTE YOUR INSTANCE LOCATION HERE'
from ibm_watsonx_ai import Credentials

credentials = Credentials(
    api_key=api_key,
    url='https://' + location + '.ml.cloud.ibm.com'
)
from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Working with spaces

First, you need to create a space that will be used for your work. If you do not already have a space, you can use the Deployment Spaces Dashboard to create one.

  • Click New Deployment Space

  • Create an empty space

  • Select Cloud Object Storage

  • Select watsonx.ai Runtime instance and press Create

  • Copy space_id and paste it below

Tip: You can also use the SDK to prepare the space for your work. More information can be found here.

Action: Assign space ID below

space_id = 'PASTE YOUR SPACE ID HERE'

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

To interact with all resources available in watsonx.ai Runtime, you need to set the space you will be using as the default.

client.set.default_space(space_id)
'SUCCESS'

2. Upload model

In this section you will learn how to upload the model to the Cloud.

Action: Download the sample PMML model from the Git project using wget.

import os
from wget import download

sample_dir = 'pmml_sample_model'
if not os.path.isdir(sample_dir):
    os.mkdir(sample_dir)

filename = os.path.join(sample_dir, 'iris_chaid.xml')
if not os.path.isfile(filename):
    filename = download('https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/models/pmml/iris-species/model/iris_chaid.xml', out=sample_dir)

Store the downloaded file in the watsonx.ai Runtime repository.

sw_spec_id = client.software_specifications.get_id_by_name("pmml-3.0_4.3")

meta_props = {
    client.repository.ModelMetaNames.NAME: "pmmlmodel",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
    client.repository.ModelMetaNames.TYPE: 'pmml_4.2.1'
}
published_model = client.repository.store_model(model=filename, meta_props=meta_props)

Note: You can list the models to confirm that the model was successfully stored in the watsonx.ai Runtime service.

client.repository.list_models()

3. Create batch deployment

You can use the command below to create a batch deployment for the stored model.

model_id = client.repository.get_model_id(published_model)

deployment = client.deployments.create(
    artifact_id=model_id,
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "Sample PMML Batch deployment",
        client.deployments.ConfigurationMetaNames.BATCH: {},
        client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
            "name": "S",
            "num_nodes": 1
        }
    }
)
######################################################################################
Synchronous deployment creation for id: '53d11a5c-ebc9-46a6-9064-876b4266fab5' started
######################################################################################

ready.

-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='8e068ff0-2498-4740-b3bb-b5400990ce11'
-----------------------------------------------------------------------------------------------

Batch deployment has been created.

You can now retrieve your deployment ID.

deployment_id = client.deployments.get_id(deployment)

You can also list all deployments in your space.

client.deployments.list()

If you want additional information about your deployment, you can retrieve it as shown below.

client.deployments.get_details(deployment_id)
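If you only need a couple of fields, you can pull them out of the returned details dictionary directly. A minimal sketch, assuming the response layout shown in this notebook's outputs (the exact keys may vary between service versions):

```python
def summarize_deployment(details):
    """Extract the deployment name and state from a deployment-details dict."""
    name = details.get('metadata', {}).get('name')
    state = details.get('entity', {}).get('status', {}).get('state')
    return {'name': name, 'state': state}

# Example with a trimmed-down details dict (shape assumed for illustration):
sample_details = {
    'metadata': {'name': 'Sample PMML Batch deployment'},
    'entity': {'status': {'state': 'ready'}},
}
print(summarize_deployment(sample_details))
# {'name': 'Sample PMML Batch deployment', 'state': 'ready'}
```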

4. Scoring

You can send new scoring records to the batch deployment by creating a job.

job_payload_ref = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [
        {
            'fields': ['Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width'],
            'values': [[5.1, 3.5, 1.4, 0.2]]
        }
    ]
}

job = client.deployments.create_job(deployment_id, meta_props=job_payload_ref)

Your job has now been submitted to the runtime.

You can now retrieve your job ID.

job_id = client.deployments.get_job_id(job)

You can also list all jobs in your space.

client.deployments.list_jobs()

If you want additional information about your job, you can retrieve it as shown below.

client.deployments.get_job_details(job_id)

Monitor job execution

Here you can check the status of your batch scoring job.

import time

elapsed_time = 0
while client.deployments.get_job_status(job_id).get('state') != 'completed' and elapsed_time < 300:
    print(f"Current state: {client.deployments.get_job_status(job_id).get('state')}")
    elapsed_time += 10
    time.sleep(10)

if client.deployments.get_job_status(job_id).get('state') == 'completed':
    print(f"Current state: {client.deployments.get_job_status(job_id).get('state')}")
    job_details_do = client.deployments.get_job_details(job_id)
    print(job_details_do)
else:
    print("Job hasn't completed successfully in 5 minutes.")
Current state: queued
Current state: completed
{'entity': {'deployment': {'id': '8e068ff0-2498-4740-b3bb-b5400990ce11'}, 'platform_job': {'job_id': '80cf9460-62f6-4ed6-98ac-01e55795b839', 'run_id': 'c7abfdc1-6f16-43f0-ad40-f91a6a8e6011'}, 'scoring': {'input_data': [{'fields': ['Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width'], 'values': [[5.1, 3.5, 1.4, 0.2]]}], 'predictions': [{'fields': ['$R-Species', '$RC-Species', '$RP-Species', '$RP-setosa', '$RP-versicolor', '$RP-virginica', '$RI-Species'], 'values': [['setosa', 1.0, 1.0, 1.0, 0.0, 0.0, '1']]}], 'status': {'completed_at': '2024-07-29T07:42:06.000Z', 'running_at': '2024-07-29T07:42:05.000Z', 'state': 'completed'}}}, 'metadata': {'created_at': '2024-07-29T07:41:49.913Z', 'id': '5620d21d-17b8-4511-b2fe-377da5e70dd3', 'modified_at': '2024-07-29T07:42:06.447Z', 'name': 'name_d4b971f8-5a30-4303-a9e4-c222b7e71238', 'space_id': '93ee84d1-b7dd-42b4-b2ca-121bc0c86315'}}

Get scored data

import json

print(json.dumps(client.deployments.get_job_details(job_id), indent=2))
{
  "entity": {
    "deployment": {
      "id": "8e068ff0-2498-4740-b3bb-b5400990ce11"
    },
    "platform_job": {
      "job_id": "80cf9460-62f6-4ed6-98ac-01e55795b839",
      "run_id": "c7abfdc1-6f16-43f0-ad40-f91a6a8e6011"
    },
    "scoring": {
      "input_data": [
        {
          "fields": [
            "Sepal.Length",
            "Sepal.Width",
            "Petal.Length",
            "Petal.Width"
          ],
          "values": [
            [5.1, 3.5, 1.4, 0.2]
          ]
        }
      ],
      "predictions": [
        {
          "fields": [
            "$R-Species",
            "$RC-Species",
            "$RP-Species",
            "$RP-setosa",
            "$RP-versicolor",
            "$RP-virginica",
            "$RI-Species"
          ],
          "values": [
            ["setosa", 1.0, 1.0, 1.0, 0.0, 0.0, "1"]
          ]
        }
      ],
      "status": {
        "completed_at": "2024-07-29T07:42:06.000Z",
        "running_at": "2024-07-29T07:42:05.000Z",
        "state": "completed"
      }
    }
  },
  "metadata": {
    "created_at": "2024-07-29T07:41:49.913Z",
    "id": "5620d21d-17b8-4511-b2fe-377da5e70dd3",
    "modified_at": "2024-07-29T07:42:06.447Z",
    "name": "name_d4b971f8-5a30-4303-a9e4-c222b7e71238",
    "space_id": "93ee84d1-b7dd-42b4-b2ca-121bc0c86315"
  }
}

As we can see, the model predicts that this is an Iris setosa flower.
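The predicted label and the per-class probabilities can also be read back out of the job details programmatically. A small helper based on the fields/values layout shown in the output above, where the $RP-&lt;class&gt; columns carry the per-class probabilities (the helper and variable names here are illustrative, not part of the SDK):

```python
def parse_prediction(job_details):
    """Return the predicted species and per-class probabilities
    from a completed scoring job's details dict."""
    pred = job_details['entity']['scoring']['predictions'][0]
    row = dict(zip(pred['fields'], pred['values'][0]))
    species = row['$R-Species']
    # Keep only the per-class probability columns, e.g. '$RP-setosa'
    probabilities = {f.replace('$RP-', ''): v
                     for f, v in row.items()
                     if f.startswith('$RP-') and f != '$RP-Species'}
    return species, probabilities

# Example with the prediction shown in the output above:
job_details = {'entity': {'scoring': {'predictions': [{
    'fields': ['$R-Species', '$RC-Species', '$RP-Species',
               '$RP-setosa', '$RP-versicolor', '$RP-virginica', '$RI-Species'],
    'values': [['setosa', 1.0, 1.0, 1.0, 0.0, 0.0, '1']]}]}}}

species, probs = parse_prediction(job_details)
print(species, probs)
# setosa {'setosa': 1.0, 'versicolor': 0.0, 'virginica': 0.0}
```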

5. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

please follow this sample notebook.
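If you only want to remove the two assets created in this notebook from code, a minimal sketch is two delete calls, wrapped in a function for reuse. The deployment is deleted first, since a model that is still referenced by a deployment typically cannot be removed (method names as in ibm-watsonx-ai; verify against your SDK version):

```python
def clean_up(client, deployment_id, model_id):
    """Delete the batch deployment, then the stored model.

    The deployment goes first so that the model is no longer
    referenced when it is deleted.
    """
    client.deployments.delete(deployment_id)
    client.repository.delete(model_id)
```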

6. Summary and next steps

You successfully completed this notebook! You learned how to use watsonx.ai Runtime for PMML model deployment and scoring. Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Jan Sołtysik, Software Engineer at IBM.

Mateusz Szewczyk, Software Engineer at watsonx.ai

Copyright © 2020-2025 IBM. This notebook and its source code are released under the terms of the MIT License.