
Use PMML to predict iris species with ibm-watson-machine-learning

This notebook walks through the steps from storing a sample PMML model to scoring new data with a batch deployment.

Some familiarity with Python is helpful. This notebook uses Python 3.10.

You will use the Iris data set, which contains measurements of iris perianths. You will use these measurements to predict the iris species.

Learning goals

The learning goals of this notebook are:

  • Working with the WML instance

  • Batch deployment of a PMML model

  • Scoring of the deployed model

Contents

This notebook contains the following parts:

  1. Setup

  2. Model upload

  3. Batch deployment creation

  4. Scoring

  5. Clean up

  6. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask them for your account credentials

Connection to WML

Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform url, your username, and your api_key.

username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API_KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'

wml_credentials = {
    "username": username,
    "apikey": api_key,
    "url": url,
    "instance_id": 'openshift',
    "version": '4.7'
}

Alternatively, you can use your username and password to authenticate WML services.

wml_credentials = {
    "username": ***,
    "password": ***,
    "url": ***,
    "instance_id": 'openshift',
    "version": '4.7'
}

Install and import the ibm-watson-machine-learning package

Note: ibm-watson-machine-learning documentation can be found here.

!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient

client = APIClient(wml_credentials)

Working with spaces

First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to space Settings tab

  • Copy space_id and paste it below

Tip: You can also use the SDK to prepare the space for your work, as in the sketch below. More information can be found here.
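For example, a minimal sketch of creating a space via the SDK might look like the following (the space name is just an illustrative placeholder, and the exact metadata your cluster requires may differ):

# Minimal sketch: create a deployment space via the SDK (name is a placeholder).
space_meta_props = {
    client.spaces.ConfigurationMetaNames.NAME: "pmml_batch_demo_space"
}
space_details = client.spaces.store(meta_props=space_meta_props)

# Retrieve the new space ID so it can be used in the next steps.
space_id = client.spaces.get_id(space_details)
print(space_id)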

Action: Assign space ID below

space_id = 'PASTE YOUR SPACE ID HERE'

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

To be able to interact with all resources available in Watson Machine Learning, you need to set the space which you will be using.

client.set.default_space(space_id)
'SUCCESS'

2. Upload model

In this section you will learn how to upload the model to the Cloud.

Action: Download the sample PMML model from the git project using wget.

import os
from wget import download

sample_dir = 'pmml_sample_model'
if not os.path.isdir(sample_dir):
    os.mkdir(sample_dir)

filename = os.path.join(sample_dir, 'iris_chaid.xml')
if not os.path.isfile(filename):
    filename = download('https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cpd4.6/models/pmml/iris-species/model/iris_chaid.xml', out=sample_dir)

Store the downloaded file in the Watson Machine Learning repository.

sw_spec_uid = client.software_specifications.get_uid_by_name("pmml-3.0_4.3")

meta_props = {
    client.repository.ModelMetaNames.NAME: "pmmlmodel",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid,
    client.repository.ModelMetaNames.TYPE: 'pmml_4.2.1'
}

published_model = client.repository.store_model(model=filename, meta_props=meta_props)

Note: You can see that the model is successfully stored in the Watson Machine Learning service.

client.repository.list_models()

3. Create batch deployment

You can use the command below to create a batch deployment for the stored model.

model_id = client.repository.get_model_id(published_model)

deployment = client.deployments.create(
    artifact_uid=model_id,
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "Sample PMML Batch deployment",
        client.deployments.ConfigurationMetaNames.BATCH: {},
        client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
            "name": "S",
            "num_nodes": 1
        }
    }
)
#######################################################################################

Synchronous deployment creation for uid: '77ca8d9f-110e-48b6-9d08-471a579c153b' started

#######################################################################################

ready.

------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='0614f9da-5435-43d0-8e10-2ee9c4a17a8a'
------------------------------------------------------------------------------------------------

Batch deployment has been created.

You can now retrieve your deployment ID.

deployment_id = client.deployments.get_id(deployment)

You can also list all deployments in your space.

client.deployments.list()

If you want to get additional information on your deployment, you can do it as below.

client.deployments.get_details(deployment_id)
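The returned details include, among other things, the deployment's status. A minimal sketch for reading it programmatically follows; the 'entity' / 'status' / 'state' key path is assumed here based on the typical shape of WML deployment details, so adjust it if your output differs.

# Sketch: read the deployment state from the details dictionary
# (the 'entity' -> 'status' -> 'state' key path is an assumption).
deployment_details = client.deployments.get_details(deployment_id)
state = deployment_details.get('entity', {}).get('status', {}).get('state')
print(f"Deployment state: {state}")  # e.g. 'ready'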

4. Scoring

You can send new scoring records to the batch deployment by creating a job.

job_payload_ref = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [
        {
            'fields': ['Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width'],
            'values': [[5.1, 3.5, 1.4, 0.2]]
        }
    ]
}

job = client.deployments.create_job(deployment_id, meta_props=job_payload_ref)

Now, your job has been submitted to runtime.

You can now retrieve your job ID.

job_id = client.deployments.get_job_uid(job)

You can also list all jobs in your space.

client.deployments.list_jobs()

If you want to get additional information on your job, you can do it as below.

client.deployments.get_job_details(job_id)

Monitor job execution

Here you can check the status of your batch scoring job.

import time

elapsed_time = 0
while client.deployments.get_job_status(job_id).get('state') != 'completed' and elapsed_time < 300:
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    elapsed_time += 10
    time.sleep(10)

if client.deployments.get_job_status(job_id).get('state') == 'completed':
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    job_details_do = client.deployments.get_job_details(job_id)
    print(job_details_do)
else:
    print("Job hasn't completed successfully in 5 minutes.")
Current state: queued
Current state: completed
{'entity': {'deployment': {'id': '0614f9da-5435-43d0-8e10-2ee9c4a17a8a'}, 'platform_job': {'job_id': '05abbc64-0de7-4c54-9789-9b0bedea5801', 'run_id': '9329426a-6f5d-4950-9ff2-1b3fb7a30dc6'}, 'scoring': {'input_data': [{'fields': ['Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width'], 'values': [[5.1, 3.5, 1.4, 0.2]]}], 'predictions': [{'fields': ['$R-Species', '$RC-Species', '$RP-Species', '$RP-setosa', '$RP-versicolor', '$RP-virginica', '$RI-Species'], 'values': [['setosa', 1.0, 1.0, 1.0, 0.0, 0.0, '1']]}], 'status': {'completed_at': '2022-04-05T10:55:05.000Z', 'running_at': '2022-04-05T10:55:05.000Z', 'state': 'completed'}}}, 'metadata': {'created_at': '2022-04-05T10:54:52.841Z', 'id': 'ee090eff-10ca-42ab-9641-71ac1e778f48', 'modified_at': '2022-04-05T10:55:05.740Z', 'name': 'name_330848e1-cc6c-4ef6-b937-a308a0f4cd73', 'space_id': '680a7515-620c-461f-9c6f-1f4c535bfc47'}}

Get scored data

import json

print(json.dumps(client.deployments.get_job_details(job_id), indent=2))
{ "entity": { "deployment": { "id": "0614f9da-5435-43d0-8e10-2ee9c4a17a8a" }, "platform_job": { "job_id": "05abbc64-0de7-4c54-9789-9b0bedea5801", "run_id": "9329426a-6f5d-4950-9ff2-1b3fb7a30dc6" }, "scoring": { "input_data": [ { "fields": [ "Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width" ], "values": [ [ 5.1, 3.5, 1.4, 0.2 ] ] } ], "predictions": [ { "fields": [ "$R-Species", "$RC-Species", "$RP-Species", "$RP-setosa", "$RP-versicolor", "$RP-virginica", "$RI-Species" ], "values": [ [ "setosa", 1.0, 1.0, 1.0, 0.0, 0.0, "1" ] ] } ], "status": { "completed_at": "2022-04-05T10:55:05.000Z", "running_at": "2022-04-05T10:55:05.000Z", "state": "completed" } } }, "metadata": { "created_at": "2022-04-05T10:54:52.841Z", "id": "ee090eff-10ca-42ab-9641-71ac1e778f48", "modified_at": "2022-04-05T10:55:05.740Z", "name": "name_330848e1-cc6c-4ef6-b937-a308a0f4cd73", "space_id": "680a7515-620c-461f-9c6f-1f4c535bfc47" } }

As we can see, this is an Iris setosa flower.
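If you want to extract just the predicted label programmatically, a minimal sketch based on the job details shown above could look like this (the key path follows that output):

# Sketch: pull the predicted species out of the scored job details
# (the 'entity' -> 'scoring' -> 'predictions' path matches the output above).
job_details = client.deployments.get_job_details(job_id)
predictions = job_details['entity']['scoring']['predictions'][0]
predicted_species = predictions['values'][0][0]
print(predicted_species)  # -> 'setosa'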

5. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

please follow this sample notebook. A minimal sketch for removing just the assets created in this notebook is shown below.
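For the deployment and model created earlier, a minimal clean-up sketch might be the following (assuming deployment_id and model_id are still defined from the steps above):

# Sketch: delete only the assets created in this notebook.
client.deployments.delete(deployment_id)  # remove the batch deployment
client.repository.delete(model_id)        # remove the stored PMML model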

6. Summary and next steps

You successfully completed this notebook! You learned how to use Watson Machine Learning for PMML model deployment and scoring.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Jan Sołtysik, Software Engineer at IBM.

Copyright © 2020-2025 IBM. This notebook and its source code are released under the terms of the MIT License.