GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cpd5.2/notebooks/python_sdk/deployments/spss/Use SPSS and batch deployment with DB2 to predict customer churn.ipynb
Kernel: watsonx-ai-samples-py-312

Use SPSS and batch deployment with Db2 to predict customer churn with ibm-watsonx-ai

This notebook contains steps to deploy a sample SPSS stream and start batch scoring new data.

Some familiarity with bash is helpful. This notebook uses Python 3.12.

You will use the Telco Customer Churn data set, which contains anonymized customer data from a telecommunications company. You will use these details to predict customer churn, which is critical to business: it is easier to retain existing customers than to acquire new ones.

Learning goals

The learning goals of this notebook are:

  • Loading a CSV file into Db2

  • Working with the Watson Machine Learning instance

  • Batch deployment of an SPSS model

  • Scoring data using the deployed model and a Db2 connection

Contents

This notebook contains the following parts:

  1. Setup

  2. Model upload

  3. Create Db2 connection

  4. Batch deployment creation

  5. Scoring

  6. Clean up

  7. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, contact your Cloud Pak for Data administrator and ask them for your account credentials.

Install dependencies

Note: ibm-watsonx-ai documentation can be found here.

%pip install -U wget | tail -n 1
%pip install -U ibm-watsonx-ai | tail -n 1

Define credentials

Authenticate with the watsonx.ai Runtime service on IBM Cloud Pak for Data. You need to provide the admin's username and the platform URL.

username = "PASTE YOUR USERNAME HERE" url = "PASTE THE PLATFORM URL HERE"

Use the admin's api_key to authenticate watsonx.ai Runtime services:

import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=getpass.getpass("Enter your watsonx.ai API key and hit enter: "),
    url=url,
    instance_id="openshift",
    version="5.2",
)

Alternatively you can use the admin's password:

import getpass

from ibm_watsonx_ai import Credentials

if "credentials" not in locals() or not credentials.api_key:
    credentials = Credentials(
        username=username,
        password=getpass.getpass("Enter your watsonx.ai password and hit enter: "),
        url=url,
        instance_id="openshift",
        version="5.2",
    )

Create APIClient instance

from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Working with spaces

First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to space Settings tab

  • Copy space_id and paste it below

Tip: You can also use the SDK to prepare the space for your work, as shown in the sketch below. More information can be found here.
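
A minimal sketch of creating a space through the SDK, assuming the default storage and compute configuration is acceptable (the space name below is hypothetical):

space_details = client.spaces.store(
    meta_props={client.spaces.ConfigurationMetaNames.NAME: "spss_db2_demo_space"}
)
space_id = client.spaces.get_id(space_details)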

Action: Assign space ID below

space_id = "PASTE YOUR SPACE ID HERE"

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

To be able to interact with all resources available in watsonx.ai, you need to set the space which you will be using.

client.set.default_space(space_id)
'SUCCESS'

2. Upload model

In this section you will learn how to upload the model.

Action: Download the sample SPSS model from the git project using wget.

import os

from wget import download

sample_dir = "spss_sample_model"
if not os.path.isdir(sample_dir):
    os.mkdir(sample_dir)

filename = os.path.join(sample_dir, "db2-customer-satisfaction-prediction.str")
if not os.path.isfile(filename):
    filename = download(
        "https://github.com/IBM/watsonx-ai-samples/raw/master/cpd5.2/models/spss/db2_customer_satisfaction/model/db2-customer-satisfaction-prediction.str",
        out=sample_dir,
    )

print(filename)
spss_sample_model/db2-customer-satisfaction-prediction.str

Store the SPSS sample model in your watsonx.ai instance.

client.software_specifications.list()
sw_spec_id = client.software_specifications.get_id_by_name("spss-modeler_18.2")

model_meta_props = {
    client.repository.ModelMetaNames.NAME: "SPSS customer satisfaction model",
    client.repository.ModelMetaNames.TYPE: "spss-modeler_18.2",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
}

model_details = client.repository.store_model(filename, model_meta_props)

Note: You can see that the model is successfully stored in the Watson Machine Learning service.

client.repository.list_models()
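
If you want to inspect the stored model's metadata, a minimal sketch using the repository API (assuming the model_details object returned above):

# Retrieve the stored model's ID and fetch its full metadata.
model_id = client.repository.get_model_id(model_details)
print(client.repository.get_model_details(model_id))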

3. Create a Db2 connection

You can use the commands below to create a Db2 connection and the required data assets to perform batch scoring.

Create tables in Db2

  • Download the input_score.csv and input_score_2.csv files from the GitHub repository

  • Click the Open the console to get started with Db2 on Cloud icon.

  • Select the Load Data and Desktop load type.

  • Drag and drop the previously downloaded file and click Next.

  • Set the table name to CUSTOMER and proceed with creating it. Repeat the load for input_score_2.csv with the table name CUSTOMER_2 (a programmatic sanity check is sketched after this list).
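
If you prefer to verify the load from Python rather than the console, the sketch below uses the ibm_db driver; the driver itself (%pip install ibm_db) and the placeholder connection values are assumptions, not part of the original notebook:

import ibm_db

# Hypothetical DSN; substitute your own Db2 credentials.
dsn = (
    "DATABASE=<db>;HOSTNAME=<host>;PORT=<port>;PROTOCOL=TCPIP;"
    "UID=<username>;PWD=<password>;"
)
conn = ibm_db.connect(dsn, "", "")

# Sanity check: count the rows loaded into CUSTOMER.
stmt = ibm_db.exec_immediate(conn, "SELECT COUNT(*) FROM CUSTOMER")
print(f"CUSTOMER rows: {ibm_db.fetch_tuple(stmt)[0]}")
ibm_db.close(conn)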

Create a connection

schema_name = "PUT YOUR SCHEMA NAME HERE"
db_name = "db2" input_table_1 = "CUSTOMER" input_table_2 = "CUSTOMER_2" output_table = "OUTPUT"
db_credentials = {
    "db": "***",
    "host": "***",
    "https_url": "***",
    "password": "***",
    "port": "***",
    "username": "***",
}
db2_data_source_type_id = client.connections.get_datasource_type_uid_by_name(db_name)

db2_conn_meta_props = {
    client.connections.ConfigurationMetaNames.NAME: "conn_db2",
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: db2_data_source_type_id,
    client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection using DB2",
    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "database": db_credentials["db"],
        "port": db_credentials["port"],
        "host": db_credentials["host"],
        "password": db_credentials["password"],
        "username": db_credentials["username"],
    },
}

db2_conn_details = client.connections.create(meta_props=db2_conn_meta_props)
Creating connections... SUCCESS
db2_conn_id = client.connections.get_id(db2_conn_details)

Create input connection data assets

db2_asset_meta_props = {
    client.data_assets.ConfigurationMetaNames.NAME: "INPUT_TABLE_1",
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id,
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table",
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: input_table_1,
}

db2_conn_input_asset_details = client.data_assets.store(db2_asset_meta_props)
input_data_1_href = client.data_assets.get_href(db2_conn_input_asset_details)
Creating data asset... SUCCESS
db2_asset_meta_props = {
    client.data_assets.ConfigurationMetaNames.NAME: "INPUT_TABLE_2",
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id,
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table",
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: input_table_2,
}

db2_conn_input_asset_details = client.data_assets.store(db2_asset_meta_props)
input_data_2_href = client.data_assets.get_href(db2_conn_input_asset_details)
Creating data asset... SUCCESS

Create output connection data asset

db2_asset_meta_props = {
    client.data_assets.ConfigurationMetaNames.NAME: "OUTPUT_TABLE",
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: db2_conn_id,
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: "db2 table",
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: output_table,
}

db2_conn_output_asset_details = client.data_assets.store(db2_asset_meta_props)
output_data_href = client.data_assets.get_href(db2_conn_output_asset_details)
Creating data asset... SUCCESS
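
To double-check what was created, you can list the connection and data assets in the space:

# Verify the connection and the three data assets exist in the space.
client.connections.list()
client.data_assets.list()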

4. Create batch deployment

Use the command below to create a batch deployment for the stored model.

model_id = client.repository.get_model_id(model_details)

deployment = client.deployments.create(
    artifact_id=model_id,
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "SPSS BATCH customer satisfaction",
        client.deployments.ConfigurationMetaNames.BATCH: {},
        client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
            "name": "S",
            "num_nodes": 1,
        },
    },
)
#######################################################################################

Synchronous deployment creation for uid: 'b5f984cb-9b46-4884-aa9b-1541efab1faf' started

#######################################################################################

ready.

------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='7ca37c54-19ab-4632-a191-625b3da3c898'
------------------------------------------------------------------------------------------------
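
To confirm the deployment is ready before scoring, a minimal sketch that fetches its details (the entity/status/state path is an assumption about the details payload):

deployment_id = client.deployments.get_id(deployment)
deployment_details = client.deployments.get_details(deployment_id)
print(deployment_details["entity"]["status"]["state"])  # expected: 'ready'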

5. Scoring

You can create a batch job using the methods below.

5.1 Scoring using a data_asset pointing to Db2

job_payload_ref = {
    client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [
        {
            "id": "conn_db2",
            "name": "input_data_1_href",
            "type": "data_asset",
            "connection": {},
            "location": {"href": input_data_1_href},
        },
        {
            "id": "conn_db2",
            "name": "input_data_2_href",
            "type": "data_asset",
            "connection": {},
            "location": {"href": input_data_2_href},
        },
    ],
    client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
        "type": "data_asset",
        "connection": {},
        "location": {"href": output_data_href},
    },
}

deployment_id = client.deployments.get_id(deployment)
job = client.deployments.create_job(deployment_id, meta_props=job_payload_ref)

You can retrieve the job ID.

job_id = client.deployments.get_job_id(job)
Monitor job execution

Here you can check the status of your batch scoring job. When the batch job is completed, the results will be written to a Db2 table.

import time

elapsed_time = 0
while (
    client.deployments.get_job_status(job_id).get("state") != "completed"
    and elapsed_time < 300
):
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    elapsed_time += 10
    time.sleep(10)

if client.deployments.get_job_status(job_id).get("state") == "completed":
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    job_details_do = client.deployments.get_job_details(job_id)
    print(job_details_do)
else:
    print("Job hasn't completed successfully in 5 minutes.")
Current state: queued
Current state: running
Current state: completed
{'entity': {'deployment': {'id': 'ac0dfe9a-6c98-4eeb-967b-120f9234b0e5'}, 'platform_job': {'job_id': 'bfcf37c5-d190-49bd-abca-fb22116323f5', 'run_id': '361b151b-c5f6-48dc-ae1d-776642c9a692'}, 'scoring': {'input_data_references': [{'connection': {}, 'id': 'conn_db2', 'location': {'href': '/v2/assets/04f74392-0b89-4ca6-baa1-157e4295bd8b?space_id=680a7515-620c-461f-9c6f-1f4c535bfc47'}, 'type': 'data_asset'}, {'connection': {}, 'id': 'conn_db2', 'location': {'href': '/v2/assets/d1e1c2eb-d0eb-430f-8707-f948c0b37ba2?space_id=680a7515-620c-461f-9c6f-1f4c535bfc47'}, 'type': 'data_asset'}], 'output_data_reference': {'connection': {}, 'location': {'href': '/v2/assets/8009a976-1218-42b7-81fc-39b77ec225cd?space_id=680a7515-620c-461f-9c6f-1f4c535bfc47'}, 'type': 'data_asset'}, 'status': {'completed_at': '2021-05-13T08:26:06.058Z', 'running_at': '2021-05-13T08:25:58.434Z', 'state': 'completed'}}}, 'metadata': {'created_at': '2021-05-13T08:25:47.071Z', 'id': 'a5d0e0b3-62c5-41f4-879b-41702e16b185', 'modified_at': '2021-05-13T08:26:06.191Z', 'name': 'name_d68b05fd-7866-4a48-af56-235eb3f03809', 'space_id': '680a7515-620c-461f-9c6f-1f4c535bfc47'}}

5.2 Scoring using a connection_asset pointing to Db2

job_payload_ref = {
    client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [
        {
            "id": "conn_db2",
            "name": "input_table_1",
            "type": "connection_asset",
            "connection": {"id": db2_conn_id},
            "location": {"schema_name": schema_name, "file_name": input_table_1},
        },
        {
            "id": "conn_db2",
            "name": "input_table_2",
            "type": "connection_asset",
            "connection": {"id": db2_conn_id},
            "location": {"schema_name": schema_name, "file_name": input_table_2},
        },
    ],
    client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
        "id": "conn_db2",
        "name": "output_table",
        "type": "connection_asset",
        "connection": {"id": db2_conn_id},
        "location": {"schema_name": schema_name, "file_name": output_table},
    },
}

deployment_id = client.deployments.get_id(deployment)
job = client.deployments.create_job(deployment_id, meta_props=job_payload_ref)

Retrieve the job ID.

job_id = client.deployments.get_job_id(job)
Monitor job execution
import time

elapsed_time = 0
while (
    client.deployments.get_job_status(job_id).get("state") != "completed"
    and elapsed_time < 300
):
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    elapsed_time += 10
    time.sleep(10)

if client.deployments.get_job_status(job_id).get("state") == "completed":
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    job_details_do = client.deployments.get_job_details(job_id)
    print(job_details_do)
else:
    print("Job hasn't completed successfully in 5 minutes.")
Current state: queued
Current state: running
Current state: completed
{'entity': {'deployment': {'id': 'ac0dfe9a-6c98-4eeb-967b-120f9234b0e5'}, 'platform_job': {'job_id': 'bfcf37c5-d190-49bd-abca-fb22116323f5', 'run_id': '4d83ddcb-88ca-45ca-8209-446698936b8d'}, 'scoring': {'input_data_references': [{'connection': {'id': 'ae2a374b-85fd-4823-b6f5-95f11662324f'}, 'id': 'conn_db2', 'location': {'file_name': 'CUSTOMER', 'schema_name': 'PUT YOUR SCHEMA NAME HERE'}, 'type': 'connection_asset'}, {'connection': {'id': 'ae2a374b-85fd-4823-b6f5-95f11662324f'}, 'id': 'conn_db2', 'location': {'file_name': 'CUSTOMER_2', 'schema_name': 'PUT YOUR SCHEMA NAME HERE'}, 'type': 'connection_asset'}], 'output_data_reference': {'connection': {'id': 'ae2a374b-85fd-4823-b6f5-95f11662324f'}, 'id': 'conn_db2', 'location': {'file_name': 'OUTPUT', 'schema_name': 'PUT YOUR SCHEMA NAME HERE'}, 'type': 'connection_asset'}, 'status': {'completed_at': '2021-05-13T08:26:43.422Z', 'running_at': '2021-05-13T08:26:35.279Z', 'state': 'completed'}}}, 'metadata': {'created_at': '2021-05-13T08:26:23.663Z', 'id': '77973aa5-fbd0-41b8-9012-a4e2016a57dc', 'modified_at': '2021-05-13T08:26:43.622Z', 'name': 'name_3dc38b68-0227-4422-b7eb-b9bdd3f72d95', 'space_id': '680a7515-620c-461f-9c6f-1f4c535bfc47'}}

Preview scored data

In this subsection you will load the scored data.

Tip: To install requests execute the following command: !pip install requests

import requests
host = db_credentials["https_url"] + "/dbapi/v3"
url = host + "/auth/tokens"

token = requests.post(
    url,
    json={"userid": db_credentials["username"], "password": db_credentials["password"]},
).json()["token"]
Get stored output using Db2 REST API
auth_header = {"Authorization": f"Bearer {token}"}

sql_command = {
    "commands": "SELECT * FROM OUTPUT",
    "limit": 100,
    "separator": ",",
    "stop_on_error": "yes",
}
url = host + "/sql_jobs" jobid = requests.post(url, headers=auth_header, json=sql_command).json()["id"]
resp = requests.get(f"{url}/{jobid}", headers=auth_header)

results = resp.json()["results"][0]
columns = results["columns"]
rows = results["rows"]
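
The SQL job runs asynchronously, so the results may not be available on the first request. A minimal polling sketch, assuming the response carries a status field that reaches "completed":

import time

# Poll the SQL job until the Db2 REST API reports it finished.
while True:
    resp = requests.get(f"{url}/{jobid}", headers=auth_header)
    body = resp.json()
    if body.get("status") == "completed":
        break
    time.sleep(2)

results = body["results"][0]
columns = results["columns"]
rows = results["rows"]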
Preview output using a pandas DataFrame

Tip: To install pandas execute the following command: !pip install pandas

import pandas as pd

pd.DataFrame(data=rows, columns=columns)
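
If you want to keep the preview for later inspection, you can persist it locally (the file name is arbitrary):

df = pd.DataFrame(data=rows, columns=columns)
df.to_csv("scored_output.csv", index=False)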

6. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

please follow this sample notebook, or use the minimal sketch below.
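
A minimal cleanup sketch for just the main assets created in this notebook (the IDs come from variables assigned in earlier cells):

# Remove the deployment, the stored model, and the Db2 connection.
client.deployments.delete(deployment_id)
client.repository.delete(model_id)
client.connections.delete(db2_conn_id)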

7. Summary and next steps

You successfully completed this notebook! You learned how to use Watson Machine Learning for SPSS model deployment and scoring.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Author

Jan Sołtysik, Intern at watsonx.ai.

Rafał Chrzanowski, Software Engineer Intern at watsonx.ai.

Copyright © 2020-2025 IBM. This notebook and its source code are released under the terms of the MIT License.