GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cpd4.5/notebooks/python_sdk/deployments/scikit-learn/Use scikit-learn to recognize hand-written digits.ipynb
Kernel: Python 3 (ipykernel)

Use scikit-learn to recognize hand-written digits with ibm-watson-machine-learning

This notebook contains steps and code to demonstrate how to persist and deploy a locally trained scikit-learn model in the Watson Machine Learning Service. It works with the ibm-watson-machine-learning library, available in the PyPI repository, and introduces commands for getting a model and training data, persisting the model, deploying it, scoring it, updating it, and redeploying it.

Some familiarity with Python is helpful. This notebook uses Python 3.9 with the ibm-watson-machine-learning package.

Learning goals

The learning goals of this notebook are:

  • Train an sklearn model

  • Persist the trained model in the Watson Machine Learning repository

  • Deploy the model for online scoring using the client library

  • Score sample records using the client library

Contents

This notebook contains the following parts:

  1. Setup

  2. Explore data and create a scikit-learn model

  3. Persist the externally created scikit model

  4. Deploy and score

  5. Batch scoring using connection_asset

  6. Clean up

  7. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, contact your Cloud Pak for Data administrator and ask for your account credentials.

Connection to WML

Authenticate to the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform url, your username, and your api_key.

username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API_KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'

wml_credentials = {
    "username": username,
    "apikey": api_key,
    "url": url,
    "instance_id": 'openshift',
    "version": '4.5'
}

Alternatively, you can use a username and password to authenticate with WML services.

wml_credentials = {
    "username": ***,
    "password": ***,
    "url": ***,
    "instance_id": 'openshift',
    "version": '4.5'
}

Install and import the ibm-watson-machine-learning package

Note: ibm-watson-machine-learning documentation can be found here.

!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient

client = APIClient(wml_credentials)

Working with spaces

First of all, you need to create a space that will be used for your work. If you do not have a space, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to space Settings tab

  • Copy space_id and paste it below

Tip: You can also use SDK to prepare the space for your work. More information can be found here.

Action: Assign space ID below

space_id = 'PASTE YOUR SPACE ID HERE'

You can use the list() method to print all existing spaces.

client.spaces.list(limit=10)

To be able to interact with all resources available in Watson Machine Learning, you need to set the space you will be using.

client.set.default_space(space_id)
'SUCCESS'

2. Explore data and create a scikit-learn model

In this section, you will prepare and train a handwritten digits model using the scikit-learn library.

2.1 Explore data

As the first step, you will load the data from scikit-learn sample datasets and perform basic exploration.

import sklearn
from sklearn import datasets

digits = datasets.load_digits()

The loaded dataset consists of 8x8 pixel images of hand-written digits.

Let's display the first digit's data and label using the data and target attributes.

print(digits.data[0].reshape((8, 8)))
[[ 0.  0.  5. 13.  9.  1.  0.  0.]
 [ 0.  0. 13. 15. 10. 15.  5.  0.]
 [ 0.  3. 15.  2.  0. 11.  8.  0.]
 [ 0.  4. 12.  0.  0.  8.  8.  0.]
 [ 0.  5.  8.  0.  0.  9.  8.  0.]
 [ 0.  4. 11.  0.  1. 12.  7.  0.]
 [ 0.  2. 14.  5. 10. 12.  0.  0.]
 [ 0.  0.  6. 13. 10.  0.  0.  0.]]
digits.target[0]
0

In the next step, you will count data examples.

samples_count = len(digits.images)
print("Number of samples: " + str(samples_count))
Number of samples: 1797

2.2. Create a scikit-learn model

Prepare data

In this step, you'll split your data into three datasets:

  • train

  • test

  • score

train_data = digits.data[: int(0.7*samples_count)]
train_labels = digits.target[: int(0.7*samples_count)]

test_data = digits.data[int(0.7*samples_count): int(0.9*samples_count)]
test_labels = digits.target[int(0.7*samples_count): int(0.9*samples_count)]

score_data = digits.data[int(0.9*samples_count): ]

print("Number of training records: " + str(len(train_data)))
print("Number of testing records : " + str(len(test_data)))
print("Number of scoring records : " + str(len(score_data)))
Number of training records: 1257
Number of testing records : 360
Number of scoring records : 180
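The index arithmetic behind this 70/20/10 split can be checked in plain Python. The sample count of 1797 is taken from the output above; everything else is simple slicing arithmetic:

```python
# Sketch of the 70/20/10 split arithmetic used above,
# assuming the dataset has 1797 samples as reported.
samples_count = 1797

train_end = int(0.7 * samples_count)   # index where training data ends
test_end = int(0.9 * samples_count)    # index where testing data ends

n_train = train_end                    # records in digits.data[:train_end]
n_test = test_end - train_end          # records in digits.data[train_end:test_end]
n_score = samples_count - test_end     # records in digits.data[test_end:]

print(n_train, n_test, n_score)        # matches the counts printed above
```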

Create pipeline

Next, you'll create a scikit-learn pipeline.

In this step, you will import the scikit-learn machine learning packages to be used in next cells.

from sklearn.pipeline import Pipeline
from sklearn import preprocessing
from sklearn import svm, metrics

Standardize features by removing the mean and scaling to unit variance.

scaler = preprocessing.StandardScaler()
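What StandardScaler computes can be reproduced by hand: subtract each column's mean and divide by its standard deviation, i.e. z = (x - mean) / std. The tiny matrix below is an illustrative example, not notebook data:

```python
import numpy as np

# Illustrative example (not from the notebook): reproduce the
# StandardScaler transformation manually on a tiny 3x2 matrix.
X = np.array([[1.0, 10.0],
              [3.0, 20.0],
              [5.0, 30.0]])

mean = X.mean(axis=0)        # per-column mean
std = X.std(axis=0)          # population std, as StandardScaler uses
X_scaled = (X - mean) / std  # z = (x - mean) / std

print(X_scaled)
# After scaling, each column has mean 0 and unit variance.
```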

Next, define the estimators you want to use for classification. A Support Vector Machine (SVM) with the radial basis function (RBF) kernel is used in the following example.

clf = svm.SVC(kernel='rbf')
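The RBF kernel evaluates k(x, y) = exp(-gamma * ||x - y||^2), so similarity decays with squared distance and identical points always score 1. As a quick illustration (the points and gamma value below are arbitrary, not from the notebook):

```python
import numpy as np

# Illustrative sketch of the RBF kernel used by svm.SVC(kernel='rbf'):
# k(x, y) = exp(-gamma * ||x - y||^2). The gamma here is arbitrary,
# chosen only for this example.
def rbf_kernel(x, y, gamma=0.1):
    return np.exp(-gamma * np.sum((x - y) ** 2))

x = np.array([0.0, 1.0])
y = np.array([1.0, 1.0])

print(rbf_kernel(x, y))   # exp(-0.1 * 1) ~ 0.9048
print(rbf_kernel(x, x))   # identical points give 1.0
```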

Let's build the pipeline now. This pipeline consists of a transformer and an estimator.

pipeline = Pipeline([('scaler', scaler), ('svc', clf)])

Train model

Now, you can train your SVM model by using the previously defined pipeline and train data.

model = pipeline.fit(train_data, train_labels)

Evaluate model

You can check your model quality now. To evaluate the model, use test data.

predicted = model.predict(test_data)
print("Evaluation report: \n\n%s" % metrics.classification_report(test_labels, predicted))
Evaluation report:

              precision    recall  f1-score   support

           0       1.00      0.97      0.99        37
           1       0.97      0.97      0.97        34
           2       1.00      0.97      0.99        36
           3       1.00      0.94      0.97        35
           4       0.78      0.97      0.87        37
           5       0.97      0.97      0.97        38
           6       0.97      0.86      0.91        36
           7       0.92      0.97      0.94        35
           8       0.91      0.89      0.90        35
           9       0.97      0.92      0.94        37

    accuracy                           0.94       360
   macro avg       0.95      0.94      0.95       360
weighted avg       0.95      0.94      0.95       360

You can now tune your model to achieve better accuracy. For simplicity, the tuning section is omitted.
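If you do want to tune, a typical approach is a grid search over the pipeline's SVC parameters. The sketch below is not part of the original notebook, and the grid values are illustrative only; it rebuilds the scaler + SVC pipeline so it is self-contained:

```python
from sklearn import datasets, preprocessing, svm
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Illustrative tuning sketch: rebuild the scaler + SVC pipeline from
# this notebook and search over C and gamma. Grid values are arbitrary
# examples, not recommendations.
digits = datasets.load_digits()
samples_count = len(digits.images)
train_data = digits.data[: int(0.7 * samples_count)]
train_labels = digits.target[: int(0.7 * samples_count)]

pipeline = Pipeline([('scaler', preprocessing.StandardScaler()),
                     ('svc', svm.SVC(kernel='rbf'))])

# Step-prefixed parameter names ('svc__...') address the 'svc' step.
param_grid = {
    'svc__C': [1, 10],
    'svc__gamma': ['scale', 0.01],
}
search = GridSearchCV(pipeline, param_grid, cv=3)
search.fit(train_data, train_labels)

print(search.best_params_)
print("Best cross-validation accuracy: %.3f" % search.best_score_)
```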

3. Persist the locally created scikit-learn model

In this section, you will learn how to store your model in the Watson Machine Learning repository by using the IBM Watson Machine Learning SDK.

3.1: Publish model

Publish the model in the Watson Machine Learning repository on Cloud.

Define the model name, author name, and email.

software_spec_uid = client.software_specifications.get_id_by_name("runtime-22.1-py3.9")

metadata = {
    client.repository.ModelMetaNames.NAME: 'Scikit model',
    client.repository.ModelMetaNames.TYPE: 'scikit-learn_1.0',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}

published_model = client.repository.store_model(
    model=model,
    meta_props=metadata,
    training_data=train_data,
    training_target=train_labels)

3.2: Get model details

import json

published_model_uid = client.repository.get_model_id(published_model)
model_details = client.repository.get_details(published_model_uid)

print(json.dumps(model_details, indent=2))

3.3 Get all models

models_details = client.repository.list_models()

4. Deploy and score

In this section, you will learn how to create online scoring and to score a new data record by using the IBM Watson Machine Learning SDK.

4.1: Create a model deployment

Create an online deployment for the published model

metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "Deployment of scikit model",
    client.deployments.ConfigurationMetaNames.ONLINE: {}
}

created_deployment = client.deployments.create(published_model_uid, meta_props=metadata)
#######################################################################################

Synchronous deployment creation for uid: '5d11ad11-fcc3-49af-afa4-43f6ec139e46' started

#######################################################################################

initializing
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
ready

------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='b857cbaf-7fb2-4d41-b5a7-d3842c13767a'
------------------------------------------------------------------------------------------------

Note: Here we use the deployment url saved in the published_model object. In the next section, we show how to retrieve the deployment url from the Watson Machine Learning instance.

deployment_uid = client.deployments.get_id(created_deployment)

Now you can print an online scoring endpoint.

scoring_endpoint = client.deployments.get_scoring_href(created_deployment) print(scoring_endpoint)
https://cpd-wmlautoai-jun24.apps.ocp46wmlautoaai.cp.fyre.ibm.com/ml/v4/deployments/b857cbaf-7fb2-4d41-b5a7-d3842c13767a/predictions

You can also list existing deployments.

client.deployments.list()

4.2: Get deployment details

client.deployments.get_details(deployment_uid)

4.3: Score

You can use the following method to perform a test scoring request against the deployed model.

Action: Prepare scoring payload with records to score.

score_0 = list(score_data[0])
score_1 = list(score_data[1])

scoring_payload = {"input_data": [{"values": [score_0, score_1]}]}

Use the client.deployments.score() method to run scoring.

predictions = client.deployments.score(deployment_uid, scoring_payload)
print(json.dumps(predictions, indent=2))
{ "predictions": [ { "fields": [ "prediction" ], "values": [ [ 5 ], [ 4 ] ] } ] }

5. Create a batch deployment and score using connection asset

5.1: Create a batch deployment of the scikit‑learn model

Use the cell below to create a batch deployment for the stored model.

deployment = client.deployments.create(
    artifact_uid=published_model_uid,
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "Batch deployment of scikit model",
        client.deployments.ConfigurationMetaNames.BATCH: {},
        client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
            "name": "S",
            "num_nodes": 1
        }
    }
)

deployment_uid = client.deployments.get_uid(deployment)
#######################################################################################

Synchronous deployment creation for uid: '5d11ad11-fcc3-49af-afa4-43f6ec139e46' started

#######################################################################################

ready.

------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='b2d346aa-1fe5-451a-9977-fe17887d96f0'
------------------------------------------------------------------------------------------------

5.2 Create a connection to an external database

Action: Enter your COS credentials in the following cell. You can find these credentials in your COS instance dashboard under the Service credentials tab. Note that the HMAC keys, described in the set up the environment section, are included in these credentials.

db_name = 'bluemixcloudobjectstorage'
file_name = 'mnist_scoring.csv'
bucket_name = 'PUT YOUR COS BUCKET NAME HERE'

cos_credentials = {
    "apikey": "***",
    "cos_hmac_keys": {
        "access_key_id": "***",
        "secret_access_key": "***"
    },
    "endpoints": "***",
    "iam_apikey_description": "***",
    "iam_apikey_name": "***",
    "iam_role_crn": "***",
    "iam_serviceid_crn": "***",
    "resource_instance_id": "***"
}

Create the connection

conn_meta_props = {
    client.connections.ConfigurationMetaNames.NAME: f"Connection to Database - {db_name} ",
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: client.connections.get_datasource_type_uid_by_name(db_name),
    client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection to external Database",
    client.connections.ConfigurationMetaNames.PROPERTIES: {
        'bucket': bucket_name,
        'access_key': cos_credentials['cos_hmac_keys']['access_key_id'],
        'secret_key': cos_credentials['cos_hmac_keys']['secret_access_key'],
        'iam_url': 'https://iam.cloud.ibm.com/identity/token',
        'url': 'https://s3.us.cloud-object-storage.appdomain.cloud'
    }
}

conn_details = client.connections.create(meta_props=conn_meta_props)
Creating connections... SUCCESS

Note: The connection above can alternatively be initialized with api_key and resource_instance_id. The cell above can be replaced with:

conn_meta_props = {
    client.connections.ConfigurationMetaNames.NAME: f"Connection to Database - {db_name} ",
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: client.connections.get_datasource_type_uid_by_name(db_name),
    client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection to external Database",
    client.connections.ConfigurationMetaNames.PROPERTIES: {
        'bucket': bucket_name,
        'api_key': cos_credentials['apikey'],
        'resource_instance_id': cos_credentials['resource_instance_id'],
        'iam_url': 'https://iam.cloud.ibm.com/identity/token',
        'url': 'https://s3.us.cloud-object-storage.appdomain.cloud'
    }
}

conn_details = client.connections.create(meta_props=conn_meta_props)
connection_id = client.connections.get_uid(conn_details)

5.3 Scoring

You can create a batch job using methods listed below.

Upload batch data to the specified location.

Hint: To install pandas execute !pip install pandas

import pandas as pd
from ibm_watson_machine_learning.helpers import DataConnection, S3Location

conn = DataConnection(
    connection_asset_id=connection_id,
    location=S3Location(
        bucket=bucket_name,
        path=file_name
    )
)
conn._wml_client = client

conn.write(data=pd.DataFrame(data=score_data), remote_name=file_name)
job_payload_ref = {
    client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [
        {
            "id": f"Connection to Database - {db_name}",
            "name": "input_data_href",
            "type": "connection_asset",
            "connection": {"id": connection_id},
            "location": {"bucket": bucket_name, "file_name": file_name}
        }
    ],
    client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
        "type": "connection_asset",
        "connection": {"id": connection_id},
        "location": {"bucket": bucket_name, "file_name": 'output'}
    }
}

job = client.deployments.create_job(deployment_uid, meta_props=job_payload_ref)
job_id = client.deployments.get_job_uid(job)
client.deployments.get_job_details(job_id)
{'entity': {'deployment': {'id': 'b2d346aa-1fe5-451a-9977-fe17887d96f0'}, 'platform_job': {'job_id': 'b07c0682-bad0-4e9c-bac7-59502c331825', 'run_id': '2decf6fa-72aa-46b2-9a8d-594d3112a7dc'}, 'scoring': {'input_data_references': [{'connection': {'id': '3cef301b-16f6-4231-b56b-d8dd3703c2d9'}, 'id': 'Connection to Database - bluemixcloudobjectstorage', 'location': {'bucket': 'tests-wml-samples', 'file_name': 'mnist_scoring.csv'}, 'type': 'connection_asset'}], 'output_data_reference': {'connection': {'id': '3cef301b-16f6-4231-b56b-d8dd3703c2d9'}, 'location': {'bucket': 'tests-wml-samples', 'file_name': 'output'}, 'type': 'connection_asset'}, 'status': {'completed_at': '2021-07-19T11:34:34.504993Z', 'running_at': '2021-07-19T11:34:23.099998Z', 'state': 'completed'}}}, 'metadata': {'created_at': '2021-07-19T11:34:06.227Z', 'id': '5731ee76-a3ed-4987-97cd-29c581ba9c1f', 'modified_at': '2021-07-19T11:34:34.584Z', 'name': 'name_d72336cd-efb1-4457-a74e-fa67bf2b649f', 'space_id': '97cebfee-a9da-4c04-822d-f36540c1070d'}}

Monitor job execution

Here you can check the status of your batch scoring. When the batch job is completed the results will be written to an output table.

import time

elapsed_time = 0
while client.deployments.get_job_status(job_id).get('state') != 'completed' and elapsed_time < 300:
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    elapsed_time += 10
    time.sleep(10)
if client.deployments.get_job_status(job_id).get('state') == 'completed':
    print(f" Current state: {client.deployments.get_job_status(job_id).get('state')}")
    job_details_do = client.deployments.get_job_details(job_id)
    print(job_details_do)
else:
    print("Job hasn't completed successfully in 5 minutes.")
 Current state: queued
 Current state: queued
 Current state: running
 Current state: completed
{'entity': {'deployment': {'id': 'b2d346aa-1fe5-451a-9977-fe17887d96f0'}, 'platform_job': {'job_id': 'b07c0682-bad0-4e9c-bac7-59502c331825', 'run_id': '2decf6fa-72aa-46b2-9a8d-594d3112a7dc'}, 'scoring': {'input_data_references': [{'connection': {'id': '3cef301b-16f6-4231-b56b-d8dd3703c2d9'}, 'id': 'Connection to Database - bluemixcloudobjectstorage', 'location': {'bucket': 'tests-wml-samples', 'file_name': 'mnist_scoring.csv'}, 'type': 'connection_asset'}], 'output_data_reference': {'connection': {'id': '3cef301b-16f6-4231-b56b-d8dd3703c2d9'}, 'location': {'bucket': 'tests-wml-samples', 'file_name': 'output'}, 'type': 'connection_asset'}, 'status': {'completed_at': '2021-07-19T11:34:34.504993Z', 'running_at': '2021-07-19T11:34:23.099998Z', 'state': 'completed'}}}, 'metadata': {'created_at': '2021-07-19T11:34:06.227Z', 'id': '5731ee76-a3ed-4987-97cd-29c581ba9c1f', 'modified_at': '2021-07-19T11:34:34.584Z', 'name': 'name_d72336cd-efb1-4457-a74e-fa67bf2b649f', 'space_id': '97cebfee-a9da-4c04-822d-f36540c1070d'}}
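The polling pattern above can be generalized into a small helper. In the sketch below, the status function is a stand-in for client.deployments.get_job_status, used only to make the example self-contained and runnable:

```python
import time

# Generic sketch of the poll-until-completed pattern used above.
# get_state is a stand-in for a call such as
# client.deployments.get_job_status(job_id).get('state').
def wait_for_completion(get_state, timeout=300, interval=10):
    """Poll get_state() until it returns 'completed' or timeout elapses."""
    elapsed = 0
    while get_state() != 'completed' and elapsed < timeout:
        elapsed += interval
        time.sleep(interval)
    return get_state() == 'completed'

# Stand-in status source: reports 'running' twice, then 'completed'.
states = iter(['running', 'running', 'completed', 'completed'])
result = wait_for_completion(lambda: next(states), timeout=5, interval=0)

print(result)  # True
```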

6. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

follow the steps listed in this sample notebook.

7. Summary and next steps

You successfully completed this notebook! You learned how to use scikit-learn as well as Watson Machine Learning for model creation and deployment.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Daniel Ryszka, Software Engineer

Copyright © 2020-2025 IBM. This notebook and its source code are released under the terms of the MIT License.