
Use scikit-learn to recognize hand-written digits with ibm-watson-machine-learning

This notebook contains steps and code to demonstrate how to persist and deploy a locally trained scikit-learn model in the Watson Machine Learning service, using the ibm-watson-machine-learning library available in the PyPI repository. It introduces commands for getting the model and training data, persisting the model, deploying the model, scoring it, updating the model, and redeploying it.

Some familiarity with Python is helpful. This notebook uses Python 3.7 with the ibm-watson-machine-learning package.

Learning goals

The learning goals of this notebook are:

  • Train a scikit-learn model.

  • Persist the trained model in the Watson Machine Learning repository.

  • Deploy the model for online scoring using the client library.

  • Score sample records using the client library.

Contents

This notebook contains the following parts:

  1. Setup

  2. Explore data and create scikit-learn model

  3. Persist locally created scikit-learn model

  4. Deploy and score

  5. Clean up

  6. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask them for your account credentials

Connection to WML

Authenticate to the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform URL, your username, and your password.

username = 'PASTE YOUR USERNAME HERE'
password = 'PASTE YOUR PASSWORD HERE'
url = 'PASTE THE PLATFORM URL HERE'

wml_credentials = {
    "username": username,
    "password": password,
    "url": url,
    "instance_id": 'openshift',
    "version": '3.5'
}

Install and import the ibm-watson-machine-learning package

Note: ibm-watson-machine-learning documentation can be found here.

!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient

client = APIClient(wml_credentials)

Working with spaces

First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to space Settings tab

  • Copy space_id and paste it below

Tip: You can also use the SDK to prepare the space for your work. More information can be found here.
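For example, a minimal sketch of preparing a space with the SDK might look like the following (the space name below is a placeholder, and the exact metadata fields available may vary with your SDK version):

# Minimal sketch (assumption): create a new deployment space via the SDK
# and print its id, which you could then paste as space_id below.
space_metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "scikit-learn demo space"  # placeholder name
}
space_details = client.spaces.store(meta_props=space_metadata)
print(client.spaces.get_id(space_details))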

Action: Assign space ID below

space_id = 'PASTE YOUR SPACE ID HERE'

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

To be able to interact with all resources available in Watson Machine Learning, you need to set the space which you will be using.

client.set.default_space(space_id)
'SUCCESS'

2. Explore data and create scikit-learn model

In this section, you will prepare and train a hand-written digits recognition model using the scikit-learn library.

2.1 Explore data

As a first step, you will load the data from scikit-learn sample datasets and perform a basic exploration.

import sklearn
from sklearn import datasets

digits = datasets.load_digits()

The loaded toy dataset consists of 8x8 pixel images of hand-written digits.

Let's display the first digit's data and label using data and target.

print(digits.data[0].reshape((8, 8)))
[[ 0.  0.  5. 13.  9.  1.  0.  0.]
 [ 0.  0. 13. 15. 10. 15.  5.  0.]
 [ 0.  3. 15.  2.  0. 11.  8.  0.]
 [ 0.  4. 12.  0.  0.  8.  8.  0.]
 [ 0.  5.  8.  0.  0.  9.  8.  0.]
 [ 0.  4. 11.  0.  1. 12.  7.  0.]
 [ 0.  2. 14.  5. 10. 12.  0.  0.]
 [ 0.  0.  6. 13. 10.  0.  0.  0.]]
digits.target[0]
0
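If you want to see the digit as an image rather than raw pixel values, a small optional sketch like the one below can help (it assumes matplotlib is installed in the notebook environment):

# Optional sketch: visualize the first digit as an 8x8 grayscale image.
# Assumes matplotlib is available in this environment.
import matplotlib.pyplot as plt

plt.imshow(digits.images[0], cmap='gray_r')
plt.title("Label: %d" % digits.target[0])
plt.show()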

In the next step, you will count the data examples.

samples_count = len(digits.images)

print("Number of samples: " + str(samples_count))
Number of samples: 1797

2.2. Create a scikit-learn model

Prepare data

In this step, you'll split your data into three datasets:

  • train

  • test

  • score

train_data = digits.data[: int(0.7*samples_count)]
train_labels = digits.target[: int(0.7*samples_count)]

test_data = digits.data[int(0.7*samples_count): int(0.9*samples_count)]
test_labels = digits.target[int(0.7*samples_count): int(0.9*samples_count)]

score_data = digits.data[int(0.9*samples_count): ]

print("Number of training records: " + str(len(train_data)))
print("Number of testing records : " + str(len(test_data)))
print("Number of scoring records : " + str(len(score_data)))
Number of training records: 1257
Number of testing records : 360
Number of scoring records : 180

Create pipeline

Next, you'll create a scikit-learn pipeline.

In this step, you will import the scikit-learn machine learning packages that will be needed in the next cells.

from sklearn.pipeline import Pipeline
from sklearn import preprocessing
from sklearn import svm, metrics

Standardize features by removing the mean and scaling to unit variance.

scaler = preprocessing.StandardScaler()

Next, define the estimators you want to use for classification. A Support Vector Machine (SVM) with a radial basis function kernel is used in the following example.

clf = svm.SVC(kernel='rbf')

Let's build the pipeline now. This pipeline consists of a transformer and an estimator.

pipeline = Pipeline([('scaler', scaler), ('svc', clf)])

Train model

Now, you can train your SVM model by using the previously defined pipeline and the training data.

model = pipeline.fit(train_data, train_labels)

Evaluate model

You can check your model quality now. To evaluate the model, use test data.

predicted = model.predict(test_data)

print("Evaluation report: \n\n%s" % metrics.classification_report(test_labels, predicted))
Evaluation report: 

              precision    recall  f1-score   support

           0       1.00      0.97      0.99        37
           1       0.97      0.97      0.97        34
           2       1.00      0.97      0.99        36
           3       1.00      0.94      0.97        35
           4       0.78      0.97      0.87        37
           5       0.97      0.97      0.97        38
           6       0.97      0.86      0.91        36
           7       0.92      0.97      0.94        35
           8       0.91      0.89      0.90        35
           9       0.97      0.92      0.94        37

    accuracy                           0.94       360
   macro avg       0.95      0.94      0.95       360
weighted avg       0.95      0.94      0.95       360

You can tune your model now to achieve better accuracy. For the simplicity of this example, the tuning section is omitted.
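If you did want to tune the model, a minimal sketch using scikit-learn's GridSearchCV over the pipeline's SVC hyperparameters could look like this (the parameter grid is illustrative only):

# Illustrative sketch: grid search over SVC hyperparameters of the pipeline.
from sklearn.model_selection import GridSearchCV

param_grid = {
    'svc__C': [1, 10, 100],
    'svc__gamma': ['scale', 0.01, 0.001]
}

grid = GridSearchCV(pipeline, param_grid, cv=5)
grid.fit(train_data, train_labels)

print("Best parameters: %s" % grid.best_params_)
print("Best cross-validation score: %.3f" % grid.best_score_)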

3. Persist locally created scikit-learn model

In this section, you will learn how to store your model in Watson Machine Learning repository by using the IBM Watson Machine Learning SDK.

3.1: Publish model

Publish model in Watson Machine Learning repository on Cloud.

Define the model name, type, and software specification.

software_spec_uid = client.software_specifications.get_id_by_name("default_py3.7")

metadata = {
    client.repository.ModelMetaNames.NAME: 'Scikit model',
    client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}

published_model = client.repository.store_model(
    model=model,
    meta_props=metadata,
    training_data=train_data,
    training_target=train_labels)
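If you are not sure which software specification names are available in your installation, you can list them first (a short optional sketch; the exact output columns may vary with the SDK version):

# Optional: list the software specifications available in this installation.
client.software_specifications.list()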

3.2: Get model details

import json

published_model_uid = client.repository.get_model_uid(published_model)
model_details = client.repository.get_details(published_model_uid)

print(json.dumps(model_details, indent=2))

3.3 Get all models

models_details = client.repository.list_models()

4. Deploy and score

In this section you will learn how to create an online scoring deployment and how to score a new data record by using the IBM Watson Machine Learning SDK.

4.1: Create model deployment

Create an online deployment for the published model.

metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "Deployment of scikit model",
    client.deployments.ConfigurationMetaNames.ONLINE: {}
}

created_deployment = client.deployments.create(published_model_uid, meta_props=metadata)
#######################################################################################

Synchronous deployment creation for uid: '2648c571-d0c9-4596-bf25-4d9702763ed0' started

#######################################################################################


initializing.
ready


------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='d12bff99-0d99-4c65-8471-c97178cfc023'
------------------------------------------------------------------------------------------------

Note: Here we use the deployment uid saved in the created_deployment object. In the next cells, we show how to retrieve the deployment url from the Watson Machine Learning instance.

deployment_uid = client.deployments.get_uid(created_deployment)

Now you can print an online scoring endpoint.

scoring_endpoint = client.deployments.get_scoring_href(created_deployment)

print(scoring_endpoint)
https://wmlgmc-cpd-wmlgmc.apps.wmlautoai.cp.fyre.ibm.com/ml/v4/deployments/d12bff99-0d99-4c65-8471-c97178cfc023/predictions

You can also list existing deployments.

client.deployments.list()

4.2: Get deployment details

client.deployments.get_details(deployment_uid)
{'entity': {'asset': {'id': '2648c571-d0c9-4596-bf25-4d9702763ed0'},
  'custom': {},
  'deployed_asset_type': 'model',
  'hardware_spec': {'id': 'Not_Applicable', 'name': 'S', 'num_nodes': 1},
  'name': 'Deployment of scikit model',
  'online': {},
  'space_id': '83b00166-9047-4159-b777-83dcb498e7ab',
  'status': {'online_url': {'url': 'https://wmlgmc-cpd-wmlgmc.apps.wmlautoai.cp.fyre.ibm.com/ml/v4/deployments/d12bff99-0d99-4c65-8471-c97178cfc023/predictions'},
   'state': 'ready'}},
 'metadata': {'created_at': '2020-12-08T11:54:07.114Z',
  'id': 'd12bff99-0d99-4c65-8471-c97178cfc023',
  'modified_at': '2020-12-08T11:54:07.114Z',
  'name': 'Deployment of scikit model',
  'owner': '1000330999',
  'space_id': '83b00166-9047-4159-b777-83dcb498e7ab'}}

4.3: Score

You can use the following method to run a test scoring request against the deployed model.

Action: Prepare scoring payload with records to score.

score_0 = list(score_data[0])
score_1 = list(score_data[1])
scoring_payload = {"input_data": [{"values": [score_0, score_1]}]}

Use the client.deployments.score() method to run scoring.

predictions = client.deployments.score(deployment_uid, scoring_payload)
print(json.dumps(predictions, indent=2))
{ "predictions": [ { "fields": [ "prediction" ], "values": [ [ 5 ], [ 4 ] ] } ] }

5. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

please follow this sample notebook.
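Alternatively, if you only want to remove the deployment and model created in this notebook, a minimal sketch using the SDK delete methods could look like this (method names may vary slightly between SDK versions):

# Sketch: delete only the deployment and the model created in this notebook.
client.deployments.delete(deployment_uid)
client.repository.delete(published_model_uid)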

6. Summary and next steps

You successfully completed this notebook! You learned how to use scikit-learn machine learning as well as Watson Machine Learning for model creation and deployment.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Daniel Ryszka, Software Engineer

Copyright © 2020-2025 IBM. This notebook and its source code are released under the terms of the MIT License.