GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cpd5.1/notebooks/python_sdk/experiments/deep_learning/Use TensorFlow to recognize hand-written digits.ipynb
Kernel: Python 3 (ipykernel)

Use TensorFlow to recognize hand-written digits with ibm-watsonx-ai

This notebook contains the steps and code to demonstrate support of Deep Learning model training and scoring in the Watson Machine Learning service. It introduces commands for data retrieval, training definition persistence to the Watson Machine Learning repository, model training, model persistence, model deployment, and scoring.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goals

The learning goals of this notebook are:

  • Working with the Watson Machine Learning service.

  • Training Deep Learning models (TensorFlow).

  • Saving trained models in Watson Machine Learning repository.

  • Online deployment and scoring of the trained model.

Contents

This notebook contains the following parts:

  1. Setup

  2. Create model definition

  3. Train model

  4. Persist trained model

  5. Deploy and score

  6. Clean up

  7. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask them for your account credentials

Install and import the ibm-watsonx-ai package and its dependencies

Note: ibm-watsonx-ai documentation can be found here.

!pip install wget | tail -n 1
!pip install -U ibm-watsonx-ai | tail -n 1

Connection to WML

Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform url, your username, and api_key.

username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API_KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=api_key,
    url=url,
    instance_id="openshift",
    version="5.1"
)

Alternatively, you can use a username and password to authenticate WML services.

credentials = Credentials(
    username=***,
    password=***,
    url=***,
    instance_id="openshift",
    version="5.1"
)
from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Working with spaces

First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to space Settings tab

  • Copy space_id and paste it below

Tip: You can also use the SDK to prepare the space for your work, as sketched below. More information can be found here.
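
A minimal sketch of creating a space from the SDK is shown below; the meta names and the call signature are assumptions and may differ slightly depending on your ibm-watsonx-ai version.

# Sketch: create a deployment space from the SDK.
# The meta names used here are assumed to be available in your ibm-watsonx-ai version.
space_metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "TF MNIST space",
    client.spaces.ConfigurationMetaNames.DESCRIPTION: "Space for the hand-written digit sample"
}

space_details = client.spaces.store(meta_props=space_metadata)
space_id = client.spaces.get_id(space_details)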

Action: Assign space ID below

space_id = 'PASTE YOUR SPACE ID HERE'

You can use the list method to print all existing spaces.

client.spaces.list(limit=10)

To be able to interact with all resources available in Watson Machine Learning, you need to set the space which you will be using.

client.set.default_space(space_id)
'SUCCESS'

2. Create model definition

2.1 Prepare model definition metadata

model_definition_metadata = {
    client.model_definitions.ConfigurationMetaNames.NAME: "Hand-written Digit Recognition",
    client.model_definitions.ConfigurationMetaNames.DESCRIPTION: "Hand-written Digit Recognition",
    client.model_definitions.ConfigurationMetaNames.COMMAND: "convolutional_network.py --trainImagesFile train-images-idx3-ubyte.gz --trainLabelsFile train-labels-idx1-ubyte.gz --testImagesFile t10k-images-idx3-ubyte.gz --testLabelsFile t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 200",
    client.model_definitions.ConfigurationMetaNames.PLATFORM: {"name": "python", "versions": ["3.11"]},
    client.model_definitions.ConfigurationMetaNames.VERSION: "2.0",
    client.model_definitions.ConfigurationMetaNames.SPACE_UID: space_id
}

2.2 Get sample model definition content file from git

import os, wget

filename = 'tf-softmax-model.zip'
if not os.path.isfile(filename):
    filename = wget.download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd5.1/definitions/tensorflow/tf-softmax-model.zip')

Tip: Convert the cell below to code and run it to see the model definition's code.

!unzip -oqd . tf-softmax-model.zip && cat convolutional_network.py

2.3 Publish model definition

definition_details = client.model_definitions.store(filename, model_definition_metadata)
model_definition_id = client.model_definitions.get_id(definition_details)
print(model_definition_id)
9b9ca252-7615-4b52-be79-e993629a57f6
client.model_definitions.get_details(model_definition_id)
{'metadata': {'space_id': 'c4f8dcc6-2f39-4a7d-8c85-cda634f8c52c',
  'guid': 'db097b13-145b-4a74-a5bb-a573d51e4fe7',
  'asset_type': 'wml_model_definition',
  'created_at': '2021-12-07T15:12:03Z',
  'last_updated_at': '2021-12-07T15:12:07Z',
  'name': 'Hand-written Digit Recognition',
  'description': 'Hand-written Digit Recognition',
  'href': '/v2/assets/wml_model_definition/db097b13-145b-4a74-a5bb-a573d51e4fe7?space_id=c4f8dcc6-2f39-4a7d-8c85-cda634f8c52c',
  'attachment_id': 'fb9cc1c6-7040-45d6-aa0c-c790350b384f'},
 'entity': {'wml_model_definition': {'version': '2.0',
   'platform': {'name': 'python', 'versions': ['3.10']},
   'command': 'convolutional_network.py --trainImagesFile train-images-idx3-ubyte.gz --trainLabelsFile train-labels-idx1-ubyte.gz --testImagesFile t10k-images-idx3-ubyte.gz --testLabelsFile t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 200'}}}

List model definitions

client.model_definitions.list(limit=5)

3. Train model

Warning: Before executing the deep learning experiment, make sure that the training data is saved in a folder where Watson Machine Learning Accelerator is installed.

3.1 Prepare training metadata

training_metadata = {
    client.training.ConfigurationMetaNames.NAME: "Hand-written Digit Recognition",
    client.training.ConfigurationMetaNames.SPACE_UID: space_id,
    client.training.ConfigurationMetaNames.DESCRIPTION: "Hand-written Digit Recognition",
    client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
        "name": "MNIST results",
        "connection": {},
        "location": {
            "path": f"spaces/{space_id}/assets/experiment"
        },
        "type": "fs"
    },
    client.training.ConfigurationMetaNames.MODEL_DEFINITION: {
        "id": model_definition_id,
        "hardware_spec": {
            "name": "K80",
            "nodes": 1
        },
        "software_spec": {
            "name": "tensorflow_rt24.1-py3.11"
        }
    },
    client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [
        {
            "name": "training_input_data",
            "type": "fs",
            "connection": {},
            "location": {
                "path": "tf-mnist"
            },
            "schema": {
                "id": "idmlp_schema",
                "fields": [
                    {"name": "text", "type": "string"}
                ]
            }
        }
    ]
}

3.2 Train model in background

training = client.training.run(training_metadata)

3.3 Get training id and status

training_id = client.training.get_id(training)
client.training.get_status(training_id)["state"]
'completed'
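
Because the training runs in the background, you may want to poll the status until the run finishes before moving on. A minimal sketch, using only the status call shown above (the set of terminal state names is an assumption):

import time

# Poll the training run until it reaches a terminal state.
# The terminal state names ('completed', 'failed', 'canceled') are assumed here.
state = client.training.get_status(training_id)["state"]
while state not in ("completed", "failed", "canceled"):
    time.sleep(30)
    state = client.training.get_status(training_id)["state"]
    print("Current training state:", state)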

3.4 Get training details

import json

training_details = client.training.get_details(training_id)
print(json.dumps(training_details, indent=2))

List trainings

client.training.list(limit=5)

Cancel training

You can cancel the training run by calling the method below. Tip: If you also want to delete training runs and results, add hard_delete=True as a parameter.

client.training.cancel(training_id)
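
For example, to also remove the run's metadata and results, per the tip above:

# Cancel the run and delete its metadata and results.
client.training.cancel(training_id, hard_delete=True)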

4. Persist trained model

4.1 Publish model

software_spec_id = client.software_specifications.get_id_by_name('tensorflow_rt24.1-py3.11')
model_meta_props = {
    client.repository.ModelMetaNames.NAME: "TF Mnist Model",
    client.repository.ModelMetaNames.TYPE: "tensorflow_2.14",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: software_spec_id
}

published_model_details = client.repository.store_model(training_id, meta_props=model_meta_props)
model_id = client.repository.get_model_id(published_model_details)

4.2 Get model details

model_details = client.repository.get_details(model_id)
print(json.dumps(model_details, indent=2))

List stored models

client.repository.list_models(limit=5)

5. Deploy and score

5.1 Create online deployment for published model

You can deploy the stored model as a web service (online) by running the code in the following cell.

deployment = client.deployments.create(
    model_id,
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "TF Mnist deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {}
    }
)

deployment_id = client.deployments.get_id(deployment)
#######################################################################################

Synchronous deployment creation for uid: '4c362397-3d08-48a7-93b5-1599d22fd380' started

#######################################################################################

initializing
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
....
ready

------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='273eccbd-849c-429d-a491-82edd32b068e'
------------------------------------------------------------------------------------------------

5.2 Get deployments details

deployments_details = client.deployments.get_details(deployment_id)
print(json.dumps(deployments_details, indent=2))

List deployments

client.deployments.list(limit=5)

5.3 Score deployed model

Prepare sample scoring data to score the deployed model.

import wget

dataset_filename = 'mnist.npz'
if not os.path.isfile(dataset_filename):
    dataset_filename = wget.download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd5.1/data/mnist/mnist.npz')
import numpy as np

mnist_dataset = np.load(dataset_filename)
x_test = mnist_dataset['x_test']
image_1 = x_test[0].ravel() / 255
image_2 = x_test[1].ravel() / 255
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([x_test[0], x_test[1]]):
    plt.subplot(2, 2, i + 1)
    plt.axis('off')
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
Image in a Jupyter notebook

Build a scoring dictionary consisting of two digits and send it to the deployed model to get predictions.

scoring_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [
        {'values': [image_1.tolist(), image_2.tolist()]}
    ]
}

scores = client.deployments.score(deployment_id, meta_props=scoring_payload)
print("Scoring result:\n" + json.dumps(scores, indent=2))
Scoring result:
{
  "predictions": [
    {
      "id": "classes",
      "values": [
        7,
        2
      ]
    }
  ]
}
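
As a quick sanity check, you can compare the returned classes with the ground-truth labels; this assumes the mnist.npz archive also contains a y_test array, as in the standard MNIST npz layout.

# Sanity check: compare the predicted classes with the ground-truth labels.
# Assumes the archive provides a 'y_test' array (standard MNIST npz layout).
predicted = scores['predictions'][0]['values']
expected = mnist_dataset['y_test'][:2].tolist()
print("Predicted:", predicted)
print("Expected: ", expected)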

6. Clean up

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

please follow this sample notebook, or use the delete calls sketched below.
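
If you only want to remove the assets created in this notebook, a minimal sketch using the SDK's delete methods is shown below; it assumes the variables from the previous sections are still defined and that these delete calls match your ibm-watsonx-ai version.

# Minimal cleanup sketch: remove only the assets created in this notebook.
# Assumes deployment_id, model_id, model_definition_id and training_id are still defined.
client.deployments.delete(deployment_id)
client.repository.delete(model_id)
client.model_definitions.delete(model_definition_id)
client.training.cancel(training_id, hard_delete=True)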

7. Summary and next steps

You successfully completed this notebook! You learned how to use ibm-watsonx-ai to train and score TensorFlow models.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Author

Jan Sołtysik, Intern in Watson Machine Learning.

Copyright © 2020-2025 IBM. This notebook and its source code are released under the terms of the MIT License.