
Export/Import assets with ibm-watson-machine-learning

This notebook demonstrates how to export and import assets using the Watson Machine Learning service. It contains steps and code for working with the ibm-watson-machine-learning library, available from the PyPI repository.

Learning goals

The learning goals of this notebook are:

  • Download an externally trained Keras model.

  • Persist the external model in the Watson Machine Learning repository.

  • Export the model from the space.

  • Import the model into another space and deploy it (for Cloud Pak for Data 4.0, the space/project where assets are imported has to be empty).

Contents

This notebook contains the following parts:

  1. Setup

  2. Download externally created Keras model

  3. Persist externally created Keras model

  4. Export the model

  5. Import the model

  6. Deploy and score the imported model

  7. Clean up

  8. Summary and next steps

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask for your account credentials.

Connection to WML

Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform url, your username, and your api_key.

username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API_KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'

wml_credentials = {
    "username": username,
    "apikey": api_key,
    "url": url,
    "instance_id": 'openshift',
    "version": '4.8'
}

Alternatively, you can use username and password to authenticate WML services.

wml_credentials = {
    "username": ***,
    "password": ***,
    "url": ***,
    "instance_id": 'openshift',
    "version": '4.8'
}

ibm-watson-machine-learning is packaged in Cloud Pak for Data 4.8.

Note: ibm-watson-machine-learning documentation can be found here.

from ibm_watson_machine_learning import APIClient

client = APIClient(wml_credentials)

Create two spaces: one for export and one for import.

Tip: You can refer to examples of the space management APIs here.

import uuid
import json

space_name = str(uuid.uuid4())

export_space_metadata = {
    client.spaces.ConfigurationMetaNames.NAME: 'client_space_export_' + space_name,
    client.spaces.ConfigurationMetaNames.DESCRIPTION: space_name + ' description'
}
space = client.spaces.store(meta_props=export_space_metadata)
export_space_id = client.spaces.get_id(space)
print("{}export space_id: {}{}".format('\n', export_space_id, '\n'))

import_space_metadata = {
    client.spaces.ConfigurationMetaNames.NAME: 'client_space_import_' + space_name,
    client.spaces.ConfigurationMetaNames.DESCRIPTION: space_name + ' description'
}
space = client.spaces.store(meta_props=import_space_metadata)
import_space_id = client.spaces.get_id(space)
print("{}import space_id: {}".format('\n', import_space_id))
Space has been created. However some background setup activities might still be on-going. Check for 'status' field in the response. It has to show 'active' before space can be used. If its not 'active', you can monitor the state with a call to spaces.get_details(space_id)

export space_id: be32da95-1735-4c8a-96aa-e4d6b8b6a340

Space has been created. However some background setup activities might still be on-going. Check for 'status' field in the response. It has to show 'active' before space can be used. If its not 'active', you can monitor the state with a call to spaces.get_details(space_id)

import space_id: 68dbf09b-5ddb-4c0c-a925-2ee9327f511d
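As the message above notes, a space may still be provisioning when store() returns. Below is a minimal sketch of waiting for both spaces to become usable, assuming the state is exposed at entity.status.state in the spaces.get_details() response:

import time

# Poll until both spaces report 'active' (assumption: the state is found
# at entity.status.state in the spaces.get_details response).
for space_id in [export_space_id, import_space_id]:
    while client.spaces.get_details(space_id)['entity']['status']['state'] != 'active':
        time.sleep(5)
    print("space {} is active".format(space_id))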

2. Download externally created Keras model and data

In this section, you will download an externally created Keras model and the data used to train it.

import os
import wget
import ssl

data_dir = 'MNIST_DATA'
if not os.path.isdir(data_dir):
    os.mkdir(data_dir)

model_path = os.path.join(data_dir, 'mnist_keras.h5.tgz')
if not os.path.isfile(model_path):
    wget.download("https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd4.0/models/keras/mnist_keras.h5.tgz", out=data_dir)
data_dir = 'MNIST_DATA'
if not os.path.isdir(data_dir):
    os.mkdir(data_dir)

filename = os.path.join(data_dir, 'mnist.npz')
if not os.path.isfile(filename):
    wget.download('https://s3.amazonaws.com/img-datasets/mnist.npz', out=data_dir)
import numpy as np

dataset = np.load(filename)
x_test = dataset['x_test']
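As a quick sanity check, you can confirm the shape of the test set (MNIST images are 28x28 grayscale, with 10,000 test samples):

# Sanity check: the MNIST test set is 10,000 images of 28x28 pixels.
print(x_test.shape)  # expected: (10000, 28, 28)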

3. Persist externally created Keras model

In this section, you will learn how to store your model in Watson Machine Learning repository by using the Watson Machine Learning Client.

3.1: Publish model

Define the model name, type, and software specification needed to deploy the model later.

software_spec_uid = client.software_specifications.get_id_by_name("runtime-23.1-py3.10")

client.set.default_space(export_space_id)

metadata = {
    client.repository.ModelMetaNames.NAME: 'External Keras model',
    client.repository.ModelMetaNames.TYPE: 'tensorflow_2.12',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}

published_model = client.repository.store_model(
    model=model_path,
    meta_props=metadata)
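If the "runtime-23.1-py3.10" specification is not available on your cluster, you can list the software specifications known to the client and pick a compatible one:

# Optional: list the available software specifications to choose a runtime.
client.software_specifications.list()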

3.2: Get model details

import json

published_model_uid = client.repository.get_model_uid(published_model)
model_details = client.repository.get_details(published_model_uid)
print(json.dumps(model_details, indent=2))
{ "entity": { "software_spec": { "id": "e4429883-c883-42b6-87a8-f419d64088cd", "name": "default_py3.7" }, "type": "tensorflow_2.1" }, "metadata": { "created_at": "2020-12-08T13:29:30.489Z", "id": "7bd7c597-01f5-496f-8131-f9464248cd4b", "modified_at": "2020-12-08T13:29:48.846Z", "name": "External Keras model", "owner": "1000330999", "space_id": "be32da95-1735-4c8a-96aa-e4d6b8b6a340" }, "system": { "warnings": [] } }

3.3 Get all models in the space

The space_id is picked up automatically from the earlier client.set.default_space() API call.

models_details = client.repository.list_models()

4. Export

help(client.export_assets.start)
Help on method start in module ibm_watson_machine_learning.export_assets:

start(meta_props, space_id=None, project_id=None) method of ibm_watson_machine_learning.export_assets.Export instance
    Start the export. Either space_id or project_id has to be provided and is mandatory.
    ALL_ASSETS is by default False. No need to provide explicitly unless it has to be set to True.
    Either ALL_ASSETS or ASSET_TYPES or ASSET_IDS has to be given in the meta_props. Only one of these can be provided.

    In the meta_props:
    ALL_ASSETS is a boolean. When set to True, it exports all assets in the given space.
    ASSET_IDS is an array containing the list of assets ids to be exported.
    ASSET_TYPES is for providing the asset types to be exported. All assets of that asset type will be exported.
    Eg: wml_model, wml_model_definition, wml_pipeline, wml_function, wml_experiment,
    software_specification, hardware_specification, package_extension, script

    **Parameters**

    #. **meta_props**: meta data. To see available meta names use **client.export_assets.ConfigurationMetaNames.get()**
       **type**: dict
    #. **space_id**: Space identifier
       **type**: str
    #. **project_id**: Project identifier
       **type**: str

    **Output**

    **returns**: Response json
    **return type**: dict

    **Example**

    >>> metadata = {
    >>>     client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    >>>     client.export_assets.ConfigurationMetaNames.ASSET_IDS: ["13a53931-a8c0-4c2f-8319-c793155e7517", "13a53931-a8c0-4c2f-8319-c793155e7518"]
    >>> }
    >>> details = client.export_assets.start(meta_props=metadata, space_id="98a53931-a8c0-4c2f-8319-c793155e4598")

    >>> metadata = {
    >>>     client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    >>>     client.export_assets.ConfigurationMetaNames.ASSET_TYPES: ["wml_model"],
    >>> }
    >>> details = client.export_assets.start(meta_props=metadata, space_id="98a53931-a8c0-4c2f-8319-c793155e4598")

    >>> metadata = {
    >>>     client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    >>>     client.export_assets.ConfigurationMetaNames.ALL_ASSETS: True,
    >>> }

client.export_assets provides the following APIs. For help on any of them, type help(api_name) in your notebook, for example help(client.export_assets.start) or help(client.export_assets.get_details):

  1. client.export_assets.start: Starts the export job. The export job is executed asynchronously.

  2. client.export_assets.get_details: Given an export_id and the corresponding space_id/project_id, returns the export job details. Usually used for monitoring the export job submitted with the start API.

  3. client.export_assets.list: Prints a summary of all the export jobs.

  4. client.export_assets.get_exported_content: Downloads the exported content. This content is used by the import process.

  5. client.export_assets.delete: Deletes the given export job.

  6. client.export_assets.cancel: Cancels the given export job if it is running.

4.1: Start the export process

Start the export process for the model created earlier. Exactly one of ASSET_IDS, ASSET_TYPES, or ALL_ASSETS can be provided. If you have more than one model id, provide them as an array, for example client.export_assets.ConfigurationMetaNames.ASSET_IDS: [model_id_1, model_id_2]. Refer to the help output above for different usages and details; a sketch of the ALL_ASSETS variant follows the output below.

metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    client.export_assets.ConfigurationMetaNames.ASSET_IDS: [published_model_uid]
}

details = client.export_assets.start(meta_props=metadata, space_id=export_space_id)
print(json.dumps(details, indent=2))

export_job_id = details['metadata']['id']
export job with id a182d2e2-8d1f-46b9-9b11-28e2d2e7ce9a has started. Monitor status using client.export_assets.get_details api. Check 'help(client.export_assets.get_details)' for details on the api usage

{
  "entity": {
    "assets": {
      "all_assets": false,
      "asset_ids": [
        "7bd7c597-01f5-496f-8131-f9464248cd4b"
      ]
    },
    "status": {
      "state": "pending"
    }
  },
  "metadata": {
    "created_at": "2020-12-08T13:30:00.306Z",
    "creator_id": "1000330999",
    "id": "a182d2e2-8d1f-46b9-9b11-28e2d2e7ce9a",
    "name": "export_model",
    "space_id": "be32da95-1735-4c8a-96aa-e4d6b8b6a340",
    "url": "/v2/asset_exports/a182d2e2-8d1f-46b9-9b11-28e2d2e7ce9a"
  }
}
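For reference, here is a sketch (not executed in this notebook) of exporting every asset in the space with ALL_ASSETS instead of ASSET_IDS, following the usage shown in the help output above:

# Sketch only: export all assets in the space, per the ALL_ASSETS usage
# documented in help(client.export_assets.start).
all_assets_metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_all",
    client.export_assets.ConfigurationMetaNames.ALL_ASSETS: True
}
# all_details = client.export_assets.start(meta_props=all_assets_metadata, space_id=export_space_id)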

4.2: Monitor the export process

import time

start_time = time.time()
diff_time = start_time - start_time

while diff_time < 10 * 60:
    time.sleep(3)
    response = client.export_assets.get_details(export_job_id, space_id=export_space_id)
    state = response['entity']['status']['state']
    print(state)
    if state == 'completed' or state == 'error' or state == 'failed':
        break
    diff_time = time.time() - start_time

print(json.dumps(response, indent=2))
pending
completed
{
  "entity": {
    "assets": {
      "all_assets": false,
      "asset_ids": [
        "7bd7c597-01f5-496f-8131-f9464248cd4b"
      ]
    },
    "status": {
      "state": "completed"
    }
  },
  "metadata": {
    "created_at": "2020-12-08T13:30:00.306Z",
    "creator_id": "1000330999",
    "id": "a182d2e2-8d1f-46b9-9b11-28e2d2e7ce9a",
    "name": "export_model",
    "space_id": "be32da95-1735-4c8a-96aa-e4d6b8b6a340",
    "updated_at": "2020-12-08T13:30:07.102Z",
    "url": "/v2/asset_exports/a182d2e2-8d1f-46b9-9b11-28e2d2e7ce9a"
  }
}

4.3: Get the exported content

export_dir = 'EXPORT_DATA'
if not os.path.isdir(export_dir):
    os.mkdir(export_dir)

export_file_name = 'exported_content_' + str(uuid.uuid4()) + '.zip'
export_file_path = os.path.join(export_dir, export_file_name)

details = client.export_assets.get_exported_content(export_job_id,
                                                    space_id=export_space_id,
                                                    file_path=export_file_path)
print(details)
Successfully saved export content to file: 'EXPORT_DATA/exported_content_b2d90b7d-f95b-4121-84cc-58cab5d43de4.zip'

EXPORT_DATA/exported_content_b2d90b7d-f95b-4121-84cc-58cab5d43de4.zip
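The exported content is a regular zip archive, so you can peek inside it with the standard library if you are curious. Note that the archive layout is an implementation detail and may change between releases:

import zipfile

# Inspect the exported archive (the layout is an implementation detail).
with zipfile.ZipFile(export_file_path) as zf:
    print(zf.namelist())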

5. Import

client.import_assets provides the following APIs. For help on any of them, type help(api_name) in your notebook, for example help(client.import_assets.start) or help(client.import_assets.get_details):

  1. client.import_assets.start: Starts the import job. The import job is executed asynchronously.

  2. client.import_assets.get_details: Given an import_id and the corresponding space_id/project_id, returns the import job details. Usually used for monitoring the import job submitted with the start API.

  3. client.import_assets.list: Prints a summary of all the import jobs.

  4. client.import_assets.delete: Deletes the given import job.

  5. client.import_assets.cancel: Cancels the given import job if it is running.

5.1: Start the import process

details = client.import_assets.start(file_path=export_file_path, space_id=import_space_id)
print(json.dumps(details, indent=2))

import_job_id = details['metadata']['id']
import job with id 6d7c3f3f-6a9b-4942-b05f-d4381222fa73 has started. Monitor status using client.import_assets.get_details api. Check 'help(client.import_assets.get_details)' for details on the api usage

{
  "entity": {
    "status": {
      "state": "pending"
    }
  },
  "metadata": {
    "created_at": "2020-12-08T13:30:41.646Z",
    "creator_id": "1000330999",
    "id": "6d7c3f3f-6a9b-4942-b05f-d4381222fa73",
    "space_id": "68dbf09b-5ddb-4c0c-a925-2ee9327f511d",
    "url": "/v2/asset_imports/6d7c3f3f-6a9b-4942-b05f-d4381222fa73"
  }
}

5.2: Monitor the import process

import time

start_time = time.time()
diff_time = start_time - start_time

while diff_time < 10 * 60:
    time.sleep(3)
    response = client.import_assets.get_details(import_job_id, space_id=import_space_id)
    state = response['entity']['status']['state']
    print(state)
    if state == 'completed' or state == 'error' or state == 'failed':
        break
    diff_time = time.time() - start_time

print(json.dumps(response, indent=2))

client.set.default_space(import_space_id)

print("{}List of models: {}".format('\n', '\n'))
client.repository.list_models()

details = client.repository.get_model_details()
for obj in details['resources']:
    if obj['metadata']['name'] == "External Keras model":
        model_id_for_deployment = obj['metadata']['id']

print("{}model id for deployment: {}".format('\n', model_id_for_deployment))
completed { "entity": { "status": { "state": "completed" } }, "metadata": { "created_at": "2020-12-08T13:30:41.646Z", "creator_id": "1000330999", "id": "6d7c3f3f-6a9b-4942-b05f-d4381222fa73", "space_id": "68dbf09b-5ddb-4c0c-a925-2ee9327f511d", "updated_at": "2020-12-08T13:30:44.922Z", "url": "/v2/asset_imports/6d7c3f3f-6a9b-4942-b05f-d4381222fa73" } } List of models: ------------------------------------ -------------------- ------------------------ -------------- ID NAME CREATED TYPE 0e22514a-927b-4909-a562-e12ca0c46c29 External Keras model 2020-12-08T13:30:43.002Z tensorflow_2.1 ------------------------------------ -------------------- ------------------------ -------------- model id for deployment: 0e22514a-927b-4909-a562-e12ca0c46c29

List the import and export jobs

print("Export jobs: \n") client.export_assets.list(space_id=export_space_id) print("\nImport jobs:") client.import_assets.list(space_id=import_space_id)
Export jobs:

------------------------------------  ------------  ------------------------  ---------
ID                                    NAME          CREATED                   STATUS
a182d2e2-8d1f-46b9-9b11-28e2d2e7ce9a  export_model  2020-12-08T13:30:00.306Z  completed
------------------------------------  ------------  ------------------------  ---------

Import jobs:

------------------------------------  ------------------------  ---------
ID                                    CREATED                   STATUS
6d7c3f3f-6a9b-4942-b05f-d4381222fa73  2020-12-08T13:30:41.646Z  completed
------------------------------------  ------------------------  ---------

6. Deploy and score the imported model

6.1: Create model deployment

Create an online deployment for the published model.

metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "Deployment of external Keras model",
    client.deployments.ConfigurationMetaNames.ONLINE: {}
}

created_deployment = client.deployments.create(model_id_for_deployment, meta_props=metadata)
#######################################################################################

Synchronous deployment creation for uid: '0e22514a-927b-4909-a562-e12ca0c46c29' started

#######################################################################################

initializing.
ready

------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='eb2c9c76-c5eb-455d-994c-02331e1af342'
------------------------------------------------------------------------------------------------
deployment_uid = client.deployments.get_uid(created_deployment)

Now you can print an online scoring endpoint.

scoring_endpoint = client.deployments.get_scoring_href(created_deployment)
print(scoring_endpoint)
https://wmlgmc-cpd-wmlgmc.apps.wmlautoai.cp.fyre.ibm.com/ml/v4/deployments/eb2c9c76-c5eb-455d-994c-02331e1af342/predictions
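You can also call this endpoint over plain REST. The sketch below assumes the client exposes its bearer token as client.wml_token and that the deployment accepts a dated version query parameter; adjust the token retrieval and TLS verification to match your cluster's setup:

import requests

# Sketch only: score over raw REST. Assumes client.wml_token holds a valid
# bearer token and that a dated 'version' query parameter is accepted.
header = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer ' + client.wml_token
}
rest_payload = {"input_data": [{"values": [x_test[0].flatten().tolist()]}]}
response = requests.post(scoring_endpoint + '?version=2023-05-01',
                         json=rest_payload, headers=header,
                         verify=False)  # verify=False only for self-signed certs
print(response.json())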

You can also list existing deployments.

client.deployments.list()

6.2: Get deployment details

details = client.deployments.get_details(deployment_uid)
print(json.dumps(details, indent=2))
{ "entity": { "asset": { "id": "0e22514a-927b-4909-a562-e12ca0c46c29" }, "custom": {}, "deployed_asset_type": "model", "hardware_spec": { "id": "Not_Applicable", "name": "XXS", "num_nodes": 1 }, "name": "Deployment of external Keras model", "online": {}, "space_id": "68dbf09b-5ddb-4c0c-a925-2ee9327f511d", "status": { "online_url": { "url": "https://wmlgmc-cpd-wmlgmc.apps.wmlautoai.cp.fyre.ibm.com/ml/v4/deployments/eb2c9c76-c5eb-455d-994c-02331e1af342/predictions" }, "state": "ready" } }, "metadata": { "created_at": "2020-12-08T13:30:59.496Z", "id": "eb2c9c76-c5eb-455d-994c-02331e1af342", "modified_at": "2020-12-08T13:30:59.496Z", "name": "Deployment of external Keras model", "owner": "1000330999", "space_id": "68dbf09b-5ddb-4c0c-a925-2ee9327f511d" } }

6.3: Score

You can use the method below to send a test scoring request to the deployed model.

Let's first visualize the two samples from the dataset that we'll use for scoring. You must have the matplotlib package installed.

%matplotlib inline
import matplotlib.pyplot as plt

for i, image in enumerate([x_test[0], x_test[1]]):
    plt.subplot(2, 2, i + 1)
    plt.axis('off')
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
[Output: plot of the two sample MNIST digits]

Prepare the scoring payload with the records to score.

score_0 = x_test[0].flatten().tolist()
score_1 = x_test[1].flatten().tolist()

scoring_payload = {"input_data": [{"values": [score_0, score_1]}]}

Use the client.deployments.score() method to run scoring.

predictions = client.deployments.score(deployment_uid, scoring_payload)
print(json.dumps(predictions, indent=2))
{ "predictions": [ { "id": "dense_2", "fields": [ "prediction", "prediction_classes", "probability" ], "values": [ [ [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0 ], 7, [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0 ] ], [ [ 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 ], 2, [ 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 ] ] ] } ] }

7. Clean up

client.export_assets.delete(export_job_id, space_id=export_space_id)
client.import_assets.delete(import_job_id, space_id=import_space_id)

client.spaces.delete(export_space_id)
client.spaces.delete(import_space_id)
Export job deleted
Import job deleted
DELETED
DELETED

'SUCCESS'

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

please follow this sample notebook.

8. Summary and next steps

You successfully completed this notebook! You learned how to use the export/import assets client APIs. Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Mithun - [email protected], Software Engineer

Copyright © 2020-2025 IBM. This notebook and its source code are released under the terms of the MIT License.