Export/Import assets with ibm-watsonx-ai
This notebook demonstrates how to export and import assets using the watsonx.ai Runtime service. It contains steps and code to work with the ibm-watsonx-ai library available in the PyPI repository.
Learning goals
The learning goals of this notebook are:
Download an externally trained Keras model.
Persist an external model in watsonx.ai Runtime repository.
Export the model from the space.
Import the model to another space and deploy it.
Contents
This notebook contains the following parts:
1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
Create a watsonx.ai Runtime Service instance (a free plan is offered and information about how to create the instance can be found here).
Install and import the ibm-watsonx-ai package and its dependencies.
Note: ibm-watsonx-ai documentation can be found here.
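A minimal install cell, assuming you are running in a notebook environment where shell commands are available:

```python
!pip install -U ibm-watsonx-ai
```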
Connection to watsonx.ai Runtime
Authenticate the watsonx.ai Runtime service on IBM Cloud. You need to provide your platform api_key and instance location.
You can use the IBM Cloud CLI to retrieve the platform API key and instance location.
An API key can be generated in the following way:
From the output, copy the value of api_key.
The location of your watsonx.ai Runtime instance can be retrieved in the following way:
From the output, copy the value of location.
Tip: Your Cloud API key can be generated by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name and click Create, then copy the created key and paste it below. You can also get a service-specific URL by going to the Endpoint URLs section of the watsonx.ai Runtime docs. You can check your instance location in your watsonx.ai Runtime Service instance details.
You can also get a service-specific API key by going to the Service IDs section of the Cloud Console. From that page, click Create, then copy the created key and paste it below.
Action: Enter your api_key and location in the following cell.
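A minimal sketch of creating the client, assuming the standard regional endpoint pattern https://{location}.ml.cloud.ibm.com:

```python
from ibm_watsonx_ai import APIClient, Credentials

api_key = "PASTE YOUR PLATFORM API KEY HERE"
location = "PASTE YOUR INSTANCE LOCATION HERE"  # e.g. us-south

credentials = Credentials(
    api_key=api_key,
    url=f"https://{location}.ml.cloud.ibm.com",
)

client = APIClient(credentials)
```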
Create two spaces: one for export and one for import.
Tip: You can refer to details and examples of the space management APIs here.
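A sketch of creating the two spaces with client.spaces.store. The Cloud Object Storage CRN, the Runtime instance CRN, and the space names are placeholders you need to replace with values from your account:

```python
# Placeholders: supply your Cloud Object Storage CRN and watsonx.ai Runtime instance CRN.
cos_resource_crn = "PASTE YOUR COS RESOURCE CRN HERE"
instance_crn = "PASTE YOUR watsonx.ai RUNTIME INSTANCE CRN HERE"

def create_space(name):
    metadata = {
        client.spaces.ConfigurationMetaNames.NAME: name,
        client.spaces.ConfigurationMetaNames.STORAGE: {
            "type": "bmcos_object_storage",
            "resource_crn": cos_resource_crn,
        },
        client.spaces.ConfigurationMetaNames.COMPUTE: {
            "name": "watsonx.ai Runtime instance",
            "crn": instance_crn,
        },
    }
    # Space creation is asynchronous; it may take a few moments to become active.
    details = client.spaces.store(meta_props=metadata)
    return client.spaces.get_id(details)

export_space_id = create_space("export_space")
import_space_id = create_space("import_space")

# Work in the export space first.
client.set.default_space(export_space_id)
```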
In this section, you will learn how to store your model in the watsonx.ai Runtime repository by using the watsonx.ai client.
3.1: Publish model
Define the model name, type, and software specification needed to deploy the model later.
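A sketch of publishing the model. The variable keras_model_path, the model type tensorflow_2.14, and the software specification runtime-24.1-py3.11 are assumptions; adjust them to match the downloaded model and the runtimes available in your instance:

```python
# Assumption: `keras_model_path` points at the downloaded Keras model archive (.tgz or .h5).
software_spec_id = client.software_specifications.get_id_by_name("runtime-24.1-py3.11")

model_metadata = {
    client.repository.ModelMetaNames.NAME: "External Keras model",
    client.repository.ModelMetaNames.TYPE: "tensorflow_2.14",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: software_spec_id,
}

published_model = client.repository.store_model(model=keras_model_path, meta_props=model_metadata)
model_id = client.repository.get_model_id(published_model)
print("Model id:", model_id)
```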
3.2: Get model details
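For example, the stored model details can be retrieved with:

```python
model_details = client.repository.get_details(model_id)
print(model_details)
```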
3.3: Get all models in the space
The space_id is picked up automatically from the earlier client.set.default_space() API call.
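```python
# Lists all models stored in the current default space.
client.repository.list_models()
```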
client.export_assets provides the following APIs. For help on any of them, type help(api_name) in your notebook, for example help(client.export_assets.start) or help(client.export_assets.get_details).
client.export_assets.start: Starts the export job. The export job is executed asynchronously.
client.export_assets.get_details: Given an export_id and the corresponding space_id/project_id, returns the export job details. Usually used to monitor an export job submitted with the start API.
client.export_assets.list: Prints a summary of all export jobs.
client.export_assets.get_exported_content: Downloads the exported content. This content is used by the import process.
client.export_assets.delete: Deletes the given export job.
client.export_assets.cancel: Cancels the given export job if it is running.
4.1: Start the export process
Start the export process for the model created above. You can provide either ASSET_IDS, ASSET_TYPES, or ALL_ASSETS. If you have more than one model id, provide them as an array, for example client.export_assets.ConfigurationMetaNames.ASSET_IDS: [model_id1, model_id2]. Refer to the help API above to see different usages and details.
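A sketch of starting the export, assuming the export_space_id and model_id defined earlier; reading the job id from the metadata section of the response is an assumption about the response structure:

```python
export_metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_keras_model",
    client.export_assets.ConfigurationMetaNames.ASSET_IDS: [model_id],
}

export_details = client.export_assets.start(meta_props=export_metadata, space_id=export_space_id)
export_id = export_details["metadata"]["id"]  # assumed location of the job id
```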
4.2: Monitor the export process
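A simple polling loop using get_details; the status field path is an assumption based on a typical export job response:

```python
import time

while True:
    details = client.export_assets.get_details(export_id, space_id=export_space_id)
    state = details["entity"]["status"]["state"]  # assumed status field path
    print("Export state:", state)
    if state in ("completed", "failed", "canceled"):
        break
    time.sleep(10)
```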
4.3: Get the exported content
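The exported archive can be downloaded to a local file; the file name is arbitrary:

```python
exported_file = client.export_assets.get_exported_content(
    export_id,
    space_id=export_space_id,
    file_path="exported_content.zip",
)
print("Saved to:", exported_file)
```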
client.import_assets provides the following APIs. For help on any of them, type help(api_name) in your notebook, for example help(client.import_assets.start) or help(client.import_assets.get_details).
client.import_assets.start: Starts the import job. The import job is executed asynchronously.
client.import_assets.get_details: Given an import_id and the corresponding space_id/project_id, returns the import job details. Usually used to monitor an import job submitted with the start API.
client.import_assets.list: Prints a summary of all import jobs.
client.import_assets.delete: Deletes the given import job.
client.import_assets.cancel: Cancels the given import job if it is running.
5.1: Start the import process
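A sketch of starting the import into the second space, assuming the archive downloaded above and the import_space_id created earlier; the job id path is an assumption:

```python
import_details = client.import_assets.start(
    file_path="exported_content.zip",
    space_id=import_space_id,
)
import_id = import_details["metadata"]["id"]  # assumed location of the job id
```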
5.2: Monitor the import process
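A polling loop analogous to the export one; the status field path is again an assumption:

```python
import time

while True:
    details = client.import_assets.get_details(import_id, space_id=import_space_id)
    state = details["entity"]["status"]["state"]  # assumed status field path
    print("Import state:", state)
    if state in ("completed", "failed", "canceled"):
        break
    time.sleep(10)
```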
List the import and export jobs
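```python
# Print summaries of the export and import jobs in their respective spaces.
client.export_assets.list(space_id=export_space_id)
client.import_assets.list(space_id=import_space_id)
```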
6.1: Create model deployment
Create an online deployment for the published model.
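A sketch of deploying the imported model. The imported model id is a placeholder you need to take from the model listing in the import space:

```python
# Switch to the import space and look up the imported model.
client.set.default_space(import_space_id)
client.repository.list_models()

# Placeholder: paste the id of the imported model printed above.
imported_model_id = "PASTE THE IMPORTED MODEL ID HERE"

deployment_metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "Keras model deployment",
    client.deployments.ConfigurationMetaNames.ONLINE: {},
}

deployment_details = client.deployments.create(imported_model_id, meta_props=deployment_metadata)
deployment_id = client.deployments.get_id(deployment_details)
```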
Now you can print an online scoring endpoint.
You can also list existing deployments.
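```python
# Print the online scoring endpoint of the new deployment.
scoring_endpoint = client.deployments.get_scoring_href(deployment_details)
print(scoring_endpoint)

# List existing deployments in the current space.
client.deployments.list()
```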
6.2: Get deployment details
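```python
# Retrieve the full details of the deployment.
client.deployments.get_details(deployment_id)
```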
6.3: Score
You can use the method below to send a test scoring request to the deployed model.
Let's first visualize two samples from the dataset that we'll use for scoring. You must have the matplotlib package installed.
Prepare the scoring payload with the records to score.
Use the client.deployments.score() method to run scoring.
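A sketch of the scoring call; the variable x_test is an assumption standing for test images preprocessed the same way the Keras model was trained:

```python
# Assumption: `x_test` holds preprocessed test samples matching the model's expected input shape.
sample_1, sample_2 = x_test[0], x_test[1]

scoring_payload = {
    "input_data": [
        {"values": [sample_1.tolist(), sample_2.tolist()]}
    ]
}

predictions = client.deployments.score(deployment_id, scoring_payload)
print(predictions)
```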
If you want to clean up all created assets:
experiments
trainings
pipelines
model definitions
models
functions
deployments
please follow this sample notebook.
You successfully completed this notebook! You learned how to use the export/import assets client APIs. Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.
Authors
Mithun, Software Engineer
Mateusz Szewczyk, Software Engineer at watsonx.ai
Copyright © 2020-2025 IBM. This notebook and its source code are released under the terms of the MIT License.