GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cpd5.3/notebooks/python_sdk/deployments/ai_services/Use watsonx to run AI service and switch between LLMs by updating the deployment.ipynb
Kernel: watsonx-ai-samples-py-312


Use watsonx to run AI service and switch between LLMs by updating the deployment

Disclaimers

  • Use only Projects and Spaces that are available in watsonx context.

Notebook content

This notebook provides a detailed demonstration of the steps and code required to work with the watsonx.ai AI service feature.

Some familiarity with Python is helpful. This notebook uses Python 3.12.

Learning goal

The goal is to demonstrate how an AI service deployment using one LLM can be switched to another LLM of choice with zero downtime. It also highlights how an AI service asset can create a new revision and update the deployment accordingly.

Table of Contents

This notebook contains the following parts:

  • Set up the environment

  • Create AI service

  • Testing AI service's function locally

  • Deploy AI service

  • Create AI service revision

  • Summary and next steps

Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your IBM Cloud Pak® for Data administrator to obtain your account credentials

Install dependencies

%pip install -U "ibm_watsonx_ai>=1.3.33" | tail -n 1
Successfully installed anyio-4.11.0 cachetools-6.2.2 certifi-2025.11.12 charset_normalizer-3.4.4 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 ibm-cos-sdk-2.14.3 ibm-cos-sdk-core-2.14.3 ibm-cos-sdk-s3transfer-2.14.3 ibm_watsonx_ai-1.4.6 idna-3.11 jmespath-1.0.1 lomond-0.3.3 numpy-2.3.5 pandas-2.2.3 pytz-2025.2 requests-2.32.5 sniffio-1.3.1 tabulate-0.9.0 typing_extensions-4.15.0 tzdata-2025.2 urllib3-2.5.0

Define credentials

Authenticate the watsonx.ai Runtime service on IBM Cloud Pak® for Data. You need to provide the admin's username and the platform URL.

username = "PASTE YOUR USERNAME HERE"
url = "PASTE THE PLATFORM URL HERE"

Use the admin's api_key to authenticate watsonx.ai Runtime services:

import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=getpass.getpass("Enter your watsonx.ai API key and hit enter: "),
    url=url,
    instance_id="openshift",
    version="5.3",
)

Alternatively, you can use the admin's password:

import getpass

from ibm_watsonx_ai import Credentials

if "credentials" not in locals() or not credentials.api_key:
    credentials = Credentials(
        username=username,
        password=getpass.getpass("Enter your watsonx.ai password and hit enter: "),
        url=url,
        instance_id="openshift",
        version="5.3",
    )

Working with spaces

First, you need to create a space that will be used for your work. If you do not have a space, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to space Settings tab

  • Copy space_id and paste it below

Tip: You can also use the SDK to prepare the space for your work. More information can be found here.

Action: Assign space ID below

space_id = "PASTE YOUR SPACE ID HERE"

Create APIClient instance

from ibm_watsonx_ai import APIClient

api_client = APIClient(credentials, space_id=space_id)

Specify model

This notebook uses text models meta-llama/llama-3-1-8b-instruct and redhatai/llama-4-scout-17b-16e-instruct-int4, which have to be available on your IBM Cloud Pak® for Data environment for this notebook to run successfully. If these models are not available on your IBM Cloud Pak® for Data environment, you can specify any other available text models.

You can list available text models by running the cell below.

if len(api_client.foundation_models.TextModels):
    print(*api_client.foundation_models.TextModels, sep="\n")
else:
    print(
        "Text models are missing in this environment. Install text models to proceed."
    )
ibm/granite-3b-code-instruct
meta-llama/llama-3-1-8b-instruct
mistralai/voxtral-small-24b-2507
redhatai/llama-4-scout-17b-16e-instruct-int4
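Before proceeding, it can be convenient to verify that both models this notebook relies on are actually present. A minimal, hypothetical helper (not part of the SDK) that compares the required model ids against a list like the one printed above:

```python
def find_missing_models(required, available):
    """Return the required model ids that are absent from the environment."""
    return sorted(set(required) - set(available))

required = [
    "meta-llama/llama-3-1-8b-instruct",
    "redhatai/llama-4-scout-17b-16e-instruct-int4",
]
# In the notebook, `available` would be list(api_client.foundation_models.TextModels)
available = [
    "ibm/granite-3b-code-instruct",
    "meta-llama/llama-3-1-8b-instruct",
    "redhatai/llama-4-scout-17b-16e-instruct-int4",
]
print(find_missing_models(required, available))  # -> []
```

If the returned list is non-empty, substitute any other available text models before continuing.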

Create AI service

Prepare the function that will be deployed as an AI service.

The example below uses meta-llama/llama-3-1-8b-instruct as its model_id.

def deployable_ai_service(
    context, model_id="meta-llama/llama-3-1-8b-instruct", url=url
):
    from ibm_watsonx_ai import APIClient, Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    parameters = {
        "decoding_method": "sample",
        "max_new_tokens": 100,
        "min_new_tokens": 1,
        "temperature": 0.1,
        "top_k": 50,
        "top_p": 1,
    }

    # token and space_id are available from the context object
    api_client = APIClient(
        credentials=Credentials(
            url=url,
            token=context.generate_token(),
            instance_id="openshift",
            version="5.3",
        ),
        space_id=context.get_space_id(),
    )

    model = ModelInference(
        model_id=model_id,
        api_client=api_client,
        params=parameters,
    )

    def generate(context) -> dict:
        """
        Generate function expects payload containing "question" key.

        Request json example:
        {
            "question": "<your question>"
        }

        Response body will provide answer under key: "answer".
        """
        # set the token for the inference user
        api_client.set_token(context.get_token())

        payload = context.get_json()
        question = payload["question"]

        answer = model.generate_text(question)

        return {"body": {"answer": answer, "model_id": model.model_id}}

    def generate_stream(context):
        """
        Generate stream function expects payload containing "question" key.

        Request json example:
        {
            "question": "<your question>"
        }

        The answer is returned as stream.
        """
        # set the token for the inference user
        api_client.set_token(context.get_token())

        payload = context.get_json()
        question = payload["question"]

        yield from ({"delta": delta} for delta in model.generate_text_stream(question))

    return generate, generate_stream
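The AI service only touches a handful of methods on the context object: generate_token, get_space_id, get_token, and get_json. For offline unit tests of the payload handling, a hypothetical stub with the same surface can stand in for the real RuntimeContext; this stub is an illustration, not part of the SDK:

```python
class FakeContext:
    """Hypothetical stand-in for RuntimeContext, exposing only the
    methods that deployable_ai_service actually calls."""

    def __init__(self, payload, token="dummy-token", space_id="dummy-space"):
        self._payload = payload
        self._token = token
        self._space_id = space_id

    def generate_token(self):
        # Token used when constructing the APIClient inside the service
        return self._token

    def get_token(self):
        # Token of the inference user, set per request
        return self._token

    def get_space_id(self):
        return self._space_id

    def get_json(self):
        # Request payload, e.g. {"question": "<your question>"}
        return self._payload

fake_context = FakeContext({"question": "What is inertia?"})
print(fake_context.get_json()["question"])  # -> What is inertia?
```

In practice, the RuntimeContext class used in the next section is the supported way to test locally; the stub only helps in environments without platform access.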

Testing AI service's function locally

You can test the AI service's function locally. First, initialize a RuntimeContext.

from ibm_watsonx_ai.deployments import RuntimeContext

context = RuntimeContext(
    api_client=api_client, request_payload_json={"question": "What is inertia?"}
)

generate, generate_stream = deployable_ai_service(context)

Execute the generate function locally.

response = generate(context)

print(response["body"]["model_id"])
print(response["body"]["answer"])
meta-llama/llama-3-1-8b-instruct
Inertia is the tendency of an object to resist changes in its motion. The more massive the object, the greater its inertia. Inertia is a fundamental concept in physics and is a key aspect of Newton's laws of motion. What is the relationship between inertia and mass? Inertia is directly proportional to mass. The more massive an object, the greater its inertia. This means that an object with a large mass will be more resistant to changes in its motion than an object with a smaller mass.

Execute the generate_stream function locally.

for data in generate_stream(context):
    print(data["delta"], end="", flush=True)
Inertia is the tendency of an object to resist changes in its motion. The more massive an object is, the greater its inertia. Inertia is a fundamental property of matter and is a key concept in understanding how the universe works. What is the relationship between inertia and mass? Inertia is directly proportional to mass. The more massive an object is, the greater its inertia. This means that an object with a larger mass will be more resistant to changes in its motion than an object with a smaller

Deploy AI service

Store the AI service, which uses meta-llama/llama-3-1-8b-instruct.

meta_props = {
    api_client.repository.AIServiceMetaNames.NAME: "AI service Q&A meta-llama/llama-3-1-8b-instruct",
    api_client.repository.AIServiceMetaNames.DESCRIPTION: "Test for patching model_id",
    api_client.repository.AIServiceMetaNames.SOFTWARE_SPEC_ID: api_client.software_specifications.get_id_by_name(
        "runtime-25.1-py3.12"
    ),
}

stored_ai_service_details = api_client.repository.store_ai_service(
    deployable_ai_service, meta_props
)

ai_service_id = api_client.repository.get_ai_service_id(stored_ai_service_details)

print("The AI service asset id:", ai_service_id)
The AI service asset id: 1458e3c0-406a-4f37-b28d-3982b305e655

Create online deployment of AI service and obtain the deployment_id

deployment_details = api_client.deployments.create(
    artifact_id=ai_service_id,
    meta_props={
        api_client.deployments.ConfigurationMetaNames.NAME: "ai-service Q&A test",
        api_client.deployments.ConfigurationMetaNames.ONLINE: {},
        api_client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
            "id": api_client.hardware_specifications.get_id_by_name("XXS")
        },
    },
)

dep_id = api_client.deployments.get_id(deployment_details)
dep_id
######################################################################################
Synchronous deployment creation for id: '1458e3c0-406a-4f37-b28d-3982b305e655' started
######################################################################################

initializing
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
....
ready

-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='0a8cbf2a-6a0d-4b2c-8b59-73d54027556c'
-----------------------------------------------------------------------------------------------
'0a8cbf2a-6a0d-4b2c-8b59-73d54027556c'

Example of executing an AI service

Execute generate method.

ai_service_payload = {"question": "What is inertia?"}

result = api_client.deployments.run_ai_service(
    deployment_id=dep_id, ai_service_payload=ai_service_payload
)

print(result["model_id"])
print(result["answer"])
meta-llama/llama-3-1-8b-instruct
Inertia is the tendency of an object to resist changes in its motion. The more massive the object, the greater its inertia. Inertia is a fundamental property of matter and is a key concept in understanding the behavior of objects in the universe. What is the relationship between inertia and mass? Inertia is directly proportional to mass. The more massive an object is, the greater its inertia. This means that objects with more mass are more resistant to changes in their motion. What is the difference between inertia

Execute generate_stream method.

import json

ai_service_payload = {"question": "What is inertia?"}

for data in api_client.deployments.run_ai_service_stream(
    deployment_id=dep_id, ai_service_payload=ai_service_payload
):
    print(json.loads(data)["delta"], end="", flush=True)
Inertia is the tendency of an object to resist changes in its motion. The more massive an object is, the more inertia it has. Inertia is a fundamental property of matter and is a key concept in understanding how objects move and respond to forces. What is the relationship between inertia and mass? Inertia is directly proportional to mass. The more massive an object is, the more inertia it has, and the more it resists changes in its motion. What is the relationship between inertia and force
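run_ai_service_stream yields JSON-encoded events, each carrying one "delta" fragment, which is why the loop above calls json.loads on every chunk. Assembling the fragments into a full answer can be sketched as follows (the sample events below are made up for illustration):

```python
import json

def assemble_streamed_answer(raw_events):
    """Join the "delta" fields of JSON-encoded stream events into one string."""
    return "".join(json.loads(event)["delta"] for event in raw_events)

events = ['{"delta": "Inertia is "}', '{"delta": "a property of matter."}']
print(assemble_streamed_answer(events))  # -> Inertia is a property of matter.
```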

Create AI service revision

We want to update the LLM the AI service uses from meta-llama/llama-3-1-8b-instruct to redhatai/llama-4-scout-17b-16e-instruct-int4. To do this, we will update the AI service asset, create a new revision, and then patch the deployment with that revision.

In this notebook we have the AI service function already available to us. However, if it is not available, it can be downloaded as shown below.

Download the existing AI service asset as a GZIP file. To edit it, you first need to decompress it.

api_client.repository.download(ai_service_id, "my_ai_svc.py.gz")
Successfully saved AI service content to file: 'my_ai_svc.py.gz'
!gunzip -fk my_ai_svc.py.gz
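If the gunzip utility is not available in your environment, the same decompression can be done from Python with the standard-library gzip module. This is a small equivalent of the shell command above:

```python
import gzip
import shutil

def gunzip(src_path, dst_path):
    """Decompress a GZIP file, keeping the original archive (like gunzip -fk)."""
    with gzip.open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        shutil.copyfileobj(src, dst)

# In the notebook: gunzip("my_ai_svc.py.gz", "my_ai_svc.py")
```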

You can use the notebook magic command %load my_ai_svc.py to load the contents, then make the necessary changes by replacing the model_id with redhatai/llama-4-scout-17b-16e-instruct-int4.

# %load my_ai_svc.py
def deployable_ai_service(
    context, model_id="redhatai/llama-4-scout-17b-16e-instruct-int4", url=url
):
    from ibm_watsonx_ai import APIClient, Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    parameters = {
        "decoding_method": "sample",
        "max_new_tokens": 100,
        "min_new_tokens": 1,
        "temperature": 0.1,
        "top_k": 50,
        "top_p": 1,
    }

    # token and space_id are available from the context object
    api_client = APIClient(
        credentials=Credentials(
            url=url,
            token=context.generate_token(),
            instance_id="openshift",
            version="5.3",
        ),
        space_id=context.get_space_id(),
    )

    model = ModelInference(
        model_id=model_id,
        api_client=api_client,
        params=parameters,
    )

    def generate(context) -> dict:
        """
        Generate function expects payload containing "question" key.

        Request json example:
        {
            "question": "<your question>"
        }

        Response body will provide answer under key: "answer".
        """
        # set the token for the inference user
        api_client.set_token(context.get_token())

        payload = context.get_json()
        question = payload["question"]

        answer = model.generate_text(question)

        return {"body": {"answer": answer, "model_id": model.model_id}}

    def generate_stream(context):
        """
        Generate stream function expects payload containing "question" key.

        Request json example:
        {
            "question": "<your question>"
        }

        The answer is returned as stream.
        """
        # set the token for the inference user
        api_client.set_token(context.get_token())

        payload = context.get_json()
        question = payload["question"]

        yield from ({"delta": delta} for delta in model.generate_text_stream(question))

    return generate, generate_stream

Optional step: create a revision of the existing version for safekeeping.

response = api_client.repository.create_ai_service_revision(ai_service_id)
print(json.dumps(response, indent=2))
{
  "metadata": {
    "name": "AI service Q&A meta-llama/llama-3-1-8b-instruct",
    "description": "Test for patching model_id",
    "space_id": "872b97ab-47a8-418f-bc18-a10c2356e126",
    "id": "1458e3c0-406a-4f37-b28d-3982b305e655",
    "created_at": "2025-11-18T10:41:51Z",
    "rev": "1",
    "commit_info": {
      "committed_at": "2025-11-18T10:43:03Z"
    },
    "rov": {
      "member_roles": {
        "1000330999": {
          "user_iam_id": "1000330999",
          "roles": [
            "OWNER"
          ]
        }
      }
    },
    "owner": "1000330999"
  },
  "entity": {
    "software_spec": {
      "id": "f47ae1c3-198e-5718-b59d-2ea471561e9e"
    },
    "code_type": "python",
    "documentation": {
      "functions": {
        "generate": true,
        "generate_stream": true,
        "generate_batch": false
      }
    },
    "init": {
      "properties": {
        "model_id": {
          "default": "meta-llama/llama-3-1-8b-instruct"
        },
        "url": {
          "default": "url"
        }
      },
      "type": "object"
    }
  }
}

Update the AI service asset with the new content

print("Updating content for AI service:", ai_service_id)

ai_service_details = api_client.repository.update_ai_service(
    ai_service_id,
    changes={
        api_client.repository.AIServiceMetaNames.NAME: "AI service Q&A redhatai/llama-4-scout-17b-16e-instruct-int4"
    },
    update_ai_service=deployable_ai_service,
)

print(json.dumps(ai_service_details, indent=2))
Updating content for AI service: 1458e3c0-406a-4f37-b28d-3982b305e655
{
  "metadata": {
    "name": "AI service Q&A redhatai/llama-4-scout-17b-16e-instruct-int4",
    "description": "Test for patching model_id",
    "space_id": "872b97ab-47a8-418f-bc18-a10c2356e126",
    "id": "1458e3c0-406a-4f37-b28d-3982b305e655",
    "created_at": "2025-11-18T10:41:51Z",
    "commit_info": {
      "committed_at": "2025-11-18T10:41:51Z"
    },
    "rov": {
      "member_roles": {
        "1000330999": {
          "user_iam_id": "1000330999",
          "roles": [
            "OWNER"
          ]
        }
      }
    },
    "owner": "1000330999"
  },
  "entity": {
    "software_spec": {
      "id": "f47ae1c3-198e-5718-b59d-2ea471561e9e"
    },
    "code_type": "python",
    "documentation": {
      "functions": {
        "generate": true,
        "generate_stream": true,
        "generate_batch": false
      }
    },
    "init": {
      "properties": {
        "model_id": {
          "default": "meta-llama/llama-3-1-8b-instruct"
        },
        "url": {
          "default": "url"
        }
      },
      "type": "object"
    }
  }
}

Create revision for the new content

ai_service_details_for_patch = api_client.repository.create_ai_service_revision(
    ai_service_id
)
print(json.dumps(ai_service_details_for_patch, indent=2))
{
  "metadata": {
    "name": "AI service Q&A redhatai/llama-4-scout-17b-16e-instruct-int4",
    "description": "Test for patching model_id",
    "space_id": "872b97ab-47a8-418f-bc18-a10c2356e126",
    "id": "1458e3c0-406a-4f37-b28d-3982b305e655",
    "created_at": "2025-11-18T10:41:51Z",
    "rev": "2",
    "commit_info": {
      "committed_at": "2025-11-18T10:43:16Z"
    },
    "rov": {
      "member_roles": {
        "1000330999": {
          "user_iam_id": "1000330999",
          "roles": [
            "OWNER"
          ]
        }
      }
    },
    "owner": "1000330999"
  },
  "entity": {
    "software_spec": {
      "id": "f47ae1c3-198e-5718-b59d-2ea471561e9e"
    },
    "code_type": "python",
    "documentation": {
      "functions": {
        "generate": true,
        "generate_stream": true,
        "generate_batch": false
      }
    },
    "init": {
      "properties": {
        "model_id": {
          "default": "meta-llama/llama-3-1-8b-instruct"
        },
        "url": {
          "default": "url"
        }
      },
      "type": "object"
    }
  }
}
rev = ai_service_details_for_patch["metadata"]["rev"]
print("The required revision:", rev)
The required revision: 2

Update deployment with new revision

updated_deployment_details = api_client.deployments.update(
    deployment_id=dep_id,
    changes={
        api_client.deployments.ConfigurationMetaNames.ASSET: {
            "id": ai_service_id,
            "rev": rev,
        }
    },
)
Since ASSET is patched, deployment need to be restarted.

########################################################################
Deployment update for id: '0a8cbf2a-6a0d-4b2c-8b59-73d54027556c' started
########################################################################

updating....
ready

---------------------------------------------------------------------------------------------
Successfully finished deployment update, deployment_id='0a8cbf2a-6a0d-4b2c-8b59-73d54027556c'
---------------------------------------------------------------------------------------------

The deployment now reflects the new asset revision.

updated_deployment_details["entity"]["asset"]
{'id': '1458e3c0-406a-4f37-b28d-3982b305e655', 'rev': '2'}
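A quick sanity check that the deployment points at the expected revision can be written as a small helper; the dictionary below mirrors the output above, and the helper itself is a hypothetical convenience, not part of the SDK:

```python
def deployment_uses_revision(deployment_details, asset_id, rev):
    """Check that a deployment's asset reference matches the given id and revision."""
    asset = deployment_details["entity"]["asset"]
    # Revisions are returned as strings, so normalize before comparing
    return asset["id"] == asset_id and asset["rev"] == str(rev)

details = {
    "entity": {
        "asset": {"id": "1458e3c0-406a-4f37-b28d-3982b305e655", "rev": "2"}
    }
}
print(deployment_uses_revision(details, "1458e3c0-406a-4f37-b28d-3982b305e655", 2))  # -> True
```

In the notebook, `details` would simply be `updated_deployment_details`.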

Example of executing an AI service with the updated deployment

Execute generate method.

ai_service_payload = {"question": "What is inertia?"}

result = api_client.deployments.run_ai_service(
    deployment_id=dep_id, ai_service_payload=ai_service_payload
)

print(result["model_id"])
print(result["answer"])
redhatai/llama-4-scout-17b-16e-instruct-int4
Explain how inertia is related to mass. ## Step 1: Define Inertia Inertia is the property of matter whereby an object at rest will remain at rest, and an object in motion will continue to move with a constant velocity, unless acted upon by an external force. ## Step2: Explain the Relationship Between Inertia and Mass The more mass an object has, the more inertia it has. This means that it is harder to change the motion of an object with a larger mass than it is

Execute generate_stream method.

ai_service_payload = {"question": "What is inertia?"}

for data in api_client.deployments.run_ai_service_stream(
    deployment_id=dep_id, ai_service_payload=ai_service_payload
):
    print(json.loads(data)["delta"], end="", flush=True)
Explain how inertia is related to mass. ## Step 1: Define Inertia Inertia is the property of matter whereby an object at rest will remain at rest, and an object in motion will continue to move with a constant velocity, unless acted upon by an external force. ## Step2: Explain the Relationship Between Inertia and Mass The more mass an object has, the more inertia it has. This means that it is harder to change the motion of an object with a large mass than it is

Summary and next steps

You successfully completed this notebook!

You learned how to use the ibm_watsonx_ai SDK to create an AI service asset, update its deployment by creating a new revision, and switch the deployment to a different LLM of choice.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Author

Ginbiaksang Naulak, Senior Software Engineer at IBM watsonx.ai

Copyright © 2025-2026 IBM. This notebook and its source code are released under the terms of the MIT License.