IBM
GitHub Repository: IBM/watson-machine-learning-samples
Path: blob/master/cloud/notebooks/rest_api/deployments/foundation_models/chat/Use watsonx to make simple chat conversation and tool calls.ipynb
Kernel: .venv_watsonx_ai_samples_py_312


Use watsonx to make simple chat conversation and tool calls

Disclaimers

  • Use only Projects and Spaces that are available in watsonx context.

Notebook content

This notebook demonstrates the steps and code required to work with chat models on watsonx.ai, including the integration of tools, using the REST API.

Some familiarity with Python is helpful. This notebook uses Python 3.12.

Learning goal

The purpose of this notebook is to demonstrate how to use chat models with tool calling.

Table of Contents

This notebook contains the following parts:

  • Set up the environment

  • Set up the Foundation Model on watsonx.ai

  • Work with chat messages

  • Summary and next steps

Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

Install and import the datasets and dependencies

%pip install httpx | tail -n 1
%pip install ibm-cloud-sdk-core | tail -n 1

import getpass
import os

import httpx
from ibm_cloud_sdk_core import IAMTokenManager

Inferencing class

This cell defines a class that makes a REST API call to the watsonx Foundation Model inferencing API, which we will use to generate output from the provided input. The class takes an access token and project ID, and uses them to make a REST API call with the input, model ID, and model parameters. The response from the API call is returned as the cell output.

Action: Provide the watsonx.ai Runtime URL to work with watsonx.ai.

endpoint_url = input("Please enter your watsonx.ai Runtime endpoint url (hit enter): ")

Define a ModelInferanceChat class for chat generation.

class ModelInferanceChat:
    def __init__(self, access_token, project_id):
        self.access_token = access_token
        self.project_id = project_id

    def chat(
        self,
        model_id,
        messages,
        tools=None,
        tool_choice=None,
        tool_choice_option=None,
        **kwargs,
    ):
        text_chat_url = f"{endpoint_url}/ml/v1/text/chat?version=2024-10-07"
        headers = {
            "Authorization": "Bearer " + self.access_token,
            "Content-Type": "application/json",
            "Accept": "application/json",
        }
        data = {
            "model_id": model_id,
            "messages": messages,
            "tools": tools,
            "tool_choice": tool_choice,
            "tool_choice_option": tool_choice_option,
            "project_id": self.project_id,
        }
        # Merge any extra parameters (e.g. temperature, max_tokens) into the payload.
        data = data | kwargs
        response = httpx.post(text_chat_url, json=data, headers=headers)
        if response.status_code == 200:
            return response.json()
        raise RuntimeError(response.text)

watsonx API connection

Use the code cell below to define the watsonx.ai credentials that are required to work with watsonx Foundation Model inferencing.

Action: Provide the IBM Cloud user API key. For details, see Managing user API keys.

access_token = IAMTokenManager(
    apikey=getpass.getpass("Enter your watsonx.ai api key and hit enter: "),
    url="https://iam.cloud.ibm.com/identity/token",
).get_token()

Defining the project id

You need to provide the project ID to give the Foundation Model the context for the call. If you have a default project ID set in Watson Studio, the notebook obtains that project ID. Otherwise, you need to provide the project ID in the code cell below.

try:
    project_id = os.environ["PROJECT_ID"]
except KeyError:
    project_id = input("Please enter your project_id (hit enter): ")

Set up the Foundation Model on watsonx.ai

List available chat models

models_json = httpx.get(
    endpoint_url + "/ml/v1/foundation_model_specs",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
    params={
        "limit": 50,
        "version": "2024-03-19",
        "filters": "function_text_chat,!lifecycle_withdrawn:and",
    },
).json()
models_ids = [m["model_id"] for m in models_json["resources"]]
models_ids
['ibm/granite-3-2-8b-instruct',
 'ibm/granite-3-3-8b-instruct',
 'ibm/granite-3-3-8b-instruct-np',
 'ibm/granite-3-8b-instruct',
 'ibm/granite-4-h-small',
 'ibm/granite-guardian-3-8b',
 'meta-llama/llama-3-2-11b-vision-instruct',
 'meta-llama/llama-3-2-90b-vision-instruct',
 'meta-llama/llama-3-3-70b-instruct',
 'meta-llama/llama-3-405b-instruct',
 'meta-llama/llama-4-maverick-17b-128e-instruct-fp8',
 'meta-llama/llama-guard-3-11b-vision',
 'mistral-large-2512',
 'mistralai/mistral-medium-2505',
 'mistralai/mistral-small-3-1-24b-instruct-2503',
 'openai/gpt-oss-120b']
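If you want to narrow the returned list to a single vendor before picking a `model_id`, a small filter helps. This is a minimal sketch: `filter_models` and `sample_ids` are hypothetical names, and the sample list is a subset of the output above.

```python
# Hypothetical helper: keep only the model IDs with a given vendor prefix.
def filter_models(model_ids, prefix):
    """Return the model IDs that start with the given vendor prefix."""
    return [m for m in model_ids if m.startswith(prefix)]

sample_ids = [
    "ibm/granite-3-3-8b-instruct",
    "meta-llama/llama-3-3-70b-instruct",
    "mistralai/mistral-small-3-1-24b-instruct-2503",
]
print(filter_models(sample_ids, "ibm/"))  # ['ibm/granite-3-3-8b-instruct']
```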

Specify the model_id of the model you will use for the chat with tools.

model_id = "mistralai/mistral-small-3-1-24b-instruct-2503"

Initialize the ModelInferanceChat class.

Hint: Your authentication token might expire. If it does, regenerate the access_token and reinitialize the ModelInferanceChat class.

model = ModelInferanceChat(access_token, project_id)
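The token-expiry hint above can be sketched as a small refresh helper. This is only an illustration: `make_client`, `token_source`, and `_StubClient` are hypothetical names; in this notebook, `token_source` would be the `IAMTokenManager(...).get_token` call from earlier and `client_cls` would be `ModelInferanceChat`.

```python
def make_client(token_source, project_id, client_cls):
    """Fetch a fresh access token and build a new chat client.

    token_source: zero-argument callable returning a bearer token
                  (in this notebook, IAMTokenManager(...).get_token).
    client_cls:   class taking (access_token, project_id),
                  e.g. the ModelInferanceChat class defined above.
    """
    return client_cls(token_source(), project_id)

# Illustration with a stub client standing in for ModelInferanceChat.
class _StubClient:
    def __init__(self, access_token, project_id):
        self.access_token = access_token
        self.project_id = project_id

client = make_client(lambda: "fresh-token", "my-project", _StubClient)
print(client.access_token)  # fresh-token
```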

Work with chat messages

Work with a simple chat message

messages = [{"role": "user", "content": "What is 1 + 1"}]
simple_chat_response = model.chat(model_id=model_id, messages=messages)
print(simple_chat_response["choices"][0]["message"]["content"])
The sum of 1 + 1 is: **2**

Work with a simple chat message with parameters

messages = [{"role": "user", "content": "What is 1 + 1"}]
simple_chat_response_2 = model.chat(
    model_id=model_id, messages=messages, temperature=1.5, max_tokens=100
)
print(simple_chat_response_2["choices"][0]["message"]["content"])
The sum of 1 + 1 is: 2

Work with an advanced chat message

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": [{"type": "text", "text": "Who won the world series in 2020?"}],
    },
    {
        "role": "assistant",
        "content": "The Los Angeles Dodgers won the World Series in 2020.",
    },
    {
        "role": "user",
        "content": [{"type": "text", "text": "Where was it played?"}],
    },
]
advanced_chat_response = model.chat(model_id=model_id, messages=messages)
print(advanced_chat_response["choices"][0]["message"]["content"])
The 2020 World Series was played at Globe Life Field in Arlington, Texas. This was the first time in World Series history that all games were played at a neutral site, due to the COVID-19 pandemic. The facility was limited to a small number of fans. The Tampa Bay Rays were scheduled to be the home team during the odd-numbered games. The Los Angeles Dodgers won the series, defeating the Rays in six games.
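To keep a conversation going, the assistant's reply and the next user turn are appended to the message list before the next `chat` call. A minimal sketch; `append_turn` and `history` are hypothetical names introduced here for illustration.

```python
# Hypothetical helper: extend a chat history for a multi-turn conversation.
def append_turn(messages, assistant_reply, user_followup):
    """Return a new history ending with the assistant's reply and the next user turn."""
    return messages + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": user_followup},
    ]

history = [{"role": "user", "content": "Who won the world series in 2020?"}]
history = append_turn(
    history,
    "The Los Angeles Dodgers won the World Series in 2020.",
    "Where was it played?",
)
print(len(history))  # 3
```

The extended `history` list can then be passed as `messages` in the next request.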

Work with chat messages using tools and tool_choice

messages = [
    {"role": "user", "content": "What's the weather like in Boston today?"},
]
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["location"],
            },
        },
    }
]
tool_choice = {"type": "function", "function": {"name": "get_current_weather"}}
tool_response = model.chat(
    model_id=model_id, messages=messages, tools=tools, tool_choice=tool_choice
)
import json

print(json.dumps(tool_response["choices"][0]["message"], indent=4))
{
    "role": "assistant",
    "content": "",
    "tool_calls": [
        {
            "id": "yKpMV6FAF",
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": "{\n    \"location\": \"Boston, MA\"\n}"
            }
        }
    ]
}
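The model does not run the tool itself; it returns a tool call whose `arguments` field is a JSON string for your code to execute. A minimal sketch of that step, assuming a local stub: `get_current_weather` here is a hypothetical placeholder for a real weather lookup, and `tool_call` mirrors the response shape above.

```python
import json

def get_current_weather(location, unit="fahrenheit"):
    # Hypothetical stub standing in for a real weather service call.
    return f"Sunny, 72 {unit} in {location}"

# Shaped like the tool call returned in the response above.
tool_call = {
    "id": "yKpMV6FAF",
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "arguments": '{\n    "location": "Boston, MA"\n}',
    },
}

# Look up the requested function, parse its JSON arguments, and invoke it.
available = {"get_current_weather": get_current_weather}
fn = available[tool_call["function"]["name"]]
args = json.loads(tool_call["function"]["arguments"])
result = fn(**args)
print(result)  # Sunny, 72 fahrenheit in Boston, MA
```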

Work with chat messages using tools and tool_choice_option

tools = [
    {
        "function": {
            "description": "Adds a and b.",
            "name": "add",
            "parameters": {
                "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
                "required": ["a", "b"],
                "type": "object",
            },
        },
        "type": "function",
    },
    {
        "function": {
            "description": "Multiplies a and b.",
            "name": "multiply",
            "parameters": {
                "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
                "required": ["a", "b"],
                "type": "object",
            },
        },
        "type": "function",
    },
]
tool_choice_option = "auto"
messages = [
    {"role": "user", "content": "What is 5 + 6?"},
]
tools_response = model.chat(
    model_id=model_id,
    messages=messages,
    tools=tools,
    tool_choice_option=tool_choice_option,
)
print(json.dumps(tools_response["choices"][0]["message"], indent=4))
{
    "role": "assistant",
    "tool_calls": [
        {
            "id": "aAMizXijy",
            "type": "function",
            "function": {
                "name": "add",
                "arguments": "{\"a\": 5, \"b\": 6}"
            }
        }
    ]
}
messages = [
    {"role": "user", "content": "What is 5 * 6?"},
]
tools_response_2 = model.chat(
    model_id=model_id,
    messages=messages,
    tools=tools,
    tool_choice_option=tool_choice_option,
)
print(json.dumps(tools_response_2["choices"][0]["message"], indent=4))
{
    "role": "assistant",
    "tool_calls": [
        {
            "id": "DZ3vKILTk",
            "type": "function",
            "function": {
                "name": "multiply",
                "arguments": "{\"a\": 5, \"b\": 6}"
            }
        }
    ]
}
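With `tool_choice_option="auto"`, the model picks which tool to call, so your code needs a dispatcher that routes each returned call to the matching local function and packages the result as a `"tool"` role message for the follow-up request. A minimal sketch; `run_tool_call` and `TOOLS` are hypothetical names, and `call` mirrors the response shape above.

```python
import json

def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

# Map tool names (as declared in the `tools` schema) to local implementations.
TOOLS = {"add": add, "multiply": multiply}

def run_tool_call(tool_call):
    """Dispatch one chat tool call and wrap the result as a 'tool' message."""
    fn = TOOLS[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": str(fn(**args)),
    }

# Shaped like the tool call returned in the response above.
call = {
    "id": "DZ3vKILTk",
    "type": "function",
    "function": {"name": "multiply", "arguments": '{"a": 5, "b": 6}'},
}
tool_message = run_tool_call(call)
print(tool_message)  # {'role': 'tool', 'tool_call_id': 'DZ3vKILTk', 'content': '30'}
```

Appending `tool_message` to the conversation and calling `chat` again lets the model phrase the final answer from the tool result.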

Summary and next steps

You successfully completed this notebook!

You learned how to work with chat models using tools and watsonx.ai.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Author

Mateusz Szewczyk, Software Engineer at watsonx.ai.

Copyright © 2026 IBM. This notebook and its source code are released under the terms of the MIT License.