Use watsonx, and meta-llama/llama-3-2-11b-vision-instruct to run as an AI service
Disclaimers
Use only Projects and Spaces that are available in watsonx context.
Notebook content
This notebook provides a detailed demonstration of the steps and code required to work with watsonx.ai AI services.
Some familiarity with Python is helpful. This notebook uses Python 3.11.
Learning goal
The learning goal of this notebook is to leverage AI services to generate accurate and contextually relevant responses based on a given image and a related question.
Table of Contents
This notebook contains the following parts:
Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
Create a watsonx.ai Runtime Service instance (a free plan is offered and information about how to create the instance can be found here).
Install and import the required packages and dependencies
Define the watsonx.ai credentials
Use the code cell below to define the watsonx.ai credentials that are required to work with watsonx Foundation Model inferencing.
Action: Provide the IBM Cloud user API key. For details, see Managing user API keys.
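A minimal sketch of collecting the credentials follows. Reading the API key from an environment variable (and the variable name `WATSONX_APIKEY`) is an assumption for this sketch; in an interactive notebook you might use `getpass.getpass()` instead, and the endpoint URL depends on your region.

```python
import os

def load_watsonx_credentials() -> dict:
    """Collect the watsonx.ai endpoint URL and IBM Cloud API key.

    Environment-variable lookup is an assumption of this sketch;
    getpass.getpass() is the usual interactive alternative.
    """
    return {
        "url": "https://us-south.ml.cloud.ibm.com",  # your region's endpoint
        "api_key": os.environ.get("WATSONX_APIKEY", ""),
    }
```

The resulting dictionary can then be passed to the SDK's `Credentials` class, e.g. `Credentials(**load_watsonx_credentials())`.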
Working with spaces
You need to create a space that will be used for your work. If you do not have a space, you can use Deployment Spaces Dashboard to create one.
- Click New Deployment Space
- Create an empty space
- Select Cloud Object Storage
- Select watsonx.ai Runtime instance and press Create
- Go to Manage tab
- Copy Space GUID and paste it below
Tip: You can also use the SDK to prepare the space for your work. More information can be found here.
Action: assign space ID below
Create an instance of APIClient with authentication details.
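A minimal sketch of creating the client, assuming a valid API key and space ID are available at call time:

```python
def make_api_client(url: str, api_key: str, space_id: str):
    """Create an authenticated APIClient bound to a deployment space.

    A sketch only: instantiating APIClient authenticates against IBM Cloud,
    so valid credentials are required when this function is called.
    """
    from ibm_watsonx_ai import APIClient, Credentials  # imported lazily

    credentials = Credentials(url=url, api_key=api_key)
    return APIClient(credentials=credentials, space_id=space_id)
```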
Specify the model_id of the model you will use for the chat with image.
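For example:

```python
# Model identifier used throughout this notebook
model_id = "meta-llama/llama-3-2-11b-vision-instruct"
```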
Retrieve an image and display it. This example is based on the IBM logo.
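To pass the image to a chat model it must be base64-encoded. A small helper, with the download step (URL and `requests` usage) left as an assumption in the comment:

```python
import base64

def encode_image(image_bytes: bytes) -> str:
    """Base64-encode raw image bytes so they can be embedded in a chat payload."""
    return base64.b64encode(image_bytes).decode("utf-8")

# In the notebook the bytes would come from downloading the IBM logo, e.g.:
#   import requests
#   image_bytes = requests.get(logo_url).content  # logo_url is a placeholder
```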
Prepare the request JSON payload for a local invocation.
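The payload below follows the chat-completions message shape used by the watsonx.ai chat API (a text part plus an `image_url` part carrying a data URL); the exact field names should be checked against the API reference, and the question text is a placeholder:

```python
# `encoded_image` stands in for the base64 string produced in the image step.
encoded_image = "<base64-encoded image>"
question = "Describe the image."

payload = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{encoded_image}"
                    },
                },
            ],
        }
    ]
}
```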
Execute the generate function locally.
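Executing generate locally presupposes an AI service definition. The sketch below follows the general shape used in watsonx.ai AI-service notebooks: an outer function that receives a deployment context and returns `generate` and `generate_stream` handlers. The context methods (`generate_token`, `get_space_id`, `get_json`), the default URL, and the `{"body": ...}` response wrapper are assumptions to verify against your SDK version.

```python
def deployable_ai_service(context, url="https://us-south.ml.cloud.ibm.com", **kwargs):
    """AI service returning `generate` and `generate_stream` handlers (a sketch)."""
    from ibm_watsonx_ai import APIClient, Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    api_client = APIClient(
        credentials=Credentials(url=url, token=context.generate_token()),
        space_id=context.get_space_id(),
    )
    model = ModelInference(
        model_id="meta-llama/llama-3-2-11b-vision-instruct",
        api_client=api_client,
    )

    def generate(context) -> dict:
        # Read the request payload and run a single chat completion
        payload = context.get_json()
        response = model.chat(messages=payload["messages"])
        return {"body": response}

    def generate_stream(context):
        # Stream chat-completion chunks back to the caller
        payload = context.get_json()
        yield from model.chat_stream(messages=payload["messages"])

    return generate, generate_stream
```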
Execute the generate_stream function locally.
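Local testing can be sketched with the SDK's `RuntimeContext`; the constructor arguments used here (`api_client`, `request_payload_json`) are assumptions to verify against your `ibm_watsonx_ai` version:

```python
def run_locally(api_client, ai_service_fn, payload, stream=False):
    """Invoke an AI service function locally before deploying it (a sketch)."""
    from ibm_watsonx_ai.deployments import RuntimeContext

    context = RuntimeContext(api_client=api_client, request_payload_json=payload)
    generate, generate_stream = ai_service_fn(context)
    if stream:
        return list(generate_stream(context))  # collect streamed chunks
    return generate(context)
```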
Store the AI service with the previously created custom software specification.
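A hedged sketch of storing the service via the repository API; the meta-name keys and the service name are assumptions based on the SDK's `AIServiceMetaNames`:

```python
def store_ai_service(client, ai_service_fn, sw_spec_id):
    """Store the AI service in the repository and return its ID (a sketch)."""
    meta_props = {
        client.repository.AIServiceMetaNames.NAME: "AI service with SDK",
        client.repository.AIServiceMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
    }
    stored = client.repository.store_ai_service(ai_service_fn, meta_props)
    return client.repository.get_ai_service_id(stored)
```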
Create an online deployment of the AI service.
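Deployment can be sketched as below; the configuration meta-name keys and the deployment name are assumptions:

```python
def deploy_ai_service(client, ai_service_id):
    """Create an online deployment for a stored AI service (a sketch)."""
    meta_props = {
        client.deployments.ConfigurationMetaNames.NAME: "AI service deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {},
    }
    return client.deployments.create(ai_service_id, meta_props)
```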
Obtain the deployment_id of the previously created deployment.
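The ID can be read from the deployment-details dictionary returned at creation time:

```python
def get_deployment_id(client, deployment_details):
    """Extract the deployment_id from the deployment-details dictionary."""
    return client.deployments.get_id(deployment_details)
```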
Execute the generate_stream method.
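Streaming against the deployed endpoint can be sketched as follows; `run_ai_service_stream` is assumed to be available in recent `ibm_watsonx_ai` releases and to yield response chunks as strings:

```python
def execute_generate_stream(client, deployment_id, payload):
    """Call the deployed service's generate_stream endpoint (a sketch)."""
    for chunk in client.deployments.run_ai_service_stream(deployment_id, payload):
        print(chunk, end="", flush=True)
```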
Summary and next steps
You successfully completed this notebook!
You learned how to create and deploy an AI service using the ibm_watsonx_ai SDK.
Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.
Author
Mateusz Szewczyk, Software Engineer at watsonx.ai.
Copyright © 2024-2025 IBM. This notebook and its source code are released under the terms of the MIT License.