
Use AutoAI RAG and Chroma to create a pattern and get information from ibm-watsonx-ai SDK documentation

Disclaimers

  • Use only Projects and Spaces that are available in the watsonx context.

Notebook content

This notebook contains the steps and code to demonstrate the usage of IBM AutoAI RAG. The AutoAI RAG experiment conducted in this notebook uses data scraped from the ibm-watsonx-ai SDK documentation.

Some familiarity with Python is helpful. This notebook uses Python 3.12.

Learning goal

The learning goal of this notebook is to:

  • Create an AutoAI RAG job that finds the best RAG pattern for the provided data

Table of Contents

This notebook contains the following parts:

  • Set up the environment

  • RAG Optimizer definition

  • Run the RAG Experiment

  • Compare and test RAG Patterns

  • Deploy the RAGPattern

  • Test the deployed function

  • Historical runs

  • Clean up

  • Summary and next steps

Set up the environment

Before you use the sample code in this notebook, you must perform the following setup task:

  • Contact your Cloud Pak for Data administrator and ask them for your account credentials

Install dependencies

Note: The ibm-watsonx-ai documentation can be found at https://ibm.github.io/watsonx-ai-python-sdk/.

%pip install -U "ibm-watsonx-ai[rag]" | tail -n 1

Define credentials

Authenticate the watsonx.ai Runtime service on IBM Cloud Pak for Data. You need to provide the admin's username and the platform URL.

import os

try:
    username = os.environ["USERNAME"]
except KeyError:
    username = input("Please enter your username (hit enter): ")

try:
    url = os.environ["URL"]
except KeyError:
    url = input("Please enter the platform url (hit enter): ")

Use the admin's api_key to authenticate watsonx.ai Runtime services:

import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=getpass.getpass("Enter your watsonx.ai API key and hit enter: "),
    url=url,
    instance_id="openshift",
    version="5.2",
)

Alternatively, you can use the admin's password:

import getpass

from ibm_watsonx_ai import Credentials

if "credentials" not in locals() or not credentials.api_key:
    credentials = Credentials(
        username=username,
        password=getpass.getpass("Enter your watsonx.ai password and hit enter: "),
        url=url,
        instance_id="openshift",
        version="5.2",
    )

Create APIClient instance

from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Working with spaces

First, you need to create a space for your work. If you do not have a space already created, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to the space Settings tab

  • Copy the Space GUID into your env file, or enter it in the prompt that appears after running the cell below

Tip: You can also use the SDK to prepare the space for your work. Find more information in the Space Management sample notebook.
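For example, a minimal sketch of preparing a space programmatically; the space name and description below are illustrative placeholders, and the default space settings are assumed to be acceptable:

space_details = client.spaces.store(
    meta_props={
        client.spaces.ConfigurationMetaNames.NAME: "autoai-rag-sample-space",  # illustrative name
        client.spaces.ConfigurationMetaNames.DESCRIPTION: "Space for the AutoAI RAG sample",  # illustrative description
    }
)
space_id = client.spaces.get_id(space_details)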

Action: Assign the space ID below

try:
    space_id = os.environ["SPACE_ID"]
except KeyError:
    space_id = input("Please enter your space_id (hit enter): ")

To print all existing spaces, use the list method.

client.spaces.list(limit=10)

To be able to interact with all resources available in watsonx.ai, you need to set the default space that you will be using.

client.set.default_space(space_id)
'SUCCESS'

RAG Optimizer definition

Define a connection to the training data

Upload the training data to the space as a data asset and then define a connection to the file. This example uses the ModelInference description from the ibm_watsonx_ai documentation.

from langchain_community.document_loaders import WebBaseLoader

url = "https://ibm.github.io/watsonx-ai-python-sdk/fm_model_inference.html"

docs = WebBaseLoader(url).load()
model_inference_content = docs[0].page_content

Upload the training data to the space as a data asset.

document_filename = "ModelInference.txt"

if not os.path.isfile(document_filename):
    with open(document_filename, "w") as file:
        file.write(model_inference_content)

document_asset_details = client.data_assets.create(
    name=document_filename, file_path=document_filename
)
document_asset_id = client.data_assets.get_id(document_asset_details)
document_asset_id
Creating data asset... SUCCESS
'6fd32041-b70b-4231-9704-98a5b992ede1'

Define a connection to the training data.

from ibm_watsonx_ai.helpers import DataConnection

input_data_references = [DataConnection(data_asset_id=document_asset_id)]

Define a connection to the test data

Upload a JSON file that you want to use as a benchmark to the space as a data asset, and then define a connection to the file. This example uses content from the ibm_watsonx_ai SDK documentation.

benchmarking_data_IBM_page_content = [
    {
        "question": "What is path to ModelInference class?",
        "correct_answer": "ibm_watsonx_ai.foundation_models.inference.ModelInference",
        "correct_answer_document_ids": ["ModelInference.txt"],
    },
    {
        "question": "What is method for get model inference details?",
        "correct_answer": "get_details()",
        "correct_answer_document_ids": ["ModelInference.txt"],
    },
]

Upload the benchmark testing data to the space as a data asset with a .json extension.

import json

test_filename = "benchmarking_data_ModelInference.json"

if not os.path.isfile(test_filename):
    with open(test_filename, "w") as json_file:
        json.dump(benchmarking_data_IBM_page_content, json_file, indent=4)

test_asset_details = client.data_assets.create(
    name=test_filename, file_path=test_filename
)
test_asset_id = client.data_assets.get_id(test_asset_details)
test_asset_id
Creating data asset... SUCCESS
'61559dcb-5c34-4038-8a1b-316064523373'

Define a connection to the benchmark testing data.

test_data_references = [DataConnection(data_asset_id=test_asset_id)]

Configure the RAG Optimizer

Provide the input information for the AutoAI RAG optimizer:

  • name - experiment name

  • description - experiment description

  • chunking - chunking configurations to evaluate (method, chunk size, chunk overlap)

  • retrieval - retrieval configurations to evaluate (method, number of chunks, window size)

  • generation - generation configuration (language handling and foundation models)

  • max_number_of_rag_patterns - maximum number of RAG patterns to create

  • optimization_metrics - target optimization metrics

from ibm_watsonx_ai.experiment import AutoAI
from ibm_watsonx_ai.foundation_models.schema import (
    AutoAIRAGModelConfig,
    AutoAIRAGRetrievalConfig,
    AutoAIRAGLanguageConfig,
    AutoAIRAGGenerationConfig,
)

experiment = AutoAI(
    credentials=credentials,
    space_id=space_id,
)

retrieval_config = AutoAIRAGRetrievalConfig(
    method="window",
    number_of_chunks=1,
    window_size=1,
)

foundation_model = AutoAIRAGModelConfig(
    model_id="ibm/granite-3-8b-instruct",
)

language_config = AutoAIRAGLanguageConfig(
    auto_detect=True,
)

generation_config = AutoAIRAGGenerationConfig(
    language=language_config,
    foundation_models=[foundation_model],
)

chunking_config = {"method": "recursive", "chunk_size": 128, "chunk_overlap": 64}

rag_optimizer = experiment.rag_optimizer(
    name="AutoAI RAG test - sample notebook",
    description="Experiment run in sample notebook",
    chunking=[chunking_config],
    retrieval=[retrieval_config],
    generation=generation_config,
    max_number_of_rag_patterns=5,
    optimization_metrics=[AutoAI.RAGMetrics.ANSWER_CORRECTNESS],
)

To retrieve the configuration parameters, use get_params().

rag_optimizer.get_params()
{'name': 'AutoAI RAG test - sample notebook',
 'description': 'Experiment run in sample notebook',
 'chunking': [{'method': 'recursive', 'chunk_size': 128, 'chunk_overlap': 64}],
 'max_number_of_rag_patterns': 5,
 'optimization_metrics': ['answer_correctness'],
 'generation': {'language': {'auto_detect': True},
  'foundation_models': [{'model_id': 'ibm/granite-3-8b-instruct'}]},
 'retrieval': [{'method': 'window', 'number_of_chunks': 1, 'window_size': 1}]}

Run the RAG Experiment

Call the run() method to trigger the AutoAI RAG experiment. Choose one of two modes:

  • To use the interactive mode (synchronous job), specify background_mode=False

  • To use the background mode (asynchronous job), specify background_mode=True

run_details = rag_optimizer.run(
    input_data_references=input_data_references,
    test_data_references=test_data_references,
    background_mode=False,
)
##############################################
Running 'ccefe8b6-11d9-45f1-ba2f-5f9c8cfd9e0f'
##############################################

pending......
running
completed
Training of 'ccefe8b6-11d9-45f1-ba2f-5f9c8cfd9e0f' finished successfully.

To monitor the AutoAI RAG jobs in background mode, use the get_run_status() method.

rag_optimizer.get_run_status()
'completed'
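When the job runs in background mode, you can poll this status until the experiment finishes. A minimal sketch; the 30-second interval is an arbitrary illustrative choice:

import time

# Poll a background-mode run until it leaves the 'pending'/'running' states
while rag_optimizer.get_run_status() in ("pending", "running"):
    time.sleep(30)

print(rag_optimizer.get_run_status())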

Compare and test RAG Patterns

You can list the trained patterns and information on evaluation metrics in the form of a Pandas DataFrame by calling the summary() method. Use the DataFrame to compare all discovered patterns and select the one you want for further testing.

summary = rag_optimizer.summary()
summary

Additionally, you can pass the scoring parameter to the summary method to sort the RAG patterns by the chosen metric, starting with the best.

summary = rag_optimizer.summary(scoring="faithfulness")

Get the selected pattern

Get the RAGPattern object from the RAG Optimizer experiment. By default, the best pattern is returned.

best_pattern_name = summary.index.values[0]
print("Best pattern is:", best_pattern_name)

best_pattern = rag_optimizer.get_pattern(pattern_name="Pattern1")
Best pattern is: Pattern1
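Above, the pattern name is hardcoded; equivalently, you can pass the computed best_pattern_name:

best_pattern = rag_optimizer.get_pattern(pattern_name=best_pattern_name)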

To retrieve the pattern details, use the get_pattern_details method.

rag_optimizer.get_pattern_details(pattern_name="Pattern1")
{'composition_steps': ['model_selection', 'chunking', 'embeddings', 'retrieval', 'generation'],
 'duration_seconds': 2,
 'location': {'evaluation_results': '/spaces/7ebfe478-6ddb-45b5-99f7-6ced60918402/assets/auto_ml/auto_ml.235daf6b-cc32-454f-b2db-93455e455b9e/wml_data/ccefe8b6-11d9-45f1-ba2f-5f9c8cfd9e0f/Pattern1/evaluation_results.json',
  'indexing_notebook': '/spaces/7ebfe478-6ddb-45b5-99f7-6ced60918402/assets/auto_ml/auto_ml.235daf6b-cc32-454f-b2db-93455e455b9e/wml_data/ccefe8b6-11d9-45f1-ba2f-5f9c8cfd9e0f/Pattern1/indexing_inference_notebook.ipynb',
  'inference_notebook': '/spaces/7ebfe478-6ddb-45b5-99f7-6ced60918402/assets/auto_ml/auto_ml.235daf6b-cc32-454f-b2db-93455e455b9e/wml_data/ccefe8b6-11d9-45f1-ba2f-5f9c8cfd9e0f/Pattern1/indexing_inference_notebook.ipynb',
  'inference_service_code': '/spaces/7ebfe478-6ddb-45b5-99f7-6ced60918402/assets/auto_ml/auto_ml.235daf6b-cc32-454f-b2db-93455e455b9e/wml_data/ccefe8b6-11d9-45f1-ba2f-5f9c8cfd9e0f/Pattern1/inference_ai_service.gz',
  'inference_service_metadata': '/spaces/7ebfe478-6ddb-45b5-99f7-6ced60918402/assets/auto_ml/auto_ml.235daf6b-cc32-454f-b2db-93455e455b9e/wml_data/ccefe8b6-11d9-45f1-ba2f-5f9c8cfd9e0f/Pattern1/inference_service_metadata.json'},
 'name': 'Pattern1',
 'settings': {'chunking': {'chunk_overlap': 64, 'chunk_size': 128, 'method': 'recursive'},
  'embeddings': {'model_id': 'intfloat/multilingual-e5-large',
   'truncate_input_tokens': 512,
   'truncate_strategy': 'left'},
  'generation': {'context_template_text': '[Document]\n{document}\n[End]',
   'model_id': 'ibm/granite-3-8b-instruct',
   'parameters': {'decoding_method': 'greedy',
    'max_new_tokens': 1000,
    'max_sequence_length': 131072,
    'min_new_tokens': 1},
   'prompt_template_text': '<|system|>\nYou are Granite Chat, an AI language model developed by IBM. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.<|user|>\nYou are an AI language model designed to function as a specialized Retrieval Augmented Generation (RAG) assistant. When generating responses, prioritize correctness, i.e., ensure that your response is grounded in context and user query. Always make sure that your response is relevant to the question. \nAnswer Length: detailed\n{reference_documents}\nRespond exclusively in the language of the question, regardless of any other language used in the provided context. Ensure that your entire response is in the same language as the question.\n{question} \n\n<|assistant|>',
   'word_to_token_ratio': 2.3108},
  'retrieval': {'method': 'window', 'number_of_chunks': 1, 'window_size': 1},
  'vector_store': {'datasource_type': 'chroma',
   'distance_metric': 'cosine',
   'index_name': 'autoai_rag_ccefe8b6_20251017095250',
   'operation': 'upsert',
   'schema': {'fields': [{'description': 'text field', 'name': 'text', 'role': 'text', 'type': 'string'},
     {'description': 'document name field', 'name': 'document_id', 'role': 'document_name', 'type': 'string'},
     {'description': 'chunk starting token position in the source document', 'name': 'start_index', 'role': 'start_index', 'type': 'number'},
     {'description': 'chunk number per document', 'name': 'sequence_number', 'role': 'sequence_number', 'type': 'number'},
     {'description': 'vector embeddings', 'name': 'vector', 'role': 'vector_embeddings', 'type': 'array'}],
    'id': 'autoai_rag_1.0',
    'name': 'Document schema using open-source loaders',
    'type': 'struct'}}},
 'settings_importance':                                    importance
 setting_category parameter
 chunking         chunk_overlap         0.142857
                  chunk_size            0.142857
 embeddings       embedding_model       0.142857
 generation       foundation_model      0.142857
 retrieval        number_of_chunks      0.142857
                  retrieval_method      0.142857
                  window_size           0.142857}

Query the RAGPattern locally to test it.

from ibm_watsonx_ai.deployments import RuntimeContext

runtime_context = RuntimeContext(api_client=client)

inference_service_function = best_pattern.inference_service(runtime_context)[0]
question = "How to add Task Credentials?"

context = RuntimeContext(
    api_client=client,
    request_payload_json={"messages": [{"role": "user", "content": question}]},
)

inference_service_function(context)
{'body': {'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': "\nTo add task credentials, you would typically use the `Credentials` class from the `ibm_watsonx_ai.foundation_models` module. Here's an example of how you might do this:\n\n```python\nfrom ibm_watsonx_ai.foundation_models import Credentials\n\n# Create a Credentials object\ncredentials = Credentials(\n apikey='your_api_key',\n url='https://api.us-south.watson-natural-language-understanding.watson.cloud.ibm.com'\n)\n\n# Now you can use these credentials when initializing your ModelInference object\nmodel = ModelInference(\n model_id=client.foundation_models.TextModels.GRANITE_13B_INSTRUCT_V2,\n credentials=credentials,\n project_id=project_id\n)\n```\n\nIn this example, replace `'your_api_key'` with your actual API key and `'https://api.us-south.watson-natural-language-understanding.watson.cloud.ibm.com'` with your actual API URL. The `project_id` should be replaced with your actual project ID.\n\nPlease note that the actual process might vary depending on the specifics of your setup and the version of the IBM Watson Natural Language Understanding service you are using. Always refer to the official IBM documentation for the most accurate and up-to-date information.\n\nRemember to handle your credentials securely and avoid hardcoding them into your scripts or version control systems. Consider using environment variables or secure secret management systems for storing sensitive information."}, 'reference_documents': [{'page_content': 'Example of initialising ModelInference with TextModels Enum:\nfrom ibm_watsonx_ai.foundation_models import ModelInference model = ModelInference(\n model_id=client.foundation_models.TextModels.GRANITE_13B_INSTRUCT_V2, credentials=Credentials(...),\n project_id=project_id,\n) class ChatModels¶\nBases: StrEnum\nThis represents a dynamically generated Enum for Chat Models.\nExample of getting ChatModels:\n# GET ChatModels ENUM\nclient.foundation_models.ChatModels', 'metadata': {'sequence_number': [371, 372, 373, 374, 375], 'document_id': 'ModelInference.txt'}}]}]}}

Deploy the RAGPattern

To deploy the RAGPattern, store the defined RAG function and then create a deployed asset.

deployment_details = best_pattern.inference_service.deploy(
    name="AutoAI RAG deployment - ibm_watsonx_ai documentation",
    space_id=space_id,
    deploy_params={"tags": ["wx-autoai-rag"]},
)
######################################################################################
Synchronous deployment creation for id: '83df92ee-0014-4d9d-825b-bfc3a364b228' started
######################################################################################

initializing
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
......
ready

-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='c56404de-a50e-4cdd-96c4-c43f758e39d6'
-----------------------------------------------------------------------------------------------

Test the deployed function

The RAG service is now deployed in your space. To test the solution, run the cells below. Questions must be provided in the payload, in the format shown below.

deployment_id = client.deployments.get_id(deployment_details)

question = "How to add Task Credentials?"
payload = {"messages": [{"role": "user", "content": question}]}

score_response = client.deployments.run_ai_service(deployment_id, payload)
print(score_response["choices"][0]["message"]["content"])
To add task credentials when initializing ModelInference with ChatModels Enum, you can use the `credentials` parameter. Here's an example:

```python
from ibm_watsonx_ai.foundation_models import ModelInference, Credentials

# Replace 'your_credentials' with your actual credentials
credentials = Credentials('your_credentials')

model = ModelInference(
    model_id=client.foundation_models.ChatModels.GRANITE_3_8B_INSTRUCT,
    credentials=credentials,
    project_id=project_id,
)
```

In this example, replace `'your_credentials'` with your actual credentials. The `Credentials` class is used to create a credentials object, which is then passed to the `ModelInference` constructor using the `credentials` parameter. Remember to replace `project_id` with your actual project ID.

Please ensure that you handle your credentials securely and follow best practices for managing sensitive information. For more information, refer to the official IBM documentation or the source code.

Historical runs

In this section, you will learn how to work with historical RAG Optimizer jobs (runs).

To list historical runs, use the list() method and provide the 'rag_optimizer' filter.

experiment.runs(filter="rag_optimizer").list()
run_id = run_details["metadata"]["id"]
run_id
'ccefe8b6-11d9-45f1-ba2f-5f9c8cfd9e0f'

Get the executed optimizer's configuration parameters

experiment.runs.get_rag_params(run_id=run_id)
{'name': 'AutoAI RAG test - sample notebook',
 'description': 'Experiment run in sample notebook',
 'chunking': [{'chunk_overlap': 64, 'chunk_size': 128, 'method': 'recursive'}],
 'max_number_of_rag_patterns': 5,
 'generation': {'foundation_models': [{'model_id': 'ibm/granite-3-8b-instruct'}],
  'language': {'auto_detect': True}},
 'retrieval': [{'method': 'window', 'number_of_chunks': 1, 'window_size': 1}],
 'optimization_metrics': ['answer_correctness']}

Get the historical rag_optimizer instance and training details

historical_opt = experiment.runs.get_rag_optimizer(run_id)

List trained patterns for the selected optimizer

historical_opt.summary()
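A historical optimizer exposes the same interface as a freshly trained one, so you should also be able to retrieve a stored pattern from it. A sketch, assuming the get_pattern API shown earlier applies to historical runs as well:

# Sketch (assumption): retrieve a pattern from the historical run for further testing
historical_pattern = historical_opt.get_pattern(pattern_name="Pattern1")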

Clean up

To delete the current experiment, use the cancel_run(hard_delete=True) method.

Warning: Once you delete an experiment, you will no longer be able to refer to it.

rag_optimizer.cancel_run(hard_delete=True)
'SUCCESS'

To delete the deployment, use the delete method.

Warning: If you keep the deployment active, it might lead to unnecessary consumption of Compute Unit Hours (CUHs).

client.deployments.delete(deployment_id)
'SUCCESS'

To clean up all of the created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

follow the steps in this sample notebook.
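As a quick start, the sketch below removes only the two data assets created earlier in this notebook; the remaining asset types listed above are covered by the linked sample.

# Hedged sketch: delete just the data assets created earlier in this notebook
client.data_assets.delete(document_asset_id)
client.data_assets.delete(test_asset_id)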

Summary and next steps

You successfully completed this notebook!

You learned how to use ibm-watsonx-ai to run AutoAI RAG experiments.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Mateusz Szewczyk, Software Engineer at watsonx.ai

Copyright © 2024-2026 IBM. This notebook and its source code are released under the terms of the MIT License.