GitHub Repository: IBM/watson-machine-learning-samples
Path: blob/master/cloud/notebooks/python_sdk/experiments/autoai_rag/Use AutoAI RAG with SQL knowledge base reference.ipynb

Use AutoAI RAG with SQL knowledge base reference

Disclaimers

  • Use only Projects and Spaces that are available in the watsonx context.

Notebook content

This notebook contains the steps and code to demonstrate how to use IBM AutoAI RAG with a SQL database as the knowledge source. The AutoAI RAG experiment conducted in this notebook uses simple sample data about the employees of a fictional company.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goal

The learning goal of this notebook is to:

  • Create an AutoAI RAG job that finds the best SQL RAG agent pattern based on the provided SQL knowledge base.

Contents

This notebook contains the following parts:

  • Set up the environment

  • RAG Optimizer definition

  • RAG Experiment run

  • Comparison and testing of RAG Patterns

  • Deploy RAGPattern

  • Test the deployed function

  • Historical runs

  • Clean up

  • Summary and next steps

Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Create a watsonx.ai Runtime Service instance (a free plan is offered and information about how to create the instance can be found here).

  • Provide a knowledge base database instance (PostgreSQL, MySQL, or DB2) - a sketch of the kind of table this notebook queries follows this list.
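
The sketch below is purely illustrative: it assumes a PostgreSQL instance, an installed SQLAlchemy and psycopg2 driver, and an employees table whose name, columns, and salary values are hypothetical (the two rows shown reuse names, roles, cities, and hire dates that appear in the outputs later in this notebook). Adapt it to your own knowledge base.

# Hypothetical sketch only: populate a small "employees" table so the benchmarking
# questions used later have something to query. The connection URL, table name,
# columns, and salary values are illustrative; replace them with your own data.
from datetime import date

from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://username:password@hostname:5432/database")

with engine.begin() as conn:
    conn.execute(text(
        """
        CREATE TABLE IF NOT EXISTS employees (
            id INTEGER PRIMARY KEY,
            name VARCHAR(100),
            department VARCHAR(50),
            role VARCHAR(50),
            city VARCHAR(50),
            hire_date DATE,
            salary INTEGER
        )
        """
    ))
    conn.execute(
        text(
            "INSERT INTO employees (id, name, department, role, city, hire_date, salary) "
            "VALUES (:id, :name, :department, :role, :city, :hire_date, :salary)"
        ),
        [
            {"id": 1, "name": "Jack Thompson", "department": "Engineering",
             "role": "Software Architect", "city": "New York",
             "hire_date": date(2015, 4, 23), "salary": 120000},  # salary is made up
            {"id": 2, "name": "Alice Johnson", "department": "Engineering",
             "role": "Software Engineer", "city": "New York",
             "hire_date": date(2019, 3, 15), "salary": 95000},   # salary is made up
        ],
    )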

Install and import the required modules and dependencies

%pip install -U 'ibm-watsonx-ai[rag]>=1.4.6' | tail -n 1
Successfully installed Pillow-12.0.0 SQLAlchemy-2.0.44 XlsxWriter-3.2.9 aiohappyeyeballs-2.6.1 aiohttp-3.13.2 aiosignal-1.4.0 annotated-types-0.7.0 anyio-4.11.0 attrs-25.4.0 backoff-2.2.1 bcrypt-5.0.0 beautifulsoup4-4.13.5 build-1.3.0 cachetools-6.2.2 certifi-2025.11.12 charset_normalizer-3.4.4 chromadb-1.3.5 click-8.3.1 coloredlogs-15.0.1 dataclasses-json-0.6.7 distro-1.9.0 durationpy-0.10 elastic-transport-8.17.1 elasticsearch-8.19.2 et-xmlfile-2.0.0 filelock-3.20.0 flatbuffers-25.9.23 frozenlist-1.8.0 fsspec-2025.10.0 google-auth-2.43.0 googleapis-common-protos-1.72.0 grpcio-1.76.0 h11-0.16.0 hf-xet-1.2.0 httpcore-1.0.9 httptools-0.7.1 httpx-0.28.1 httpx-sse-0.4.3 huggingface-hub-1.1.5 humanfriendly-10.0 ibm-cos-sdk-2.14.3 ibm-cos-sdk-core-2.14.3 ibm-cos-sdk-s3transfer-2.14.3 ibm-db-3.2.7 ibm-watsonx-ai-1.4.7 idna-3.11 importlib-resources-6.5.2 jmespath-1.0.1 joblib-1.5.2 jsonpatch-1.33 jsonpointer-3.0.0 jsonschema-4.25.1 jsonschema-specifications-2025.9.1 kubernetes-33.1.0 langchain-0.3.27 langchain-chroma-0.2.5 langchain-community-0.3.31 langchain-core-0.3.80 langchain-db2-0.1.7 langchain-elasticsearch-0.3.2 langchain-ibm-0.3.20 langchain-milvus-0.2.1 langchain-text-splitters-0.3.11 langgraph-0.6.11 langgraph-checkpoint-3.0.1 langgraph-prebuilt-0.6.5 langgraph-sdk-0.2.10 langsmith-0.4.48 lomond-0.3.3 lxml-6.0.2 markdown-3.8.2 markdown-it-py-4.0.0 marshmallow-3.26.1 mdurl-0.1.2 mmh3-5.2.0 mpmath-1.3.0 multidict-6.7.0 mypy-extensions-1.1.0 numpy-2.3.5 oauthlib-3.3.1 onnxruntime-1.23.2 openpyxl-3.1.5 opentelemetry-api-1.38.0 opentelemetry-exporter-otlp-proto-common-1.38.0 opentelemetry-exporter-otlp-proto-grpc-1.38.0 opentelemetry-proto-1.38.0 opentelemetry-sdk-1.38.0 opentelemetry-semantic-conventions-0.59b0 orjson-3.11.4 ormsgpack-1.12.0 overrides-7.7.0 pandas-2.2.3 posthog-5.4.0 propcache-0.4.1 protobuf-6.33.1 pyYAML-6.0.3 pyasn1-0.6.1 pyasn1-modules-0.4.2 pybase64-1.4.2 pydantic-2.12.5 pydantic-core-2.41.5 pydantic-settings-2.12.0 pymilvus-2.6.4 pypdf-6.4.0 pypika-0.48.9 pyproject_hooks-1.2.0 python-docx-1.2.0 python-dotenv-1.2.1 python-pptx-1.0.2 pytz-2025.2 referencing-0.37.0 requests-2.32.5 requests-oauthlib-2.0.0 requests-toolbelt-1.0.0 rich-14.2.0 rpds-py-0.29.0 rsa-4.9.1 scikit-learn-1.7.2 scipy-1.16.3 shellingham-1.5.4 simsimd-6.5.3 sniffio-1.3.1 soupsieve-2.8 sympy-1.14.0 tabulate-0.9.0 tenacity-9.1.2 threadpoolctl-3.6.0 tokenizers-0.22.1 tqdm-4.67.1 typer-0.20.0 typer-slim-0.20.0 typing-inspect-0.9.0 typing-inspection-0.4.2 tzdata-2025.2 urllib3-2.5.0 uvicorn-0.38.0 uvloop-0.22.1 watchfiles-1.1.1 websocket-client-1.9.0 websockets-15.0.1 xxhash-3.6.0 yarl-1.22.0 zstandard-0.25.0 Note: you may need to restart the kernel to use updated packages.

Defining the watsonx.ai credentials

This cell defines the credentials required to work with the watsonx.ai Runtime service.

Action: Provide the IBM Cloud user API key. For details, see documentation.

import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key=getpass.getpass("Please enter your watsonx.ai api key (hit enter): "),
)

Working with spaces

You need to create a space that will be used for your work. If you do not have a space, you can use Deployment Spaces Dashboard to create one.

  • Click New Deployment Space

  • Create an empty space

  • Select Cloud Object Storage

  • Select watsonx.ai Runtime instance and press Create

  • Go to Manage tab

  • Copy the Space GUID into your environment (for example, a .env file), or enter it in the prompt that appears after running the cell below

Tip: You can also use the SDK to prepare the space for your work; a sketch is shown below. More information can be found here.
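
As a rough, hedged sketch of the SDK route (the CRN values are placeholders, and the metadata names are taken from the ibm_watsonx_ai client documentation; verify them against the link above):

# Hypothetical sketch: create a deployment space with the SDK instead of the dashboard.
# Replace the placeholders with your Cloud Object Storage and watsonx.ai Runtime details.
from ibm_watsonx_ai import APIClient

sdk_client = APIClient(credentials=credentials)

space_metadata = {
    sdk_client.spaces.ConfigurationMetaNames.NAME: "AutoAI RAG sample space",
    sdk_client.spaces.ConfigurationMetaNames.DESCRIPTION: "Space created from the sample notebook",
    sdk_client.spaces.ConfigurationMetaNames.STORAGE: {"resource_crn": "<your COS instance CRN>"},
    sdk_client.spaces.ConfigurationMetaNames.COMPUTE: {
        "name": "<your runtime instance name>",
        "crn": "<your runtime instance CRN>",
    },
}

space_details = sdk_client.spaces.store(meta_props=space_metadata)
space_id = sdk_client.spaces.get_id(space_details)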

Action: Assign the space ID below.

import os

try:
    space_id = os.environ["SPACE_ID"]
except KeyError:
    space_id = input("Please enter your space_id (hit enter): ")

Create an instance of APIClient with authentication details.

from ibm_watsonx_ai import APIClient

client = APIClient(credentials=credentials, space_id=space_id)

RAG Optimizer definition

Defining a connection to knowledge base

Provide the ID of a connection to your knowledge database, or create a new one. You can add a connection on the watsonx platform, or type your credentials after running the code below.

knowledge_base_connection_id = (
    input(
        "Provide connection asset ID in your space. Skip this, if you wish to type credentials by hand and hit enter: "
    )
    or None
)

if knowledge_base_connection_id is None:
    try:
        db_type_name = os.environ["DB_TYPE"].upper()
    except KeyError:
        db_type_name = input(
            "Please enter your db type from provided ones: P [Postgres], D [DB2], M [MySQL]"
        ).upper()

    if db_type_name == "P" or db_type_name == "POSTGRES":
        db_type = "postgresql"
    elif db_type_name == "D" or db_type_name == "DB2":
        db_type = "db2"
    elif db_type_name == "M" or db_type_name == "MYSQL":
        db_type = "mysql"
    else:
        raise ValueError(
            "Unavailable db_type_name. Choose one of P [Postgres], D [DB2], M [MySQL]."
        )

    try:
        hostname = os.environ["HOSTNAME"]
    except KeyError:
        hostname = input("Please enter hostname or IP address of your database: ")

    try:
        port = os.environ["PORT"]
    except KeyError:
        port = input("Please enter your database port number and hit enter: ")

    try:
        database = os.environ["DATABASE"]
    except KeyError:
        database = input("Please enter your database name and hit enter: ")

    try:
        username = os.environ["USERNAME"]
    except KeyError:
        username = input("Please enter your username and hit enter: ")

    try:
        password = os.environ["PASSWORD"]
    except KeyError:
        password = getpass.getpass("Please enter your password and hit enter: ")

    try:
        ssl = os.environ["SSL"]
    except KeyError:
        ssl = getpass.getpass("Please enter your ssl certificate and hit enter: ")

    # Create connection
    db_data_source_type_id = client.connections.get_datasource_type_uid_by_name(db_type)

    details = client.connections.create(
        {
            client.connections.ConfigurationMetaNames.NAME: "Database connection",
            client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection created by the sample notebook",
            client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: db_data_source_type_id,
            client.connections.ConfigurationMetaNames.PROPERTIES: {
                "hostname": hostname,
                "port": port,
                "username": username,
                "password": password,
                "ssl": ssl,
            },
        }
    )

    knowledge_base_connection_id = client.connections.get_id(details)

try:
    schema_name = os.environ["SCHEMA"]
except KeyError:
    schema_name = (
        input(
            "Please enter name of the schema you want to use (if you are using basic schema, just hit enter): "
        )
        or "public"
    )

Define a reference to the knowledge base.

from ibm_watsonx_ai.helpers import DatabaseLocation, DataConnection
from ibm_watsonx_ai.utils.autoai.knowledge_base import DatabaseKnowledgeBase

knowledge_base_references = [
    DatabaseKnowledgeBase(
        name="Sample notebook knowledge database",
        description="Base used in exemplary notebook from watsonx_ai_samples",
        connection=DataConnection(
            connection_asset_id=knowledge_base_connection_id,
            location=DatabaseLocation(schema_name=schema_name),
        ),
    )
]

Defining a connection to test data

Define benchmarking questions about your knowledge base. Replace the questions below with your own.

benchmarking_data_IBM_page_content = [
    {
        "question": "Who earns the highest salary in the Engineering department?",
        "correct_answer": "Jack Thompson earns the highest salary in the Engineering department.",
    },
    {
        "question": "List all employees hired before 2019.",
        "correct_answer": "The employees hired before 2019 are Jack Thompson, Emma Davis, Daniel Brown, and Henry Clark.",
    },
    {
        "question": "What’s the average salary per department?",
        "correct_answer": "Engineering: $101,000; Marketing: $71,000; Finance: $77,500; Human Resources: $85,000; Sales: $69,000.",
    },
]

The code in the next cell saves the testing data as a JSON file and uploads it to the space as a data asset.

import json

test_filename = "benchmarking_data_kb_sample.json"

if not os.path.isfile(test_filename):
    with open(test_filename, "w") as json_file:
        json.dump(benchmarking_data_IBM_page_content, json_file, indent=4)

test_asset_details = client.data_assets.create(
    name=test_filename, file_path=test_filename
)
test_asset_id = client.data_assets.get_id(test_asset_details)
test_asset_id
Creating data asset... SUCCESS
'76b95530-6c46-45fe-b804-3a406e97f53d'

Define the connection information for the testing data.

test_data_references = [DataConnection(data_asset_id=test_asset_id)]

RAG Optimizer configuration

Provide the input information for AutoAI RAG optimizer:

  • name - experiment name

  • description - experiment description

  • max_number_of_rag_patterns - maximum number of RAG patterns to create

  • optimization_metrics - target optimization metrics

from ibm_watsonx_ai.experiment import AutoAI
from ibm_watsonx_ai.foundation_models.schema import (
    AutoAIRAGGenerationConfig,
    AutoAIRAGModelConfig,
)

experiment = AutoAI(
    credentials=credentials,
    space_id=space_id,
)

foundation_model = AutoAIRAGModelConfig(
    model_id="mistralai/mistral-small-3-1-24b-instruct-2503",
)

generation_config = AutoAIRAGGenerationConfig(
    foundation_models=[foundation_model],
)

rag_optimizer = experiment.rag_optimizer(
    name="AutoAI RAG - sample notebook - knowledge base",
    description="Experiment run in sample notebook",
    generation=generation_config,
    max_number_of_rag_patterns=3,
    optimization_metrics=[AutoAI.RAGMetrics.ANSWER_CORRECTNESS],
)

Configuration parameters can be retrieved via get_params().

rag_optimizer.get_params()
{'name': 'AutoAI RAG - sample notebook - knowledge base', 'description': 'Experiment run in sample notebook', 'max_number_of_rag_patterns': 3, 'optimization_metrics': ['answer_correctness'], 'generation': {'foundation_models': [{'model_id': 'mistralai/mistral-small-3-1-24b-instruct-2503'}]}}

RAG Experiment run

Call the run() method to trigger the AutoAI RAG experiment. You can run it either in interactive mode (synchronous job, background_mode=False) or in background mode (asynchronous job) by specifying background_mode=True.

run_details = rag_optimizer.run(
    knowledge_base_references=knowledge_base_references,
    test_data_references=test_data_references,
    background_mode=False,
)
############################################## Running '8019f164-995b-4403-bbdb-995f623e56f8' ############################################## pending............... running....... completed Training of '8019f164-995b-4403-bbdb-995f623e56f8' finished successfully.

You can use the get_run_status() method to monitor AutoAI RAG jobs in background mode.

rag_optimizer.get_run_status()
'completed'
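
If you start the experiment with background_mode=True, a simple polling loop like the sketch below can wait for it to finish (the terminal state names other than 'completed' are assumptions):

import time

# Poll until the job reaches a terminal state. 'completed' appears in this notebook's
# output; 'failed' and 'canceled' are assumed terminal states.
while rag_optimizer.get_run_status() not in ("completed", "failed", "canceled"):
    time.sleep(30)

print(rag_optimizer.get_run_status())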

Comparison and testing of RAG Patterns

You can list the trained patterns and information on evaluation metrics in the form of a Pandas DataFrame by calling the summary() method. You can use the DataFrame to compare all discovered patterns and select the one you like for further testing.

summary = rag_optimizer.summary()
summary

Additionally, you can pass the scoring parameter to the summary() method to sort the RAG patterns, starting with the best one for the chosen metric.

summary = rag_optimizer.summary(scoring="answer_correctness")
rag_optimizer.get_run_details()
{'entity': {'hardware_spec': {'id': 'a6c4923b-b8e4-444c-9f43-8a7ec3020110', 'name': 'L'}, 'knowledge_base_references': [{'description': 'Base used in exemplary notebook from watsonx_ai_samples', 'name': 'Sample notebook knowledge database', 'reference': {'connection': {'id': '7001a013-c61b-4393-b43e-29922a40b00f'}, 'location': {'schema_name': 'scheme_functional_test_1'}, 'type': 'connection_asset'}, 'type': 'database'}], 'parameters': {'constraints': {'generation': {'foundation_models': [{'model_id': 'mistralai/mistral-small-3-1-24b-instruct-2503'}]}, 'max_number_of_rag_patterns': 3}, 'optimization': {'metrics': ['answer_correctness']}, 'output_logs': True}, 'results': [{'context': {'iteration': 0, 'max_combinations': 2, 'rag_pattern': {'composition_steps': ['model_selection', 'chunking', 'embeddings', 'retrieval', 'generation'], 'duration_seconds': 18, 'location': {'evaluation_results': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8/Pattern1/evaluation_results.json', 'inference_notebook': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8/Pattern1/inference_notebook.ipynb', 'inference_service_code': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8/Pattern1/inference_ai_service.gz', 'inference_service_metadata': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8/Pattern1/inference_service_metadata.json'}, 'name': 'Pattern1', 'settings': {'agent': {'description': 'Sequential graph with query checking and rewriting nodes.', 'framework': 'langgraph', 'type': 'sequential'}, 'generation': {'chat_template_messages': {'user_message_text': '\nGiven an input question, create a syntactically correct {dialect} query to\nrun to help find the answer. Unless the user specifies in his question a\nspecific number of examples they wish to obtain, always limit your query to\nat most {top_k} results. You can order the results by a relevant column to\nreturn the most interesting examples in the database.\n\nNever query for all the columns from a specific table, only ask for a\nfew relevant columns given the question.\n\nPay attention to use only the column names that you can see in the schema\ndescription. Be careful to not query for columns that do not exist. 
Also,\npay attention to which column is in which table.\n\n You will format your generation as a python dictionary, such as:\n{{"sql_query": <prepared sql query>}}\n\n\nOnly use the following tables:\n{table_info}\n\nUser query in natural language:\n{question}\n\n'}, 'model_id': 'mistralai/mistral-small-3-1-24b-instruct-2503', 'parameters': {'max_completion_tokens': 2048, 'temperature': 0.2}, 'word_to_token_ratio': 1.5}}, 'settings_importance': {'agent': [{'importance': 0.5, 'parameter': 'type'}], 'generation': [{'importance': 0.5, 'parameter': 'foundation_model'}]}}, 'software_spec': {'name': 'autoai-rag_rt24.1-py3.11'}}, 'metrics': {'test_data': [{'ci_high': 1.0, 'ci_low': 0.9091, 'mean': 0.9697, 'metric_name': 'answer_correctness'}]}}, {'context': {'iteration': 1, 'max_combinations': 2, 'rag_pattern': {'composition_steps': ['model_selection', 'chunking', 'embeddings', 'retrieval', 'generation'], 'duration_seconds': 12, 'location': {'evaluation_results': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8/Pattern2/evaluation_results.json', 'inference_notebook': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8/Pattern2/inference_notebook.ipynb', 'inference_service_code': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8/Pattern2/inference_ai_service.gz', 'inference_service_metadata': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8/Pattern2/inference_service_metadata.json'}, 'name': 'Pattern2', 'settings': {'agent': {'description': 'ReAct agent with WatsonxSQLDatabaseToolkit.', 'framework': 'langgraph', 'type': 'react'}, 'generation': {'chat_template_messages': {'system_message_text': 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) 
to the database.\n\nTo start you should ALWAYS look at the tables in the database to see what you can query.\nDo NOT skip this step.\nThen you should query the schema of the most relevant tables.'}, 'model_id': 'mistralai/mistral-small-3-1-24b-instruct-2503', 'parameters': {'max_completion_tokens': 2048, 'temperature': 0.01}, 'word_to_token_ratio': 1.5}}, 'settings_importance': {'agent': [{'importance': 1.0, 'parameter': 'type'}], 'generation': [{'importance': 0.0, 'parameter': 'foundation_model'}]}}, 'software_spec': {'name': 'autoai-rag_rt24.1-py3.11'}}, 'metrics': {'test_data': [{'ci_high': 0.7857, 'ci_low': 0.0, 'mean': 0.2619, 'metric_name': 'answer_correctness'}]}}], 'results_reference': {'location': {'path': 'default_autoai_rag_out', 'training': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8', 'training_status': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8/training-status.json', 'training_log': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8/output.log', 'assets_path': 'default_autoai_rag_out/8019f164-995b-4403-bbdb-995f623e56f8/assets'}, 'type': 'container'}, 'status': {'completed_at': '2025-11-26T20:14:38.788Z', 'message': {'level': 'info', 'text': 'AutoAI RAG execution completed.'}, 'running_at': '2025-11-26T20:14:38.000Z', 'state': 'completed', 'step': 'generation'}, 'test_data_references': [{'location': {'href': '/v2/assets/76b95530-6c46-45fe-b804-3a406e97f53d?space_id=d95bc9d3-1521-4059-ba89-4a5884ac864e', 'id': '76b95530-6c46-45fe-b804-3a406e97f53d'}, 'type': 'data_asset'}], 'timestamp': '2025-11-26T20:14:39.651Z'}, 'metadata': {'created_at': '2025-11-26T20:12:17.415Z', 'description': 'Experiment run in sample notebook', 'id': '8019f164-995b-4403-bbdb-995f623e56f8', 'modified_at': '2025-11-26T20:14:38.867Z', 'name': 'AutoAI RAG - sample notebook - knowledge base', 'space_id': 'd95bc9d3-1521-4059-ba89-4a5884ac864e'}}

Get selected pattern

Get the RAGPattern object from the RAG Optimizer experiment. By default, the RAGPattern of the best pattern is returned.

best_pattern_name = summary.index.values[0]
print("Best pattern is:", best_pattern_name)

best_pattern = rag_optimizer.get_pattern()
Best pattern is: Pattern1 Collecting pyarrow>=3.0.0 Using cached pyarrow-22.0.0-cp311-cp311-macosx_12_0_arm64.whl.metadata (3.1 kB) Using cached pyarrow-22.0.0-cp311-cp311-macosx_12_0_arm64.whl (34.3 MB) Installing collected packages: pyarrow Successfully installed pyarrow-22.0.0

The pattern details can be retrieved by calling the get_pattern_details method:

rag_optimizer.get_pattern_details(pattern_name='Pattern2')

Query the RAGPattern locally to test it.

from ibm_watsonx_ai.deployments import RuntimeContext

runtime_context = RuntimeContext(api_client=client)

inference_service_function = best_pattern.inference_service(runtime_context)[0]
question = "Which employees are based in New York?" context = RuntimeContext( api_client=client, request_payload_json={"messages": [{"role": "user", "content": question}]}, ) inference_service_function(context)
{'body': {'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': 'Based on the provided SQL result, the employees who are based in New York are:\n\n1. **Alice Johnson**\n - Department: Engineering\n - Role: Software Engineer\n - Hire Date: March 15, 2019\n\n2. **Frank Miller**\n - Department: Engineering\n - Role: Data Scientist\n - Hire Date: May 9, 2022\n\n3. **Jack Thompson**\n - Department: Engineering\n - Role: Software Architect\n - Hire Date: April 23, 2015'}, 'reference_documents': [{'metadata': {'document_id': 'scheme_functional_test_1'}}]}]}}
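
Based on the response structure shown above, you can extract just the generated answer from the local response, for example:

# Re-run the local inference and print only the assistant's answer text.
local_response = inference_service_function(context)
print(local_response["body"]["choices"][0]["message"]["content"])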

Deploy RAGPattern

Deployment is done by storing the defined RAG service and then creating a deployment from the stored asset.

deployment_details = best_pattern.inference_service.deploy(
    name="AutoAI RAG deployment - ibm_watsonx_ai documentation",
    space_id=space_id,
    deploy_params={"tags": ["wx-autoai-rag"]},
)
###################################################################################### Synchronous deployment creation for id: '46c7da3e-7f65-4aaf-8d6e-225c2e71b58c' started ###################################################################################### initializing Note: online_url and serving_urls are deprecated and will be removed in a future release. Use inference instead. ............ ready ----------------------------------------------------------------------------------------------- Successfully finished deployment creation, deployment_id='91cc8506-1ad3-4bd3-8b85-624df7ccc76e' -----------------------------------------------------------------------------------------------

Test the deployed function

The RAG service is now deployed in your space. To test the solution, run the cells below. Questions must be provided in the payload, using the format shown below.

deployment_id = client.deployments.get_id(deployment_details)

payload = {"messages": [{"role": "user", "content": question}]}

score_response = client.deployments.run_ai_service(deployment_id, payload)
print(score_response["choices"][0]["message"]["content"])
Based on the provided SQL query and result, the employees who are based in New York are: 1. **Alice Johnson** - Software Engineer 2. **Frank Miller** - Data Scientist 3. **Jack Thompson** - Software Architect
score_response["choices"][0]["message"]["content"]
'Based on the provided SQL query and result, the employees who are based in New York are:\n\n1. **Alice Johnson** - Software Engineer\n2. **Frank Miller** - Data Scientist\n3. **Jack Thompson** - Software Architect'
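
You can score the deployed service with any other question in the same way; for example, reusing one of the benchmarking questions:

# Ask the deployed service another question using the same payload format.
followup_payload = {
    "messages": [{"role": "user", "content": "What’s the average salary per department?"}]
}
followup_response = client.deployments.run_ai_service(deployment_id, followup_payload)
print(followup_response["choices"][0]["message"]["content"])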

Historical runs

In this section, you learn how to work with historical RAG Optimizer jobs (runs).

To list historical runs, use the list() method with the 'rag_optimizer' filter.

experiment.runs(filter="rag_optimizer").list()
run_id = run_details["metadata"]["id"]
run_id
'8019f164-995b-4403-bbdb-995f623e56f8'

Get the executed optimizer's configuration parameters

experiment.runs.get_rag_params(run_id=run_id)
{'name': 'AutoAI RAG - sample notebook - knowledge base', 'description': 'Experiment run in sample notebook', 'max_number_of_rag_patterns': 3, 'generation': {'foundation_models': [{'model_id': 'mistralai/mistral-small-3-1-24b-instruct-2503'}]}, 'optimization_metrics': ['answer_correctness']}

Get historical rag_optimizer instance and training details

historical_opt = experiment.runs.get_rag_optimizer(run_id)

List trained patterns for selected optimizer

historical_opt.summary()
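
The historical optimizer instance can be used like the current one; for example, you should be able to retrieve a pattern from it as well (a sketch, assuming get_pattern behaves the same way on historical runs):

# Retrieve the best pattern of the historical run (assumes get_pattern works on a
# historical optimizer instance the same way as on the current one).
historical_best_pattern = historical_opt.get_pattern()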

Clean up

To delete the current experiment, use the cancel_run method with hard_delete=True.

Warning: Once you delete an experiment, you will no longer be able to refer to it.

rag_optimizer.cancel_run(hard_delete=True)
'SUCCESS'

To delete the deployment, use the delete method.

Warning: Keeping the deployment active may lead to unnecessary consumption of Compute Unit Hours (CUHs).

client.deployments.delete(deployment_id)
'SUCCESS'

If you want to clean up all created assets:

  • experiments

  • trainings

  • pipelines

  • model definitions

  • models

  • functions

  • deployments

please follow this sample notebook. A minimal sketch of listing the remaining assets is shown below.
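
As a rough sketch (using standard ibm_watsonx_ai client listing and deletion calls; the full clean-up procedure is covered in the notebook referenced above), you can inspect what is left in the space and remove the data asset created here:

# List what remains in the space.
client.repository.list()    # models, functions, experiments, ...
client.deployments.list()
client.data_assets.list()

# Delete the benchmarking data asset created earlier in this notebook.
client.data_assets.delete(test_asset_id)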

Summary and next steps

You successfully completed this notebook!

You learned how to use ibm-watsonx-ai to run AutoAI RAG experiments.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Paweł Kocur, Software Engineer watsonx.ai

Copyright © 2025-2026 IBM. This notebook and its source code are released under the terms of the MIT License.