GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cpd5.1/notebooks/python_sdk/experiments/autoai_rag/Use AutoAI RAG with watsonx Text Extraction service.ipynb
Kernel: Python 3.11


Use AutoAI RAG with watsonx Text Extraction service

Disclaimers

  • Use only Projects and Spaces that are available in the watsonx context.

Notebook content

This notebook demonstrates how to process data using the IBM watsonx.ai Text Extraction service and use the result in an AutoAI RAG experiment. The data used in this notebook is from the Granite Code Models paper.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goal

The learning goals of this notebook are:

  • Process data using the IBM watsonx.ai Text Extraction service

  • Create an AutoAI RAG job that will find the best RAG pattern based on processed data

Contents

This notebook contains the following parts:

Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask them for your account credentials

Install and import the required modules and dependencies

!pip install -U wget | tail -n 1
!pip install -U 'ibm-watsonx-ai[rag]>=1.2.4' | tail -n 1

Connect to WML

Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform url, your username, and your api_key.

username = "PASTE YOUR USERNAME HERE"
api_key = "PASTE YOUR API_KEY HERE"
url = "PASTE THE PLATFORM URL HERE"

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=api_key,
    url=url,
    instance_id="openshift",
    version="5.1",
)

Alternatively, you can use your username and password to authenticate WML services.

credentials = Credentials(
    username=***,
    password=***,
    url=***,
    instance_id="openshift",
    version="5.1",
)

Create an instance of APIClient with authentication details

from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Working with spaces

First, you need to create a space for your work. If you do not have a space already created, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space

  • Create an empty space

  • Go to the space Settings tab

  • Copy the space_id and paste it below

Tip: You can also use SDK to prepare the space for your work. Find more information in the Space Management sample notebook.

Action: Assign the space ID below

space_id = 'PASTE YOUR SPACE GUID HERE'

To be able to interact with all resources available in Watson Machine Learning, set the space that you are using.

client.set.default_space(space_id)
'SUCCESS'

Create an instance of COS client

Connect to the Cloud Object Storage instance by using the ibm_boto3 package.

Action: Assign COS credentials below

cos_bucket_name = "PUT YOUR COS BUCKET NAME HERE"
endpoint_url = "PUT YOUR ENDPOINT URL HERE"
access_key = "PUT YOUR COS ACCESS KEY HERE"
secret_access_key = "PUT YOUR COS SECRET ACCESS KEY HERE"

Create client

import ibm_boto3

cos_client = ibm_boto3.client(
    service_name="s3",
    endpoint_url=endpoint_url,
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_access_key,
)

Initialize the client connection to the created bucket and get the connection ID.

connection_details = client.connections.create(
    {
        "datasource_type": client.connections.get_datasource_type_uid_by_name(
            "bluemixcloudobjectstorage"
        ),
        "name": "Connection to COS for tests",
        "properties": {
            "bucket": cos_bucket_name,
            "access_key": access_key,
            "secret_key": secret_access_key,
            "iam_url": client.service_instance._href_definitions.get_iam_token_url(),
            "url": endpoint_url,
        },
    }
)
cos_connection_id = client.connections.get_id(connection_details)
Creating connections... SUCCESS

Prepare data and connections for the Text Extraction service

The document from which we are going to extract text is located in IBM Cloud Object Storage (COS). In this notebook, we use the Granite Code Models paper as the source text document. The final results file, which will contain the extracted text and the necessary metadata, will also be placed in COS. We therefore use the ibm_watsonx_ai.helpers.DataConnection and ibm_watsonx_ai.helpers.S3Location classes to create Python objects that represent references to the processed files. The reference to the final results will be used as the input for the AutoAI RAG experiment.

from ibm_watsonx_ai.helpers import DataConnection, S3Location

data_url = "https://arxiv.org/pdf/2405.04324"
te_input_filename = "granite_code_models.pdf"
te_result_filename = "granite_code_models.md"

Download and upload training data to the COS bucket. Then define a connection to the uploaded file.

import wget

wget.download(data_url, te_input_filename)
cos_client.upload_file(te_input_filename, cos_bucket_name, te_input_filename)

Input file connection.

input_data_reference = DataConnection(
    connection_asset_id=cos_connection_id,
    location=S3Location(bucket=cos_bucket_name, path=te_input_filename),
)
input_data_reference.set_client(client)

Output file connection.

result_data_reference = DataConnection(
    connection_asset_id=cos_connection_id,
    location=S3Location(bucket=cos_bucket_name, path=te_result_filename),
)
result_data_reference.set_client(client)

Process data using the Text Extraction service

Initialize the Text Extraction service endpoint.

from ibm_watsonx_ai.foundation_models.extractions import TextExtractions

extraction = TextExtractions(
    credentials=credentials,
    space_id=space_id,
)

Run a text extraction job for connections created in the previous step.

from ibm_watsonx_ai.metanames import TextExtractionsMetaNames

response = extraction.run_job(
    document_reference=input_data_reference,
    results_reference=result_data_reference,
    steps={
        TextExtractionsMetaNames.OCR: {
            "process_image": True,
            "languages_list": ["en"],
        },
        TextExtractionsMetaNames.TABLE_PROCESSING: {"enabled": True},
    },
    results_format="markdown",
)
job_id = response['metadata']['id']

Wait for the job to be complete.

import json
import time

while True:
    job_details = extraction.get_job_details(job_id)
    status = job_details['entity']['results']['status']
    if status == "completed":
        print("Job completed successfully, details: {}".format(json.dumps(job_details, indent=2)))
        break
    if status == "failed":
        print("Job failed, details: {}. \n Try to run job again.".format(json.dumps(job_details, indent=2)))
        break
    time.sleep(10)
Job completed successfully, details: {
  "entity": {
    "document_reference": {
      "connection": {
        "id": "ff205130-2c63-4b5e-b0fa-8d6d6d417cfd"
      },
      "location": {
        "bucket": "wnowogorski-test-bucket",
        "file_name": "granite_code_models.pdf"
      },
      "type": "connection_asset"
    },
    "results": {
      "completed_at": "2025-02-20T10:36:13.409Z",
      "number_pages_processed": 28,
      "running_at": "2025-02-20T10:33:13.009Z",
      "status": "completed"
    },
    "results_reference": {
      "connection": {
        "id": "ff205130-2c63-4b5e-b0fa-8d6d6d417cfd"
      },
      "location": {
        "bucket": "wnowogorski-test-bucket",
        "file_name": "granite_code_models.md"
      },
      "type": "connection_asset"
    },
    "steps": {
      "ocr": {
        "languages_list": ["en"]
      },
      "tables_processing": {
        "enabled": true
      }
    }
  },
  "metadata": {
    "created_at": "2025-02-20T10:33:10.714Z",
    "id": "1b7489ea-4abb-418c-9aae-0800244f3d10",
    "modified_at": "2025-02-20T10:36:13.448Z",
    "space_id": "94c99253-1750-4091-befa-0ae8d6ffe454"
  }
}

Get the text extraction result.

from IPython.display import display, Markdown

cos_client.download_file(
    Bucket=cos_bucket_name,
    Key=te_result_filename,
    Filename=te_result_filename
)
with open(te_result_filename, 'r', encoding='utf-8') as file:
    # Display beginning of the result file
    display(Markdown(file.read()[:3000]))

†Corresponding Authors

Large Language Models (LLMs) trained on code are revolutionizing the software development process. Increasingly, code LLMs are being integrated into software development environments to improve the productivity of human programmers, and LLM-based agents are beginning to show promise for handling complex tasks autonomously. Realizing the full potential of code LLMs requires a wide range of capabilities, including code generation, fixing bugs, explaining and documenting code, maintaining repositories, and more. In this work, we introduce the Granite series of decoder-only code models for code generative tasks, trained with code written in 116 programming languages. The Granite Code models family consists of models ranging in size from 3 to 34 billion parameters, suitable for applications ranging from complex application modernization tasks to on-device memory-constrained use cases. Evaluation on a comprehensive set of tasks demonstrates that Granite Code models consistently reaches state-of-the-art performance among available open-source code LLMs. The Granite Code model family was optimized for enterprise software development workflows and performs well across a range of coding tasks (e.g. code generation, fixing and explanation), making it a versatile “all around” code model. We release all our Granite Code models under an Apache 2.0 license for both research and commercial use. https://github.com/ibm-granite/granite-code-models

1 Introduction

Over the last several decades, software has been woven into the fabric of every aspect of our society. As demand for software development surges, it is more critical than ever to increase software development productivity, and LLMs provide promising path for augmenting human programmers. Prominent enterprise use cases for LLMs in software development productivity include code generation, code explanation, code fixing, unit test and documentation generation, application modernization, vulnerability detection, code translation, and more.

Recent years have seen rapid progress in LLMs’ ability to generate and manipulate code, and a range of models with impressive coding abilities are available today.

[Figure residue from the extracted PDF: bar charts comparing code models on Generating Code, Explaining Code, Fixing Code, and Average for MistralAI/Mistral-7B, Meta/Llama-3-8B, Google/Gemma-7B, Meta/CodeLlama-7B, Google/CodeGemma-7B, IBM/Granite-8B-Code-Base, BigCode/StarCoder2-7B and their instruct variants (Apache 2.0 License noted); the preview is truncated at 3,000 characters.]

Prepare data and connections for the AutoAI RAG experiment

Upload a JSON file with benchmarking data to COS and define a connection to it.

Note: correct_answer_document_ids must refer to the document produced by the Text Extraction service, not the initial document.

benchmarking_data = [
    {
        "question": "What are the two main variants of Granite Code models?",
        "correct_answer": "The two main variants are Granite Code Base and Granite Code Instruct.",
        "correct_answer_document_ids": [te_result_filename]
    },
    {
        "question": "What is the purpose of Granite Code Instruct models?",
        "correct_answer": "Granite Code Instruct models are finetuned for instruction-following tasks using datasets like CommitPack, OASST, HelpSteer, and synthetic code instruction datasets, aiming to improve reasoning and instruction-following capabilities.",
        "correct_answer_document_ids": [te_result_filename]
    },
    {
        "question": "What is the licensing model for Granite Code models?",
        "correct_answer": "Granite Code models are released under the Apache 2.0 license, ensuring permissive and enterprise-friendly usage.",
        "correct_answer_document_ids": [te_result_filename]
    },
]
import os

test_filename = "benchmark.json"
if not os.path.isfile(test_filename):
    with open(test_filename, "w") as json_file:
        json.dump(benchmarking_data, json_file, indent=4)

cos_client.upload_file(test_filename, cos_bucket_name, test_filename)
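Because the benchmark must point at the extracted result file rather than the source PDF, it can be worth checking the records before uploading. A minimal sketch, using a hypothetical helper that is not part of the SDK:

```python
# Hypothetical sanity check: every benchmark record must reference the
# Text Extraction result file (e.g. "granite_code_models.md"), not the PDF.
def validate_benchmark(records, expected_document_id):
    problems = []
    for i, record in enumerate(records):
        # Each record needs a question, a reference answer, and document ids.
        for key in ("question", "correct_answer", "correct_answer_document_ids"):
            if key not in record:
                problems.append(f"record {i}: missing '{key}'")
        ids = record.get("correct_answer_document_ids", [])
        if expected_document_id not in ids:
            problems.append(f"record {i}: does not reference '{expected_document_id}'")
    return problems

# Example with made-up records; the second one wrongly cites the source PDF.
records = [
    {"question": "q1", "correct_answer": "a1",
     "correct_answer_document_ids": ["granite_code_models.md"]},
    {"question": "q2", "correct_answer": "a2",
     "correct_answer_document_ids": ["granite_code_models.pdf"]},
]
print(validate_benchmark(records, "granite_code_models.md"))
# → ["record 1: does not reference 'granite_code_models.md'"]
```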

Define the test data connection.

test_data_reference = DataConnection(
    connection_asset_id=cos_connection_id,
    location=S3Location(bucket=cos_bucket_name, path=test_filename),
)
test_data_reference.set_client(client)

test_data_references = [test_data_reference]

Use the reference to the Text Extraction job result as input for the AutoAI RAG experiment.

input_data_references = [result_data_reference]

Run the AutoAI RAG experiment

Provide the input information for AutoAI RAG optimizer:

  • name - experiment name

  • description - experiment description

  • max_number_of_rag_patterns - maximum number of RAG patterns to create

  • optimization_metrics - target optimization metrics

from ibm_watsonx_ai.experiment import AutoAI

experiment = AutoAI(credentials, space_id=space_id)

rag_optimizer = experiment.rag_optimizer(
    name='AutoAI RAG - Text Extraction service experiment',
    description="AutoAI RAG experiment on documents generated by text extraction service",
    max_number_of_rag_patterns=4,
    optimization_metrics=['answer_correctness']
)

Call the run() method to trigger the AutoAI RAG experiment. Choose one of two modes:

  • To use the interactive mode (synchronous job), specify background_mode=False

  • To use the background mode (asynchronous job), specify background_mode=True

rag_optimizer.run(
    input_data_references=input_data_references,
    test_data_references=test_data_references,
    background_mode=False
);
##############################################
Running '508d378a-3514-47a4-971c-c2735ad8314b'
##############################################

pending....
running....................................................................................................................................................................................................................
completed
Training of '508d378a-3514-47a4-971c-c2735ad8314b' finished successfully.
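In background mode the call returns immediately and you poll the run status yourself, much like the Text Extraction wait loop earlier in this notebook. The waiting logic can be sketched as a generic helper; the helper and the status strings here are illustrative, not part of the SDK:

```python
import time

def wait_for_completion(get_status, poll_seconds=10, max_polls=360):
    """Poll `get_status` until it reports a terminal state (illustrative helper)."""
    for _ in range(max_polls):
        status = get_status()
        if status == "completed":
            return status
        if status == "failed":
            raise RuntimeError("job ended with status: failed")
        time.sleep(poll_seconds)
    raise TimeoutError("job did not finish within the polling budget")

# Simulated status source standing in for a real SDK status call:
statuses = iter(["pending", "running", "running", "completed"])
print(wait_for_completion(lambda: next(statuses), poll_seconds=0))
# → completed
```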

Compare and test RAG patterns

You can list the trained patterns and information on evaluation metrics in the form of a Pandas DataFrame by calling the summary() method. You can use the DataFrame to compare all discovered patterns and select the one you like for further testing.

summary = rag_optimizer.summary() summary
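Because summary() returns a DataFrame with one row per pattern and the evaluation metrics as columns, picking a pattern programmatically is ordinary DataFrame work. A toy illustration with made-up scores; the column name is an assumption, so check your summary's actual columns:

```python
import pandas as pd

# Toy stand-in for rag_optimizer.summary(); real column names may differ.
summary = pd.DataFrame(
    {"mean_answer_correctness": [0.62, 0.81, 0.74]},
    index=["Pattern1", "Pattern2", "Pattern3"],
)

# Pattern with the highest score on the optimization metric:
best = summary["mean_answer_correctness"].idxmax()
print(best)
# → Pattern2
```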

Get the selected pattern

Get the RAGPattern object from the RAG Optimizer experiment. By default, the RAGPattern of the best pattern is returned.

best_pattern_name = summary.index.values[0]
print('Best pattern is:', best_pattern_name)

best_pattern = rag_optimizer.get_pattern()
Best pattern is: Pattern4
rag_optimizer.get_pattern_details(pattern_name=best_pattern_name)
{'composition_steps': ['chunking', 'embeddings', 'vector_store', 'retrieval', 'generation'],
 'duration_seconds': 398,
 'location': {'evaluation_results': '/spaces/94c99253-1750-4091-befa-0ae8d6ffe454/assets/auto_ml/auto_ml.6b3ec81e-4bf0-4cf7-9f98-ba17a8f7d3e7/wml_data/508d378a-3514-47a4-971c-c2735ad8314b/Pattern4/evaluation_results.json',
  'indexing_notebook': '/spaces/94c99253-1750-4091-befa-0ae8d6ffe454/assets/auto_ml/auto_ml.6b3ec81e-4bf0-4cf7-9f98-ba17a8f7d3e7/wml_data/508d378a-3514-47a4-971c-c2735ad8314b/Pattern4/indexing_inference_notebook.ipynb',
  'inference_notebook': '/spaces/94c99253-1750-4091-befa-0ae8d6ffe454/assets/auto_ml/auto_ml.6b3ec81e-4bf0-4cf7-9f98-ba17a8f7d3e7/wml_data/508d378a-3514-47a4-971c-c2735ad8314b/Pattern4/indexing_inference_notebook.ipynb'},
 'name': 'Pattern4',
 'settings': {'chunking': {'chunk_overlap': 512, 'chunk_size': 1024, 'method': 'recursive'},
  'embeddings': {'model_id': 'intfloat/multilingual-e5-large', 'truncate_input_tokens': 512, 'truncate_strategy': 'left'},
  'generation': {'context_template_text': '{document}',
   'model_id': 'google/flan-ul2',
   'parameters': {'decoding_method': 'greedy', 'max_new_tokens': 1000, 'min_new_tokens': 1},
   'prompt_template_text': 'Please answer the question I provide in the Question section below, based solely on the information I provide in the Context section. If the question is unanswerable, please say you cannot answer.\n\nContext:\n{reference_documents}:\n\nQuestion: {question}. \nAgain, please answer the question based on the context provided only. If the context is not related to the question, just say you cannot answer.'},
  'retrieval': {'method': 'window', 'number_of_chunks': 5, 'window_size': 1},
  'vector_store': {'datasource_type': 'chroma',
   'distance_metric': 'cosine',
   'index_name': 'autoai_rag_508d378a_20250220105126',
   'operation': 'upsert',
   'schema': {'fields': [{'description': 'text field', 'name': 'text', 'role': 'text', 'type': 'string'},
     {'description': 'document name field', 'name': 'document_id', 'role': 'document_name', 'type': 'string'},
     {'description': 'chunk starting token position in the source document', 'name': 'start_index', 'role': 'start_index', 'type': 'number'},
     {'description': 'chunk number per document', 'name': 'sequence_number', 'role': 'sequence_number', 'type': 'number'},
     {'description': 'vector embeddings', 'name': 'vector', 'role': 'vector_embeddings', 'type': 'array'}],
    'id': 'autoai_rag_1.0',
    'name': 'Document schema using open-source loaders',
    'type': 'struct'}}}}
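The chunking settings in the pattern details (chunk_size 1024, chunk_overlap 512) describe overlapping windows over the extracted text. A minimal character-based sketch of that overlap idea; the SDK's actual "recursive" splitter is more sophisticated and splits on separators, not raw character counts:

```python
def chunk_text(text, chunk_size=1024, chunk_overlap=512):
    """Split text into fixed-size chunks where consecutive chunks
    share `chunk_overlap` characters (illustrative only)."""
    step = chunk_size - chunk_overlap
    return [
        text[start:start + chunk_size]
        for start in range(0, max(len(text) - chunk_overlap, 1), step)
    ]

# Small example with chunk_size=4, chunk_overlap=2:
print(chunk_text("abcdefghij", chunk_size=4, chunk_overlap=2))
# → ['abcd', 'cdef', 'efgh', 'ghij']
```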

Test the RAGPattern by querying it locally.

questions = ["What training objectives are used for the models?"]

payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [
        {
            "values": questions,
            "access_token": client.service_instance._get_token()
        }
    ]
}

resp = best_pattern.inference_function()(payload)
print(resp["predictions"][0]["values"][0][0])
The models are trained using the causal language modeling objective and the Fill-In-the-Middle (FIM) objective. The FIM objective predicts inserted tokens based on the given context and subsequent text.

Summary

You successfully completed this notebook!

You learned how to use AutoAI RAG with documents processed by the Text Extraction service.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Author:

Witold Nowogórski, Software Engineer at watsonx.ai.

Copyright © 2025 IBM. This notebook and its source code are released under the terms of the MIT License.