
Use watsonx.ai Text Extraction service to extract text from file

Disclaimers

  • Use only Projects and Spaces that are available in watsonx context.

Notebook content

This notebook contains the steps and code demonstrating how to run a Text Extraction job using the Python SDK and then retrieve the results in the form of a JSON file.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goal

The purpose of this notebook is to demonstrate how to use the Text Extraction service and the ibm-watsonx-ai Python SDK to retrieve text from a file located in IBM Cloud Object Storage.

Contents

This notebook contains the following parts:

  • Set up the environment

  • Create data connections with source document and results reference

  • Text Extraction request

  • Results examination

  • Summary and next steps

Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

Install required packages

!pip install "ibm-watsonx-ai>=1.1.15" | tail -n 1

Defining the watsonx.ai credentials

This cell defines the watsonx.ai credentials required to work with watsonx Foundation Model inferencing.

Action: Provide the IBM Cloud user API key. For details, see documentation.

import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key=getpass.getpass("Please enter your watsonx.ai api key (hit enter): "),
)

Defining the project id

The Text Extraction service requires a project id that provides the context for the call. We will obtain the id from the project in which this notebook runs. Otherwise, please provide the project id.

import os

try:
    project_id = os.environ["PROJECT_ID"]
except KeyError:
    project_id = input("Please enter your project_id (hit enter): ")

API Client initialization

from ibm_watsonx_ai import APIClient

client = APIClient(credentials=credentials, project_id=project_id)

Create data connections with source document and results reference

The document from which we are going to extract text is located in IBM Cloud Object Storage (COS). In the following example we use the Granite Code Models paper as the source text document. The final results file, which will contain the extracted text and necessary metadata, will also be placed in COS. Therefore, we use the ibm_watsonx_ai.helpers.DataConnection and ibm_watsonx_ai.helpers.S3Location classes to create Python objects that represent the references to the processed files. Please note that you have to create a connection asset with your COS details (for a detailed explanation of how to do this, see IBM Cloud Object Storage connection or check the cells below).

Create connection to COS

You can skip this section if you already have a connection asset with IBM Cloud Object Storage.
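If you do, you can look up its ID instead of creating a new one. A minimal sketch (the listing call is part of the SDK; the exact columns it returns may vary by version):

# List existing connection assets in the project and pick the COS one,
# then set connection_asset_id manually instead of running the cells below.
client.connections.list()

# connection_asset_id = "<your existing COS connection asset id>"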

datasource_name = 'bluemixcloudobjectstorage'
bucketname = "textextractionms"

cos_credentials = {
    "endpoint_url": "<endpoint url>",
    "apikey": "<apikey>",
    "access_key_id": "<access_key_id>",
    "secret_access_key": "<secret_access_key>"
}

conn_meta_props = {
    client.connections.ConfigurationMetaNames.NAME: f"Connection to Database - {datasource_name} ",
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: client.connections.get_datasource_type_id_by_name(datasource_name),
    client.connections.ConfigurationMetaNames.DESCRIPTION: "Connection to external Database",
    client.connections.ConfigurationMetaNames.PROPERTIES: {
        'bucket': bucketname,
        'access_key': cos_credentials['access_key_id'],
        'secret_key': cos_credentials['secret_access_key'],
        'iam_url': 'https://iam.cloud.ibm.com/identity/token',
        'url': cos_credentials['endpoint_url']
    }
}

conn_details = client.connections.create(meta_props=conn_meta_props)
connection_asset_id = client.connections.get_id(conn_details)
Creating connections... SUCCESS

Upload file and create document and results reference

from ibm_watsonx_ai.helpers import DataConnection, S3Location

local_source_file_name = "granite_code_models_paper.pdf"
source_file_name = "./files/granite_code_models_paper.pdf"
results_file_name = "./files/text_extraction_granite_code_models_paper.json"

remote_document_reference = DataConnection(
    connection_asset_id=connection_asset_id,
    location=S3Location(bucket=bucketname, path=".")
)
remote_document_reference.set_client(client)
remote_document_reference.write(local_source_file_name, remote_name=source_file_name)

Finally, we can create the DataConnection objects that represent the document and results references.

document_reference = DataConnection(
    connection_asset_id=connection_asset_id,
    location=S3Location(bucket=bucketname, path=source_file_name)
)
results_reference = DataConnection(
    connection_asset_id=connection_asset_id,
    location=S3Location(bucket=bucketname, path=results_file_name)
)

Text Extraction request

Since the data connections for the source and results files are ready, we can proceed to the text extraction job run step. To initialize the Text Extraction manager, we use the TextExtractions class.

from ibm_watsonx_ai.foundation_models.extractions import TextExtractions
from ibm_watsonx_ai.metanames import TextExtractionsMetaNames

extraction = TextExtractions(api_client=client, project_id=project_id)

When running a job, the steps for the text extraction pipeline can be specified. For more details about the available steps, see the documentation. The list of steps available in the SDK can be found below.

TextExtractionsMetaNames().show()
----------------  ----  --------
META_PROP NAME    TYPE  REQUIRED
OCR               dict  N
TABLE_PROCESSING  dict  N
----------------  ----  --------

To view sample parameter values for the text extraction steps, run get_example_values().

TextExtractionsMetaNames().get_example_values()
{'ocr': {'languages_list': ['en']}, 'tables_processing': {'enabled': True}}

In our example, we are going to use the following steps:

steps = {
    TextExtractionsMetaNames.OCR: {'languages_list': ['en']},
    TextExtractionsMetaNames.TABLE_PROCESSING: {'enabled': True}
}

Now, we can run the Text Extraction job. Please note that to get the results in a more readable format we set results_format to "markdown". However, if you want to get a file with more detailed results, please use "json" (a sketch of such a call follows the job submission below).

details = extraction.run_job(
    document_reference=document_reference,
    results_reference=results_reference,
    steps=steps,
    results_format="markdown"
)
details
{'metadata': {'id': 'ac2fac08-575c-4661-bd1f-1738be6f506a',
  'created_at': '2024-11-28T14:50:22.990Z',
  'project_id': '7e8b59ba-2610-4a29-9d90-dc02483ed5f4'},
 'entity': {'document_reference': {'type': 'connection_asset',
   'connection': {'id': '99d36548-ecb8-462a-ac5b-9d6604c1fc37'},
   'location': {'file_name': './files/granite_code_models_paper.pdf',
    'bucket': 'textextractionms'}},
  'document': None,
  'results_reference': {'type': 'connection_asset',
   'connection': {'id': '99d36548-ecb8-462a-ac5b-9d6604c1fc37'},
   'location': {'bucket': 'textextractionms',
    'file_name': './files/text_extraction_granite_code_models_paper.json'}},
  'steps': {'ocr': {'languages_list': ['en']},
   'tables_processing': {'enabled': True}},
  'assembly_md': {},
  'results': {'status': 'submitted', 'number_pages_processed': 0}}}
extraction_job_id = extraction.get_id(extraction_details=details)
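If you prefer the more detailed JSON output mentioned above, the same job can be submitted with results_format="json". A sketch reusing the references defined earlier (the results file in COS would then match the .json path):

# Same request, but producing the detailed JSON results file.
json_details = extraction.run_job(
    document_reference=document_reference,
    results_reference=results_reference,
    steps=steps,
    results_format="json"
)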

We can list text extraction jobs using the list_jobs() method.

extraction.list_jobs()

Moreover, to get the details of a particular text extraction request, run the following:

extraction.get_job_details(extraction_id=extraction_job_id)
{'entity': {'document_reference': {'connection': {'id': '99d36548-ecb8-462a-ac5b-9d6604c1fc37'},
   'location': {'bucket': 'textextractionms',
    'file_name': './files/granite_code_models_paper.pdf'},
   'type': 'connection_asset'},
  'results': {'completed_at': '2024-11-28T14:50:52.104Z',
   'number_pages_processed': 28,
   'running_at': '2024-11-28T14:50:25.294Z',
   'status': 'completed'},
  'results_reference': {'connection': {'id': '99d36548-ecb8-462a-ac5b-9d6604c1fc37'},
   'location': {'bucket': 'textextractionms',
    'file_name': './files/text_extraction_granite_code_models_paper.json'},
   'type': 'connection_asset'},
  'steps': {'ocr': {'languages_list': ['en']},
   'tables_processing': {'enabled': True}}},
 'metadata': {'created_at': '2024-11-28T14:50:22.990Z',
  'id': 'ac2fac08-575c-4661-bd1f-1738be6f506a',
  'modified_at': '2024-11-28T14:50:52.146Z',
  'project_id': '7e8b59ba-2610-4a29-9d90-dc02483ed5f4'}}
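Because the job runs asynchronously, in practice you may want to poll these details until the job finishes before downloading the results. A minimal sketch, assuming the status values visible in the outputs above ('submitted', 'running', 'completed') plus a 'failed' terminal state:

import time

# Poll the job details until the extraction reaches a terminal state.
while True:
    job_details = extraction.get_job_details(extraction_id=extraction_job_id)
    status = job_details['entity']['results']['status']
    if status in ('completed', 'failed'):
        break
    time.sleep(5)  # wait a few seconds between polls

print(f"Job finished with status: {status}")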

Furthermore, to delete a text extraction job, use the delete_job() method, as shown below.
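A minimal sketch (assuming delete_job() accepts the job ID the same way as get_job_details(); this removes the job record, not the results file in COS):

# Delete the text extraction job once it is no longer needed.
extraction.delete_job(extraction_id=extraction_job_id)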

Results examination

Once the text extraction job is completed, we can download the results file and process it further.

results_reference = extraction.get_results_reference(extraction_id=extraction_job_id)

filename = "text_extraction_results_granite_code_models_paper.md"
results_reference.download(filename=filename)

with open(filename, 'r') as file:
    extracted_text = file.read()

print(extracted_text[:1000])
†Corresponding Authors Large Language Models (LLMs) trained on code are revolutionizing the software development process. Increasingly, code LLMs are being inte- grated into software development environments to improve the produc- tivity of human programmers, and LLM-based agents are beginning to show promise for handling complex tasks autonomously. Realizing the full potential of code LLMs requires a wide range of capabilities, including code generation, fixing bugs, explaining and documenting code, maintaining repositories, and more. In this work, we introduce the Granite series of decoder-only code models for code generative tasks, trained with code written in 116 programming languages. The Granite Code models family consists of models ranging in size from 3 to 34 billion parameters, suitable for applications ranging from complex application modernization tasks to on-device memory-constrained use cases. Evaluation on a comprehensive set of tasks demonstrates that Granite Code mode
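From here, the markdown can be processed like any other text, for example by splitting it into sections for downstream indexing or prompting. A minimal sketch (the '#' heading convention is an assumption about the markdown the service produces):

import re

# Split the extracted markdown into sections on top-level headings.
sections = re.split(r"\n(?=# )", extracted_text)
print(f"Number of sections: {len(sections)}")
print(sections[0][:200])  # preview the first section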

Summary and next steps

You successfully completed this notebook!

You learned how to use the TextExtractions manager to run text extraction requests, check the status of a submitted job, and download the results file.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors:

Mateusz Świtała, Software Engineer at watsonx.ai.

Copyright © 2024-2025 IBM. This notebook and its source code are released under the terms of the MIT License.