Use watsonx.ai Text Extraction V2 service to extract text from file
Disclaimers
Use only Projects and Spaces that are available in watsonx context.
Notebook content
This notebook contains the steps and code demonstrating how to run a Text Extraction job using the Python SDK and then retrieve the results as JSON, Markdown, HTML, and image files.
Some familiarity with Python is helpful. This notebook uses Python 3.11.
Learning goal
The purpose of this notebook is to demonstrate how to use the Text Extraction V2 service and the ibm-watsonx-ai
Python SDK to extract text from a file located in IBM Cloud Object Storage.
Contents
This notebook contains the following parts:
Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
Create a watsonx.ai Runtime Service instance (a free plan is offered and information about how to create the instance can be found here).
Install required packages
Successfully installed wget-3.2
Successfully installed anyio-4.9.0 cachetools-6.1.0 certifi-2025.7.14 charset_normalizer-3.4.2 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 ibm-cos-sdk-2.14.2 ibm-cos-sdk-core-2.14.2 ibm-cos-sdk-s3transfer-2.14.2 ibm-watsonx-ai-1.3.30 idna-3.10 jmespath-1.0.1 lomond-0.3.3 numpy-2.3.1 pandas-2.2.3 pytz-2025.2 requests-2.32.4 sniffio-1.3.1 tabulate-0.9.0 tzdata-2025.2 urllib3-2.5.0
Defining the watsonx.ai credentials
This cell defines the watsonx.ai credentials required to work with watsonx Foundation Model inferencing.
Action: Provide the IBM Cloud user API key. For details, see documentation.
Defining the project ID
The Text Extraction service requires a project ID that provides the context for the call. We try to obtain the ID from the project in which this notebook runs; otherwise, please provide the project ID explicitly.
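One common pattern for this (an illustrative sketch, assuming the runtime exposes a `PROJECT_ID` environment variable, as watsonx.ai project runtimes do):

```python
import os

# Inside a watsonx.ai project runtime the PROJECT_ID environment variable is
# set automatically; otherwise paste your project ID manually.
try:
    project_id = os.environ["PROJECT_ID"]
except KeyError:
    project_id = "PASTE YOUR PROJECT_ID HERE"
```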
API Client initialization
The document from which we are going to extract text is located in IBM Cloud Object Storage (COS). In the following example we use the Granite Code Models paper as the source document. The final results file, which will contain the extracted text and associated metadata, will also be placed in COS. Therefore, we use the ibm_watsonx_ai.helpers.DataConnection
and ibm_watsonx_ai.helpers.S3Location
classes to create Python objects that represent references to the processed files. Please note that you have to create a connection asset with your COS details (for a detailed explanation of how to do this, see IBM Cloud Object Storage connection or check the cells below).
Download source document
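For instance, the paper can be fetched with the standard library (the arXiv URL below is an assumption based on the paper's arXiv identifier, 2405.04324):

```python
import os
import urllib.request

source_filename = "granite_code_models_paper.pdf"
source_url = "https://arxiv.org/pdf/2405.04324"  # Granite Code Models paper

# Download the sample document only if it is not already present locally.
if not os.path.isfile(source_filename):
    urllib.request.urlretrieve(source_url, source_filename)
```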
Create connection to COS
Tip: You need to create your connection only once.
Create text extraction document reference and result references
Upload source file to COS
Since the data connections for the source and results files are ready, we can proceed to running the text extraction job. To initialize the Text Extraction manager we use the TextExtractionsV2
class.
Define Text Extraction parameters
When running a job, you can specify the parameters of the text extraction pipeline. For more details about the available parameters, see the documentation. The list of parameters available in the SDK can be found below.
In our example we are going to use the following parameters:
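As an illustration only, such parameters might be collected in a plain dictionary; the key names below are assumptions, and the authoritative list is in the Text Extraction V2 documentation:

```python
# Illustrative values only; check the Text Extraction V2 documentation for
# the parameters supported by your service version.
parameters = {
    "languages": ["en"],               # expected document languages (assumption)
    "auto_rotation_correction": True,  # correct rotated pages (assumption)
}
```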
Run extraction job for single return format
In order to run an extraction job, where only a single output format is requested, the result_formats
parameter must be specified using the TextExtractionsV2ResultFormats
enum. In our example we will use the TextExtractionsV2ResultFormats.MARKDOWN
format.
We can list text extraction jobs using the list method.
Moreover, to get details of a particular text extraction request, run the following:
To wait until the text extraction job completes, run the following cell:
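The waiting logic is a simple polling loop. The sketch below is self-contained, with a stub standing in for the SDK status lookup; in the notebook the stub would be replaced by a call such as `extraction.get_job_details(...)` and a read of the job's status field:

```python
import time
from itertools import count

def get_status(job_id, _tick=count()):
    """Stub for the SDK status lookup; reports 'running' twice, then 'completed'."""
    return "running" if next(_tick) < 2 else "completed"

def wait_for_job(job_id, poll_seconds=0.01):
    # Poll until the job reaches a terminal state.
    while True:
        status = get_status(job_id)
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_seconds)

final_status = wait_for_job("example-job-id")
print(f"Text extraction job finished with status: {final_status}")
```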
Furthermore, to delete a text extraction job run, use the delete_job()
method.
Results examination
Once the extraction job is completed, we can use the get_results_reference
method to create the results data connection.
Download the file to the result path.
After a successful download, it's possible to read the file content.
Run extraction job for multiple file formats
In order to run an extraction job where multiple output formats are requested, the result_formats
parameter must be specified using either a list of TextExtractionsV2ResultFormats
enums (recommended) or a list of str
instances. In our example we will use a list of the following file formats:
TextExtractionsV2ResultFormats.ASSEMBLY_JSON
TextExtractionsV2ResultFormats.HTML
TextExtractionsV2ResultFormats.PAGE_IMAGES
We can list text extraction jobs using the list method.
Moreover, to get details of a particular text extraction request, run the following:
To wait until the text extraction job completes, run the following cell:
Furthermore, to delete a text extraction job run, use the delete_job()
method.
Results examination
Once the extraction job is completed, we can use the get_results_reference
method to create the results data connection.
After a successful download, it's possible to read the file contents.
Title
Granite Code Models: A Family of Open Foundation Models for Code Intelligence
Authors
Mayank Mishra⋆ Matt Stallone⋆ Gaoyuan Zhang⋆ Yikang Shen Aditya Prasad Adriana Meza Soria Michele Merler Parameswaran Selvam Saptha Surendran Shivdeep Singh Manish Sethi Xuan-Hong Dang Pengyuan Li Kun-Lung Wu Syed Zawad Andrew Coleman Matthew White Mark Lewis Raju Pavuluri Yan Koyfman Boris Lublinsky Maximilien de Bayser Ibrahim Abdelaziz Kinjal Basu Mayank Agarwal Yi Zhou Chris Johnson Aanchal Goyal Hima Patel Yousaf Shah Petros Zerfos Heiko Ludwig Asim Munawar Maxwell Crouse Pavan Kapanipathi Shweta Belgodere Carlos Fonseca Amith Singhee Nirmit Desai David D. Cox Ruchir Puri† Rameswar Panda†
Abstract
Large Language Models (LLMs) trained on code are revolutionizing the software development process. Increasingly, code LLMs are being into software development environments to improve the productivity of human programmers, and LLM-based agents are beginning to show promise for handling complex tasks autonomously. Realizing the full potential of code LLMs requires a wide range of capabilities, including code generation, fixing bugs, explaining and documenting code, maintaining repositories, and more. In this work, we introduce the Granite series of decoder-only code models for code generative tasks, trained with code written in 116 programming languages. The Granite Code models family can be used in a wide range of applications, from complex application modernization tasks to on-device memory-constrained use cases. Evaluation on a comprehensive set of tasks demonstrates that Granite Code models consistently reaches state-of-the-art performance among available open-source code LLMs. The Granite Code model family was optimized for enterprise software development workflows and performs well across a range of coding tasks (e.g., code generation, fixing and explanation), making it a versatile “all around” code model. We release all our Granite Code models under an Apache 2.0 license for both research and commercial use.
Complete text (first 1000 characters)
Granite Code Models: A Family of Open Foundation Models for Code Intelligence Mayank Mishra⋆ Matt Stallone⋆ Gaoyuan Zhang⋆ Yikang Shen Aditya Prasad Adriana Meza Soria Michele Merler Parameswaran Selvam Saptha Surendran Shivdeep Singh Manish Sethi Xuan-Hong Dang Pengyuan Li Kun-Lung Wu Syed Zawad Andrew Coleman Matthew White Mark Lewis Raju Pavuluri Yan Koyfman Boris Lublinsky Maximilien de Bayser Ibrahim Abdelaziz Kinjal Basu Mayank Agarwal Yi Zhou Chris Johnson Aanchal Goyal Hima Patel Yousaf Shah Petros Zerfos Heiko Ludwig Asim Munawar Maxwell Crouse Pavan Kapanipathi Shweta Salaria Bob Calio Sophia Wen Seetharami Seelam Brian Belgodere Carlos Fonseca Amith Singhee Nirmit Desai David D. Cox Ruchir Puri† Rameswar Panda† IBM Research ⋆Equal Contribution †Corresponding Authors [email protected], [email protected] Abstract Large Language Models (LLMs) trained on code are revolutionizing the software development process. Increasingly, code LLMs are being inte grated into software developmen
Summary and next steps
You successfully completed this notebook!
You learned how to use the TextExtractionsV2
manager to run text extraction requests, check the status of submitted jobs, and download the results files.
Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.
Authors:
Mateusz Świtała, Software Engineer at watsonx.ai.
Rafał Chrzanowski, Software Engineer Intern at watsonx.ai.
Copyright © 2025 IBM. This notebook and its source code are released under the terms of the MIT License.