
Use watsonx, and Google flan-t5-xxl to analyze sentiments of legal documents

Disclaimers

  • Use only Projects and Spaces that are available in the watsonx context.

Notebook content

This notebook contains the steps and code to demonstrate support of sentiment analysis in watsonx. It introduces commands for data retrieval and model testing.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goal

The goal of this notebook is to demonstrate how to use the google/flan-t5-xxl model to analyze the sentiment of legal documents.

Use case & dataset

One of the key use cases of legal sentiment analysis is in assisting legal professionals in predicting case outcomes. By analyzing the sentiment expressed in previous court decisions and related documents, sentiment analysis algorithms can identify patterns and correlations between the sentiment and the final verdict. This can help lawyers and judges in assessing the strength of legal arguments, evaluating the potential impact of public opinion on the case, and making more accurate predictions about the likely outcome of ongoing cases. The dataset consists of two columns: the phrases and the sentiments.
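
To make the two-column layout concrete, the sketch below builds one illustrative row (the phrase is a real example shown later in this notebook; the actual file is downloaded in the Data loading section):

import pandas as pd

# Illustrative row only; the real Legal_Sentences.csv is downloaded later in this notebook.
example = pd.DataFrame({
    'Phrase': ["The Court rejects the CCAs conclusion that Moore failed to make the requisite showings with respect to intellectual functioning"],
    'Sentiment': [-1]  # -1 = negative, 0 = neutral, 1 = positive
})
example.head()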

Contents

This notebook contains the following parts:

  • Set up the environment
  • Data loading
  • Foundation Models on watsonx.ai
  • Score the model
  • Summary and next steps

Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask for your account credentials

Install and import the ibm-watsonx-ai package and its dependencies

Note: ibm-watsonx-ai documentation can be found here.

!pip install wget | tail -n 1
!pip install "scikit-learn==1.3.2" | tail -n 1
!pip install -U ibm-watsonx-ai | tail -n 1

Connection to WML

Authenticate to the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform url, your username, and your api_key.

username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API_KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=api_key,
    url=url,
    instance_id="openshift",
    version="5.0"
)

Alternatively, you can use your username and password to authenticate WML services.

credentials = Credentials(
    username=***,
    password=***,
    url=***,
    instance_id="openshift",
    version="5.0"
)

Defining the project id

The Foundation Model requires a project id that provides the context for the call. We will try to obtain the id from the project in which this notebook runs; otherwise, please provide the project id.

import os

try:
    project_id = os.environ["PROJECT_ID"]
except KeyError:
    project_id = input("Please enter your project_id (hit enter): ")

Data loading

Download the legal documents dataset.

import wget

filename = 'Legal_Sentences.csv'
url = 'https://raw.githubusercontent.com/kmokht1/Datasets/main/Legal_Sentences.csv'

if not os.path.isfile(filename):
    wget.download(url, out=filename)

Read the data.

import pandas as pd

data = pd.read_csv("Legal_Sentences.csv", index_col=0)
data = data[['Phrase', 'Sentiment']]
data.head()

Prepare dataset label map.

label_map = {
    -1: "negative",
    0: "neutral",
    1: "positive"
}

Inspect data sample.

data.value_counts(['Sentiment'])
Sentiment
-1    282
 1    172
 0    122
Name: count, dtype: int64

Split the data into training and test sets.

from sklearn.model_selection import train_test_split

data_train, data_test, y_train, y_test = train_test_split(
    data['Phrase'],
    data['Sentiment'],
    test_size=0.3,
    random_state=33,
    stratify=data['Sentiment']
)
data_train = pd.DataFrame(data_train)
data_test = pd.DataFrame(data_test)
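
Since the split is stratified, both subsets should keep roughly the same class proportions. A quick optional sanity check (not part of the original flow):

# Optional sanity check: stratification should preserve the class proportions in both splits.
print(y_train.value_counts(normalize=True).round(2))
print(y_test.value_counts(normalize=True).round(2))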

Foundation Models on watsonx.ai

List available models

All available models are presented under the ModelTypes class. For more information, refer to the documentation.

from ibm_watsonx_ai.foundation_models.utils.enums import ModelTypes

print([model.name for model in ModelTypes])
['FLAN_T5_XXL', 'FLAN_UL2', 'MT0_XXL', 'GPT_NEOX', 'MPT_7B_INSTRUCT2', 'STARCODER', 'LLAMA_2_70B_CHAT', 'LLAMA_2_13B_CHAT', 'GRANITE_13B_INSTRUCT', 'GRANITE_13B_CHAT', 'FLAN_T5_XL', 'GRANITE_13B_CHAT_V2', 'GRANITE_13B_INSTRUCT_V2', 'ELYZA_JAPANESE_LLAMA_2_7B_INSTRUCT', 'MIXTRAL_8X7B_INSTRUCT_V01_Q', 'CODELLAMA_34B_INSTRUCT_HF', 'GRANITE_20B_MULTILINGUAL']

You need to specify the model_id that will be used for inferencing:

model_id = ModelTypes.FLAN_T5_XXL
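
Note that the SDK generally also accepts a plain model id string, which is useful for models that are not listed in the ModelTypes enum; the following is assumed to be equivalent to the enum value above:

# Plain string form of the same model id (assumed equivalent to ModelTypes.FLAN_T5_XXL).
model_id = "google/flan-t5-xxl"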

Defining the model parameters

You might need to adjust model parameters for different models or tasks; to do so, please refer to the documentation.

from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

parameters = {
    GenParams.DECODING_METHOD: "greedy",
    GenParams.RANDOM_SEED: 33,
    GenParams.REPETITION_PENALTY: 1,
    GenParams.MIN_NEW_TOKENS: 1,
    GenParams.MAX_NEW_TOKENS: 1
}
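
The greedy decoding above keeps the runs deterministic. For more varied generations, a sampling configuration could look like the sketch below (illustrative values; this notebook does not use it):

# Illustrative sampling configuration (not used in this notebook).
sampling_parameters = {
    GenParams.DECODING_METHOD: "sample",
    GenParams.TEMPERATURE: 0.7,
    GenParams.TOP_K: 50,
    GenParams.TOP_P: 1,
    GenParams.RANDOM_SEED: 33,
    GenParams.MIN_NEW_TOKENS: 1,
    GenParams.MAX_NEW_TOKENS: 1
}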

Initialize the model

Initialize the ModelInference class with the previously set parameters.

from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id=model_id,
    params=parameters,
    credentials=credentials,
    project_id=project_id
)

Model's details

model.get_details()
{'model_id': 'google/flan-t5-xxl',
 'label': 'flan-t5-xxl-11b',
 'provider': 'Google',
 'source': 'Hugging Face',
 'functions': [{'id': 'text_generation'}],
 'short_description': 'flan-t5-xxl is an 11 billion parameter model based on the Flan-T5 family.',
 'long_description': 'flan-t5-xxl (11B) is an 11 billion parameter model based on the Flan-T5 family. It is a pretrained T5 - an encoder-decoder model pre-trained on a mixture of supervised / unsupervised tasks converted into a text-to-text format, and fine-tuned on the Fine-tuned Language Net (FLAN) with instructions for better zero-shot and few-shot performance.',
 'tier': 'class_2',
 'number_params': '11b',
 'min_shot_size': 0,
 'task_ids': ['question_answering', 'summarization', 'retrieval_augmented_generation', 'classification', 'generation', 'extraction'],
 'tasks': [{'id': 'question_answering', 'ratings': {'quality': 4}},
  {'id': 'summarization', 'ratings': {'quality': 4}},
  {'id': 'retrieval_augmented_generation', 'ratings': {'quality': 3}},
  {'id': 'classification', 'ratings': {'quality': 4}},
  {'id': 'generation'},
  {'id': 'extraction', 'ratings': {'quality': 4}}],
 'lifecycle': [{'id': 'available', 'since_version': '8.0.0', 'current_state': True}]}

Define instructions for the model.

instruction="""Determine the sentiment of the sentense. Use either 'positive', 'negative','neutral'.Use the provided examples as a template. """

Prepare the model inputs for the zero-shot example - use the zero_shot_inputs below.

zero_shot_inputs = [{"input": text} for text in data_test['Phrase']]

for i in range(2):
    print(f"The sentence example {i+1} is:\n\t {zero_shot_inputs[i]['input']}\n")
The sentence example 1 is:
	 The Court rejects the CCAs conclusion that Moore failed to make the requisite showings with respect to intellectual functioning

The sentence example 2 is:
	 He argues on appeal that had Defendants written truthful reports, or testified truthfully in deposition

Prepare the model inputs for the few-shot examples - use the few_shot_inputs below.

data_train_and_labels = data_train.copy()
data_train_and_labels['Sentiment'] = y_train

few_shot_example = []
few_shot_examples = []
# Sample two training phrases per sentiment class to build the few-shot prompt block.
for phrase, sentiment in data_train_and_labels.groupby('Sentiment').apply(lambda x: x.sample(2)).values:
    few_shot_example.append(f"\tsentence:\t{phrase}\n\tsentiment: {sentiment}\n")
few_shot_examples = [''.join(few_shot_example)]

few_shot_inputs_ = [{"input": text} for text in data_test['Phrase'].values]

for i in range(2):
    print(f"The sentence example {i+1} is:\n {few_shot_inputs_[i]['input']}\n")
    # Use positional indexing so the labels line up with few_shot_inputs_.
    print(f"\tSentiment: {y_test.iloc[i]}\n")
The sentence example 1 is:
 The Court rejects the CCAs conclusion that Moore failed to make the requisite showings with respect to intellectual functioning

	Sentiment: -1

The sentence example 2 is:
 He argues on appeal that had Defendants written truthful reports, or testified truthfully in deposition

	Sentiment: -1
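
Before generating, it can help to inspect exactly what the model will receive. The sketch below assembles one full prompt using the same concatenation as the generation loop that follows:

# Assemble and print one full few-shot prompt (same concatenation as the loop below).
sample_prompt = " ".join([instruction + few_shot_examples[0], few_shot_inputs_[0]['input']])
print(sample_prompt)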

Generate the sentiment predictions.

results = []
for inp in few_shot_inputs_[:2]:
    results.append(model.generate(" ".join([instruction + few_shot_examples[0], inp['input']]))["results"][0])

Explore model output.

import json

print(json.dumps(results, indent=2))
[ { "generated_text": "neutral", "generated_token_count": 1, "input_token_count": 212, "stop_reason": "max_tokens" }, { "generated_text": "neutral", "generated_token_count": 1, "input_token_count": 212, "stop_reason": "max_tokens" } ]

Score the model

Note: To run the Score section and score the model on the whole legal sentences dataset, please transform the following markdown cells into code cells. Bear in mind that scoring the model on the whole test set can take a significant amount of time.
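
For reference, scoring the whole test split could look like the sketch below; it reuses the exact calls from the generation loop above, so only the runtime expectation is new:

# Sketch: score the entire test split (expect a significant runtime).
y_pred_full = []
for inp in few_shot_inputs_:
    response = model.generate(" ".join([instruction + few_shot_examples[0], inp['input']]))
    y_pred_full.append(response["results"][0]["generated_text"])
y_true_full = [label_map[label] for label in y_test.values]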

Get the true labels.

y_true = [label_map[label] for label in y_test.values[:2]]
y_true

Get the prediction labels.

y_pred = [result['generated_text'] for result in results]
y_pred

Calculate the accuracy score.

from sklearn.metrics import accuracy_score

# sklearn's convention is accuracy_score(y_true, y_pred).
print(accuracy_score(y_true, y_pred))
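
Beyond plain accuracy, a per-class breakdown may be more informative given the class imbalance; a sketch on the scored subset:

# Optional: per-class precision, recall, and F1 for the scored examples.
from sklearn.metrics import classification_report

print(classification_report(y_true, y_pred, labels=["negative", "neutral", "positive"], zero_division=0))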

Summary and next steps

You successfully completed this notebook!

You learned how to analyze the sentiment of legal documents with Google's flan-t5-xxl on watsonx.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors:

Mateusz Szewczyk, Software Engineer at Watson Machine Learning.

Copyright © 2023-2025 IBM. This notebook and its source code are released under the terms of the MIT License.