GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cloud/notebooks/python_sdk/deployments/foundation_models/Use watsonx, and `granite-4-h-small` to analyze car rental customer satisfaction from text.ipynb
Kernel: watsonx-ai-samples-py-311


Use watsonx and ibm/granite-4-h-small to analyze car rental customer satisfaction from text

Disclaimers

  • Use only Projects and Spaces that are available in watsonx context.

Notebook content

This notebook contains the steps and code to demonstrate support of text sentiment analysis in watsonx. It introduces commands for data retrieval, model testing and scoring.

Some familiarity with Python is helpful. This notebook uses Python 3.11.

Learning goal

The goal of this notebook is to demonstrate how to use the ibm/granite-4-h-small model to analyze customer satisfaction from text.

Contents

This notebook contains the following parts:

  • Set up the environment

  • Data loading

  • Foundation Models on watsonx.ai

  • Analyze the satisfaction

  • Summary and next steps

Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

Install dependencies

%pip install wget | tail -n 1
%pip install "scikit-learn==1.3.2" | tail -n 1
%pip install -U ibm-watsonx-ai | tail -n 1
Successfully installed wget-3.2
Note: you may need to restart the kernel to use updated packages.
Successfully installed joblib-1.5.2 numpy-1.26.4 scikit-learn-1.3.2 scipy-1.16.2 threadpoolctl-3.6.0
Note: you may need to restart the kernel to use updated packages.
Successfully installed anyio-4.11.0 cachetools-6.2.1 certifi-2025.10.5 charset_normalizer-3.4.4 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 ibm-cos-sdk-2.14.3 ibm-cos-sdk-core-2.14.3 ibm-cos-sdk-s3transfer-2.14.3 ibm-watsonx-ai-1.4.2 idna-3.11 jmespath-1.0.1 lomond-0.3.3 pandas-2.2.3 pytz-2025.2 requests-2.32.5 sniffio-1.3.1 tabulate-0.9.0 tzdata-2025.2 urllib3-2.5.0
Note: you may need to restart the kernel to use updated packages.

Defining the watsonx.ai credentials

This cell defines the watsonx.ai credentials required to work with watsonx Foundation Model inferencing.

Action: Provide the IBM Cloud user API key. For details, see the documentation.

import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key=getpass.getpass("Please enter your watsonx.ai api key (hit enter): "),
)

Defining the project ID

The Foundation Model requires a project ID that provides the context for the call. The ID is obtained from the project in which this notebook runs; otherwise, please provide the project ID manually.

import os

try:
    project_id = os.environ["PROJECT_ID"]
except KeyError:
    project_id = input("Please enter your project_id (hit enter): ")

API Client initialization

from ibm_watsonx_ai import APIClient

client = APIClient(credentials, project_id=project_id)

Data loading

Download the car_rental_training_data dataset. The dataset provides insight into customers' opinions of car rental. It has a label column, Satisfaction, whose values indicate whether the customer was unsatisfied (0) or satisfied (1).

import pandas as pd
import wget

filename = "car_rental_training_data.csv"
url = "https://raw.githubusercontent.com/IBM/watsonx-ai-samples/master/cloud/data/cars-4-you/car_rental_training_data.csv"

if not os.path.isfile(filename):
    wget.download(url, out=filename)

df = pd.read_csv(filename, sep=";")
data = df[["Customer_Service", "Satisfaction"]]
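The pandas call above handles the semicolon-separated file directly via sep=";". As a sketch of what that delimiter choice does, the same parsing can be reproduced with only the standard-library csv module (the two sample rows below are made up for illustration; only the column names Customer_Service and Satisfaction come from the dataset):

```python
import csv
import io

# A tiny stand-in for car_rental_training_data.csv, which is
# semicolon-separated (hence delimiter=";").
sample = (
    "Customer_Service;Satisfaction\n"
    "The service was very good;1\n"
    "The car was broken;0\n"
)

with io.StringIO(sample) as f:
    reader = csv.DictReader(f, delimiter=";")
    rows = [(r["Customer_Service"], int(r["Satisfaction"])) for r in reader]

print(rows)  # [('The service was very good', 1), ('The car was broken', 0)]
```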

Examine the downloaded data.

data.head()

Prepare train and test sets.

from sklearn.model_selection import train_test_split

train, test = train_test_split(data, test_size=0.2)

comments = list(test.Customer_Service)
satisfaction = list(test.Satisfaction)

Foundation Models on watsonx.ai

List available models

client.foundation_models.TextModels.show()
{'GRANITE_3_2_8B_INSTRUCT': 'ibm/granite-3-2-8b-instruct', 'GRANITE_3_2B_INSTRUCT': 'ibm/granite-3-2b-instruct', 'GRANITE_3_3_8B_INSTRUCT': 'ibm/granite-3-3-8b-instruct', 'GRANITE_3_8B_INSTRUCT': 'ibm/granite-3-8b-instruct', 'GRANITE_4_H_SMALL': 'ibm/granite-4-h-small', 'GRANITE_8B_CODE_INSTRUCT': 'ibm/granite-8b-code-instruct', 'GRANITE_GUARDIAN_3_8B': 'ibm/granite-guardian-3-8b', 'GRANITE_VISION_3_2_2B': 'ibm/granite-vision-3-2-2b', 'LLAMA_3_2_11B_VISION_INSTRUCT': 'meta-llama/llama-3-2-11b-vision-instruct', 'LLAMA_3_2_90B_VISION_INSTRUCT': 'meta-llama/llama-3-2-90b-vision-instruct', 'LLAMA_3_3_70B_INSTRUCT': 'meta-llama/llama-3-3-70b-instruct', 'LLAMA_3_405B_INSTRUCT': 'meta-llama/llama-3-405b-instruct', 'LLAMA_4_MAVERICK_17B_128E_INSTRUCT_FP8': 'meta-llama/llama-4-maverick-17b-128e-instruct-fp8', 'LLAMA_GUARD_3_11B_VISION': 'meta-llama/llama-guard-3-11b-vision', 'MISTRAL_MEDIUM_2505': 'mistralai/mistral-medium-2505', 'MISTRAL_SMALL_3_1_24B_INSTRUCT_2503': 'mistralai/mistral-small-3-1-24b-instruct-2503', 'GPT_OSS_120B': 'openai/gpt-oss-120b'}
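The listing above is a plain mapping of enum member names to model identifier strings, so it can be narrowed with ordinary dict operations. A quick sketch that filters for Granite models (using an abridged copy of the mapping printed above):

```python
# A few entries copied from the TextModels listing above, for illustration.
models = {
    "GRANITE_4_H_SMALL": "ibm/granite-4-h-small",
    "GRANITE_3_8B_INSTRUCT": "ibm/granite-3-8b-instruct",
    "GPT_OSS_120B": "openai/gpt-oss-120b",
}

# Keep only entries whose model id starts with the IBM Granite prefix.
granite = {name: mid for name, mid in models.items() if mid.startswith("ibm/granite")}
print(granite)
```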

You need to specify the model_id that will be used for inferencing:

model_id = client.foundation_models.TextModels.GRANITE_4_H_SMALL

Defining the model parameters

You might need to adjust model parameters for different models or tasks; to do so, please refer to the documentation.

from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

parameters = {
    GenParams.MIN_NEW_TOKENS: 1,
    GenParams.MAX_NEW_TOKENS: 1,
    GenParams.TEMPERATURE: 0,
    GenParams.REPETITION_PENALTY: 1,
}

Initialize the model

Initialize the ModelInference class with the previously set parameters.

from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id=model_id,
    params=parameters,
    credentials=credentials,
    project_id=project_id,
)

Model's details

import json

print(json.dumps(model.get_details(), indent=2))
{ "model_id": "ibm/granite-4-h-small", "label": "granite-4-h-small", "provider": "IBM", "source": "IBM", "indemnity": "IBM_COVERED", "functions": [ { "id": "autoai_rag" }, { "id": "text_chat" }, { "id": "text_generation" } ], "short_description": "Granite-4.0-H-Small is a 30B parameter long-context instruct model finetuned from Granite-4.0-H-Small-Base using a combination of open source instruction datasets with permissive license and internally collected synthetic datasets.", "long_description": "Granite-4.0-H-Small is a 30B parameter long-context instruct model finetuned from Granite-4.0-H-Small-Base using a combination of open source instruction datasets with permissive license and internally collected synthetic datasets. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging. Granite 4.0 instruct models feature improved instruction following (IF) and tool-calling capabilities, making them more effective in enterprise applications.", "terms_url": "https://www.ibm.com/support/customer/csol/terms/?id=i126-6883&lc=en", "input_tier": "class_18", "output_tier": "class_5", "number_params": "32b", "min_shot_size": 1, "task_ids": [ "question_answering", "summarization", "retrieval_augmented_generation", "classification", "generation", "code", "extraction", "translation", "function_calling", "code-generation", "code-explanation", "code-fixing" ], "tasks": [ { "id": "question_answering" }, { "id": "summarization" }, { "id": "retrieval_augmented_generation" }, { "id": "classification" }, { "id": "generation" }, { "id": "code" }, { "id": "extraction" }, { "id": "translation" }, { "id": "function_calling" } ], "limits": { "3f6acf43-ede8-413a-ac69-f8af3bb0cbfe": { "call_time": "5m0s" }, "a3d2f92f-06f9-48d0-b2e6-a7ba2b4e0577": { "call_time": "10m0s" }, "d18d88b9-be7a-46ec-be1e-aff14904f1e9": { "call_time": "10m0s" } }, "lifecycle": [ { "id": 
"available", "start_date": "2025-10-02" } ], "versions": [ { "version": "4.0.0", "available_date": "2025-10-02" } ] }

Analyze the satisfaction

Prepare prompt and generate text

instruction = """ Determine if the customer was satisfied with the experience based on the comment. If the customer should want to continue using the service, otherwise they are not satisfied. Return simple yes or no. Comment: The car was broken. They couldn't find a replacement. I've waster over 2 hours. Satisfied: no Comment: The service was very good, even though the engine was loud. Satisfied: yes """
prompt1 = "\n".join([instruction, "Comment:" + comments[2], "Satisfied:"])
print(prompt1)
Determine if the customer was satisfied with the experience based on the comment.
If the customer would want to continue using the service, they are satisfied; otherwise they are not.
Return a simple yes or no.

Comment: The car was broken. They couldn't find a replacement. I've wasted over 2 hours.
Satisfied: no

Comment: The service was very good, even though the engine was loud.
Satisfied: yes

Comment:I would like the personnel to pretend they care about customer, at least.
Satisfied:

Analyze the sentiment for a sample input from the test set, using the few-shot prompt built above.

print(model.generate_text(prompt=prompt1))
no

Calculate the accuracy

sample_size = 10

prompts_batch = [
    "\n".join([instruction, "Comment:" + comment, "Satisfied:"])
    for comment in comments[:sample_size]
]
results = [item.strip() for item in model.generate_text(prompt=prompts_batch)]
print(prompts_batch[0])
Determine if the customer was satisfied with the experience based on the comment.
If the customer would want to continue using the service, they are satisfied; otherwise they are not.
Return a simple yes or no.

Comment: The car was broken. They couldn't find a replacement. I've wasted over 2 hours.
Satisfied: no

Comment: The service was very good, even though the engine was loud.
Satisfied: yes

Comment:The car rental company that I went with had very good customer service. They were out of a certain car I reserved and gave me a upgrade and apologized.
Satisfied:
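The batch code above strips whitespace from each raw completion before scoring. A slightly more defensive normalizer (a hypothetical helper, not part of the ibm-watsonx-ai SDK) could also lower-case the text and drop trailing punctuation, and fail loudly on anything that is not a clean yes/no:

```python
def normalize_label(raw: str) -> str:
    """Map a raw model completion such as ' Yes.' to 'yes' or 'no'."""
    cleaned = raw.strip().lower().rstrip(".!")
    if cleaned not in ("yes", "no"):
        # Surface unexpected completions instead of silently mis-scoring them.
        raise ValueError(f"unexpected completion: {raw!r}")
    return cleaned

print([normalize_label(s) for s in [" Yes.", "no\n", "NO"]])  # ['yes', 'no', 'no']
```

With greedy decoding and MAX_NEW_TOKENS set to 1 the model rarely produces anything else, but the explicit check makes scoring failures visible rather than counting them as wrong labels.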

Score the model

from sklearn.metrics import accuracy_score

label_map = {0: "no", 1: "yes"}
y_true = [label_map[sat] for sat in satisfaction][:sample_size]

print("accuracy_score", accuracy_score(y_true, results))
accuracy_score 0.9
print("true", y_true, "\npred", results)
true ['yes', 'no', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'yes'] pred ['yes', 'no', 'no', 'yes', 'no', 'yes', 'no', 'yes', 'no', 'yes']
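Accuracy alone hides where the single miss occurs. Using only the Python standard library, the true/pred lists printed above can be tallied into a small confusion count, where diagonal (true, predicted) pairs are correct:

```python
from collections import Counter

# The true/pred lists printed above.
y_true = ["yes", "no", "no", "yes", "no", "no", "no", "yes", "no", "yes"]
y_pred = ["yes", "no", "no", "yes", "no", "yes", "no", "yes", "no", "yes"]

# Count (true, predicted) pairs; off-diagonal entries are errors.
confusion = Counter(zip(y_true, y_pred))
print(confusion)
```

Here the only error is one ('no', 'yes') pair: an unsatisfied customer predicted as satisfied, consistent with the 0.9 accuracy above.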

Summary and next steps

You successfully completed this notebook!

You learned how to analyze car rental customer satisfaction with a watsonx.ai foundation model.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Mateusz Szewczyk, Software Engineer at watsonx.ai.

Lukasz Cmielowski, PhD, is an Automation Architect and Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge.

Copyright © 2023-2026 IBM. This notebook and its source code are released under the terms of the MIT License.