GitHub Repository: ibm/watson-machine-learning-samples
Path: blob/master/cpd4.8/notebooks/python_sdk/deployments/foundation_models/Use watsonx, and LangChain to make a series of calls to a language model.ipynb


Use watsonx, and LangChain to make a series of calls to a language model

Disclaimers

  • Use only Projects and Spaces that are available in the watsonx context.

Notebook content

This notebook contains the steps and code to demonstrate a Simple Sequential Chain using the LangChain integration with watsonx models.

Some familiarity with Python is helpful. This notebook uses Python 3.10.

Learning goal

The goal of this notebook is to demonstrate how to chain the google/flan-ul2 and google/flan-t5-xxl models to generate a sequence: a random question on a given topic, followed by an answer to that question. It also introduces the LangChain framework, using a simple chain (LLMChain) and an extended chain (SimpleSequentialChain) with the WatsonxLLM wrapper.

Contents

This notebook contains the following parts:

  • Set up the environment
  • Foundation Models on watsonx.ai
  • WatsonxLLM interface
  • Simple Sequential Chain experiment
  • Summary and next steps

Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask for your account credentials

Install and import the dependencies

!pip install "ibm-watson-machine-learning>=1.0.327" | tail -n 1
!pip install "pydantic>=1.10.0" | tail -n 1
!pip install "langchain==0.0.340" | tail -n 1

Connection to WML

Authenticate with the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform URL, your username, and your api_key.

username = 'PASTE YOUR USERNAME HERE'
api_key = 'PASTE YOUR API_KEY HERE'
url = 'PASTE THE PLATFORM URL HERE'

wml_credentials = {
    "username": username,
    "apikey": api_key,
    "url": url,
    "instance_id": 'openshift',
    "version": '4.8'
}

Alternatively, you can use a username and password to authenticate with WML services.

wml_credentials = {
    "username": ***,
    "password": ***,
    "url": ***,
    "instance_id": 'openshift',
    "version": '4.8'
}

Defining the project id

The foundation model requires a project id that provides the context for the call. We will obtain the id from the project in which this notebook runs; otherwise, please provide the project id explicitly.

import os

try:
    project_id = os.environ["PROJECT_ID"]
except KeyError:
    project_id = input("Please enter your project_id (hit enter): ")

Foundation Models on watsonx.ai

List available models

All available models are presented under the ModelTypes class.

from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes

print([model.name for model in ModelTypes])
['FLAN_T5_XXL', 'FLAN_UL2', 'MT0_XXL', 'GPT_NEOX', 'MPT_7B_INSTRUCT2', 'STARCODER', 'LLAMA_2_70B_CHAT']
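
Each ModelTypes member maps to the model id string used by the service (the id for FLAN_UL2 also appears in the dict() output later in this notebook), for example:

print(ModelTypes.FLAN_UL2.value)     # google/flan-ul2
print(ModelTypes.FLAN_T5_XXL.value)  # google/flan-t5-xxl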

You need to specify the model_ids that will be used for inferencing:

model_id_1 = ModelTypes.FLAN_UL2
model_id_2 = ModelTypes.FLAN_T5_XXL

Defining the model parameters

You might need to adjust model parameters for different models or tasks; to do so, please refer to the documentation for the GenTextParamsMetaNames class.

Action: If you encounter any complications, please refer to the documentation.

from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams
from ibm_watson_machine_learning.foundation_models.utils.enums import DecodingMethods

parameters = {
    GenParams.DECODING_METHOD: DecodingMethods.SAMPLE,
    GenParams.MAX_NEW_TOKENS: 100,
    GenParams.MIN_NEW_TOKENS: 1,
    GenParams.TEMPERATURE: 0.5,
    GenParams.TOP_K: 50,
    GenParams.TOP_P: 1
}
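
To see which generation parameters exist, you can print the parameter metadata. This is a sketch under an assumption: it presumes GenTextParamsMetaNames follows the same MetaNames convention (a show() helper) as the other classes in ibm_watson_machine_learning.metanames.

# Assumption: GenParams exposes the standard MetaNames show() helper,
# which prints a table of parameter names, types, and example values.
GenParams().show()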

Initialize the model

Initialize the Model class with the previously set parameters.

from ibm_watson_machine_learning.foundation_models import Model

flan_ul2_model = Model(
    model_id=model_id_1,
    params=parameters,
    credentials=wml_credentials,
    project_id=project_id)

flan_t5_model = Model(
    model_id=model_id_2,
    credentials=wml_credentials,
    project_id=project_id)
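
As an optional sanity check, the Model class provides a get_details() method that returns the model's specification from the service; a quick inspection might look like this:

# Returns the model specification (id, label, supported limits, etc.).
flan_ul2_model.get_details()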

WatsonxLLM interface

WatsonxLLM is a wrapper around watsonx.ai models that provides LangChain chain integration for them.

Action: For more details about CustomLLM, check the LangChain documentation.

Initialize the WatsonxLLM class.

from ibm_watson_machine_learning.foundation_models.extensions.langchain import WatsonxLLM

flan_ul2_llm = WatsonxLLM(model=flan_ul2_model)
flan_t5_llm = WatsonxLLM(model=flan_t5_model)

Hint: To use the Chain interface from LangChain with watsonx.ai models, you must call the model.to_langchain() method.

It returns a WatsonxLLM wrapper compatible with the LangChain CustomLLM specification.

flan_ul2_model.to_langchain()

You can print all of the data set on the WatsonxLLM object using the dict() method.

flan_ul2_llm.dict()
{'model_id': 'google/flan-ul2', 'params': {'decoding_method': <DecodingMethods.SAMPLE: 'sample'>, 'max_new_tokens': 100, 'min_new_tokens': 1, 'temperature': 0.5, 'top_k': 50, 'top_p': 1}, 'project_id': '391894e5-057e-4af9-af39-bc19200709df', 'space_id': None, '_type': 'IBM watsonx.ai'}

Simple Sequential Chain experiment

The simplest type of sequential chain is called a SimpleSequentialChain, in which each step has a single input and a single output, and the output of one step serves as the input to the next.
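
Before wiring up the watsonx models, the output-to-input flow can be illustrated with a minimal, credential-free sketch. It uses LangChain's built-in FakeListLLM, which simply replays canned responses, so the strings below are placeholders rather than real model output:

from langchain import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.llms.fake import FakeListLLM

# FakeListLLM returns its canned responses in order; no credentials required.
fake_llm = FakeListLLM(responses=["What is 2 + 2?", "4"])

step_1 = LLMChain(llm=fake_llm, prompt=PromptTemplate(
    input_variables=["topic"], template="Ask a question about {topic}: "))
step_2 = LLMChain(llm=fake_llm, prompt=PromptTemplate(
    input_variables=["question"], template="Answer the question: {question}"))

# The single output of step_1 becomes the single input of step_2.
toy_chain = SimpleSequentialChain(chains=[step_1, step_2])
print(toy_chain.run("arithmetic"))  # prints: 4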

The experiment will consist of generating a random question about a given topic and then answering that question.

An object called PromptTemplate assists in generating prompts using a combination of user input, additional non-static data, and a fixed template string.
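
For example, formatting a template substitutes the input variables into the fixed template string (the topic value below is purely illustrative):

from langchain import PromptTemplate

demo_prompt = PromptTemplate(
    input_variables=["topic"],
    template="Generate a random question about {topic}: Question: "
)
# format() substitutes {topic} into the template string.
print(demo_prompt.format(topic="life"))
# -> Generate a random question about life: Question: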

In our case, we would like to create two PromptTemplate objects: one responsible for creating a random question and one for answering it.

from langchain import PromptTemplate

prompt_1 = PromptTemplate(
    input_variables=["topic"],
    template="Generate a random question about {topic}: Question: "
)
prompt_2 = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question: {question}",
)

We would like to add functionality around the language models using the LLMChain chain.

The prompt_to_flan_ul2 chain formats the prompt template whose task is to generate a random question, passes the formatted string to the LLM, and returns the LLM output.

from langchain.chains import LLMChain

prompt_to_flan_ul2 = LLMChain(llm=flan_ul2_model.to_langchain(), prompt=prompt_1)

The flan_to_t5 chain formats the prompt template whose task is to answer the question produced by the prompt_to_flan_ul2 chain, passes the formatted string to the LLM, and returns the LLM output.

flan_to_t5 = LLMChain(llm=flan_t5_model.to_langchain(), prompt=prompt_2)

This is the overall chain, in which we run the prompt_to_flan_ul2 and flan_to_t5 chains in sequence.

from langchain.chains import SimpleSequentialChain

qa = SimpleSequentialChain(chains=[prompt_to_flan_ul2, flan_to_t5], verbose=True)

Generate a random question and answer for a given topic.

qa.run('life')
> Entering new SimpleSequentialChain chain...
What is a great example of an organism that has a hard outer shell?
turtle

> Finished chain.
'turtle'
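
Because sampling decoding (DecodingMethods.SAMPLE) is used, repeated runs will generally yield a different question and answer. You can also pass any other topic string, for example:

qa.run('animals')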

Summary and next steps

You successfully completed this notebook!

You learned how to use a Simple Sequential Chain with the custom LLM WatsonxLLM.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors:

Mateusz Szewczyk, Software Engineer at Watson Machine Learning.

Copyright © 2023-2025 IBM. This notebook and its source code are released under the terms of the MIT License.