
Use watsonx and IBM granite-8b-code-instruct to generate code based on instruction

Disclaimers

  • Use only Projects and Spaces that are available in watsonx context.

Notebook content

This notebook contains the steps and code to demonstrate support for code generation in watsonx.ai. It introduces commands for defining a prompt and testing the model.

Some familiarity with Python is helpful. This notebook uses Python 3.12.

Learning goal

The goal of this notebook is to demonstrate how to generate code with the ibm/granite-8b-code-instruct model based on instructions provided by the user.

Contents

This notebook contains the following parts:

  • Set up the environment

  • Foundation models in watsonx.ai

  • Generate code based on instruction

  • Summary and next steps

Set up the environment

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask them for your account credentials

Install dependencies

Note: ibm-watsonx-ai documentation can be found here.

%pip install -U ibm-watsonx-ai | tail -n 1
Successfully installed anyio-4.9.0 certifi-2025.4.26 charset-normalizer-3.4.2 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 ibm-cos-sdk-2.14.0 ibm-cos-sdk-core-2.14.0 ibm-cos-sdk-s3transfer-2.14.0 ibm-watsonx-ai-1.3.13 idna-3.10 jmespath-1.0.1 lomond-0.3.3 numpy-2.2.5 pandas-2.2.3 pytz-2025.2 requests-2.32.2 sniffio-1.3.1 tabulate-0.9.0 typing_extensions-4.13.2 tzdata-2025.2 urllib3-2.4.0
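As an optional sanity check, you can confirm which version of the SDK was installed. This is a minimal sketch using only the Python standard library:

from importlib.metadata import version

# Print the installed ibm-watsonx-ai version
print(version("ibm-watsonx-ai"))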

Define credentials

Authenticate with the watsonx.ai Runtime service on IBM Cloud Pak for Data. You need to provide the admin's username and the platform URL.

username = "PASTE YOUR USERNAME HERE"
url = "PASTE THE PLATFORM URL HERE"

Use the admin's api_key to authenticate watsonx.ai Runtime services:

import getpass

from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=getpass.getpass("Enter your watsonx.ai API key and hit enter: "),
    url=url,
    instance_id="openshift",
    version="5.2",
)

Alternatively, you can use the admin's password:

import getpass

from ibm_watsonx_ai import Credentials

if "credentials" not in locals() or not credentials.api_key:
    credentials = Credentials(
        username=username,
        password=getpass.getpass("Enter your watsonx.ai password and hit enter: "),
        url=url,
        instance_id="openshift",
        version="5.2",
    )

Working with projects

First of all, you need to create a project that will be used for your work. If you do not have a project created already, follow the steps below:

  • Open the IBM Cloud Pak main page

  • Click All projects

  • Create an empty project

  • Copy the project_id from the URL and paste it below

Action: Assign project ID below

import os

try:
    project_id = os.environ["PROJECT_ID"]
except KeyError:
    project_id = input("Please enter your project_id (hit enter): ")
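Optionally, sanity-check the value before continuing. This sketch assumes project IDs are standard UUIDs, which is how they appear in the project URL:

import re

# Assumption: project IDs are UUIDs, e.g. "12ac4cf1-252f-424b-b52d-5cdd9814987f"
UUID_PATTERN = r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
if not re.fullmatch(UUID_PATTERN, project_id, re.IGNORECASE):
    print("Warning: project_id does not look like a UUID - double-check it.")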

Create APIClient instance

from ibm_watsonx_ai import APIClient

client = APIClient(credentials, project_id)

Foundation models in watsonx.ai

List available models

for model in client.foundation_models.TextModels:
    print(f"- {model}")
- ibm/granite-8b-code-instruct
- sdaia/allam-1-13b-instruct

You need to specify the model_id that will be used for inferencing:

model_id = client.foundation_models.TextModels.GRANITE_8B_CODE_INSTRUCT
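If that enum member is not available in your SDK version, the model can also be referenced by its plain string id, as shown in the listing above:

# Equivalent to the enum member above
model_id = "ibm/granite-8b-code-instruct"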

Defining the model parameters

You might need to adjust model parameters for different models or tasks; to do so, please refer to the documentation.

from ibm_watsonx_ai.metanames import GenTextParamsMetaNames

parameters = {
    GenTextParamsMetaNames.DECODING_METHOD: "greedy",
    GenTextParamsMetaNames.MAX_NEW_TOKENS: 100,
    GenTextParamsMetaNames.STOP_SEQUENCES: ["<end of code>"],
}
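To inspect the full set of supported generation parameters, the metanames class provides a show() helper (available in recent ibm-watsonx-ai releases; treat the exact output format as version-dependent):

# Print the supported generation parameters, their types, and example values
GenTextParamsMetaNames().show()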

Initialize the model

Initialize the model inference with the previously set parameters.

from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(
    model_id=model_id,
    params=parameters,
    credentials=credentials,
    project_id=project_id,
)

Get model details

model.get_details()
{'model_id': 'ibm/granite-8b-code-instruct', 'label': 'granite-8b-code-instruct', 'provider': 'IBM', 'source': 'IBM', 'functions': [{'id': 'fine_tune_trainable'}, {'id': 'text_generation'}], 'short_description': 'The Granite model series is a family of IBM-trained, dense decoder-only models, which are particularly well-suited for generative tasks.', 'long_description': 'Granite models are designed to be used for a wide range of generative and non-generative tasks with appropriate prompt engineering. They employ a GPT-style decoder-only architecture, with additional innovations from IBM Research and the open community.', 'terms_url': 'https://www.ibm.com/support/customer/csol/terms/?id=i126-6883', 'input_tier': 'class_1', 'output_tier': 'class_1', 'number_params': '8b', 'min_shot_size': 1, 'task_ids': ['question_answering', 'summarization', 'classification', 'generation', 'extraction', 'code-generation', 'code-explanation', 'code-fixing'], 'tasks': [{'id': 'question_answering'}, {'id': 'summarization'}, {'id': 'classification'}, {'id': 'generation'}, {'id': 'extraction'}, {'id': 'code-generation', 'benchmarks': [{'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. Metric represents the pass@1 score.", 'language': 'C++', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.482}], 'tags': ['Code generation']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. Metric represents the pass@1 score.", 'language': 'Go', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.433}], 'tags': ['Code generation']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. Metric represents the pass@1 score.", 'language': 'Java', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.585}], 'tags': ['Code generation']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. Metric represents the pass@1 score.", 'language': 'JavaScript', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.524}], 'tags': ['Code generation']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. Metric represents the pass@1 score.", 'language': 'Python', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.579}], 'tags': ['Code generation']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. 
Metric represents the pass@1 score.", 'language': 'Rust', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.372}], 'tags': ['Code generation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. Metric represents the pass@1 score.", 'language': 'C++', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.482}], 'tags': ['Code generation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. Metric represents the pass@1 score.", 'language': 'Go', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.402}], 'tags': ['Code generation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. Metric represents the pass@1 score.", 'language': 'Java', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.573}], 'tags': ['Code generation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. Metric represents the pass@1 score.", 'language': 'JavaScript', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.549}], 'tags': ['Code generation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. Metric represents the pass@1 score.", 'language': 'Python', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.585}], 'tags': ['Code generation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to generate code in Python, C++, Go, Java, JavaScript, and Rust by using 164 code problems from the 'HumanEval' dataset that were translated from Python by humans. Metric represents the pass@1 score.", 'language': 'Rust', 'dataset': {'name': 'HumanEvalSynthesize'}, 'metrics': [{'name': 'pass@1', 'value': 0.348}], 'tags': ['Code generation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to solve entry-level Python problems using a dataset with 974 crowd-sourced problems and solutions. Metric represents the pass@1 score.", 'language': 'Python', 'dataset': {'name': 'MBPP'}, 'metrics': [{'name': 'pass@1', 'value': 0.502}], 'tags': ['Code generation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': 'Expands on the MBPP dataset with more Python programming problems and more comprehensive test cases. 
Metric represents the pass@1 score.', 'language': 'Python', 'dataset': {'name': 'MBPP+'}, 'metrics': [{'name': 'pass@1', 'value': 0.571}], 'tags': ['Code generation']}]}, {'id': 'code-explanation', 'benchmarks': [{'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. Metric represents the pass@1 score.", 'language': 'C++', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.439}], 'tags': ['Code explanation']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. Metric represents the pass@1 score.", 'language': 'Go', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.366}], 'tags': ['Code explanation']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. Metric represents the pass@1 score.", 'language': 'Java', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.524}], 'tags': ['Code explanation']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. Metric represents the pass@1 score.", 'language': 'JavaScript', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.427}], 'tags': ['Code explanation']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. Metric represents the pass@1 score.", 'language': 'Python', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.53}], 'tags': ['Code explanation']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. Metric represents the pass@1 score.", 'language': 'Rust', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.165}], 'tags': ['Code explanation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. 
Metric represents the pass@1 score.", 'language': 'C++', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.396}], 'tags': ['Code explanation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. Metric represents the pass@1 score.", 'language': 'Go', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.396}], 'tags': ['Code explanation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. Metric represents the pass@1 score.", 'language': 'Java', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.476}], 'tags': ['Code explanation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. Metric represents the pass@1 score.", 'language': 'JavaScript', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.39}], 'tags': ['Code explanation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. Metric represents the pass@1 score.", 'language': 'Python', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.506}], 'tags': ['Code explanation']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to explain C++, Go, Java, JavaScript, Python and Rust code by using the 'HumanEval' dataset to first ask the model to explain the solution to a programming problem and then solve the problem given only the previously generated explanation. Metric represents the pass@1 score.", 'language': 'Rust', 'dataset': {'name': 'HumanEvalExplain'}, 'metrics': [{'name': 'pass@1', 'value': 0.22}], 'tags': ['Code explanation']}]}, {'id': 'code-fixing', 'benchmarks': [{'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. Metric represents the pass@1 score.", 'language': 'C++', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.39}], 'tags': ['Code fixing']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. 
Metric represents the pass@1 score.", 'language': 'Go', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.415}], 'tags': ['Code fixing']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. Metric represents the pass@1 score.", 'language': 'Java', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.482}], 'tags': ['Code fixing']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. Metric represents the pass@1 score.", 'language': 'JavaScript', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.409}], 'tags': ['Code fixing']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. Metric represents the pass@1 score.", 'language': 'Python', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.396}], 'tags': ['Code fixing']}, {'type': 'academic', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. Metric represents the pass@1 score.", 'language': 'Rust', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.329}], 'tags': ['Code fixing']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. Metric represents the pass@1 score.", 'language': 'C++', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.39}], 'tags': ['Code fixing']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. Metric represents the pass@1 score.", 'language': 'Go', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.421}], 'tags': ['Code fixing']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. Metric represents the pass@1 score.", 'language': 'Java', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.482}], 'tags': ['Code fixing']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. 
Metric represents the pass@1 score.", 'language': 'JavaScript', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.427}], 'tags': ['Code fixing']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. Metric represents the pass@1 score.", 'language': 'Python', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.402}], 'tags': ['Code fixing']}, {'type': 'watsonx.ai', 'name': 'Code', 'description': "Evaluates a model's ability to fix coding errors in the C++, Go, Java, JavaScript, Python and Rust using 'HumanEval' dataset with introduced errors and unit tests to help identify problems. Metric represents the pass@1 score.", 'language': 'Rust', 'dataset': {'name': 'HumanEvalFix'}, 'metrics': [{'name': 'pass@1', 'value': 0.28}], 'tags': ['Code fixing']}]}], 'model_limits': {'max_sequence_length': 128000, 'max_output_tokens': 4096}, 'limits': {'cpd': {'max_output_tokens': 4096}}, 'lifecycle': [{'id': 'available', 'since_version': '9.1.0', 'current_state': True}], 'fine_tuning_parameters': {'num_epochs': {'default': 5, 'min': 1, 'max': 50}, 'verbalizer': {'default': '### Input: {{input}} \n\n### Response: {{output}}'}, 'batch_size': {'default': 5, 'min': 1, 'max': 16}, 'accumulate_steps': {'default': 1, 'min': 1, 'max': 128}, 'learning_rate': {'default': 3e-05, 'min': 1e-05, 'max': 0.5}, 'max_seq_length': {'default': 1024, 'min': 1, 'max': 8192}, 'tokenizer': {'default': 'ibm/granite-8b-code-instruct'}, 'response_template': {'default': '\n### Response:'}, 'num_gpus': {'default': 4}, 'gradient_checkpointing': {'default': True}}, 'deployment_parameters': [{'name': 'dtype', 'display_name': 'Data Type', 'default': 'float16', 'type': 'string', 'options': ['float16']}, {'name': 'max_batch_size', 'display_name': 'Max Batch Size', 'default': 8, 'type': 'number'}, {'name': 'max_concurrent_requests', 'display_name': 'Max Concurrent Requests', 'default': 1024, 'type': 'number'}, {'name': 'max_sequence_length', 'display_name': 'Max Sequence Length', 'default': 4096, 'type': 'number'}, {'name': 'max_new_tokens', 'display_name': 'Max New Tokens', 'default': 4095, 'type': 'number'}, {'name': 'enable_lora', 'display_name': 'Enable Lora', 'default': False, 'type': 'boolean'}, {'name': 'max_gpu_loras', 'display_name': 'Max GPU Loras', 'default': 8, 'type': 'number', 'required_if': [{'field': 'enable_lora', 'value': True}]}, {'name': 'max_cpu_loras', 'display_name': 'Max CPU Loras', 'default': 10, 'type': 'number', 'required_if': [{'field': 'enable_lora', 'value': True}]}, {'name': 'max_lora_rank', 'display_name': 'Max Lora Rank', 'default': 32, 'type': 'number', 'options': [8, 16, 32, 64, 128, 256], 'required_if': [{'field': 'enable_lora', 'value': True}]}]}

Generate code based on instruction

Define the instruction for the model with at least one example.

instruction = """Using the directions below, generate Python code for the given task.

Input:
# Write a Python function that prints 'Hello World!' string 'n' times.

Output:
def print_n_times(n):
    for i in range(n):
        print("Hello World!")

<end of code>
"""

Prepare a question for the model.

question = """Input:
# Write a Python function, which generates sequence of prime numbers.
# The function 'primes' will take the argument 'n', an int. It will return a list which contains all primes less than 'n'.
"""

Generate the code using the ibm/granite-8b-code-instruct model.

Run inference on the model to generate code according to the provided instruction.

result = model.generate_text(f"{instruction} {question}")
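If you prefer to see tokens as they arrive rather than waiting for the full completion, recent SDK versions also expose a streaming variant; a minimal sketch:

# Stream generated tokens as they are produced (assumes generate_text_stream
# is available in your ibm-watsonx-ai version)
for chunk in model.generate_text_stream(f"{instruction} {question}"):
    print(chunk, end="")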

Format the text to extract the function itself.

code_as_text = result.split("Output:")[1].split("<end of code>")[0]
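Note that the split above raises an IndexError if the model omits either marker. A slightly more defensive sketch (extract_code is a hypothetical helper written for this notebook, not part of the SDK):

def extract_code(text, start_marker="Output:", end_marker="<end of code>"):
    # Fall back gracefully if a marker is missing instead of raising
    after = text.split(start_marker, 1)[-1]
    return after.split(end_marker, 1)[0].strip()

code_as_text = extract_code(result)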

Generated code testing

The resulting code looks as follows.

print(code_as_text)
def primes(n):
    primes = []
    for num in range(2, n):
        is_prime = True
        for i in range(2, num):
            if num % i == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    return primes

Execute the generated code to define it as a function.

Note: Before executing this line, make sure the model's output visible above doesn't contain any malicious instructions.

exec(code_as_text)
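If you would rather not run model output in the notebook's global namespace, a minimal alternative sketch confines it to a dedicated dictionary:

# Run the generated code in an isolated namespace, then pull out the function.
# Assumption: the generated function is named 'primes', as requested in the prompt.
namespace = {}
exec(code_as_text, namespace)
primes = namespace["primes"]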

Define the number 'n' below which the primes() function should find prime numbers.

n = 25

Test and run the generated function.

primes(n)
[2, 3, 5, 7, 11, 13, 17, 19, 23]
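Beyond eyeballing the output, you can verify the generated function against a simple trial-division baseline. This check is a sketch written for this notebook, not part of the original sample:

def primes_reference(n):
    # Known-correct baseline: trial division up to sqrt(p)
    return [p for p in range(2, n) if all(p % d != 0 for d in range(2, int(p**0.5) + 1))]

assert primes(n) == primes_reference(n), "Generated primes() disagrees with the reference"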

Summary and next steps

You successfully completed this notebook!

You learned how to generate code based on an instruction with ibm/granite-8b-code-instruct in watsonx.ai.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors

Mateusz Szewczyk, Software Engineer at watsonx.ai.

Copyright © 2023-2025 IBM. This notebook and its source code are released under the terms of the MIT License.