GitHub Repository: keras-team/keras-io
Path: blob/master/guides/keras_hub/function_gemma_with_keras.py
"""
2
Title: Native Function Calling with FunctionGemma in KerasHub
3
Author: [Laxmareddy Patlolla](https://github.com/laxmareddyp)
4
Date created: 2026/02/24
5
Last modified: 2026/03/06
6
Description: A guide to using the function calling feature in KerasHub with FunctionGemma.
7
Accelerator: GPU
8
"""
9
10
"""
11
## Introduction
12
13
This example demonstrates how to build a multi-tool AI Assistant using FunctionGemma native
14
function calling capabilities in KerasHub. The Assistant can execute multiple tools
15
including web search, stock prices, world time, and system stats.
16
17
## Overview
18
19
Function calling enables language models to interact with
20
external APIs and tools by generating structured function calls. This implementation
21
uses FunctionGemma native function calling format with proper tool declarations and
22
function call parsing.
23
24
## Architecture
25
26
The Assistant follows this flow:
27
28
- **Tool Definition**: Tools are defined in Function Calling Format JSON format
29
- **Prompt Formatting**: Tools are injected into the prompt using FunctionGemma template
30
- **Model Generation**: The model generates a structured function call
31
- **Parsing**: Extract function name and arguments from the response
32
- **Execution**: Execute the corresponding Python function
33
- **Response**: Return formatted results to the user
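
The flow above can be sketched end to end with stub components (the names here are
illustrative only; the real model, parser, and tools are built step by step in this guide):

```python
# Minimal, self-contained sketch of the agent loop, using stub components.
def run_turn(generate, parse, tools, format_prompt, user_msg):
    prompt = format_prompt(user_msg)  # tool declarations + prompt formatting
    raw = generate(prompt)  # model generation
    call = parse(raw)  # parsing
    if call and call["name"] in tools:
        return tools[call["name"]](**call["arguments"])  # execution
    return raw  # plain-text response, no tool call


# Exercise the loop with stubs standing in for the model, parser, and a tool:
result = run_turn(
    generate=lambda p: "call:echo{text:hi}",
    parse=lambda raw: {"name": "echo", "arguments": {"text": "hi"}},
    tools={"echo": lambda text: f"echo: {text}"},
    format_prompt=lambda msg: msg,
    user_msg="say hi",
)
print(result)  # echo: hi
```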

## Key Components

- `TOOL_DEFINITIONS`: Tool schemas in the function calling format
- `format_prompt_with_tools()`: Manual chat template implementation
- `parse_function_call()`: Extracts function calls from model output
- `TOOLS`: Registry mapping tool names to Python functions

## Setup

Before starting this tutorial, complete the following steps:

```
pip install -U keras-hub
pip install yfinance ddgs psutil pytz
```

Get access to `FunctionGemma` by logging into your [Kaggle](https://www.kaggle.com/models/keras/function-gemma) account and accepting the license for a FunctionGemma model.

Generate a `Kaggle Access Token` by visiting the Kaggle [settings](https://www.kaggle.com/settings) page and add it to your Colab secrets:

`KAGGLE_USERNAME = "your_kaggle_username"`

`KAGGLE_KEY = "your_kaggle_key"`

This notebook will run on either CPU or GPU.

## Install Python packages
"""

import os

# The Keras backend must be set before keras_hub (and Keras) is imported.
os.environ["KERAS_BACKEND"] = "jax"

import datetime
import re
import warnings

import psutil
import pytz
import yfinance as yf
from ddgs import DDGS

import keras_hub

warnings.filterwarnings("ignore")

"""
77
# Tool Implementations
78
79
We'll start by defining the actual Python functions that our Assistant will be able
80
to call. Each of these functions represents a "tool" that extends the model's
81
capabilities beyond just text generation.
82
83
## Web Search Tool
84
85
This function allows the model to search the web for information using DuckDuckGo.
86
In a real application, you might use more sophisticated search APIs or custom
87
search implementations tailored to your domain.
88
"""
89
90
91
def web_search(query, max_results=5):
    """Search the web using DuckDuckGo.

    Args:
        query: Search query string
        max_results: Maximum number of results to return (default: 5)

    Returns:
        Dictionary with "results" key containing a list of search results,
        each with "title", "description", and "link" keys.
        Returns {"error": message} on failure.
    """
    results = []
    try:
        with DDGS() as ddgs:
            for r in ddgs.text(query, max_results=max_results):
                results.append(
                    {
                        "title": r.get("title"),
                        "description": r.get("body"),
                        "link": r.get("href"),
                    }
                )
    except Exception as e:
        return {"error": f"Search failed: {str(e)}"}
    return {"results": results}


"""
120
## Stock Price Tool
121
122
This function retrieves real-time stock prices from Yahoo Finance. The model
123
can use this to answer questions about current market prices for publicly
124
traded companies.
125
"""
126
127
128
def get_stock_price(symbol):
    """Get real-time stock price from Yahoo Finance.

    Args:
        symbol: Stock ticker symbol (e.g., "AAPL", "GOOGL")

    Returns:
        Dictionary with "symbol", "price", "currency", and "date" keys.
        Returns {"error": message} on failure.
    """
    try:
        t = yf.Ticker(symbol)
        info = t.fast_info
        if info.last_price is None:
            raise ValueError("Price not available")
        current_date = datetime.datetime.now().strftime("%Y-%m-%d")
        return {
            "symbol": symbol,
            "price": round(info.last_price, 2),
            "currency": info.currency,
            "date": current_date,
        }
    except Exception as e:
        return {"error": f"Failed to get stock price for '{symbol}'. Reason: {e}"}


"""
155
## System Statistics Tool
156
157
This function provides information about the current system's CPU and memory
158
usage. It demonstrates how the model can interact with the local environment.
159
"""
160
161
162
def get_system_stats(**kwargs):
    """Get CPU and memory usage.

    Args:
        **kwargs: Ignores any additional arguments the model might hallucinate.
    """
    mem = psutil.virtual_memory()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_percent": mem.percent,
        "used_gb": round(mem.used / 1e9, 2),
        "total_gb": round(mem.total / 1e9, 2),
    }


"""
179
## Time and Timezone Helpers
180
181
These functions work together to provide accurate time information for locations
182
worldwide. We maintain a mapping of common locations to their timezone identifiers,
183
then use Python's pytz library to calculate the current time.
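
The lookup itself is a plain dictionary get on a normalized key; here is a standalone
sketch of the pattern (with a truncated, illustrative map):

```python
# Sketch of the location -> timezone lookup pattern (map truncated for illustration).
TZ_MAP = {"tokyo": "Asia/Tokyo", "india": "Asia/Kolkata", "utc": "UTC"}


def lookup(location):
    # Normalize case and surrounding whitespace before the lookup.
    return TZ_MAP.get(location.lower().strip())


print(lookup("  Tokyo "))  # Asia/Tokyo
print(lookup("atlantis"))  # None -> the caller falls back to UTC
```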
"""


TIMEZONE_MAP = {
    # Countries
    "india": "Asia/Kolkata",
    "usa": "America/New_York",
    "uk": "Europe/London",
    "japan": "Asia/Tokyo",
    "china": "Asia/Shanghai",
    "australia": "Australia/Sydney",
    "france": "Europe/Paris",
    "germany": "Europe/Berlin",
    "russia": "Europe/Moscow",
    "brazil": "America/Sao_Paulo",
    "canada": "America/Toronto",
    "mexico": "America/Mexico_City",
    "spain": "Europe/Madrid",
    "italy": "Europe/Rome",
    # US States & Major Cities
    "california": "America/Los_Angeles",
    "new york": "America/New_York",
    "texas": "America/Chicago",
    "florida": "America/New_York",
    "washington": "America/Los_Angeles",
    "oregon": "America/Los_Angeles",
    "nevada": "America/Los_Angeles",
    "arizona": "America/Phoenix",
    "colorado": "America/Denver",
    "utah": "America/Denver",
    "illinois": "America/Chicago",
    "michigan": "America/Detroit",
    "massachusetts": "America/New_York",
    "pennsylvania": "America/New_York",
    # Major World Cities
    "los angeles": "America/Los_Angeles",
    "san francisco": "America/Los_Angeles",
    "seattle": "America/Los_Angeles",
    "chicago": "America/Chicago",
    "boston": "America/New_York",
    "miami": "America/New_York",
    "london": "Europe/London",
    "paris": "Europe/Paris",
    "tokyo": "Asia/Tokyo",
    "beijing": "Asia/Shanghai",
    "shanghai": "Asia/Shanghai",
    "dubai": "Asia/Dubai",
    "singapore": "Asia/Singapore",
    "hong kong": "Asia/Hong_Kong",
    "seoul": "Asia/Seoul",
    "moscow": "Europe/Moscow",
    "sydney": "Australia/Sydney",
    "mumbai": "Asia/Kolkata",
    "delhi": "Asia/Kolkata",
    "berlin": "Europe/Berlin",
    "rome": "Europe/Rome",
    "madrid": "Europe/Madrid",
    "toronto": "America/Toronto",
    "vancouver": "America/Vancouver",
    "montreal": "America/Toronto",
    # Special
    "utc": "UTC",
    "gmt": "GMT",
}


def get_timezone_for_location(location):
    """Map a location name to a timezone identifier."""
    loc = location.lower().strip()
    return TIMEZONE_MAP.get(loc)


"""
258
## World Time Tool
259
260
This function gets the current time for a specific location. It uses the
261
helper function `get_timezone_for_location` to determine the correct timezone
262
and then returns the formatted time, date, and day.
263
"""
264
265
266
def get_current_time(time_location=None):
    """Get the current time for a location."""
    if not time_location:
        time_location = "UTC"

    try:
        timezone = get_timezone_for_location(time_location)
        note = ""

        if not timezone:
            # Default to UTC if the location is not found
            timezone = "UTC"
            note = (
                f"Location '{time_location}' not found in database. Showing UTC time."
            )

        tz = pytz.timezone(timezone)
        now = datetime.datetime.now(tz)
        return {
            "location": time_location if not note else "UTC",
            "time": now.strftime("%H:%M:%S"),
            "date": now.strftime("%Y-%m-%d"),
            "day": now.strftime("%A"),
            "timezone": timezone,
            "note": note,
        }
    except Exception as e:
        return {"error": str(e)}


"""
298
## Tool Definitions (Function Calling Format)
299
300
Now we need to tell the model about these tools. We do this by defining tool
301
schemas in a format that the model can understand. Each tool definition includes:
302
303
- Name: The function name
304
- Description: What the function does (helps the model decide when to use it)
305
- Parameters: The arguments the function accepts
306
307
This format follows the standard function calling convention used by FunctionGemma.
308
"""
309
310
TOOL_DEFINITIONS = [
    {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search the web for information.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "The search query"}
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Get the current stock price for a given ticker symbol",
            "parameters": {
                "type": "object",
                "properties": {
                    "symbol": {
                        "type": "string",
                        "description": "Stock ticker symbol (e.g., AAPL, GOOGL, TSLA)",
                    }
                },
                "required": ["symbol"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current clock time for a specific city or country.",
            "parameters": {
                "type": "object",
                "properties": {
                    "time_location": {
                        "type": "string",
                        "description": "The city or country for which to get the time, e.g., 'Tokyo' or 'Japan'",
                    }
                },
                "required": ["time_location"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_system_stats",
            "description": "Get current CPU and memory usage statistics",
            "parameters": {"type": "object", "properties": {}, "required": []},
        },
    },
]

"""
370
## Tool Registry
371
372
This dictionary maps tool names (as strings) to their actual Python function
373
implementations. When the model generates a function call like `get_stock_price`,
374
we use this registry to look up and execute the corresponding Python function.
375
376
This pattern allows for dynamic tool execution - we can easily add or remove tools
377
by updating both `TOOL_DEFINITIONS` and this registry.
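
Registry-based dispatch looks like the following standalone sketch (the mini-tool here
is hypothetical, not part of the Assistant):

```python
# Name-to-function dispatch via a registry (illustrative stand-in for TOOLS).
def shout(text):
    return text.upper()


REGISTRY = {"shout": shout}

# A parsed function call yields a tool name and an arguments dict:
name, args = "shout", {"text": "hello"}
if name in REGISTRY:
    print(REGISTRY[name](**args))  # HELLO
```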
"""

# Tool registry
TOOLS = {
    "web_search": web_search,
    "get_stock_price": get_stock_price,
    "get_current_time": get_current_time,
    "get_system_stats": get_system_stats,
}

"""
389
## Helper Functions For Function Calling
390
391
The following functions handle the mechanics of function calling:
392
393
- Parsing function calls from model output
394
- Formatting tool declarations for the prompt
395
- Constructing the full conversation prompt
396
- Formatting tool results for display
397
398
## Function Call Parser
399
400
This function is responsible for extracting the structured function call from
401
the model's text output. It looks for the special `<start_function_call>` tags
402
and parses the function name and arguments into a Python dictionary.
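
For example, the top-level pattern (mirrored standalone here) splits a call into its
name and raw argument string:

```python
import re

# Same pattern as FUNCTION_CALL_PATTERN below, applied to a sample model output.
pattern = re.compile(r"<start_function_call>call:(\w+)\{([^}]*)\}<end_function_call>")
sample = '<start_function_call>call:get_stock_price{symbol:"AAPL"}<end_function_call>'
match = pattern.search(sample)
print(match.group(1))  # get_stock_price
print(match.group(2))  # symbol:"AAPL"
```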
"""


FUNCTION_CALL_PATTERN = re.compile(
    r"<start_function_call>call:(\w+)\{([^}]*)\}<end_function_call>"
)
ARGUMENT_PATTERN = re.compile(
    r'(\w+):\s*("(?:[^"\\]|\\.)*"|\'(?:[^\'\\]|\\.)*\'|[^,]+)'
)


def parse_function_call(output):
    """Parse a function call from FunctionGemma model output.

    Extracts the function name and arguments from the FunctionGemma function calling
    format: <start_function_call>call:tool_name{arg1:value1}<end_function_call>

    Args:
        output: Raw model output string

    Returns:
        Dictionary with "name" (function name) and "arguments" (dict of args),
        or None if no function call is found.
    """
    match = FUNCTION_CALL_PATTERN.search(output)

    if not match:
        return None

    tool_name = match.group(1)
    args_str = match.group(2).strip()

    # Parse arguments
    args = {}
    if args_str:
        # Matches key:"value" or key:'value' or key:value
        arg_pairs = ARGUMENT_PATTERN.findall(args_str)
        for key, value in arg_pairs:
            value = value.strip()

            # Remove <escape> tags
            value = value.replace("<escape>", "").replace("</escape>", "")

            # Remove quotes if present
            if value.startswith('"') and value.endswith('"'):
                value = value[1:-1]
            elif value.startswith("'") and value.endswith("'"):
                value = value[1:-1]

            args[key] = value

    return {"name": tool_name, "arguments": args}


"""
458
## Tool Declaration Formatter
459
460
This function converts our Python tool definitions into the specific format
461
expected by `FunctionGemma`. It handles the mapping of types and descriptions to the
462
`declaration:..` string format used in the system prompt.
463
"""
464
465
466
def format_tool_declaration(tool):
    """Format a tool declaration for FunctionGemma function calling."""
    func = tool["function"]
    params = func.get("parameters", {})
    props = params.get("properties", {})
    required = params.get("required", [])

    # Build parameter string
    param_parts = []
    for name, details in props.items():
        param_str = (
            f"{name}:{{description:<escape>{details.get('description', '')}<escape>,"
            f"type:<escape>{details['type'].upper()}<escape>}}"
        )
        param_parts.append(param_str)

    properties_str = ",".join(param_parts) if param_parts else ""
    required_str = ",".join(f"<escape>{r}<escape>" for r in required)

    # Build full declaration
    declaration = f"declaration:{func['name']}"
    declaration += f"{{description:<escape>{func['description']}<escape>"

    if properties_str or required_str:
        declaration += ",parameters:{"
        if properties_str:
            declaration += f"properties:{{{properties_str}}}"
        if required_str:
            if properties_str:
                declaration += ","
            declaration += f"required:[{required_str}]"
        declaration += ",type:<escape>OBJECT<escape>}"

    declaration += "}"

    return declaration


"""
503
## Prompt Constructor
504
505
This is the core function for building the prompt. It combines:
506
507
- The system prompt with tool declarations
508
- Few-shot examples to guide the model
509
- The conversation history (user and assistant messages)
510
- The final generation prompt
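
The resulting prompt follows the Gemma turn structure; a minimal sketch of its skeleton
(single user turn, tool declarations omitted):

```python
# Skeleton of a Gemma-style prompt: one user turn plus a generation cue.
prompt = "<bos>"
prompt += "<start_of_turn>user\nStock price of Apple<end_of_turn>\n"
prompt += "<start_of_turn>model\n"  # the model continues from here
print(prompt)
```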
"""


def format_prompt_with_tools(messages, tools):
    """Manually construct a FunctionGemma function calling prompt.

    This function replicates the behavior of Hugging Face's `apply_chat_template()`
    for FunctionGemma's native function calling format, since KerasHub doesn't yet
    provide this functionality.

    Args:
        messages: List of message dicts with "role" and "content" keys
        tools: List of tool definition dicts in OpenAI format

    Returns:
        Formatted prompt string with tool declarations and conversation history
    """
    prompt = "<bos>"

    # Add tool declarations
    if tools:
        prompt += "<start_of_turn>developer\n"
        for tool in tools:
            prompt += "<start_function_declaration>"
            prompt += format_tool_declaration(tool)
            prompt += "<end_function_declaration>"

        # Add few-shot examples to help the model distinguish between tools
        prompt += "\nHere are some examples of how to use the tools correctly:\n"
        prompt += "User: Who is Isaac Newton?\n"
        prompt += "Model: <start_function_call>call:web_search{query:Isaac Newton}<end_function_call>\n"
        prompt += "User: What is the time in USA?\n"
        prompt += "Model: <start_function_call>call:get_current_time{time_location:USA}<end_function_call>\n"
        prompt += "User: Stock price of Apple\n"
        prompt += "Model: <start_function_call>call:get_stock_price{symbol:AAPL}<end_function_call>\n"
        prompt += "User: Who is the PM of Japan?\n"
        prompt += "Model: <start_function_call>call:web_search{query:PM of Japan}<end_function_call>\n"
        prompt += "User: Check the weather in London\n"
        prompt += "Model: <start_function_call>call:web_search{query:weather in London}<end_function_call>\n"
        prompt += "User: Latest news about AI\n"
        prompt += "Model: <start_function_call>call:web_search{query:Latest news about AI}<end_function_call>\n"
        prompt += "User: System memory usage\n"
        prompt += (
            "Model: <start_function_call>call:get_system_stats{}<end_function_call>\n"
        )

        prompt += "<end_of_turn>\n"

    # Add conversation messages
    for message in messages:
        role = message["role"]
        if role == "assistant":
            role = "model"

        content = message.get("content", "")

        # Regular user/assistant messages
        prompt += f"<start_of_turn>{role}\n"
        prompt += content
        prompt += "<end_of_turn>\n"

    # Add generation prompt
    prompt += "<start_of_turn>model\n"

    return prompt


"""
579
## Tool Response Formatter
580
581
After a tool is executed, this function formats the result back into a string
582
that can be displayed to the user or fed back into the model. It handles
583
specific formatting for different tools (like adding currency to stock prices).
584
"""
585
586
587
def format_tool_response(tool_name, result):
    """Format a tool result for the conversation."""
    if "error" in result:
        return f"Error: {result['error']}"

    # Handle web search results (from the web_search tool or a fallback from other tools)
    if "results" in result:
        results = result.get("results", [])
        if results:
            output = f"Found {len(results)} results:\n"
            for i, r in enumerate(results[:3], 1):
                output += f"\n{i}. {r['title']}\n   {r['description'][:150]}...\n   {r['link']}"
            return output
        return "No results found"

    if tool_name == "get_stock_price":
        return f"Stock {result['symbol']}: ${result['price']} {result['currency']} (as of {result['date']})"

    elif tool_name == "get_current_time":
        response = f"Time in {result['location']}: {result['time']} ({result['day']}, {result['date']})\nTimezone: {result['timezone']}"
        if result.get("note"):
            response += f"\nNote: {result['note']}"
        return response

    elif tool_name == "get_system_stats":
        return f"CPU: {result['cpu_percent']}% | Memory: {result['mem_percent']}% ({result['used_gb']}/{result['total_gb']}GB)"

    return str(result)


"""
618
# Interactive Chat Loop
619
620
This section implements the main conversation loop. The agent:
621
622
- Formats the prompt with tool declarations
623
- Generates a response (which may include a function call)
624
- Executes the function if called
625
- Returns the result to the user
626
627
After each successful tool execution, we clear the conversation history to
628
prevent token overflow and keep the model focused.
629
630
## Model Loading
631
632
This function loads the FunctionGemma, a specialized version of our Gemma 3 270M model tuned for function calling using KerasHub's `from_preset` method.
633
You can load it from Kaggle using the preset name `(e.g. "function_gemma_instruct_270m")`
634
or from a local path if you've downloaded the model to your machine.
635
636
Update the path to match your model location. The model is loaded once at
637
startup and then used throughout the chat session.
638
"""
639
640
641
def load_model():
    """Load the FunctionGemma 270M model."""
    print("Loading FunctionGemma 270M model...")
    model = keras_hub.models.Gemma3CausalLM.from_preset("function_gemma_instruct_270m")
    print("Model loaded!\n")
    return model


"""
650
This is the main function that runs the interactive conversation with the user.
651
652
It handles the complete cycle of:
653
654
- Displaying capabilities and waiting for user input
655
- Formatting the prompt with tool declarations and conversation history
656
- Generating a response from the model
657
- Parsing any function calls from the response
658
- Executing the requested tools
659
- Formatting and displaying results to the user
660
- Managing conversation history to prevent token overflow
661
662
The loop continues until the user types 'exit', 'quit', or 'q'.
663
"""
664
665
666
def chat(model):
    """Main chat loop with native function calling."""
    print("=" * 70)
    print(" " * 20 + "FUNCTION CALLING")
    print("=" * 70)
    print("\nCapabilities:")
    print(" 🔍 Web Search - e.g., 'Who is Newton?', 'Latest AI news'")
    print(" 📈 Stock Prices - e.g., 'Stock price of AAPL', 'TSLA price'")
    print(" 🕐 World Time - e.g., 'Time in London', 'What time is it in Tokyo?'")
    print(" 💻 System Stats - e.g., 'System memory', 'CPU usage'")
    print("\nType 'exit' to quit.\n")
    print("-" * 70)

    # Conversation history
    messages = []

    while True:
        try:
            user_input = input("\nYou: ").strip()
        except (EOFError, KeyboardInterrupt):
            print("\n\nGoodbye! 👋")
            break

        if user_input.lower() in {"exit", "quit", "q"}:
            print("\nGoodbye! 👋")
            break

        if not user_input:
            continue

        # Add user message
        messages.append({"role": "user", "content": user_input})

        # Generate response with tools
        print("🤔 Thinking...", end="", flush=True)

        try:
            # Manually format the prompt with tools (KerasHub doesn't have apply_chat_template)
            prompt = format_prompt_with_tools(messages, TOOL_DEFINITIONS)

            # Generate
            output = model.generate(prompt, max_length=2048)

            # Extract only the new response
            response = output[len(prompt) :].strip()
            if "<start_function_response>" in response:
                response = response.split("<start_function_response>")[0].strip()
            if "<start_of_turn>" in response:
                response = response.split("<start_of_turn>")[0].strip()
            # Remove end tokens
            response = response.replace("<end_of_turn>", "").strip()

            print(f"\r{' ' * 40}\r", end="")  # Clear status
            print(f"[DEBUG] Raw Model Response: {response}")

            # Check if the model made a function call
            function_call = parse_function_call(response)

            if function_call:
                tool_name = function_call["name"]
                tool_args = function_call["arguments"]

                print(f"🔧 Calling {tool_name}...", end="", flush=True)

                # Execute tool
                if tool_name in TOOLS:
                    try:
                        result = TOOLS[tool_name](**tool_args)
                        formatted_result = format_tool_response(tool_name, result)

                        print(f"\r{' ' * 40}\r", end="")  # Clear status
                        print(f"\nAssistant: {formatted_result}")

                        # Clear conversation history after successful tool execution.
                        # This prevents token overflow and keeps the model focused.
                        messages = []

                    except Exception as e:
                        print(f"\r❌ Error executing {tool_name}: {e}")
                        messages.pop()  # Remove user message on error
                else:
                    print(f"\r❌ Unknown tool: {tool_name}")
                    messages.pop()
            else:
                # Regular text response (no function call)
                if not response or response == "...":
                    print(
                        "\nAssistant: I'm not sure how to help with that. "
                        "Please try rephrasing your question."
                    )
                    messages.pop()  # Remove the problematic message
                else:
                    print(f"\nAssistant: {response}")
                    messages.append({"role": "assistant", "content": response})

                    # Keep only the last 3 turns to prevent context overflow
                    if len(messages) > 6:  # 3 user + 3 assistant
                        messages = messages[-6:]

        except Exception as e:
            print(f"\r❌ Error: {e}")
            messages.pop()  # Remove user message on error
            continue


"""
774
This is where the program starts execution. When you run this script:
775
776
- The FunctionGemma model is loaded from the specified path
777
- The interactive chat loop begins, waiting for user input
778
- The Assistant processes queries and executes tools until the user types 'exit'
779
"""
780
781
if __name__ == "__main__":
    model = load_model()
    chat(model)

"""
786
# Conclusion
787
788
## What You've Achieved
789
790
By following this guide, you've learned how to build a production-ready AI Assistant with
791
native function calling capabilities using FunctionGemma in KerasHub. Here's what you've accomplished:
792
793
### Native Function Calling Implementation:
794
795
- Implemented FunctionGemma native function calling format without relying on external APIs
796
- Learned to construct proper tool declarations and parse structured function calls
797
- Built a manual chat template that replicates HuggingFace's apply_chat_template()
798
799
### Multi-Tool Assistant Architecture:
800
801
- Created four distinct tools: web search, stock prices, world time, and system stats
802
- Designed tools with clear error handling and user-friendly responses
803
- Implemented explicit error messages.
804
805
### Prompt Engineering Best Practices:
806
807
- Used few-shot examples to guide model behavior and improve tool selection
808
- Structured prompts with proper role separation (developer, user, model)
809
- Managed conversation history to prevent token overflow
810
811
### Error Handling and User Experience:
812
813
- Implemented graceful degradation (e.g., UTC fallback for unknown timezones)
814
- Provided clear, actionable error messages for invalid inputs
815
- Added contextual information (dates, notes) to make responses more informative
816
817
### Real-World Applications:
818
819
This pattern can be extended to build:
820
821
- Customer service chatbots with database access
822
- Data analysis assistants with computation tools
823
- Personal productivity Assistants with calendar and email integration
824
- Domain-specific assistants (medical, legal, financial) with specialized tools
825
826
## Key Takeaways
827
828
- **Small models can be powerful**: Even a 270M parameter model can reliably use tools
829
when given proper guidance through few-shot examples and clear tool descriptions.
830
831
- **Transparency matters**: Explicit error handling and clear responses build user trust
832
more effectively.
833
834
- **Function calling is composable**: The same pattern works for any number of tools,
835
making it easy to extend your Assistant's capabilities.
836
837
## Next Steps
838
839
To further improve this agent, consider:
840
841
- Fine-tuning the model on domain-specific tool usage examples to optimize performance.
842
- Adding authentication and rate limiting for production deployment.
843
- Implementing tool result caching to reduce API calls.
844
- Creating a web interface using Gradio or Streamlit.
845
- Adding more sophisticated error recovery strategies.
846
847
Happy building! 🚀
848
"""
849
850