If you've worked with AI agents, you've likely seen the terms "function calling" and "tool use" thrown around interchangeably. They sound similar, but they represent fundamentally different approaches to extending LLM capabilities. Understanding the distinction is critical when building production AI agents—it affects your architecture, error handling, cost, and user experience.
In this guide, we'll break down both concepts, compare their technical implementations, and help you choose the right approach for your use case.
Function calling (also called "tool calling" by some providers) is a structured output feature where the LLM returns a JSON object specifying which function to call and what arguments to pass—instead of generating natural language text.
How it works:
```json
{"function": "get_weather", "arguments": {"city": "London"}}
```

Example with OpenAI:
```javascript
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather in London?" }],
  tools: [{
    type: "function",
    function: {
      name: "get_weather",
      description: "Get current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name" },
          units: { type: "string", enum: ["celsius", "fahrenheit"] }
        },
        required: ["city"]
      }
    }
  }]
});

// LLM returns:
// {tool_calls: [{function: {name: "get_weather", arguments: '{"city":"London"}'}}]}
```
Tool use is a broader paradigm where the LLM can interact with external systems, execute code, or retrieve information—but the implementation details vary. Some providers use prompt-based approaches, others use structured outputs similar to function calling.
Anthropic's Tool Use (Claude):
Anthropic's implementation is nearly identical to OpenAI's function calling—they just call it "tool use." You define tools with schemas, Claude decides when to use them, and returns structured JSON.
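To make the similarity concrete, here is a minimal sketch of Claude's tool-use shapes using plain dicts rather than the SDK. The main surface difference is that Anthropic's schema key is `input_schema` (OpenAI uses `parameters`), and tool requests come back as content blocks of type `tool_use`; the block contents below are illustrative.

```python
# Tool definition in Anthropic's format: note "input_schema", not "parameters".
weather_tool = {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def extract_tool_calls(content_blocks):
    """Pull tool requests out of a Claude response's content blocks."""
    return [
        (b["name"], b["input"])
        for b in content_blocks
        if b.get("type") == "tool_use"
    ]

# Claude returns tool requests as content blocks shaped like this:
blocks = [
    {"type": "text", "text": "Let me check."},
    {"type": "tool_use", "id": "toolu_01", "name": "get_weather",
     "input": {"city": "London"}},
]
print(extract_tool_calls(blocks))  # [('get_weather', {'city': 'London'})]
```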
Prompt-based tool use (older approach):
Before structured outputs existed, developers would include tool descriptions in the system prompt and parse LLM text output:
```
System: You have access to these tools:
- weather(city): Get current weather
- calculator(expression): Evaluate math
When you need to use a tool, write: TOOL: tool_name(arguments)

User: What's 25 * 87?
Assistant: TOOL: calculator(25 * 87)
[You execute calculator, return "2175"]
Assistant: The result is 2,175.
```
This approach is fragile—parsing can fail if the LLM deviates from the expected format.
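A minimal parser for this convention might look like the sketch below. The `TOOL:` format follows the system prompt above and is illustrative; the key point is that a `None` return (the LLM deviated from the format) must be handled explicitly.

```python
import re

# Matches the "TOOL: tool_name(arguments)" convention from the prompt above.
TOOL_PATTERN = re.compile(r"TOOL:\s*(\w+)\((.*)\)")

def parse_tool_call(text):
    """Return (tool_name, raw_args), or None if no tool call is found."""
    match = TOOL_PATTERN.search(text)
    if match is None:
        return None  # the LLM answered directly, or deviated from the format
    return match.group(1), match.group(2)

print(parse_tool_call("TOOL: calculator(25 * 87)"))        # ('calculator', '25 * 87')
print(parse_tool_call("I'll just answer that directly."))  # None
```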
| Aspect | Function Calling (OpenAI/Anthropic) | Prompt-Based Tool Use |
|---|---|---|
| Output format | Guaranteed JSON structure | Free-form text (must parse) |
| Reliability | High (structured output) | Medium (parsing errors common) |
| Provider support | OpenAI, Anthropic, Google, Mistral | Any LLM |
| Error handling | Invalid calls rejected by API | Must validate manually |
| Cost | Slight overhead (tool schemas in context) | Lower token usage |
Choose function calling if:
- Your provider supports it (OpenAI, Anthropic, Google, Mistral)
- You need guaranteed JSON structure and API-level validation of calls
- Parsing failures are unacceptable in production

Choose prompt-based if:
- Your model has no native function calling (e.g., smaller open-source models)
- You want to minimize token overhead from tool schemas
- You can tolerate occasional parsing errors and validate outputs yourself
```python
import json

async def agent_loop(messages):
    while True:
        response = await llm.complete(messages, tools=TOOLS)
        if response.finish_reason == "stop":
            # LLM finished, return final answer
            return response.content
        if response.finish_reason == "tool_calls":
            # Execute each tool call
            for tool_call in response.tool_calls:
                result = await execute_tool(
                    tool_call.function.name,
                    json.loads(tool_call.function.arguments)
                )
                # Add tool result to conversation
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": json.dumps(result)
                })
            # Loop back to LLM with tool results
            continue
```
Function calling error types:
- Unknown function: the LLM hallucinates a function name that isn't in your tool set
- Invalid arguments: the arguments fail JSON parsing or schema validation
- Execution failures: the tool itself raises an exception or hangs
- Runaway loops: the LLM keeps calling tools without producing a final answer

Mitigation strategies:
- Validate the function name against a registry before executing
- Wrap execution in timeouts and catch exceptions
- Return structured error messages so the LLM can self-correct on the next turn
- Cap the number of tool calls per conversation
```python
import asyncio
import json

MAX_TOOL_CALLS = 10

async def safe_agent_loop(messages):
    call_count = 0
    while call_count < MAX_TOOL_CALLS:
        response = await llm.complete(messages, tools=TOOLS)
        if response.finish_reason == "tool_calls":
            for tool_call in response.tool_calls:
                try:
                    # Validate function exists
                    if tool_call.function.name not in TOOL_MAP:
                        result = {"error": f"Unknown function: {tool_call.function.name}"}
                    else:
                        # Execute with timeout
                        result = await asyncio.wait_for(
                            execute_tool(
                                tool_call.function.name,
                                json.loads(tool_call.function.arguments)
                            ),
                            timeout=30
                        )
                except Exception as e:
                    result = {"error": str(e)}
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": json.dumps(result)
                })
                call_count += 1
        else:
            return response.content
    return "Agent exceeded maximum tool calls."
```
Function calling adds token overhead—tool schemas must be included in every API call. For an agent with 10 tools (average 100 tokens per schema), that's 1,000 extra input tokens per request.
Cost example (GPT-4o): at 100K requests/month, those 1,000 extra tokens add up to 100M input tokens—tool schema overhead alone costs $375 per month. Mitigation: only include relevant tools based on user intent.
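One way to apply that mitigation is a lightweight router that only attaches tool schemas matching the user's message. The sketch below uses naive keyword matching; the tool names and keyword lists are illustrative, and a production router would likely use embeddings or a classifier instead.

```python
# Illustrative tool registry and keyword map (not a production router).
TOOL_SCHEMAS = {
    "get_weather": {"name": "get_weather", "description": "Get current weather"},
    "calculator": {"name": "calculator", "description": "Evaluate math"},
    "search_docs": {"name": "search_docs", "description": "Search documentation"},
}

INTENT_KEYWORDS = {
    "get_weather": ["weather", "temperature", "forecast"],
    "calculator": ["calculate", "math", "*", "+"],
    "search_docs": ["docs", "documentation", "how do i"],
}

def select_tools(user_message, max_tools=3):
    """Return only the tool schemas whose keywords appear in the message."""
    text = user_message.lower()
    selected = [
        TOOL_SCHEMAS[name]
        for name, keywords in INTENT_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    ]
    # Fall back to all tools if nothing matched
    return (selected or list(TOOL_SCHEMAS.values()))[:max_tools]

print([t["name"] for t in select_tools("What's the weather in London?")])
# ['get_weather']
```

Sending one schema instead of ten cuts the per-request overhead roughly tenfold for single-intent queries.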
**Are function calling and tool use the same thing?**
Mostly yes—OpenAI calls it "function calling" while Anthropic calls it "tool use," but the implementation is nearly identical. Both use JSON schemas to define tools and return structured outputs. Historically, "tool use" was broader (including prompt-based approaches), but modern usage treats them as synonyms.
**Can an LLM call multiple functions at once?**
Yes. GPT-4o, Claude 3.5, and Gemini support parallel function calling. If you ask "What's the weather in London and New York?", the LLM can return two function calls in one response. This reduces latency—both calls execute in parallel instead of sequentially.
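On the execution side, you get that latency win by running the returned calls concurrently, e.g. with `asyncio.gather`. The sketch below mimics the shape of a parallel function-calling response; `get_weather` is a stand-in for a real API call.

```python
import asyncio

async def get_weather(city):
    await asyncio.sleep(0.01)  # stand-in for a real weather API call
    return {"city": city, "temp_c": 18}

async def execute_all(tool_calls):
    """Run every requested tool call concurrently and collect results in order."""
    tasks = [get_weather(**call["arguments"]) for call in tool_calls]
    return await asyncio.gather(*tasks)

# Two tool calls returned in a single LLM response:
tool_calls = [
    {"name": "get_weather", "arguments": {"city": "London"}},
    {"name": "get_weather", "arguments": {"city": "New York"}},
]
results = asyncio.run(execute_all(tool_calls))
print(results)
# [{'city': 'London', 'temp_c': 18}, {'city': 'New York', 'temp_c': 18}]
```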
**What happens if the LLM calls a function with bad arguments?**
The function executes and returns an error or unexpected result. The LLM sees this in the next turn and typically corrects itself or apologizes. Best practice: return clear error messages like `{"error": "City not found. Try a major city name."}` so the LLM can retry correctly.
**How do I prevent infinite tool-call loops?**
Set a maximum tool call limit (10-15 is reasonable). Track call count in your agent loop and force termination if exceeded. Also implement tool call deduplication—if the LLM calls the same function with the same arguments twice in a row, something's wrong.
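That deduplication check can be as simple as remembering the previous call. A minimal sketch, assuming exact-match comparison on the function name and JSON-normalized arguments:

```python
import json

class DuplicateCallDetector:
    """Flags back-to-back tool calls with an identical name and arguments."""

    def __init__(self):
        self._last = None

    def is_repeat(self, name, arguments):
        # Normalize argument dicts so key order doesn't affect comparison.
        key = (name, json.dumps(arguments, sort_keys=True))
        repeated = key == self._last
        self._last = key
        return repeated

detector = DuplicateCallDetector()
print(detector.is_repeat("get_weather", {"city": "London"}))  # False
print(detector.is_repeat("get_weather", {"city": "London"}))  # True (same call twice)
print(detector.is_repeat("get_weather", {"city": "Paris"}))   # False
```

When a repeat is detected, a reasonable response is to inject an error message telling the LLM it already has that result, rather than executing the call again.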
**Can open-source models do function calling?**
Some can. Models like Mistral 7B Instruct v0.3, LLaMA 3.1, and Hermes 2 Pro have been fine-tuned for function calling. However, reliability is lower than GPT-4 or Claude. For production, test thoroughly or use prompt-based tool use with careful parsing.