Battle-tested in production. Build on it with confidence.
Tool Use / Function Calling
Foundational capability every production agent relies on — MCP is converging as the universal standard, but structured output reliability and auth management at scale remain real engineering challenges.
Agentic · Infrastructure
modelcontextprotocol.io
Our Take
What It Is
Tool use (also called function calling) lets LLMs take real-world actions by generating structured API calls. Instead of just generating text, the model outputs a function name and parameters that your application executes. The Model Context Protocol (MCP), originally created by Anthropic and now donated to the Linux Foundation's Agentic AI Foundation, provides a universal standard for connecting models to tools.
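The mechanics can be sketched in a few lines. This is a minimal, provider-agnostic sketch: the `get_weather` tool and the model's JSON output here are hypothetical, and real APIs (OpenAI, Anthropic, Gemini) wrap this loop in their own request/response types.

```python
import json

# Hypothetical tool the application exposes. A real tool would call an
# external API or database; this stub just returns a canned string.
def get_weather(city: str) -> str:
    return f"18C and cloudy in {city}"

# Tool registry: maps the names the model may call to actual callables.
TOOLS = {"get_weather": get_weather}

# The model never runs code itself; it emits a structured call like this,
# typically JSON conforming to a schema you supplied in the request.
model_output = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'

# The application parses the call, executes the matching function, and
# feeds the result back to the model as a tool-result message.
call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
print(result)
```

An MCP server packages exactly this kind of registry behind a standard wire protocol, so the same tools work with any MCP-capable client instead of one provider's function-calling format.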
Why It Matters
This pattern is Proven because it's the foundation everything agentic is built on. The convergence on MCP is the story: Anthropic, OpenAI, Google, Apple, and Microsoft all support it now, with 1,000+ community-built servers. MCP adoption grew 340% in 2025. When every major provider agrees on a protocol, that's not a trend — it's infrastructure.
For practitioners, the practical impact is real: write one MCP server for your database or API, and it works with Claude, GPT, Gemini, and open-source models. Klarna's bot handles two-thirds of customer inquiries using function calling. This is production-scale, not experimental.
Key Developments
- Mar 2026: MCP ecosystem exceeds 1,000 community-built servers.
- Feb 2026: Apple integrates MCP into Xcode 26.3 for agentic coding.
- Early 2026: MCP donated to the Linux Foundation's Agentic AI Foundation (co-founded by Anthropic, Block, OpenAI).
- Late 2025: OpenAI officially adopts MCP across products, including ChatGPT desktop.
What to Watch
Auth and access control at enterprise scale is the bottleneck. Production deployments with thousands of users each connecting their own OAuth accounts require significant token management infrastructure. If the MCP ecosystem solves multi-tenant auth cleanly, tool use moves from "developer pattern" to "enterprise platform." Watch for the Linux Foundation governance to stabilise the spec.
Strengths
- Industry convergence on MCP: Anthropic, OpenAI, Google, Apple, Microsoft all support it. 1,000+ community-built servers eliminate vendor lock-in.
- Production maturity: Every major agent framework supports tool use as first-class. Klarna's bot handles two-thirds of customer inquiries using function calling.
- Strict mode eliminates format errors: OpenAI strict mode and Anthropic structured outputs guarantee valid JSON responses.
- Dynamic tool loading: Models pull tool definitions on demand, reducing context window overhead.
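The strict-mode point above rests on JSON Schema: you declare each tool's parameters as a schema, and the provider constrains decoding so the output must validate. A sketch of an OpenAI-style tool definition follows; the field names track OpenAI's Chat Completions format as an assumption, so check the exact shape against your provider's docs. Note that strict mode typically requires every property to be listed in `required` and `additionalProperties` to be `false`.

```python
import json

# OpenAI-style tool definition. "strict": True asks the API to constrain
# generation so the emitted arguments always validate against the schema.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            # Strict mode: all properties required, no extra keys allowed.
            "required": ["city", "unit"],
            "additionalProperties": False,
        },
    },
}

print(json.dumps(weather_tool, indent=2))
```

The same schema shape is what an MCP server advertises for each tool, which is how one definition can serve multiple model providers.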
Considerations
- Context window overhead: Structured tool definitions increase context length. Accuracy drops 16-50 percentage points when tool context exceeds 8,000 tokens.
- Auth management at scale: Production deployments with thousands of users connecting OAuth accounts require significant infrastructure.
- Structured output accuracy penalty: Requiring JSON output reduced accuracy by 27.3 percentage points on GSM8K versus natural language.
- MCP ecosystem maturity: Many community servers are experimental. Enterprise-grade servers with proper error handling and monitoring are still emerging.