
# MCP vs Function Calling vs LangChain: Which Wins in 2026?

· 11 min read
STOA Team
The STOA Platform Team

Three approaches dominate how AI agents call external tools in 2026: the Model Context Protocol (MCP), OpenAI Function Calling, and LangChain Tools. MCP is an open protocol for runtime tool discovery across any AI provider. OpenAI Function Calling is a proprietary API feature tightly integrated with OpenAI models. LangChain Tools is a framework abstraction that wraps tool definitions for orchestration pipelines. They solve different problems, operate at different layers, and can coexist in the same architecture.

Part of the MCP Gateway Series

This comparison focuses on architectural differences. For MCP fundamentals, see What is an MCP Gateway?. For protocol internals, see MCP Protocol Deep Dive.

## TL;DR Comparison

| Dimension | MCP | OpenAI Function Calling | LangChain Tools |
|---|---|---|---|
| Type | Open protocol (JSON-RPC 2.0) | Proprietary API feature | Framework abstraction |
| Maintained by | Anthropic + community | OpenAI | LangChain Inc. + community |
| Discovery | Runtime (tools/list) | Compile-time (schema in API call) | Compile-time (Python/TS code) |
| Transport | HTTP+SSE, WebSocket, stdio | OpenAI API (HTTPS) | In-process function calls |
| Provider lock-in | None — works with any AI provider | OpenAI models only | Multiple providers via adapters |
| Multi-tenancy | Built into protocol (tenant-scoped tools) | Application-level | Application-level |
| Security model | Gateway-enforced (auth, policies, audit) | API key authentication | Application-level |
| Streaming | Native (SSE, progress notifications) | Streaming via OpenAI API | Provider-dependent |
| Enterprise governance | Designed for it (OPA, metering, audit trails) | DIY | DIY or via plugins |
| Best for | Production enterprise, multi-provider, governed | OpenAI-only applications | Prototyping, orchestration pipelines |

## Architecture Comparison

### MCP: Protocol Layer

MCP operates at the protocol layer — it defines how clients and servers communicate, regardless of the AI model or framework:

```
┌──────────────────┐      MCP Protocol       ┌──────────────────┐
│    AI Agent      │  (JSON-RPC over SSE)    │    MCP Server    │
│  (Claude, GPT,   │ ──────────────────────→ │   (Gateway or    │
│   custom LLM)    │ ←────────────────────── │  direct server)  │
└──────────────────┘                         └────────┬─────────┘
                                                      │
                                              ┌───────┴───────┐
                                              │ Backend APIs  │
                                              └───────────────┘
```

Key characteristics:

  • Runtime discovery: Agents call tools/list to find available tools at connection time
  • Transport-agnostic: Same protocol over SSE, WebSocket, or stdio
  • Provider-independent: Any AI model that speaks JSON-RPC can use MCP
  • Gateway-compatible: An MCP gateway adds auth, rate limiting, and audit without changing the protocol
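
The runtime-discovery handshake can be sketched as raw JSON-RPC 2.0 messages. This is a minimal illustration: the get_weather tool and its schema are invented for the example, and a real session would also begin with an initialize exchange.

```python
import json

# A tools/list request as an MCP client would send it (JSON-RPC 2.0).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A server response advertising one tool. The tool name and schema
# here are invented for illustration.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Fetch current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# The client builds its tool catalog at connection time, with no
# compile-time knowledge of which tools the server offers.
catalog = {t["name"]: t for t in response["result"]["tools"]}
print(json.dumps(request), sorted(catalog))
```

Because the catalog comes from the server, a gateway can return a different tool list per tenant without any client-side change.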

### OpenAI Function Calling: API Feature

OpenAI Function Calling is a feature of the OpenAI Chat Completions API — tool definitions are passed as parameters in each API call:

```
┌──────────────────┐       OpenAI API        ┌──────────────────┐
│   Application    │ ──────────────────────→ │      OpenAI      │
│                  │ ←────────────────────── │  (GPT-4, etc.)   │
└────────┬─────────┘                         └──────────────────┘
         │
         │ (Application executes the function locally)
         ▼
┌──────────────────┐
│   Backend APIs   │
└──────────────────┘
```

Key characteristics:

  • Compile-time definitions: Tool schemas are embedded in each API request
  • Model executes nothing: OpenAI returns a function call request; the application executes it
  • OpenAI-only: Requires the OpenAI API (GPT-4, GPT-4o, etc.)
  • Application responsibility: Auth, rate limiting, error handling, and audit are all application code
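
The compile-time model can be sketched as the request payload an application would send to the Chat Completions endpoint. The get_weather function is invented for illustration, and no network call is made here:

```python
# Shape of a Chat Completions request with an embedded tool definition.
# Sending this payload for real would require an API key and an HTTPS
# client or the openai SDK; here we only construct and inspect it.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Weather in Oslo?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Fetch current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# The model replies with a tool_call; the application must execute the
# function locally and send the result back in a follow-up message.
print([t["function"]["name"] for t in payload["tools"]])
```

Note that the full tool list rides along with every request: adding or removing a tool means changing and redeploying this application code.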

### LangChain Tools: Framework Abstraction

LangChain Tools is a framework-level abstraction that wraps tool definitions in Python/TypeScript objects:

```
┌──────────────────┐
│   Application    │
│   (LangChain)    │
│                  │
│  ┌────────────┐  │
│  │   Tool A   │──┼──→ Backend API A
│  │   Tool B   │──┼──→ Backend API B
│  │   Agent    │──┼──→ LLM (any provider)
│  └────────────┘  │
└──────────────────┘
```

Key characteristics:

  • Code-defined: Tools are Python/TypeScript classes with schemas
  • In-process execution: Tools run in the same process as the agent
  • Provider-agnostic: LangChain adapters support OpenAI, Anthropic, and others
  • Orchestration focus: Chains, agents, and memory management on top of tool calling
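
The code-defined pattern can be illustrated without the framework installed. The sketch below mimics a @tool-style decorator with plain Python; it is a stand-in for LangChain's registration mechanism, not its actual API:

```python
# A minimal stand-in for a framework tool registry: tools are plain
# Python functions registered in-process at application startup.
TOOLS = {}

def tool(fn):
    """Register a function as a tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    """Fetch current weather for a city (stubbed for illustration)."""
    return f"Sunny in {city}"

# An agent loop would pick a tool by name and call it in-process,
# in the same address space as the application.
print(TOOLS["get_weather"]("Oslo"))
```

The tool set is fixed by whatever code ran at startup, which is why discovery is effectively compile-time here.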

## Detailed Comparison

### Discovery Model

How does the AI agent learn about available tools?

| Approach | MCP | OpenAI FC | LangChain |
|---|---|---|---|
| When | Runtime (per-connection) | Per-request (compile-time) | Application startup |
| How | tools/list RPC call | tools parameter in API request | Python/TS class registration |
| Dynamic | Yes — server can change tools per tenant, per session | No — application controls the list | Limited — code changes required |
| Filtered | Yes — gateway filters per-tenant | No — application filters | No — application filters |

MCP's dynamic discovery is the fundamental architectural difference. A tool catalog can change without redeploying the application. New tools appear when backend teams register them. Different tenants see different tools. This is critical for enterprise environments where tool availability is governed by policy, not code.

OpenAI FC's static approach means the application must know all tools at build time. Adding a tool requires a code change and deployment. This is simpler for small applications but doesn't scale to enterprise environments with hundreds of tools managed by different teams.

LangChain's code-defined approach is similar to OpenAI's in that tools are defined at build time, but LangChain provides abstractions (tool registries, dynamic tool loading) that can simulate runtime discovery within the framework.
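
The per-tenant, dynamic behavior described above can be sketched as a gateway-side filter over one shared catalog. Tenant names and catalog entries are invented for illustration:

```python
# Sketch of gateway-side, per-tenant tool filtering: the same server
# catalog yields different tools/list results for different tenants.
CATALOG = [
    {"name": "crm_lookup", "allowed_tenants": {"sales", "support"}},
    {"name": "erp_export", "allowed_tenants": {"finance"}},
    {"name": "search_docs", "allowed_tenants": {"sales", "support", "finance"}},
]

def tools_for(tenant: str) -> list[str]:
    """Return the tool names a tenant may discover via tools/list."""
    return [t["name"] for t in CATALOG if tenant in t["allowed_tenants"]]

print(tools_for("finance"))
```

Registering a new tool means adding a catalog entry on the gateway; no agent or application redeploy is needed for it to appear in the next tools/list response.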

### Security and Governance

| Dimension | MCP (with Gateway) | OpenAI FC | LangChain |
|---|---|---|---|
| Authentication | Gateway-enforced (JWT, API key, mTLS) | Application code | Application code |
| Authorization | OPA policies per-tenant, per-tool | Application code | Application code |
| Rate limiting | Gateway-enforced per-tenant | Application code | Application code |
| Audit trail | Automatic per-invocation logging | Application code | Application code |
| Input validation | Schema validation at gateway | OpenAI validates params | Pydantic/Zod in tool class |
| Secrets isolation | Backend creds in gateway, never in agent | Application manages secrets | Application manages secrets |
| Multi-tenancy | Protocol-native (tenant-scoped tools) | Application-level | Application-level |

The pattern is clear: MCP with a gateway provides security at the infrastructure layer, while OpenAI FC and LangChain push all security concerns to application code.

For a single-developer prototype, application-level security is fine. For enterprise deployments with compliance requirements (NIS2, DORA, SOC 2), infrastructure-level governance is essential. You don't want every application team reimplementing authentication, rate limiting, and audit logging.
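
As one concrete example of infrastructure-level enforcement, a gateway might apply a fixed-window, per-tenant rate limit before proxying any tool call. The sketch below is illustrative only; the class, limits, and tenant names are invented:

```python
import time

# Sketch of a fixed-window, per-tenant rate limiter, as a gateway might
# apply before proxying a tool call to a backend.
class RateLimiter:
    def __init__(self, limit: int, window_s: float = 60.0):
        self.limit = limit
        self.window_s = window_s
        self.counts: dict[str, tuple[float, int]] = {}  # tenant -> (window start, count)

    def allow(self, tenant: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        start, count = self.counts.get(tenant, (now, 0))
        if now - start >= self.window_s:
            start, count = now, 0          # window expired: start a new one
        if count >= self.limit:
            return False                   # over quota: reject before proxying
        self.counts[tenant] = (start, count + 1)
        return True

limiter = RateLimiter(limit=2)
print([limiter.allow("acme", now=0.0) for _ in range(3)])
```

Because this runs in the gateway, every application behind it gets the same enforcement without reimplementing it.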

### Enterprise Readiness

| Requirement | MCP (with Gateway) | OpenAI FC | LangChain |
|---|---|---|---|
| Multi-provider | Yes — any MCP client | No — OpenAI only | Yes — via adapters |
| Self-hosted | Yes — deploy on your infra | No — OpenAI cloud | Partial — framework is local, LLM may be cloud |
| Data residency | Full control | Data goes to OpenAI | Depends on LLM provider |
| Compliance audit | Built-in audit events | DIY logging | DIY logging |
| Centralized management | Gateway admin console | No — per-app | No — per-app |
| Tool catalog | Gateway + portal | No catalog | Community tool libraries |
| Cost metering | Per-tenant, per-tool metering | Token counting via API | Token counting via callbacks |

### Performance

| Metric | MCP | OpenAI FC | LangChain |
|---|---|---|---|
| Discovery latency | ~1-5 ms (tools/list RPC) | 0 ms (embedded in request) | 0 ms (in-memory) |
| Invocation overhead | Sub-millisecond (gateway proxy) | ~0 ms (local execution) + LLM API latency | ~0 ms (in-process) + LLM API latency |
| Network hops | Client → Gateway → Backend | Client → OpenAI → Client → Backend | Client → LLM → Client → Backend |
| Streaming | Native SSE/WS | OpenAI streaming API | Provider-dependent |

MCP adds a network hop (gateway), but the gateway overhead is sub-millisecond. The dominant latency in any AI tool-calling pipeline is the LLM inference time (hundreds of milliseconds to seconds), not the tool invocation infrastructure.

## When to Use Each

### Use MCP When:

  • Multiple AI providers: You use Claude, GPT, and/or open-source models and need a unified tool interface
  • Enterprise governance: You need centralized auth, rate limiting, audit trails, and multi-tenancy
  • Dynamic tool catalogs: Backend teams register tools independently; agents discover them at runtime
  • Production deployments: You're moving beyond prototyping to governed, compliant production systems
  • Self-hosted infrastructure: You need full control over where data flows (EU sovereignty, regulated industries)
  • Gateway pattern: You already use API gateways and want to extend the pattern to AI agent traffic

### Use OpenAI Function Calling When:

  • OpenAI-only applications: You exclusively use GPT models and don't need provider portability
  • Simple tool sets: You have a small, stable set of tools (< 20) that rarely change
  • Prototype stage: You're building a proof of concept and want the fastest path to working tool calls
  • Tight OpenAI integration: You use other OpenAI features (Assistants API, retrieval, code interpreter) that benefit from native function calling

### Use LangChain Tools When:

  • Complex orchestration: You need chains, agents, memory, and retrieval-augmented generation (RAG) in a single framework
  • Rapid prototyping: You want pre-built tool integrations (Google Search, Wikipedia, calculators) out of the box
  • Multi-step agents: Your use case involves multi-step reasoning with branching, backtracking, or plan-and-execute patterns
  • Framework benefits: You value the LangChain ecosystem (LangSmith tracing, LangGraph state machines, community tools)

## Can They Coexist?

Yes — and in many enterprise architectures, they do. The three approaches operate at different layers:

```
┌─────────────────────────────────────────────────────┐
│                  Application Layer                  │
│                                                     │
│                 ┌─────────────┐                     │
│                 │  LangChain  │ (orchestration,     │
│                 │    Agent    │  chains, memory)    │
│                 └──────┬──────┘                     │
│                        │                            │
│          ┌─────────────┘                            │
│   ┌──────┴──────┐        ┌────────────────┐         │
│   │ MCP Client  │        │ OpenAI Client  │         │
│   │ (tools via  │        │   (function    │         │
│   │  gateway)   │        │    calling)    │         │
│   └──────┬──────┘        └───────┬────────┘         │
└──────────┼───────────────────────┼──────────────────┘
           │                       │
           ▼                       ▼
   ┌──────────────┐        ┌──────────────┐
   │ MCP Gateway  │        │  OpenAI API  │
   │ (enterprise  │        │   (cloud)    │
   │    tools)    │        │              │
   └──────────────┘        └──────────────┘
```

A practical example:

  1. LangChain provides the agent framework (orchestration, memory, chains)
  2. MCP provides access to enterprise tools (CRM, ERP, internal APIs) via a governed gateway
  3. OpenAI Function Calling handles OpenAI-specific features (code interpreter, DALL-E integration)

The LangChain MCP adapter allows LangChain agents to consume MCP tools natively, bridging the framework and protocol layers.
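
Conceptually, such an adapter wraps each discovered MCP tool descriptor in a local callable the framework can invoke. The sketch below runs without either library installed; invoke_via_gateway and the tool entry are invented stand-ins, not the adapter's real API:

```python
# Sketch of what an MCP-to-framework adapter does: wrap each discovered
# MCP tool descriptor in a plain callable an agent framework can invoke.
def invoke_via_gateway(tool_name: str, arguments: dict) -> str:
    # Stand-in for a JSON-RPC tools/call request sent through the gateway.
    return f"called {tool_name} with {arguments}"

def wrap_mcp_tool(descriptor: dict):
    """Turn an MCP tool descriptor into a local callable."""
    def _call(**kwargs):
        return invoke_via_gateway(descriptor["name"], kwargs)
    _call.__name__ = descriptor["name"]
    _call.__doc__ = descriptor.get("description", "")
    return _call

get_weather = wrap_mcp_tool({"name": "get_weather", "description": "Weather lookup"})
print(get_weather(city="Oslo"))
```

From the framework's point of view the result looks like any other code-defined tool, while execution actually flows through the governed gateway.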

## Migration Paths

### From OpenAI Function Calling to MCP

If you started with OpenAI Function Calling and need to add governance or multi-provider support:

  1. Extract tool definitions from your API call parameters into MCP Tool CRDs
  2. Deploy an MCP gateway with the same tools registered
  3. Update your application to use an MCP client instead of embedding tools in the OpenAI API call
  4. Your OpenAI model still works — and Claude, GPT, and other models can now all use the same MCP tools

The key change: tool definitions move from application code to the gateway, where they can be managed centrally.
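
The schema part of step 1 is largely mechanical. Here is a hedged sketch of converting an OpenAI function-calling definition into an MCP-style tool definition; the example function is invented, and the field names follow the two formats' published shapes:

```python
# Sketch of converting an OpenAI function-calling definition into an
# MCP tool definition: same name/description/schema, different envelope.
def openai_fc_to_mcp(fc: dict) -> dict:
    fn = fc["function"]
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        "inputSchema": fn.get("parameters", {"type": "object"}),
    }

openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Fetch current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

mcp_tool = openai_fc_to_mcp(openai_tool)
print(mcp_tool["name"], list(mcp_tool["inputSchema"]["properties"]))
```

The hard part of the migration is not the schema translation but deciding ownership: once tools live on the gateway, backend teams maintain them instead of each application.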

### From LangChain Tools to MCP

If you have LangChain tools and want enterprise governance:

  1. Keep LangChain as the orchestration layer
  2. Register your tools as MCP tools on a gateway instead of defining them inline
  3. Use the LangChain MCP adapter to connect your agent to the MCP gateway
  4. Benefit: Centralized auth, rate limiting, audit, and multi-tenancy without rewriting your agent

### From MCP to LangChain (Adding Orchestration)

If you have MCP tools and need complex orchestration:

  1. Keep your MCP gateway and tool catalog
  2. Add LangChain as the agent framework on top
  3. Use the LangChain MCP adapter to consume your existing MCP tools
  4. Add LangChain-specific features: chains, memory, RAG, plan-and-execute patterns

## Frequently Asked Questions

### Can I use MCP with OpenAI models?

Yes. MCP is provider-independent. You can build an MCP client that uses GPT-4 for reasoning and calls tools via MCP. The model generates tool call requests (based on tool descriptions from tools/list), and your MCP client executes them through the gateway. This gives you OpenAI's model quality with MCP's enterprise governance.

### Does LangChain support MCP natively?

LangChain has community-maintained MCP adapters that allow LangChain agents to consume MCP tools as if they were native LangChain tools. The adapter handles the MCP protocol (connection, discovery, invocation) and exposes tools in LangChain's format. Check the LangChain documentation for the latest adapter availability.

### Is MCP only for Anthropic/Claude?

No. MCP was introduced by Anthropic but is an open protocol. Any AI model or framework can implement an MCP client. Claude has native MCP support, but MCP clients exist for GPT-based applications, open-source models, and custom agents. The protocol is model-agnostic by design.

### Which approach has the lowest latency?

For tool invocation overhead specifically, from lowest to highest: LangChain (in-process, ~0 ms), then OpenAI FC (local execution, ~0 ms), then MCP (gateway proxy, sub-millisecond). However, the dominant latency is always LLM inference time (100 ms to 10 s), which makes the tool invocation overhead negligible. Choose based on governance needs, not latency.

### Can I start with one and migrate to another?

Yes. The most common path is: start with OpenAI Function Calling or LangChain for prototyping, then add MCP when you need enterprise governance, multi-provider support, or centralized tool management. The tool definitions (name, description, schema) are conceptually the same across all three — what changes is where they live and how they're managed.

Feature comparisons are based on publicly available documentation as of 2026-02. Product capabilities change frequently. We encourage readers to verify current features directly with each vendor. All trademarks belong to their respective owners. See trademarks for details.

Evaluating AI agent architectures? Start with the MCP Gateway quickstart to see the protocol in action, or explore the MCP gateway documentation for architecture details.