
16 posts tagged with "AI"

AI agents, LLMs, and intelligent automation


MCP Protocol Deep Dive: Message Flow and Transports

STOA Team (The STOA Platform Team) · 13 min read

The Model Context Protocol (MCP) is a JSON-RPC 2.0 based protocol that standardizes how AI agents discover, authenticate with, and invoke external tools. It defines four phases — initialization, discovery, invocation, and streaming — over pluggable transports including SSE, WebSocket, and stdio. This article covers the protocol internals that matter for production deployments.
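As a rough illustration of the first two phases, the client and server exchange plain JSON-RPC 2.0 messages. The sketch below is simplified — the `jsonrpc`/`id`/`method`/`params` envelope is standard JSON-RPC 2.0, but the parameter payloads are trimmed; consult the MCP specification for the normative fields.

```python
import json

# Phase 1: initialization -- the client announces its protocol version
# and capabilities. The params shown here are a simplified sketch.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"protocolVersion": "2025-03-26", "capabilities": {}},
}

# Phase 2: discovery -- the client asks the server which tools it exposes.
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# On any transport (SSE, WebSocket, stdio), each message is serialized
# as a standalone JSON object.
wire = json.dumps(initialize)
print(wire)
```

The same envelope carries invocation (`tools/call`) and streamed results in the later phases; only the transport framing differs.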

MCP vs OpenAI Function Calling vs LangChain: Which One Wins in 2026?

STOA Team · 11 min read

Three approaches dominate how AI agents call external tools in 2026: the Model Context Protocol (MCP), OpenAI Function Calling, and LangChain Tools. MCP is an open protocol for runtime tool discovery across any AI provider. OpenAI Function Calling is a proprietary API feature tightly integrated with OpenAI models. LangChain Tools is a framework abstraction that wraps tool definitions for orchestration pipelines. They solve different problems, operate at different layers, and can coexist in the same architecture.
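The layering difference is easiest to see with one tool described three ways. These are simplified sketches (field names trimmed, and the LangChain wrapper mimicked with a plain dict rather than the real framework classes), not normative schemas:

```python
# One hypothetical "get_weather" tool, described at each layer.

# OpenAI Function Calling: a proprietary schema shipped with each
# API request to an OpenAI model.
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# MCP: the server advertises an equivalent JSON Schema at runtime
# (via tools/list), so any MCP-capable client can discover it.
mcp_tool = {
    "name": "get_weather",
    "inputSchema": openai_tool["function"]["parameters"],
}

# LangChain: a framework-level wrapper around a callable, sketched
# here as a plain dict instead of the real langchain Tool class.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

langchain_tool = {"name": "get_weather", "func": get_weather}
```

Because they operate at different layers — wire protocol, model API, and orchestration framework — an agent can discover a tool over MCP, surface it to a model as an OpenAI function schema, and invoke it through a LangChain pipeline.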

API Gateway Migration Guide: From Legacy to AI-Ready (2026)

STOA Team · 20 min read

Migrating from an existing API gateway is one of the highest-stakes infrastructure projects an enterprise platform team can undertake. Done well, it eliminates years of accumulated technical debt, reduces licensing costs, and opens the door to AI agent integration. Done poorly, it disrupts production APIs and erodes trust with every team that depends on the platform.

This guide provides a vendor-neutral framework for planning and executing an API gateway migration in 2026 — covering assessment, policy translation, phased traffic migration, and the new requirements introduced by AI agents. Specific guidance for individual platforms (Broadcom Layer7, Software AG webMethods, Axway, Apigee) is linked throughout.
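The phased traffic migration step can be sketched as a simple canary ramp. This is a minimal illustration assuming a router sits in front of both gateways; real deployments use weighted DNS, a load balancer, or the gateway's own routing rules, and gate each stage on observed error rates:

```python
import random

def route(percent_new: int, rng: random.Random) -> str:
    """Send `percent_new`% of requests to the new gateway, the rest to legacy."""
    return "new-gateway" if rng.randrange(100) < percent_new else "legacy-gateway"

# Ramp the new gateway up in observable stages, pausing at each stage
# to compare error rates and latency before proceeding.
rng = random.Random(42)
for stage in (5, 25, 50, 100):
    sample = [route(stage, rng) for _ in range(1000)]
    share = sample.count("new-gateway") / len(sample)
    print(f"stage {stage}%: observed {share:.1%} of traffic on new gateway")
```

The key property is reversibility: at any stage, setting the percentage back to zero restores the legacy path without redeploying anything.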

Connect AI Agents to Enterprise APIs Securely with MCP

STOA Team · 11 min read

Connecting AI agents to enterprise APIs is the next frontier of digital transformation — and the next frontier of security risk. As organizations deploy AI agents built on Claude, GPT, Gemini, and open-source models, these agents need access to internal systems: databases, CRMs, ERPs, payment processors, and more. The question is not whether to grant this access, but how to do it without opening a new attack surface.

This article is part of the What is an MCP Gateway series. For the strategic context on why MCP matters for enterprise architecture, see ESB is Dead, Long Live MCP.
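One concrete way to avoid opening that attack surface is least-privilege scoping: before any tool call is forwarded, verify that the agent's credential carries the scope the tool requires, and deny by default. The sketch below is hypothetical — the tool names, scope strings, and `authorize` helper are illustrative, not part of MCP itself:

```python
# Map each tool to the credential scope it requires (illustrative names).
REQUIRED_SCOPE = {
    "crm.lookup_customer": "crm:read",
    "payments.refund": "payments:write",
}

def authorize(tool: str, granted_scopes: set[str]) -> bool:
    """Allow a tool call only if the agent's credential holds the required scope."""
    required = REQUIRED_SCOPE.get(tool)
    # Unknown tools are denied by default (fail closed).
    return required is not None and required in granted_scopes

agent_scopes = {"crm:read"}
print(authorize("crm.lookup_customer", agent_scopes))  # read-only lookup
print(authorize("payments.refund", agent_scopes))      # write to payments
```

Failing closed on unknown tools matters: an agent that discovers a new tool at runtime gets no access until a human has mapped it to a scope.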

The ESB Is Dead: From Service Buses to AI Gateways

STOA Team · 9 min read

Let us say what many enterprise architects are thinking but few vendors will admit: the ESB is dead. The enterprise service bus — that monolithic integration middleware that defined the SOA era — has been in decline for a decade. What killed it was not a single technology but a series of architectural shifts: microservices, API gateways, event-driven architectures, and now the Model Context Protocol (MCP). Each shift made the ESB less relevant. MCP may be the final blow.

What Is an MCP Gateway? The Security Layer AI Agents Need

STOA Team · 9 min read

As AI agents move from demos to production, enterprises face a critical question: how do you give an LLM secure, governed access to your internal tools and data? The answer is an MCP gateway — a new category of infrastructure that sits between AI agents and the services they consume, enforcing security, observability, and policy at every interaction.
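The mediation loop such a gateway performs on every interaction can be sketched in a few lines. All names here are illustrative, not a specific product's API — the point is the shape: every call is logged (observability), checked against an allow-list (policy), and proxied rather than made directly (security):

```python
import time

AUDIT_LOG: list[dict] = []
ALLOWED = {("support-agent", "kb.search")}  # (agent, tool) pairs granted by policy

def gateway_call(agent: str, tool: str, args: dict, backend) -> dict:
    """Mediate one agent tool call: log it, enforce policy, then proxy it."""
    allowed = (agent, tool) in ALLOWED
    entry = {"ts": time.time(), "agent": agent, "tool": tool, "allowed": allowed}
    AUDIT_LOG.append(entry)           # observability: every interaction is recorded
    if not allowed:                   # policy: deny anything not explicitly granted
        return {"error": "policy_denied"}
    return {"result": backend(args)}  # invocation is proxied, never direct

result = gateway_call("support-agent", "kb.search",
                      {"q": "refund policy"},
                      lambda a: f"3 hits for {a['q']}")
print(result)
```

Because the agent never holds backend credentials itself, revoking or rescoping access is a gateway configuration change, not an agent redeploy.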