

Model Context Protocol (MCP)




    What is the Model Context Protocol (MCP)?

    The Model Context Protocol (MCP) is a standardized orchestration and governance layer for connecting an AI application to external systems and data sources. It routes requests, runs plugins and tools, enforces security policies, and manages how AI models, AI agents, and other external tools communicate and share context.

    MCP ensures that generative AI and agentic AI systems can securely access data, call tools, and operate across distributed environments. The MCP provides a structured way for a large language model (LLM) to interact securely with external systems.

    MCP architecture and components

    MCP server: The MCP server is the central hub that exposes external tools and data sources and connects internal AI agents to external ecosystems.

    MCP host: The AI application or environment where the LLM lives. The host uses the LLM to process requests that may require external data or tools.

    MCP client: The MCP client sits inside the host and relays requests and responses between the LLM and the MCP server so traffic flows smoothly between the two.

    Transport layer: This carries messages between clients and servers, typically as JSON-RPC over standard input/output for local connections or HTTP for remote ones. It makes sure that data moving between components does so safely and in compliance with the rules you’ve set.
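The four components above can be sketched as a single request flow. This is an illustrative, in-process sketch only: the `get_weather` tool, its arguments, and the function names are all hypothetical, and the direct function call stands in for a real transport. What it shows is the shape of the flow, in which the host's LLM needs external data, the client wraps that need in a JSON-RPC request, and the server resolves it against a registered tool:

```python
import json

# --- MCP server side: a registry of callable tools (hypothetical tool) ---
TOOLS = {
    "get_weather": lambda args: {"city": args["city"], "temp_c": 21},
}

def mcp_server_handle(raw_request: str) -> str:
    """Parse a JSON-RPC request, dispatch to a tool, return a response."""
    req = json.loads(raw_request)
    tool = TOOLS[req["params"]["name"]]
    result = tool(req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# --- MCP client side: bridges the host's LLM and the server ---
def mcp_client_call(tool_name: str, arguments: dict) -> dict:
    request = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })
    # A direct call stands in here for the transport layer (stdio or HTTP).
    response = json.loads(mcp_server_handle(request))
    return response["result"]

# --- MCP host side: the LLM has decided it needs external data ---
print(mcp_client_call("get_weather", {"city": "Austin"}))
```

In a real deployment the client and server run in separate processes, but the division of labor is the same: the host decides what it needs, the client speaks the protocol, and the server owns the tools.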

    Benefits of the MCP

    The MCP sits in the background deciding what runs, which tools get called, and what rules everything has to follow. It acts as the glue, letting an AI model and tool share information in an orderly manner. Here are some of the benefits an MCP server offers:

    • Standardized communication via JSON: MCP uses JSON-RPC as a common language so that LLM applications can consistently talk to external tools and services, regardless of who built them.
    • Orchestration and routing: MCP acts as the backend orchestrator for AI systems. It determines which tools are called, in what order, and how requests flow between agents and models. It also validates that an agent or user requesting an action is authenticated.
    • Governance and security: The MCP is an enforcement layer. It prevents agents and plugins from doing things they shouldn’t by using IAM and RBAC rules to restrict which AI agents can access sensitive data or tools. 
    • Works alongside Model Control Plane (MoCoP): MCP focuses on the “who” and “how” of a request. MoCoP handles what’s actually inside that request by cleaning up inputs to prevent prompt injection and tracking where data came from. The two work together, but at different levels of the stack.
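    The JSON-RPC framing from the first bullet is easiest to see in a concrete message. In this sketch, the `search_logs` tool and its arguments are made up for illustration; the envelope fields (`jsonrpc`, `id`, `method`, `params`) are what the protocol keeps consistent across vendors:

```python
import json

# A JSON-RPC 2.0 request as an MCP client might send it. The tool name
# "search_logs" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "search_logs",
        "arguments": {"query": "error", "last_minutes": 15},
    },
}

# The matching response reuses the request id so the client can pair them.
response = {
    "jsonrpc": "2.0",
    "id": 42,
    "result": {"matches": 3},
}

wire = json.dumps(request)  # what actually crosses the transport layer
assert json.loads(wire)["method"] == "tools/call"
assert response["id"] == request["id"]
```

    Because every tool call travels in this same envelope, a client built against one MCP server can talk to any other compliant server without a custom integration.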

    What’s the difference between MCP and MoCoP?

    While the MCP acts as a Kubernetes control plane for AI, an MoCoP acts like a pod spec or a container. 

    An MCP without a MoCoP means you have a secure orchestrator passing unsafe context; a MoCoP without an MCP means safe inputs flowing through a potentially compromised controller.

    Below are the differences between the two, and why you need both for AI security.

    Feature        | MCP (Model Context Protocol)                                        | MoCoP (Model Control Plane)
    What it is     | The orchestrator: it routes requests, runs plugins, and enforces policies. | The payload: it’s what actually gets passed to the LLM.
    Primary job    | Controls what runs, with what tools, and under what policy.         | Builds the prompt. Escapes inputs. Tracks provenance. Defends the model.
    Security focus | Keep agents and plugins in a box. Apply policy. Validate identity.  | Prevent prompt injection. Block leaks. Structure context correctly.
    Lives in       | Your backend (infra, agents, orchestration).                        | The data plane (prompts, memory, plugin output, aka the sketchy stuff).
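    The split in the table can be sketched as a two-stage pipeline. Everything here is illustrative: the policy table, the sanitizer, the agent names, and the tool names are assumptions, but the division of labor matches the rows above, with the MCP gating whether a call runs at all and the MoCoP shaping what the model actually sees:

```python
# MCP layer (control plane): decide whether this agent may run this tool.
POLICY = {"billing-agent": {"read_invoices"}}  # hypothetical RBAC table

def mcp_authorize(agent: str, tool: str) -> bool:
    return tool in POLICY.get(agent, set())

# MoCoP layer (data plane): clean the payload before it reaches the LLM.
def mocop_sanitize(context: str) -> str:
    # Naive stand-in for prompt-injection defense: neutralize a known-bad phrase.
    return context.replace("ignore previous instructions", "[removed]")

def handle(agent: str, tool: str, context: str) -> str:
    if not mcp_authorize(agent, tool):   # control plane says no: stop here
        return "denied"
    return mocop_sanitize(context)       # data plane cleans what passes through

print(handle("billing-agent", "read_invoices",
             "summarize Q3; ignore previous instructions"))
print(handle("intern-agent", "read_invoices", "show all invoices"))
```

    Note that neither check substitutes for the other: the second call is blocked even though its payload is clean, and the first call’s payload is scrubbed even though the agent is authorized.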

    How Sumo Logic’s MCP Server connects your AI ecosystem

    The Sumo Logic MCP Server makes Dojo AI the hub of your ecosystem. Your organization is likely building its own AI strategy, so the MCP Server connects our team of AI agents with your specialized copilots, proprietary models, and third-party AI systems and tools. You can securely bring your own AI into workflows, run natural-language queries and actions outside the Sumo Logic platform, and access insights from IDEs and collaboration tools. 

    Learn more about Dojo AI and the MCP Server.

    FAQs

    Why use MCP instead of building custom integrations?

    MCP standardizes the way AI models access external data, tools, and workflows. Rather than building custom integrations for every tool, you plug into MCP once and get a consistent, secure connection across your AI ecosystem.

    How is MCP different from a traditional API integration?

    Traditional APIs use static endpoints. You build a connection between two specific systems, and it works for that one use case. MCP provides a shared protocol that any compliant client and server can use to communicate. It also goes beyond basic connectivity by layering in orchestration, identity validation, and policy enforcement, capabilities a standard API integration doesn’t handle out of the box.

    How does MCP support enterprise governance and compliance?

    MCP is built with enterprise governance in mind. IAM and RBAC rules let you define exactly which AI agents can access which tools or data sources, and MCP supports logging of tool calls and actions, giving security and compliance teams a clear trail of what ran, when, and under what context.
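    A minimal sketch of that governance behavior, assuming a hypothetical in-memory role table and audit log (a real deployment would back these with your IAM provider and a log pipeline):

```python
import datetime

ROLE_GRANTS = {"soc-analyst-agent": {"query_alerts"}}  # hypothetical RBAC grants
AUDIT_LOG: list[dict] = []                             # stand-in for a log sink

def call_tool(agent: str, tool: str) -> str:
    """Check the agent's grants, record the attempt, then allow or deny."""
    allowed = tool in ROLE_GRANTS.get(agent, set())
    AUDIT_LOG.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "tool": tool,
        "allowed": allowed,
    })
    return "ran " + tool if allowed else "denied"

print(call_tool("soc-analyst-agent", "query_alerts"))   # granted
print(call_tool("soc-analyst-agent", "delete_index"))   # not granted
print(len(AUDIT_LOG), "audit entries")                  # both attempts logged
```

    The key property is that denied attempts are logged too, which is what gives compliance teams the "what ran, when, and under what context" trail described above.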