Model Context Protocol (MCP)

Introduction

Model Context Protocol (MCP) is a protocol designed to simplify the development of applications that use Large Language Models (LLMs) by standardizing the way resources, tools, and prompts are accessed. MCP allows LLM applications to interact with a wide variety of resources and tools through a consistent interface, reducing the complexity of integrating these components.

In this blog post, I would like to discuss why MCP is useful and introduce a few of its key concepts.

Why Model Context Protocol (MCP) Is Useful

LLMs (Large Language Models) consume tokens and generate tokens. LLM applications built upon LLMs take queries from the user, process the queries into tokens, send the tokens to the LLM, receive the generated tokens from the LLM, process the generated tokens into a response to the user, and optionally run some external tools based on the LLM response.

Often, the queries from the user are very simple and do not carry enough context for the LLM to generate the desired response. In this case, given all the resources and tools the LLM application can access, the application will use the LLM to iteratively decide, based on the user query, what resources to collect, what tools to run, and what prompts to use next.

Before Model Context Protocol (MCP) was born, to implement such LLM applications, developers had to carefully study how to collect resources and run tools from different service providers and implement the integrations in the LLM application. Because there can be lots of resources and tools, whose access interfaces can be very different and can change from time to time, it becomes nearly impossible to implement and maintain all the resource collection and tool running logic in the LLM application.

MCP is a protocol that standardizes the resource collection and tool running logic in LLM applications. As a consequence, if resources and tools are served via MCP, LLM applications can access all of them using the same interface, and developers do not have to study the details of how to collect each resource or run each tool. Without MCP, if there are 1000 tools and resources to access, developers have to implement 1000 different access functions in the LLM application. With MCP, developers only need to implement one access function to access all the tools and resources, no matter how many there are.

MCP reduces the burden of LLM application development by transferring it to the MCP service providers, who are responsible for implementing the resource collection and tool running logic. At first glance, it seems that this does not reduce the total development effort. However, with MCP, the resource collection and tool running logic no longer has to be implemented by each LLM application. In the past, if there were 1000 LLM applications, assuming they were all from different companies or individuals who do not share code, the resource collection and tool running logic had to be implemented 1000 times, potentially in different ways. With MCP, the resource collection and tool running logic only has to be implemented and maintained by the MCP service providers, and all the LLM applications can access it (hopefully) without worrying about how it is implemented. This is a huge reduction in the total development effort in the ecosystem.

Following a protocol saves development time and effort across the ecosystem. This is true not only for MCP but also for other protocols that people are more familiar with, such as HTTP and USB.

Suppose there are $M$ services and $N$ applications. Without a shared protocol, the total development effort is $O(M \times N)$, because each application has to implement its own integration with each service. With a protocol, each service implements the protocol once and each application implements a single client, so the total development effort is reduced to $O(M + N)$, a huge reduction. Of course, when $N = 1$, we typically don't need a protocol, because it saves almost no development effort. But this is not the case for LLM applications and many others.
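
For example, with $M = 1000$ services and $N = 1000$ applications, ad hoc integration requires $1000 \times 1000 = 10^{6}$ implementations, whereas a shared protocol requires only $1000 + 1000 = 2000$.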

Model Context Protocol (MCP) Concepts

Using MCP, the LLM application will have one or multiple MCP clients interacting with MCP servers via 1:1 connections. The MCP client can query the MCP server to see what resources and tools are available, provide them as context to the LLM, and let the LLM decide what resources to collect and what tools to run, depending on the user query.
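
To make this concrete, here is a minimal sketch of an LLM application creating an MCP client and connecting to an MCP server over standard I/O, using the official TypeScript SDK (@modelcontextprotocol/sdk). The server command node server.js is a placeholder, and the exact method signatures may differ slightly between SDK versions.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn a (placeholder) MCP server process and connect to it over stdio.
const transport = new StdioClientTransport({
  command: "node",
  args: ["server.js"],
});

const client = new Client(
  { name: "example-client", version: "1.0.0" },
  { capabilities: {} }
);

await client.connect(transport);

// Discover what the server offers. These listings can be provided to the
// LLM as context so that it can decide what to collect, run, or use.
const tools = await client.listTools();
const resources = await client.listResources();
const prompts = await client.listPrompts();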

The three most important concepts in MCP are probably Resources, Tools, and Prompts.

Resources

Resources are identified using URIs that follow this format:

[protocol]://[host]/[path]

For example:

file:///home/user/documents/report.pdf
postgres://database/customers/schema
screen://localhost/display1

The protocol and path structure are defined by the MCP server implementation. Servers can define their own custom URI schemes.

The MCP client can query the MCP server to get a list of available resource URIs, or the way to construct such URIs, together with descriptions and other metadata of the resources, which the LLM can use to decide which resources to collect.
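
As an illustration, continuing the client sketch above, listing and reading resources might look like the following. The report.pdf URI is just the example URI from above, and the exact SDK API may differ between versions.

// `client` is the connected MCP client from the sketch above.
// List the resources the server exposes. Each entry typically carries a URI,
// a human-readable name, an optional description, and an optional MIME type.
const { resources } = await client.listResources();

// Read the content of a specific resource by its URI.
const { contents } = await client.readResource({
  uri: "file:///home/user/documents/report.pdf",
});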

Tools

Each tool is defined with the following structure:

{
  name: string;                // Unique identifier for the tool
  description?: string;        // Human-readable description
  inputSchema: {               // JSON Schema for the tool's parameters
    type: "object",
    properties: { ... }        // Tool-specific parameters
  },
  annotations?: {              // Optional hints about tool behavior
    title?: string;            // Human-readable title for the tool
    readOnlyHint?: boolean;    // If true, the tool does not modify its environment
    destructiveHint?: boolean; // If true, the tool may perform destructive updates
    idempotentHint?: boolean;  // If true, repeated calls with same args have no additional effect
    openWorldHint?: boolean;   // If true, tool interacts with external entities
  }
}

Like resources, tools are identified by unique names and can include descriptions to guide their usage. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems.
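
As a concrete example, a hypothetical weather-lookup tool definition following this structure might look like:

// A hypothetical tool definition; "get_weather" and its parameters are
// made up for illustration and are not part of the MCP specification.
const getWeatherTool = {
  name: "get_weather",
  description: "Get the current weather for a given city",
  inputSchema: {
    type: "object",
    properties: {
      city: { type: "string", description: "Name of the city, e.g., 'Tokyo'" },
    },
    required: ["city"],
  },
  annotations: {
    title: "Get Weather",
    readOnlyHint: true,  // the tool only reads data and modifies nothing
    openWorldHint: true, // the tool talks to an external weather service
  },
};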

Prompts

Each prompt is defined with:

{
  name: string;              // Unique identifier for the prompt
  description?: string;      // Human-readable description
  arguments?: [              // Optional list of arguments
    {
      name: string;          // Argument identifier
      description?: string;  // Argument description
      required?: boolean;    // Whether argument is required
    }
  ]
}

Prompt engineering is a crucial factor for LLMs to produce desired responses. However, it does not scale as the number of resources and tools increases. Just like with resources and tools, we want to avoid having each LLM application spend a huge amount of time and effort studying how to construct effective prompts for each resource and tool. Therefore, MCP servers can also provide prompt templates for their resources and tools, with designated arguments that MCP clients fill in. With the suggested prompts from MCP servers, LLMs can use the prompts as they are to generate the desired responses.
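
For example, a hypothetical prompt that asks the LLM to summarize a resource could be defined by an MCP server as follows; the name and arguments are made up for illustration.

// A hypothetical prompt definition following the structure above.
// The MCP client fills in the arguments before the prompt is used.
const summarizeResourcePrompt = {
  name: "summarize_resource",
  description: "Summarize the content of a resource for the user",
  arguments: [
    {
      name: "uri",
      description: "URI of the resource to summarize",
      required: true,
    },
    {
      name: "style",
      description: "Summary style, e.g., 'bullet points' or 'one paragraph'",
      required: false,
    },
  ],
};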

Conclusions

Model Context Protocol (MCP) simplifies the development of LLM applications by standardizing the way resources, tools, and prompts are accessed and used. With MCP, the LLM application no longer has to carry the domain knowledge of how to collect resources, run tools, and construct prompts for the LLM.

Author: Lei Mao
Posted on: 07-13-2025
Updated on: 07-13-2025
