Demystifying MCP: What Is It and How Do I Get Started?

The Model Context Protocol (MCP) is emerging as a foundational pattern for how large language models (LLMs) interact with tools. At its core, MCP is not magic—it is simply a new protocol, much like REST or SOAP before it—but one specifically tailored for the era of AI agents and assistant-based computing.

Where REST was designed for humans and machines to exchange structured data via well-defined endpoints, MCP is designed to let LLMs reason over tool capabilities, decide when and how to invoke them, and understand the shape of the response they’ll receive. It formalizes function-level access to real-world services, whether you’re querying the weather or updating a CRM.

The POC

Part 1: Creating the Netlify MCP Wrapper

1. Define the MCP Manifest

Here is the functions.json manifest that exposes three tools:

  • get_weather_by_location: current weather for a city or coordinates
  • get_forecast_by_location: daily or hourly forecast
  • compare_weather_between_locations: compares multiple locations

Each entry follows the OpenAI tool spec format, with parameter schemas designed for LLM parsing.

[
  {
    "name": "get_weather_by_location",
    "description": "Returns the current weather for a given city or coordinates.",
    "parameters": {
      "type": "object",
      "properties": {
        "latitude": { "type": "number", "description": "Latitude of the location" },
        "longitude": { "type": "number", "description": "Longitude of the location" },
        "locationName": { "type": "string", "description": "Name of the location (e.g. Tokyo)" }
      },
      "required": ["latitude", "longitude", "locationName"]
    }
  },
  {
    "name": "get_forecast_by_location",
    "description": "Returns the daily or hourly forecast for a given city or coordinates.",
    "parameters": {
      "type": "object",
      "properties": {
        "latitude": { "type": "number", "description": "Latitude of the location" },
        "longitude": { "type": "number", "description": "Longitude of the location" },
        "locationName": { "type": "string", "description": "Name of the location" },
        "forecastType": { "type": "string", "enum": ["daily", "hourly"], "description": "Type of forecast to return" }
      },
      "required": ["latitude", "longitude", "locationName"]
    }
  },
  {
    "name": "compare_weather_between_locations",
    "description": "Compares current weather between multiple locations and identifies which is hotter and windier.",
    "parameters": {
      "type": "object",
      "properties": {
        "locations": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "latitude": { "type": "number" },
              "longitude": { "type": "number" },
              "locationName": { "type": "string" }
            },
            "required": ["latitude", "longitude", "locationName"]
          }
        }
      },
      "required": ["locations"]
    }
  }
]

2. Build the Netlify Functions

For each endpoint, I created a Netlify function. Here’s an outline of get-weather.js:

export async function handler(event) {
  const { latitude, longitude, locationName } = JSON.parse(event.body || '{}');

  // Check for undefined rather than falsiness, so 0° coordinates stay valid
  if (latitude === undefined || longitude === undefined || !locationName) {
    return { statusCode: 400, body: JSON.stringify({ error: 'Missing parameters' }) };
  }

  const url = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current_weather=true`;
  const res = await fetch(url);
  const data = (await res.json()).current_weather;

  // Reshape the raw API response into a consistent, model-friendly record
  const mcpRecord = {
    id: `weather:${locationName.toLowerCase()}-${data.time}`,
    type: 'entity/weather-observation',
    location: { name: locationName, latitude, longitude },
    timestamp: new Date(data.time).toISOString(),
    attributes: {
      temperature_celsius: data.temperature,
      windspeed_kph: data.windspeed,
      wind_direction_deg: data.winddirection,
      weather_code: data.weathercode
    },
    source: url
  };

  return { statusCode: 200, body: JSON.stringify(mcpRecord) };
}
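
The other two functions follow the same pattern. As a hedged sketch, compare-weather.js might look like this (the field names mirror the manifest above; error handling is trimmed for brevity):

// compare-weather.js — sketch only; implements compare_weather_between_locations
export async function handler(event) {
  const { locations } = JSON.parse(event.body || '{}');

  if (!Array.isArray(locations) || locations.length < 2) {
    return { statusCode: 400, body: JSON.stringify({ error: 'Provide at least two locations' }) };
  }

  // Fetch current conditions for every location in parallel
  const observations = await Promise.all(locations.map(async ({ latitude, longitude, locationName }) => {
    const url = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current_weather=true`;
    const data = (await (await fetch(url)).json()).current_weather;
    return { locationName, temperature_celsius: data.temperature, windspeed_kph: data.windspeed };
  }));

  // Answer exactly what the tool description promises: which is hotter, which is windier
  const hotter = observations.reduce((a, b) => (b.temperature_celsius > a.temperature_celsius ? b : a));
  const windier = observations.reduce((a, b) => (b.windspeed_kph > a.windspeed_kph ? b : a));

  return {
    statusCode: 200,
    body: JSON.stringify({
      type: 'relation/comparison',
      observations,
      summary: { hotter: hotter.locationName, windier: windier.locationName }
    })
  };
}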

Part 2: Integrating into n8n as an LLM Toolchain

The goal was to allow GPT-4 (via the OpenAI Chat node) to reason over which MCP tool to call and to execute it via httpRequestTool nodes in n8n. Here are the workflow stages:

1. Webhook Trigger

A generic webhook node accepts a JSON body including a prompt. This starts the conversation.
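
For example, the workflow could be triggered with a body like this (the prompt text is purely illustrative):

{
  "prompt": "Is it currently hotter in Tokyo or in Oslo, and which is windier?"
}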

2. Retrieve MCP Manifest

An HTTP Request node fetches our functions.json from Netlify and passes it to the LLM.
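
One way to hand the manifest to the agent's prompt is a small Code node between the HTTP Request node and the agent. A minimal sketch, assuming the HTTP Request node outputs the parsed functions.json as its item:

// n8n Code node ("Run Once for All Items"): stringify the fetched manifest
// so the agent's system message can interpolate {{ $json.manifest }}
return [{ json: { manifest: JSON.stringify($input.first().json) } }];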

3. AI Agent with Tool Access

I set up n8n’s AI Agent node, referencing GPT-4 with a system message:

You have access to these MCP tools: {{ $json.manifest }}

4. Define Tool Nodes

Each of our MCP endpoints was added as a separate httpRequestTool node:

  • Weather: get current weather
  • Forecast: get forecast
  • Compare Weather: compare multiple cities

Each tool uses:

  • Method: POST
  • URL: respective Netlify function endpoint
  • JSON schema matching the MCP manifest
  • Auto-generated body via $fromAI() to support function-calling mode (see the sketch below)
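
For instance, the Weather tool's JSON body can let the model fill in every parameter through $fromAI(); the parameter names mirror the manifest, though the exact expressions here are a sketch:

{
  "latitude": {{ $fromAI('latitude', 'Latitude of the location', 'number') }},
  "longitude": {{ $fromAI('longitude', 'Longitude of the location', 'number') }},
  "locationName": "{{ $fromAI('locationName', 'Name of the location', 'string') }}"
}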

5. Connect Tools to Agent

Each tool node is wired to the agent through an ai_tool connection, allowing GPT-4 to invoke it during a multi-step reasoning process.

6. Return the Response

Finally, a Respond to Webhook node outputs the assistant’s final answer (either raw JSON or a summarized string).
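
End to end, any client can now trigger the workflow. A quick sketch in an ES module (the webhook URL is hypothetical; use whatever production URL n8n assigns you):

// Call the finished workflow; the URL below is a placeholder
const res = await fetch('https://your-n8n-host/webhook/mcp-weather', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ prompt: 'Compare the weather in Tokyo and Oslo' })
});
console.log(await res.json());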


Lessons Learned

  • Tool schemas must match the manifest exactly, or the agent will fail with "input did not match expected schema".
  • Registering multiple tools unlocks more flexibility, but increases risk of model confusion—clear naming and descriptions help.
  • You can serve just the functions.json if your tools map directly to existing APIs.
  • n8n makes it easy to integrate GPT-4’s function-calling with zero-code tool execution.

Three Ways to Build MCP

There are three primary paths to creating MCP endpoints:

1. Wrapping Existing REST APIs

This is the path most of us will take first. You define an MCP function schema (name, parameters, description) that maps to an existing REST endpoint. Then, you build a thin wrapper that:

  • Accepts JSON-formatted arguments
  • Calls the real REST API behind the scenes
  • Returns the output in a structured, model-friendly response

Example: this post demonstrates the pattern with the Open-Meteo weather API. I wrapped it using Netlify Functions and defined three MCP tools: get_weather_by_location, get_forecast_by_location, and compare_weather_between_locations. These tools give LLMs clear affordances for querying live weather data.

2. MCP-Native Applications

You can also design your application from the ground up to expose MCP-style functions. In this model, your server or microservice is built specifically for LLM use:

  • Every capability is exposed as a named function with clear JSON schemas
  • Responses follow consistent modular patterns (e.g. entity/observation, relation/comparison)
  • Designed for model predictability, not just REST idioms

These systems behave more like callable libraries than resource-driven APIs.
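
As a minimal sketch of that shape in plain Node (the registry layout, route names, and the echo_location capability are illustrative assumptions, not a standard):

// Sketch of an "MCP-native" service: every capability is a named function
// with a schema and a handler, and the manifest is generated from the registry.
import http from 'node:http';

const tools = {
  echo_location: {
    description: 'Echoes back a location record (placeholder capability).',
    parameters: {
      type: 'object',
      properties: { locationName: { type: 'string' } },
      required: ['locationName']
    },
    handler: async ({ locationName }) => ({ type: 'entity/location', name: locationName })
  }
};

http.createServer(async (req, res) => {
  res.setHeader('Content-Type', 'application/json');

  // The manifest the model reads is derived from the registry itself
  if (req.method === 'GET' && req.url === '/functions.json') {
    const manifest = Object.entries(tools).map(([name, t]) =>
      ({ name, description: t.description, parameters: t.parameters }));
    return res.end(JSON.stringify(manifest));
  }

  // POST /call/<name> executes the named function with a JSON argument object
  const match = req.method === 'POST' && req.url.match(/^\/call\/(\w+)$/);
  if (match && tools[match[1]]) {
    let body = '';
    for await (const chunk of req) body += chunk;
    return res.end(JSON.stringify(await tools[match[1]].handler(JSON.parse(body || '{}'))));
  }

  res.statusCode = 404;
  res.end(JSON.stringify({ error: 'Unknown tool' }));
}).listen(3000);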

3. Specialized MCP Servers

A third pattern is exemplified by tools like Azure SQL MCP Server, where an existing database or enterprise system is exposed through a dedicated MCP-compatible interface. These servers:

  • Translate LLM tool calls into structured queries or commands
  • Enforce permissions and constraints
  • Return results in structured, language-model-friendly forms

In this mode, your legacy system becomes a controllable environment for AI, without the need to rewrite core business logic.
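
A hedged sketch of that translation layer (the table, columns, and row cap are hypothetical, and bind-parameter syntax varies by driver; the point is the allow-list plus parameterization):

// Translate a model's tool call into a constrained, parameterized query.
// The allow-list enforces which tables and columns the model may touch.
const ALLOWED = { customers: ['id', 'name', 'region'] };

function buildQuery({ table, columns, limit = 50 }) {
  if (!ALLOWED[table]) throw new Error(`Table not permitted: ${table}`);
  const cols = columns.filter((c) => ALLOWED[table].includes(c));
  if (cols.length === 0) throw new Error('No permitted columns requested');
  // Identifiers come only from the allow-list; values go through bind parameters
  return { text: `SELECT ${cols.join(', ')} FROM ${table} LIMIT $1`, values: [Math.min(limit, 200)] };
}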


What Does MCP Enable?

The benefit of MCP is clarity. It gives LLMs a way to know what tools exist, how they behave, and how to use them correctly.

An LLM can:

  • Read a manifest of tools (like functions.json)
  • Decide which one to use based on context
  • Generate valid input
  • Interpret structured responses

This turns AI assistants into more powerful and consistent agents, capable of completing tasks without needing prompt hacks or simulated form-filling.
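
Outside of n8n, that loop takes only a few lines. Here is a minimal sketch with the OpenAI Node SDK, run as an ES module (the manifest URL and the tool-to-endpoint mapping are illustrative):

import OpenAI from 'openai';

// 1. Read the manifest of tools (URL is a placeholder)
const manifest = await (await fetch('https://example.netlify.app/functions.json')).json();

// Illustrative mapping from tool name to the Netlify function that implements it
const endpoints = {
  get_weather_by_location: 'https://example.netlify.app/.netlify/functions/get-weather'
};

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const messages = [{ role: 'user', content: 'What is the weather in Tokyo right now?' }];

// 2. Let the model decide which tool to use and generate valid input
const first = await openai.chat.completions.create({
  model: 'gpt-4',
  messages,
  tools: manifest.map((fn) => ({ type: 'function', function: fn }))
});

const call = first.choices[0].message.tool_calls?.[0];
if (call) {
  // 3. Execute the chosen tool with the model-generated arguments
  const result = await fetch(endpoints[call.function.name], {
    method: 'POST',
    body: call.function.arguments
  });

  // 4. Hand the structured response back so the model can interpret it
  messages.push(first.choices[0].message,
    { role: 'tool', tool_call_id: call.id, content: await result.text() });
  const final = await openai.chat.completions.create({ model: 'gpt-4', messages });
  console.log(final.choices[0].message.content);
}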


Deploying Your Own MCP Stack

In this example, I used Netlify Functions to wrap the Open-Meteo API, and connected it to n8n with OpenAI function calling. But you could also:

  • Use Vercel Edge Functions, Cloudflare Workers, or AWS Lambda
  • Expose MCP endpoints directly via your app’s own SDK
  • Serve just the functions.json manifest if your existing RESTful API is sufficient for read-only access

MCP doesn’t require a complex architecture. It just requires that you think in terms of function affordances—what the model can do, and how it can do it.


Closing Thoughts

MCP is not just another protocol—it is the beginning of a new interface paradigm, one where models don't just consume APIs but interact with systems.

By designing clearly named, parameterized, and purpose-driven functions, you’re not just building backend endpoints—you’re teaching your assistant how to help.

And that makes all the difference.