How To Explain MCP (Model Context Protocol) To An Absolute Idiot
What is MCP (Model Context Protocol)? And why is it important?
Don’t mind the title, you’re not an idiot. Or maybe you are, I’ll never know.
But I know AI has already gone through a few stages, and each one has tried to fix the weaknesses of the one before. MCP (Model Context Protocol) is the latest attempt. And here’s the simple explanation.
Stage 1: Raw LLMs
In the beginning, large language models were basically just advanced predictors. They could predict words and answers fluently because of how much text they’d digested. But they also hallucinated a lot and had no awareness of anything beyond their training data.
When ChatGPT was first launched in November 2022, the knowledge cutoff for the model was September 2021, meaning it could not access information or events after that date. Now that I think of it, it's kind of ridiculous.
But that was an era.
Stage 2: LLMs with Context
Then came the era of pairing LLMs with external sources.
Perplexity bolted search engines onto models.
Windsurf and Cursor bolted models onto our codebases.
Suddenly, models weren’t just guessing from memory; they had fresh context to work with.
This fixed a lot, but came with a new problem: integrations.
Because every app depends, to some extent, on other apps for extra context, each one now has to build its own custom connectors. Every new model-to-tool pairing means another one-off integration to build and maintain.
Stage 3: The MCP Approach
This is where the Model Context Protocol (MCP) comes in. Instead of everyone building one-off bridges, MCP sets a common way for models and tools to communicate.
A Model (ChatGPT, Claude, etc.), the brain, plugs into MCP.
Tools (files, APIs, databases, whatever) plug into MCP.
The Protocol, the rules + language, ensures they can understand each other.
It’s like standardizing the handshake so every tool doesn’t need to reinvent the greeting.
To put it more simply: without context, models are blind. Without a protocol, there are no rules, and where there are no rules, there is no sin. MCP gives you both.
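To make the “tools plug into MCP” part concrete, here’s a minimal sketch of what the tool side can look like, using the official MCP Python SDK. The server name and the tool itself are made up for illustration; real servers expose things like file access, database queries, or design specs.

```python
# Minimal MCP server sketch using the official Python SDK (the `mcp` package).
# The tool below is a toy example for illustration only.
from mcp.server.fastmcp import FastMCP

# Create a named server. Any MCP-speaking client (Claude Desktop, Windsurf,
# Cursor, etc.) can connect to it and discover its tools.
mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Serve over stdio, the usual transport for locally-run servers.
    mcp.run()
```

The point isn’t the tool, it’s that the server describes its tools (names, parameters, docs) in one standard format, so any model-side client can discover and call them without a custom connector.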
A Personal Example
I was working in Windsurf. Normally, if I wanted to check whether my implementation matched the design in Figma, I’d have to open Figma, eyeball the frame, compare it with the code, and manually spot the differences.
But with Figma’s MCP server connected, I just selected the frame in Figma and asked Windsurf:
“Check if my implementation is the same as the specs in the selected frame.”
The AI could directly pull the details from Figma through MCP and compare them with my code.
That’s the kind of workflow MCP makes possible. And in that workflow, the model (Claude inside Windsurf) talked to the context (my Figma design specs) using the protocol (MCP).
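Under the hood, the host app (Windsurf, in my case) acts as an MCP client. Here’s a rough sketch of that conversation using the same Python SDK; the server command and tool name are placeholders carried over from the sketch above, not Figma’s actual server.

```python
# Rough sketch of the client side: roughly what a host app does behind the
# scenes. The server command and tool name are placeholders, not Figma's
# real MCP server.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server as a subprocess and talk to it over stdio.
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Discover what the server offers (the "context" part).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # 2. Call a tool with structured arguments (the "protocol" part).
            result = await session.call_tool("word_count", {"text": "hello MCP world"})
            print(result.content)

asyncio.run(main())
```

The model never has to know how the server is implemented; it just sees a list of tools it can call, and the protocol handles the rest.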
But… Let’s Be Honest
Before we get carried away, here’s the truth:
Most MCP implementations right now are trash. They break often, rarely work exactly as intended, and feel clunky. Even Figma’s MCP server isn’t perfect.
It’s very early days; Anthropic only introduced MCP in November 2024.
Like any new standard, it will take time to mature, gain adoption, and “just work.”
It’s also not yet considered ‘safe’ to a reasonable standard, so you don’t want to do dumb stuff like connecting your production database to a random MCP server.
But we’ll get there.
The Takeaway
MCP isn’t perfect yet. But the vision is powerful.
Just like USB-C eventually replaced the drawer of random chargers, MCP could replace today’s integration chaos.
So if you hear someone mention MCP and it sounds confusing, remember this:
👉 The model is the brain.
👉 The context is the stuff it needs.
👉 The protocol is the language that lets them chat.
That’s it. Have a nice day, or don’t.