How To Explain Context Engineering To A 9-Year-Old
Hint: This post was inspired by Anthropic’s guide to Effective Context Engineering for AI Agents.
I’ve always believed the number one problem of AI right now is context.
Let me explain.
Imagine you have a personal assistant. You say, “Hiiii, book me a flight”. That sounds simple, right? But your assistant can’t just do some abracadabra magic and book a random flight. She needs some important details first: Where are you going? When? Round trip or one way?
That extra info that helps her do her job properly is called context.
Now picture this: You plan to tell your friend a long story. You can hold a few key parts of the story in your head, but not everything. If you try to juggle too much, you can start forgetting the important stuff. AI works the same way.
AI agents, like ChatGPT or Claude, have a limited “working memory”, also known as a “context window”. They can only “pay attention” to a certain amount of information at once, usually measured in tokens (tiny pieces of text). If you cram in too much, they get overwhelmed and start making mistakes. That’s called context rot: when too much information makes everything blurrier instead of clearer.
So What’s “Context Engineering”?
Context engineering is the art of choosing what information to give the AI to make it helpful, without overwhelming it.
It comes after prompt engineering.
Prompt engineering is about how you ask the question.
Context engineering is about what info the AI should have access to while answering the question.
The difference is like this:
Prompt engineering is saying, “Hey, robot, make me Nigerian jollof rice.”
Context engineering is ensuring the robot knows all the ingredients you have in the kitchen.
Why Too Much Information Can Be a Bad Thing
Let’s go back to that personal assistant analogy.
She needs to know where and when you’re flying. That’s helpful context.
But she doesn’t need to know why you’re going, who you’ll meet, or what you’re wearing to the airport. That’s just noise, and it makes it harder for her to remember the important stuff.
I remember an old rule from philosophy about effective human communication: Grice’s Maxim of Quantity. It says:
“Give as much information as is needed, but no more.”
In other words: Be helpful, but don’t overload your listener.
That rule applies beautifully to AI:
The more stuff you dump into the AI’s memory, the worse it gets at focusing.
Every extra word costs attention.
You don’t want everything; you want the right things.
The Building Blocks of Good Context
If you’re designing an AI agent or workflow, here’s what you need to think about:
1. System Prompts
These are the rules or instructions you give the AI.
You want them to be:
Clear, but not rigid.
Structured, so the AI can “read” them easily (e.g., with headings like ## Tools or ## Output Format).
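Here’s a small sketch of what that structure might look like in code. The assistant persona, the section names, and the rules inside them are just illustrative, and the message format loosely follows the role/content style many AI chat APIs use:

```python
# A sketch of a structured system prompt. The Markdown-style headings
# mirror the ones mentioned above; the actual rules are made up.
SYSTEM_PROMPT = """You are a helpful travel assistant.

## Tools
- search_flights: find flights between two cities on a date
- book_flight: book a flight by its ID

## Output Format
Reply with one short confirmation sentence, then the booking
details as a bulleted list.
"""

def build_messages(user_request: str) -> list[dict]:
    """Pair the clear, structured system prompt with the user's request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_request},
    ]
```

Notice the prompt is short and scannable: the AI can find the rule it needs without wading through paragraphs.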
2. Tools
These are the abilities the AI can use, like searching the web, doing math, or fetching a file.
You want them to be:
Clear in what they do.
Not too similar to each other (or the AI will get confused).
Designed to return useful info, not massive data dumps.
Basically: Give it a hammer for nails, not a toolbox full of random kitchen utensils.
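In code, “clear, distinct tools” might look like this sketch. The tool names, descriptions, and parameter schemas are made up, loosely modelled on the JSON-style tool definitions many AI APIs use:

```python
# Illustrative tool definitions: each one has a distinct job, a clear
# description, and is meant to return a small, focused result.
TOOLS = [
    {
        "name": "search_flights",
        "description": "Find flights between two cities on a given date. "
                       "Returns at most 5 matches, not the whole schedule.",
        "parameters": {"origin": "string", "destination": "string", "date": "string"},
    },
    {
        "name": "get_weather",
        "description": "Get the current weather for one city.",
        "parameters": {"city": "string"},
    },
]

def tool_names_are_distinct(tools: list[dict]) -> bool:
    """Sanity check: no two tools share a name, so the AI can't mix them up."""
    names = [t["name"] for t in tools]
    return len(names) == len(set(names))
```

The “at most 5 matches” note in the description is the hammer-not-toolbox idea in action: the tool promises a focused result instead of dumping everything back into the context window.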
3. Examples (Few-shot learning)
Instead of listing rules that can be overwhelming, show a few examples of the behaviour you want. A handful of good examples often teaches the AI more than a wall of instructions, because an example carries context that a rule can only describe.
I’ve experienced this firsthand. I was trying Claude for an SEO article, and I gave it about 40 million rules for how I wanted the article written. It just kept going off the rails, like a teenager in a Ferrari with no brakes. Then I gave it just two really good articles as examples, and it did a much better job.
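A few-shot prompt is simple to build: put the examples first, then the new question. This sketch uses made-up examples, but the shape is the point:

```python
# A sketch of few-shot prompting: instead of a long rulebook, the prompt
# leads with two short examples of the desired style. Examples are made up.
EXAMPLES = [
    ("How do I boil an egg?",
     "Short answer: 7 minutes in boiling water, then cool under the tap."),
    ("How do I fold a shirt?",
     "Short answer: lay it flat, fold in the sleeves, fold in half."),
]

def few_shot_prompt(question: str) -> str:
    """Build a prompt that shows the wanted behaviour before asking."""
    parts = ["Answer in the same style as these examples:\n"]
    for q, a in EXAMPLES:
        parts.append(f"Q: {q}\nA: {a}\n")
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)
```

Two good examples take far fewer tokens than forty rules, and they show the style instead of describing it.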
4. Message History
If your AI is working on something ongoing (like chatting or managing a task), it needs to remember what happened before.
But message history takes up space. So:
Keep only the important bits.
Summarise older stuff.
Let go of the noise.
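Those three bullets can be sketched as one small function. Here, word count stands in for real token counting, and the “summary” is a crude placeholder; a real system would ask the model itself to summarise the older messages:

```python
# A sketch of trimming message history to fit a budget. Most recent
# messages are kept; older ones collapse into a one-line placeholder.
def trim_history(messages: list[str], budget_words: int) -> list[str]:
    kept, used = [], 0
    # Walk backwards: the most recent messages matter most.
    for msg in reversed(messages):
        cost = len(msg.split())  # crude stand-in for token counting
        if used + cost > budget_words:
            break
        kept.append(msg)
        used += cost
    dropped = len(messages) - len(kept)
    if dropped:
        # Let go of the noise, but leave a breadcrumb of what was there.
        kept.append(f"[summary of {dropped} earlier messages]")
    return list(reversed(kept))
```

So with a 6-word budget, a 3-message history keeps the two newest messages and replaces the oldest with the summary line.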
Putting It All Together: The Golden Rule
Here’s the key question to ask when building or prompting an AI:
What’s the smallest set of information that still lets my AI do what I want?
As AI keeps evolving, it might need less hand-holding and have a longer context window, but good context will always matter.
Because in the end, AI is just like us:
It works better when it knows what it needs, and nothing more.