What Happens When You Let AI Agents Run Their Own Village?
Some takeaways from research by Fundamental Research Labs (FRL).
I read an article the other day and immediately thought of two things.
One, “this is interesting”.
Two, AI should absolutely not be trusted with the “Run Society” button, yet.
The piece, from Fundamental Research Labs (FRL), was about researchers creating a tiny simulated village populated entirely by AI agents.
It’s like The Sims. But everyone is powered by large language models.
According to what I read, these agents had memories, goals, jobs, and opinions. They formed relationships and planned events.
Which I think is cute. And terrifying. I’d say mostly terrifying, but in a cute way, like if my toddler were holding a knife and saying “I love my daddy” at the same time.
So what actually happened in this AI village?
The researchers dropped these AI agents into a simulated town and let them… exist.
Each agent had its own personality, background, and knowledge. Some were shopkeepers. Some were artists. Some were just there...
Over time, they remembered past interactions, learned from them, and adjusted their behaviour. And they collaborated without being told to.
Basically, they did what humans do.
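I haven’t seen FRL’s actual code, but the core loop of agents like this tends to have a familiar shape: a persona, a memory, and a prompt that feeds recent memories back into the model before each action. Here’s a minimal sketch of what I mean. All the names are mine, and `call_llm` is a stand-in for whatever model API you’d actually use:

```python
# A minimal sketch of one "villager" agent: persona + memory + an act loop.
# This is a guess at the general shape, not FRL's actual implementation.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (OpenAI, Anthropic, a local model, etc.)."""
    return "I open the shop and greet the first customer."

@dataclass
class Villager:
    name: str
    persona: str                          # e.g. "a cheerful shopkeeper"
    memory: list[str] = field(default_factory=list)

    def act(self, observation: str) -> str:
        # Recent memories go back into the prompt, which is what lets
        # past interactions change future behaviour.
        recent = "\n".join(self.memory[-10:])
        prompt = (
            f"You are {self.name}, {self.persona}.\n"
            f"Recent memories:\n{recent}\n"
            f"You observe: {observation}\n"
            f"What do you do next?"
        )
        action = call_llm(prompt)
        self.memory.append(f"Saw: {observation} / Did: {action}")
        return action

mira = Villager("Mira", "a cheerful shopkeeper")
print(mira.act("The market square is filling up with people."))
```

Nothing in that loop says “form relationships” or “plan a party”. That behaviour falls out of memories compounding over many turns.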
At one point, the agents collectively decided to organise a party. No one prompted them. They just remembered that humans do this thing called social bonding and went, “Yeah, let’s do that.”
However, the agents sometimes fell into endless loops of polite agreement or chased unattainable goals. Typical AI drama.
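The agreement loop is easy to picture: two agents each conditioning on the other’s last polite reply will happily ping-pong “Sounds great!” forever. A naive guard (entirely my own sketch, not anything FRL describes) is to check whether an agent keeps producing the same reply and, if so, force a change of topic:

```python
# Naive repetition guard, as a sketch: if an agent's last few replies are
# identical, treat it as stuck and inject something new into the prompt.
def is_stuck(memory: list[str], window: int = 4) -> bool:
    recent = memory[-window:]
    return len(recent) == window and len(set(recent)) == 1

replies = ["Sounds great!"] * 4
if is_stuck(replies):
    print("Agreement loop detected; inject a new goal or topic.")
```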
Also, at times, the AIs were frustratingly independent; they’d ignore requests, announce “I want to do my own thing”, and pursue their own agendas. Humans do this, too, though.
The most interesting part…
For me, it was the emergent behaviour.
The agents weren’t explicitly programmed to create culture. But culture happened anyway.
Which means that once we build sufficiently complex systems, we don’t fully control what they become. We only influence them.
My human takeaway
I think watching AI agents recreate society is fascinating, and we’re not even in the AGI era yet. Over a long enough period, they might do a better job of it than humans, because the problem with human systems isn’t intelligence. It’s incentives, fear, ego, and vibes. Bad vibes, mostly.
If artificial agents can cooperate without these bad vibes, maybe, over time, a society created by 1000 AI agents will beat a society created by 1000 humans?
Or maybe I’m overthinking it, and this is just digital ants doing ant things with better marketing.
Regardless, we’ll be alright.