The Risks of AGI

The Post-Singularity World

February 16, 2026

What is 'AGI'? An 'Artificial General Intelligence': something that is better at anything than any human. It's a sloppy definition. Arguably any 'company' or 'nation' satisfies this definition as well, since we can think of these collective entities as having skills that go beyond those of any human.

When applied to AI, we mean specifically an artificially created system that possesses that same level of intelligence and cognitive power. For decades, people have been afraid of such AGI-like systems taking over. It is a common theme in science fiction, and it's a genuine concern among AI researchers.

But the way most people imagine this scenario is based on 20th century assumptions that are becoming increasingly outdated.

Most of the fear and fantasy around AGI takeover has been built around a very specific scenario: a single superintelligent system emerges in a lab, in a world that otherwise looks like the late 20th century, and it has a massive cognitive advantage compared to every other entity on the planet. In that scenario, the AGI can outthink everyone, outmanoeuvre every institution, and effectively take over because nothing else comes close to it.

But that's not the reality of the 21st century.

What we're actually seeing is a world that is rapidly being populated by AI agents. Not one superintelligence, but millions of AI systems of varying capabilities, operating across different domains. Most of them aren't very competent (yet). Some are somewhat competent in very niche domains. But most importantly, there are a lot of them, and the ecosystem is growing fast.

This creates a fundamentally different dynamic for any future AGI system. Instead of emerging into a world populated purely by human opponents, it would emerge into a world that's already filled with other AI systems. And this gives such a hypothetical AGI two important limitations:

First, competition. Any new AGI system would have to compete with many other AI systems that, while individually less powerful, are numerous and might be collectively capable. When an AGI system wants to hack into the power grid or take over financial institutions, it becomes much harder to do so when the cybersecurity of those institutions is partially delegated to numerous specialised AI systems. It becomes a much harder game to dominate. Even if an AGI system is more 'intelligent' than all these smaller AI systems combined, together they may create such a defensive hurdle that the more intelligent system doesn't have enough resources to outmanoeuvre them.

This brings us to the second point: resources. For any AGI system to outperform all humans and all other AI systems, it requires an enormous amount of computational resources. The advantage that any AGI would have would essentially depend on the percentage of global resources it has access to. But in a world where compute is in high demand from millions of other AI systems, it's not easy to get access to a significant fraction of the world's compute resources.

Does this mean we shouldn't worry at all? No. We should absolutely be careful and thoughtful about how we develop increasingly powerful AI systems.

But we also shouldn't forget that we have been living in a world populated with other 'superintelligences', like 'organisations' and 'nations', for over 5,000 years: collective entities with skills and resources that go beyond those of any human.

Throughout this time, humanity has continually struggled against institutions that became so big and powerful that they dominated the entire socioeconomic landscape of their times.

The development of our history has been the gradual and continuous construction of a more complex society (a more complex ecosystem of institutions), with feedback systems that restrict any specific institution from becoming too powerful.

Many of the laws and regulatory systems that we created in the 20th century to protect our democracies and economies against extortionist monopolies will directly apply to any AI-first company or larger AGI system.

And the same political process of setting up checks and balances between different institutions will likely continue throughout the 21st century, and will be the way by which we safeguard our world against an AGI system that can hoard significant fractions of the world's compute resources.

Continue reading: AI Alignment