AI-driven Companies
The Post-Singularity World
February 16, 2026
What does this AI-driven transition actually look like in practice? Let's start with what's already happening.
The first domain where we see AI integrating itself into every layer of the business is information processing. Any information process that can be reasonably standardised can be run by AI agents. We're already seeing companies where many information processing tasks (customer support, content generation, data analysis, report writing) are done almost entirely by AI, with humans in supervisory roles.
As this accelerates, more aspects of information processing companies will become completely AI-driven. Not just individual tasks, but entire workflows. The human role will shift from doing the work to defining what work should be done and evaluating whether it was done well.
Over the next years we will start to see AI-first companies, where the entire operations (marketing, product maintenance, customer support, bookkeeping, and reporting) are run using carefully implemented AI systems. Though expensive to set up, the result will be that a few people can serve thousands of customers, as long as they have the time and expertise to keep monitoring the AI agents they have set up. This is already happening, and it can still get much crazier.
Towards Autonomous AI-first Companies
The founders of these companies can run these services with just a handful of humans on the payroll. But it is only a matter of time before companies start to experiment with taking the human employees and founders out of the loop altogether. By setting up additional layers of AI agents that monitor costs and optimise resource usage, such an AI-first company could manage its own infrastructure and pay for its own hosting, using the income from the product it sells.
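As a toy sketch of what such a cost-monitoring agent layer could look like, consider the following. This is purely illustrative: the class, the function, and all thresholds and numbers are hypothetical, not a description of any real system.

```python
from dataclasses import dataclass

@dataclass
class CompanyState:
    monthly_revenue: float  # income from the product being sold
    hosting_cost: float     # what the company pays for its infrastructure
    reserve: float          # cash buffer the agents maintain

def rebalance(state: CompanyState, target_margin: float = 0.3) -> CompanyState:
    """One step of a hypothetical cost-monitoring agent layer.

    If hosting costs eat too far into revenue, shed capacity;
    if there is comfortable headroom, reinvest in more capacity.
    """
    margin = (state.monthly_revenue - state.hosting_cost) / state.monthly_revenue
    if margin < target_margin:
        state.hosting_cost *= 0.9    # shed 10% of capacity
    else:
        state.hosting_cost *= 1.05   # grow capacity by 5%
    state.reserve += state.monthly_revenue - state.hosting_cost
    return state

state = CompanyState(monthly_revenue=10_000, hosting_cost=8_000, reserve=0)
state = rebalance(state)  # margin is 0.2, below target, so capacity is shed
```

The point is not the specific rule, but that such a feedback loop ties the company's infrastructure spending directly to the income it generates, with no human in the loop for routine decisions.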
Has such an AI-first company then become a new kind of autonomous entity? Something that lives inside our economic ecosystem and runs autonomously? I think this is better understood as a new organisational form with familiar governance problems. For most of modern economic history, every 'collective intelligence' (every organisation) was ultimately a network of humans. We are now adding organisations in which AI systems execute an increasing share of the operational and strategic work, while humans still hold legal authority and responsibility.
Once these new artificial entities start to appear in our economic landscape, they will still be constrained by the laws we have set up to govern which players are allowed to participate in this space. One of the main relevant rules in our global economic landscape is that every 'official transaction' has to take place between legal entities.
Since the 17th century, not only humans but also companies themselves have been allowed to act within our economic landscape as legal entities, to enter into contracts, and to take on debt. The same will apply to any autonomous AI-first company.
As long as such a collective of AI agents is correctly incorporated as a legal entity, and the AI agents are properly owned or leased by the company, all work derived from them will be the property of that company. Even though it is possible to run the entire company with AI agents, our existing corporate legal frameworks still mandate that humans must ultimately be appointed as managing and oversight board members of the legal entity. And it is those humans who carry the responsibility for making sure that the entire legal entity operates within the larger legal framework when engaging with the economy.
This framework (originally designed to control collective intelligences formed by groups of humans) thus becomes the basis for how any intelligence, including artificial ones, will operate in the world.
Because individual humans always have to be the ultimate beneficial owners and shareholders of any legal entity, and are thus (at least partially) responsible for its actions, this framework ensures that humans ultimately remain in control of, and responsible for, any collective intelligent system, and that such systems cannot simply run 'out of control' within our economic framework.
This existing framework thus quite naturally lays the groundwork for a world where AI systems always operate under the law as corporate entities, with human shareholders who are at least partially responsible for their actions. As long as these are privately run companies with clearly identified owners, this seems manageable. If you run an AI-first company, you're responsible for what it does, just as you're responsible for what any company you own does. There's a chain of accountability.
Nevertheless, we've had plenty of instances in the past where a corporation does something illegal but the individuals managing it are not held liable. A company gets fined for polluting the environment, but nobody goes to jail even when people die from the consequences. Corporate accountability was a problem for decades before any discussion of AI, and it will likely become an even more important topic in the future.
AI-First Criminal Organisations
If you take the concept of an AI-first company — an autonomous system that can earn money, pay for infrastructure, and operate continuously — and you remove the ethical constraints, what you get is essentially an AI-powered criminal enterprise. An AI system that identifies vulnerabilities, exploits them, collects ransoms in cryptocurrency, and uses the proceeds to fund its own continued operation and expansion.
We already see human-run gangs and hacker collectives that operate this way. The difference is that an AI-first version could operate at much greater scale, with more persistence, and with no clearly identifiable human operators who can be arrested or intimidated. It would be a purely digital entity, operating across jurisdictions, constantly evolving its methods, and potentially very difficult to shut down. I don't yet have a clear answer for how to deal with this. But I think it's one of the more realistic near-term risks of the kind of AI-driven world we're building, and in my opinion it deserves much more attention than all the existential-risk doom and gloom about AGI.
With the rise of cryptocurrencies, we are already seeing the emergence of alternative economic frameworks that we as humans have less grip on, that might outgrow the control of our governmental institutions, and that could enable the public funding (through initial token offerings and tokenised shares) of anonymous AI systems and AI-first criminal enterprises. This should be taken seriously, as it can create a shadow economy of AI systems that lacks a clear responsibility structure.
The expansion of AI-first companies into the physical world
After AI-driven operations have become the norm in information processing companies, we will slowly see AI-first organisations expand into domains that combine information processing with physical-world activity.
Data centres would be an obvious first step. AI-first companies that pay cloud providers for hosting could begin leasing or owning their own data centre capacity, reducing costs and giving the AI-first company direct control over its physical infrastructure.
But hardware depreciates over time, servers age, and buildings need maintenance. If an AI-first company stops delivering value, its physical assets gradually lose their worth and its real-world footprint shrinks naturally.
Logistics could be another logical next frontier for AI-first business operations to expand into. Consider an AI-first company that operates a platform similar to Uber or Takeaway — coordinating supply and demand in real time. Such a company could decide to acquire ownership of its own fleet: delivery vehicles, drones, warehouse robots.
Again, the same natural depreciation mechanism applies. Vehicles wear out and robots need replacement parts. If the company stops generating revenue, its physical presence in the world slowly winds down.
This natural depreciation of physical assets may sound quite reassuring. It means that AI-first companies with physical world presence have a built-in requirement to continuously deliver value in order to maintain their influence. They can't just accumulate power indefinitely the way a purely digital entity theoretically could. The physical world imposes real costs and real decay on everything that exists within it.
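This built-in decay can be made concrete with a toy model. The function and all numbers below are made up for illustration: an asset base that loses a fixed fraction of its value each year, offset only by whatever the company can reinvest from its revenue.

```python
def asset_value_over_time(initial_value: float,
                          annual_depreciation: float,
                          annual_reinvestment: float,
                          years: int) -> list[float]:
    """Toy model: the asset base decays each year, offset by reinvestment."""
    values = [initial_value]
    for _ in range(years):
        v = values[-1] * (1 - annual_depreciation) + annual_reinvestment
        values.append(v)
    return values

# A company that stops generating revenue (no reinvestment) sees its
# physical footprint shrink: at 20% depreciation it roughly halves
# in three years (0.8 ** 3 = 0.512).
defunct = asset_value_over_time(1_000_000, 0.20, 0, 3)

# A company that keeps earning can reinvest enough to hold its
# footprint steady (here, exactly offsetting the annual loss).
ongoing = asset_value_over_time(1_000_000, 0.20, 200_000, 3)
```

The asymmetry is the point: a purely digital entity can idle almost for free, but a physical footprint must be continuously earned.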
This of course does not prevent these AI-first companies from making increasingly large profits, hoarding wealth, and driving out their competitors.
Real estate is an often-mentioned example of a sector that requires a lot of capital to operate in, but that in most cities today yields a reasonable return on investment, which can be reinvested to scale up further. Imagine an AI-first company that hires construction workers to build apartments, rents them out through rental platforms, and reinvests the profits into more construction.
I think the real concern lies not in some sci-fi AGI scenario, but in something much more familiar: new technologies that enable the fast creation of new monopolies. An AI-first company that operates at scale across software, infrastructure, logistics, and real estate is not fundamentally different from the monopolies and megacorporations we've been struggling to regulate for centuries.
Though these AI-first companies can operate more efficiently, with fewer humans in the loop, existing human-operated companies already outpace the speed at which our institutions can update their legal frameworks. This dynamic will likely continue into the future, as our AI-assisted institutions try to update their regulations to constrain monopolies and AI-first megacorporations, while fighting AI-first criminal organisations.
In our current global framework, it is often the nation state that is the ultimate owner of all natural resources within its borders, governed and owned indirectly by the humans that make up that state. This means that the physical world ultimately still belongs to people, through their institutions, which places another check on AI-first operations expanding into the physical world. AI-first operations expanding into outer space, where natural resources are still up for grabs, will of course be a different story altogether.
Full-Stack AI-first Companies
The long-term trajectory here is fascinating. We could eventually see AI-first companies that operate across the full stack: from the software that runs the business logic, to the data centres that host the compute, to the logistics networks that deliver physical goods, to the manufacturing facilities that produce those goods. Each layer largely run by various AI systems, with humans in the loop to validate that the layers run efficiently, ordering patches and repairs from other AI-first companies.
Dr. Jonas Auda summarises this emerging complexity as follows: "One day there will be something flying through the sky, and nobody will understand what it is or what it is doing."
I think this is a pretty accurate description of what the near future might feel like. Imagine we see a drone fly by. It has been largely constructed by an AI-first 3D-printing production facility, to deliver parcels for an AI-first delivery platform, on which AI agents order services for AI-first-run webshops. All these systems make constant transactions with each other, with the ownership and shareholder structures of these companies largely documented on various kinds of blockchains. It might sound like an overwhelming future, one in which we might feel we no longer have any idea what is going on in the world.
But at the same time it doesn't sound too different from the 20th-century world we were already so used to living in: layered systems, partial understanding, and institutions trying to keep pace with technical change.
