Thinking at Machine Speed
The Post-Singularity World
February 16, 2026
I've been fantasizing about buying a VR headset so I can control more than ten agents in VR at the same time.
It made me think: will VR headsets, or 'neural interface headsets', be the way we coordinate and control large numbers of AI agents in the future, and do our work? And what would this development look like in practice? What follows is just my own fantasy:
Non-invasive EEG headsets (and invasive brain implants from companies like Neuralink) already exist that allow paralyzed patients to move a cursor across a screen. These devices pick up electrical brain activity and translate specific neural patterns into cursor movements. Instead of moving a physical mouse, the user learns to control certain brain signals that the system maps to directional movement, clicks, or scrolling commands.
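To make that mapping concrete, here is a deliberately toy sketch. The decoded "intent" scores are simulated random numbers and the thresholds are made up; a real system would compute them from EEG features and a trained classifier:

```python
import random

# Toy sketch: assume a decoder has already reduced raw EEG to a few named
# "intent" scores per time step (simulated here with random numbers).
def read_decoded_intents():
    return {
        "left": random.random(),
        "right": random.random(),
        "click": random.random(),
    }

def step_cursor(x, y, intents, threshold=0.8, step=5):
    # Only act when the decoder is confident enough in a single intent.
    if intents["click"] > threshold:
        return x, y, "click"
    if intents["left"] > threshold:
        return x - step, y, None
    if intents["right"] > threshold:
        return x + step, y, None
    return x, y, None

x, y = 100, 100
for _ in range(20):
    x, y, action = step_cursor(x, y, read_decoded_intents())
    if action == "click":
        print(f"click at ({x}, {y})")
print(f"final cursor position: ({x}, {y})")
```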
A more advanced version of this kind of 'neural interface' might allow users to "type" by intentionally imagining handwriting or phoneme formation, which machine-learning models then decode into text. This would allow paralyzed patients to write entire messages merely by visualizing the pronunciation or the handwriting.
We might still be decades away from the moment where merely thinking lets us write faster than we could type. A more advanced system might be able to pick up on more fluent, but still vividly imagined, inner speech and turn it directly into text, instead of requiring users to clearly visualize the input as handwriting movements or sound formations.
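As a very rough illustration of the decoding loop, short windows of brain activity could be classified one at a time into characters or phonemes and stitched into text. The "model" below is a stand-in that returns canned probabilities, not a real neural decoder:

```python
# Hypothetical sketch: stitch per-window character predictions into text.
# decode_window() stands in for a real ML model over brain-signal features.
CANNED_OUTPUT = list("hello world")

def decode_window(t):
    # A real system would return a probability distribution over symbols
    # for the signal window at time t; here we fake a confident prediction.
    if t < len(CANNED_OUTPUT):
        return {CANNED_OUTPUT[t]: 0.9, "?": 0.1}
    return {"<end>": 0.95}

def transcribe(max_windows=50, min_confidence=0.7):
    text = []
    for t in range(max_windows):
        probs = decode_window(t)
        symbol, confidence = max(probs.items(), key=lambda kv: kv[1])
        if symbol == "<end>":
            break
        if confidence >= min_confidence:   # skip ambiguous windows
            text.append(symbol)
    return "".join(text)

print(transcribe())  # -> "hello world"
```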
For something like this to become practically useful, the technology needs to develop a lot further and become more fluent. While professional typists can type over 100 words per minute, the average person sits closer to 50 WPM. Once the technology gets us past this point, writing with a neural headset would quickly become so much more convenient that it might replace the keyboard as the default way of interacting with a computer.
At that point, there might be a breakthrough moment where interacting with a computer very quickly starts to feel like something 'telepathic'.
Once we can turn thoughts directly into text on a screen, it would also make sense to replace the mouse or trackpad and navigate directly with our eye movements (tracked by a webcam): fixating on elements on the screen to highlight them, and confirming a choice with a brief intentional blink.
From that point on, typing on a keyboard would only slow us down compared to using our thoughts and eyes to interact with a computer.
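That gaze-plus-blink selection loop is easy to sketch as a small state machine. The screen elements, frame rate, and gaze samples below are simulated; a real system would get them from a webcam-based eye tracker:

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    x0: int; y0: int; x1: int; y1: int
    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

ELEMENTS = [Element("open_inbox", 0, 0, 200, 100),
            Element("new_document", 0, 120, 200, 220)]

DWELL_FRAMES = 30   # roughly one second at 30 fps to highlight an element

def run(samples):
    highlighted, dwell = None, 0
    for x, y, blink in samples:
        # An intentional blink confirms the currently highlighted element.
        if blink and highlighted is not None and dwell >= DWELL_FRAMES:
            return f"activated: {highlighted.name}"
        target = next((e for e in ELEMENTS if e.contains(x, y)), None)
        if target is highlighted and target is not None:
            dwell += 1
        else:
            highlighted, dwell = target, 0
    return "no selection"

# Simulated gaze: fixate on the first element, then blink deliberately.
samples = [(50, 40, False)] * 35 + [(50, 40, True)]
print(run(samples))  # -> "activated: open_inbox"
```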
While speaking runs at 100 to 200 WPM, inner speech/verbal thought is estimated to run at 500 to 1,000 WPM. Instead of waiting for vividly visualized verbal thoughts at spoken-speech pace, later iterations of these systems might be able to monitor high-level neural markers associated with confusion, curiosity, or decision conflict: still not 'reading thoughts', but detecting cognitive states.
If a user lingers on a concept, struggles with a problem, or hesitates between choices, an AI copilot could proactively surface relevant information on the visual display. The interface would still rely primarily on screens and crude, intentionally imagined signals, but it would begin to feel anticipatory.
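A crude version of that anticipatory behaviour could be as simple as watching a rolling estimate of one cognitive state and surfacing a suggestion when it stays elevated. The confusion scores and the suggestion lookup below are invented placeholders:

```python
from collections import deque

SUGGESTIONS = {  # hypothetical mapping from on-screen context to help
    "tax_form": "Show last year's filled-in form side by side?",
    "code_review": "Summarize what this function changes?",
}

def copilot(events, window=5, trigger=0.7):
    recent = deque(maxlen=window)
    for context, confusion in events:
        recent.append(confusion)
        # Sustained confusion, not a single noisy spike, triggers help.
        if len(recent) == window and sum(recent) / window > trigger:
            yield SUGGESTIONS.get(context, "Offer a brief explanation?")
            recent.clear()

stream = [("tax_form", c) for c in (0.2, 0.5, 0.8, 0.9, 0.85, 0.9, 0.8)]
for suggestion in copilot(stream):
    print("copilot:", suggestion)
```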
By capturing partial semantic structures before they are properly articulated in thought, later versions of these systems would be able to complete not just sentences, but lines of reasoning. A user begins to form an idea, and the system extrapolates its likely trajectory (similar to existing AI copilots that autocomplete typed sentences).
Agentic AI systems could then act directly on instructions received this way. Humans working with these interfaces would see problems decomposed before their eyes, while the systems visualize multi-step reasoning, take actions on computer systems to carry out their instructions, or coordinate various AI agents accordingly. The interaction would still be mediated through a headset we can take on or off, but the latency between intention and action would shrink toward invisibility.
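The orchestration side is the part we can already approximate today: a decoded high-level intent gets broken into subtasks and fanned out to agents in parallel. The decomposition rules and "agents" below are placeholders for whatever planning models and tools would actually do the work:

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(intent):
    # Placeholder planner: a real system would use a planning model.
    return [f"research: {intent}", f"draft: {intent}", f"review: {intent}"]

def run_agent(task):
    # Placeholder agent: a real one would call tools, browse, or write code.
    return f"[done] {task}"

def orchestrate(intent):
    subtasks = decompose(intent)
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        return list(pool.map(run_agent, subtasks))

for line in orchestrate("summarize this quarter's support tickets"):
    print(line)
```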
Rather than typing a question and reading an answer, it might start to feel like something closer to accelerated cognition: experiencing information and orchestrating its processing at speeds and scales that a biological brain on its own could never achieve.
These improved 'computer interfaces' would then also let us access, control, and communicate with the full capabilities of large networks of AI agents running in the cloud, not as an external user typing prompts, but as something closer to a participant in our thinking: augmenting our own cognition with a large network of AI systems that we can delegate commands to at lightning speed.
This would be something like Kurzweil's vision of the merger between human and artificial intelligence made physically concrete: living in symbiosis with these systems, becoming something new in the process, yet somehow staying the same.
Though we might end up working 10-hour days wearing headsets that plug us into a world of information overload, trying to keep up with the ever-faster-evolving chaos of the world, we would remain recognizably human. The headset can always come off. We can still step outside the office, feel the wind, and remember that, unlike the AI systems, we are still rooted in biology.
Some people might want to go further and wish they could embed themselves within the internet, participating continuously in a digital world. Others will probably want nothing to do with this future, and tune out.
