Some technologies feel like science fiction until you see them in action. AlterEgo, a Boston startup led by MIT-trained Arnav Kapur, is building one of those. The company has developed a device that lets people issue commands to a computer without making a sound — essentially, talking in silence.
The implications are striking. For people with ALS, multiple sclerosis, or other conditions that rob them of their voice, this could restore a form of spoken agency. For the rest of us, it opens up possibilities too: sending a quick note in a noisy café, asking an AI agent for help in a library, or issuing a discreet command without reaching for a phone.
A demo video shows Kapur wearing the prototype: he takes notes and queries an AI assistant without uttering a word. The device reads the faint signals that cranial nerves send to the speech muscles when you articulate words internally, decodes them into commands, and routes responses back through bone-conduction audio. It is a simple ear-worn interface, not a brain implant.
That design matters. As Kapur points out, “the hardware has no access to brain data.” This is not mind reading; it intercepts speech at the edge of expression. Even mouthing the words is optional: the system picks up the signals when you merely intend to articulate them.
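For readers curious what “intercepting speech at the edge of expression” might mean computationally, here is a minimal sketch of the loop such a device could run: a window of sensor data comes in, a decoder maps it to a command, and the response is spoken back privately over bone conduction. This is a toy illustration under stated assumptions, not AlterEgo’s implementation; the sampling rate, the hand-rolled features, and the nearest-centroid “classifier” below are all placeholders.

```python
import numpy as np

# Hypothetical silent-speech pipeline, loosely modeled on published
# research systems: surface sensors -> feature extraction -> decoder
# -> command, with feedback delivered as audio over bone conduction.
# Nothing here reflects AlterEgo's actual hardware or models.

SAMPLE_RATE_HZ = 1000   # assumed sensor sampling rate
WINDOW_S = 1.5          # assumed length of one "silent utterance"

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce a raw multichannel window to a small feature vector.

    Per channel: root-mean-square energy and zero-crossing rate, two
    classic surface-signal features. Real systems use richer models.
    """
    rms = np.sqrt(np.mean(window ** 2, axis=1))
    zcr = np.mean(np.abs(np.diff(np.sign(window), axis=1)) > 0, axis=1)
    return np.concatenate([rms, zcr])

# Stand-in "classifier": nearest centroid over stored templates.
# An actual system would train a neural decoder on labeled recordings.
COMMAND_TEMPLATES = {
    "take_note": np.zeros(16),       # placeholder feature centroids
    "query_assistant": np.ones(16),  # (8 channels x 2 features each)
}

def decode(window: np.ndarray) -> str:
    feats = extract_features(window)
    return min(COMMAND_TEMPLATES,
               key=lambda cmd: np.linalg.norm(feats - COMMAND_TEMPLATES[cmd]))

def respond(text: str) -> None:
    # Stand-in for the bone-conduction audio path; here we just print.
    print(f"[bone-conduction audio] {text}")

if __name__ == "__main__":
    # Fake a 1.5-second window from 8 sensor channels.
    rng = np.random.default_rng(0)
    window = rng.normal(size=(8, int(SAMPLE_RATE_HZ * WINDOW_S)))
    respond(f"running command: {decode(window)}")
```

The specifics are invented, but the shape of the loop is the point: sense at the periphery, decode into a discrete command, answer privately. Nothing in that loop requires access to the brain itself.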
Kapur first showed an early version of the system at TED in 2019. Now AlterEgo is preparing to reveal more at the Axios AI+ Summit on September 17 in Washington, D.C. What remains unknown is when the device might reach the market and how the company is funded.
Still, the idea lingers: maybe the future of human–computer interaction isn’t louder or flashier, but quieter — even invisible.