OpenAI cofounder Andrej Karpathy said on the Dwarkesh Podcast that functional AI agents are a decade away from viability. He outlined significant developmental issues and took a critical view of both the technology’s current capabilities and the direction the industry is taking in deploying it.
During his appearance last week, Karpathy, who is now developing an AI-native school at Eureka Labs, detailed his assessment of existing agent technology. “They just don’t work,” he said, citing a list of fundamental problems. He explained that agents currently “don’t have enough intelligence, they’re not multimodal enough, they can’t do computer use and all this stuff.” He further elaborated on their cognitive shortcomings, noting, “They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking, and it’s just not working.” Karpathy projected that resolving these problems would be a lengthy process, adding, “It will take about a decade to work through all of those issues.”
This perspective contrasts with significant industry enthusiasm for agents, virtual assistants capable of completing tasks autonomously. These systems are designed to break complex problems into steps, formulate plans, and execute actions without continuous user prompts. Interest in the technology has led many investors to label 2025 “the year of the agent,” an expectation of major advances that highlights the divergence between market forecasts and Karpathy’s technical assessment.
Following the podcast, Karpathy posted on the social media platform X to provide additional clarity. In the post, he reiterated his frustration with the current trajectory of development and tooling. “My critique of the industry is more in overshooting the tooling w.r.t. present capability,” he wrote. He described a prevailing industry vision that he finds problematic: a future where “fully autonomous entities collaborate in parallel to write all the code and humans are useless.” His comments addressed the gap between the conceptual goal of autonomous agents and the practical limitations of current AI models.
My pleasure to come on Dwarkesh last week, I thought the questions and conversation were really good.
I re-watched the pod just now too. First of all, yes I know, and I'm sorry that I speak so fast :). It's to my detriment because sometimes my speaking thread out-executes my… https://t.co/bnPSrY74px
— Andrej Karpathy (@karpathy) October 18, 2025
Karpathy said he does not want to pursue such a future and advocates instead for a different model of human-AI interaction. In his preferred scenario, AI does not operate as a fully autonomous entity that supplants human involvement; rather, humans and AI systems work together on tasks such as coding and execution, in a partnership that combines the strengths of both rather than aiming for complete automation.