Sam Altman, CEO of OpenAI, said on a recent podcast that he expects negative outcomes from artificial intelligence, including deepfakes, as his company’s new video application, Sora 2, gains widespread use following its invitation-only launch.
In an interview on the a16z podcast, produced by venture capital firm Andreessen Horowitz, Altman articulated his expectations for the technology his company develops. “I expect some really bad stuff to happen because of the technology,” he said, specifically highlighting the potential for “really strange or scary moments.” This warning from the head of the company responsible for ChatGPT comes as AI-powered generative tools become increasingly accessible and sophisticated. His comments provide context for the rapid deployment of powerful AI models into the public sphere and the accompanying societal risks he anticipates.
The release of OpenAI’s new video application, Sora 2, late last month demonstrated the speed at which such technology can achieve mainstream penetration. Although its initial launch was restricted to an invitation-only basis, the application quickly ascended to the number one position on Apple’s U.S. App Store. This rapid adoption illustrates a significant public interest in and accessibility of advanced video-generation technology, which can create realistic-looking but entirely fabricated video content. The app’s popularity underscores the immediate relevance of discussions surrounding the potential misuse of such tools.
Shortly after the app’s release, instances of its use to create deepfake videos began appearing on social media platforms. These videos featured public figures, including civil rights leader Martin Luther King Jr. and Altman himself, and depicted them engaged in various forms of criminal activity. In response to the circulation of these deepfakes, OpenAI blocked Sora users from generating videos featuring Martin Luther King Jr. The incident served as a direct and immediate example of the kind of misuse that AI video generation tools can enable.
Concerns about misuse extended beyond the creation of defamatory deepfakes of public figures. According to the Global Coalition Against Hate and Extremism, videos promoting Holocaust denial created with Sora 2 accumulated hundreds of thousands of likes on Instagram within days of the application’s launch. The organization has pointed to OpenAI’s usage policies as a contributing factor, arguing that they lack specific prohibitions against hate speech, a gap that, in the coalition’s view, has helped extremist content made with the new tool proliferate across online platforms.
Altman provided a rationale for releasing powerful AI models to the public despite the evident risks. He argued that society needs a form of test drive to prepare for what is to come. “Very soon the world is going to have to contend with incredible video models that can deepfake anyone or kind of show anything you want,” he stated during the podcast interview. His approach is rooted in the belief that society and artificial intelligence must “co-evolve.” Instead of developing technology in isolation and then releasing a perfected version, he advocates for early and incremental exposure. Altman’s theory is that this process allows communities and institutions to develop necessary social norms and technological guardrails before the tools become even more powerful and potentially more disruptive. He acknowledged the high stakes, including the potential erosion of trust in video evidence, which has historically served as a powerful record of truth.
The OpenAI CEO’s warnings extended beyond the immediate threat of fake videos to broader, systemic risks. He cautioned against a future where a significant portion of the population outsources decision-making to opaque algorithms that few people understand. “I do still think there are going to be some really strange or scary moments,” he said, emphasizing that the absence of a catastrophic AI-related event to date “doesn’t mean it never will.” Altman described a scenario where “billions of people talking to the same brain” could lead to “weird, societal-scale things.” This could manifest as unexpected and rapid chain reactions, producing substantial shifts in public information, political landscapes, and the foundations of communal trust, all moving at a pace that outstrips any ability to control or mitigate them.
Despite these acknowledgments of broad and consequential risks, Altman expressed opposition to widespread government regulation of the technology. “Most regulation probably has a lot of downside,” he commented. He did, however, voice support for a more targeted approach to safety. Altman specified that he is in favor of implementing “very careful safety testing” for what he termed “extremely superhuman” AI models, suggesting a distinction between current AI and more advanced future systems. He concluded with a belief in a societal adaptation process, stating, “I think we’ll develop some guardrails around it as a society.”