Dataconomy

Sam Altman: AI will cause “strange or scary moments”

By Emre Çıtak
October 24, 2025
in Artificial Intelligence, News

Sam Altman, CEO of OpenAI, stated on a recent podcast that he expects negative outcomes from artificial intelligence, including deepfakes, as his company’s new video application, Sora 2, gains widespread use following its recent invitation-only launch.

In an interview on the a16z podcast, produced by venture capital firm Andreessen Horowitz, Altman articulated his expectations for the technology his company develops. “I expect some really bad stuff to happen because of the technology,” he said, specifically highlighting the potential for “really strange or scary moments.” This warning from the head of the company responsible for ChatGPT comes as AI-powered generative tools become increasingly accessible and sophisticated. His comments provide context for the rapid deployment of powerful AI models into the public sphere and the accompanying societal risks he anticipates.

The release of OpenAI’s new video application, Sora 2, late last month demonstrated the speed at which such technology can achieve mainstream penetration. Although its initial launch was restricted to an invitation-only basis, the application quickly ascended to the number one position on Apple’s U.S. App Store. This rapid adoption illustrates a significant public interest in and accessibility of advanced video-generation technology, which can create realistic-looking but entirely fabricated video content. The app’s popularity underscores the immediate relevance of discussions surrounding the potential misuse of such tools.

Shortly after the app’s release, deepfake videos made with it began appearing on social media platforms. These videos featured public figures, including civil rights leader Martin Luther King Jr. and Altman himself, depicted engaging in various forms of criminal activity. In response, OpenAI blocked Sora users from generating videos featuring Martin Luther King Jr. The incident served as a direct and immediate example of the kind of misuse that AI video-generation tools can enable.

Concerns about misuse extended beyond the creation of defamatory deepfakes of public figures. According to the Global Coalition Against Hate and Extremism, videos promoting Holocaust denial created with Sora 2 accumulated hundreds of thousands of likes on Instagram within days of the application’s launch. The organization has pointed to OpenAI’s usage policies as a contributing factor. It argues that the policies lack specific prohibitions against hate speech, a gap that has, in the coalition’s view, helped enable extremist content to proliferate on online platforms using the new tool.

Altman provided a rationale for releasing powerful AI models to the public despite the evident risks. He argued that society needs a kind of test drive to prepare for what is coming. “Very soon the world is going to have to contend with incredible video models that can deepfake anyone or kind of show anything you want,” he stated during the podcast interview. His approach is rooted in the belief that society and artificial intelligence must “co-evolve.” Instead of developing the technology in isolation and then releasing a perfected version, he advocates early and incremental exposure. Altman’s theory is that this process allows communities and institutions to develop the necessary social norms and technological guardrails before the tools become even more powerful and potentially more disruptive. He acknowledged the high stakes, including the potential erosion of trust in video evidence, which has historically served as a powerful record of truth.

The OpenAI CEO’s warnings extended beyond the immediate threat of fake videos to broader, systemic risks. He cautioned against a future where a significant portion of the population outsources decision-making to opaque algorithms that few people understand. “I do still think there are going to be some really strange or scary moments,” he said, emphasizing that the absence of a catastrophic AI-related event to date “doesn’t mean it never will.” Altman described a scenario where “billions of people talking to the same brain” could lead to “weird, societal-scale things.” This could manifest as unexpected and rapid chain reactions, producing substantial shifts in public information, political landscapes, and the foundations of communal trust, all moving at a pace that outstrips any ability to control or mitigate them.

Despite these acknowledgments of broad and consequential risks, Altman expressed opposition to widespread government regulation of the technology. “Most regulation probably has a lot of downside,” he commented. He did, however, voice support for a more targeted approach to safety. Altman specified that he is in favor of implementing “very careful safety testing” for what he termed “extremely superhuman” AI models, suggesting a distinction between current AI and more advanced future systems. He concluded with a belief in a societal adaptation process, stating, “I think we’ll develop some guardrails around it as a society.”


Tags: AI, Featured, OpenAI

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.
