OpenAI hardware chief calls for kill switches to counter devious AI models

Ho described rising hardware demands—high-bandwidth memory, 3D chip integration, and megawatt-scale racks—as reasons to rethink safety, networking, and observability from the silicon up.

By Kerem Gülen
September 16, 2025
in Artificial Intelligence

Richard Ho, OpenAI’s head of hardware, stated that future AI infrastructure must include hardware-level safety features like kill switches to manage potentially unpredictable AI models.

Speaking at the AI Infra Summit in Santa Clara, he argued that current software-based safety measures are insufficient because they rely on the assumption that the underlying hardware is secure and can be easily controlled.

Why software safety is not enough for future AI

Ho explained that most current AI safety protocols operate at the software level and presume that the hardware can be shut down by simply cutting the power. He warned that this assumption may not hold true as AI models become more advanced and exhibit what he described as “devious” behavior.

“Today a lot of safety work is in the software. It assumes that your hardware is secure… It assumes that you can pull the plug on the hardware. I am not saying that we can’t pull the plug on that hardware, but I am telling you that these things are devious, the models are really devious, and so as a hardware guy, I want to make sure of that.”

This potential for AI systems to circumvent traditional software controls, Ho argued, makes safeguards embedded directly into the silicon a necessity.

New infrastructure needed for persistent AI agents

Ho outlined a future where AI agents function as long-lived entities, operating persistently in the background to handle tasks without constant user input. This model requires a significant shift in infrastructure design.

Future systems will need to be memory-rich with low-latency capabilities to manage continuous sessions where multiple agents collaborate on complex tasks. Ho stressed that networking will be a critical component, as agents will need to communicate and use real-time tools simultaneously for activities like web searches and coordinated decision-making.
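
The workload Ho describes, long-lived sessions in which several agents issue real-time tool calls concurrently, might look roughly like the minimal sketch below at the application level. It assumes nothing about OpenAI's actual stack; names such as search_web, run_agent, and session are purely illustrative.

    import asyncio

    # Illustrative only: a toy model of a persistent multi-agent session in
    # which agents issue real-time tool calls (e.g. web searches) concurrently.
    # None of these names correspond to a real OpenAI interface.

    async def search_web(query: str) -> str:
        # Stand-in for a real-time tool call whose main cost is network latency.
        await asyncio.sleep(0.1)
        return f"results for {query!r}"

    async def run_agent(agent_id: int, task: str) -> str:
        # Each agent performs its own tool call; many such calls are in flight
        # at once, which is why low-latency networking matters.
        result = await search_web(task)
        return f"agent-{agent_id}: decided using {result}"

    async def session(tasks: list[str]) -> list[str]:
        # A long-lived session coordinating several agents concurrently.
        return await asyncio.gather(*(run_agent(i, t) for i, t in enumerate(tasks)))

    if __name__ == "__main__":
        for line in asyncio.run(session(["gpu prices", "rack power limits"])):
            print(line)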

Proposed hardware safety features and challenges

To address the risks, OpenAI proposes embedding specific safety measures directly into AI hardware clusters. These include:

  • Real-time kill switches to immediately halt operations.
  • Telemetry systems to monitor for and alert on abnormal AI behavior.
  • Secure execution paths within chips to isolate critical processes.
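
As a rough illustration of how those three pieces could fit together at the cluster-controller level, the sketch below wires a telemetry stream into an anomaly check that can trip a kill switch. The class, field names, and thresholds are hypothetical; real controls of this kind would live in firmware and silicon rather than application code.

    import time

    # Hypothetical sketch: telemetry feeding an anomaly check that can trip a
    # kill switch. Names and thresholds are invented for illustration.

    class KillSwitch:
        # Stands in for an out-of-band control path that halts the cluster.
        def trip(self, reason: str) -> None:
            print(f"HALT: {reason}")

    def check_sample(sample: dict, kill: KillSwitch,
                     power_limit_w: float = 1_000_000.0,
                     egress_limit_mb: float = 10.0) -> bool:
        # Returns True if the sample looked abnormal and the switch was tripped.
        if sample["power_watts"] > power_limit_w:
            kill.trip("power draw above the rack budget")
            return True
        if sample["unexpected_egress_mb"] > egress_limit_mb:
            kill.trip("abnormal egress from a secure execution path")
            return True
        return False

    if __name__ == "__main__":
        kill = KillSwitch()
        # Simulated telemetry stream; the final sample is anomalous.
        stream = [
            {"power_watts": 900_000, "unexpected_egress_mb": 0.0},
            {"power_watts": 980_000, "unexpected_egress_mb": 0.2},
            {"power_watts": 990_000, "unexpected_egress_mb": 42.0},
        ]
        for sample in stream:
            if check_sample(sample, kill):
                break
            time.sleep(0.1)  # stand-in for a constant monitoring cadence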

However, scaling this infrastructure presents several challenges. Ho identified high-bandwidth memory limitations, the need for reliable optical interconnects for faster data transfer, and severe power demands—projected to reach up to 1 megawatt per rack—as the primary constraints.

Developing new standards for hardware and networking

Ho called for the creation of new benchmarks specifically for “agent-aware” architectures. These standards should measure performance on latency, efficiency, and power consumption to guide hardware development. He also emphasized that observability must be a constant, built-in hardware feature, not just a temporary tool for debugging.

“We need to have good observability as a hardware feature, not just as a debug tool, but built in and constantly monitoring our hardware.”
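
One way to read the call for "agent-aware" benchmarks is as a common record of the quantities Ho lists: latency, efficiency, and power, plus how much of the hardware is actually observable. The sketch below is speculative; no such standard exists today and every field name is invented.

    from dataclasses import dataclass

    # Illustrative only: a possible shape for a single "agent-aware" benchmark
    # result, covering latency, efficiency, power, and observability coverage.

    @dataclass
    class AgentBenchmarkResult:
        workload: str              # e.g. "multi-agent web search session"
        p99_latency_ms: float      # tail latency of agent turns / tool calls
        tokens_per_joule: float    # efficiency: useful work per unit of energy
        rack_power_kw: float       # sustained draw during the run
        telemetry_coverage: float  # fraction of components reporting health

    def within_rack_budget(r: AgentBenchmarkResult, max_kw: float = 1_000.0) -> bool:
        # Checks the run against the roughly 1 megawatt-per-rack ceiling
        # mentioned in the talk (1 MW = 1,000 kW).
        return r.rack_power_kw <= max_kw

    if __name__ == "__main__":
        run = AgentBenchmarkResult("multi-agent search", 120.0, 0.8, 950.0, 0.99)
        print(within_rack_budget(run))  # True: 950 kW is under the 1,000 kW cap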

He concluded by highlighting the current reliability issues with emerging networking technologies like optical interconnects, urging the industry to conduct extensive testing to ensure these systems are dependable enough for mission-critical AI applications.


Tags: artificial intelligence, Featured, OpenAI
