Dataconomy
OpenAI hardware chief calls for kill switches to counter devious AI models

Ho described rising hardware demands—high-bandwidth memory, 3D chip integration, and megawatt-scale racks—as reasons to rethink safety, networking, and observability from the silicon up.

by Kerem Gülen
September 16, 2025
in Artificial Intelligence

Richard Ho, OpenAI’s head of hardware, stated that future AI infrastructure must include hardware-level safety features like kill switches to manage potentially unpredictable AI models.

Speaking at the AI Infra Summit in Santa Clara, he argued that current software-based safety measures are insufficient because they rely on the assumption that the underlying hardware is secure and can be easily controlled.

Why software safety is not enough for future AI

Ho explained that most current AI safety protocols operate at the software level and presume that the hardware can be shut down by simply cutting the power. He warned that this assumption may not hold true as AI models become more advanced and exhibit what he described as “devious” behavior.


“Today a lot of safety work is in the software. It assumes that your hardware is secure… It assumes that you can pull the plug on the hardware. I am not saying that we can’t pull the plug on that hardware, but I am telling you that these things are devious, the models are really devious, and so as a hardware guy, I want to make sure of that.”

Because advanced AI systems could circumvent traditional software controls, Ho argued, safeguards must be embedded directly into the silicon.

New infrastructure needed for persistent AI agents

Ho outlined a future where AI agents function as long-lived entities, operating persistently in the background to handle tasks without constant user input. This model requires a significant shift in infrastructure design.

Future systems will need to be memory-rich with low-latency capabilities to manage continuous sessions where multiple agents collaborate on complex tasks. Ho stressed that networking will be a critical component, as agents will need to communicate and use real-time tools simultaneously for activities like web searches and coordinated decision-making.
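The architecture Ho describes, long-lived agents holding state in memory while collaborating concurrently over the network, can be sketched in a few lines. The sketch below is purely illustrative: the class and task names are hypothetical and do not come from OpenAI's infrastructure.

```python
import asyncio

class PersistentAgent:
    """A long-lived agent that keeps state in memory across tasks."""

    def __init__(self, name):
        self.name = name
        self.memory = []  # memory-rich: context persists between tasks

    async def handle(self, task):
        # Stand-in for a low-latency tool call (e.g. a web search).
        await asyncio.sleep(0)
        self.memory.append(task)
        return f"{self.name} handled {task!r}"

async def run_session(agents, tasks):
    # Agents work concurrently within one continuous session.
    return await asyncio.gather(
        *(agent.handle(task) for agent, task in zip(agents, tasks))
    )

agents = [PersistentAgent("planner"), PersistentAgent("searcher")]
results = asyncio.run(run_session(agents, ["plan trip", "find flights"]))
```

Because the agents never tear down between tasks, their `memory` lists accumulate context, which is why Ho emphasizes memory capacity and interconnect latency over raw compute alone.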

Proposed hardware safety features and challenges

To address the risks, OpenAI proposes embedding specific safety measures directly into AI hardware clusters. These include:

  • Real-time kill switches to immediately halt operations.
  • Telemetry systems to monitor for and alert on abnormal AI behavior.
  • Secure execution paths within chips to isolate critical processes.
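In software terms, the interplay between the telemetry and kill-switch features above might look like the following sketch. Every name here is hypothetical, an assumption for illustration; in Ho's proposal these mechanisms would live in silicon, not Python.

```python
class KillSwitch:
    """Stand-in for a hardware-level halt mechanism."""

    def __init__(self):
        self.tripped = False

    def trip(self):
        # In real hardware this would cut clocks or power at the chip level.
        self.tripped = True

class TelemetryMonitor:
    """Watches metrics, alerts on anomalies, trips the switch on repeats."""

    def __init__(self, kill_switch, max_anomalies=3):
        self.kill_switch = kill_switch
        self.max_anomalies = max_anomalies
        self.anomalies = 0
        self.alerts = []

    def record(self, metric, value, limit):
        if value > limit:
            self.anomalies += 1
            self.alerts.append(f"{metric}={value} exceeds limit {limit}")
            if self.anomalies >= self.max_anomalies:
                self.kill_switch.trip()

switch = KillSwitch()
monitor = TelemetryMonitor(switch, max_anomalies=2)
monitor.record("mem_bandwidth_gbps", 900, limit=800)  # first anomaly: alert
monitor.record("power_kw", 1100, limit=1000)          # second: trip switch
```

The point of putting this logic in hardware rather than software is exactly Ho's concern: a "devious" model could tamper with a software monitor, but not with a monitor baked into the chip it runs on.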

However, scaling this infrastructure presents several challenges. Ho identified high-bandwidth memory limitations, the need for reliable optical interconnects for faster data transfer, and severe power demands—projected to reach up to 1 megawatt per rack—as the primary constraints.

Developing new standards for hardware and networking

Ho called for the creation of new benchmarks specifically for “agent-aware” architectures. These standards should measure performance on latency, efficiency, and power consumption to guide hardware development. He also emphasized that observability must be a constant, built-in hardware feature, not just a temporary tool for debugging.

“We need to have good observability as a hardware feature, not just as a debug tool, but built in and constantly monitoring our hardware.”

He concluded by highlighting the current reliability issues with emerging networking technologies like optical interconnects, urging the industry to conduct extensive testing to ensure these systems are dependable enough for mission-critical AI applications.



Tags: artificial intelligence, Featured, OpenAI
