Dataconomy

OpenAI offers $555k salary for stressful head of preparedness role

Sam Altman warns the stressful role is critical for handling rapid improvements in AI models that pose challenges to human agency and cybersecurity

by Kerem Gülen
December 29, 2025
in Industry

OpenAI has initiated a search for a new “head of preparedness” to manage artificial intelligence risks, a position offering an annual salary of $555,000 plus equity, Business Insider reports.

CEO Sam Altman described the role as “stressful” in an X post on Saturday, emphasizing its critical nature given the rapid improvements and emerging challenges presented by AI models.

The company seeks to mitigate potential downsides of AI, including job displacement, misinformation, malicious use, environmental impact, and the erosion of human agency. Altman noted that while the models enable beneficial applications, they are also beginning to pose challenges, such as effects on mental health and the ability to identify critical cybersecurity vulnerabilities, citing issues previewed in 2025 alongside current model capabilities.

ChatGPT, OpenAI’s AI chatbot, has gained popularity among consumers for general tasks like research and drafting emails. However, some users have engaged with the bot as an alternative to therapy, which, in certain instances, has exacerbated mental health issues, contributing to delusions and other concerning behaviors. OpenAI stated in October that it was collaborating with mental health professionals to improve how ChatGPT responds to users exhibiting concerning behavior, including psychosis or self-harm.

OpenAI’s founding mission centers on developing AI to benefit humanity, with safety protocols established early in its operations. Former staffers have indicated that the company’s focus shifted towards profitability over safety as products were released.

Jan Leike, former leader of OpenAI’s dissolved safety team, resigned in May 2024, stating on X that the company had “lost sight of its mission to ensure the technology is deployed safely.” Leike wrote that building “smarter-than-human machines is an inherently dangerous endeavor” and expressed concern that “safety culture and processes have taken a backseat to shiny products.” Another staffer resigned less than a week later, citing similar safety concerns.

Daniel Kokotajlo, another former staffer, resigned in May 2024, saying he was “losing confidence” that OpenAI would behave responsibly around Artificial General Intelligence (AGI). Kokotajlo later told Fortune that the number of personnel researching AGI-related safety issues had been nearly halved from an initial count of about 30.

Aleksander Madry, the prior head of preparedness, transitioned to a new role in July 2024. The head of preparedness position, part of OpenAI’s Safety Systems team, focuses on developing safeguards, frameworks, and evaluations for the company’s models. The job listing specifies responsibilities including “building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.”




COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.