
Another day, another AI warning that authorities won’t care

This letter, which calls for a “right to warn about artificial intelligence,” is one of the most public expressions of concern about AI risks from insiders of this typically secretive sector

by Kerem Gülen
June 5, 2024
in News, Artificial Intelligence

On Tuesday, an open letter was issued by a group of current and former employees from leading artificial intelligence firms, highlighting the absence of safety oversight within the industry and advocating for stronger protections for whistleblowers.

OpenAI and Google insiders highlight AI dangers, call for change

The letter advocates for a “right to warn about artificial intelligence.” Among its signatories are eleven current and former employees of OpenAI, as well as two current or former employees of Google DeepMind, one of whom also worked at Anthropic.

“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily,” the letter reads.


OpenAI, in response, defended its practices, highlighting that it has mechanisms such as a tipline for reporting issues within the company and asserting that new technologies are not released until appropriate safeguards are in place. Google, however, did not immediately comment.

Concerns regarding the potential dangers of artificial intelligence have been present for decades (Image credit)

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world,” an OpenAI spokesperson stated.

Concerns regarding the potential dangers of artificial intelligence have been present for decades. However, the rapid expansion of AI in recent years has heightened these fears and left regulators struggling to keep pace with technological advancements. While AI companies have publicly pledged to develop technology responsibly, researchers and employees have raised alarms about the lack of oversight, pointing out that AI tools can amplify existing social issues or introduce new ones.

The letter from current and former employees of AI companies, initially reported by the New York Times, advocates for stronger protections for workers at advanced AI firms who raise safety concerns. It urges adherence to four principles focused on transparency and accountability, including a commitment not to compel employees to sign non-disparagement agreements that prevent them from discussing AI-related risks, and establishing a system for employees to anonymously share their concerns with board members.

This recent open letter from AI industry employees is not an isolated incident (Image credit)

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the letter reads.

Companies like OpenAI have reportedly employed stringent measures to prevent employees from discussing their work openly. According to a Vox report from last week, OpenAI required departing employees to sign highly restrictive non-disparagement and non-disclosure agreements or risk losing their vested equity. In response to the backlash, OpenAI’s CEO, Sam Altman, issued an apology and promised to revise the company’s off-boarding procedures.

The open letter follows the recent resignations of two prominent OpenAI figures: co-founder Ilya Sutskever and leading safety researcher Jan Leike. After his departure, Leike criticized OpenAI, claiming that the company had shifted its focus from safety to pursuing “shiny products.”

An ongoing issue

This recent open letter from AI industry employees is not an isolated incident. In March 2023, the Future of Life Institute published a similar letter, signed by approximately 1,000 AI experts and tech executives, including notable figures like Elon Musk and Steve Wozniak. That earlier letter urged AI laboratories to pause the development of AI systems more powerful than GPT-4, citing “profound risks” to human society, and called for a public, verifiable halt to the training of such systems for at least six months, involving all key actors.

The group highlighted that AI systems with human-competitive intelligence pose significant dangers to society and humanity, as supported by extensive research and acknowledged by leading AI labs. They warned that these advanced AI systems could bring about a monumental shift in the history of life on Earth, one that demands careful and well-resourced planning and management. Such oversight, they argued, is lacking, with AI labs instead racing to create increasingly powerful digital minds that not even their creators can understand, predict, or reliably control.

In May 2023, Geoffrey Hinton, often referred to as the godfather of artificial intelligence, left Google and voiced regrets about his contributions to the field. Hinton, who helped pioneer the neural-network technology behind systems like ChatGPT, warned of the significant risks posed by AI chatbots.

The mounting concerns and calls for action from within the AI community underscore the urgent need for robust safety measures and transparent, responsible development practices in the rapidly evolving field of artificial intelligence.


Featured image credit: Google DeepMind/Unsplash

Tags: AI, artificial intelligence, DeepMind, Google, OpenAI
