OpenAI’s products are not as secure as you might expect

The tech giant's prominence has attracted global attention, making it a prime target for threat actors seeking valuable user data

by Emre Çıtak
July 5, 2024
in Artificial Intelligence

OpenAI seems to make headlines every day, and this time it’s for a double dose of security concerns. Known for its cutting-edge advances in artificial intelligence, the company has been thrust into the spotlight by not one but two significant security lapses.

These incidents have raised questions about the company’s data handling and cybersecurity protocols, shaking the confidence of both users and industry experts.

ChatGPT for Mac stored chats in plain text

The Mac app for ChatGPT has been a popular way to use OpenAI’s flagship language model, GPT-4o, straight from the desktop. This week, however, a significant security flaw in the app came to light.

Pedro José Pereira Vieito, a Swift developer, discovered that the app was storing user conversations in plain text locally on the device. This means that any sensitive information shared during these conversations was not protected, making it accessible to other applications or potential malware.
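To see why this mattered, consider how little effort reading such files takes. The Swift sketch below is purely illustrative: the directory path and bundle identifier are assumptions rather than the app’s confirmed layout, but the underlying point holds for any unencrypted file in a user-readable location, since any non-sandboxed process running as the same user can open it without special privileges.

```swift
import Foundation

// Hypothetical sketch; the exact path and bundle ID are assumptions.
// Before the fix, conversations were reportedly kept unencrypted under
// the app's Application Support directory, where any non-sandboxed
// process running as the same user can read them.
let supportDir = FileManager.default
    .homeDirectoryForCurrentUser
    .appendingPathComponent("Library/Application Support/com.openai.chat")

let files = (try? FileManager.default.contentsOfDirectory(
    at: supportDir, includingPropertiesForKeys: nil)) ?? []

for file in files {
    // Plain text means a simple read exposes the full conversation.
    if let text = try? String(contentsOf: file, encoding: .utf8) {
        print("Readable by any local process: \(file.lastPathComponent)")
        print(text.prefix(200))
    }
}
```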

macOS has blocked access to any user private data since macOS Mojave 10.14 (6 years ago!). Any app accessing private user data (Calendar, Contacts, Mail, Photos, any third-party app sandbox, etc.) now requires explicit user access.

— Pedro José Pereira Vieito (@pvieito) July 2, 2024

Vieito’s findings were quickly picked up by tech news outlet The Verge, amplifying the concern among users and prompting OpenAI to take swift action. The company released an update that introduced encryption for the locally stored chats, addressing the immediate security concern.
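At a high level, the fix is encryption at rest. Below is a minimal sketch of what that can look like with Apple’s CryptoKit; it is not OpenAI’s actual implementation, which has not been published, and the key handling is simplified for brevity (a real app would keep the key in the Keychain rather than generating it in memory).

```swift
import CryptoKit
import Foundation

// Minimal encryption-at-rest sketch, not OpenAI's published code.
// The key lives in memory only for brevity; store it in the Keychain.
let key = SymmetricKey(size: .bits256)

func sealToDisk(_ plaintext: String, key: SymmetricKey, url: URL) throws {
    // AES-GCM packs nonce + ciphertext + auth tag into one blob.
    let box = try AES.GCM.seal(Data(plaintext.utf8), using: key)
    try box.combined!.write(to: url)  // combined is non-nil for the default nonce
}

func openFromDisk(key: SymmetricKey, url: URL) throws -> String {
    let box = try AES.GCM.SealedBox(combined: try Data(contentsOf: url))
    return String(decoding: try AES.GCM.open(box, using: key), as: UTF8.self)
}

// What lands on disk is ciphertext, unreadable without the key.
let url = URL(fileURLWithPath: "/tmp/chat.bin")
try sealToDisk("user: here is my private question...", key: key, url: url)
print(try openFromDisk(key: key, url: url))
```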

The absence of sandboxing, a security practice that isolates applications so that a flaw in one cannot affect the rest of the system, further complicated the issue. Since the ChatGPT app is distributed outside the Mac App Store, it does not have to comply with Apple’s sandboxing requirements.
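A sandboxed app is confined to its own container and cannot reach into other apps’ data. As a rough illustration, one common heuristic for checking at runtime whether a process is sandboxed is to look for the APP_SANDBOX_CONTAINER_ID environment variable, which macOS sets for sandboxed processes:

```swift
import Foundation

// Heuristic check: macOS sets APP_SANDBOX_CONTAINER_ID in the
// environment of sandboxed processes. Apps distributed outside the
// Mac App Store can opt out of the sandbox entirely.
let sandboxed = ProcessInfo.processInfo
    .environment["APP_SANDBOX_CONTAINER_ID"] != nil

print(sandboxed
    ? "Sandboxed: confined to this app's own container"
    : "Not sandboxed: able to read other apps' unprotected files")
```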

This loophole allowed the app to store data insecurely, exposing users to potential risk.

That such a basic security oversight occurred in the first place has raised questions about OpenAI’s internal security practices and the thoroughness of its app development process.

The quick fix by OpenAI has mitigated the immediate threat, but it has also highlighted the need for more stringent security measures in the development and deployment of AI applications.

A hacker’s playbook

The second security issue facing OpenAI is rooted in an incident from last spring, but its repercussions are still felt today.

Early last year, a hacker managed to breach OpenAI’s internal messaging systems, gaining access to sensitive information about the company’s operations and AI technologies. The hacker lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its AI.

OpenAI executives revealed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023, but decided not to disclose the breach publicly, since no customer or partner information had been stolen and they did not see it as a threat to national security. Believing the hacker to be a private individual with no ties to a foreign government, they did not inform the FBI or other law enforcement agencies.

[Image: The ChatGPT Mac app was found storing user conversations in plain text, posing a security risk]

This decision led to internal concerns about OpenAI’s security posture. Some employees worried that foreign adversaries, such as China, could potentially exploit similar vulnerabilities to steal AI technology that might eventually pose a threat to U.S. national security.

Leopold Aschenbrenner, an OpenAI technical program manager, argued that the company was not doing enough to prevent such threats. He sent a memo to the board of directors outlining his concerns and was later fired, a move he claims was retaliation for his whistleblowing.

Whispered worries

The internal breach and the subsequent handling of the incident have exposed deeper fractures within OpenAI over security practices and transparency. Aschenbrenner’s dismissal has sparked debate about the company’s commitment to security and how it addresses internal dissent. While OpenAI maintains that his termination was unrelated to his whistleblowing, the episode has highlighted tensions within the company.

The breach and its aftermath have also underscored the potential geopolitical risks associated with advanced AI technologies. The fear that AI secrets could be stolen by foreign adversaries like China is not unfounded. Similar concerns have been raised in the tech industry, notably by Microsoft President Brad Smith, who testified about Chinese hackers exploiting tech systems to attack federal networks.

Despite these concerns, federal and state laws prevent companies like OpenAI from discriminating based on nationality. Experts argue that excluding foreign talent could hinder AI development in the U.S. OpenAI’s head of security, Matt Knight, emphasized the need to balance these risks while leveraging the best talent worldwide to advance AI technologies.

[Image: In spring 2023, a hacker breached OpenAI’s internal messaging systems, gaining access to sensitive information]

What’s OpenAI’s power play?

In response to these incidents, OpenAI has taken steps to bolster its security. The company has established a Safety and Security Committee to evaluate and mitigate the risks posed by future technologies. Its members include notable figures such as Paul Nakasone, a retired Army general who led the National Security Agency and U.S. Cyber Command and who has also been appointed to OpenAI’s board of directors.

OpenAI’s commitment to security is further evidenced by its ongoing investments in safeguarding its technologies. Knight highlighted that these efforts began years before the introduction of ChatGPT and continue to evolve as the company seeks to understand and address emerging risks. Despite these proactive measures, the recent incidents have shown that the journey to robust security is ongoing and requires constant vigilance.


Featured image credit: Kim Menikh/Unsplash

Tags: Featured, OpenAI
