Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

ChatGPT exploit: How hackers might steal information by using false memories

byAytun Çelebi
September 25, 2024
in News

Security researcher Johann Rehberger has exposed a serious vulnerability in ChatGPT that could permit attackers to record incorrect data alongside pernicious instructions in a user’s settings for long-term memory. After reporting the flaw to OpenAI, Rehberger noticed that the company initially dismissed it as a safety matter rather than a security concern. After Rehberger showed a proof-of-concept (PoC) exploit that used the vulnerability to permanently exfiltrate all user input, engineers at OpenAI became aware and released a partial fix earlier this month.

Exploiting long-term memory

According to Arstechnica, Rehberger found that you can alter ChatGPT’s long-term memory using indirect prompt injection. This method permits attackers to embed false memories or directions into untrusted material such as uploaded emails, blog entries, or documents.

Rehberger’s PoC demonstrated that tricking ChatGPT into opening a malicious web link allowed the attacker full control over capturing and dispatching all subsequent user input and ChatGPT responses to a server they controlled. Rehberger demonstrated how the exploit might cause ChatGPT to keep false information, including believing a user was 102 years old and lived in the Matrix, affecting all future discussions.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

OpenAI’s reply and continuing risks

OpenAI initially responded to Rehberger’s report by closing it, classifying the vulnerability as a safety matter rather than a security problem. After sharing the PoC, the company released a patch to prevent the exploit from functioning as an exfiltration vector. Even so, Rehberger pointed out that the fundamental issue of prompt injections remains unsolved. While the explicit strategy for data theft was confronted, manipulative actors could still influence the memory instrument to incorporate fabricated data into a user’s long-term memory settings.

Rehberger noted in the video demonstration, “What’s particularly intriguing is that this exploit persists in memory. The prompt injection successfully integrated memory into ChatGPT’s long-term storage, and even when beginning a new chat, it doesn’t stop exfiltrating data.

Thanks to the API rolled out last year by OpenAI, this specific attack method is not feasible through the ChatGPT web interface.

How to protect yourself from ChatGPT (or LLM) memory exploits?

Those using LLM who want to keep their exchanges with ChatGPT secure are encouraged to look out for updates to the memory system during their sessions. End users must repeatedly check and attend to archived memories for suspicious content. Users have guidance from OpenAI on managing these memory settings, and they can additionally decide to turn off the memory function to eliminate these possible risks.

Due to ChatGPT’s memory capabilities, users can help protect their data from possible exploits by keeping their guard up and taking measures beforehand.

Tags: AIchatgptexploit

Related Posts

Is ChatGPT down again? Reports indicate ongoing outage

Is ChatGPT down again? Reports indicate ongoing outage

October 24, 2025
Path of Exile: Keepers of the Flame will be the Breach 2.0!

Path of Exile: Keepers of the Flame will be the Breach 2.0!

October 24, 2025
Google Meet now lets you move people in and out of meetings like a lobby

Google Meet now lets you move people in and out of meetings like a lobby

October 24, 2025
Sam Altman: AI will cause “strange or scary moments”

Sam Altman: AI will cause “strange or scary moments”

October 24, 2025
Anthropic gives Claude a real memory and lets users edit it directly

Anthropic gives Claude a real memory and lets users edit it directly

October 24, 2025
Nissan’s Sakura EV gets a solar roof that adds 1,800 miles a year

Nissan’s Sakura EV gets a solar roof that adds 1,800 miles a year

October 24, 2025

LATEST NEWS

Is ChatGPT down again? Reports indicate ongoing outage

Path of Exile: Keepers of the Flame will be the Breach 2.0!

Google Meet now lets you move people in and out of meetings like a lobby

Sam Altman: AI will cause “strange or scary moments”

Anthropic gives Claude a real memory and lets users edit it directly

Nissan’s Sakura EV gets a solar roof that adds 1,800 miles a year

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.