Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Attackers used AI prompts to silently exfiltrate code from GitHub repositories

The exploit relied on GitHub’s “invisible comments” feature to hide malicious instructions from human reviewers.

byKerem Gülen
October 15, 2025
in Cybersecurity, News

A critical vulnerability in GitHub Copilot Chat, dubbed “CamoLeak,” allowed attackers to silently steal source code and secrets from private repositories using a sophisticated prompt injection technique. The flaw, which carried a CVSS score of 9.6, has since been patched by GitHub.

How the CamoLeak attack worked

The attack method, discovered by security researcher Omer Mayraz, began by hiding malicious instructions within a pull request description using GitHub’s “invisible comments” feature. While this content is not visible to users in the standard interface, Copilot Chat ingests all repository and pull request context, including this hidden metadata, when generating responses.

The vulnerability was triggered when a legitimate developer with access to private repositories would ask Copilot Chat a question about the compromised pull request. Copilot, which operates with the permissions of the querying user, would then execute the hidden malicious prompt. This allowed the attacker to command the AI assistant to search for sensitive information, such as API keys or source code, within the victim’s accessible private repositories.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

To exfiltrate the stolen data, the attack leveraged GitHub’s own “Camo” image proxy service. Normally, GitHub’s Content Security Policy (CSP) prevents content from directly leaking data to external domains. The Camo proxy is designed to safely route external image requests, rewriting URLs to a camo.githubusercontent.com address with a cryptographic signature.

The CamoLeak attack bypassed these protections by first having the attacker create a dictionary of pre-signed Camo URLs. Each valid URL pointed to a benign, invisible 1×1 pixel image hosted on the attacker’s server, with each unique URL representing a single character of data (e.g., ‘A’, ‘B’, ‘1’, ‘;’).

The injected prompt then instructed Copilot to construct its response by referencing these pre-signed image URLs in a specific sequence that encoded the stolen repository content. When the victim’s browser rendered Copilot’s output, it made a series of requests through the trusted Camo proxy to fetch each invisible pixel. The sequence of these requests, as received by the attacker’s server, effectively reconstructed the stolen data character by character, all without displaying any malicious content to the user or triggering standard network security alerts.


Featured image credit

Tags: camoleakGithub

Related Posts

Microsoft’s biggest-ever Patch Tuesday fixes 175 bugs

Microsoft’s biggest-ever Patch Tuesday fixes 175 bugs

October 15, 2025
Jensen Huang says every Nvidia engineer now codes with Cursor

Jensen Huang says every Nvidia engineer now codes with Cursor

October 15, 2025
Apple unveils new iPad Pro with the M5 chip

Apple unveils new iPad Pro with the M5 chip

October 15, 2025
Apple Vision Pro gets M5 chip upgrade and PS VR2 controller support

Apple Vision Pro gets M5 chip upgrade and PS VR2 controller support

October 15, 2025
Android 16 now shows which apps sneak in your security settings

Android 16 now shows which apps sneak in your security settings

October 15, 2025
4 Samsung Galaxy models just lost all software support

4 Samsung Galaxy models just lost all software support

October 15, 2025

LATEST NEWS

Microsoft’s biggest-ever Patch Tuesday fixes 175 bugs

Jensen Huang says every Nvidia engineer now codes with Cursor

Apple unveils new iPad Pro with the M5 chip

Apple Vision Pro gets M5 chip upgrade and PS VR2 controller support

Attackers used AI prompts to silently exfiltrate code from GitHub repositories

Android 16 now shows which apps sneak in your security settings

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.