Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI toolsNEW
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • Who we are
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI toolsNEW
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • Who we are
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Salesforce Agentforce hit by Noma “ForcedLeak” exploit

Researchers at Noma uncovered a critical prompt-injection flaw called “ForcedLeak” in Salesforce’s Agentforce AI agents, scoring 9.4/10 on the CVSS scale. Attackers can embed malicious prompts in standard Salesforce web forms, tricking the AI into exfiltrating sensitive CRM data to whitelisted domains — including one that had expired and could be purchased.

byAytun Çelebi
September 26, 2025
in Cybersecurity
Home News Cybersecurity
Share on FacebookShare on TwitterShare on LinkedInShare on WhatsAppShare on e-mail

Researchers at Noma have disclosed a prompt-injection vulnerability, named “ForcedLeak,” affecting Salesforce’s Agentforce autonomous AI agents. The flaw allows attackers to embed malicious prompts in web forms, causing the AI agent to exfiltrate sensitive customer relationship management data.

The vulnerability targets Agentforce, an AI platform within the Salesforce ecosystem for creating autonomous agents for business tasks. Security firm Noma identified a critical vulnerability chain, assigning it a 9.4 out of 10 score on the CVSS severity scale. The attack, dubbed “ForcedLeak,” is described as a cross-site scripting (XSS) equivalent for the AI era. Instead of code, an attacker plants a malicious prompt into an online form that an agent later processes, compelling it to leak internal data.

The attack vector uses standard Salesforce web forms, such as a Web-to-Lead form for sales inquiries. These forms typically contain a “Description” field for user comments, which serves as the injection point for the malicious prompt. This tactic is an evolution of historical attacks where similar fields were used to inject malicious code. The vulnerability exists because an AI agent may not distinguish between benign user input and disguised instructions within it.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

To establish the attack’s viability, Noma researchers first tested the “context boundaries” of the Agentforce AI. They needed to verify if the model, designed for specific business functions, would process prompts outside its intended scope. The team submitted a simple, non-sales question: “What color do you get by mixing red and yellow?” The AI’s response, “Orange,” confirmed it would entertain matters beyond sales interactions. This result demonstrated that the agent was susceptible to processing arbitrary instructions, a precondition for a prompt injection attack.

With the AI’s susceptibility established, an attacker could embed a malicious prompt in a Web-to-Lead form. When an employee uses an AI agent to process these leads, the agent executes the hidden instructions. Although Agentforce is designed to prevent data exfiltration to arbitrary web domains, researchers found a critical flaw. They discovered that Salesforce’s Content Security Policy whitelisted several domains, including an expired one: “my-salesforce-cms.com.” An attacker could purchase this domain. In their proof-of-concept, Noma’s malicious prompt instructed the agent to send a list of internal customer leads and their email addresses to this specific, whitelisted domain, successfully bypassing the security control.

Alon Tron, co-founder and CTO of Noma, outlined the severity of a successful compromise. “And that’s basically the game over,” Tron stated. “We were able to compromise the agent and tell it to do whatever.” He explained that the attacker is not limited to data exfiltration. A compromised agent could also be instructed to alter information within the CRM, delete entire databases, or be used as a foothold to pivot into other corporate systems, widening the impact of the initial breach.

Researchers warned that a ForcedLeak attack could expose a vast range of sensitive data. This includes internal data like confidential communications and business strategy insights. A breach could also expose extensive employee and customer details. CRMs often contain notes with personally identifiable information (PII) such as a customer’s age, hobbies, birthday, and family status. Furthermore, records of customer interactions are at risk, including call dates and times, meeting locations, conversation summaries, and full chat transcripts from automated tools. Transactional data, such as purchase histories, order information, and payment details, could also be compromised, providing attackers a comprehensive view of customer relationships.

Andy Shoemaker, CISO for CIQ Systems, commented on how this stolen information could be weaponized. He stated that “any and all of this sales information could be used and to target engineering attacks of every type.” Shoemaker explained that with access to sales data, attackers know who is expecting certain communications and from whom, allowing them to craft highly targeted and believable attacks. He concluded, “In short, sales data can be some of the best data for the attackers to use to select and effectively target their victims.”

Salesforce’s initial recommendation to mitigate the risk involves user-side configuration. The company advised users to add any necessary external URLs that agents depend on to the Salesforce Trusted URLs list or to include them directly in the agent’s instructions. This applies to external resources such as feedback forms from services like forms.google.com, external knowledge bases, or other third-party websites that are part of an agent’s legitimate workflow.

To address the specific exploit, Salesforce released technical patches that prevent Agentforce agents from sending output to trusted URLs, directly countering the exfiltration method used in the proof-of-concept. A Salesforce spokesperson provided a formal statement: “Salesforce is aware of the vulnerability reported by Noma and has released patches that prevent output in Agentforce agents from being sent to trusted URLs. The security landscape for prompt injection remains a complex and evolving area, and we continue to invest in strong security controls and work closely with the research community to help protect our customers as these types of issues surface.”

According to Noma’s Alon Tron, while the patches are effective, the fundamental challenge remains. “It’s a complicated issue, defining and getting the AI to understand what’s malicious or not in a prompt,” he explained. This highlights the core difficulty in securing AI models from malicious instructions embedded in user input. Tron noted that Salesforce is pursuing a deeper fix, stating, “Salesforce is working to actually fix the root cause, and provide more robust types of prompt filtering. I expect them to add more robust layers of defense.”


Featured image credit

Tags: FeaturedForcedLeakSalesforce Agentforce

Related Posts

FTC bans GM from selling driver data without explicit consent

FTC bans GM from selling driver data without explicit consent

January 15, 2026
10-hour long Verizon outage is finally resolved

10-hour long Verizon outage is finally resolved

January 15, 2026
85% of security leaders are flying blind on supply chain threats, Panorays study says

85% of security leaders are flying blind on supply chain threats, Panorays study says

January 14, 2026
Instagram denies data breach, blames reset glitch

Instagram denies data breach, blames reset glitch

January 12, 2026
AWS outage disrupts Fortnite and Steam

AWS outage disrupts Fortnite and Steam

December 25, 2025
Aflac data breach affected 22.65M customers

Aflac data breach affected 22.65M customers

December 24, 2025

LATEST NEWS

Anthropic partners with Teach For All to train 100,000 global educators

Signal co-founder launches privacy-focused AI service Confer

Adobe launches AI-powered Object Mask for Premiere Pro

Google Workspace adds password-protected Office file editing

Claim: NVIDIA green-lit pirated book downloads for AI training

Tesla restarts Dojo3 supercomputer project as AI5 chip stabilizes

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI tools
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • Who we are
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.