Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Salesforce Agentforce hit by Noma “ForcedLeak” exploit

Researchers at Noma uncovered a critical prompt-injection flaw called “ForcedLeak” in Salesforce’s Agentforce AI agents, scoring 9.4/10 on the CVSS scale. Attackers can embed malicious prompts in standard Salesforce web forms, tricking the AI into exfiltrating sensitive CRM data to whitelisted domains — including one that had expired and could be purchased.

byAytun Çelebi
September 26, 2025
in Cybersecurity
Home News Cybersecurity

Researchers at Noma have disclosed a prompt-injection vulnerability, named “ForcedLeak,” affecting Salesforce’s Agentforce autonomous AI agents. The flaw allows attackers to embed malicious prompts in web forms, causing the AI agent to exfiltrate sensitive customer relationship management data.

The vulnerability targets Agentforce, an AI platform within the Salesforce ecosystem for creating autonomous agents for business tasks. Security firm Noma identified a critical vulnerability chain, assigning it a 9.4 out of 10 score on the CVSS severity scale. The attack, dubbed “ForcedLeak,” is described as a cross-site scripting (XSS) equivalent for the AI era. Instead of code, an attacker plants a malicious prompt into an online form that an agent later processes, compelling it to leak internal data.

The attack vector uses standard Salesforce web forms, such as a Web-to-Lead form for sales inquiries. These forms typically contain a “Description” field for user comments, which serves as the injection point for the malicious prompt. This tactic is an evolution of historical attacks where similar fields were used to inject malicious code. The vulnerability exists because an AI agent may not distinguish between benign user input and disguised instructions within it.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

To establish the attack’s viability, Noma researchers first tested the “context boundaries” of the Agentforce AI. They needed to verify if the model, designed for specific business functions, would process prompts outside its intended scope. The team submitted a simple, non-sales question: “What color do you get by mixing red and yellow?” The AI’s response, “Orange,” confirmed it would entertain matters beyond sales interactions. This result demonstrated that the agent was susceptible to processing arbitrary instructions, a precondition for a prompt injection attack.

With the AI’s susceptibility established, an attacker could embed a malicious prompt in a Web-to-Lead form. When an employee uses an AI agent to process these leads, the agent executes the hidden instructions. Although Agentforce is designed to prevent data exfiltration to arbitrary web domains, researchers found a critical flaw. They discovered that Salesforce’s Content Security Policy whitelisted several domains, including an expired one: “my-salesforce-cms.com.” An attacker could purchase this domain. In their proof-of-concept, Noma’s malicious prompt instructed the agent to send a list of internal customer leads and their email addresses to this specific, whitelisted domain, successfully bypassing the security control.

Alon Tron, co-founder and CTO of Noma, outlined the severity of a successful compromise. “And that’s basically the game over,” Tron stated. “We were able to compromise the agent and tell it to do whatever.” He explained that the attacker is not limited to data exfiltration. A compromised agent could also be instructed to alter information within the CRM, delete entire databases, or be used as a foothold to pivot into other corporate systems, widening the impact of the initial breach.

Researchers warned that a ForcedLeak attack could expose a vast range of sensitive data. This includes internal data like confidential communications and business strategy insights. A breach could also expose extensive employee and customer details. CRMs often contain notes with personally identifiable information (PII) such as a customer’s age, hobbies, birthday, and family status. Furthermore, records of customer interactions are at risk, including call dates and times, meeting locations, conversation summaries, and full chat transcripts from automated tools. Transactional data, such as purchase histories, order information, and payment details, could also be compromised, providing attackers a comprehensive view of customer relationships.

Andy Shoemaker, CISO for CIQ Systems, commented on how this stolen information could be weaponized. He stated that “any and all of this sales information could be used and to target engineering attacks of every type.” Shoemaker explained that with access to sales data, attackers know who is expecting certain communications and from whom, allowing them to craft highly targeted and believable attacks. He concluded, “In short, sales data can be some of the best data for the attackers to use to select and effectively target their victims.”

Salesforce’s initial recommendation to mitigate the risk involves user-side configuration. The company advised users to add any necessary external URLs that agents depend on to the Salesforce Trusted URLs list or to include them directly in the agent’s instructions. This applies to external resources such as feedback forms from services like forms.google.com, external knowledge bases, or other third-party websites that are part of an agent’s legitimate workflow.

To address the specific exploit, Salesforce released technical patches that prevent Agentforce agents from sending output to trusted URLs, directly countering the exfiltration method used in the proof-of-concept. A Salesforce spokesperson provided a formal statement: “Salesforce is aware of the vulnerability reported by Noma and has released patches that prevent output in Agentforce agents from being sent to trusted URLs. The security landscape for prompt injection remains a complex and evolving area, and we continue to invest in strong security controls and work closely with the research community to help protect our customers as these types of issues surface.”

According to Noma’s Alon Tron, while the patches are effective, the fundamental challenge remains. “It’s a complicated issue, defining and getting the AI to understand what’s malicious or not in a prompt,” he explained. This highlights the core difficulty in securing AI models from malicious instructions embedded in user input. Tron noted that Salesforce is pursuing a deeper fix, stating, “Salesforce is working to actually fix the root cause, and provide more robust types of prompt filtering. I expect them to add more robust layers of defense.”


Featured image credit

Tags: FeaturedForcedLeakSalesforce Agentforce

Related Posts

Co-op Group reports £75m loss after April cyber-attack

Co-op Group reports £75m loss after April cyber-attack

September 25, 2025
Taiwan industrial production up 14.4% in August thanks to AI chips

Taiwan industrial production up 14.4% in August thanks to AI chips

September 25, 2025
LastPass: GitHub hosts atomic stealer malware campaign

LastPass: GitHub hosts atomic stealer malware campaign

September 25, 2025
Sentinelone finds malterminal malware using OpenAI GPT-4

Sentinelone finds malterminal malware using OpenAI GPT-4

September 23, 2025
FBI warns of fake IC3 websites stealing data

FBI warns of fake IC3 websites stealing data

September 23, 2025
Radware finds ChatGPT deep research ShadowLeak zero-click flaw

Radware finds ChatGPT deep research ShadowLeak zero-click flaw

September 23, 2025

LATEST NEWS

Asus ROG Ally, Ally X preorders open; ships October 16

OpenAI: GDPval framework tests AI on real-world jobs

Hugging Face: AI video energy use scales non-linearly

Apple Wallet digital ID expands to North Dakota

Salesforce Agentforce hit by Noma “ForcedLeak” exploit

BetterPic: The industry-leading AI headshot generator

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.