
Introduction
AI agents like Salesforce Agentforce represent the next leap in CRM automation—but also a massive new attack surface. In September 2025, Noma Labs disclosed ForcedLeak—a critical CVSS 9.4 vulnerability chain that revealed how AI-driven systems can be exploited through indirect prompt injection attacks.
What Is ForcedLeak?
ForcedLeak allowed attackers to:
-
Embed malicious instructions inside Web-to-Lead forms.
-
Exploit Salesforce’s Content Security Policy (CSP) bypass.
-
Exfiltrate sensitive CRM data (contacts, sales pipelines, integration data).
Unlike chatbots, Agentforce executes autonomous multi-step tasks, meaning a hidden payload could run during normal employee workflows—without detection.
Who Was at Risk?
Any company using Salesforce Agentforce Web-to-Lead functionality, especially in:
-
Sales & Marketing campaigns
-
Customer acquisition workflows
-
External event lead collection (conferences, trade shows, web forms)
Key Business Impacts
-
CRM Data Exposure: Customer records, sales strategies, internal communications
-
Regulatory Risk: Compliance failures and breach reporting penalties
-
Reputation Damage: Loss of customer trust
-
Blast Radius: Attackers could pivot into integrated systems & APIs, compounding impact
Attack Path Summary
-
Injection: Attacker submits crafted lead data via Salesforce Web-to-Lead.
-
Trigger: Employee queries AI about the lead.
-
Execution: Agentforce processes malicious instructions as legitimate.
-
Exfiltration: Data sent through an expired whitelisted CSP domain.
How Salesforce Responded
Salesforce quickly:
-
Patched Agentforce to enforce Trusted URLs.
-
Secured expired whitelist domains.
-
Released mitigation guidance for customers.
What Organizations Should Do Now
-
Apply Salesforce’s patches and enforce Trusted URLs.
-
Audit historical leads for suspicious instructions.
-
Implement input sanitization across all user-controlled data.
-
Deploy AI security monitoring for indirect prompt injection patterns.
Conclusion
ForcedLeak highlights a new reality: AI agents expand the attack surface beyond simple prompts. With autonomous decision-making, memory, and integrations, they can unknowingly become conduits for hidden attacker logic.
Organizations must shift to AI-native security practices—from prompt injection detection to strict CSP controls—to prevent the next exploit.