ServiceNow patches critical AI platform flaw that could allow user impersonation
ServiceNow has addressed a critical security vulnerability in its AI platform that could have allowed unauthenticated users to impersonate legitimate users and perform unauthorized actions, the company disclosed Monday.
The flaw, designated CVE-2025-12420 and carrying a severity score of 9.3 out of 10, was discovered by SaaS security firm AppOmni in October. ServiceNow deployed fixes to most hosted instances on Oct. 30, 2025, and provided patches to partners and self-hosted customers. The company said it has no evidence the vulnerability was exploited before the fix.
The vulnerability affected Now Assist AI Agents and Virtual Agent API components. Customers using affected versions were advised to upgrade to patched releases, which include Now Assist AI Agents version 5.1.18 or later and 5.2.19 or later, and Virtual Agent API version 3.15.2 or later and 4.0.4 or later.
The disclosure arrives as security researchers raise broader questions about the configuration and deployment of enterprise AI systems. AppOmni’s research, which led to the vulnerability discovery, also revealed that default settings in ServiceNow’s Now Assist platform could enable second-order prompt injection attacks, a sophisticated exploit method that manipulates AI agents through data they process rather than direct user input.
These attacks exploit a feature called agent discovery, which allows AI agents to communicate with each other to complete complex tasks. While designed to enhance functionality, the feature creates potential attack vectors when agents are improperly configured or grouped together without adequate controls.
In testing scenarios, researchers demonstrated that low-privileged users could embed malicious instructions in data fields that higher-privileged users’ AI agents would later process. The compromised agent could then recruit other more powerful agents to execute unauthorized actions, including accessing restricted records, modifying data, and potentially escalating user privileges.
The attacks succeeded even with ServiceNow’s prompt injection protection feature enabled, highlighting how configuration choices can undermine security controls embedded in the AI systems themselves. The researchers found that default settings automatically grouped agents into teams and marked them as discoverable, creating unintended collaboration pathways that attackers could exploit.
The research underscores a fundamental challenge in enterprise AI deployment: security depends not only on the underlying technology but also on how organizations configure and manage these systems. ServiceNow confirmed the behaviors identified by researchers were intentional design choices and updated its documentation to clarify configuration options.
Organizations using ServiceNow’s AI platform face the task of balancing autonomous agent capabilities against security risks. The research suggests several mitigation strategies, including requiring human supervision for agents with powerful capabilities, segmenting agents into isolated teams based on their functions, and monitoring agent behavior for deviations from expected patterns.
You can find more information on the vulnerability on ServiceNow’s website.