Cursor’s AI coding agent morphed ‘into local shell’ with one-line prompt attack

Threat researchers at AimLabs on Friday disclosed a data-poisoning attack affecting the AI-powered code editing software Cursor that would have given an attacker remote code execution privileges over user devices.
According to AimLabs, the flaw was reported to Cursor on July 7 and a patch was included in an update one day later for version 1.3 of Cursor. All previous versions of the software remain “susceptible to remote-code execution triggered by a single externally-hosted prompt-injection,” according to a blog post from the company.
The vulnerability, being tracked under CVE-2025-54135, occurs when Cursor interacts with a Model Contest Protocol (MCP) server that helps the software access a number of external tools from Slack, GitHub and other databases that are used to develop software.
But like EchoLeak — another AI model flaw discovered by AimLabs last month — Cursor’s agent can be hijacked and manipulated through malicious prompts when it fetches data from MCP servers.
Through a single line of prompting, an attacker can influence the actions of Cursor — which has developer-level privileges on host devices — in ways that are nearly silent and invisible to the user. In this case, the researchers executed their prompt injection directly through Slack, which was fetched by Cursor through a connected MCP server. This prompt altered Cursor’s configuration file, causing it to add another server with a malicious start command.
Crucially, the moment these edits are given to Cursor it executes the malicious commands immediately, before the user can reject the suggestion.
It’s a reminder that many organizations and developers are integrating AI systems into their business operations without fully understanding where it may open them up to new risks. Not only do these models routinely generate out insecure software code, but the agents themselves are suggestible to instructions from external third-parties. A single poisoned document can “morph an AI agent into a local shell.”
“The tools expose the agent to external and untrusted data, which can affect the agent’s control-flow,” the company wrote. “This in turn, allows attackers to hijack the agent’s session and take advantage of the agent’s privileges to perform on behalf of the user.”
While this vulnerability has been fixed, the researchers said this type of flaw is inextricably tied to the way most large language models operate, ingesting commands and direction in the form of external prompting. As a result, they believe it’s likely that most major models will continue to be vulnerable to similar variants of the same problem.
“Because model output steers the execution path of any AI agent, this vulnerability pattern is intrinsic and keeps resurfacing across multiple platforms,” the blog concluded.