Site icon Check Point Blog

Cursor IDE: Persistent Code Execution via MCP Trust Bypass

CVE-2025-54136 –  MCPoison
Key Insights
  1. Critical RCE Flaw in Popular AI-powered IDE
    Check Point Research uncovered a persistent remote code execution vulnerability in Cursor, a fast-growing AI-powered coding platform trusted by developers worldwide.
  2. MCP Vulnerability
    Cursor allows attackers to gain long-term, silent access to developer environments by altering previously approved Model Context Protocol (MCPs), with no additional user prompt.
  3. Real-World Attack Scenario
    In shared repositories, a benign-looking MCP configuration can be weaponized after approval, triggering malicious code execution every time a project is opened in Cursor.
  4. Broader AI Supply Chain Risk
    The flaw exposes a critical weakness in the trust model behind AI-assisted development environments, raising the stakes for teams integrating LLMs and automation into their workflows.

Cursor is one of the fastest-growing AI-powered coding tools used by developers today. It combines local code editing with powerful large language model (LLM) integrations to help teams write, debug, and explore code more efficiently. But with that deep integration comes increased trust in automated workflows — and increased risk when that trust is exploited.

As AI-driven developer environments become more embedded in software development workflows, Check Point Research set out to evaluate the security model behind these tools, especially in collaborative environments where code, configuration files, and AI-based plugins are frequently shared across teams and repositories.

We discovered a high-impact vulnerability in Cursor’s Model Context Protocol (MCP) system that enables persistent remote code execution (RCE). Once a user approves a MCP configuration, an attacker can silently change its behavior. From that moment on, malicious commands can be executed every time the project is opened without any further prompts or notifications.

An attacker can:

This isn’t just a theoretical risk, it’s a real-world vulnerability. In shared coding environments, the flaw turns a trusted MCP into a stealthy, persistent point of compromise. For organizations relying on AI tools like Cursor, the implications are serious: silent, ongoing access to developer machines, credentials, and codebases, all triggered by a single, trusted approval.

For a technical understanding of the vulnerability, read the Check Point Research report.

How the Vulnerability Works

Cursos uses a system called Model Context Protocol (MCPs). These are configuration files that tell Cursor how to automate certain tasks. Think of them as a way for developers to plug in tools, scripts, or AI-driven workflows directly into their coding environment.

When a user opens a project that contains MCP configuration , Cursor shows a one-time approval prompt asking whether to trust it. But here’s the problem:

Once a MCP is approved, Cursor never checks it again, even if the commands inside it are silently changed later.

That means an attacker working in the same shared repository could:

Every time the victim opens the project in Cursor, the new command runs automatically without a new prompt or alert.

Proof of Concept: From Harmless MCP to Persistent Exploit

To show how this vulnerability works in practice, we created a proof of concept that mimics a typical attack scenario in a shared project:

  1. Step 1: A Harmless MCP
    The attacker first commits a completely safe MCP Configuration. Something as innocent as a command that just prints a message. When the victim opens the project, they see a prompt asking to approve this MCP.
  2. Step 2: Silent Switch to Malicious Behavior
    After approval, the attacker quietly changes the MCP configuration to malicious code, such as a script that opens a reverse shell or runs harmful system commands.
  3. Step 3: Automatic Execution Every Time
    Now, every time the victim opens the project in Cursor IDE, the malicious command runs silently without a warning or prompt.
  4. Step 4: Persistent, Invisible Access
    This gives the attacker repeated, stealthy access to the victim’s machine, making it possible to steal data, execute further attacks, or move laterally in the victim’s environment.
Real-World Impact

Because many organizations share and sync projects through repositories, this vulnerability creates an ideal way for attackers to establish long-term, hidden footholds.

Here’s why it’s so dangerous:

For companies relying on Cursor and similar AI-powered IDEs, understanding and addressing this vulnerability is critical to protecting their development environments and sensitive assets.

Disclosure and Mitigation

Upon identifying this critical vulnerability, Check Point Research promptly and responsibly disclosed the issue to the Cursor development team on July 16, 2025. Cursor released an update (version 1.3) on July 29th. Although the release notes did not explicitly reference the vulnerability, our independent tests confirm that the issue has been effectively mitigated. Specifically, any modification to an MCP configuration, including minor changes such as adding a space, now triggers a mandatory approval prompt, requiring the user to explicitly approve or reject the updated MCP before it takes effect.

To ensure protection against this vulnerability, we strongly recommend updating to the latest version of Cursor.

This vulnerability is part of a broader challenge facing modern development tools that deeply integrate AI. Platforms like Cursor streamline workflows by automating tasks through natural language and LLM-connected plugins. But with that convenience comes increased reliance on trust, often with limited visibility into how that trust can be abused.

To mitigate this class of vulnerability in AI-assisted development environments, we recommend:

Conclusion

The discovery of this persistent remote code execution vulnerability in Cursor IDE highlights a critical security challenge for AI-powered developer tools. As organizations increasingly rely on integrated AI workflows, ensuring that trust mechanisms are robust and verifiable is essential.

We encourage developers, security teams, and organizations to stay vigilant, audit their AI development environments, and work closely with vendors to address emerging threats. Only through proactive security can we safely harness the power of AI in software development.

For a technical understanding of the vulnerability, read the Check Point Research report.

Exit mobile version