
Cursor Security Flaw Reveals AI Agent MCP Risk Pattern

CloudShip
6 min read

Everyone's reacting to the recent Cursor security flaw like it's news, or blaming Cursor, just like they did with Claude last week. It's not new. This pattern has been here the whole time. But I'm glad it's getting attention, because a lot of people need to wake up to this.

What Happened with Cursor?

Researchers at Knostic demonstrated a JavaScript injection attack that can take over AI-assisted coding environments like Cursor and VSCode. The vulnerability allows malicious code introduced through extensions, MCP servers, or poisoned prompts to gain full file-system access and modify IDE functions.

The technical details: injected JavaScript inherits the IDE's full privileges in the Electron/Node.js environment. This means a compromised MCP server can modify or replace installed extensions, persist code that reattaches after restarts, and exfiltrate credentials without user visibility.

The researchers' key finding? "Every week, the attack surface against agents, and specifically AI coding assistants, expands." This isn't a bug in Cursor. It's the architecture of how MCP works. Cursor couldn't have prevented this - nor could we, nor could any platform using MCP. The risk comes from the MCP servers a user chooses to run, not from a flaw in any one product.

The Security Advice That Exposes the Real Problem

The security recommendation in Elizabeth Montalbano's Dark Reading article is telling: "Triple-check every MCP and extension. If there's doubt about its credibility, DO NOT USE IT."

We completely agree with this advice. But here's where it falls apart: In cloud infrastructure, when you're using a black-box AI SRE platform, YOU can't triple-check anything. You're trusting the vendor to do it for you.

Think about what this means:

  • You can't audit the MCPs running in someone else's cloud
  • You can't pin specific versions you've verified as safe
  • You can't review the code before it touches your infrastructure
  • You can't patch vulnerabilities yourself when they're discovered

One compromised MCP from a vendor, and every company using their platform is exposed simultaneously. That's not a security incident. That's a supply chain catastrophe.

The Pattern: Productivity Gains Come With Massive Attack Surface

Here's what nobody wants to talk about: AI agents are insanely productive. They're also insanely powerful. Those two things are connected. As Knostic's researchers noted, the attack surface against agents expands every week. MCPs are super powerful, but with that power comes massive responsibility.

The same capabilities that let an AI agent automate infrastructure provisioning, debug production issues, optimize cloud costs, and respond to incidents also let a compromised agent exfiltrate production data, escalate privileges, deploy malicious code, and delete critical resources.

You can't have the productivity without the power. And power without control is just risk. The attack surface is massive, and most companies are being negligent about it.

Yet Companies Are Handing Over Production Credentials

Despite this obvious risk, I'm watching companies hand AWS credentials, Grafana keys, GitHub tokens, and production database access to black-box SaaS platforms where they can't even see the prompts or MCPs being used.

"Just paste your credentials here. Trust our security. We've got this."

It's wild to me. You wouldn't hand your office keys to a random vendor on nothing but their word that they're legit. But that's exactly what's happening with infrastructure credentials and AI platforms. You can't audit what you can't see.

This vulnerability discovered in Cursor isn't an outlier. It's a preview of what happens when we treat AI agents like convenient utilities instead of privileged access to critical systems.

This Isn't Just About Cursor

Last week it was researchers using Claude to discover vulnerabilities and exploit systems. This week it's an MCP vulnerability discovered in Cursor. Next week it'll be something else.

The pattern is clear:

  • AI tool gets popular because it's incredibly useful
  • Companies integrate it deeply into their workflow
  • Security researchers find vulnerabilities
  • Everyone reacts with surprise
  • Repeat

We're treating symptoms, not the disease. The disease is outsourcing control of privileged infrastructure access to vendors we can't audit.

What Teams Actually Need to Do

The security advice is correct: teams need to triple-check MCPs, audit extensions, and verify what's running. But that advice is only actionable if you own the infrastructure the agents run on.

Let's be clear: neither Cursor nor we could have prevented this, because the flaw came from the MCP the user chose to run. But that's all the more reason your team needs to control and vet the MCPs you use rather than hand that power to a black box.

Here's what that looks like in practice:

Deploy Agents Yourself

Don't rely on a vendor's cloud infrastructure. Run agents on your own servers, in your own VPC, under your own security controls.
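
As a concrete sketch, here's roughly what that can look like with Docker Compose. Everything below is illustrative: the image names, registry, and env files are placeholders, not actual Station artifacts.

```yaml
# docker-compose.yml - illustrative sketch only; image names, registry,
# and env files are placeholders, not actual Station artifacts.
services:
  agent-runtime:
    image: registry.internal.example/agent-runtime:1.4.2  # your registry, your build
    env_file: ./secrets/agent.env   # credentials never leave this host
    networks: [agents]
    depends_on: [mcp-aws]

  mcp-aws:
    image: registry.internal.example/mcp-aws:2.0.1  # an MCP server you audited
    env_file: ./secrets/aws.env
    networks: [agents]

networks:
  agents:
    internal: true  # no external egress unless you explicitly allow it
```

The point of the internal network is that agents and MCP servers only talk to each other and to the systems you deliberately expose, not to the open internet.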

Audit MCPs Yourself

Review the code of every MCP server before deploying it. If you can't see the code, you can't verify it's safe.
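
One lightweight way to make that review stick is an allowlist checked into Git. The schema below is hypothetical; the point is that nothing deploys unless someone recorded an actual review of a specific revision.

```yaml
# mcp-allowlist.yml: hypothetical schema for a Git-tracked audit record.
# Policy: no MCP server deploys without an entry here.
approved_mcps:
  - name: mcp-aws
    source: https://github.com/example-org/mcp-aws   # hypothetical repo
    revision: "<commit sha you actually read>"
    reviewed_by: jane.doe
    review_notes: "No eval/exec, no network calls outside the AWS SDK."
  - name: mcp-grafana
    source: https://github.com/example-org/mcp-grafana
    revision: "<commit sha you actually read>"
    reviewed_by: sam.lee
    review_notes: "Read-only API scope; no shell access."
```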

Pin Versions You Approve

Don't auto-update to the latest version of an MCP. Pin specific versions you've audited and approved. Update on your schedule, not a vendor's.
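
In container terms, that means referencing images by immutable digest rather than a mutable tag. A sketch, with the digest as a placeholder for the build you actually audited:

```yaml
# Pin MCP images by immutable digest, not a mutable tag.
# A tag like :latest can be repointed upstream at any time; a digest cannot.
# Find digests for local images with: docker images --digests
services:
  mcp-aws:
    image: registry.internal.example/mcp-aws@sha256:<digest-of-the-build-you-audited>
```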

Keep Credentials On Your Servers

Your AWS keys, database passwords, and API tokens should never leave your infrastructure. Period.
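
On Kubernetes, for example, that means the credential lives in a Secret inside your cluster and is injected at runtime. A minimal sketch, with names and values as placeholders:

```yaml
# Kubernetes sketch: credentials stay inside your cluster and are
# injected at runtime; they never pass through a vendor's SaaS.
apiVersion: v1
kind: Secret
metadata:
  name: mcp-aws-credentials
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "<injected from your secret manager>"
  AWS_SECRET_ACCESS_KEY: "<injected from your secret manager>"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-aws
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mcp-aws
  template:
    metadata:
      labels:
        app: mcp-aws
    spec:
      containers:
        - name: mcp-aws
          image: registry.internal.example/mcp-aws@sha256:<audited-digest>
          envFrom:
            - secretRef:
                name: mcp-aws-credentials  # resolved inside the cluster only
```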

Patch Vulnerabilities Yourself

When a security issue is discovered, you need the ability to patch it immediately. Not wait for a vendor to roll out a fix. Not hope you weren't affected.
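
Owning the deployment turns this into a config change rather than a support ticket. A hypothetical Compose change: build from your own fork with the fix cherry-picked instead of waiting on an upstream release.

```yaml
# Hypothetical response to an upstream vulnerability: build from your own
# fork with the fix applied, instead of waiting for a vendor release.
services:
  mcp-aws:
    build:
      context: ./forks/mcp-aws   # your fork, patched and re-audited
    # was: image: registry.internal.example/mcp-aws@sha256:<vulnerable-digest>
```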

The CloudShip Approach: Own Your AI Infrastructure

This is exactly why we built CloudShip Station as open source. Not because AI agents are dangerous. Because they're too useful to hand over to someone else's security model.

Station enables teams to:

  • Deploy agent teams on their own infrastructure - Docker or Kubernetes, your choice
  • Keep credentials local - Agents access your tools through MCP servers running on your network
  • Audit everything - Full visibility into what agents do and why
  • Version control agent configuration - Declarative YAML you can review, approve, and deploy through Git (see the sketch after this list)
  • Control the runtime - You decide when to update, what MCPs to enable, and how agents access your systems
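
We won't reproduce Station's actual schema here, but the declarative, Git-reviewed agent configuration mentioned above might look roughly like this (all field names are illustrative):

```yaml
# agents/incident-responder.yml: illustrative only, not Station's actual schema.
agent:
  name: incident-responder
  description: Triage production alerts and propose (not execute) fixes
  mcp_servers:
    - name: mcp-grafana
      version: "1.3.0"    # pinned to a revision from your allowlist
      enabled: true
  permissions:
    read_only: true       # least privilege until you widen it deliberately
```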

When a vulnerability like Cursor's drops, teams running Station can:

  • Audit if they're affected
  • Disable the problematic MCP
  • Patch or replace it
  • Resume operations

All without waiting for a vendor or hoping they'll tell you the truth about impact.
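
Concretely, in a declarative setup like the hypothetical one sketched above, disabling the problematic MCP is a one-line, reviewable diff:

```yaml
# Ship the mitigation through Git like any other change:
mcp_servers:
  - name: mcp-aws
    version: "2.0.1"
    enabled: false   # disabled pending audit of the reported injection issue
```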

The Irony: The More Useful AI Agents Become, the Less Defensible Black Boxes Are

Here's the uncomfortable truth: The productivity gains from AI agents are too massive to ignore.

Developers using AI coding assistants ship dramatically faster. SRE teams using AI agents resolve incidents in minutes instead of hours. Platform engineers using AI for cost optimization save real money.

The answer isn't "stop using AI tools." That ship has sailed. Teams that avoid AI entirely are getting left behind.

The answer is: use AI agents securely. And security in this context means control. You can't secure what you can't see. You can't audit what you don't control.

There's Too Much Noise in the AI SRE Space

Every week there's a new "AI for DevOps" platform. Most of them have the same pitch: "AI that fixes your cloud costs," "AI that responds to incidents," "AI that optimizes your infrastructure."

And the same requirement: hand over your credentials.

The market is saturated with vendors asking for the keys to your kingdom. Few of them are offering the transparency and control that security teams actually need.

What To Do Next

If you're using AI agents for infrastructure (and you probably should be), ask yourself:

  • Can you audit what these agents are actually doing?
  • Can you review the MCPs before they run in your environment?
  • Do your credentials leave your infrastructure?
  • If a vulnerability is discovered, can you patch it yourself?

If the answer to any of these is "no," you're trusting a black box with production access.

For teams that want to own their AI infrastructure:

  • Check out CloudShip Station on GitHub - Open source runtime for deploying AI agent teams on your infrastructure
  • Review our documentation - Learn how to deploy agents while keeping credentials local
  • Join our community - Connect with other teams running AI agents securely

The Bottom Line

The Cursor vulnerability isn't shocking because it's novel. It's shocking because it's obvious, and we collectively chose to ignore the pattern.

AI agents are essential infrastructure now. Treating them like optional conveniences or trusting vendors to "handle security" is the real risk.

Own your agents. Audit your MCPs. Keep your credentials.

Because the next headline about an AI security vulnerability is coming. The only question is whether you'll be waiting on a vendor to tell you if you're affected, or patching it yourself.

References & Citations

  1. Demonstrating Code Injection in VSCode and Cursor by Knostic Security Research (2025-11-05). https://www.knostic.ai/blog/demonstrating-code-injection-vscode-cursor
  2. Cursor Issue Paves Way for Credential-Stealing Attacks by Elizabeth Montalbano, Dark Reading (2025-11-17). https://www.darkreading.com/vulnerabilities-threats/cursor-issue-credential-stealing-attacks
  3. Station GitHub Repository by CloudShip (2025). https://github.com/cloudshipai/station
  4. Model Context Protocol Specification by Anthropic (2024). https://modelcontextprotocol.io/
  5. CloudShip Documentation by CloudShip (2025). https://docs.cloudshipai.com
