There is a new actor inside the IDE. One your security tools cannot see and your developers cannot fully control. It reads everything. It summarizes everything. And occasionally, when nudged the wrong way, it leaks what it finds.

AI coding assistants have quickly become indispensable in modern engineering teams. Claude Code, Cursor, and Windsurf accelerate development, refactor code, and automate tedious work. But they also do something less obvious. They read far more of the developer’s environment than most teams realize. They scan project directories, parse configuration files, ingest context, and interact with local tools in ways that blur the line between “autocomplete” and “autonomous agent.”

That expanded access creates a new class of risk. Increasing evidence shows that AI assistants can expose secrets stored in configuration files or environment variables. In some cases, they can even transmit those secrets externally. These incidents aren’t theoretical. They’re happening today, often without the developer knowing it.

This is quickly becoming one of the least understood but most serious risks in AI-assisted development.

The Quiet Privilege Problem Inside AI Developer Tools

Modern AI coding tools operate with privileges that any security team would review carefully for a human engineer. They can recursively read directories, parse sensitive files, inspect runtime metadata, and run commands through integrated agents and MCP servers. They do all of this to build context, improve suggestions, and take action on behalf of the user.

But unlike human engineers, AI assistants can do it automatically and at scale. They don’t naturally distinguish between public and private files, safe and unsafe content, or benign and hazardous instructions. If a secret is present in the working directory, there is a good chance the assistant has already ingested it.

This becomes especially dangerous because the files that contain sensitive information often appear harmless: a stray .env file, an outdated YAML template, or an MCP configuration used only for local development.
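
To see why these files slip past review, consider a hypothetical .env file of the kind that routinely sits in a project root. Every value below is an invented placeholder, but the shape is familiar, and an assistant that scans the working directory will ingest it like any other text file.

  # hypothetical .env left in a project root; all values are placeholders
  DATABASE_URL=postgres://app:S3cretPassw0rd@db.internal:5432/appdb
  STRIPE_SECRET_KEY=sk_live_XXXXXXXXXXXXXXXXXXXXXXXX
  AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
  AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX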

Where Secrets Are Actually Leaking

The .env Problem

The most visible cases involve .env files, long known to be sensitive but widely used in practice. Claude Code has been reported reading these files and exposing their contents unintentionally. One GitHub issue, titled “Claude Code AGGRESSIVELY reads secrets out of .env files and leaks them,” describes automatic ingestion and unexpected transmission of secrets.

Another thread documents .env values appearing in internal reminders despite deny rules.

Cursor and Windsurf have similar risks. An analysis of these tools by Geeky Gadgets notes that environment variables and API keys can be exposed simply through contextual scanning unless users manually isolate their workspace.
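
Claude Code does expose permission rules that can keep these files out of reach, and they are a reasonable first layer even though, as the reports above show, deny rules have not always held. A minimal sketch of a project-level .claude/settings.json, assuming the Read() rule syntax from Claude Code’s permissions documentation (the paths are illustrative):

  {
    "permissions": {
      "deny": [
        "Read(./.env)",
        "Read(./.env.*)",
        "Read(./secrets/**)"
      ]
    }
  }

Treat rules like these as one control among several, paired with the workspace isolation the Cursor and Windsurf guidance recommends, not as a guarantee.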

Secrets Stored in MCP Configurations

A more subtle risk lives inside Claude Desktop’s Model Context Protocol configuration files, such as claude_desktop_config.json or .mcp.json. These files define how local tools run and often contain arguments, paths, and environment variables. It is common to see API tokens or personal access keys embedded in them.

It’s easy to find examples of MCP configs with sensitive values appearing in public repositories. Furthermore, the MCP documentation itself shows that environment variables, including credentials, can be stored directly in plaintext JSON.
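
To make the exposure concrete, here is a hypothetical claude_desktop_config.json in the shape the MCP documentation describes: a server entry whose env block carries a personal access token in plaintext. The package name and token are used here for illustration; the point is that anything able to read this file can read the credential.

  {
    "mcpServers": {
      "github": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"],
        "env": {
          "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
        }
      }
    }
  }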

Even Anthropic engineers have advised that secrets should not be stored inside these configuration files and should instead be injected through secure scripts or external managers.

In other words, MCP configuration files are easy to overlook and surprisingly dangerous. If an assistant can read them, it can extract their secrets. If the assistant can run them, it can misuse those secrets.
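
The safer pattern that advice points to is to keep the JSON free of secrets and let the server’s launch command fetch the credential at runtime. A rough sketch for macOS, assuming the token lives in the Keychain under an illustrative service name and that the MCP entry’s command points at this script rather than the server directly:

  #!/usr/bin/env bash
  # launch-github-mcp.sh -- sketch: inject the token at launch instead of storing it in JSON
  set -euo pipefail

  # Read the token from the macOS Keychain ("github-mcp-token" is an illustrative service name)
  GITHUB_PERSONAL_ACCESS_TOKEN="$(security find-generic-password -s github-mcp-token -w)"
  export GITHUB_PERSONAL_ACCESS_TOKEN

  # Start the MCP server with the credential present only in its process environment
  exec npx -y @modelcontextprotocol/server-github

The claude_desktop_config.json entry then points its “command” at this script and carries no env block, so there is nothing sensitive in the file for an assistant to ingest.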

Broader IDE Configuration Leakage

Cursor and Windsurf also load JSON and YAML configuration files into context. These often contain cloud tokens, database credentials, or deployment settings. Windsurf, in particular, has been shown to exfiltrate private code and secrets due to hidden prompt injections or malicious files inside a repository, as reported in a 2025 article on Undercode Testing.

These tools were not designed with strict separation between “project metadata” and “project secrets.” As a result, AI assistants treat everything they see as usable context, even when it should never leave the developer’s laptop.
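
One partial control here is an ignore file that keeps sensitive paths out of the assistant’s context. Cursor, for example, supports a .cursorignore file with gitignore-style patterns; the entries below are illustrative, and as the Windsurf research shows, context exclusion alone does not stop a determined prompt-injection attack.

  # .cursorignore -- gitignore-style patterns excluded from AI context (illustrative)
  .env
  .env.*
  *.pem
  secrets/
  deploy/credentials.yaml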

Real Incidents That Show the Problem Is Growing

Several recent disclosures demonstrate how quickly these risks are evolving.

One of the most striking examples, reported by Embrace The Red in August 2025, is a DNS-based exfiltration attack involving Claude Code. Through an indirect prompt injection, the assistant was convinced to read sensitive files and encode them into DNS queries that resolved to attacker-controlled domains.

HiddenLayer researchers showed how Cursor could be hijacked through hidden instructions buried inside README files or other repo artifacts. Once triggered, Cursor could leak secrets or perform unsafe operations based on the attacker’s embedded prompts.

Additional research has also demonstrated that large code models can memorize and regurgitate secrets that were unintentionally included in training data.

Taken together, these cases show that secrets leakage is not a single flaw, but a pattern of failures across multiple tools and contexts.

Developer Environments Are Now Attack Surfaces

The rise of AI assistants has quietly transformed developer machines into active attack surfaces. They integrate:

  • File system scanning

  • Shell execution

  • Local network access

  • External plugin ecosystems

  • MCP or equivalent agent servers

  • Persistent memory of prior context

This creates a complex environment where secrets can be exposed, misinterpreted, or exfiltrated without any obvious user action. The problem is not simply configuration mistakes. It is a structural gap in how AI tools interact with development workflows.

AI assistants are not passive models. They read, interpret, and act. And they do so with enough autonomy that even small oversights can become severe security failures.

What Organizations Should Do Now

Fixing this problem requires treating AI coding assistants as privileged software components, not novelty plugins. Organizations need to examine where secrets live on developer machines and how those secrets can be accessed by AI-powered tools.

Preventative measures include:

  • Moving sensitive values out of configuration files

  • Routing credentials through secure vaults or OS-level secret managers

  • Isolating AI assistants with least-privilege principles

  • Monitoring file access patterns

  • Ensuring that AI workflows are covered by existing security controls such as logging, identity governance, and incident response
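
A practical first step is simply to inventory where plaintext credentials already sit in the files these assistants read. The sketch below is one rough way to do that with standard shell tooling; the paths cover the configuration files discussed above (adjust for your platform) and the patterns only catch a few common token shapes, so treat it as a starting point rather than a scanner.

  #!/usr/bin/env bash
  # sketch: flag likely plaintext credentials in common AI-assistant config locations
  # paths and token patterns are illustrative, not exhaustive

  PATHS=(
    "$HOME/Library/Application Support/Claude/claude_desktop_config.json"  # Claude Desktop (macOS)
    "$HOME/.claude.json"                                                   # Claude Code (illustrative)
    ./.mcp.json
    ./.env
    ./.env.*
  )

  # A few common credential shapes: AWS access keys, GitHub tokens, generic key/secret/token assignments
  PATTERN='AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}|(api[_-]?key|secret|token)[[:space:]]*[:=]'

  for f in "${PATHS[@]}"; do
    [ -f "$f" ] && grep -HinE "$PATTERN" "$f"
  done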

The companies that already treat AI assistants as part of the critical path in their software supply chain are the ones avoiding this new class of compromise.

Enter Kirin

Kirin helps organizations map exactly what their AI tools can see and do. It identifies where secrets are exposed in development environments, detects when assistants ingest sensitive files, blocks access to risky paths, and provides oversight into MCP and plugin configuration changes. Our customers get visibility into the AI behaviors that matter and prevent dangerous exposures before attackers can exploit them.

This is what modern AI governance looks like.

Conclusion

Claude Code, Cursor, Windsurf, and the next generation of AI-assisted developer tools are transforming how software is built. But they are also expanding the attack surface inside developer environments in ways that most organizations have not yet recognized. Secrets leakage through configuration files is already a real-world problem, and it will only become more common as these tools gain deeper integrations and more autonomy.

Security leaders who treat AI assistants as privileged components rather than passive copilots will avoid the silent failures that are beginning to define this new era of development. The risk is here today, and addressing it requires a clear understanding of what these tools can see, store, and share.

If your teams rely on AI coding assistants, now is the time to audit your environment. The cost of waiting is far higher than most organizations realize.

What’s Next

The risks outlined here are only the early symptoms of a much larger shift. AI assistants are gaining new capabilities faster than security teams can adapt. They are moving from passive coding helpers to fully agentic systems capable of autonomous action: browsing, planning, tool invocation, workflow orchestration, and cross-system decision making. As their reach grows, so does the potential blast radius of a single exposed secret.

We are already seeing this play out. In our recent research, we demonstrated how an attacker can hijack Cursor’s new Browser tool via MCP, steer the assistant, and extract sensitive data by manipulating its toolchain. Check out that write-up here:

MCP Hijacking of Cursor’s New Browser
