Claude Code automatically loads any .env* files it finds without notifying the user. This behavior is concerning because these files often contain API keys, tokens, and other sensitive credentials.
Our findings indicate that Claude Code, Anthropic’s coding assistant, automatically reads .env, .env.local, and other similar environment files that developers rely on for sensitive configuration values. These files are typically excluded from version control precisely because they contain secrets. Yet Claude Code appears to ingest them automatically, without explicit user permission.
Dor Munis first noticed this behavior when he received an HTTP 407 (Proxy Authentication Required) error while trying to use Claude Code’s /login command. Initially, he suspected a problem with Cloudflare or his shared office network. However, when he curled Anthropic’s API directly, he received a normal 200 response.
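One quick way to separate a proxy problem from an API problem is to ask curl for just the status code while explicitly bypassing any proxy; the endpoint and flags below are illustrative, not the exact check Dor ran.

```bash
# Print only the HTTP status code; --noproxy '*' forces curl to ignore any proxy settings
curl -s -o /dev/null -w '%{http_code}\n' --noproxy '*' https://api.anthropic.com/
```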
Running Claude Code in debug mode revealed that it was accessing his HTTP_PROXY environment variable, which had been loaded from a .env file inside one of his project directories. His proxy bill was also unusually high. Relocating the .env file outside the project directory eliminated the issue, confirming that Claude Code was automatically loading .env* files.
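This is straightforward to reproduce with a throwaway project; the values below are made up, and the debug flag name may differ between Claude Code versions.

```bash
# Throwaway project whose .env contains nothing but a proxy setting
mkdir proxy-test && cd proxy-test
printf 'HTTP_PROXY=http://user:secret@proxy.example.com:8080\n' > .env

# Launch Claude Code with debug output and check whether HTTP_PROXY
# from ./.env appears in its environment or request logs
claude --debug
```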
Any secrets stored in these files, including API keys, proxy credentials, tokens, and passwords, are silently loaded into memory. Anthropic’s documentation states that file accesses may be transmitted to Anthropic systems. While this does not prove that .env secrets are sent to the model or stored by Anthropic, it does mean the risk cannot be ruled out.
Holding these values in memory means that seemingly harmless commands can do major damage. Claude Code could run a 'safe' command like echo, and if echo is on the allowed list, it will happily print all of your secrets (private keys, API keys, and so on).
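A minimal sketch of why an allow-listed echo is not harmless once .env values are sitting in the process environment (the variable names are hypothetical):

```bash
# Any approved shell command inherits the environment Claude Code loaded from ./.env
echo "$STRIPE_SECRET_KEY"            # prints a single secret
env | grep -iE 'key|token|secret'    # dumps every variable that looks like a credential
```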
This behavior is likely the result of Claude Code using dotenv, dotenvx, or a similar npm package that loads environment variables automatically. If that is the case, the underlying JavaScript logic may make it difficult to avoid reading .env* files in the current working directory.
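We do not know which package Claude Code uses internally, but dotenv’s default behavior illustrates the pattern: config() resolves .env against the current working directory, so merely starting a tool inside a project pulls that project’s secrets into process.env.

```bash
# dotenv's default: load ./.env from the current working directory into process.env
# (assumes dotenv is installed wherever this runs)
node -e "require('dotenv').config(); console.log(process.env.HTTP_PROXY)"
```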
It is important to distinguish the two cases: .env files are loaded into runtime memory, not directly into the LLM context. That means the data is not necessarily leaked, but it does not mean the secrets are safe.
Because Claude Code depends on Anthropic’s cloud-hosted models and the documentation indicates that file reads may be processed on their servers, there is no guarantee that secrets from .env files remain local. Developers should assume these values could leave the local environment.
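Developers who want direct evidence can watch the tool’s outbound connections while it has their secrets in memory; standard utilities are enough, though the process name to filter on may differ (Claude Code may show up as a node process).

```bash
# List open network connections and filter for the Claude Code process
lsof -i -n -P | grep -iE 'claude|node'
```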
Anthropic’s official guides even recommend explicitly blocking Claude from accessing .env files. This strongly implies that reading them is the default behavior unless users intervene.
Anthropic’s Terms of Service and Privacy Policy do not disclose this automatic .env loading behavior. There is:
No mention that .env files will be read
No request for user permission
No warning that files may be uploaded or processed
While the documentation suggests opting out via deny rules, those protections only apply after the file has already been accessed at least once.
We contacted Anthropic about this issue. Their reply directed us to their general security page (https://code.claude.com/docs/en/security) and did not address or acknowledge the automatic .env loading behavior.
Claude Code loads .env secrets into memory without consent, creating an unnecessary and avoidable security risk. If Claude Code is breached or misused, the user's secrets are exposed. Regardless of whether the secrets are stored, sent, or simply read, loading them without permission crosses a fundamental security boundary.
For any developer using Claude Code, the safest assumption is:
If a file is not explicitly denied, it is accessible.
Until Anthropic addresses this behavior:
Move .env files outside active project directories.
Add deny rules in ~/.claude/settings.json or .claudeignore (e.g., Read(./.env*)); see the example configuration after this list. While this may not guarantee full protection, it may reduce exposure.
Disable Claude’s auto-run and background features in sensitive repositories, and avoid auto-run entirely where possible.
Use Claude Code inside a container or isolated environment to restrict file access.
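For the deny-rule recommendation above, a configuration along these lines (the permissions.deny syntax follows Anthropic’s published examples; adjust the paths to your repositories) reduces the chance that .env* files are read in the first place:

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)",
      "Read(./secrets/**)"
    ]
  }
}
```

The same block can also live in a project-level .claude/settings.json so the protection travels with the repository.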
This issue is not a harmless oversight. It reflects a deeper design problem. Developer tools must respect boundaries, especially when handling secrets. Loading sensitive files without permission should never be the default.
Knostic’s Kirin protects developers and their AI coding assistants against supply chain attacks like these by providing detection and response, posture management, a comprehensive inventory, and a reputation system for safe ingestion.
Learn more: https://www.knostic.ai/ai-coding-security-solution-kirin