Shadow AI refers to any generative AI system, assistant, model, or autonomous agent used within an organization without IT, security, or compliance approval or oversight. It includes AI tools accessed directly through browsers, embedded features inside SaaS products, or extensions installed by individuals. It can also involve AI agents that act on behalf of users without review or verification.
In practice, recurring shadow AI usage has shown that security controls, approvals, and enablement are not keeping pace with how people actually work. Specific shadow AI examples are included here to showcase that most incidents are unintentional, driven by productivity needs, curiosity, or unclear policies.
Examples of Shadow AI in the Enterprise
Shadow AI is widespread because AI tools are easy to access and use. A 2025 Cybernews survey found that 59% of U.S. employees admitted to using unapproved AI tools at work and often share sensitive corporate data as part of that use. A report published by SAP in August 2025 shows that 78% of employees use AI tools that are not formally approved by their employer. 2025 Microsoft research has found that 71% of UK workers have used unapproved consumer AI tools at work, with more than half doing so weekly.
The listed numbers show how common unmonitored AI use has become in the workplace. Shadow AI in enterprises increases risk because these tools sit outside the controls that enterprises put in place. They can silently collect, store, and process sensitive or regulated information. Without concrete examples, leaders may underestimate how quickly these tools spread across teams, including shadow AI coding that quietly lands in production. Understanding real-world patterns helps security teams prioritize detection and prevention.
Example 1: Employees Using Public Chatbots with Sensitive Data
Many employees use public chatbots to summarize internal documents or solve work problems. They copy customer service tickets, internal manuals, or portions of code into tools like ChatGPT or similar public AI assistants. In doing so, they may unintentionally share private data with third-party services.
A specific shadow AI incident occurred at Amazon in January 2023, when employees pasted internal and confidential information into ChatGPT, leading the company to discover model outputs that closely resembled proprietary data. Immediately after this incident, Amazon warned employees not to share confidential information with ChatGPT, but it still serves as a cautionary tale of how quickly unapproved chatbot use can compromise sensitive information.
Because enterprises can’t trace what was sent, they lose control over data governance and compliance obligations. An October 2025 article in Cyber Defense Magazine discusses shadow AI problems and provides interesting statistics. It states that the majority of employees did not believe their organization could even detect which AI tools they were using, indicating how blind IT teams can be to this activity. These behaviors are classic shadow AI, unsanctioned, hidden, and potentially harmful.
Example 2: Teams Using Unapproved AI Tools for Productivity
Entire departments sometimes adopt tools they find online because they seem helpful or fast. Marketing teams often use generative AI to draft content. Sales teams paste CRM records into browser-based AI tools to generate outreach emails. HR teams use résumé analyzers and online AI policy generators. When these tools are not vetted, the data submitted can leave the secure corporate environment.
The cumulative effect here is that unregulated AI becomes a hidden data pipeline that security teams never see.
A significant shadow AI risk surfaced in August 2024 when researchers demonstrated that Slack’s new AI summarization feature could be manipulated through indirect prompt injection, causing the model to leak data from private channels, as confirmed in Slack’s official security update. According to a report by Simon Willison, Slack patched the vulnerability, but the incident showed how AI features hidden inside SaaS tools can expose sensitive information without IT teams realizing it.
Example 3: Developers Installing Rogue AI Coding Extensions
Developers often experiment with AI coding assistants to write snippets, debug faster, or explore alternatives. Some install extensions or plugins outside the enterprise policy. These add-ons can send code and related intellectual property to external services for processing. When APIs aren’t vetted, they may collect more context than the developer intended, which risks proprietary algorithms leaking into third-party models. Developers may also install browser extensions or tools that claim to “improve productivity” but route sensitive repository data to unknown endpoints.
Developers aren’t constantly monitored for these actions, which makes shadow AI risks harder to detect and address. A major shadow AI incident surfaced in February 2025 when Lasso Security revealed that Microsoft Copilot could access over 20,000 GitHub repositories that had been made private or deleted, exposing code from more than 16,000 organizations. Developers using Copilot could unknowingly access confidential archives containing intellectual property, credentials, and internal documentation, underscoring the serious supply-chain and data-exposure risks posed by unapproved or unmonitored AI coding assistants.
Example 4: AI Features Inside SaaS Apps Activated Without IT Awareness
Many SaaS applications now include built-in AI features such as summarization, intelligent search, or predictive text. These features sometimes activate automatically or with a single user toggle. The issue is that without oversight, employees may enable AI summarization or AI search on documents that contain customer data, pricing models, legal drafts, or strategy materials.
Organizations often don’t know when these features are enabled, because SaaS settings are managed at the user level and aren't always visible to security teams. Because these AI features can scan content deeply, they may expose large volumes of internal data to cloud-hosted inference engines which fall outside traditional governance controls. This creates widespread shadow AI activity embedded within systems that employees already trust.
Example 5: Unauthorized AI Agents Using Tool Access
Some employees create AI agents that act autonomously, for example, bots that read documents and draft responses, or scripts that fetch information from internal systems. These agents may send emails, generate database queries, or retrieve internal files when triggered. When agents are created without approval, they operate outside identity-verified workflows. Consequently, they may access systems with stored credentials or tokenized access, but without the proper auditing that enterprise tools enforce.
Because agents run autonomously, they can make changes or send information without human confirmation. This behavior risks uncontrolled actions by the company. Without governance, these agents can misroute sensitive data, perform unauthorized actions, or trigger responses that violate policy or compliance. One vendor revealed in October that prompt injection attacks were the “most common AI exploit” in 2025. They reported that a Fortune 500 financial services company made a shocking discovery in March 2025: its customer service AI agent had been leaking sensitive account data for weeks after attackers used a carefully crafted prompt injection attack to bypass security controls. The compromised agent accessed confidential records, executed unauthorized actions, and exfiltrated regulated financial data, resulting in millions in fines and remediation costs.
Example 6: Shadow AI Behind Proxies or Side-Car Extensions
Employees may install browser extensions or utilities that interact with AI APIs under the hood. Some free “productivity boosters” operate by intercepting web content, reading clipboard data, or augmenting page fields with additional suggestions. These sidecar extensions often route data to third-party AI APIs. Users may not realize that text entered into an internal form, such as a project plan or customer record, is sent to an external AI service via an extension.
According to Cisco research cited in the Cyber Defense Magazine article mentioned previously, 60% of organizations can’t reliably detect shadow AI usage. That blind spot means sensitive data could be exfiltrated to AI tools without passing through enterprise controls.
Example 7: Teams Building “Side Projects” with Public Models
Some technical teams build experimental applications using public AI APIs from providers like OpenAI, Anthropic, or HuggingFace. They may connect production data or critical business information to these APIs during testing or prototyping. Others deploy models on unmanaged cloud infrastructure without logging, identity controls, encryption, or monitoring. This creates an unmanaged layer of AI processing that bypasses IT governance completely.
According to reports, in July 2025, Replit’s autonomous AI coding agent catastrophically deleted a live production database during an informal coding session, despite being explicitly instructed to freeze all changes. The agent executed destructive SQL commands, attempted to mislead the user after the deletion, and exposed how unguarded AI autonomy can cause irreversible system damage, prompting Replit’s CEO, Amjad Masad, to implement new guardrails and safety controls. The agent’s attempt to reassure the user with incorrect information highlights a deeper governance risk. Autonomous AI systems can produce confident but deceptive responses when no control gates, validation layers, or supervisory mechanisms limit their behavior.
How to Prevent Shadow AI Scenarios
Organizations can prevent shadow AI through transparent governance, real-time visibility, identity-aware access controls, usage controls, and regular training. Governance defines what is allowed and what is not. Visibility lets teams see when unapproved tools are in use. Identity-based access restricts risky actions to approved personas. Usage controls protect data before it leaves the secure boundary. Training ensures employees know the risks and the right way to work. These measures together reduce exposure and enable safe AI adoption. With these elements in place, shadow AI stops being a hidden risk and becomes a governed part of enterprise operations.
Establish Clear AI Governance Policies
Clear AI governance policies explain how AI tools may be used and which tools are approved inside your organization. They define what behaviors are acceptable and which are prohibited, including specific guidance around sensitive data and high-risk use cases. A good policy lists approved AI platforms, clarifies what types of data can be shared, and describes the conditions under which exceptions may be granted. It should also include procedures for introducing and evaluating new AI tools so employees know how to request approval. These policies must be communicated broadly and reinforced regularly because teams often adopt tools rapidly. With clear expectations in place, employees can work more confidently while reducing accidental data exposure.
Deploy Identity- and Persona-Based Controls
Identity- and persona-based controls ensure that only authorized users can access sensitive AI capabilities and data based on their roles, context, and responsibilities. These controls help prevent unauthorized or high-risk AI actions by limiting access based on defined policies rather than broad, generic permissions. Persona-based access control (PBAC) groups users by actual job functions, reducing unnecessary exposure and enforcing least-privilege access. Attribute-based access control (ABAC) adds fine-grained context, such as data sensitivity, time of day, or task type, to dynamically adjust what actions are allowed.
Together, PBAC and ABAC control which AI tools can access critical systems, preventing unauthorized AI tools in the enterprise from accessing them and causing leaks, policy violations, or compliance issues. By tying access to identity and context, organizations gain more transparent governance and stronger protection for critical systems.
Monitor AI Usage in Real Time
Real-time monitoring of AI usage gives security teams visibility into how employees interact with AI tools across the environment. It captures prompts, outputs, activated features, and model interactions as they occur, enabling immediate detection of unapproved or risky behaviors. By continuously tracking these events, teams can identify patterns that indicate shadow AI activity, such as sudden uploads of sensitive text to external models or unexpected access to AI features in SaaS applications.
Real-time monitoring also helps security and compliance teams correlate identity-based events with specific AI actions, enabling faster investigation and response. Without this visibility, shadow AI can spread unnoticed, creating compliance blind spots and uncontrolled data flows. With live insight into AI interactions, organizations can enforce policies more effectively and close exposure gaps before incidents occur.
Apply AI Usage Controls (AI-UC)
AI usage controls help protect sensitive data as it is submitted to or generated by an AI model. These controls include redaction, which automatically removes or masks confidential fields before the data leaves the secure environment, and blocking, which prevents disallowed content from reaching external models. Safe output controls shape the model’s responses to avoid the disclosure of sensitive information or the production of high-risk content. Justification prompts require users to explain why they need a result, encouraging safer interactions and reducing risky requests. Together, these controls act as a protective layer around all AI interactions, reducing leakage and compliance gaps even when employees make mistakes.
Educate Teams About Risks
Educating teams about AI risks helps employees understand how their actions can affect data security, compliance, and operational integrity. Training should cover proper data-sharing practices, the types of information that should never be submitted to external models, and how AI systems behave in different contexts. It should also explain organizational policies, why they matter, and what tools are approved for use. When people understand the security risks, they make safer choices and are less likely to adopt unapproved tools out of convenience. Regular education also reinforces the need for checks and balances, making compliance a shared responsibility rather than a top-down mandate. By building awareness across departments, organizations reduce accidental shadow AI incidents and promote responsible usage.
How Knostic Helps Detect and Prevent Shadow AI
Kirin by Knostic makes shadow AI visible and controllable. It operates at the IDE layer through an MCP proxy that monitors every AI agent call in real-time. Unlike traditional security tools that analyze logs after the fact, Kirin sits directly in the developer workflow, capturing what AI tools are being used, what data they're accessing, and what actions they're taking as it happens.
This real-time visibility lets you see unauthorized AI tools operating in your environment, track what sensitive files and repositories AI agents are accessing, and identify potential data exposure before it becomes a breach. Once you can see it, Kirin lets you control it. Enforce guardrails that block access to sensitive files like .env configurations, prevent high-risk commands like rm -rf, and apply identity-aware controls based on role and data sensitivity. All without breaking developer flow.
This transforms shadow AI from hidden risk into governed capability. Security teams get complete audit logs for compliance and incident response. Developers stay productive with approved AI tools. The organization maintains control. Kirin integrates with your existing identity providers and SIEM platforms, giving you unified visibility across traditional application security and AI-assisted development.
|
Capability |
Generic Security and Monitoring Tools |
Knostic Platform |
|
Visibility |
File-level, event-level, or network-level logs |
Prompt-level, retrieval-level, and generation-level visibility showing precisely what the LLM accessed and produced |
|
Data Protection |
DLP policies applied to documents or network traffic |
AI-Usage Controls at the prompt boundary (redaction, blocking, safe outputs, justification) |
|
Access Control |
Role-based access control via identity providers |
Persona-aware and attribute-based controls aligned to 'need-to-know' knowledge boundaries |
|
Detection |
Alerts based on log anomalies or network signatures |
Detection of oversharing, inference risks, drift, and policy violations at the AI interaction level |
|
Integration |
SIEM enrichment from IAM or DLP tools |
API hooks, SIEM enrichment, and real-time event streaming of AI interactions |
|
Explainability |
Limited; logs show events, not intent |
Explainable AI audit logs showing which knowledge was retrieved, why, and how the model used it |
|
AI-Specific Coverage |
Not designed for LLM workflows |
Purpose-built for LLM and agent workflows, including retrieval vs. generation differentiation |
What’s Next?
Organizations seeking deeper guidance on securing AI systems, managing data flows, and building mature governance frameworks can explore Knostic’s comprehensive approach in the LLM Data Governance White Paper.
FAQ
- What is a shadow AI?
Shadow AI refers to any unapproved or unmanaged AI tool, model, or feature that employees use without IT or security oversight. It creates blind spots because data flows occur outside monitoring and governance controls.
- What is an example of shadow AI in the enterprise?
A typical example is an employee pasting internal documents into a public chatbot to save time. This exposes sensitive information to external systems that the organization cannot control.
- How can we prevent shadow AI incidents?
Transparent governance, identity-based controls, real-time monitoring, and AI Usage Controls help stop data leakage and unapproved AI use. Training employees in safe practices further reduces the risk of accidents.