Skip to main content

 Fast Facts on GenAI Security Statistics

  • GenAI tools significantly increase productivity but also expand potential attack vectors, exposing sensitive data and extending the cyberattack surface.

  • 4% of prompts and over 20% of files uploaded to GenAI tools contain sensitive corporate data, underscoring the need for AI-specific data loss prevention controls.

  • Prompt injection remains a significant threat, with 56% of tests across 36 LLM configurations successfully bypassing safeguards. Nearly half of the surveyed organizations rank adversarial GenAI threats, like deepfakes and AI phishing, as their top cybersecurity concern.

  • While 75% of companies have AI policies, fewer than 60% have invested in trained governance staff or incident-response capabilities, revealing a critical enforcement gap.

Understanding current GenAI security statistics is essential for making informed strategic and technical decisions. The rapid adoption of generative AI tools has expanded both productivity and the attack surface. Without accurate statistics, it is difficult to prioritize security investments or justify policy changes to executive leadership. Executive decisions made without such data often default to generic, one-size-fits-all controls, which often fail to address the most critical and emerging risks. 

Metrics such as data leakage rates, prompt injection prevalence, and AI transaction blocking percentages reveal where vulnerabilities concentrate and how threats evolve. These figures are actionable; they directly correlate with potential financial loss, regulatory exposure, and reputational damage. Knowing the scale of sensitive data exposure, for example, informs whether to invest in AI-specific data loss prevention or adjust access controls. Likewise, tracking adversarial AI attack trends can guide the adoption of detection systems capable of identifying deepfakes or AI phishing campaigns before they reach end users. As regulations tighten, verifiable security data also serves as evidence during audits or incident investigations.

GenAI Security Statistics: Methodology

The statistics in this report are drawn from verifiable, high-quality sources published between late 2024 and mid-2025. Only data backed by peer-reviewed research, recognized cybersecurity bodies, or enterprise-scale telemetry were included. Opaque datasets, where methodology transparency or raw data access is limited, were explicitly excluded to ensure reproducibility. This ensures that each data point reflects a reproducible, transparent, and current view of the GenAI security landscape. Each figure is mapped to a defined threat category aligned with frameworks such as the OWASP LLM Top 10 (2025). This allows direct comparability across datasets and prevents the inclusion of marketing-driven numbers. 

Figures are triangulated by comparing industry telemetry with independent research such as the Alan Turing Institute’s policy papers on large-scale AI risk and the World Economic Forum’s Global Cybersecurity Outlook. Any metric without a clear numerator, denominator, and timeframe is excluded. For example, a widely circulated “60% of companies experienced AI incidents in the past year” statistic was excluded because the source did not define the sample size, incident criteria, or the time window for measurement, making the figure unverifiable and unsuitable for inclusion. Terms like “leak,” “exposure,” and “transaction” are normalized following definitions used in ISO/IEC 27040:2025 on storage security and NIST SP 800-218 Secure Software Development Framework. This consistency ensures that figures can be directly compared across datasets without semantic drift. 

The result is a dataset that reflects reproducible, independently validated realities, giving a defensible evidence base for AI governance, procurement, and technical control decisions in GenAI security.

Why Is It Important To Source Credible AI Security Statistics

Credible GenAI statistics directly influence the effectiveness of governance, deployment safeguards, and budget allocation. Without them, decisions often default to generic controls that fail to address novel, high-severity threats such as prompt injection or model exploitation. Independent datasets from bodies like ENISA and NIST provide validated benchmarks that help security leaders assess actual risk exposure without relying on marketing narratives.

Sourcing  credible AI security statistics is vital because it:

  1. Reduces Strategic Blind Spots - Independent data ensures risk assessments focus on actual high-impact threats instead of relying on vendor claims. This enables targeted mitigation of vulnerabilities that truly matter.

  2. Supports Regulatory and Audit Compliance - AI security frameworks, such as the EU AI Act and ISO/IEC 42001, increasingly require documented evidence of risk assessments and mitigation efforts. Authoritative benchmarks offer measurable indicators that can be referenced during compliance audits and regulatory submissions. For example, tracking verifiable rates of data leakage or the prevalence of adversarial attacks supports Article 72 post-market monitoring obligations under the EU AI Act.

  3. Enables Accurate Risk Prioritization - Trusted measurements help quantify the likelihood and severity of different risks, allowing the prioritization of investments in high-impact defenses such as AI-specific data governance or adversarial content detection. Studies from the Alan Turing Institute and peer-reviewed evaluations of LLMs provide granular insight into which attack vectors are most successful in real scenarios.

  4. Strengthens Stakeholder Confidence - Boards, investors, and customers increasingly demand proof that AI initiatives are secure by design. Citing verifiable evidence from independent research demonstrates due diligence and strengthens trust in AI governance programs. This is important in sectors with high regulatory oversight, such as finance and healthcare, where unverified claims about AI safety can result in reputational damage and loss of market confidence.

Top 10 GenAI Security Trends and GenAI Security Statistics  in 2025

In 2025, independent research and security incident data reveals a rise in AI-related risks, including frequent AI security data exposure in prompts and uploads, surging phishing threats, and widespread prompt injection vulnerabilities. The following quantified information highlights the urgent need for AI-specific security measures.

1. More Than 20% of Files Uploaded to AIs Contained Sensitive Corporate Data in Q2 2025

A 2025 study found that more than 4% of employee prompts and over 20% of file uploads to GenAI tools contained sensitive corporate data. This included proprietary designs, customer records, and internal strategy documents. Such exposures often occur unintentionally during everyday AI-assisted work, making them hard to detect using traditional DLP systems. The findings underscores the necessity of implementing AI-specific monitoring and filtering at the prompt and file upload stages.

2. Adversary-in-the-Middle (AiTM) Phishing Surged by 146% Year-over-Year 

In 2024, Microsoft reported a 146% rise in AiTM (Adversary-in-the-Middle) phishing attacks year over year, alongside an estimated 39,000 token-theft incidents per day. AiTM attacks bypass MFA by tricking users into completing authentication on an attacker-controlled page, then stealing the session to impersonate the user. This directly increases the risk of data-exposure, especially as GenAI-generated lures make phishing more convincing and scalable. For enterprise leaders, this trend justifies stricter conditional access, phishing-resistant MFA (e.g., passkeys), and continuous session validation. It also shows why identity defenses must be treated as a first-class control in any GenAI adoption roadmap.

3.  Nearly 19% of AI Transactions are Blocked by Enterprises in Early 2024

Enterprise security teams have increased their scrutiny of AI network activity. According to Virtualization Review, 18.5% of AI/ML transactions were blocked during security monitoring in early 2024. These blocks often targeted outbound API calls to unverified AI services or attempted data transfers lacking encryption. This trend aligns with the broader adoption of Zero Trust security principles, where no AI data flow is considered safe until it passes rigorous checks. For technology leaders, this statistic highlights the importance of implementing AI traffic inspection and whitelisting policies.

4. Only 38% of Organizations are Taking Steps to Reduce Prompt Injection Risk

A November 2024 report found that 75% of business employees use GenAI, with 46% having adopted it in just the prior six months. However, only 38% of organizations were taking mitigation steps, such as prompt sanitization or output evaluation, to counter prompt injection related risks. This disparity indicates a gap: while AI tools are gaining traction across workflows, security measures are lagging. Prompt injection leverages the inability of models to distinguish between user input and system instructions, allowing manipulated prompts to override safeguards. Without proper controls, this can lead to data exfiltration, misinformation, and manipulation of AI workflows. The implication is clear: adoption must go hand-in-hand with the deployment of strategic defenses such as input filtering, monitoring, and employee training.

5. 47% of Organizations Rank Adversarial GenAI Threats as Their Primary Cybersecurity Concern in 2025

The World Economic Forum’s Global Cybersecurity Outlook 2025 found that almost half of surveyed organizations identified adversarial GenAI threats, such as deepfake generation or AI phishing, as their top security concern. These attacks exploit AI’s ability to create convincing synthetic content, making social engineering campaigns more scalable and effective. The rise in concern reflects the difficulty of detection and the potential for brand damage, theft of data, or fraud. For companies, countermeasures like media authentication, digital watermarking, and real-time anomaly detection are becoming strategic priorities.

6. 66% of Organizations Expect AI to Reshape Cybersecurity in 2025, but Only 37% Have Deployment Safeguards in Place

The same mid-2025 report shows that while two-thirds of organizations anticipate AI having a significant role in cybersecurity, only 37% have implemented safeguards for its deployment. These safeguards include secure model hosting, access control policies, and continuous monitoring of AI outputs. The gap reflects both a skills shortage in AI security engineering, and the speed of GenAI adoption outpacing governance frameworks. Closing this gap will require integrating security into the complete AI lifecycle from development through post-market monitoring.

7. Prompt Injection Attacks Succeed in 56% of Tests Across LLMs

A peer-reviewed 2024 study examined 36 LLM configurations using 144 prompt injection tests. The research found that 56% of these prompts successfully bypassed model safeguards, highlighting a systemic vulnerability in how large models handle deceptive or malicious input. The study observed differences between open-weight and closed-weight model families, with open-weight models generally showing slightly higher susceptibility due to broader accessibility for fine-tuning and experimentation. Partial defenses reduce (but do not eliminate) the risk, especially when attackers use layered or obfuscated techniques.

8. 94.4% of LLMs are Vulnerable to Direct Prompt Injection; 83.3% to RAG Backdoors

Other security research on 18 AI models found that 94.4% were vulnerable to direct prompt injection and 83.3% to RAG (Retrieval-Augmented Generation) backdoor attacks, which involve injecting malicious data into the model’s external vector store, causing it to retrieve and process harmful or misleading information. Even more concerning, the manipulation of inter-agent trust achieved a 100% compromise rate in tested multi-agent systems. These attacks can be used to insert or extract sensitive data through secondary prompts or auxiliary tools connected to the AI. These findings indicate  that complex AI architectures unintentionally increase attack surfaces. For enterprises, this reinforces the importance of securing not just the primary model but every component of the AI ecosystem.

9. 499 Publicly Reported Generative AI Incidents Catalogued in 2025

A recent 2025 peer-reviewed study systematically reviewed 499 publicly reported security incidents involving GenAI across domains like privacy breaches, misinformation, fraud, and bias. The research categorized these events by their underlying causes, ranging from design flaws and deployment issues to downstream misuse. Fraud-related incidents were the most prevalent, followed by misinformation campaigns, with bias-related events ranking lower in frequency but higher in long-term reputational risk. Most of these incidents are from improper or negligent use during deployment or operation, not from failures  in model design. This comprehensive dataset marks a significant expansion in our understanding of real-world GenAI misuse compared to previous studies. The conclusion is that GenAI harm is not rare or limited in scope; it’s widespread and diverse. Effectively addressing these risks requires proactive, system-wide governance, multidisciplinary risk assessments, and stakeholder collaboration.

10. 75% of Organizations Have AI Use Policies, but Fewer Than 60% Have Trained Governance Staff

A 2025 report reveals that while 75% of organizations have formal AI usage policies, fewer than 60% have designated governance roles or established incident-response playbooks. This gap presents an opportunity for integrating automated governance frameworks and  training modules to ensure that AI use policies translate into consistent enforcement and staff readiness. The disparity is even wider among smaller firms; only 36% have dedicated governance officers, and just 41% provide any form of AI use training to their staff. This indicates that documenting AI rules is not enough. Effective AI governance requires dedicated leadership, ongoing employee training, and concrete mechanisms for incident detection and response.

What’s Next

See how your team can accelerate GenAI security readiness by requesting the Knostic solution brief here. Inside, you’ll learn how to assess and remediate LLM data exposure risks in tools like Microsoft Copilot and Glean, including identifying permission gaps, remediating violations, and detecting risky user activity in real time.

bg-shape-download

Learn How to Protect Your Enterprise Data Now!

Knostic delivers an independent, objective assessment, complementing and integrating with Microsoft's own tools.
Assess, monitor and remediate.

folder-with-pocket-mockup-leaned
background for career

What’s next?

Want to solve oversharing in your enterprise AI search? Let's talk.

Knostic offers the most comprehensively holistic and impartial solution for enterprise AI search.

protect icon

Knostic leads the unbiased need-to-know based access controls space, enabling enterprises to safely adopt AI.