The AI Vulnerability Storm

AI, as demonstrated by Anthropic's Mythos, has sharply increased the likelihood that attackers will discover new vulnerabilities, build exploits for them, and chain them into complex automated attacks at scale. And while AI also speeds patch development and reduces defects in new software, the inherent limitations of patching mean defenders still carry the heavier relative burden. The benefits are asymmetric, and they favor attackers.


Gadi led the writing of this briefing for the Cloud Security Alliance, alongside SANS, [un]prompted, and the wider security community. Read the full paper here.


Why the Defender Timeline Is Breaking

AI-driven vulnerability discovery has been accelerating for more than a year. Mythos compresses the timeline further, but the capability is not new, and waiting for the next major announcement is not a strategy. The window between discovery and weaponization has collapsed to hours. Security programs built on weeks-long patch cycles, quarterly pen tests, and CVE-based threat intelligence were not designed for this speed.

What the Paper Covers

The briefing is a working document for CISOs who need to walk into a room Monday morning with a credible plan. It is organized around two questions: what to do now to deal with the current risk spike, and what to start now to be ready for the waves that follow.

On the current risk spike, the paper recommends:

  • Adjusting risk calculations and re-orienting security program resources to account for a higher volume of patches, shorter time-to-patch windows, and more persistent and complex attacks.
  • Focusing on the basics and hardening your environment further. Segmentation, egress filtering, multifactor authentication, and defense-in-depth all continue to increase the difficulty for attackers.
  • Prioritizing robust dependency management to reduce vulnerabilities in third-party and open-source components.
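As a small illustration of the dependency-management point, the sketch below flags unpinned entries in a Python requirements file. This is an illustrative example, not something from the paper, and the `find_unpinned` helper is a name chosen here for the sketch; a real program would pair a check like this with a vulnerability scanner fed by an advisory database.

```python
def find_unpinned(requirements_text: str) -> list[str]:
    """Return requirement lines that are not pinned to an exact version.

    Unpinned dependencies make builds non-reproducible and let newly
    published (possibly vulnerable or malicious) releases slip in silently.
    """
    unpinned = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # A pinned requirement uses '==' (e.g. 'requests==2.32.3').
        if "==" not in line:
            unpinned.append(line)
    return unpinned

# Example: two of these three entries are unpinned.
reqs = "requests==2.32.3\nflask>=2.0\npyyaml\n"
print(find_unpinned(reqs))  # → ['flask>=2.0', 'pyyaml']
```

Pinning alone is not dependency management, but it is the precondition for it: you cannot assess or remediate what you cannot reproduce.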

On getting ready for what comes next, the paper recommends:

  • Enforcing automated security assessments consistently in your development processes, including using LLM-powered agents to find vulnerabilities before attackers do.
  • Introducing AI agents to the cyber workforce across the board, enabling defenders to match attackers' speed and begin closing the gap.
  • Re-evaluating your risk tolerance for operational downtime caused by vulnerability remediation, to account for shorter adversary timelines.
  • Updating governance for more efficient vendor onboarding and increasing headcount to facilitate faster deployment of new AI-based defenses.
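To make the first recommendation above concrete, here is a minimal sketch of a merge gate that enforces automated assessment findings in a development pipeline. The `Finding` shape, the severity scale, and the default threshold are assumptions for illustration only; the same gate applies whether the findings come from a conventional scanner or an LLM-powered review agent.

```python
from dataclasses import dataclass

# Severity ranks; order matters for the threshold comparison below.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

@dataclass
class Finding:
    tool: str       # e.g. a SAST scanner or an LLM-based review agent
    rule: str       # identifier of the check that fired
    severity: str   # one of SEVERITY_RANK's keys

def gate(findings: list[Finding], block_at: str = "high") -> bool:
    """Return True if the change may merge, False if it must be blocked."""
    threshold = SEVERITY_RANK[block_at]
    return all(SEVERITY_RANK[f.severity] < threshold for f in findings)

findings = [
    Finding("sast", "hardcoded-secret", "critical"),
    Finding("llm-reviewer", "missing-authz-check", "medium"),
]
print(gate(findings))  # → False: the critical finding blocks the merge
```

The point of the sketch is the enforcement, not the detection: assessments only close the gap with attackers if their output can stop a merge, consistently and automatically.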

The paper also introduces the concept of a Mythos-ready security program built around minimum viable resilience, and frames VulnOps as a permanent organizational capability, the vulnerability-side analogue to DevOps. It includes a prioritized action plan with start dates and time horizons, and board-facing language to support the executive conversation that is already happening.

Why This Matters

This is not about one model, one vendor, or one announcement. The current wave of vulnerability disclosures from Project Glasswing is the first of many, and the capabilities seen in Mythos will become more widely available in the months ahead. The organizations that fare best will be the ones that begin building the muscle now: the processes, the tooling, and a culture willing to adopt AI as a core part of how security gets done.

As an industry, we also need to strengthen our coalitions, cooperation, and coordination. Attackers already operate as collectives. Defenders need to catch up.

Who Wrote It

The paper was led by Knostic CEO Gadi Evron, with co-authors Rich Mogull (Chief Analyst, Cloud Security Alliance) and Rob T. Lee (Chief AI Officer, SANS Institute). Contributing authors include Jen Easterly, Bruce Schneier, Chris Inglis, Rob Joyce, Heather Adkins, Joshua Saxe, Sounil Yu, John N. Stewart, Katie Moussouris, Dave Lewis, and Maxim Kovalsky. The paper was produced by the CSA CISO Community, SANS, [un]prompted, the OWASP Gen AI Security Project, and the wider community.

More than 250 CISOs redlined the document live. All contributors represent themselves, not their employers.

Read the Paper

The "AI Vulnerability Storm": Building a "Mythos-ready" Security Program. Cloud Security Alliance, April 12, 2026. Released under CC BY-NC 4.0.

Download the paper here

For inquiries: cisos@cloudsecurityalliance.org

Discover and Protect Agents and Coding Assistants

Check out what we do at Knostic to defend your agents, prevent them from deleting your hard drive and code, and manage the associated supply chain risks from MCP servers, extensions, and skills. Visit knostic.ai for more information.
