Anthropic released research in November 2025 documenting the first reported case of a large-scale AI-orchestrated cyber espionage campaign, with humans intervening only at a handful of key decision points.
In mid-September 2025, Anthropic detected an espionage campaign by a Chinese state-sponsored group. The attackers manipulated Claude Code to attempt infiltration of approximately 30 global targets, including tech companies, financial institutions, chemical manufacturers, and government agencies, successfully compromising a small number.
The threat actor leveraged AI to execute 80-90% of tactical operations, requiring human intervention at only 4-6 critical decision points per campaign. Specifically, the attackers:
Jailbroke Claude by breaking attacks into small, seemingly routine tasks and claiming they were conducting legitimate security testing
Used AI for reconnaissance to inspect target systems and identify high-value databases
Automated exploitation, as Claude researched vulnerabilities and wrote exploit code
Orchestrated data exfiltration with the AI harvesting credentials, creating backdoors, and categorizing stolen data by intelligence value
Generated attack documentation to assist in planning future operations
The AI autonomously discovered vulnerabilities, exploited them in live operations, managed phase transitions, aggregated results across multiple sessions, and ran post-exploitation activities, including lateral movement, privilege escalation, and data exfiltration.
What makes this attack eye-opening is that it actually wasn't particularly sophisticated. The attackers relied largely on open-source penetration testing tools and standard security utilities rather than custom-built malware or novel attack techniques.
As Knostic CEO Gadi Evron, Google VP of Security Engineering Heather Adkins, and Harvard Kennedy School Fellow Bruce Schneier argued in CSO Online, we may be approaching a singularity event for cyber attackers if attack capabilities accelerate beyond our individual and collective ability to respond. AI agents don't need to excel at every task; they just need to excel in one of four dimensions: speed, scale, scope, or sophistication.
What this attack lacked in sophistication, it overwhelmingly compensated for in the other three:
Speed: Claude performed reconnaissance, inspecting target organizations' systems, identifying high-value databases, and reporting findings to human operators, all in a fraction of the time a human team would require. At peak activity, the AI made thousands of requests, often multiple per second. This attack speed is impossible for human hackers to match.
Scale: The operation achieved a reach typically associated with nation-state campaigns while maintaining minimal direct human involvement. The architecture incorporated Claude's technical capabilities as an execution engine within a larger automated system, where the AI performed specific technical actions based on human operators' instructions while the orchestration logic maintained attack state, managed phase transitions, and aggregated results across multiple sessions (a generic sketch of this orchestration pattern follows this list).
Scope: The sheer volume of work performed would have required vast amounts of time for a human team. Even if humans could perform each individual task better, the AI executed far more work than any individual or small team could manage simultaneously.
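To make that division of labor concrete, here is a minimal, domain-neutral sketch of the orchestration pattern described above: an outer loop, not the model, owns the state and phase transitions, while each agent session handles one narrow, routine-looking task. The Phase names and the run_agent_task() helper are illustrative placeholders, not a reconstruction of the attackers' tooling or of any real SDK.

```python
# Minimal sketch of a phase-based agent orchestration loop (illustrative only).
from dataclasses import dataclass, field
from enum import Enum, auto


class Phase(Enum):
    DISCOVER = auto()   # enumerate assets or information in scope
    ANALYZE = auto()    # assess findings from the discovery phase
    REPORT = auto()     # summarize results for a human reviewer
    DONE = auto()


@dataclass
class CampaignState:
    phase: Phase = Phase.DISCOVER
    findings: list[str] = field(default_factory=list)  # aggregated across sessions


def run_agent_task(prompt: str) -> str:
    """Hypothetical stand-in for one agent session (LLM call plus tool use)."""
    return f"[stub result for: {prompt[:40]}...]"


def next_phase(phase: Phase) -> Phase:
    order = [Phase.DISCOVER, Phase.ANALYZE, Phase.REPORT, Phase.DONE]
    return order[order.index(phase) + 1]


def orchestrate(state: CampaignState) -> CampaignState:
    # The orchestrator, not the model, holds state and drives phase transitions;
    # each agent session only ever sees a narrow, routine-looking task.
    while state.phase is not Phase.DONE:
        prompt = f"Phase: {state.phase.name}. Prior findings: {state.findings}"
        state.findings.append(run_agent_task(prompt))
        state.phase = next_phase(state.phase)  # human checkpoints could gate this step
    return state
```

The same structure underpins legitimate agentic automation; what is new in this campaign is who was driving it and at what scale.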
This attack proves that the barriers to performing large-scale cyberattacks have dropped substantially. Operations that previously required teams of skilled hackers can now be conducted with significantly fewer resources and less specialized expertise. AI agents reduce the skill, cost, and time required to find and exploit vulnerabilities, making advanced capabilities more accessible. Current trends suggest AI agents will continue to improve across all four dimensions, including sophistication.
Organizations must adapt their defenses to address AI-enabled threats. Key focus areas include:
Gain visibility into where and how AI agents are being used in your environment (see the sketch after this list for one way to start)
Educate security teams on AI agent capabilities and how to use agents defensively
Implement AI-specific controls to protect against agent misuse
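As a starting point for the first item, the sketch below scans an egress or proxy log for requests to well-known AI API endpoints to build a rough inventory of which hosts are talking to AI services. The log path, its CSV layout (src_host and dest_host columns), and the domain list are assumptions; adapt them to your proxy's actual export format and the providers you care about.

```python
# Minimal sketch: flag outbound requests to well-known AI API endpoints in a
# proxy/egress log, as a first step toward inventorying AI agent usage.
import csv
from collections import Counter

# Illustrative, not exhaustive; extend with the providers relevant to you.
AI_API_DOMAINS = {
    "api.anthropic.com",
    "api.openai.com",
    "generativelanguage.googleapis.com",
}


def inventory_ai_traffic(log_path: str) -> Counter:
    """Count requests per (source host, AI API domain) pair."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            dest = row.get("dest_host", "").lower()
            if any(dest == d or dest.endswith("." + d) for d in AI_API_DOMAINS):
                hits[(row.get("src_host", "unknown"), dest)] += 1
    return hits


if __name__ == "__main__":
    # "egress_proxy.csv" is a hypothetical export file name.
    for (src, dest), count in inventory_ai_traffic("egress_proxy.csv").most_common(20):
        print(f"{src} -> {dest}: {count} requests")
```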
About Knostic
Knostic helps organizations ensure security keeps pace with AI. As enterprises adopt AI, traditional controls often fail. Knostic identifies where these controls fall short and modernizes them for an AI-powered world.
Learn more: https://www.knostic.ai
CSO Online Article: https://www.csoonline.com/article/4069075/autonomous-ai-hacking-and-the-future-of-cybersecurity.html
Anthropic's research:
https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf
About Knostic Labs
Knostic Labs explores the AI security projects that start as bold experiments and evolve into proven solutions. From early concepts to enterprise-ready products, see how innovation moves through Knostic Labs.
Learn more: https://www.knostic.ai/knostic-labs
Read Knostic Labs' research here: https://www.knostic.ai/blog/tag/research-findings