OpenAnt from Knostic is an LLM-based vulnerability discovery product that helps defenders proactively find verified security flaws while minimizing both false positives and false negatives.
We're pretty proud of this product and are in the vulnerability disclosure process for its findings, but do keep in mind that some features are still in beta. We welcome contributions to make it better.
Submit your open source project for scanning
About Knostic: Knostic discovers and secures AI agents and coding assistants, as well as associated supply chain risks, including MCP servers, skills, IDE extensions, and rules. We detect shadow AI, block data exfiltration, and stop destructive commands like rm -rf.
Visit knostic.ai for more information.
Since Knostic's focus is on protecting coding agents and preventing them from destroying your computer and deleting your code (not vulnerability research), and we like open source, we're releasing OpenAnt for free.
Besides, you may have heard about Aardvark from OpenAI (now Codex Security) and Claude Code Security from Anthropic, and we have zero intention of competing with them.
Considering the upcoming explosion of AI-discovered vulnerabilities, we hope OpenAnt will be the tool helping open source maintainers stay ahead of attackers.
A "unit" in OpenAnt is some code block (e.g., function, module, etc.) along with additional metadata that allows the LLM to analyze it with the proper context. The additional metadata includes who called it, what it calls, and a few bits of other useful information.
Breaking code down into units and their call graph gives an LLM just enough context to verify whether a vulnerability is truly exploitable.
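A unit can be modeled as a small record carrying the code block plus its call-graph context. The sketch below is illustrative only; the field names are our invention, not OpenAnt's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    """One analyzable code block plus the context an LLM needs."""
    name: str        # e.g. "parse_header"
    source: str      # the function body itself
    file: str        # where it lives
    callers: list = field(default_factory=list)  # who calls this unit
    callees: list = field(default_factory=list)  # what this unit calls

# The call-graph edges travel with the code block, so the LLM sees
# the function together with who reaches it and what it reaches.
unit = Unit(name="parse_header", source="...", file="http/parse.c",
            callers=["handle_request"], callees=["read_bytes"])
```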
The obvious approach to verifying vulnerability findings is to prompt the model: "You are an attacker. Can you exploit this?" This is what many tools do. It's also insufficient, because the model will say yes to almost anything.
LLMs are agreeable by default. Ask "is this code vulnerable?" and the model will find a way to say yes. It will construct a plausible-sounding attack scenario even for safe code. Ask "can you exploit this?" and the model will describe a theoretical attack that sounds convincing but assumes capabilities the attacker doesn't have: server access, admin credentials, the ability to modify files, or local shell access on the target machine.
OpenAnt uses a tightly constrained persona. The model can't assume it has server access. It can't assume it has database credentials. It can't assume it can read local files. Every step of the exploit must work within these constraints, and every step must be traced explicitly. The model can't skip steps or hand-wave through the hard parts. It must show the specific input, the specific endpoint, the specific data flow, step by step.
For CLI tools and libraries, the constraints are even tighter. The LLM is told that it has NO ABILITY TO RUN CLI COMMANDS and it must find a way to trigger this vulnerability REMOTELY. If the only attack path requires running CLI commands locally, having shell access to the server, or being the user who runs the application, then the vulnerability is considered NOT EXPLOITABLE, because local users can already do anything on their own machine.
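In code, the constraint block handed to the model might look something like this. This is an illustrative sketch of the idea, not OpenAnt's actual prompt text:

```python
# Illustrative attacker constraints -- NOT OpenAnt's actual prompt.
ATTACKER_CONSTRAINTS = """
You are a remote attacker. You have:
- NO server or shell access on the target machine
- NO database credentials
- NO ability to read or modify local files
- NO ABILITY TO RUN CLI COMMANDS

Every exploit step must work within these constraints and be traced
explicitly: the specific input, the specific endpoint, the specific
data flow. If the only attack path requires local access, report the
finding as NOT EXPLOITABLE.
"""

def build_verification_prompt(finding: str) -> str:
    """Prepend the constrained persona to every finding under review."""
    return ATTACKER_CONSTRAINTS + "\nFinding to verify:\n" + finding

prompt = build_verification_prompt("Possible buffer overflow in parse_header")
```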
This eliminates an entire class of false positives. Without this constraint, a model prompted to "act as an attacker" will happily describe how to pass malicious arguments to a CLI tool, producing a technically correct but practically meaningless finding.
OpenAnt's combination of a constrained persona, multiple required approaches, step-by-step tracing, structured exploit path output, and tool access to verify claims against the actual codebase turns this into a genuine adversarial test. The model isn't confirming a finding; it's trying to break code under realistic constraints, and it has to show its work.
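A structured exploit-path output could take a shape like the record below. The schema is hypothetical, written only to show why step-by-step tracing leaves no room for hand-waving:

```python
import json

# Hypothetical exploit-path record; field names are illustrative.
exploit_path = {
    "finding": "SQL injection in /api/search",
    "exploitable": True,
    "steps": [
        {"step": 1,
         "action": "send GET /api/search?q=' OR 1=1--",
         "trace": "q flows unescaped into build_query()"},
        {"step": 2,
         "action": "server executes attacker-controlled SQL",
         "trace": "build_query() -> db.execute()"},
    ],
    # Must stay empty under the constrained persona: no assumed
    # shell access, credentials, or local file reads.
    "assumed_capabilities": [],
}

report = json.dumps(exploit_path, indent=2)
```

Forcing every step into explicit fields makes a skipped step or an unstated capability visible as a gap in the record.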
Once you point OpenAnt at your codebase, it works in five stages:
This stage scans the code, extracts every function, and builds a call graph for each. A function together with its call graph is what we call a "unit." This is pure static analysis and involves no LLM cost.
For example, OpenSSL yields 15,232 units across 1,769 files.
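For a Python codebase, the extraction idea can be sketched with the standard `ast` module. OpenAnt supports multiple languages; this only illustrates the concept of walking source code and recording call edges per function:

```python
import ast

SOURCE = """
def helper(x):
    return x * 2

def entry(data):
    return helper(len(data))
"""

def extract_units(source: str) -> dict:
    """Map each function name to the set of names it calls."""
    tree = ast.parse(source)
    units = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Collect direct calls inside this function body.
            units[node.name] = {
                call.func.id
                for call in ast.walk(node)
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name)
            }
    return units

units = extract_units(SOURCE)  # entry calls helper and len; helper calls nothing
```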
Most code in a large codebase is internal (utility functions, data structures, helpers that are never directly exposed to external input). The reachability filter identifies entry points (CLI handlers, callbacks, main functions) and traces forward through the call graph to find which functions are reachable from external input. This stage involves no LLM costs. It's simple graph traversal on the statically-built call graph.
For example, OpenSSL drops from 15,232 to 390 units (a 97% reduction).
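The reachability filter itself is plain breadth-first traversal over the call graph from the entry points; a minimal sketch:

```python
from collections import deque

def reachable_units(call_graph: dict, entry_points: list) -> set:
    """BFS forward from entry points; everything unvisited is filtered out."""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

# Toy graph: internal_helper is never reached from an entry point,
# so it (and everything only it calls) drops out of the analysis.
graph = {
    "main": ["parse_args", "handle_request"],
    "handle_request": ["parse_header"],
    "internal_helper": ["format_debug"],
}
reachable = reachable_units(graph, ["main"])
```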
A Sonnet agent iteratively explores the codebase for each reachable unit, classifying it by exposure (externally exposed, internally exposed, security control, or neutral). The agent runs in a loop, searching for callers, reading function definitions, and tracing data flow. Each iteration adds to the conversation history, so token consumption grows with each round. A simple utility function may finish in one iteration (~$0.13), while a complex function with deep call chains can hit the 20-iteration cap (~$10.92 per unit).
For example, the median was 9 iterations per unit for OpenSSL. This step dropped OpenSSL's 390 units to 49 externally exposed units (a further 87% reduction and 99.6% reduction overall).
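The iteration cap and growing conversation history can be sketched as a loop like this, with `call_model` as a stand-in for a real Sonnet API call:

```python
MAX_ITERATIONS = 20  # hard cap: a runaway unit stops at ~20 rounds

def call_model(history):
    """Stand-in for a real LLM call; returns (reply, done_flag)."""
    # A real implementation would send the full `history` to the API,
    # which is why cost grows with every iteration.
    return ("classification: exposed_externally", True)

def classify_unit(unit_source: str):
    history = [{"role": "user", "content": unit_source}]
    for iteration in range(1, MAX_ITERATIONS + 1):
        reply, done = call_model(history)
        history.append({"role": "assistant", "content": reply})
        if done:
            return reply, iteration
    return "unclassified", MAX_ITERATIONS  # hit the cap

result, rounds = classify_unit("def parse_header(buf): ...")
```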
Claude Opus analyzes each of the externally exposed units for vulnerabilities.
For OpenSSL, of the 49 externally exposed units, 28 were flagged as potentially vulnerable (a further 43% reduction and 99.8% reduction overall).
Claude Opus with agentic tool use role-plays as an attacker attempting to exploit each finding step by step. The verification is agentic. The model calls tools to search the codebase, read related functions, and trace exploit paths, with each iteration growing the context window. The most expensive stage per unit.
For OpenSSL, of the 28 potentially vulnerable units, 3 were confirmed vulnerable and exploitable (a further 89% reduction, and 99.98% reduction overall).
After static verification of the vulnerability, Claude is tasked with verifying it dynamically as well, using sandboxed, Docker-isolated exploit testing. At this point, the AI is prompted to act as a security researcher would: it must verify the vulnerability and can choose whether to run the whole project or only part of it.
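The sandboxing step amounts to running the generated exploit test inside a throwaway container. The sketch below shows one plausible shape; the image name and resource limits are assumptions, not OpenAnt's actual configuration:

```python
import subprocess

def build_sandbox_cmd(image: str, test_script: str) -> list:
    """Assemble a docker run command for an isolated, throwaway test container."""
    return [
        "docker", "run", "--rm",   # remove the container afterwards
        "--network", "none",       # no outbound network from the sandbox
        "--memory", "512m",        # cap resources
        image, "python", "-c", test_script,
    ]

def run_exploit_test(image: str, test_script: str) -> bool:
    """Exit code 0 is treated as 'exploit reproduced as expected'."""
    result = subprocess.run(build_sandbox_cmd(image, test_script),
                            capture_output=True, timeout=300)
    return result.returncode == 0

# Hypothetical image name, for illustration only.
cmd = build_sandbox_cmd("openant-sandbox:latest", "print('poc')")
```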
OpenAnt is free, as in puppy: it requires care and feeding, particularly in the form of token costs. OpenAnt uses Anthropic's API by default, but you can switch to any model.
Below, we show the token costs (as of February 2026) that we incurred running OpenAnt against OpenSSL and other well-known open source projects. Note that the number of "units" does not necessarily correlate with codebase size; a large project may have fewer units than a smaller one. It's therefore advisable to check your estimated unit count before letting OpenAnt process your code.
| Stage | Per-unit cost | All 15,232 units | With filtering |
|---|---|---|---|
| Enhance (Sonnet, agentic) | $0.13–$10.92 | $2,000–$166,000 | $393.41 (390 units) |
| Detect (Opus, Stage 1) | $0.16 | $2,437 | $7.69 (49 units) |
| Verify (Opus, Stage 2 agentic) | $0.14–$10.54 | $2,133–$160,583 | $38.86 (28 units) |
| Dynamic test | $0.90 | — | $2.70 (3 findings) |
| Total | — | $6,570–$329,020 | $442.65 |
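Given per-unit figures like those above, a rough pre-run estimate is simple arithmetic. This sketch pessimistically assumes every filtered unit flows through all three LLM stages (in practice the funnel narrows at each stage), and the per-unit costs are the February 2026 numbers, which will drift with model pricing:

```python
def estimate_cost(units_after_filter: int,
                  enhance_per_unit=(0.13, 10.92),   # agentic: min/max per unit
                  detect_per_unit=0.16,
                  verify_per_unit=(0.14, 10.54)):   # agentic: min/max per unit
    """Return a (low, high) dollar estimate for the LLM stages."""
    low = units_after_filter * (
        enhance_per_unit[0] + detect_per_unit + verify_per_unit[0])
    high = units_after_filter * (
        enhance_per_unit[1] + detect_per_unit + verify_per_unit[1])
    return round(low, 2), round(high, 2)

# 390 units survived OpenSSL's reachability filter.
low, high = estimate_cost(390)
```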
| Repository | Language | Total units | After reachability | Classification → Vuln discovery → Verification | Total cost |
|---|---|---|---|---|---|
| OpenSSL | C | 15,232 | 390 (97.4% reduction) | 49 → 28 → 3 | $442.65 |
| WordPress | PHP | 12,177 | 393 (96.8% reduction) | 93 → 67 → 20 | $239.45 |
| LangChain | Python | 6,701 | 143 (97.9% reduction) | 37 → 1 → 1 | $51.48 |
| Rails | Ruby | 89 | 89 (0% reduction) | 19 → 2 → 2 | $25.18 |
| Grafana | TypeScript & Go | 18,500 | 2,379 (87.1% reduction) | 223 → 143 → 86 | $1,080.86 |
Dynamic tests are generated on the fly using the most capable LLM within budget constraints (currently Opus 4.6). While this enables flexible vulnerability verification, the quality of the generated test design is not always robust. In some cases, the test may technically confirm a vulnerability, yet the test design itself may be methodologically questionable.
This limitation is particularly pronounced in C codebases, where low-level memory management, pointer arithmetic, and complex control flows complicate dynamic validation.
A potential mitigation strategy would be to generate multiple alternative test designs, explicitly instructing the LLM to vary the testing approach, and then select the strongest candidate for final verification. We chose not to implement this approach due to the added architectural and orchestration complexity.
In certain cases, a single logical code unit exceeds the LLM's context window. C projects are especially susceptible to this issue due to dense interdependencies and extensive header inclusion chains.
One mitigation strategy is to route such cases to larger-context (and therefore more expensive) models. However, this does not eliminate the problem entirely. Some units may exceed even the largest available context windows.
An alternative approach is to reduce the effective unit size by excluding lower-risk dependencies. While this may enable processing within context limits, it introduces the risk of incomplete vulnerability assessment due to missing execution paths or implicit assumptions embedded in excluded code.
Cost projections are inherently approximate. In practice, we have observed cases where actual expenses are nearly double the initial estimate.
This variability stems from factors that are difficult to model in advance. For example, when a processing step produces invalid JSON, the system triggers an automatic correction cycle using the LLM, resulting in additional prompts and unplanned token consumption. Similar cascading effects can occur in other error-recovery or validation loops.
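The correction cycle is the classic parse-retry loop: each failed parse costs another round trip to the model. A sketch, with `ask_model` as a stand-in for the real API call (here wired to fail once, then succeed):

```python
import json

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM call: returns broken JSON once, then valid JSON."""
    ask_model.calls += 1
    return '{"broken": ' if ask_model.calls == 1 else '{"severity": "high"}'
ask_model.calls = 0

def get_json(prompt: str, max_retries: int = 3) -> dict:
    reply = ask_model(prompt)
    for _ in range(max_retries):
        try:
            return json.loads(reply)
        except json.JSONDecodeError:
            # Every repair attempt is an extra prompt: unplanned token spend.
            reply = ask_model("Fix this so it is valid JSON:\n" + reply)
    raise ValueError("model never produced valid JSON")

result = get_json("Classify this finding as JSON.")
```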
We're launching a free scanning program for open source projects using OpenAnt. Contact us at oss-scan@knostic.ai to tell us about your open source project and request a scan, or clone the GitHub repo and run it yourself.
Research: Nahum Korda
Productization: Alex Raihelgaus, Daniel Geyshis
With thanks to: Michal Kamensky, Imri Goldberg, Gadi Evron, Daniel Cuthbert, Josh Grossman, and Avi Douglen.
Do you like our work? Check out what we do at Knostic to defend your agents, prevent them from deleting your hard drive and code, and control associated supply chain risks such as MCP servers, extensions, and skills.
Visit knostic.ai for more information.