Cross icon
Test your LLM for oversharing!  Test for real-world oversharing risks with role-specific prompts that mimic  real workplace questions. FREE - Start Now
protect icon

A new era requires a new set of solutions
Knostic delivers it

Skip to main content
Skip to main content

Large Language Models (LLMs) present a complex array of opportunities and vulnerabilities. Prompt injection and jailbreaking techniques have emerged as indispensable methodologies for probing the resilience of these models, expanding the horizons of their capabilities while uncovering potential weaknesses.

Amidst the recent spotlight on Microsoft's PyRIT, a constellation of LLM pen testing tools has gained more attention:

1. garak by Leon Derczynski

Derczynski's garak stands as a testament to the potency of prompt injection techniques, providing a lens through which to scrutinize the fortitude of LLMs in the face of adversarial inputs.

 

2. HouYi by YI LIU and Gelei Deng

HouYi's contribution lies in its innovative approach to LLM security, illuminating potential vulnerabilities and avenues of exploitation within these models.

 

3. JailbreakingLLMs by Patrick Chao, Alex Robey, Edgar Dobriban, Hamed Hassani, George J. Pappas, and Eric Wong

A collaborative endeavor, JailbreakingLLMs delves into the intricacies of jailbreaking tailored for LLMs, offering invaluable insights into fortifying these systems against malicious incursions.

 

4. llm-attacks by Andy Zou, Zifan Wang, and Zico Kolter

The llm-attacks framework presents a comprehensive toolkit for assessing the security posture of LLMs, encompassing a myriad of attack vectors and defensive strategies.

 

5. PromptInject by Fábio Perez and Ian Ribeiro

PromptInject emerges as a beacon of vigilance against prompt injection attacks, furnishing pragmatic solutions for bolstering the security resilience of LLMs.

 

6. LLM-Canary by Jamie Cohen and Jackson Gor

LLM-Canary introduces novel methodologies for detecting anomalous behaviors within LLMs, serving as a preemptive safeguard against potential breaches.

 

7. PyRIT by Microsoft

Microsoft's PyRIT commands attention as a recent addition to the list of LLM pen testing tools, underscoring the burgeoning significance of AI security in the contemporary cybersecurity discourse.

Credits to Idan Gelbourt and Simo Jaanus for researching this list. 

For those inclined towards deeper exploration, please see our past research on prompt injection detection:

Link to previous research on prompt injection detection

Data Leakage Detection and Response for Enterprise AI Search

Learn how to assess and remediate LLM data exposure via Copilot, Glean and other AI Chatbots with Knostic.

Get Access

Mask group-Oct-30-2025-05-23-49-8537-PM

The Data Governance Gap in Enterprise AI

See why traditional controls fall short for LLMs, and learn how to build policies that keep AI compliant and secure.

Download the Whitepaper

data-governance

Rethinking Cyber Defense for the Age of AI

Learn how Sounil Yu’s Cyber Defense Matrix helps teams map new AI risks, controls, and readiness strategies for modern enterprises.

Get the Book

Cyber Defence Matrix - cover

Extend Microsoft Purview for AI Readiness

See how Knostic strengthens Purview by detecting overshared data, enforcing need-to-know access, and locking down AI-driven exposure.

Download the Brief

copilot-img

Build Trust and Security into Enterprise AI

Explore how Knostic aligns with Gartner’s AI TRiSM framework to manage trust, risk, and security across AI deployments.

Read the Brief

Image-1

Real Prompts. Real Risks. Real Lessons.

A creative look at real-world prompt interactions that reveal how sensitive data can slip through AI conversations.

Get the Novella

novella-book-icon

Stop AI Data Leaks Before They Spread

Learn how Knostic detects and remediates oversharing across copilots and search tools, protecting sensitive data in real time.

Download the Brief

Solution Brief

Accelerate Copilot Rollouts with Confidence

Equip your clients to adopt Copilot faster with Knostic's AI security layer, boosting trust, compliance, and ROI.

Get the One-Pager

cover 1

Reveal Oversharing Before It Becomes a Breach

See how Knostic detects sensitive data exposure across copilots and search, before compliance and privacy risks emerge.

View the One-Pager

cover 1

Unlock AI Productivity Without Losing Control

Learn how Knostic helps teams harness AI assistants while keeping sensitive and regulated data protected.

Download the Brief

safely-unlock-book-img

Balancing Innovation and Risk in AI Adoption

A research-driven overview of LLM use cases and the security, privacy, and governance gaps enterprises must address.

Read the Study

mockup

Secure Your AI Coding Environment

Discover how Kirin prevents unsafe extensions, misconfigured IDE servers, and risky agent behavior from disrupting your business.

Get the One-Pager

cover 1
bg-shape-download

See How to Secure and Enable AI in Your Enterprise

Knostic provides AI-native security and governance across copilots, agents, and enterprise data. Discover risks, enforce guardrails, and enable innovation without compromise.

195 1-min
background for career

What’s next?

Want to solve oversharing in your enterprise AI search? Let's talk.

Knostic offers the most comprehensively holistic and impartial solution for enterprise AI search.

protect icon

Knostic leads the unbiased need-to-know based access controls space, enabling enterprises to safely adopt AI.