Knostic Research Team Blog

All articles

Attribute-Based Access Control (ABAC) Implementation Guide

Persona-Based Access Control (PBAC) Examples Library

5 Reasons Why AI Governance is Important

Knostic Named a 2025 SINET16 Innovator for Leadership in Enterprise AI Security

99% of Publicly Shared AI Chats are Safe, New Study Finds

AI Governance Strategy That Stops Leaks, Not Innovation

AI Data Labeling Primer: From Gold Sets to Great Models

Red Team, Go! Preventing Oversharing in Enterprise AI

Persona-Based Access Control (PBAC): What You Need to Know

Data Security Posture Management Strategy for GenAI

Prompt||GTFO Season 1 AI Security Conversations

Prompt Injection Basics: Types, Examples and Prevention

Top 10 AI Security Solutions in Q4 2025

6 Attribute-Based Access Control (ABAC) Examples and Use Cases

How to Secure AI Coding Assistants and Protect Your Codebase

RBAC vs. ABAC: Differences, Use Cases, Migration Strategy

ABAC Basics: What Is Attribute-Based Access Control?

LLMs are Fabricating Enterprise Data: A Real-Case Scenario

Primer: AI Security Posture Management (AI-SPM)

AI Regulatory Compliance Starts With Data Control

AI Governance Policy Made Simple: 7 Steps to Get It Right

AI Data Security: A Practical Guide for Modern Enterprises

The 5 Best Persona-Based Access Control (PBAC) Software Tools

The 10 Biggest Statistics and Trends for GenAI Security

14 Best AI Governance Platforms and Tools in 2025

AI Adoption in Government & the Department of Defense

AI Evaluations Ecosystem: Lessons from America’s AI Action Plan

The Rundown: Attribute-Based (ABAC) vs. Persona-Based Access Controls (PBAC)

Enterprise GenAI Adoption Mandate: Lessons from America’s AI Action Plan

GPT-5 “Retry” Behavior and Cross-Session Context Contamination

How Mental Models are Transforming AI Chaos into Clarity

Know your Access Controls: Role-Based (RBAC) vs. Persona-Based (PBAC)

10 AI Governance Best Practices for Enterprise Teams

AI as an Enzyme to Transform Critical Infrastructure

AI Observability: What You Need to Know

Persona-based Access Control Implementation in Just 6 Steps

Glean Secures LLM Search. Who Stops Oversharing?

Enterprise Guide To: Persona-Based Access Controls

Detect and Control: Shadow AI in the Enterprise

AI Security Audit: Proving Your GenAI Is Safe and Compliant

Automating MCP Server Discovery with Claude Sonnet 4

How to Find an MCP Server with Shodan

Exposing the Unseen: Mapping MCP Servers Across the Internet

Identity and Access Management for the GenAI Era

The Right AI Guardrails Keep Enterprise LLMs Safe and Compliant

Data Leakage Happens with GenAI. Here’s How to Stop It.

How to Ensure Safe GenAI Deployments in the Enterprise

AI Data Classification: Static Labels, Dynamic Risk Control and Beyond

Enterprise AI Tools Know Too Much: The CISO’s Dilemma

4 Best Strategies to Secure Model Context Protocol

How Model Context Protocol (MCP) Servers Communicate

What is a “Model Context Protocol” Server in GenAI

Why Microsoft Purview Needs Help Preventing Oversharing

Explainability in AI Search: Explained

Solving the Very-Real Problem of AI Hallucination

Adversarial AI Attacks & How to Stop Them

How LLM Pentesting Enables Prompt-to-Patch Security

AI Monitoring in Enterprise Search: Safeguard Knowledge at Scale

Microsoft Copilot Data Security and Governance: A Practical Guide for CISOs

What to Expect When You're Expecting Your GenAI Baby

AI Access Control: Safeguarding GenAI Across the Enterprise

AI Discretion: Teaching Machines the Human Concept of ‘Need-to-Know’

AI Data Security Risks and How to Minimize Them

Enterprise AI Oversharing: Hidden Hazards & Quick Fixes

SVCI: "Why We Invested in Knostic" - Leading CISOs' Thesis on AI Security

Enterprise AI Search Tools: Addressing the Risk of Data Leakage

Knostic Top 10 Finalist in RSAC™ Innovation Sandbox Contest: Secures Additional $5 Million Investment

How We Discovered an Attack in Copilot's File Permissions

Ending LLM Oversharing: Knostic Raises $11MM to Secure Enterprise AI

Extracting the GPT-4.5 System Prompt

DeepSeek’s Cutoff Date Is July 2024: We Extracted DeepSeek’s System Prompt

Exposing Microsoft Copilot's Hidden System Prompt: AI Security Implications

How Knostic Maps to Gartner’s AI TRiSM Framework

Suicide Bot: New AI Attack Causes LLM to Provide Potential “Self-Harm” Instructions

Understanding the Differences Between Jailbreaking and Prompt Injection

Merging Mental Models Part 3: The OSI Model + Cyber Defense Matrix

Merging Mental Models Part 4: The DIKW Pyramid + Cyber Defense Matrix

The Case for Pathological AI

Jailbreaking Social Engineering via Adversarial Digital Twins

Reflections on CrowdStrike: Friends, Romans, Countrymen

Knostic Wins 2024 Black Hat Startup Competition!

Knostic in Final Four of 2024 Black Hat Startup Spotlight

AI-Powered Social Engineering: An Increasing Threat

Merging Mental Models Part 2: The Cyber Defense Matrix

Reflections and Highlights from RSAC 2024

Unlocking Microsoft Copilot Without Compromise

AI Attacks: Novel or Iterations of Existing Challenges?

Merging Mental Models Part 1: Discovering Known Unknowns

Building Guardrails for Autonomic Security in 2024

Knostic is RSA Conference Launch Pad Finalist

Getting More Out of Prompt Injection Detection

LLM Pen Testing Tools for Jailbreaking and Prompt Injection

What’s next?

Want to solve oversharing in your enterprise AI search?
Let's talk.

Knostic is the comprehensive, impartial solution that stops data leakage.