
Knostic Research Team Blog

All articles

The Right Guardrails Keep Enterprise LLMs Safe and Compliant

Data Leakage Happens with GenAI. Here’s How to Stop It.

Ensuring a Safe GenAI Deployment

AI Data Classification: Static Labels, Dynamic Risk Control and Beyond

Enterprise AI Tools Know Too Much: The CISO’s Dilemma

Automating MCP Server Discovery with Claude Sonnet 4

4 Best Strategies to Secure Model Context Protocol

How Model Context Protocol (MCP) Servers Communicate

How to Find an MCP Server with Shodan

What Is a “Model Context Protocol” Server in GenAI?

Why Microsoft Purview Needs Help Preventing Oversharing

Explainability in AI Search: Explained

Solving the Very-Real Problem of AI Hallucination

Adversarial AI Attacks & How to Stop Them

How LLM Pentesting Enables Prompt-to-Patch Security

AI Monitoring in Enterprise Search: Safeguard Knowledge at Scale

Microsoft Copilot Data Security & Governance Guide for CISOs

What to Expect When You're Expecting Your GenAI Baby

AI Access Control: Safeguarding GenAI Across the Enterprise

AI Discretion: Teaching Machines the Human Concept of ‘Need-to-Know’

AI Data Security Risks and How to Minimize Them

AI Oversharing in the Workplace: Hidden Hazards & Quick Fixes

SVCI: "Why We Invested in Knostic" - Leading CISOs' Thesis on AI Security

Enterprise AI Search Tools: Addressing the Risk of Data Leakage

Knostic Top 10 Finalist in RSAC™ Innovation Sandbox Contest: Secures Additional $5 Million Investment

How We Discovered an Attack in Copilot's File Permissions

Ending LLM Oversharing: Knostic Raises $11MM to Secure Enterprise AI

Extracting the GPT4.5 System Prompt

DeepSeek’s cutoff date is July 2024: We extracted DeepSeek’s system prompt

Exposing Microsoft Copilot's Hidden System Prompt: AI Security Implications

How Knostic Maps to Gartner’s AI TRiSM Framework

Suicide Bot: New AI Attack Causes LLM to Provide Potential “Self-Harm” Instructions

Understanding the Differences Between Jailbreaking and Prompt Injection

Merging Mental Models Part 3: The OSI Model + Cyber Defense Matrix

Merging Mental Models Part 4: The DIKW Pyramid + Cyber Defense Matrix

The Case for Pathological AI

Jailbreaking Social Engineering via Adversarial Digital Twins

Reflections on CrowdStrike: Friends, Romans, Countrymen

Knostic Wins 2024 Black Hat Startup Competition!

Knostic in Final Four of 2024 Black Hat Startup Spotlight

AI-Powered Social Engineering: An Increasing Threat

Merging Mental Models Part 2: The Cyber Defense Matrix

Reflections and Highlights from RSAC 2024

Unlocking Microsoft Copilot Without Compromise

AI Attacks: Novel or Iterations of Existing Challenges?

Merging Mental Models Part 1: Discovering Known Unknowns

Building Guardrails for Autonomic Security in 2024

Knostic is RSA Conference Launch Pad Finalist

Getting More Out of Prompt Injection Detection

LLM Pen Testing Tools for Jailbreaking and Prompt Injection

Learn How to Protect Your Enterprise Data Now!

Please fill out the form to access our Solution Brief on stopping enterprise AI search oversharing with Knostic.

Check out our other resources:

LLM Data Leakage

What’s next?

Want to solve oversharing in your enterprise AI search?
Let's talk.

Knostic is a comprehensive, impartial solution for stopping data leakage.

Knostic leads the space of unbiased, need-to-know-based access controls, enabling enterprises to adopt AI safely.