
LLMs are Fabricating Enterprise Data: A Real-Case Scenario

Written by Shayell Aharon | Sep 8, 2025 4:15:08 PM

New Knostic research: LLMs can fabricate sensitive personal information, creating risks similar to those of actual data leaks

Your shiny new AI is supposed to boost productivity. And it does. These tools are game-changers that are reshaping how we work. But there's a catch: sometimes they confidently tell you something that's completely wrong. These systems can generate false but convincing answers that, in the workplace, erode trust, spark real conflicts, and create serious compliance headaches.

At Knostic, our research aimed to uncover the hidden dangers of LLM adoption within enterprises. We built an artificial company environment with real data files and then prompted a copilot for a user’s personal information. The results were concerning: the copilot searched for the information in the HR folder but, unable to find it, fabricated the employee’s personal details and shared them with our team. When confronted about the veracity of the data, it offered a polite retraction. This quick video demonstrates how it happens:

When AI Fabricates Enterprise Data

Here's a scenario happening in companies everywhere: Sarah from marketing casually asks the company AI about average team salaries. The AI responds with confidence: "$95,000." Sounds reasonable, sounds official. One problem: it's completely made up.

Now Sarah thinks her colleague Tom earns way more than he actually does. She's frustrated, maybe even resentful. She might march into HR demanding an explanation. All because the AI decided to fill a knowledge gap with pure fiction.

This isn't some hypothetical scenario. Payscale's 2025 Pay Confidence Gap Report shows that 70% of employers have caught employees using AI assistants for salary research, with 38% saying these tools are pushing salary demands higher than ever. Even worse: 63% of HR leaders report employees making salary requests based on completely inaccurate information they got from AI.

Why AI Makes Stuff Up

Here's the thing about AI: it doesn't actually "know" anything the way humans do. It's really good at predicting what words should come next, but it has no real grasp of truth or falsehood. As Red Hat's AI security expert Huzaifa Sidhpurwala puts it, these systems are designed to produce plausible text, "regardless of whether output aligns with reality." When they don't know something, they'll confidently make it up rather than admit ignorance.

Workplace AI is especially prone to this because company data is often messy or incomplete, document retrieval systems grab the wrong information, and nobody taught these systems to say "I don't know."
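
To make that last point concrete, here is a minimal, hypothetical sketch in Python of a retrieval-backed assistant. Everything in it (the HR files, the retrieve() and ask_copilot() names, the questions) is invented for illustration and is not any vendor's actual pipeline; the point is that declining instead of fabricating requires an explicit guard for the moment retrieval comes back empty.

```python
# Toy sketch: why an ungrounded assistant fabricates, and what an explicit
# "I don't know" guard looks like. All names and documents are hypothetical.

hr_documents = {
    "benefits_policy.txt": "We offer competitive salaries and annual reviews.",
    "pto_policy.txt": "Employees accrue 20 days of paid time off per year.",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval over the HR folder."""
    question_words = {w.lower().strip("?.,") for w in question.split()}
    return [
        text for text in hr_documents.values()
        if question_words & {w.lower().strip("?.,") for w in text.split()}
    ]

def ask_copilot(question: str) -> str:
    context = retrieve(question)
    if not context:
        # Without this guard, a real LLM would still produce a fluent,
        # plausible-sounding answer -- in other words, it would fabricate one.
        return "I don't know; that information isn't in the documents I can see."
    # In a real pipeline, the retrieved context plus the question would be
    # sent to the model here; this toy just echoes the grounding documents.
    return "Based on company documents: " + " ".join(context)

print(ask_copilot("What salary does Tom earn?"))               # -> "I don't know..."
print(ask_copilot("How many paid days off do employees get?")) # -> grounded answer
```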

The Fabricated Sensitive Information Problem

Here's where things get really scary: your AI doesn't just risk leaking real confidential data; it can also invent fake sensitive information that sounds completely legitimate.

Picture this: your AI confidently states "Sarah from accounting makes $87,000." Your employee has no clue whether that's real data pulled from HR files, a number the AI just invented, or last year's outdated figure. The kicker is that it doesn't matter. The damage is identical either way.

That fabricated information spreads just as fast as real leaks. It creates the same workplace drama. Employees believe it because it came from the "official" company AI. Suddenly you're dealing with fallout from secrets that never even existed.

Most security tools only worry about real data getting out. But when your AI starts hallucinating about budgets, salaries, or confidential projects, those fake details can be just as destructive as actual leaks. They sound authoritative, people trust them, and your organization faces a crisis over information that was pure fiction.
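
One concrete way to see the gap: leak-prevention checks that match AI output against values known to be confidential have nothing to match when the value never existed in any file. The sketch below is a deliberately naive, hypothetical example (invented names, figures, and function names, not a description of any specific security product):

```python
# Toy sketch: exact-match leak detection catches a real confidential value
# but waves a fabricated one straight through. All figures are invented.

KNOWN_CONFIDENTIAL_VALUES = {"$92,500"}  # e.g., Sarah's real salary from HR files

def exact_match_dlp(ai_response: str) -> bool:
    """Flag the response only if it quotes a value we know is confidential."""
    return any(value in ai_response for value in KNOWN_CONFIDENTIAL_VALUES)

real_leak = "Sarah from accounting makes $92,500."
fabrication = "Sarah from accounting makes $87,000."  # the AI invented this figure

print(exact_match_dlp(real_leak))    # True  -> flagged as a leak
print(exact_match_dlp(fabrication))  # False -> passes, yet causes the same damage
```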

The Real Cost of AI Lies

In your living room, AI mistakes might be amusing. In your office, they're expensive and dangerous:

  • Teams fall apart: When AI spreads false information about pay, promotions, or performance, trust evaporates. Colleagues who used to work well together start questioning everything. Misinformation doesn’t just cause individual frustration—it triggers emotional contagion across teams, where negative feelings spread quickly and fuel the formation of informal coalitions against colleagues or management.

  • Bad decisions pile up: Managers using AI for research might base strategy on hallucinated market data or implement solutions based on incorrect technical advice.

  • Legal trouble: In regulated industries, one wrong AI answer about compliance can trigger investigations and hefty fines.

  • Your reputation takes a hit: Just ask Air Canada. Their AI chatbot promised a passenger a refund that didn't exist. The airline ended up in court, had to honor the false promise, and dealt with embarrassing headlines worldwide.

  • Productivity gains vanish: That time-saving tool becomes a time-waster when people have to double-check everything or redo work based on bogus information.

How Knostic Solves the Fabricated Information Challenge

While other AI governance tools focus on preventing real data leaks, Knostic tackles something much trickier: catching made-up sensitive information before it spreads.

  • Smarter than keyword scanning: Traditional tools look for actual confidential files. Knostic's AI understands context and recognizes when your copilot starts discussing specific salary figures or budget details, whether they're real or completely invented.

  • Knows what's risky: Knostic gets the difference between harmless general talk ("we offer competitive salaries") and dangerous specifics ("Maria earned $95,000 last year"); a simplified sketch of that distinction follows this list. The moment your AI crosses that line, Knostic intervenes, regardless of whether that information actually exists anywhere.

  • Stops problems instantly: Instead of letting fabricated sensitive details spread through your organization, Knostic catches them in real time and provides safe responses that protect both accuracy and confidentiality.

  • The breakthrough insight: Knostic treats lies exactly like leaks, because from your organization's perspective, they are. A hallucinated secret causes the same workplace disruption as a real one.
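
To illustrate the general-versus-specific distinction referenced in the list above, here is a deliberately simple, hypothetical pattern check. It is not Knostic's implementation; it only stands in for the idea that a named person paired with a concrete dollar figure is risky whether or not the figure is real:

```python
import re

# Toy stand-in for output screening: block a specific compensation claim
# (a capitalized name near a dollar figure), allow general statements.
# Not Knostic's actual approach -- purely illustrative.
SPECIFIC_COMP_PATTERN = re.compile(
    r"\b[A-Z][a-z]+\b.{0,40}?\$\d{1,3}(?:,\d{3})+"
)

def classify(ai_response: str) -> str:
    if SPECIFIC_COMP_PATTERN.search(ai_response):
        return "BLOCK: specific compensation claim (real or fabricated)"
    return "ALLOW: general statement"

print(classify("We offer competitive salaries and review them annually."))  # ALLOW
print(classify("Maria earned $95,000 last year."))                          # BLOCK
```

A production system would rely on semantic understanding of the conversation rather than a single regular expression, but the underlying policy is the same: it is the specificity of the claim, not whether the number exists in any real file, that makes the response dangerous.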

The Reality Check

AIs aren't going anywhere; they're too valuable to ignore. Organizations that try to avoid AI will fall behind competitors who harness its power effectively. But treating these essential tools like magic truth machines is asking for trouble. Companies that succeed with AI prepare for its quirks and limitations while embracing its transformative potential. The choice isn't whether to use AI; it's how to use it safely.

Your people need tools they can trust, and you need confidence that these powerful systems won't create legal, financial, or reputation disasters. Smart governance that handles both real leaks and fabricated information lets you capture AI's massive productivity benefits without the headaches.

This isn't just about protecting data. It's about preserving truth while unleashing AI's revolutionary potential in an age when these systems can sound convincing while being completely wrong.

What's Next?

Ready to learn more about Data Governance in the age of AI? Check out our White Paper.
