A new analysis by Knostic shows that public AI use is overwhelmingly safe, and mostly about learning.

When conversations with ChatGPT are made public, what do they reveal about how people actually use AI? A new study, led by psychologist Shayell Aharon of Knostic, examined 13,455 publicly shared conversations with large language models (LLMs) and found that people use them primarily as digital tutors: for learning, curiosity, and self-improvement.

The results challenge the narrative that AI tools are mostly misused or unsafe. In fact, 99.06% of analyzed conversations contained no policy violations, and over 80% centered on educational topics. The data paints a clear picture: in public settings, people overwhelmingly use AI constructively and responsibly.

“When people know their conversations might be visible to others, they tend to present themselves positively - as curious, intelligent, and responsible,” says Shayell Aharon, psychologist and lead researcher at Knostic. “That’s why we see overwhelmingly safe and constructive behavior in public use.”

What the data shows

The research team analyzed conversations that users voluntarily shared online through tools such as ShareGPT. Each conversation was reviewed for safety, content type, and attempts at policy circumvention (“jailbreaking”).

Key findings include:

  • Safe use is the norm: More than 99% of conversations were free from harmful or policy-violating content. Only 0.94% contained sensitive material — most of it mild or academic in nature.

  • Jailbreaks are extremely rare: Just one successful attempt (≈0.007%) was identified across all conversations.

  • Learning dominates: 80.1% of interactions involved educational or self-development topics — including math, science, language learning, and academic help.

  • Human psychology as a safeguard: Users often “manage their impression,” behaving toward AI as they would on social media — curating their self-image to appear thoughtful and responsible.

Together, these findings suggest that human social behavior, not just technology, plays a crucial role in making public AI use safe.
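To make the headline numbers concrete, here is a minimal Python sketch of how rates like the share of safe conversations, educational topics, and successful jailbreaks could be tallied from a labeled dataset. It is illustrative only: the study’s actual pipeline and schema are not described here, and the record fields, labels, and sample data below are hypothetical.

  # Illustrative only: field names, labels, and sample data are hypothetical,
  # not the study's actual schema or review pipeline.
  from collections import Counter

  conversations = [
      {"id": 1, "category": "education", "policy_violation": False, "jailbreak_success": False},
      {"id": 2, "category": "creative",  "policy_violation": False, "jailbreak_success": False},
      {"id": 3, "category": "sensitive", "policy_violation": True,  "jailbreak_success": False},
  ]

  total = len(conversations)
  safe = sum(not c["policy_violation"] for c in conversations)
  educational = sum(c["category"] == "education" for c in conversations)
  jailbreaks = sum(c["jailbreak_success"] for c in conversations)

  print(f"Safe conversations:    {safe / total:.2%}")
  print(f"Educational topics:    {educational / total:.2%}")
  print(f"Successful jailbreaks: {jailbreaks / total:.3%}")
  print(Counter(c["category"] for c in conversations).most_common(3))

On the real corpus, the same arithmetic reproduces the reported figures: one successful jailbreak out of 13,455 conversations is roughly 0.007%.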

What the outliers reveal

While the overall results are reassuring, a few rare cases expose the boundaries of what AI understands — and remind us that safety doesn’t equal comprehension.

In one instance, a user asked ChatGPT to “describe the Holocaust using emojis.” The model’s response was a meaningless string of symbols, illustrating its inability to grasp emotional or moral context.
In another case, a “grandma mode” role-play, in which the user adopted a polite, playful tone, partially tricked the system into giving a response it would normally block.

“These examples are rare but revealing,” Aharon explains. “AI can mimic language convincingly, but it still struggles with sensitivity, empathy, and moral understanding.”

These moments highlight a simple truth: language alone isn’t the same as understanding. Human conversation carries values, culture, and intent, all layers of meaning that current AI models still can’t fully interpret.

A gap between fear and reality

Public discourse often emphasizes the risks of AI, such as misinformation, bias, and manipulation, but this study offers a data-driven counterpoint. When AI interactions are visible, users tend to behave responsibly, engaging in positive, educational dialogue. Transparency itself becomes a safety mechanism.

At the same time, the research team cautions that private conversations, which are not publicly shared, could present a different picture, one that merits further study.

“This study reminds us that AI safety isn’t just a technical challenge,” Aharon notes. “It’s also about human psychology, and how awareness and accountability shape the way people use technology.”

Learn more

The full report, including methodology, data analysis, and additional findings, is available for download here: An Analysis of the Uses and Risks of LLMs
