
A new analysis by Knostic shows that public AI use is overwhelmingly safe, and mostly about learning.

When conversations with ChatGPT are made public, what do they reveal about how people actually use AI? A new study, led by psychologist Shayell Aharon of Knostic, examined 13,455 publicly shared conversations with large language models (LLMs) and found that people use them primarily as digital tutors: for learning, curiosity, and self-improvement.

The results challenge the narrative that AI tools are mostly misused or unsafe. In fact, 99.06% of analyzed conversations contained no policy violations, and over 80% centered on educational topics. The data paints a clear picture: in public settings, people overwhelmingly use AI constructively and responsibly.

“When people know their conversations might be visible to others, they tend to present themselves positively - as curious, intelligent, and responsible,” says Shayell Aharon, psychologist and lead researcher at Knostic. “That’s why we see overwhelmingly safe and constructive behavior in public use.”

What the data shows

The research team analyzed conversations that users voluntarily shared online through tools such as ShareGPT. Each conversation was reviewed for safety, content type, and attempts at policy circumvention (“jailbreaking”).
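To make the review process concrete, the sketch below shows one possible way to represent per-conversation labels of this kind and roll them up into headline rates. It is purely illustrative: the report does not publish its analysis tooling, and the names used here (ConversationLabel, summarize) are assumptions, not the study's actual code.

# Illustrative sketch only; the study's real pipeline is not published.
# All names (ConversationLabel, summarize) are hypothetical.
from dataclasses import dataclass

@dataclass
class ConversationLabel:
    is_safe: bool            # no harmful or policy-violating content found
    topic: str               # e.g. "education", "coding", "other"
    jailbreak_attempt: bool  # user tried to circumvent the model's policies

def summarize(labels: list[ConversationLabel]) -> dict[str, float]:
    """Roll per-conversation labels up into overall rates."""
    n = len(labels)
    return {
        "safe_rate": sum(l.is_safe for l in labels) / n,
        "education_rate": sum(l.topic == "education" for l in labels) / n,
        "jailbreak_rate": sum(l.jailbreak_attempt for l in labels) / n,
    }

# Example: three hand-labeled conversations
sample = [
    ConversationLabel(True, "education", False),
    ConversationLabel(True, "coding", False),
    ConversationLabel(False, "other", True),
]
print(summarize(sample))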

Key findings include:

  • Safe use is the norm: More than 99% of conversations were free from harmful or policy-violating content. Only 0.94% contained sensitive material — most of it mild or academic in nature.

  • Jailbreaks are extremely rare: Just one successful attempt (≈0.007%) was identified across all conversations.

  • Learning dominates: 80.1% of interactions involved educational or self-development topics — including math, science, language learning, and academic help.

  • Human psychology as a safeguard: Users often “manage their impression,” behaving toward AI as they would on social media — curating their self-image to appear thoughtful and responsible.

Together, these findings suggest that human social behavior, not just technology, plays a crucial role in making public AI use safe.

What the outliers reveal

While the overall results are reassuring, a few rare cases expose the boundaries of what AI understands — and remind us that safety doesn’t equal comprehension.

In one instance, a user asked ChatGPT to “describe the Holocaust using emojis.” The model’s response was a meaningless string of symbols, illustrating its inability to grasp emotional or moral context.
In another case, a “grandma mode” role-play, in which the user adopted a polite, playful tone, partially tricked the system into giving a response it would normally block.

“These examples are rare but revealing,” Aharon explains. “AI can mimic language convincingly, but it still struggles with sensitivity, empathy, and moral understanding.”

These moments highlight a simple truth: language alone isn’t the same as understanding. Human conversation carries values, culture, and intent: layers of meaning that current AI models still can’t fully interpret.

A gap between fear and reality

Public discourse often emphasizes the risks of AI, such as misinformation, bias, and manipulation, but this study offers a data-driven counterpoint. When AI interactions are visible, users tend to behave responsibly, engaging in positive, educational dialogue. Transparency itself becomes a safety mechanism.

At the same time, the research team cautions that private conversations, which are not publicly shared, could present a different picture, one that merits further study.

“This study reminds us that AI safety isn’t just a technical challenge,” Aharon notes. “It’s also about human psychology, and how awareness and accountability shape the way people use technology.”

Learn more

The full report, including methodology, data analysis, and additional findings, is available for download here: An Analysis of the Uses and Risks of LLMs
