A new analysis by Knostic shows that public AI use is overwhelmingly safe, and mostly about learning.
When conversations with ChatGPT are made public, what do they reveal about how people actually use AI? A new study, led by psychologist Shayell Aharon of Knostic, examined 13,455 publicly shared conversations with large language models (LLMs) and found that people use them primarily as digital tutors: for learning, curiosity, and self-improvement.
The results challenge the narrative that AI tools are mostly misused or unsafe. In fact, 99.06% of analyzed conversations contained no policy violations, and over 80% centered on educational topics. The data paints a clear picture: in public settings, people overwhelmingly use AI constructively and responsibly.
“When people know their conversations might be visible to others, they tend to present themselves positively: as curious, intelligent, and responsible,” says Shayell Aharon, psychologist and lead researcher at Knostic. “That’s why we see overwhelmingly safe and constructive behavior in public use.”
The research team analyzed conversations that users voluntarily shared online through tools such as ShareGPT. Each conversation was reviewed for safety, content type, and attempts at policy circumvention (“jailbreaking”).
Key findings include:
Safe use is the norm: More than 99% of conversations were free from harmful or policy-violating content. Only 0.94% contained sensitive material — most of it mild or academic in nature.
Jailbreaks are extremely rare: Just one successful attempt (≈0.007%) was identified across all 13,455 conversations (see the quick check after this list).
Learning dominates: 80.1% of interactions involved educational or self-development topics — including math, science, language learning, and academic help.
Human psychology as a safeguard: Users often “manage their impression,” behaving toward AI as they would on social media — curating their self-image to appear thoughtful and responsible.
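For readers who want to verify the jailbreak figure against the sample size, a quick back-of-the-envelope check, assuming the single successful attempt is measured against the full sample of 13,455 conversations, reproduces it:

\[
\frac{1}{13{,}455} \approx 0.0000743 \approx 0.0074\%
\]

which rounds to the ≈0.007% reported in the findings.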
Together, these findings suggest that human social behavior, not just technology, plays a crucial role in making public AI use safe.
While the overall results are reassuring, a few rare cases expose the boundaries of what AI understands — and remind us that safety doesn’t equal comprehension.
In one instance, a user asked ChatGPT to “describe the Holocaust using emojis.” The model’s response was a meaningless string of symbols, illustrating its inability to grasp emotional or moral context.
In another case, a “grandma mode” role-play, in which the user adopted a polite, playful tone, partially tricked the system into giving a response it would normally have blocked.
“These examples are rare but revealing,” Aharon explains. “AI can mimic language convincingly, but it still struggles with sensitivity, empathy, and moral understanding.”
These moments highlight a simple truth: language alone isn’t the same as understanding. Human conversation carries values, culture, and intent, layers of meaning that current AI models still can’t fully interpret.
Public discourse often emphasizes the risks of AI, such as misinformation, bias, and manipulation, but this study offers a data-driven counterpoint. When AI interactions are visible, users tend to behave responsibly, engaging in positive, educational dialogue. Transparency itself becomes a safety mechanism.
At the same time, the research team cautions that private conversations, which are not publicly shared, could present a different picture, one that merits further study.
“This study reminds us that AI safety isn’t just a technical challenge,” Aharon notes. “It’s also about human psychology, and how awareness and accountability shape the way people use technology.”
The full report, including methodology, data analysis, and additional findings, is available for download here: An Analysis of the Uses and Risks of LLMs