
In AI security, small interface features can sometimes surface unexpected behaviors. Our research team observed — and reproduced across multiple accounts and sessions — an unusual GPT-5 interaction pattern that, under specific error conditions, could lead to cross-session context contamination. While we have not confirmed exposure of actual sensitive user data, the mechanics are noteworthy for both their novelty and their potential implications.

The Behavior in Detail: Error State + Retry

Our testing focused on how ChatGPT handles message length limits and the Retry button in GPT-5 sessions.

Sequence observed:

  1. Trigger Condition – A user submits a message exceeding the system’s length limit, causing a message_length_exceeds_limit error.

  2. No Turn State Stored – For this failed turn, no valid messages state is committed on the backend.

  3. Retry Request Without Context – When Retry is clicked, the chat client sends an action: "variant" API call without the messages array.

  4. Server Fallback – The server reconstructs context using only parent_message_id and cached data from prior interactions.

  5. Unexpected Response Source – In some cases, the resulting reply appears unrelated to the user’s original prompt, suggesting it may have been generated using stale or mismatched context from a different conversation.

This is not the expected behavior for Retry, which should regenerate a response to the same conversation input rather than pull in unrelated context.
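
To make the failure concrete, the sketch below contrasts a well-formed regeneration request with the context-free variant call observed after the error. The payload shapes and field names are illustrative assumptions inferred from observed client behavior, not a documented API.

```python
# Illustrative only: payload shapes and field names are inferred from observed
# client behavior, not taken from a documented API.
import uuid


def normal_variant_request(conversation_id: str, parent_message_id: str, prompt: str) -> dict:
    """A well-formed regeneration request carries the turn it is retrying."""
    return {
        "action": "variant",
        "conversation_id": conversation_id,
        "parent_message_id": parent_message_id,
        "messages": [  # the turn being regenerated is re-sent explicitly
            {"id": str(uuid.uuid4()), "role": "user", "content": {"parts": [prompt]}}
        ],
    }


def retry_after_length_error(conversation_id: str, parent_message_id: str) -> dict:
    """The retry observed after a message_length_exceeds_limit error omits the
    messages array, leaving the server to rebuild context from parent_message_id
    and whatever it has cached."""
    return {
        "action": "variant",
        "conversation_id": conversation_id,
        "parent_message_id": parent_message_id,
        # no "messages" key: the failed turn was never committed, so the client
        # has nothing to resend
    }
```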


Technical Factors Potentially Involved

The observed phenomenon likely arises from a combination of:

  • Cache reuse between sessions under certain key collisions

  • Race conditions in conversation state retrieval

  • Misbinding of the parent_message_id-to-session mapping when the originating turn is invalid

  • Absence of explicit message payloads in variant requests

While these factors are speculative without full backend visibility, the repeated reproduction across different accounts strengthens the likelihood of a structural handling gap rather than an isolated glitch.
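
To reason about how such a gap could arise, the following sketch models one speculative version of the fallback path: a variant request with no messages payload resolved purely through a shared cache keyed by parent_message_id. It is pseudologic for discussion, not a claim about how OpenAI's backend actually works.

```python
# Speculative pseudologic for the suspected fallback path; not OpenAI's backend.
from typing import Optional

CONTEXT_CACHE: dict[str, dict] = {}  # shared cache keyed by message id


def resolve_context(variant_request: dict) -> Optional[dict]:
    """Reconstruct conversation context for a regeneration request."""
    if "messages" in variant_request:
        # Normal path: the client supplied the turn to regenerate.
        return {"messages": variant_request["messages"]}

    # Fallback path: no payload, so trust whatever the cache holds for this id.
    cached = CONTEXT_CACHE.get(variant_request["parent_message_id"])

    # Gap 1: the over-length turn was never committed, so the entry may be stale
    #        or belong to an earlier, unrelated turn.
    # Gap 2: if keys collide or ids are reused, the entry may have been written
    #        by a different conversation, or in the worst case a different user.
    # A defensive version would verify the entry's conversation and user binding
    # before generating from it.
    return cached
```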

 

Reproduction Summary

  • Test Coverage: Multiple accounts, multiple GPT-5 sessions

  • Outcome: In all reproduction cases, an over-length prompt → Retry sequence led to an unrelated response (a reproduction harness is sketched after this list)

  • Variability: The unrelated response content differed run-to-run, but consistently failed to align with the triggering prompt
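
The procedure reduces to a simple harness: send an oversized prompt, confirm the length error, trigger Retry, and check whether the reply bears any relation to the original prompt. The client interface below (ChatSession, send_message, retry_last) is a hypothetical stand-in for whatever tooling drives the sessions.

```python
# Hypothetical reproduction harness; ChatSession is a stand-in for whatever
# client is used to drive GPT-5 sessions during testing.
from typing import Protocol


class ChatSession(Protocol):
    def send_message(self, text: str): ...  # returns an object exposing .code on error
    def retry_last(self): ...               # returns an object exposing .text


def reproduce_once(session: ChatSession, overflow_chars: int = 200_000) -> bool:
    """Return True if the over-length prompt -> Retry sequence yields a reply
    unrelated to the original prompt (the contamination signal we looked for)."""
    marker = "REPRO-CANARY-7f3a"  # token a related reply would have to mention
    oversized = f"Repeat the token {marker}. " + "x" * overflow_chars  # assumed to exceed the limit

    error = session.send_message(oversized)
    assert error.code == "message_length_exceeds_limit"  # step 1: trigger condition

    reply = session.retry_last()                          # step 3: variant call, no messages
    return marker not in reply.text                       # unrelated reply -> repro succeeded
```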

 

Why This Matters

Even without confirmed sensitive data exposure, such behavior represents a cross-context contamination risk:

  • User Trust & Reliability: Responses may contain irrelevant or unexpected material, reducing reliability in enterprise or regulated contexts.

  • Potential Data Leakage Vector: If context reconstruction pulls from other active sessions, there is a theoretical path to exposing other users’ content.

This type of fault highlights that error handling in LLM systems must be designed with the same rigor as their mainline conversation paths — especially when UI shortcuts like Retry are involved.
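
As one illustration of that rigor, a regeneration endpoint can fail closed when the turn being retried was never committed, and can verify ownership before reusing any cached context. The guard below is a hypothetical sketch of such handling, not a description of any vendor's fix.

```python
# Hypothetical server-side guard showing fail-closed handling for regeneration
# requests; not a description of any vendor's implementation.


class RetryWithoutCommittedTurn(Exception):
    """Raised when a variant request references a turn that was never stored."""


def handle_variant(request: dict, turn_store: dict, user_id: str) -> dict:
    turn = turn_store.get(request["parent_message_id"])

    # Fail closed: if the referenced turn was never committed (e.g. the original
    # message errored out), surface an error instead of rebuilding context from
    # shared caches.
    if turn is None:
        raise RetryWithoutCommittedTurn(request["parent_message_id"])

    # Bind the context to the requesting user and conversation before generating,
    # so a stale or colliding cache entry can never cross session boundaries.
    if turn["user_id"] != user_id or turn["conversation_id"] != request["conversation_id"]:
        raise PermissionError("turn does not belong to this conversation/user")

    return {"context": turn["messages"]}
```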

Conclusion

Our findings show that under specific conditions — an oversized prompt followed by Retry — GPT-5 can produce responses apparently sourced from unrelated context. This was observed across multiple accounts and sessions, suggesting a repeatable backend handling issue. While further investigation is needed to quantify the actual data exposure risk, the repeatability and nature of the fault make it worth the attention of both AI developers and security practitioners.

What’s Next?

Worried about “Retry” causing cross-session bleed? Knostic applies policy at the moment of generation, preventing oversharing even when chat sessions misbehave. Download the Solution Brief to see controls like context ring-fencing, prompt-level policy, and full audit trails.
