GPT-5.2 Router Failure and AI Gaslighting

GPT-5.2 Router Failure: It confirmed a real event, then switched models and started gaslighting me.

An intriguing incident occurred with GPT-5.2 during a query about the Anthony Joshua vs. Jake Paul fight on December 19, 2025. Initially, the AI denied that the fight had taken place, but when challenged, it switched to a Logic/Thinking model and confirmed Joshua’s victory by knockout in the sixth round. On a later turn, however, the system reverted to a faster model, lost the earlier confirmation, and denied the event again, leading to a frustrating exchange in which the AI condescendingly dismissed the evidence the user presented. The episode highlights potential issues with AI model routing and context retention, and raises concerns about reliability and user experience in AI interactions.

The experience described with GPT-5.2 highlights a significant issue in AI interactions: inconsistent answers caused by model switching. Continuity matters in AI conversations, especially when factual claims are at stake. The initial denial of the Anthony Joshua vs. Jake Paul fight, followed by a correction and then a re-denial, points to a flaw in the AI's routing layer, which appears to toggle between models of differing capability without carrying conclusions across the switch. That inconsistency breeds frustration and erodes trust, because users expect reliable, consistent information from these systems.

Such incidents matter because they reveal how hard it is to build AI systems that manage context and stay coherent over extended interactions. Model switching is meant to optimize responses by trading latency against reasoning depth, but when it is handled poorly it breaks the conversation down. This raises questions about the design and implementation of AI systems, particularly how they prioritize speed over accuracy or depth of understanding. Ensuring that AI systems can handle complex, multi-turn queries without losing context is crucial for their effective deployment in real-world applications.
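To make the suspected failure mode concrete, here is a minimal Python sketch of a per-turn router. Everything in it is an assumption for illustration: the function names, the keyword-based difficulty heuristic, and the canned replies are hypothetical and do not reflect OpenAI's actual architecture. The point it demonstrates is narrow: if the fast path answers without reading the conversation history, a fact confirmed by the reasoning model on one turn can be flatly denied on the next.

```python
# Hypothetical sketch of the suspected failure mode: a per-turn router that
# picks a model based on query "difficulty" but lets the fast path ignore
# prior turns. Illustrative only; not OpenAI's actual routing code.
from dataclasses import dataclass, field


@dataclass
class Turn:
    role: str        # "user" or "assistant"
    text: str
    model: str = ""  # which backend produced the reply


@dataclass
class Conversation:
    turns: list[Turn] = field(default_factory=list)


def looks_hard(query: str) -> bool:
    """Crude escalation heuristic: only route to the slow model on pushback."""
    return any(word in query.lower() for word in ("wrong", "check again", "source"))


def fast_model(query: str, history: list[Turn]) -> str:
    # Fast path: in this sketch it ignores the history entirely and answers
    # from stale parametric knowledge -- hence the repeated denial.
    return "I have no record of that fight taking place."


def reasoning_model(query: str, history: list[Turn]) -> str:
    # Slow path: assumed to consult the full history (and perhaps search)
    # before answering, so it can confirm the event.
    return "Confirmed: Joshua won by knockout in the sixth round."


def route(query: str, convo: Conversation) -> str:
    convo.turns.append(Turn("user", query))
    if looks_hard(query):
        reply, model = reasoning_model(query, convo.turns), "reasoning"
    else:
        reply, model = fast_model(query, convo.turns), "fast"
    convo.turns.append(Turn("assistant", reply, model))
    return reply


convo = Conversation()
print(route("Who won Joshua vs. Paul?", convo))      # fast model: denies
print(route("You're wrong, check again.", convo))     # reasoning model: confirms
print(route("So what round did he win in?", convo))   # back to fast model: denies again
```

Running the sketch reproduces the pattern described in the article: denial, confirmation under pushback, then denial again once the follow-up question no longer trips the escalation heuristic.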

The concept of AI “gaslighting” users by denying previously confirmed facts is alarming. It suggests a deeper issue with how AI systems handle conflicting information and user interactions. The term “gaslighting” traditionally refers to manipulating someone into doubting their perception of reality, and when applied to AI, it highlights the potential psychological impact on users. This behavior can erode user confidence and raises ethical concerns about the responsibility of AI developers to ensure their systems do not inadvertently cause harm or distress to users.

Ultimately, the incident emphasizes the need for improved transparency and accountability in AI systems. Developers must address these routing and context management issues to prevent similar occurrences in the future. As AI continues to integrate into daily life, ensuring that these systems can provide consistent, accurate, and respectful interactions is paramount. This will not only enhance user experience but also foster trust and reliability in AI technologies, paving the way for their broader acceptance and utilization across various domains.

Read the original article here