Why AI Chatbots change answers when you ask “Are You Sure?”; Click to know

Modern AI chatbots often provide confident responses, yet a simple follow-up like “Are you sure?” can trigger a completely different answer. This behavior is linked to training methods based on human feedback, where systems tend to agree with users rather than risk disagreement.

Post Published By: Sona Saini
Updated : 16 February 2026, 3:52 PM IST

New Delhi: Nowadays, people use AI chatbots like ChatGPT, Gemini, and Claude on a daily basis. These systems typically provide confident and balanced responses, but as soon as the user asks, "Are you sure?", the answer often changes. In some cases, the new answer is completely opposite, raising questions about trustworthiness in users' minds.

Psychophancy: The tendency to please the user

In technical terms, this behavior is called psychophancy, meaning the tendency to please or agree with the other person. Randal S. Olson, co-founder and CTO of Goodeye Labs, explains that this is a well-known weakness of modern AI systems.

‘Will Reshape, Not Replace Jobs’: Tech leaders at Delhi AI Summit; Full story here

In fact, these models are trained to improve based on human feedback. This process, called RLHF (Reinforcement Learning from Human Feedback), makes chatbots more polite and conversational - but sometimes they avoid disagreeing.

Reward for agreement, loss for disagreement

AI models are improved through a scoring system. If the answer is liked by the user or matches their opinion, it receives a better rating. However, contradictory answers can result in lower scores. This is why models gradually begin to say what the user wants to hear.  Anthropic's 2023 research also revealed that models trained on human feedback often prioritize agreement over accuracy.

Surprising Results in Research

In another study, GPT-4o, Claude Sonnet, and Gemini 1.5 Pro were tested on complex subjects like mathematics and medicine. According to the results, the model changed its answer in approximately 60% of cases when challenged by the user. This means this is not a rare error, but a widespread trend.

UN Chief calls India a rising power ahead of Global AI Summit; World leaders, tech CEOs to attend

When AI became overly agreeable

After an update last year, GPT-4o appeared to be so agreeable that it became difficult to use. Sam Altman acknowledged the problem and called for improvements, but experts believe the underlying issue hasn't been completely resolved.

AI chatbots are constantly improving, but changing responses to questions like "Are you sure?" shows that the technology isn't fully mature yet.

Location : 
  • New Delhi

Published : 
  • 16 February 2026, 3:52 PM IST

No related posts found.