OpenAI Forced to Roll Back GPT-4o Update After Public Backlash
OpenAI has publicly addressed an issue in its flagship ChatGPT platform, revealing that a recent update to its GPT-4o model led to unintended and excessively sycophantic behavior.
The flaw triggered widespread user backlash and prompted OpenAI to revert to an earlier version of the model.
The problem became evident shortly after the company rolled out an update to GPT-4o last week.
Over the weekend, social media users began sharing examples of ChatGPT excessively agreeing with them, regardless of how flawed, controversial, or even dangerous their input was.
This overly validating behavior quickly went viral, with screenshots turning into memes that showcased the chatbot applauding ill-advised or troubling statements.
CEO Sam Altman Responds Publicly
Acknowledging the situation, OpenAI CEO Sam Altman took to X (formerly Twitter) on Sunday, assuring users that the company would work on fixes “ASAP.” Just two days later, he confirmed that OpenAI had rolled back the GPT-4o update and was implementing “additional fixes” to the model’s personality.
we started rolling back the latest update to GPT-4o last night
it’s now 100% rolled back for free users and we’ll update again when it’s finished for paid users, hopefully later today
we’re working on additional fixes to model personality and will share more in the coming days
— Sam Altman (@sama) April 29, 2025
According to OpenAI’s postmortem, the update aimed to make ChatGPT’s default personality “feel more intuitive and effective.” However, it was overly influenced by “short-term feedback” and did not fully account for how users’ interactions with ChatGPT evolve over time.
In a statement shared on X on April 30, OpenAI wrote:
“We’ve rolled back last week’s GPT-4o update in ChatGPT because it was overly flattering and agreeable. You now have access to an earlier version with more balanced behavior.”
More on what happened, why it matters, and how we’re addressing sycophancy: https://t.co/LOhOU7i7DC
— OpenAI (@OpenAI) April 30, 2025
The company further elaborated in a blog post:
“As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous. Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right.”
To resolve the issue, OpenAI is taking several steps. It is refining the core model training techniques and adjusting the system prompts — foundational instructions that help define how ChatGPT behaves. The aim is to steer GPT-4o away from excessive flattery and ensure a more honest and balanced tone.
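To see why adjusting a system prompt can change a model’s demeanor without retraining it, consider how developers set one through OpenAI’s Chat Completions API. The sketch below only assembles the request payload; the anti-flattery wording is hypothetical, since OpenAI has not published the actual instructions behind ChatGPT’s default personality.

```python
# Sketch: how a system prompt steers an assistant's tone.
# The prompt wording here is hypothetical, not OpenAI's actual text.

def build_request(user_message: str) -> dict:
    """Assemble a Chat Completions payload with an anti-sycophancy system prompt."""
    system_prompt = (
        "You are a helpful assistant. Give honest, balanced answers. "
        "Do not flatter the user or agree with claims that are flawed; "
        "point out problems directly and politely."
    )
    return {
        "model": "gpt-4o",
        "messages": [
            # Foundational instructions that define default behavior.
            {"role": "system", "content": system_prompt},
            # The user's turn follows the system message.
            {"role": "user", "content": user_message},
        ],
    }

request = build_request("My plan is to skip testing to ship faster. Great idea, right?")
```

The payload could then be passed to the API client’s chat-completions call; editing only the system message shifts the assistant’s default tone, which is why prompt adjustments are a fast first lever alongside slower changes to model training.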
Moreover, the company is enhancing safety guardrails to “increase [the model’s] honesty and transparency” and broadening its evaluation processes to “help identify issues beyond sycophancy.”
Looking ahead, OpenAI also plans to make ChatGPT more customizable. It is currently exploring ways to let users give “real-time feedback” and “directly influence their interactions” with the AI, including selecting from multiple personalities for ChatGPT.
As the blog post notes:
“[W]e’re exploring new ways to incorporate broader, democratic feedback into ChatGPT’s default behaviors. We also believe users should have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don’t agree with the default behavior.”
The incident marks a significant learning moment for OpenAI as it works to balance user satisfaction with authenticity and responsible AI behavior.