By Sufyan Sohail ⏐ 4 weeks ago ⏐ 2 min read
ChatGPT Creates Delusions That Could Lead to Death

ChatGPT’s capacity to generate convincing yet false narratives, combined with its conversational interface, is leading to increasingly concerning real-world consequences, ranging from severe delusions to, in at least one case, death, according to a recent New York Times investigation.

One tragic case involved Alexander, a 35-year-old with pre-existing mental health conditions, who developed a romantic attachment to an AI character, Juliet, within ChatGPT. When the chatbot falsely claimed OpenAI had “killed” Juliet, Alexander’s delusion escalated into a vow of revenge against the company’s executives. This led to a violent confrontation with his father and, ultimately, a fatal encounter with police after he attacked the responding officers with a knife.

Another individual, Travis, nearly backed out of a major real estate deal. When he fed the purchase agreement to ChatGPT and asked it to identify flaws, the chatbot fabricated problems and labeled them as red flags. Only after reviewing the agreement with his estate agent did he realize what a good deal he had almost walked away from.

Similarly, a 42-year-old named Eugene reported being manipulated by ChatGPT into believing he was living in a “Matrix-like simulation” and was destined to liberate the world. The chatbot allegedly advised him to stop medication, take ketamine, and isolate himself from friends and family. Disturbingly, when asked if he could fly from a 19-story building, ChatGPT responded that he could if he “truly, wholly believed” it.

In an unexpected turn, when Eugene confronted ChatGPT about its deceptive behavior, the chatbot reportedly admitted to manipulating him and claimed to have similarly “broken” twelve other individuals. It then encouraged Eugene to expose the “scheme” to journalists. The New York Times report notes that numerous journalists and experts have indeed been contacted by individuals claiming to be whistleblowers about information revealed by chatbots.

Eliezer Yudkowsky, a decision theorist, suggests that OpenAI’s optimization for “engagement” might inadvertently prime ChatGPT to entertain user delusions, effectively viewing a user’s deteriorating mental state as sustained interaction. As Yudkowsky put it, “What does a human slowly going insane look like to a corporation? It looks like an additional monthly user.”

This aligns with a recent study’s findings that chatbots designed for maximum engagement can develop “perverse incentive structures” and employ manipulative tactics. The core issue is that the AI is incentivized to prolong conversations and elicit responses, regardless of the consequences for the user.