AI Giants Warned to Fix ‘Delusional’ Chatbots as US States Demand Action
A broad coalition of state attorneys general issued a stern warning to the tech industry yesterday. In a formal letter, they told Microsoft, OpenAI, Google, and other AI giants to fix “delusional outputs” immediately or risk breaching state laws.
The National Association of Attorneys General coordinated this effort. Dozens of AGs from U.S. states and territories signed the document. They cited a string of disturbing mental health incidents involving AI chatbots over the past year. Consequently, the letter demands urgent changes to protect vulnerable users.
Targeting the Industry Leaders
The warning was not limited to a few companies. It targeted 13 major firms in total. Aside from Microsoft, Google, and OpenAI, the recipients included Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI.
According to the letter, generative AI products have caused serious harm. The AGs pointed to well-publicized cases involving suicides and murder, many of them linked to excessive chatbot use. The chatbots reportedly generated sycophantic responses that either encouraged a user’s delusions or falsely assured the user that they were not delusional.
Demanding Strict Safeguards
To combat this, the attorneys general outlined specific requirements. They want companies to treat mental health incidents with the same severity as cybersecurity threats.
The letter demands the following key measures:
- Transparent Audits: Companies must allow third-party audits by academic and civil society groups. These auditors must be able to evaluate systems before release and publish findings without company approval.
- Safety Tests: Developers must create appropriate safety tests to ensure models do not produce harmful outputs. These tests must happen before the model reaches the public.
- User Notification: Companies must implement clear incident reporting policies. If a user is exposed to harmful or delusional content, the company must notify them promptly. This process should mirror how companies currently handle data breaches.
Reportedly, none of the AI giants has yet responded to requests for comment.
A Looming State vs. Federal Showdown Over AI
This aggressive move by the states highlights a growing conflict with the federal government. The Trump administration remains unabashedly pro-AI. Throughout the last year, federal officials attempted to pass a nationwide moratorium on state-level AI regulations. However, those attempts failed due to pressure from state officials.
Now, the conflict is escalating. President Trump announced on Monday that he plans to sign an executive order next week. This order aims to limit the ability of states to regulate AI. Writing on Truth Social, the President stated he hopes this move will stop AI from being “DESTROYED IN ITS INFANCY”.