No, ChatGPT hasn’t stopped giving legal or health information. OpenAI has denied the claims, saying its chatbot’s behaviour remains unchanged despite viral posts on social media.
Karan Singhal, OpenAI’s Head of Health AI, addressed the rumours on X, calling the claims “not true”. His post responded to a now-deleted post from prediction market platform Kalshi, which read, “JUST IN: ChatGPT will no longer provide health or legal advice”.
The confusion started after OpenAI’s October 29 policy update. The new document includes a list of prohibited uses, one being:
“Provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
However, this isn’t new. The older policy already barred providing tailored legal, medical/health, or financial advice without review by a qualified professional. The update simply consolidates OpenAI’s three previous policies (ChatGPT, API, and Universal) into a single list.
User opinions are mixed. Some report that even the paid version of ChatGPT occasionally gives incorrect answers, going as far as making up movie cast names. Others say they have never encountered hallucinations and that ChatGPT consistently advises consulting real experts.
Some commentators have pointed to GPT-5’s reported hallucination rate of just 1.4% and dismissed the viral claims as “lies”, while others argue that many users may simply not notice the errors.
OpenAI’s policy has not changed. ChatGPT still provides general information on legal and health topics, just not personalised advice that requires a licensed professional.