
ChatGPT Health vs. The Doctors: Why AI Belongs in the Back Office


OpenAI is officially diving into medicine. The company announced the upcoming rollout of ChatGPT Health, a dedicated chatbot designed to discuss your symptoms in a private, encrypted setting. Sounds revolutionary, right? It syncs with Apple Health. It doesn’t train on your data. But behind the glossy launch, a dangerous war is brewing between high-risk consumer AI and the practical tools doctors actually need.

While OpenAI courts the public, competitors like Anthropic are targeting the professionals. And looking at the facts, the smart money isn’t on the chatbot that guesses your disease; it’s on the AI that cuts the paperwork.

ChatGPT Health & The Hallucination Hazard

OpenAI isn’t creating a new behaviour… they are capitalising on an existing one. Over 230 million people already consult ChatGPT about their health weekly. However, formalising this relationship invites massive liability.

Dr Sina Bari, a surgeon and AI leader at iMerit, recently treated a patient terrified by a ChatGPT diagnosis. The bot claimed the patient’s medication carried a 45% chance of pulmonary embolism. It was flat-out wrong.

Dr Bari discovered the AI had hallucinated context from a niche study on tuberculosis patients, data that was irrelevant to his patient. Yet the user trusted the printout over common sense. This isn’t an isolated glitch. According to Vectara’s Factual Consistency Evaluation Model, OpenAI’s GPT-5 is currently more prone to hallucinations than rival models from Google or Anthropic.

Furthermore, Itai Schwartz, co-founder of MIND, warns of a regulatory nightmare. Medical data is now flowing from HIPAA-compliant health organisations to non-compliant vendors. For the security-minded, this raises immediate red flags.

The Real Cure: Fixing the System, Not Replacing It

While OpenAI plays doctor, others are fixing the broken hospital infrastructure. The reality is grim. Administrative tasks consume roughly 50% of a primary care physician’s time. Consequently, patients face wait times of three to six months just to see a human. This is where the real value lies.

Anthropic recently announced “Claude for Healthcare”, ignoring the consumer frenzy to focus on the provider side. Their tool targets tedious tasks like prior authorisation requests. Anthropic CPO Mike Krieger claims this cuts approximately 20 to 30 minutes out of every case, a dramatic saving that lets doctors see more patients.

Similarly, Stanford Health Care is developing ChatEHR. This software integrates into electronic health records, allowing clinicians to query patient data instantly. Dr Sneha Jain, an early tester, notes that this stops doctors from “scouring every nook and cranny” of a database, freeing them to actually treat people.

The TechJuice Verdict

There is an inescapable tension here. Dr Bari notes that a doctor’s incentive is patient protection, while tech companies answer to shareholders.

Dr Nigam Shah of Stanford correctly argues that desperation drives patients to AI… they would rather talk to a robot than wait six months for a human. But providing a hallucinating chatbot is a band-aid, not a cure. The future of health tech isn’t about giving patients faulty advice. It is about automating the back office so real doctors can get back to work.

Muhammad Haaris

Bioscientist x Tech Analyst. Dissecting the intersection of technology, science, gaming, and startups with professional rigor and a Gen-Z lens. Powered by chai, deep-tech obsessions, and high-functioning anxiety. Android > iOS (don't @ me).