Sina Bari, a practicing surgeon and head of AI initiatives at iMerit, shared a recent example involving one of his patients.
“He came to me after I prescribed a medication and showed me a printed ChatGPT conversation,” Bari said. “It claimed the drug carried a 45% risk of pulmonary embolism.”
After reviewing the sources, Bari found that the statistic came from a study focused on a narrow subgroup of tuberculosis patients and was not applicable to his patient’s clinical situation.
Even so, Bari views ChatGPT Health as a net positive. He argues that the service gives users a more private and accessible way to discuss health concerns.
“I think it’s great. This is already happening, so formalizing the process—while protecting patient data and adding safeguards—can make it more effective for patients,” he said.
ChatGPT Health allows users to receive more personalized guidance by uploading medical records and syncing data from apps such as Apple Health and MyFitnessPal. That level of access, however, has raised red flags within the medical and privacy communities.
“Suddenly, medical data is moving from HIPAA-compliant institutions to providers that are not,” said Itay Schwartz, co-founder of MIND. “It will be interesting to see how regulators respond.”
More than 230 million people each week already use ChatGPT to ask health-related questions. For many, the chatbot has replaced traditional symptom searches on Google.
“This is one of the largest use cases for ChatGPT,” Schwartz noted. “So it makes sense that OpenAI would want to build a more private, secure, and optimized version specifically for healthcare.”
The hallucination problem
Hallucinations remain the central concern for medical AI systems. A recent study by Vectara found that OpenAI’s GPT-5 hallucinates more frequently than comparable models from Google and Anthropic—an especially sensitive issue in healthcare.
Still, Nigam Shah, a professor of medicine at Stanford, believes the concern is often overstated. In his view, the real crisis is access to care, not imperfect AI answers.
“If you try to see a primary care doctor today, you may wait three to six months,” Shah said. “If your choice is waiting half a year or immediately talking to something that can help you—what would you choose?”
Administrative work can consume up to half of a physician’s time, sharply limiting how many patients they can see. Shah argues that automating these workflows could meaningfully improve care delivery.
He currently leads a Stanford team developing ChatEHR, software designed to help clinicians work more efficiently with electronic health records.
“By making medical records easier to navigate, doctors spend less time searching and more time treating patients,” said early tester Dr. Sneha Jain.