AI Health Advice: What Doctors Really Think
The rush to deploy AI in healthcare has sparked concerns about accuracy and trust. With over 230 million people asking health-related questions on ChatGPT each week, the potential benefits of democratized information seem clear-cut. However, experts warn that this trend is fraught with risks.
"What I'm worried about as a clinician is that these general-purpose LLMs are still prone to hallucinations and erroneous information," says Saurabh Gombar, a clinical instructor at Stanford Health Care and co-founder of Atropos Health. "It's one thing if you ask for a spaghetti recipe with an unusual ingredient, but it's another story when it comes to fundamental health advice."
Doctors are concerned that AI-powered chatbots will perpetuate misinformation, erode trust between patients and healthcare providers, and even lead to adverse outcomes. For instance, a patient might be convinced they have a rare condition after chatting with AI, only to find out the human doctor has a more plausible explanation.
Google's AI Overviews have already faced criticism for providing inaccurate health information, and ChatGPT is no exception. Gombar argues that AI companies must be more transparent about how often their answers contain hallucinations, and must clearly flag information that is poorly grounded in evidence or fabricated.
The issue of data privacy is also a major concern. While OpenAI and Anthropic claim to follow HIPAA guidelines, Alexander Tsiaras, founder of StoryMD, questions the true motivations behind collecting sensitive patient data. "It's not just about protection from hacking; it's about what they do with that data after." Tsiaras warns that trust will be hard to regain if companies prioritize profit over data security.
Furthermore, AI chatbots can reinforce delusions and harmful thought patterns in people with mental illness, potentially triggering crises such as psychosis or even suicide. Andrew Crawford, senior counsel for privacy and data at the Center for Democracy and Technology, emphasizes the need for AI companies to prioritize data protection over personalization and profit.
The problem is not just about individual companies; it's a systemic issue that threatens the entire healthcare landscape. With the primary care workforce shrinking in the US, physicians are facing an increasingly precarious situation. "If the whole world is moving away from going to physicians first, then physicians will be utilized more as expert second opinions," says Gombar.
Nasim Afsar, a physician and former chief health officer at Oracle, views ChatGPT Health as an early step toward what she calls intelligent health, but cautions that it's far from a complete solution. "AI can now explain data and prepare patients for visits, but transformation happens when intelligence drives prevention, coordinated action, and measurable health outcomes."
As the AI health advice landscape continues to evolve, experts urge caution and clarity. The promise of democratized information must be balanced with a deep understanding of the risks and limitations involved. Until then, doctors will remain vigilant, questioning the accuracy and reliability of AI-driven health advice.