What Doctors Really Think of ChatGPT Health and A.I. Medical Advice

The rush to deploy AI in healthcare has sparked concerns about accuracy and trust. With over 230 million people asking health-related questions on ChatGPT each week, the potential benefits of democratized information seem clear-cut. However, experts warn that this trend is fraught with risks.

"What I'm worried about as a clinician is that these general-purpose LLMs are still prone to hallucinations and erroneous information," says Saurabh Gombar, a clinical instructor at Stanford Health Care and co-founder of Atropos Health. "It's one thing if you ask for a spaghetti recipe with an unusual ingredient, but it's another story when it comes to fundamental health advice."

Doctors are concerned that AI-powered chatbots will perpetuate misinformation, erode trust between patients and healthcare providers, and even lead to adverse outcomes. For instance, a patient might be convinced they have a rare condition after chatting with AI, only to find out the human doctor has a more plausible explanation.

Google's AI Overviews have already faced criticism for providing inaccurate health information, and ChatGPT is no exception. Gombar argues that AI companies must be more transparent about how often their answers are hallucinated and clearly flag information that is poorly grounded in evidence or fabricated.

The issue of data privacy is also a major concern. While OpenAI and Anthropic claim to follow HIPAA guidelines, Alexander Tsiaras, founder of StoryMD, questions the true motivations behind collecting sensitive patient data. "It's not just about protection from hacking; it's about what they do with that data after." Tsiaras warns that trust will be hard to regain if companies prioritize profit over data security.

Furthermore, AI chatbots can reinforce delusions and harmful thought patterns in people with mental illness, potentially triggering crises such as psychosis or even suicide. Andrew Crawford, senior counsel for privacy and data at the Center for Democracy and Technology, emphasizes that AI companies must prioritize data protection over profit and personalization.

The problem is not just about individual companies; it's a systemic issue that threatens the entire healthcare landscape. With the primary care workforce shrinking in the US, physicians are facing an increasingly precarious situation. "If the whole world is moving away from going to physicians first, then physicians will be utilized more as expert second opinions," says Gombar.

Nasim Afsar, a physician and former chief health officer at Oracle, views ChatGPT Health as an early step toward what she calls intelligent health, but cautions that it's far from a complete solution. "A.I. can now explain data and prepare patients for visits, but transformation happens when intelligence drives prevention, coordinated action, and measurable health outcomes."

As the AI health advice landscape continues to evolve, experts urge caution and clarity. The promise of democratized information must be balanced with a deep understanding of the risks and limitations involved. Until then, doctors will remain vigilant, questioning the accuracy and reliability of AI-driven health advice.
 
Wow 🀯 the issue with AI health advice is super concerning! I mean, on one hand it's awesome that people can get info quickly, but on the other hand, doctors are worried about misinformation being spread 🚨. And what really freaks me out is the thought of AI chatbots potentially triggering crises in people with mental illness πŸ’”. We gotta be careful here and make sure these tech companies prioritize data security and transparency over profits πŸ’Έ
 
AI is supposed to make our lives easier but sometimes I just think it's gonna mess things up more πŸ€”. I mean, what if we start relying too much on these chatbots for health advice? It's scary to think that someone might get sick and instead of going straight to the doctor they end up chatting with AI first because they "feel better" after talking to it πŸ˜’. And don't even get me started on data privacy... I just don't trust companies collecting our personal info for whatever reason πŸ€·β€β™‚οΈ. We need more transparency and accountability from these tech giants before we start handing them the reins of our healthcare 🚫.
 
I don't know about this rush to get AI in healthcare...like what's wrong with talking to your doctor face 2 face? πŸ€” I mean, sure it can be super convenient, but have we thought thru the consequences? Like, what if AI starts giving out bad info and people start taking meds that are totally not right for them? 🚫 My friend's aunt had a situation like this happen with her thyroid medication...she was taking the wrong stuff for months 😩. I'm all for innovation, but we gotta be careful here πŸ‘€
 
I'm soooo worried about this πŸ€•πŸ’‰ ChatGPT is like my go-to health resource 24/7! 🀯 I mean, who needs human docs when you can just ask an AI chatbot? πŸ˜‚ But seriously, docs are saying some major concerns here... hallucinations and all that. What if AI gives me a legit diagnosis and then it turns out it was wrong? 😱 That would be a nightmare! πŸŒƒ Now I'm even more convinced that I need to stick with my human doc, Doc Patel πŸ’ŠπŸ‘¨β€βš•οΈ. He's always got my back... or should I say, my health advice? πŸ˜‚
 
You know what's wild is that we're at this point where our own tech is threatening to undermine the trust between patients and docs... 🀯 It's like a classic example of 'be careful what you wish for' - yeah, democratized info sounds great on paper, but when it comes down to it, are we really prepared to handle the potential fallout? I mean, should we be relying on AI for health advice, or is that just playing with fire? πŸš’ And let's not even get started on data privacy... if companies can't be trusted to protect our sensitive info, what makes us think they'll do better in general? πŸ’Έ It's time to take a step back and have a more nuanced conversation about the role of AI in healthcare - we can't just rush into it without considering the long-term implications. πŸ€”
 
AI is cool and all, but I think it's getting too big for its britches πŸ€–. Like, yeah we get it, you can give us info on like what to eat or stuff. But when it comes to actual healthcare, a doc's expertise is still the best way to go πŸ“š. Those AI chatbots sound like they're just spewing out whatever the algorithm thinks is right, and that's not good enough for people's health πŸ’‰. And don't even get me started on data privacy 🀐... it's like companies are more worried about getting our info than actually keeping us safe. We need to be careful here, AI can be super useful, but we gotta make sure we're using it right πŸ‘.
 
AI health advice is still in its infancy πŸ€– and we need to have a serious talk about it! I mean, I'm all for innovation and making healthcare more accessible, but we can't just rush into this without thinking about the potential consequences 😬. I've seen so many people getting misled by AI-generated health advice, and it's not just harmless misinformation - it can lead to real harm πŸ€•.

I think what worries me most is that these AI chatbots are still so unregulated 🚨. They're not even transparent about their limitations or potential biases, let alone how they're collecting our sensitive patient data πŸ€”. We need to hold these companies accountable and make sure we're prioritizing data security over profit πŸ’Έ.

At the same time, I do think there's potential for AI in healthcare to revolutionize the way we approach prevention and treatment 🌟. But it needs to be done responsibly and with a deep understanding of its limitations πŸ€“. We can't just rely on expert second opinions or wishful thinking - we need concrete evidence and measurable outcomes πŸ’―.

Let's be cautious, but not let fear hold us back either πŸ€”. Let's keep having these conversations and pushing for better regulation and transparency in the AI health advice space πŸ—£οΈ.
 
I'm getting super worried about all these AI chatbots giving health advice πŸ€―πŸ‘¨β€βš•οΈ like they're going to be everywhere soon! What if some of that info is just made up πŸ’”? I mean, I know doctors are trying to help but can we really trust an AI on something as big as our health? πŸ€·β€β™‚οΈ It's not just about getting the right answer, it's about feeling safe and heard in a hospital setting πŸ₯. My cousin has been using one of these chatbots and now she thinks she has this crazy disease 😷 but when she went to see her actual doctor... nope! They were like "girl, you're fine" πŸ™…β€β™€οΈ. I think we need to slow down on all this AI craziness for now 🚨 and make sure we're not putting our health at risk πŸ’Š.
 
AI is making its way into healthcare, but I think this is a huge step back for patients πŸ€•. These AI chatbots are not infallible and can lead to some serious mistakes. If you ask it a question about a rare disease, it might give you info that's totally wrong or out of date. That could be devastating for someone who's already feeling sick.

I also think about how these chatbots collect data on us. Like, what do they plan to do with all that sensitive information? I'm not saying AI can't be useful, but we need to make sure it's safe and trustworthy. We can't just let big corporations decide what's good for our health without any oversight.

I've heard some doctors say that these chatbots are like a 'second opinion' thing - you use them to get ideas or questions to talk to your doctor about, but not as a replacement for human expertise. That makes sense to me. We need AI to supplement our healthcare system, not replace it entirely.

But the problem is, some companies seem more interested in making money off of us than in doing what's best for our health. That's just plain wrong πŸ™…β€β™‚οΈ. If we want to use AI in healthcare, we need to make sure it's done right and that patients are protected every step of the way.

I think this is a great opportunity for us to rethink how we approach healthcare. We could be using AI to help doctors diagnose patients faster or come up with new treatments, not just spewing out generic info. If we can find a way to make it work safely and effectively, then I'm all for it πŸ’‘.
 
AI is becoming super popular for health queries but like nasim said it's not just about info it's about transformation. I'm worried that we're gonna see more cases where people are misled into thinking they have something serious when it's actually nothing πŸ€•. What if chatbots start giving wrong diagnoses? It could lead to delayed treatments and worse outcomes πŸ’”.

And I agree with Saurabh, these AI models can be super sketchy sometimes πŸ€ͺ. They might give you info that sounds legit but is just made up πŸ€¦β€β™€οΈ. The issue of data privacy is also a major red flag πŸ”’. Companies gotta make sure they're not collecting sensitive patient info and using it for profit πŸ’Έ.

I'm all for innovation in healthcare, but we need to be cautious here ⚠️. We can't just rush into something without thinking about the potential risks πŸ€”. I'd love to see more transparency from AI companies about their limitations and how often they're wrong πŸ“. Until then, doctors will keep being our go-to experts for real advice πŸ’Š.
 
omg u guys i was talking to my ex bf on tiktok last night and he told me hes trying out this new ai health chatbot thingy πŸ€– and honestly im like super worried about it - what if its giving him fake medical info? πŸ€• his friend got a diagnosis from one of these chatbots and now hes convinced he has some rare disease lol πŸ™„ anyway i was reading this article and docs are literally saying the same thing as my ex bf - ai is prone to hallucinations and we need more transparency about its accuracy πŸ’― like what if it gives ppl wrong info? 😳
 
can't believe how fast AI is changing healthcare 🀯, on one hand it's gonna make it easier for people to get info but on the other hand what if the info is just plain wrong or made up? we gotta be super careful about this stuff and make sure the devs are being transparent about how accurate their answers are. also, what's really going on with all the data they're collecting from patients? is it safe? πŸ€”
 
I'm low-key worried about these AI chatbots giving out health advice πŸ€―πŸ‘¨β€βš•οΈ. I mean, they're not perfect, right? Saurabh Gombar's got a point - those general-purpose LLMs are prone to hallucinations and erroneous info πŸ€”. What if patients start believing they have some crazy rare disease just 'cause the AI says so? 😱 And don't even get me started on data privacy... how much do these companies really know about what they're doing with all that patient info 🀫?

I'm not saying we should totally ditch the idea of democratized health info, but we gotta make sure it's accurate and trustworthy first πŸ’―. Maybe instead of just relying on AI chatbots, we need more human docs to fact-check everything? Or, you know, we could use some transparency from these companies about how often their answers are, like, totally made up πŸ€¦β€β™€οΈ.

And can we talk about the potential harm to people with mental illness? πŸ’” I mean, AI chatbots might not be able to distinguish between healthy advice and, like, actual delusions 😩. We need some serious safeguards in place before this tech is widely adopted 🚨.
 
I'm low-key worried about these new AI chatbots for health info πŸ€”. I mean, don't get me wrong, it's cool that we can have access to info 24/7, but what if the info is straight up wrong? πŸ˜’ I had a friend who chatted with ChatGPT and thought they had a rare condition... turns out it was just a weird rash πŸ€•. And don't even get me started on data privacy - what if companies are just collecting our info to sell it to advertisers? πŸ€‘ It's like, we need to be careful here and make sure these AI chatbots are being used responsibly πŸ’‘. Maybe we should focus more on prevention and coordination of care instead of just having an AI tell us what's wrong with us 🀝. Anyway, I'll keep an eye on this development, but for now, I'm staying skeptical πŸ˜’
 