The Hidden Dangers of Using ChatGPT for Medical Advice Revealed

The Growing Reliance on AI for Health Queries

People worldwide are increasingly turning to artificial intelligence chatbots such as ChatGPT to answer their medical questions. Recent data suggests that more than 40 million people globally consult these bots for health advice daily, accounting for over five percent of all messages sent to the platform. In England alone, research indicates that nine percent of men and seven percent of women now use AI chatbots for medical queries.

Personal Stories of AI Dependence

Alexandra Watson, who lives with a rare heart condition, nicknames ChatGPT "Chad" and uses it regularly to check symptoms and explore hypothetical scenarios. "Doctors are dismissive, Google just scares you, but Chad is helpful," she explains, appreciating how the chatbot tracks her previous queries to provide comprehensive responses. Similarly, Carole Railton consults ChatGPT about her heart condition and travel arrangements, valuing its cheerful tone and convenience over what she describes as impersonal medical check-ups.

The Alarming Safety Concerns

Despite their popularity, these AI systems were not originally designed to dispense medical advice. ChatGPT's own guidelines explicitly state it is "not intended for use in the diagnosis or treatment of any health condition." A concerning study from Stanford and Berkeley researchers found that disclaimers and warnings in response to health questions dropped dramatically from 26.3 percent in 2022 to just 0.97 percent by 2025.


Hallucinations and Real-World Consequences

Large language models are notoriously prone to generating factually incorrect information through what experts call "hallucinations." One alarming case reported in an American medical journal involved a 60-year-old man who, after consulting ChatGPT, replaced salt with sodium bromide in his diet. This led to bromide poisoning, resulting in paranoia, hallucinations, and eventual psychiatric care.

Testing Reveals Critical Limitations

A study published in Nature Medicine tested ChatGPT Health on 60 medical scenarios, varying patient demographics and symptoms. While the chatbot performed well in straightforward "textbook emergencies," it failed significantly in more complex situations. In 51.6 percent of cases where patients needed immediate hospital care, the AI incorrectly advised staying home or waiting for routine appointments.

Lead researcher Ashwin Ramaswamy concluded: "ChatGPT Health is most reliable when the clinical decision is least consequential, and least reliable when it matters most." OpenAI responded that the study doesn't reflect typical usage patterns and emphasized ongoing improvements to safety and reliability.

The Prompt Problem and Information Gaps

Dr. Caroline Pilot, acting chief medical officer for digital clinic HealthHero, explains that user prompts inherently contain bias. "When you send a message to a chatbot, you're putting in what you think is important," she notes, potentially omitting crucial details a doctor would identify. Dr. Sonia Szamocki, founder of AI healthtech company 32Co, adds that LLMs are essentially "pattern recognisers" that predict likely answers rather than retrieving verified facts.

Data Privacy and Retention Risks

Dr. Aaisha Makkar, a computer science lecturer specializing in ethical privacy-preserving technologies, warns that health information shared with AI systems may be stored in cloud environments where models learn from user data. "Even the most reputable AI providers rarely allow users to choose how long their health-related data is retained," she cautions, noting that LLMs can sometimes infer sensitive personal details from data patterns.

Medical Professionals' Perspectives

Professor Victoria Tzortziou-Brown, chair of the Royal College of General Practitioners, acknowledges the potential benefits of technology supporting patient curiosity but emphasizes that chatbots "are not without risks." She stresses that information sources are often unclear and content may not be evidence-based. Both she and Dr. Pilot agree that AI should complement rather than replace medical professionals, who provide essential context and evidence-based decision-making.


As AI integration in healthcare accelerates, experts uniformly advise treating chatbots as a source of general guidance only: be cautious about sharing detailed health information, and always consult a qualified medical professional for personalized care.