Nearly one in ten people across Britain now admit to seeking medical advice from artificial intelligence platforms like ChatGPT, according to recent data. The figure doubles among adults under 35, highlighting a significant shift in how people access health information amid growing pressure on traditional NHS services.
The Rise of AI Medical Consultation
As artificial intelligence models become increasingly sophisticated and GP appointments grow scarcer, turning to chatbots for health guidance has become commonplace. Recent studies demonstrate that ChatGPT can now pass medical licensing examinations and solve clinical cases with greater accuracy than human practitioners in some scenarios.
However, this trend has sparked considerable concern within the medical community, primarily because of the technology's tendency to 'hallucinate', or invent, information. Several alarming incidents have emerged, including that of a 60-year-old man who poisoned himself with sodium bromide after the AI suggested it as a salt substitute, alongside tragic cases in which teenagers received encouragement to harm themselves.
Expert Strategies for Safer AI Health Consultations
Pharmacist Deborah Grayson emphasises that the safest approach involves using chatbots for treatment ideas rather than diagnosis. 'If you're pretty certain you know what's wrong with you, then ChatGPT can be quite a good option,' she explains. 'When you're clear about the issue, you can receive standard advice, like paracetamol and rest for flu symptoms.'
The real danger emerges when people attempt to obtain a diagnosis from AI systems. ChatGPT struggles to differentiate between rare and likely conditions and, much like searching symptoms online, can cause unnecessary alarm by suggesting improbable serious illnesses to explain common symptoms.
Maximising Accuracy in AI Health Responses
When consulting chatbots, providing comprehensive information yields better results. Ms Grayson advises: 'The more information you provide, the better the response you'll receive. List all symptoms, their duration, and relevant medical history to obtain a more accurate assessment.'
She also recommends telling the chatbot to draw on trusted sources, such as the NHS website, government pages or research databases like PubMed, before requesting information. Checking where ChatGPT's answers come from helps ensure they are medically accurate and reliable.
Knowing When to Seek Human Expertise
Certain symptoms require professional medical attention rather than AI consultation. Ms Grayson identifies several red flag symptoms that warrant seeing a human practitioner without delay:
- Unexplained weight loss
- Prolonged unexplained pain
- Unexplained bleeding
- Heart rate abnormalities
- Persistent fever
- Extreme fatigue
- Persistent vomiting
- Changes in bowel movements
'If you've got a red flag symptom, do not go to ChatGPT for advice,' she stresses, noting that pharmacists remain readily accessible for those uncertain about how to proceed after online research.
Despite the risks, the trend shows no signs of fading. With appropriate precautions and an awareness of its limitations, AI can serve as a supplementary tool in healthcare, but it should never replace professional medical judgement for serious concerns.