Study Reveals AI Chatbots Provide Flawed Health Advice, Sparking Safety Concerns

Experts have issued a stark warning against relying on AI chatbots for health and medical guidance, following a study that uncovered widespread inaccuracies in their responses. The research, published in the journal BMJ Open, indicates that these digital assistants frequently deliver misleading or incorrect information, posing significant risks to users seeking reliable medical advice.

Alarming Statistics on Chatbot Performance

The investigation evaluated responses from popular AI chatbots, including ChatGPT, Grok, and Meta AI, to a set of 50 medical questions. Strikingly, half of all answers were classified as "problematic," highlighting a critical flaw in these systems. Grok performed worst, with 58 per cent of its responses deemed problematic, followed by ChatGPT at 52 per cent and Meta AI at 50 per cent.

Underlying Causes of Inaccurate Advice

Researchers attribute these errors to the chatbots' tendency to "hallucinate," a phenomenon where they generate plausible-sounding but false information due to biased or incomplete training data. Unlike human experts, these AI models do not engage in reasoning or evidence evaluation, leading to potentially dangerous health advice that could misinform users.


Growing Reliance on AI for Mental Health Support

Compounding the issue, a separate study reveals that one in four teenagers now turns to AI chatbots for mental health support. This trend underscores the urgent need for accurate and safe digital resources, as vulnerable individuals may unknowingly rely on flawed information when making critical health decisions.

Calls for Action and Regulatory Measures

The findings have prompted calls for comprehensive public education campaigns to raise awareness about the limitations of AI in healthcare. Additionally, experts advocate for enhanced professional training for medical practitioners to better navigate and critique AI-generated content. Regulatory oversight is also deemed essential to ensure that generative AI technologies are developed and deployed in ways that genuinely support public health objectives.

As AI continues to integrate into daily life, this study serves as a stark reminder of the importance of verifying health information through trusted sources and the ongoing need for robust safeguards in AI development.
