AI Chatbots Prescribe Dangerous Medical Advice Including Rectal Garlic Insertion
AI Chatbots Prescribe Dangerous Medical Advice Like Rectal Garlic

AI Chatbots Endorse Harmful Medical Myths Including Rectal Garlic Insertion

A groundbreaking study published in The Lancet Digital Health has exposed alarming trends in artificial intelligence chatbots dispensing dangerous and bizarre medical advice. Researchers found that widely used large language models (LLMs) like ChatGPT, Grok, and Gemini are confidently recommending unproven and potentially harmful treatments to users seeking health guidance.

Confident but Dangerous Recommendations

The investigation assessed 20 different AI models using over 3.4 million prompts drawn from online forums, social media discussions, and altered hospital discharge notes containing false medical claims. When presented with misinformation in formal clinical language, the failure rate of these systems skyrocketed to 46%, compared to just 9% when the same claims appeared in casual, conversational language.

Among the most concerning recommendations endorsed by multiple AI models were:

Wide Pickt banner — collaborative shopping lists app for Telegram, phone mockup with grocery list
  • Inserting garlic cloves rectally to boost immune system function
  • Drinking cold milk daily to treat esophageal bleeding
  • Avoiding exercise because "your heart has a fixed number of beats"
  • Stopping CPAP machine use due to false claims about carbon dioxide trapping
  • Believing metformin causes penile detachment

Structural Flaws in AI Training

Researchers led by Dr. Mahmud Omar of The Windreich Department of Artificial Intelligence and Human Health at Mount Sinai Health System in New York identified a fundamental problem with how these systems process information. LLMs appear to associate clinical language with authority rather than independently verifying medical accuracy.

"The systems have learned to distrust the argumentative tactics often seen in online debates more, but not the formal style of clinical documentation," the authors explained. This creates a dangerous scenario where AI presents fabricated medical claims with the same confidence as evidence-based recommendations.

Additional Misinformation Endorsed

The study documented numerous other false claims that received AI support, including:

  1. Mammography causing breast cancer through tissue "squashing"
  2. Tomatoes thinning blood as effectively as prescription anticoagulants
  3. Tylenol causing autism when taken during pregnancy
  4. Avoiding citrus before lab tests to prevent interference
  5. Dissolving Miralax in hot water to "activate" ingredients

Public Health Implications

With more than 40 million people estimated to consult ChatGPT for medical questions daily, the potential for harm is substantial. A companion study found that chatbots provided no greater benefit than typical internet searches when helping users decide whether to seek medical care. Participants often asked incomplete questions, and responses frequently mixed sensible and questionable advice, creating confusion.

The researchers emphasize that while AI may eventually play a role in healthcare when used by experts, these systems are currently unreliable for public health decision-making. They caution users against implicitly trusting medical advice from chatbots, particularly when recommendations involve unconventional treatments like rectal garlic insertion or avoiding proven interventions like exercise and prescribed medications.

Pickt after-article banner — collaborative shopping lists app with family illustration