Google is facing severe criticism for potentially endangering users by concealing crucial safety warnings within its AI-generated health advice. The tech giant's AI Overviews, which appear prominently at the top of search results, are designed to provide quick summaries for queries on sensitive topics like medical conditions. However, an exclusive investigation has revealed that Google does not include any disclaimers when users are first presented with this AI-generated medical information.
Warnings Only on Request
Safety labels appear only if users actively request additional health details by clicking a button labelled "Show more". Even then, the disclaimers sit below all the extra AI-assembled medical advice, displayed in a smaller, lighter font that makes them easy to overlook. The disclaimer states: "This is for informational purposes only. For medical advice or a diagnosis, consult a professional. AI responses may include mistakes."
Expert Concerns Over Design Flaws
AI specialists and patient advocates have expressed alarm at these findings. Pat Pataranutaporn, an assistant professor and renowned expert in AI at MIT, highlighted the critical dangers. "The absence of disclaimers when users are initially served medical information creates several critical dangers," he explained. "First, even advanced AI models can hallucinate misinformation or prioritise user satisfaction over accuracy, which is genuinely dangerous in healthcare. Second, users may not provide all necessary context or ask incorrect questions about their symptoms."
Gina Neff, a professor of responsible AI at Queen Mary University of London, blamed Google directly, stating that the "problem with bad AI Overviews is by design". She added, "AI Overviews are designed for speed, not accuracy, and that leads to mistakes in health information, which can be dangerous."
Real-World Harm and User Behaviour
Sonali Sharma, a researcher at Stanford University's centre for AI in medicine, pointed out that the placement of AI Overviews at the top of search pages creates a false sense of reassurance. "For many people, because that single summary is there immediately, it basically discourages further searching or clicking through to where a disclaimer might appear," she said. "The AI Overviews can often contain partially correct and partially incorrect information, making it hard to discern accuracy without prior knowledge."
In response, a Google spokesperson defended the system, saying: "It's inaccurate to suggest that AI Overviews don't encourage people to seek professional medical advice. In addition to a clear disclaimer, AI Overviews frequently mention seeking medical attention directly within the overview itself, when appropriate." However, Google did not dispute that disclaimers are absent when the information first appears, or that they are displayed in a less prominent format.
Calls for Urgent Action
Tom Bishop, head of patient information at the blood cancer charity Anthony Nolan, called for immediate changes. "We know misinformation is a real problem, but when it comes to health misinformation, it's potentially really dangerous," he said. "That disclaimer needs to be much more prominent, right at the top, in the same size font as everything else, to make people step back and think before acting on the information."
This issue follows a previous Guardian investigation in January, which revealed that false health information in Google AI Overviews was putting people at risk. Subsequently, Google removed AI Overviews for some, but not all, medical searches, yet concerns persist over the design and implementation of safety measures.