For over two decades, people have turned to Google's search engine to answer pressing health questions, from distinguishing flu from Covid to understanding the causes of chest pain. Traditionally, this yielded lists of website links. Today, however, those same queries increasingly generate responses authored by artificial intelligence, a shift experts warn is putting public health in serious jeopardy.
The Rapid Global Rollout of AI-Powered Search
Sundar Pichai, Google's Chief Executive, first unveiled plans to deeply integrate artificial intelligence into the company's core search product in May 2024, at its annual conference in Mountain View, California. He announced that US users would soon encounter a new feature called AI Overviews, providing concise information summaries positioned above traditional search results. The change represented the most significant overhaul of Google's flagship service in twenty-five years.
By July 2025, the technology had achieved a staggering global footprint, serving users in more than 200 countries across 40 languages. The company reported that approximately 2 billion people were receiving AI Overviews each month. This rapid deployment is part of Google's strategic race to defend its traditional search business, which generates around $200 billion annually, against emerging AI competitors. Pichai emphasised the company's commitment to this frontier, stating in July 2025 that Google was "leading at the frontier of AI and shipping at an incredible pace," and that AI Overviews were "performing well."
Inherent Risks in Generative AI Summaries
Despite corporate confidence, experts highlight fundamental risks embedded within the AI Overviews system. The tool uses generative AI to produce instantaneous, conversational answers to user queries, citing various sources in the process. However, a critical flaw is its inability to reliably discern when a source provides incorrect or misleading information. Within weeks of its US launch, users documented factual errors across numerous subjects, including an AI Overview falsely claiming President Andrew Jackson graduated college in 2005.
Elizabeth Reid, Google's Head of Search, acknowledged these issues in a public blog post, conceding that "in a small number of cases" the AI had misinterpreted web page content. She noted that "at the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors." Yet when the subject matter shifts to human health, experts stress, accuracy and proper context are non-negotiable. The stakes are immeasurably higher.
Dangerous Medical Misinformation Uncovered
Google is now facing intensifying scrutiny over its AI Overviews for medical queries following a Guardian investigation which found people were being exposed to false and potentially harmful health information. While the company maintains that AI Overviews are "reliable," the investigation uncovered alarming inaccuracies.
In one instance described by experts as "really dangerous," Google's AI wrongly advised individuals with pancreatic cancer to avoid high-fat foods. Medical professionals stated this guidance was the exact opposite of established nutritional advice for such patients and could potentially increase mortality risk. Another "alarming" example involved the AI providing bogus information regarding standard liver function test ranges. This misinformation could lead individuals with serious liver disease to incorrectly believe their results were normal, potentially causing them to miss crucial follow-up medical appointments.
Furthermore, AI Overviews concerning women's cancer screenings delivered "completely wrong" information that experts warned could lead patients to dismiss genuine symptoms. Google's initial response sought to downplay these findings, stating that its own clinicians believed the flagged Overviews linked to reputable sources and included advice to consult experts. A spokesperson asserted, "We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information."
Nevertheless, within days, the company removed several of the specific health-related AI Overviews highlighted by the investigation. A spokesperson declined to comment on individual removals but stated, "In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate."
Persistent Concerns and a Reliance on YouTube
Despite these corrective actions, health experts and patient advocates remain deeply concerned. Vanessa Hebditch, Director of Communications and Policy at the British Liver Trust, argues that removing individual results does not address the systemic problem. "Our bigger concern with all this is that it is nit-picking a single search result and Google can just shut off the AI Overviews for that but it’s not tackling the bigger issue of AI Overviews for health," she says.
Sue Farrington, Chair of the Patient Information Forum, echoes this worry, noting, "There are still too many examples out there of Google AI Overviews giving people inaccurate health information." A new study analysing over 50,000 health-related searches in Germany has amplified these fears, revealing a startling fact: the single most cited source domain for AI Overviews was YouTube.
Researchers pointed out the danger inherent in this reliance, stating, "This matters because YouTube is not a medical publisher. It is a general-purpose video platform. Anyone can upload content there." The platform hosts content from qualified medical professionals alongside material from wellness influencers, life coaches, and creators with no formal medical training whatsoever.
The Illusion of Confident Medical Authority
Experts warn that the presentation of AI-generated health information poses a unique threat. Hannah van Kolfschooten, a researcher in AI, health and law at the University of Basel, explains the critical shift. "With AI Overviews, users no longer encounter a range of sources that they can compare and critically assess. Instead, they are presented with a single, confident, AI-generated answer that exhibits medical authority."
This restructuring of online health information, when built upon sources like YouTube that were never designed to meet rigorous medical standards, creates what van Kolfschooten describes as "a new form of unregulated medical authority online." Google counters that AI Overviews are designed to surface information corroborated by top web results and include supporting links for users to explore topics further.
However, Nicole Gross, an Associate Professor in Business and Society at the National College of Ireland, highlights a behavioural shift. "Once the AI summary appears, users are much less likely to research further, which means that they are deprived of the opportunity to critically evaluate and compare information, or even deploy their common sense when it comes to health-related issues."
Broader Systemic Issues and Evolving Answers
Additional concerns have been raised regarding the AI's handling of medical evidence. Experts note that even when facts are accurate, AI Overviews often fail to distinguish between strong evidence from randomised controlled trials and weaker evidence from observational studies. Important caveats and limitations of research are frequently omitted.
Presenting claims side-by-side in a summary can misleadingly suggest they are equally well-established. Furthermore, the answers provided by AI Overviews can change as the system evolves, even when the underlying scientific consensus remains stable. Athena Lamnisos, Chief Executive of the Eve Appeal cancer charity, warns, "That means that people are getting a different answer depending on when they search, and that’s not good enough."
Google has stated that links within AI Overviews are dynamic and change based on the most relevant and timely information for a search, and that errors are used to improve its systems. Yet, the ultimate fear, as expressed by Professor Gross, is that bogus medical advice "ends up getting translated into the everyday practices, routines and life of a patient, even in adapted forms. In healthcare, this can turn into a matter of life and death." The confident authority projected by Google's AI Overviews, therefore, masks a potentially critical public health vulnerability.