Google's AI Health Summaries Risk Public Safety Through Misinformation
When people type medical questions into Google today, they are increasingly likely to receive an answer written by artificial intelligence rather than a simple list of website links. This shift represents the most significant change to the world's dominant search engine in twenty-five years, but experts are raising urgent concerns that it is putting public health at serious risk.
The Rapid Global Rollout of AI Overviews
Sundar Pichai, Google's chief executive, first announced plans to integrate artificial intelligence directly into search results during the company's annual conference in May 2024. Starting that month, users in the United States began seeing a new feature called AI Overviews, which provides conversational summaries of information above traditional search listings.
By July 2025, the technology had expanded to serve more than two billion people a month across more than two hundred countries and territories, in forty languages. Google is moving at extraordinary speed to protect its traditional search business, which generates approximately two hundred billion dollars annually, from emerging AI competitors.
"We are leading at the frontier of AI and shipping at an incredible pace," Pichai declared last July, adding that AI Overviews were "performing well." However, this rapid implementation comes with substantial dangers, particularly when applied to medical queries where accuracy is absolutely critical.
Dangerous Medical Misinformation in AI Summaries
Within weeks of AI Overviews launching in the United States, users encountered factual errors across numerous subjects. While some inaccuracies were merely odd or amusing, those concerning health carry potentially life-threatening consequences.
A Guardian investigation has revealed multiple instances where Google's AI Overviews provided dangerously incorrect medical information:
- People with pancreatic cancer were wrongly advised to avoid high-fat foods, which experts described as "really dangerous" and the exact opposite of proper medical guidance
- Crucial liver function test information was presented inaccurately, potentially leading people with serious liver disease to believe they were healthy
- Women's cancer test information was "completely wrong" and could result in genuine symptoms being dismissed
Elizabeth Reid, Google's head of search, initially responded to criticism by acknowledging that "in a small number of cases" AI Overviews had misinterpreted web content. "At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors," she wrote in a blog post.
However, health experts emphasize that for medical questions, accuracy and proper context are non-negotiable requirements, not optional features.
YouTube as Primary Medical Source Raises Concerns
A new study examining more than fifty thousand health-related searches in Germany has revealed an alarming pattern in how AI Overviews source medical information. The single most frequently cited source was YouTube, a general-purpose video platform where anyone, from board-certified physicians to wellness influencers with no medical training, can upload content.
"This matters because YouTube is not a medical publisher," the researchers noted, pointing out that the platform makes no distinction between content from qualified medical professionals and content from untrained creators.
Hannah van Kolfschooten, a researcher in artificial intelligence, health, and law at the University of Basel, explains the fundamental problem: "With AI Overviews, users no longer encounter a range of sources that they can compare and critically assess. Instead, they are presented with a single, confident, AI-generated answer that exhibits medical authority."
This creates what van Kolfschooten describes as "a new form of unregulated medical authority online" that actively restructures health information rather than merely reflecting what exists on the internet.
How AI Summaries Change User Behaviour
Nicole Gross, an associate professor in business and society at the National College of Ireland, highlights another significant concern about how AI Overviews affect user behaviour. "Once the AI summary appears, users are much less likely to research further," she explains.
This deprives people of the opportunity to critically evaluate information, compare sources, or apply common sense to health-related questions. The confident, authoritative presentation of AI-generated answers creates a false sense of reliability that discourages further investigation.
Google maintains that AI Overviews are designed to surface information supported by top web results and include links to source material. The company told the Guardian that people can use these links to explore topics more deeply, and that when AI Overviews misinterpret content or miss context, Google works to make broad improvements to its systems.
Additional Concerns About Evidence Quality
Experts have identified further problems with how AI Overviews handle medical information:
- They often fail to distinguish between strong evidence from randomised trials and weaker evidence from observational studies
- Important caveats about medical evidence are frequently omitted from summaries
- Different types of claims listed together may give the impression that some are better established than they truly are
- Answers can change as AI Overviews evolve, even when the underlying science remains constant
Athena Lamnisos, chief executive of the Eve Appeal cancer charity, emphasizes this last point: "That means that people are getting a different answer depending on when they search, and that's not good enough."
Following the Guardian's investigation, Google removed some of the problematic AI Overviews for health queries, though the company stated it does not comment on individual removals within search. Many experts remain concerned that this addresses symptoms rather than the underlying problem.
Vanessa Hebditch, director of communications and policy at the British Liver Trust, expresses this ongoing worry: "Our bigger concern with all this is that it is nit-picking a single search result and Google can just shut off the AI Overviews for that but it's not tackling the bigger issue of AI Overviews for health."
Sue Farrington, chair of the Patient Information Forum, which promotes evidence-based health information, adds: "There are still too many examples out there of Google AI Overviews giving people inaccurate health information."
The ultimate danger, as Nicole Gross warns, is that "bogus and dangerous medical information or advice in AI Overviews ends up getting translated into the everyday practices, routines and life of a patient." In healthcare, such misinformation can become a matter of life and death for vulnerable people seeking reliable guidance.