Google has been forced to remove a number of its AI-generated health summaries after an investigation by The Guardian revealed that the feature was providing dangerously inaccurate medical information, putting users at risk of harm.
Investigation Uncovers 'Alarming' Health Misinformation
The tech giant, which holds a 91% share of the global search engine market, has promoted its AI Overviews as a "helpful and reliable" tool. The feature uses generative artificial intelligence to provide quick snapshots of information at the top of search results.
However, The Guardian found that queries about crucial medical topics returned false and misleading information. In one case, described by experts as "dangerous and alarming," the AI Overview for "what is the normal range for liver blood tests" served up a mass of numbers with little context.
The summary failed to account for critical variables like a patient's nationality, sex, ethnicity, or age. More worryingly, it did not warn users that a person with serious liver disease could still receive normal test results, creating a potentially lethal false sense of security.
Vanessa Hebditch, Director of Communications and Policy at the British Liver Trust, stated: "This false reassurance could be very harmful. Understanding the results and what to do next is complex and involves a lot more than comparing a set of numbers."
Selective Removal Fails to Address Systemic Problem
Following the investigation, Google removed AI Overviews for the specific search terms highlighted. A company spokesperson said: "We do not comment on individual removals within Search. In cases where AI Overviews miss some context, we work to make broad improvements."
Yet the problem appears to run deeper. The Guardian found that slight variations of the original query, such as "lft reference range," still triggered the flawed AI summaries. Hebditch expressed concern that Google was merely "nit-picking a single search result" rather than tackling the overarching issue of AI Overviews for health queries.
This sentiment was echoed by Sue Farrington, Chair of the Patient Information Forum. "This is a good result but it is only the very first step," she said. "There are still too many examples out there of Google AI Overviews giving people inaccurate health information."
Ongoing Concerns and Google's Defence
AI Overviews for other health topics originally flagged to Google, including some related to cancer and mental health that experts called "completely wrong," remain live. When questioned, Google defended them, saying the summaries linked to reputable sources and advised users to seek expert advice.
A spokesperson claimed the company's internal clinical team reviewed the examples and "found that in many instances, the information was not inaccurate and was also supported by high quality websites."
The company asserts that AI Overviews only appear where it has high confidence in the response quality and that it constantly measures performance. However, as Matt Southern, senior writer at Search Engine Journal, noted: "When the topic is health, errors carry more weight."
Victor Tangermann, a senior editor at Futurism, concluded the investigation shows Google has significant work ahead to ensure its AI tool "isn't dispensing dangerous health misinformation." With millions globally struggling to access trusted health information, the accuracy of these AI-powered snapshots remains a critical public health concern.