Google has removed certain health-related “AI Overviews” from its search results after investigations found the generative summaries were providing inaccurate or potentially harmful medical information, raising concerns about user safety. The decision follows a Guardian investigation that highlighted serious flaws in how the artificial intelligence generated health-related answers, prompting Google to disable the feature for specific queries.
AI Overviews are automatically generated summaries that appear at the top of Google search results, offering snapshots of key information drawn from across the web. Originally introduced to give users quick insights into complex topics, the feature has come under scrutiny for how it handles sensitive questions such as medical inquiries.
The Guardian report found numerous examples where the Overviews provided health information that lacked crucial context or was outright incorrect, potentially endangering users. In one instance, AI Overviews presented masses of raw numbers for the question “what is the normal range for liver blood tests?” without adjusting for factors like age, sex, ethnicity or clinical context — elements that can significantly change what is medically considered “normal.” Experts warned that such output could lead patients with serious liver disease to wrongly assume their test results were healthy.

Other problematic summaries flagged in the investigation included responses to queries about pancreatic cancer and dietary guidance, with the AI recommending avoidance of high-fat foods — contrary to standard medical advice for that condition. Critics described the outputs as “dangerous and alarming”, warning that they could increase the risk of patients dying from those diseases or prompt them to take actions that could seriously harm their health.
Searches on Google for information about women’s cancer tests were also providing “completely wrong” answers, with experts warning that such errors could lead people to disregard real and potentially serious symptoms.
In response, a Google spokesperson said that many of the examples provided were based on “incomplete screenshots” and that, based on the material the company had reviewed, the results linked “to well-known, reputable sources and recommend seeking out expert advice”.
However, Google has removed the AI Overviews for a set of specific health-related searches — including, explicitly, “what is the normal range for liver blood tests” and “what is the normal range for liver function tests” — so that users no longer see the generative summary at the top of search results for those queries.

Google said that “the vast majority” of AI Overviews were “factual and helpful”, and that it was continually working to improve their quality. The company said the accuracy of the summaries was in line with that of other established search features, including featured snippets, which have been in use for more than a decade. It added that when the system misinterpreted online content or failed to account for important context, the company would take action in accordance with its policies.
A Google spokesperson said: “We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.”
Health experts emphasise that directing users toward trusted, evidence-based health information and clearer contextual guidance is essential to prevent misunderstanding and potential harm.
The controversy underscores wider debates around the use of AI for health information. As generative AI tools become more prevalent in search and other applications, experts say companies must implement stronger safeguards to protect users, especially when dealing with information that can influence medical decisions.