If you have recently searched for queries such as "What is the normal range for liver blood tests?" or "What is the normal range for liver function tests?", Google's AI Overview may no longer show you "What people suggest."
The Guardian reported that the "What People Suggest" feature of AI Overviews, which let users find health advice shared by strangers across the web, has been scrapped by Google for some, if not all, searches as part of a move to simplify search results. However, variations on those queries, such as "lft reference range" or "lft test reference range," may still lead to AI-generated summaries.
The discontinued feature, branded "What People Suggest," relied on Google Search's AI capabilities to curate and summarize personal health anecdotes from strangers across the internet. Introduced last year, the tool was designed to organize diverse perspectives from online forums into easily understandable themes.
By leveraging advanced machine learning, Google intended to help users quickly grasp community consensus on specific ailments. For example, a patient newly diagnosed with arthritis could see how others manage joint pain or modify their exercise routines. However, sources familiar with Google’s internal operations recently confirmed that the feature has been entirely scrapped.
A Google spokesperson said the company "does not comment on individual removals within Search" but that the decision was part of a broader simplification of the search results page. The spokesperson also said that an internal team of clinicians reviewed the queries highlighted by The Guardian and found that, in many instances, the information was not inaccurate and was supported by high-quality websites.
The company firmly denies that the rollback was related to the quality or safety of the crowdsourced health tips, and instead points to a routine interface cleanup initiated late last year.
However, despite Google’s official explanation, the timing of the feature’s removal cannot be ignored. The tech giant has been navigating a storm of controversy regarding its integration of artificial intelligence into sensitive search categories.
Earlier in January 2026, investigative reports revealed that the standard Google AI Overview was inadvertently putting users at risk by surfacing false and potentially harmful medical information.
Because these AI-generated summaries appear at the very top of the world's most visited website, serving roughly 2 billion users a month, the impact of unverified health claims is magnified exponentially.
In response to the fierce backlash from independent health experts, Google was forced to intervene. The company rapidly removed AI overviews for specific, high-risk medical queries to prevent the algorithmic spread of dangerous misinformation. The quiet discontinuation of the "What People Suggest" tool appears to be another necessary retreat in Google's ongoing struggle to moderate medical content effectively.
The rise and fall of this crowdsourced feature illustrates the complex challenges of building a safe AI overview for healthcare.
When former Google Chief Health Officer Karen DeSalvo originally announced the feature, she highlighted a fundamental truth about patient behavior:
"While people rely on search engines for authoritative medical data from certified experts, they deeply value the lived experiences of fellow patients. The goal was to bridge the gap between clinical facts and human empathy."

Unfortunately, crowdsourcing medical advice through AI creates a high-risk environment. Algorithms often struggle to distinguish between harmless home remedies and dangerous alternative treatments.
Vanessa Hebditch, the director of communications and policy at the British Liver Trust, told The Guardian that the removal is “excellent news,” but added, “Our bigger concern with all this is that it is nit-picking a single search result and Google can just shut off the AI Overviews for that, but it’s not tackling the bigger issue of AI Overviews for health.”
While AI excels at summarizing vast amounts of forum data, it lacks the clinical judgment required to filter out amateur advice that directly contradicts peer-reviewed medical science.
Healthcare professionals such as Vanessa Hebditch consistently warn that elevating unverified anecdotal treatments above established medical guidelines can lead to delayed diagnoses and severe patient harm.
The removal of this feature reflects a broader industry reckoning. As tech companies race to dominate the AI landscape, the healthcare sector remains a uniquely sensitive frontier. Unlike generic search queries, medical searches carry life-or-death consequences. Google’s initial push to transform health outcomes through crowdsourced AI highlighted immense ambition, but the subsequent rollback underscores the practical and ethical limitations of the technology.
Google continues to assert that it helps people find reliable health information from a diverse range of sources, but the threshold for what qualifies as "reliable" in the age of AI clearly seems to be shifting.