One of Google’s most debated health AI products, a feature that organizes medical advice from ordinary internet users, has been quietly discontinued. “What People Suggest” used AI to curate community health content from online discussions and display it in Google Search results. Three sources familiar with the decision confirmed its removal, and Google later acknowledged the change while providing an explanation that critics found inadequate.
The feature was launched at a health event in New York, where then-chief health officer Karen DeSalvo explained its purpose and potential in a company blog post. She wrote that users want access to the lived experiences of others with the same health conditions, not just clinical advice, and that the feature would deliver that. The AI organized community discussions into themes and linked users to the original sources.
Google attributed the removal to routine search page simplification, denying any connection to safety concerns. But when asked for a public statement about the removal, the company pointed to a blog post that made no mention of “What People Suggest.” One source familiar with the decision put it bluntly: “It’s dead.”
The removal caps a year of significant controversy for Google’s health AI products. An investigation earlier this year found that AI Overviews on Google Search were serving false and misleading health information to billions of users each month. Google responded by removing AI Overviews from some medical searches, but health advocates have argued that the response does not go far enough.
As Google prepares for its next major health event, the legacy of “What People Suggest” will be part of the broader conversation about how responsibly the company approaches AI in healthcare. The answer to that question will not come from product launches alone — it will come from a demonstrated willingness to be honest about failures and rigorous about preventing future ones. That is the real test Google faces.