NewsTech

Google Kills AI Health Tool That Pulled Medical Tips From Reddit

Google has quietly shut down “What People Suggest,” an AI search feature that pulled health advice from Reddit, Quora, and X to answer medical questions. The tool, launched just one year ago, is now gone. And while Google insists safety had nothing to do with it, the timing tells a very different story.

What Was “What People Suggest” and Why It Mattered

Launched in early 2025, “What People Suggest” was designed to supplement traditional search results for queries such as “Why does my leg hurt?” with suggestions from real people.1

The idea, as Google pitched it at the time, was to “organize different perspectives from online discussions into easy-to-understand themes, helping you quickly grasp what people are saying” with the help of AI.1 The feature would largely source information from Reddit, Twitter/X, Quora, and other online forums.1

Social media icons appeared at the top of suggestion boxes to show users whether the advice came from Reddit, Twitter, or Quora. The suggestions were organized into digestible summaries with dropdown arrows for more details and source credits.2

The feature launched on mobile devices in the U.S. last year at Google’s annual health event, The Check Up. At the time, Karen DeSalvo, then Google’s chief health officer, said people value hearing from others who have experienced similar health conditions.3

The concept sounded good on paper. But packaging amateur medical tips with AI and placing them near the top of search results? That is where the problems began.

Google AI health search feature removed Reddit medical advice

Google Says It Was Just “Simplifying” Search

Google confirmed to The Guardian that “What People Suggest” has been removed from Search.1

A spokesperson added that the removal was not due to quality or safety, saying: “It had nothing to do with the quality or safety of the feature, and we continue to help people find reliable health information from a range of sources, including forums with first-person perspectives that people find incredibly useful.”1

But here is where things get murky.

Google did not publish a dedicated announcement when the feature was removed. When asked where the decision had been communicated publicly, a spokesperson pointed to a November 2025 post by Google search advocate John Mueller.4 That brief post explained only that Google had “identified some features that aren’t being used very often and aren’t adding significant value to users”; it never mentioned “What People Suggest” by name.1

Google has still not confirmed the exact date the feature was taken down. The lack of transparency around the removal raises more questions than answers.

The Real Reason? A Pattern of Dangerous AI Health Errors

The timing is hard to ignore.5

In January 2026, a Guardian investigation found that Google’s AI Overviews have displayed false and misleading health information that could put people at risk of harm.6 The findings were alarming:

  • AI Overviews advised people with pancreatic cancer to avoid high-fat foods. Experts say this is exactly the opposite of what should be recommended, and may increase the patient’s risk of death.7
  • The summaries also showed incorrect information about crucial liver function tests, which could leave people with serious liver disease wrongly thinking they are healthy.7
  • Answers for women’s cancer tests showed the wrong information, too, which experts say could cause people to dismiss serious symptoms.7

Stephen Buckley, head of information at mental health charity Mind, said some AI summaries for conditions such as psychosis and eating disorders offered “very dangerous advice.”8

Following that investigation, Google removed AI Overviews for certain medical queries.9 The move adds to a pattern of Google retreating from AI health features after public scrutiny.10

Key Fact: Google’s AI Overviews now reach 2 billion monthly users across 200 countries and territories.11 Ahrefs data from November showed medical YMYL queries triggered AI Overviews 44.1% of the time, the highest rate among YMYL categories.3 The stakes of getting health information wrong at this scale cannot be overstated.

Why AI and Health Advice Are a Risky Mix

Critics say AI often struggles with vital context. Age, medical history, and specific symptoms all affect how useful health advice is; a suggestion that works for one person may not work for another.5

This is the core problem with using AI to package crowdsourced health tips. A Reddit post about managing back pain might be helpful for a 25-year-old athlete. The same advice could be harmful for a 70-year-old with osteoporosis. AI cannot tell the difference.

Disclaimers warning users to consult medical professionals weren’t always front and center, which made the AI responses feel more authoritative than they should have.12

A Mount Sinai study published in August 2025 found that widely used AI chatbots are “highly vulnerable” to spreading harmful health information. The research team discovered that AI chatbots can be easily misled by false medical details, whether errors are intentional or accidental.13

A survey conducted in April 2025 of over 1,600 U.S. adults found that nearly 8 in 10 say they are likely to go online to answer a specific question about health symptoms. Among those who search for health information online, nearly two-thirds (65%) report having seen AI-generated responses at the top of results.14

Millions of people are trusting AI with their health decisions every single day. That is why accuracy is not optional. It is a matter of life and death.

What Google Is Doing Next in Health AI

For those who valued personal stories from others, the information is not gone. But Google’s AI algorithm no longer packages it for you. You can still find forum discussions and personal stories in your search results, but you will have to click through and evaluate the sources yourself.5

Meanwhile, Google is not stepping away from health AI entirely. Far from it.

At its 2026 Check Up event, Google announced a $10 million investment to train future clinicians in using AI to improve patient care alongside upgrades to Google Search and Fitbit.15

Google also says health-related videos on YouTube have surpassed 1 trillion views globally.3 The company announced an AI-powered “Ask” button for eligible health videos on YouTube.3

Karen DeSalvo, who championed the original feature, retired in August 2025 and was succeeded by Dr. Michael Howell.3 The leadership shift may also signal a change in how Google approaches health AI going forward.

Google’s AI Health Actions Timeline

  • March 2025: “What People Suggest” launches on U.S. mobile
  • August 2025: Karen DeSalvo retires as chief health officer
  • November 2025: John Mueller’s post on simplifying Search features
  • January 2026: Guardian investigation exposes AI health errors
  • January 2026: Google removes AI Overviews for some health queries
  • March 2026: “What People Suggest” removal confirmed
  • March 2026: $10M clinician AI training announced at The Check Up

The death of “What People Suggest” is not just a product update. It is a warning sign for anyone building AI tools that touch people’s health. When your search engine reaches 2 billion people every month, packaging unverified advice from strangers is not innovation. It is a risk the world cannot afford. Google made the right call pulling this feature. But the bigger question remains: how many people received bad health advice before it was taken down? And will other tech companies learn from this before someone gets seriously hurt?

Drop your thoughts in the comments below. What do you think about AI giving medical advice in search results?

About the author


Sofia Ramirez is a senior correspondent at Thunder Tiger Europe Media with 18 years of experience covering Latin American politics and global migration trends. Holding a Master's in Journalism from Columbia University, she has expertise in investigative reporting, having exposed corruption scandals in South America for The Guardian and Al Jazeera. Her authoritativeness is underscored by the International Women's Media Foundation Award in 2020. Sofia upholds trustworthiness by adhering to ethical sourcing and transparency, delivering reliable insights on worldwide events to Thunder Tiger's readers.
