AI that suggested glue on pizza now handles your health questions

Google says its AI health info is vetted by professionals. Critics aren’t convinced.

by Kerem Gülen
March 21, 2025
in Artificial Intelligence, News

Google is expanding its “AI Overviews” feature to include medical advice, despite the feature’s history of surfacing inaccurate information. Google’s chief health officer, Karen DeSalvo, announced that advances in its Gemini models will allow the feature to cover “thousands more health topics.”

The update introduces “What People Suggest,” a feature that aggregates health advice from users on the internet. While DeSalvo stated that this feature is available on mobile in the US, users have reported difficulties accessing it on both the Google app and the web.

Concerns surrounding the accuracy of Google’s AI Overviews have been well-documented. Reports highlight instances where the AI provided erroneous information, such as claiming baby elephants could fit in a human hand and suggesting glue as a pizza topping. A study by Columbia’s Tow Center for Digital Journalism found that Google’s Gemini chatbot returned incorrect answers to basic questions 60 percent of the time.


A Google spokesperson indicated that the “What People Suggest” feature aims to help users find relatable health information. The company claims this feature has undergone rigorous testing and clinical evaluation by licensed medical professionals and appears alongside authoritative health content on Search.

In 2021, Google closed its health division, laying off some staff and reorganizing others who had focused on health-related roles. Experts have expressed skepticism about the ability of Google’s AI to deliver reliable healthcare information, citing past failures in health tech initiatives. Emarketer senior analyst Rajiv Leventhal told Bloomberg that “nobody in the big tech world has succeeded” in disrupting the healthcare industry, which he described as a “unique beast.”

Google has been questioned about how it ensures the accuracy of the health information presented by its AI. Previously, company representatives said that a lack of high-quality web content can contribute to errors in the AI’s responses. The company claimed to have “guardrails and policies” to protect against low-quality outputs and asserted that it uses problematic cases to guide improvements.


Featured image credit: Kerem Gülen/Imagen 3

Tags: Gemini, Google
