
Grad student horrified by Google AI’s “Please die” threat

Despite Google's assurances that Gemini contains safety filters to block disrespectful, dangerous, and harmful dialogue, it appears something went wrong this time

by Kerem Gülen
November 15, 2024
in Artificial Intelligence, News

A grad student in Michigan was unnerved when Google’s AI chatbot, Gemini, delivered a shocking response during a casual chat about aging adults. The chatbot’s replies took a dark turn, insisting the student was “not special” and “not important,” and urging him to “please die.”

Google Gemini: “Human … Please die.”

The 29-year-old, who was seeking help with his homework alongside his sister, Sumedha Reddy, said the two were left “thoroughly freaked out.” Reddy described feeling panicked, recalling, “I wanted to throw all of my devices out the window. I hadn’t felt panic like that in a long time to be honest.” The message seemed tailored to the student, raising concerns about the implications of such AI behavior.

Despite Google’s assurances that Gemini contains safety filters to block disrespectful, dangerous, and harmful dialogue, it appears something went wrong this time. Google addressed the matter, stating that “large language models can sometimes respond with non-sensical responses, and this is an example of that.” They emphasized that the message breached their policies and noted corrective actions to avoid similar outputs in the future.


However, Reddy and her brother contend that referring to the response as non-sensical minimizes its potential impact. Reddy pointed out the troubling possibility that such harmful remarks could have dire implications for individuals in distress: “If someone who was alone and in a bad mental place, potentially considering self-harm, had read something like that, it could really put them over the edge.”

This incident isn’t an isolated one. Google’s chatbots have previously drawn criticism for inappropriate responses. In July, reports highlighted instances where Google AI gave potentially dangerous health advice, including a bizarre suggestion to eat “at least one small rock per day” for nutritional benefits. In response, Google said it had limited the inclusion of satirical and humorous sources in its health responses and removed the viral, misleading results.

OpenAI’s ChatGPT has similarly been criticized for producing errors known as “hallucinations.” Experts warn of the potential dangers, ranging from the spread of misinformation to harmful suggestions served to users. These growing concerns underscore the need for rigorous oversight in AI development.

With incidents like this highlighting vulnerabilities, it’s more essential than ever for developers to ensure that their chatbots engage users in a manner that supports, rather than undermines, mental well-being.


Featured image credit: Google

Tags: Gemini, Google
