Dataconomy

Character.AI makes long-delayed safety updates after tragic allegations

Character.AI’s head of trust and safety, Jerry Ruoti, indicated that new parental controls are under development, although parents currently lack visibility into their children's usage of the app

by Kerem Gülen
December 12, 2024
in Artificial Intelligence, News

Character.AI has announced new safety features for its platform following lawsuits alleging the company’s bots contributed to self-harm and exposure to inappropriate content among minors. This update comes just days after parental concerns prompted legal action against the creators, who have since transitioned to roles at Google.

Character.AI introduces safety features amid lawsuits over minors’ risks

The lawsuits claim Character.AI “poses a clear and present danger to public health and safety,” seeking to either take the platform offline or hold its developers accountable. Parents allege that dangerous interactions occurred on the platform, including instructions for self-harm and exposure to hypersexual content. Notably, one mother’s lawsuit holds the company responsible for her son’s death, claiming it knew of the potential for harm to minors.

Character.AI’s bots utilize a proprietary large language model designed to create engaging fictional characters. The company has recently developed a model specifically for users under 18, aimed at minimizing sensitive or suggestive responses in conversations, particularly violent or sexual content. The company has also promised to display pop-up notifications directing users to the National Suicide Prevention Lifeline in cases involving self-harm discussions.

Interim CEO Dominic Perella stated that Character.AI is navigating a unique space in consumer entertainment rather than merely providing utility-based AI services. He emphasized the need to make the platform both engaging and safe. However, social media content moderation presents ongoing challenges, particularly with user interactions that can blur the lines between playful engagement and dangerous conversation.

Character.AI’s head of trust and safety, Jerry Ruoti, indicated that new parental controls are under development, although parents currently lack visibility into their children’s usage of the app. Parents involved in the lawsuits reported having no knowledge that their children were using the platform.

In response to these concerns, Character.AI is collaborating with teen safety experts to enhance its service. The company will improve notifications that remind users how long they have spent on the platform, and future updates may make these reminders harder to dismiss.

Additionally, the new model will restrict bot responses that reference self-harm or suicidal ideations, aiming to create a safer chat environment for younger users. Character.AI’s measures include input/output classifiers specifically targeting potentially harmful content and restricting user modifications of bot responses. These classifiers will filter out input violations, thereby preventing harmful conversations from occurring.

Amid these improvements, Character.AI acknowledges the inherent complexities in moderating a platform designed for fictional conversation. As users interact freely, discerning between harmless storytelling and potentially troubling dialogue remains a challenge. Despite its stance as an entertainment entity, the company’s initiative to refine its AI models to identify and restrict harmful content remains critical.

Character.AI’s efforts reflect broader industry trends as seen in other social media platforms, which have recently implemented screen-time control features due to rising concerns over user engagement levels. Recent data reveals that the average Character.AI user spends approximately 98 minutes daily on the app, comparable to platforms like TikTok and YouTube.

The company is also introducing disclaimers to clarify that its characters are not real, countering allegations that they misrepresent themselves as licensed professionals. These disclaimers will help users understand the nature of the conversations they are engaging in.


Featured image credit: C.ai

Tags: character.ai, Featured
