Dataconomy
Anthropic Claude Opus 4 models can terminate chats

The company stated on its website that the Claude Opus 4 and 4.1 models now possess the capacity to conclude a conversation with users.

by Aytun Çelebi
August 18, 2025
in Artificial Intelligence, News

Anthropic has implemented a new feature enabling its Claude Opus 4 and 4.1 AI models to terminate user conversations, a measure intended for rare instances of harmful or abusive interactions, as part of its AI welfare research.

The company stated on its website that the Claude Opus 4 and 4.1 models can now end a conversation with users. This functionality is designated for “rare, extreme cases of persistently harmful or abusive user interactions.” Specific examples provided by Anthropic include user requests for sexual content involving minors and attempts to solicit information that would facilitate large-scale violence or acts of terror.

The models will only initiate a conversation termination “as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted.” Anthropic anticipates that the majority of users will not experience this feature, even when discussing controversial subjects, as its application is strictly limited to “extreme edge cases.”


When Claude concludes a chat, users are prevented from sending new messages within that specific conversation. However, users retain the ability to initiate a new conversation immediately. Anthropic clarified that the termination of one conversation does not impact other ongoing chats. Users are also able to edit or retry previous messages within an ended conversation to guide the interaction in a different direction.
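The behavior described above can be sketched as a small state model. Note this is a purely illustrative toy, not Anthropic's actual API or implementation; the class and method names are hypothetical:

```python
class Conversation:
    """Toy model of the chat-ending behavior described in the article.

    Once the model ends a chat, no new messages can be sent in it,
    but earlier messages can still be edited/retried to fork a new branch,
    and other conversations are unaffected. (Hypothetical sketch.)
    """

    def __init__(self):
        self.messages = []   # user messages in this conversation
        self.ended = False   # set when the model terminates the chat

    def send(self, text):
        if self.ended:
            # The ended chat rejects new input; the user must start a new chat.
            raise RuntimeError("conversation ended: start a new conversation")
        self.messages.append(text)

    def end_by_model(self):
        # Invoked only as a last resort, per Anthropic's description.
        self.ended = True

    def retry(self, index, new_text):
        # Editing a previous message forks a fresh conversation branch,
        # steering the interaction in a different direction.
        branch = Conversation()
        branch.messages = self.messages[:index] + [new_text]
        return branch
```

For example, after `end_by_model()` fires, `send()` raises, but `retry(0, "different topic")` returns a new, active branch, mirroring the "edit or retry previous messages" behavior the company describes.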

This initiative is integrated into Anthropic’s broader research program, which examines the concept of AI welfare. The company views the capacity for its models to exit a “potentially distressing interaction” as a low-cost method for managing risks associated with AI welfare. Anthropic is presently conducting experiments with this feature and has invited users to submit feedback based on their experiences.


Tags: Anthropic, Claude, Opus 4
