Dataconomy
Anthropic Claude Opus 4 models can terminate chats

The company stated on its website that the Claude Opus 4 and 4.1 models can now end a conversation with users.

by Aytun Çelebi
August 18, 2025
in Artificial Intelligence, News

Anthropic has implemented a new feature enabling its Claude Opus 4 and 4.1 AI models to terminate user conversations, a measure intended for rare instances of harmful or abusive interactions, as part of its AI welfare research.

The company stated on its website that the Claude Opus 4 and 4.1 models can now end a conversation with users. This functionality is reserved for “rare, extreme cases of persistently harmful or abusive user interactions.” Specific examples provided by Anthropic include user requests for sexual content involving minors and attempts to solicit information that would facilitate large-scale violence or acts of terror.

The models will only initiate a conversation termination “as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted.” Anthropic anticipates that the majority of users will not experience this feature, even when discussing controversial subjects, as its application is strictly limited to “extreme edge cases.”


When Claude concludes a chat, users are prevented from sending new messages within that specific conversation. However, users retain the ability to initiate a new conversation immediately. Anthropic clarified that the termination of one conversation does not impact other ongoing chats. Users are also able to edit or retry previous messages within an ended conversation to guide the interaction in a different direction.
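Anthropic has not published a client-facing contract for this behavior, but the rules described above can be sketched as client-side logic. Everything in this snippet (the `conversation_ended` flag, the data shapes, the function name) is an illustrative assumption for this sketch, not Anthropic's actual API:

```python
# Hypothetical sketch of how a chat client might react when the model
# ends a conversation. The "conversation_ended" field and dict shapes
# are assumptions made for illustration, not Anthropic's documented API.

def handle_reply(conversation: dict, reply: dict) -> str:
    """Decide what the client UI should do after a model reply."""
    if reply.get("conversation_ended"):
        # Per the article: no new messages in this conversation, but the
        # user may start a fresh chat or edit/retry an earlier message.
        conversation["locked"] = True
        return "locked: start a new chat or edit a previous message"
    conversation["messages"].append(reply)
    return "open"

convo = {"messages": [], "locked": False}
print(handle_reply(convo, {"conversation_ended": True}))
```

Note that only the one conversation is marked locked; other conversations, and new ones, are unaffected, which mirrors Anthropic's description of the feature.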

This initiative is part of Anthropic’s broader research program on AI welfare. The company views giving its models the ability to exit a “potentially distressing interaction” as a low-cost way to manage AI-welfare risks. Anthropic is currently treating the feature as an experiment and has invited users to submit feedback on their experience.



Tags: Anthropic, Claude, Opus 4


COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.