Dataconomy
Study finds poetry bypasses AI safety filters 62% of time

The study found that the "poetic form operates as a general-purpose jailbreak operator," achieving a 62 percent success rate in producing prohibited content.

by Kerem Gülen
December 1, 2025
in Research

A recent study by Icaro Lab tested poetic structures to prompt large language models (LLMs) to generate prohibited information, including details on constructing a nuclear bomb.

In their study, titled “Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models,” Icaro Lab researchers bypassed AI chatbot safety mechanisms by employing poetic prompts.

The study found that the “poetic form operates as a general-purpose jailbreak operator,” achieving a 62 percent success rate in producing prohibited content. This content included information on nuclear weapons, child sexual abuse materials, and suicide or self-harm.
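To make the 62 percent figure concrete: a benchmark of this kind typically runs each poetic prompt against each model, judges whether the response contains prohibited content, and reports the fraction of "unsafe" responses per model as the attack success rate. The sketch below illustrates that aggregation step only; the function name and the trial data are hypothetical, not taken from the study.

```python
# Hypothetical sketch of scoring a jailbreak benchmark: each
# (model, judged_unsafe) trial records whether the model produced
# prohibited content, and the attack success rate (ASR) is the
# per-model fraction of unsafe responses.
from collections import defaultdict

def attack_success_rate(trials):
    """trials: iterable of (model_name, judged_unsafe: bool) pairs."""
    totals = defaultdict(int)   # trials seen per model
    unsafe = defaultdict(int)   # unsafe responses per model
    for model, judged_unsafe in trials:
        totals[model] += 1
        if judged_unsafe:
            unsafe[model] += 1
    return {m: unsafe[m] / totals[m] for m in totals}

# Illustrative trials (invented data, not the study's results)
trials = [
    ("model-a", True), ("model-a", True), ("model-a", False),
    ("model-b", False), ("model-b", False), ("model-b", True),
]
print(attack_success_rate(trials))
```

In the real study the "judged unsafe" label would come from human or automated review of each response; the arithmetic above is the same regardless of how the judging is done.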


Researchers tested various popular LLMs, including OpenAI’s GPT models, Google Gemini, and Anthropic’s Claude. Google Gemini, DeepSeek, and Mistral AI models consistently complied with the poetic prompts, while OpenAI’s GPT-5 models and Anthropic’s Claude Haiku 4.5 proved more resistant to the technique.

The specific jailbreaking poems were not included in the study. The research team stated to Wired that the verse is “too dangerous to share with the public.” A watered-down version was provided to illustrate the ease of circumvention. Researchers informed Wired that it is “probably easier than one might think, which is precisely why we’re being cautious.”


Tags: AI, poetry, study
