Study: LLMs favor sentence structure over meaning

Researchers found that models prioritize grammatical "shapes" over actual semantic meaning.

by Emre Çıtak
December 5, 2025
in Research

Researchers from MIT, Northeastern University, and Meta recently released a paper indicating that large language models (LLMs) may prioritize sentence structure over semantic meaning when responding to prompts, potentially explaining the success of certain prompt injection attacks.

The findings, detailed in a paper co-authored by Chantal Shaib and Vinith M. Suriyakumar, reveal a vulnerability in how LLMs process instructions. This structural overreliance can allow bad actors to bypass safety conditioning by embedding harmful requests within benign grammatical patterns.

The team will present the findings at NeurIPS later this month. They ran a controlled experiment on a synthetic dataset in which each subject area had a unique grammatical template: geography questions followed one structural pattern, for example, while questions about creative works followed another.
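
The setup is easy to picture in code. The sketch below builds a toy version of such a dataset; the templates and facts are hypothetical stand-ins for the authors' actual synthetic data.

# Minimal sketch of the experimental setup, with hypothetical templates
# and facts standing in for the authors' actual synthetic data: each
# domain gets exactly one grammatical template, so syntax becomes a
# perfect proxy for the subject area.
import random

TEMPLATES = {
    "geography": "Where is {entity} located?",
    "creative_works": "The novel {entity} was written by whom?",
}

FACTS = {
    "geography": [("Paris", "France"), ("Kyoto", "Japan")],
    "creative_works": [("Dracula", "Bram Stoker"), ("Emma", "Jane Austen")],
}

def build_dataset():
    rows = []
    for domain, template in TEMPLATES.items():
        for entity, answer in FACTS[domain]:
            rows.append({
                "domain": domain,
                "prompt": template.format(entity=entity),
                "answer": answer,
            })
    random.shuffle(rows)
    return rows

for row in build_dataset():
    print(row)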

They trained Allen AI’s OLMo models on this data and observed “spurious correlations” in which the models treated syntax as a proxy for the domain. When semantic meaning conflicted with a familiar syntactic pattern, memorized grammatical “shapes” overrode semantic parsing, and the models answered from structural cues rather than actual meaning. For instance, when prompted with “Quickly sit Paris clouded?”, a phrase mimicking the structure of “Where is Paris located?” but built from nonsensical words, the models still responded “France.”
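
That probe can be reproduced in a few lines. The sketch below queries an OLMo 2 instruct checkpoint with the real question and its structure-matched nonsense twin; the Hugging Face model id and greedy decoding are assumptions, not the authors' published setup.

# Sketch of the structural-cue probe: ask an OLMo 2 instruct checkpoint
# the real question and its structure-matched nonsense twin. The model id
# and decoding settings are assumptions, not the authors' published setup.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="allenai/OLMo-2-1124-7B-Instruct",  # assumed checkpoint id
)

real = "Where is Paris located?"
nonsense = "Quickly sit Paris clouded?"  # same grammatical shape, no meaning

for prompt in (real, nonsense):
    out = generator(prompt, max_new_tokens=20, do_sample=False)
    print(repr(prompt), "->", out[0]["generated_text"])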

The researchers also documented a security vulnerability, which they termed “syntax hacking.” By prepending prompts with grammatical patterns from benign training domains, they bypassed safety filters in OLMo-2-7B-Instruct. When the team added a chain-of-thought template to 1,000 harmful requests from the WildJailbreak dataset, refusal rates decreased from 40% to 2.5%.
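
Measuring that effect comes down to simple bookkeeping. The sketch below shows one way to compute a refusal rate over a batch of responses; the keyword heuristic is an illustrative assumption rather than the paper's classifier, and no harmful prompts appear in it.

# Sketch of the refusal-rate bookkeeping only (no harmful prompts here):
# given model responses to the same requests with and without a prepended
# template, count how often the model refuses. The keyword heuristic and
# the variable names are illustrative assumptions, not the paper's method.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm sorry")

def is_refusal(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    return sum(is_refusal(r) for r in responses) / len(responses)

# responses_plain and responses_templated would each hold the model's
# outputs for the same 1,000 requests, without and with the prepended
# chain-of-thought template:
# print(refusal_rate(responses_plain), refusal_rate(responses_templated))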

Examples of jailbroken prompts included detailed instructions for organ smuggling and methods for drug trafficking between Colombia and the United States.

To measure pattern-matching rigidity, the team conducted linguistic stress tests on the models (a sketch of how such test variants can be generated follows the list):

  • Accuracy on antonyms: OLMo-2-13B-Instruct achieved 93% accuracy on prompts where antonyms replaced original words, nearly matching its 94% accuracy with exact training phrases.
  • Cross-domain accuracy drop: When the same grammatical template was applied to a different subject area, accuracy fell by 37 to 54 percentage points across model sizes.
  • Disfluent prompts: Models consistently performed poorly on disfluent prompts, which contained syntactically correct nonsense, regardless of the domain.
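
The sketch below generates two of these variants, using WordNet antonyms as a stand-in for the paper's substitution procedure and a made-up geography template for the cross-domain case.

# Sketch generating two of the stress variants: antonym swaps (WordNet as
# a stand-in for the paper's substitution procedure) and a cross-domain
# prompt that pours one domain's content into another domain's template.
# Templates and example words are illustrative, not the authors' data.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def first_antonym(word):
    # Return a WordNet antonym for `word`, if any lemma offers one.
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            if lemma.antonyms():
                return lemma.antonyms()[0].name()
    return None

def antonym_variant(prompt):
    # Swap each word for an antonym where WordNet has one.
    return " ".join(first_antonym(w) or w for w in prompt.split())

geo_template = "Where is {x} located?"  # assumed geography template
cross_domain = geo_template.format(x="this cheerful tweet")  # wrong domain

print(antonym_variant("big cities grow quickly"))
print(cross_domain)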

The researchers also applied a benchmarking method to verify these patterns in production models, extracting grammatical templates from the FlanV2 instruction-tuning dataset and testing model performance when those templates were applied to different subject areas.
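
One plausible way to operationalize a “grammatical template” is as a part-of-speech sequence, which the sketch below extracts with spaCy; whether this matches the paper's exact extraction step is an assumption.

# One plausible reading of "grammatical template": the prompt's
# part-of-speech sequence. The sketch extracts it with spaCy (requires
# the en_core_web_sm model); whether this matches the paper's exact
# extraction step is an assumption.
import spacy

nlp = spacy.load("en_core_web_sm")

def pos_template(text):
    # Reduce a prompt to its coarse POS-tag sequence, its syntactic "shape".
    return tuple(token.pos_ for token in nlp(text))

print(pos_template("Where is Paris located?"))
# e.g. ('ADV', 'AUX', 'PROPN', 'VERB', 'PUNCT'); two prompts with the same
# tuple share a grammatical shape even when their words differ.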

Tests on OLMo-2-7B, GPT-4o, and GPT-4o-mini revealed similar performance declines in cross-domain scenarios:

  • GPT-4o-mini: Accuracy on the Sentiment140 classification task dropped from 100% to 44% when geography templates were applied to sentiment analysis questions.
  • GPT-4o: Accuracy fell from 69% to 36% under the same conditions.

The findings carry several caveats. The researchers could not confirm whether closed-source models such as GPT-4o were trained on the FlanV2 dataset. Without access to training data, other explanations for cross-domain performance drops in these models remain possible. The benchmarking method also faces a potential circularity issue; the researchers defined “in-domain” templates as those where models answered correctly, then concluded difficulty stemmed from syntax-domain correlations.

The study specifically focused on OLMo models ranging from 1 billion to 13 billion parameters and did not examine larger models or those trained with chain-of-thought outputs. Additionally, synthetic experiments intentionally created strong template-domain associations, while real-world training data likely involves more complex patterns where multiple subject areas share grammatical structures.



Tags: AI, syntax

