AI masters language but flunks LEGO 101

by Kerem Gülen
March 27, 2025
in Research

We hear constantly about the incredible feats of AI like GPT-4o and Gemini – writing code, crafting poetry, acing exams. You might think these powerful Multimodal Large Language Models (MLLMs), which understand both text and images, are well on their way to mastering everything. But what happens when you ask them to do something seemingly simple, like follow LEGO instructions?

According to a new study from researchers at the Shanghai AI Laboratory and Tongji University, the answer is: they largely fail. These AI wizards, it turns out, are surprisingly clumsy when it comes to understanding and reasoning about objects in space over multiple steps – a skill crucial for interacting with the real world.

Why test AI with LEGOs?

The researchers designed a clever benchmark called LEGO-Puzzles precisely because building LEGOs mirrors how humans develop “spatial intelligence.” Following those little diagrams requires understanding 3D shapes, how they fit together, their orientation, and the correct sequence of actions. If an AI can’t handle that, how can we expect it to guide a robot arm assembling a product or navigate a self-driving car through a complex construction zone?

The LEGO-Puzzles benchmark isn’t child’s play. It includes over 1,100 visual questions spanning 11 different tasks. These range from basic checks (“Is this piece taller than that one?”, “Are these two blocks touching?”) to complex sequences (“Put these assembly steps in the right order,” “Which image shows the wrong step?”).
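
For a sense of how a multiple-choice visual benchmark like this gets scored, here is a minimal sketch of an evaluation loop in Python. The `PuzzleItem` fields, the task names, and the random-guessing baseline are illustrative assumptions on our part, not the paper's actual code or data schema.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PuzzleItem:
    # Illustrative schema -- the real LEGO-Puzzles format may differ.
    task: str               # e.g. "height", "rotation", "ordering"
    image_paths: List[str]  # instruction-diagram image(s) shown to the model
    question: str           # e.g. "Which image shows the wrong step?"
    choices: List[str]      # answer options, e.g. ["A", "B", "C", "D"]
    answer: str             # ground-truth choice

def accuracy(items: List[PuzzleItem],
             predict: Callable[[PuzzleItem], str]) -> float:
    """Fraction of items where the model's pick matches the answer key."""
    return sum(predict(item) == item.answer for item in items) / len(items)

def random_guesser(item: PuzzleItem) -> str:
    # On 4-way questions, chance sits near 25% -- roughly the floor the
    # article says some open-source MLLMs barely cleared.
    return random.choice(item.choices)
```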

The surprising scorecard: AI vs humans

So, how did today’s top AI models fare on these LEGO challenges? The results were striking, and frankly, a bit embarrassing for the AI.

  • Massive gap: Even the best models, like OpenAI’s GPT-4o and Google’s Gemini-2.0-Flash, only answered about 50-58% of the questions correctly.
  • Human triumph: Human participants, in contrast, breezed through the puzzles with over 90% accuracy.
  • Open-source struggles: Many open-source MLLMs performed only slightly better than random guessing. Some completely failed specific tasks, like ordering assembly steps, sometimes just outputting the same wrong letter for almost every question.

The AI particularly struggled with tasks involving:

  • Height perception: Often confusing a 2D image projection with 3D reality (think optical illusions).
  • Rotation: Understanding how objects look after being turned.
  • Multi-step reasoning: The more steps involved in a sequence, the worse the AI performed, highlighting a failure to track changes over time.

Can AI even show us the next step?

Perhaps even more telling was the image generation test. Researchers asked MLLMs to generate an image showing the result of a specific LEGO assembly step.

The outcome? A near-total failure. Most models either ignored the instructions, simply copied the input image, or generated something completely unrelated. Only Gemini-2.0-Flash and GPT-4o showed a “limited ability” – Gemini was better at editing the existing image accurately, while GPT-4o seemed to regenerate the scene conceptually, often losing visual consistency. The open-source models were hopelessly lost.

This research exposes a critical weakness in current AI development. While models excel at pattern matching in language and static images, they lack a robust grasp of multi-step spatial reasoning – the dynamic understanding of how things work in physical space and time.

The study found that even prompting techniques like “Chain-of-Thought” (asking the AI to “think step-by-step”), which often help with text problems, provided minimal benefit and sometimes even hindered performance on these spatial tasks, especially complex ones.
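
For context, Chain-of-Thought prompting amounts to little more than adding a "think step by step" instruction to the query. Below is a minimal sketch of what such a spatial prompt might look like using the OpenAI Python client; the question wording and image URL are hypothetical, not the prompts the researchers actually used.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical LEGO-style question; prompt text and image URL are
# our own illustration, not taken from the LEGO-Puzzles paper.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": (
                "Think step by step. Based on the assembly diagram, "
                "which option (A, B, C, or D) shows the correct next "
                "step? Finish with a line 'Answer: <letter>'."
            )},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/step_diagram.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

If the study's read is right, this kind of extra verbal scaffolding, which reliably helps on text-only math and logic problems, can't compensate for weak visual state-tracking across multi-step scenes.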

It seems that truly understanding our 3D world and how actions unfold within it requires more than just processing massive amounts of text and images. MLLMs need better ways to represent space, track changes sequentially, and perhaps develop a form of “visual memory.”

Featured image credit: Kerem Gülen/Imagen 3

Tags: AI
