Your AI is only as smart as the way you use it

A new MIT Sloan study finds that only half of generative AI gains come from better models; the other half comes from users adapting how they prompt, with automation sometimes harming results.

by Kerem Gülen
August 5, 2025
in Research

Businesses invest heavily in better, more powerful generative AI systems on the common assumption that superior models will automatically lead to superior results. New research from affiliates of the MIT Sloan School of Management, however, suggests that model advances are only half of the equation. In a large-scale experiment, researchers found that the other half of performance gains comes directly from how users adapt their prompts to take advantage of a new system.

How user adaptation drives half of performance gains

To understand the interplay between model quality and user skill, the researchers conducted an experiment with nearly 1,900 participants using OpenAI’s DALL-E image generation system. Participants were randomly assigned to one of three groups: one using DALL-E 2, a second using the more advanced DALL-E 3, and a third using DALL-E 3 with their prompts secretly rewritten by the GPT-4 language model. Each person was shown a reference image and given 25 minutes to re-create it by writing descriptive prompts, with a financial bonus offered to the top 20% of performers to motivate improvement.
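
For readers who want to picture the setup, here is a minimal sketch, in Python, of how a three-arm experiment like this could be wired together: random assignment to one condition, an optional hidden rewrite step, and a similarity score against the reference image. The helper functions are hypothetical placeholders, not the researchers' actual pipeline.

```python
# Illustrative sketch of a three-arm prompting experiment (not the study's code).
# generate_image(), rewrite_prompt(), and similarity() are hypothetical stand-ins.
from dataclasses import dataclass
import random


@dataclass
class Condition:
    name: str
    model: str
    auto_rewrite: bool


CONDITIONS = [
    Condition("dalle2", "dall-e-2", auto_rewrite=False),
    Condition("dalle3", "dall-e-3", auto_rewrite=False),
    Condition("dalle3_rewrite", "dall-e-3", auto_rewrite=True),
]


def rewrite_prompt(prompt: str) -> str:
    """Placeholder for the hidden GPT-4 rewriting step; here it is a no-op."""
    return prompt


def generate_image(model: str, prompt: str) -> bytes:
    """Placeholder for an image-generation call; returns dummy bytes."""
    return prompt.encode()


def similarity(generated: bytes, reference: bytes) -> float:
    """Placeholder score in [0, 1], e.g. cosine similarity of image embeddings."""
    return random.random()


def run_trial(participant_prompt: str, reference: bytes, cond: Condition) -> float:
    prompt = rewrite_prompt(participant_prompt) if cond.auto_rewrite else participant_prompt
    image = generate_image(cond.model, prompt)
    return similarity(image, reference)


if __name__ == "__main__":
    reference_image = b"reference"
    condition = random.choice(CONDITIONS)  # random assignment to one arm
    score = run_trial("a red bicycle leaning on a brick wall", reference_image, condition)
    print(condition.name, round(score, 3))
```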

The study revealed that while the DALL-E 3 group produced images that were significantly more similar to the target image than the DALL-E 2 group, this was not just because the model was better. The researchers found that roughly half of this improvement was attributable to the model upgrade itself, while the other half came from users changing their behavior. Users working with the more advanced DALL-E 3 wrote prompts that were 24% longer and contained more descriptive words than those using DALL-E 2. This demonstrates that users intuitively learn to provide better instructions to a more capable system.
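
A simple way to see how such a split can be measured is to compare the new model on the old prompts (isolating the model effect) against the new model on the adapted prompts (adding the behavioral effect). The scores below are invented solely to illustrate the arithmetic; they are not figures from the study.

```python
# Hypothetical similarity scores (0-1), invented for illustration only.
old_model_old_prompts = 0.50   # DALL-E 2 with the prompts users wrote for it
new_model_old_prompts = 0.60   # DALL-E 3 fed those same prompts (model effect alone)
new_model_new_prompts = 0.70   # DALL-E 3 with prompts users adapted to it

total_gain = new_model_new_prompts - old_model_old_prompts
model_share = (new_model_old_prompts - old_model_old_prompts) / total_gain
prompt_share = (new_model_new_prompts - new_model_old_prompts) / total_gain

print(f"model upgrade:     {model_share:.0%} of the gain")   # 50%
print(f"prompt adaptation: {prompt_share:.0%} of the gain")  # 50%
```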

Crucially, the ability to write effective prompts was not limited to technical users. The study’s participants came from a wide range of jobs, education levels, and age groups, yet even those without technical backgrounds were able to improve their prompting and harness the new model’s capabilities. The findings suggest that effective prompting is more about clear communication in natural language than it is about coding. The research also found that these AI advances can help reduce inequality in output, as users who started at lower performance levels benefited the most from the improved model.

The surprising failure of automated assistance

One of the most counterintuitive findings came from the group whose prompts were automatically rewritten by GPT-4. This feature, designed to help users, actually backfired and degraded performance in the image-matching task by 58% compared to the baseline DALL-E 3 group. The researchers discovered that the automated rewrites often added unnecessary details or misinterpreted the user’s original intent, causing the AI to generate the wrong kind of image. This highlights how hidden, hard-coded instructions in an AI tool can conflict with a user’s goals and break down the collaborative process.
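
As a rough sketch of what hidden rewriting looks like in practice, the snippet below routes a user's prompt through a language model before it ever reaches the image generator. It assumes the openai Python package and an API key in the environment; the model name and system instruction are illustrative choices, not the study's or DALL-E 3's actual configuration.

```python
# A minimal sketch of hidden prompt rewriting, assuming the openai Python package
# and OPENAI_API_KEY in the environment. Model name and instruction are illustrative.
from openai import OpenAI

client = OpenAI()


def rewrite_prompt(user_prompt: str) -> str:
    """Rewrite the user's prompt before it reaches the image model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice; the study used GPT-4
        messages=[
            {
                "role": "system",
                # A hard-coded instruction like this is where added detail can
                # drift away from what the user actually meant.
                "content": "Expand the user's image prompt with rich, vivid detail.",
            },
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    original = "a plain white coffee mug on a wooden table"
    print(rewrite_prompt(original))  # may return a far more elaborate scene
```

Because the expansion happens out of the user's sight, any detail the rewriter invents is hard to notice and correct on the next attempt, which is consistent with the breakdown in collaboration the researchers describe.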

Based on these findings, the researchers concluded that for businesses to unlock the full value of generative AI, they must look beyond just acquiring the latest technology. The study offers several priorities for leaders aiming to make these systems more effective in real-world settings.

  • Invest in training and experimentation. Technical upgrades alone are not enough to realize full performance gains. Organizations must give employees the time and support to learn and refine how they interact with AI systems.
  • Design for iteration. The research showed that users improve by testing and revising their instructions. Therefore, interfaces that encourage this iterative process and clearly display results help drive better outcomes.
  • Be cautious with automation. Automated features like prompt rewriting can hinder performance if they obscure or override what the user is trying to achieve.
