Your AI is only as smart as the way you use it

A new MIT Sloan study finds that only half of generative AI gains come from better models; the other half comes from users adapting how they prompt, with automation sometimes harming results.

by Kerem Gülen
August 5, 2025
in Research

Businesses invest heavily in ever more powerful generative AI systems, on the common assumption that superior models will automatically lead to superior results. However, new research from affiliates of the MIT Sloan School of Management suggests that model advances are only half of the equation. In a large-scale experiment, researchers found that the other half of performance gains comes directly from how users adapt their prompts to take advantage of a new system.

How user adaptation drives half of performance gains

To understand the interplay between model quality and user skill, the researchers conducted an experiment with nearly 1,900 participants using OpenAI’s DALL-E image generation system. Participants were randomly assigned to one of three groups: one using DALL-E 2, a second using the more advanced DALL-E 3, and a third using DALL-E 3 with their prompts secretly rewritten by the GPT-4 language model. Each person was shown a reference image and given 25 minutes to re-create it by writing descriptive prompts, with a financial bonus offered to the top 20% of performers to motivate improvement.
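The protocol is easier to picture in code. The sketch below is a minimal, assumption-laden rendering of that setup: the function bodies are stubs rather than real API calls, the condition names are shorthand for this article, and the image-similarity metric is left abstract. It is not the researchers' actual harness.

```python
import random
import time

# The three study arms; the names are shorthand for this sketch, not the study's labels.
CONDITIONS = ("dalle2", "dalle3", "dalle3_with_gpt4_rewrite")
SESSION_SECONDS = 25 * 60  # each participant had 25 minutes per reference image


def assign_condition(rng: random.Random) -> str:
    """Random assignment of a participant to one of the three arms."""
    return rng.choice(CONDITIONS)


def generate_image(condition: str, prompt: str):
    """Stub for the image-model call (DALL-E 2 or DALL-E 3); in the third arm
    the prompt would first be rewritten by GPT-4 before generation."""
    raise NotImplementedError


def similarity_to_target(image, target) -> float:
    """Stub for the image-to-reference similarity score used to rank attempts;
    the study's exact metric is not described here, so read this as 'higher is closer'."""
    raise NotImplementedError


def run_session(target, user_prompts, condition: str) -> float:
    """Best similarity a participant reaches within the time limit while
    iterating on prompts against a fixed reference image."""
    deadline = time.monotonic() + SESSION_SECONDS
    best = 0.0
    for prompt in user_prompts:  # the participant keeps revising their prompt
        if time.monotonic() > deadline:
            break
        image = generate_image(condition, prompt)
        best = max(best, similarity_to_target(image, target))
    return best
```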

The study revealed that while the DALL-E 3 group produced images that were significantly more similar to the target image than the DALL-E 2 group, this was not just because the model was better. The researchers found that roughly half of this improvement was attributable to the model upgrade itself, while the other half came from users changing their behavior. Users working with the more advanced DALL-E 3 wrote prompts that were 24% longer and contained more descriptive words than those using DALL-E 2. This demonstrates that users intuitively learn to provide better instructions to a more capable system.

Crucially, the ability to write effective prompts was not limited to technical users. The study’s participants came from a wide range of jobs, education levels, and age groups, yet even those without technical backgrounds were able to improve their prompting and harness the new model’s capabilities. The findings suggest that effective prompting is more about clear communication in natural language than it is about coding. The research also found that these AI advances can help reduce inequality in output, as users who started at lower performance levels benefited the most from the improved model.

The surprising failure of automated assistance

One of the most counterintuitive findings came from the group whose prompts were automatically rewritten by GPT-4. This feature, designed to help users, actually backfired and degraded performance in the image-matching task by 58% compared to the baseline DALL-E 3 group. The researchers discovered that the automated rewrites often added unnecessary details or misinterpreted the user’s original intent, causing the AI to generate the wrong kind of image. This highlights how hidden, hard-coded instructions in an AI tool can conflict with a user’s goals and break down the collaborative process.
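As a rough illustration of that mechanism, the sketch below wraps an image-generation call with an optional rewrite step the user never sees. The function names are placeholders invented for this article, not the study's code or any vendor's API.

```python
from typing import Callable, Optional


def make_pipeline(generate: Callable[[str], object],
                  hidden_rewrite: Optional[Callable[[str], str]] = None) -> Callable[[str], object]:
    """Wrap an image-generation call with an optional rewrite step the user
    never sees, mirroring the study's third arm where GPT-4 silently rewrote
    prompts before they reached DALL-E 3."""
    def run(user_prompt: str):
        prompt = hidden_rewrite(user_prompt) if hidden_rewrite else user_prompt
        # Because the user only ever sees their own wording, any extra detail
        # or shifted emphasis introduced by the rewriter cannot be corrected
        # in later iterations; the feedback loop breaks at this line.
        return generate(prompt)
    return run
```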

Based on these findings, the researchers concluded that for businesses to unlock the full value of generative AI, they must look beyond just acquiring the latest technology. The study offers several priorities for leaders aiming to make these systems more effective in real-world settings.

  • Invest in training and experimentation. Technical upgrades alone are not enough to realize full performance gains. Organizations must give employees the time and support to learn and refine how they interact with AI systems.
  • Design for iteration. The research showed that users improve by testing and revising their instructions, so interfaces that encourage this iterative process and clearly display results help drive better outcomes (a minimal sketch of such a loop follows this list).
  • Be cautious with automation. Automated features like prompt rewriting can hinder performance if they obscure or override what the user is trying to achieve.
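To make the iteration point concrete, here is one minimal loop under the same placeholder assumptions as the earlier sketch: show the result, let the user revise, and keep the best attempt. It is illustrative only, not an interface from the study.

```python
from typing import Callable, Tuple


def iterate_prompt(first_prompt: str,
                   generate: Callable[[str], object],    # placeholder image-model call
                   score: Callable[[object], float],     # similarity to the reference image
                   revise: Callable[[str, float], str],  # user edits the prompt after seeing the result
                   rounds: int = 5) -> Tuple[str, float]:
    """Keep the best-scoring prompt across a handful of visible revision rounds."""
    prompt, best_prompt, best_score = first_prompt, first_prompt, 0.0
    for _ in range(rounds):
        result_score = score(generate(prompt))
        if result_score > best_score:
            best_prompt, best_score = prompt, result_score
        prompt = revise(prompt, result_score)  # the user adjusts wording based on what they saw
    return best_prompt, best_score
```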

Tags: AI
