Dataconomy

Your AI is only as smart as the way you use it

A new MIT Sloan study finds that only half of generative AI gains come from better models; the other half comes from users adapting how they prompt, with automation sometimes harming results.

by Kerem Gülen
August 5, 2025
in Research

Businesses invest heavily in better and more powerful generative AI systems, on the common assumption that superior models will automatically lead to superior results. However, new research from affiliates of the MIT Sloan School of Management suggests that model advances are only half of the equation. In a large-scale experiment, researchers found that the other half of performance gains comes directly from how users adapt their prompts to take advantage of a new system.

How user adaptation drives half of performance gains

To understand the interplay between model quality and user skill, the researchers conducted an experiment with nearly 1,900 participants using OpenAI’s DALL-E image generation system. Participants were randomly assigned to one of three groups: one using DALL-E 2, a second using the more advanced DALL-E 3, and a third using DALL-E 3 with their prompts secretly rewritten by the GPT-4 language model. Each person was shown a reference image and given 25 minutes to re-create it by writing descriptive prompts, with a financial bonus offered to the top 20% of performers to motivate improvement.

The study revealed that while the DALL-E 3 group produced images that were significantly more similar to the target image than the DALL-E 2 group, this was not just because the model was better. The researchers found that roughly half of this improvement was attributable to the model upgrade itself, while the other half came from users changing their behavior. Users working with the more advanced DALL-E 3 wrote prompts that were 24% longer and contained more descriptive words than those using DALL-E 2. This demonstrates that users intuitively learn to provide better instructions to a more capable system.
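The kind of adaptation the researchers measured can be illustrated with a comparison of average prompt lengths between groups. The sketch below uses invented example prompts, not the study's data; only the 24% figure comes from the article:

```python
def avg_words(prompts):
    """Mean word count across a list of prompts."""
    return sum(len(p.split()) for p in prompts) / len(prompts)

# Hypothetical prompts for illustration. The study reported that
# DALL-E 3 users wrote prompts 24% longer than DALL-E 2 users,
# measured over nearly 1,900 participants.
dalle2_prompts = [
    "a red bird on a branch",
    "sunset over mountains",
]
dalle3_prompts = [
    "a small red cardinal perched on a bare oak branch in winter light",
    "a golden sunset over snow-capped mountains, wide angle, soft haze",
]

increase = avg_words(dalle3_prompts) / avg_words(dalle2_prompts) - 1
print(f"Relative increase in prompt length: {increase:.0%}")
```

With real data this simple length metric is only a proxy; the researchers also counted descriptive words, which captures *what* users added, not just how much.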

Crucially, the ability to write effective prompts was not limited to technical users. The study’s participants came from a wide range of jobs, education levels, and age groups, yet even those without technical backgrounds were able to improve their prompting and harness the new model’s capabilities. The findings suggest that effective prompting is more about clear communication in natural language than it is about coding. The research also found that these AI advances can help reduce inequality in output, as users who started at lower performance levels benefited the most from the improved model.

The surprising failure of automated assistance

One of the most counterintuitive findings came from the group whose prompts were automatically rewritten by GPT-4. This feature, designed to help users, actually backfired and degraded performance in the image-matching task by 58% compared to the baseline DALL-E 3 group. The researchers discovered that the automated rewrites often added unnecessary details or misinterpreted the user’s original intent, causing the AI to generate the wrong kind of image. This highlights how hidden, hard-coded instructions in an AI tool can conflict with a user’s goals and break down the collaborative process.

Based on these findings, the researchers concluded that for businesses to unlock the full value of generative AI, they must look beyond just acquiring the latest technology. The study offers several priorities for leaders aiming to make these systems more effective in real-world settings.

  • Invest in training and experimentation. Technical upgrades alone are not enough to realize full performance gains. Organizations must give employees the time and support to learn and refine how they interact with AI systems.
  • Design for iteration. The research showed that users improve by testing and revising their instructions. Therefore, interfaces that encourage this iterative process and clearly display results help drive better outcomes.
  • Be cautious with automation. Automated features like prompt rewriting can hinder performance if they obscure or override what the user is trying to achieve.
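The test-and-revise loop behind the second recommendation can be sketched in a few lines. Here `similarity_to_target` is a hypothetical stand-in (simple keyword overlap) for the image-similarity score the researchers computed; the function names and data are illustrative, not from the study:

```python
def similarity_to_target(prompt: str, target_keywords: set) -> float:
    """Hypothetical stand-in for an image-similarity metric:
    the fraction of target keywords the prompt mentions."""
    words = set(prompt.lower().split())
    return len(words & target_keywords) / len(target_keywords)

def best_revision(attempts: list, target_keywords: set):
    """Score each revision and keep the best, as a user refining
    prompts against a visible result would."""
    best_prompt, best_score = None, -1.0
    for prompt in attempts:
        score = similarity_to_target(prompt, target_keywords)
        if score > best_score:
            best_prompt, best_score = prompt, score
    return best_prompt, best_score

target = {"red", "bird", "branch", "winter"}
revisions = [
    "a bird",
    "a red bird on a branch",
    "a red bird on a branch in winter",
]
print(best_revision(revisions, target))  # the third, fullest revision wins
```

An interface that surfaces the score (or simply the generated image) after each attempt is what makes this loop possible; the study's automated-rewrite condition failed partly because it hid that feedback relationship from the user.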

