Dataconomy

Your AI is only as smart as the way you use it

A new MIT Sloan study finds that only half of generative AI gains come from better models; the other half comes from users adapting how they prompt, with automation sometimes harming results.

by Kerem Gülen
August 5, 2025
in Research

Businesses invest heavily in better and more powerful generative AI systems, on the common assumption that superior models will automatically lead to superior results. However, new research from affiliates of the MIT Sloan School of Management suggests that model advances are only half of the equation. In a large-scale experiment, the researchers found that the other half of performance gains comes directly from how users adapt their prompts to take advantage of a new system.

How user adaptation drives half of performance gains

To understand the interplay between model quality and user skill, the researchers conducted an experiment with nearly 1,900 participants using OpenAI’s DALL-E image generation system. Participants were randomly assigned to one of three groups: one using DALL-E 2, a second using the more advanced DALL-E 3, and a third using DALL-E 3 with their prompts secretly rewritten by the GPT-4 language model. Each person was shown a reference image and given 25 minutes to re-create it by writing descriptive prompts, with a financial bonus offered to the top 20% of performers to motivate improvement.

The study revealed that while the DALL-E 3 group produced images significantly more similar to the target image than the DALL-E 2 group did, this was not just because the model was better. The researchers found that roughly half of this improvement was attributable to the model upgrade itself, while the other half came from users changing their behavior. Users working with the more advanced DALL-E 3 wrote prompts that were 24% longer and contained more descriptive words than those using DALL-E 2. This suggests that users intuitively learn to provide better instructions to a more capable system.
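The kind of behavioral shift the study measured can be illustrated with a simple prompt-statistics comparison. This is a hypothetical sketch, not the researchers' actual analysis code; the sample prompts and the naive word-list notion of "descriptive words" are invented for illustration.

```python
# Hypothetical sketch: comparing prompt statistics between two user groups,
# in the spirit of the finding that DALL-E 3 users wrote ~24% longer, more
# descriptive prompts. Sample prompts and the descriptive-word list are
# assumptions, not data from the study.

DESCRIPTIVE_WORDS = {"red", "glossy", "wooden", "small", "antique", "round"}

def prompt_stats(prompts):
    """Return (mean word count, mean descriptive-word count) per prompt."""
    lengths = [len(p.split()) for p in prompts]
    descriptive = [
        sum(w.lower().strip(".,") in DESCRIPTIVE_WORDS for w in p.split())
        for p in prompts
    ]
    n = len(prompts)
    return sum(lengths) / n, sum(descriptive) / n

# Invented example prompts for each condition.
dalle2_prompts = ["a table with a vase", "a cat on a chair"]
dalle3_prompts = [
    "a small antique wooden table with a glossy red vase",
    "a round wooden chair with a small cat curled on it",
]

len2, desc2 = prompt_stats(dalle2_prompts)
len3, desc3 = prompt_stats(dalle3_prompts)
print(f"DALL-E 2 group: {len2:.1f} words, {desc2:.1f} descriptive")
print(f"DALL-E 3 group: {len3:.1f} words, {desc3:.1f} descriptive")
```

On these toy inputs the second group's prompts are both longer and richer in descriptive words, mirroring the direction (though not the magnitude) of the study's result.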


Crucially, the ability to write effective prompts was not limited to technical users. The study’s participants came from a wide range of jobs, education levels, and age groups, yet even those without technical backgrounds were able to improve their prompting and harness the new model’s capabilities. The findings suggest that effective prompting is more about clear communication in natural language than it is about coding. The research also found that these AI advances can help reduce inequality in output, as users who started at lower performance levels benefited the most from the improved model.

The surprising failure of automated assistance

One of the most counterintuitive findings came from the group whose prompts were automatically rewritten by GPT-4. This feature, designed to help users, actually backfired and degraded performance in the image-matching task by 58% compared to the baseline DALL-E 3 group. The researchers discovered that the automated rewrites often added unnecessary details or misinterpreted the user’s original intent, causing the AI to generate the wrong kind of image. This highlights how hidden, hard-coded instructions in an AI tool can conflict with a user’s goals and break down the collaborative process.

Based on these findings, the researchers concluded that for businesses to unlock the full value of generative AI, they must look beyond just acquiring the latest technology. The study offers several priorities for leaders aiming to make these systems more effective in real-world settings.

  • Invest in training and experimentation. Technical upgrades alone are not enough to realize full performance gains. Organizations must give employees the time and support to learn and refine how they interact with AI systems.
  • Design for iteration. The research showed that users improve by testing and revising their instructions. Therefore, interfaces that encourage this iterative process and clearly display results help drive better outcomes.
  • Be cautious with automation. Automated features like prompt rewriting can hinder performance if they obscure or override what the user is trying to achieve.
