GEPA cuts AI training costs by up to fifteen times

Researchers from UC Berkeley, Stanford, and Databricks introduced GEPA, a new AI optimization method that learns from natural language feedback, achieving up to 19 percent higher scores with 35 times fewer rollouts.

By Aytun Çelebi
August 19, 2025
in Research

Researchers from the University of California, Berkeley, Stanford University, and Databricks have introduced a new method called GEPA that replaces traditional, trial-and-error learning with an AI’s own language understanding. According to a recent article summarizing the research, this approach is not only more accurate but also significantly more efficient, achieving superior results with up to 35 times fewer trial runs than established techniques.

The inefficiency of traditional reinforcement learning

Modern enterprise AI applications are often “compound AI systems,” which are complex workflows that connect multiple AI modules and external tools like databases or code interpreters. A popular way to optimize these systems is through reinforcement learning (RL), which treats the system as a black box. This method runs a task, receives a simple numerical score or “scalar reward” (e.g., 7/10), and uses this feedback to slowly adjust the model’s parameters. The primary drawback of this approach is its “sample inefficiency”.
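
For intuition, the loop below is a minimal sketch of black-box optimization from a scalar reward; run_system, score, and perturb are hypothetical callables, not the paper's training code. The point is that each rollout yields a single number and nothing else.

def optimize_black_box(run_system, score, perturb, params, n_rollouts=10_000):
    # Classic black-box loop: the only learning signal per rollout is one
    # number (e.g., 0.7), which says nothing about *why* a run failed.
    best = params
    best_score = score(run_system(best))
    for _ in range(n_rollouts):
        candidate = perturb(best)           # blind random tweak to parameters
        s = score(run_system(candidate))    # scalar reward, no explanation
        if s > best_score:
            best, best_score = candidate, s
    return best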

To learn effectively from these sparse numerical scores, RL methods often require tens of thousands, or even hundreds of thousands, of trial runs, known as “rollouts”. For any real-world application involving expensive tool calls or powerful proprietary models, this process is prohibitively slow and costly. As one of the paper’s co-authors, Lakshya A Agrawal, noted, this complexity makes RL impractical for many teams, who often resort to manual prompt engineering instead. GEPA was designed to address this challenge, particularly for teams that need to optimize systems built on top-tier models that cannot be easily fine-tuned.

How GEPA uses language to learn and evolve

The GEPA (Genetic-Pareto) framework tackles the inefficiency of RL by replacing sparse numerical rewards with rich, natural language feedback. It leverages the fact that the entire execution of an AI system, including its reasoning steps, tool calls, and error messages, can be turned into text that an AI model can read and analyze. The methodology is built on three core pillars, sketched in code after the list below.

  • Genetic prompt evolution: GEPA treats a collection of prompts like a gene pool. It iteratively “mutates” these prompts to create new, potentially better versions for the AI system to use.
  • Reflection with natural language feedback: This is the key innovation. After a few trial runs, GEPA provides an AI model with the full text of what the system tried to do and what went wrong. The model then “reflects” on this feedback to diagnose the problem in plain language and write an improved prompt. For example, instead of just seeing a low score, it might analyze a compiler error and conclude the prompt needs to specify a particular library version.
  • Pareto-based selection: To avoid getting stuck on a single, suboptimal solution, GEPA maintains a diverse roster of high-performing “specialist” prompts. By tracking which prompts work best on different examples, it explores a wider range of strategies and is more likely to find a solution that works well across many inputs.
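
As a rough illustration (not the authors' implementation), the sketch below combines the three pillars; run_with_trace and reflect_and_rewrite are hypothetical stand-ins for executing the system with full textual logging and for an LLM call that reads a trace and proposes a better prompt.

import random

def gepa_style_optimize(run_with_trace, reflect_and_rewrite, seed_prompt,
                        examples, budget=150):
    # Gene pool of candidate prompts, seeded with the initial prompt.
    pool = [seed_prompt]
    # Pareto bookkeeping: best (score, prompt) seen per training example.
    best_on = {}
    for _ in range(budget):
        parent = random.choice(pool)
        i = random.randrange(len(examples))
        # One rollout returns a score plus the full execution trace as text
        # (reasoning steps, tool calls, error messages).
        score, trace = run_with_trace(parent, examples[i])
        # Reflection: an LLM diagnoses the trace in plain language and
        # writes a mutated, hopefully better, prompt.
        child = reflect_and_rewrite(parent, trace)
        child_score, _ = run_with_trace(child, examples[i])
        # Pareto-based selection: keep any prompt that is the best seen
        # on at least one example, preserving diverse "specialists".
        if child_score >= best_on.get(i, (float("-inf"), ""))[0]:
            best_on[i] = (child_score, child)
            pool = [seed_prompt] + [p for _, p in best_on.values()]
    return pool

Note how the budget here is a few hundred rollouts rather than tens of thousands: because each rollout also yields a diagnostic trace, far fewer samples are needed to learn what to change.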

The researchers evaluated GEPA across four diverse tasks and found that it substantially outperformed the RL-based method GRPO. In testing, GEPA achieved up to a 19% higher score while using up to 35 times fewer rollouts. In one concrete example, GEPA optimized a question-answering system in approximately 3 hours at a cost of less than $20, whereas the RL-based approach took 24 hours and cost about $300, representing an 8x reduction in time and a 15x reduction in cost for better results.
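
The headline ratios follow directly from those reported figures:

rl_hours, gepa_hours = 24, 3    # wall-clock time reported for each method
rl_cost, gepa_cost = 300, 20    # approximate dollar cost reported
print(rl_hours / gepa_hours)    # 8.0  -> ~8x faster
print(rl_cost / gepa_cost)      # 15.0 -> ~15x cheaper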

Beyond raw performance, GEPA-optimized systems were found to be more reliable on new, unseen data, which the researchers attribute to the richer, language-based feedback. The prompts produced by GEPA were also up to 9.2 times shorter than those from other optimizers, which reduces latency and cost in production. The researchers also noted that GEPA can be used as an “inference-time” problem solver, automatically generating and refining solutions within a continuous integration pipeline. In one experiment, this approach lifted code generation to expert-level performance on 20% of tasks, whereas a standard single-shot attempt from GPT-4o reached that level on none of them.
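
A plausible shape for that inference-time use, with generate and run_tests as hypothetical hooks into a CI pipeline, might look like this sketch: the failing-test output becomes the natural language feedback for the next attempt.

def refine_until_pass(generate, run_tests, task, max_iters=5):
    # Inference-time refinement: reuse the idea of language feedback,
    # here in the form of compiler and test output, to improve a solution.
    solution = generate(task, feedback=None)
    for _ in range(max_iters):
        ok, report = run_tests(solution)   # report = error text from CI
        if ok:
            break
        solution = generate(task, feedback=report)
    return solution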

Tags: AI, GEPA, training
