This AI explains your genes the way a doctor would

Unlike typical black-box DNA models, BIOREASON offers transparent explanations researchers can evaluate.

by Kerem Gülen
June 10, 2025
in Research

For years, artificial intelligence has been a powerful tool in genomics, capable of sifting through mountains of DNA data at incredible speeds. These “DNA foundation models” are fantastic at recognizing patterns, but they have a major limitation: they operate as “black boxes.” They can often predict what might happen—like whether a genetic variant is harmful—but they can’t explain why. This leaves scientists with answers but no understanding of the underlying biological story.

On the other hand, large language models (LLMs), the technology behind tools like ChatGPT, have become masters of reasoning and explanation. They can write essays, solve logic puzzles, and explain complex topics. However, they can’t natively read the intricate language of a DNA sequence.

This is the gap a new paper from researchers at the University of Toronto, the Vector Institute, and other leading institutions aims to bridge. They’ve developed a pioneering new architecture called BIOREASON, the first model to deeply integrate a DNA foundation model with an LLM.

Think of it as creating a new kind of AI expert: one that is not only fluent in the A’s, C’s, G’s, and T’s of our genetic code but can also reason about what it’s reading and explain its conclusions step-by-step, just like a human biologist.

From “black box” to clear explanations

“Unlocking deep, interpretable biological reasoning from complex genomic data is a major AI challenge hindering scientific discovery,” state the authors, led by Adibvafa Fallahpour, Andrew Magnuson, and Purav Gupta. Current DNA models can’t provide the “mechanistic insights and falsifiable hypotheses” that are the cornerstone of scientific progress.

BIOREASON changes the game. It doesn’t just treat DNA as a long string of text. Instead, it uses a specialized DNA model to first translate the raw genetic sequence into a rich, meaningful representation. This “embedding” is then fed directly into the reasoning engine of an LLM.

The result is a hybrid AI that can:

  1. Directly process raw DNA sequences.
  2. Connect genomic information to a vast database of biological knowledge.
  3. Perform multi-step logical reasoning.
  4. Generate clear, step-by-step explanations for its predictions.
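To make that data flow concrete, here is a minimal, purely illustrative Python sketch of the pipeline described above: a DNA foundation model turns the raw sequence into embeddings, which are handed to an LLM-style component that produces a step-by-step answer. The class names, methods, and placeholder outputs are assumptions for illustration only; they are not BIOREASON’s actual code or API.

```python
# Illustrative sketch of a DNA-encoder -> LLM reasoning pipeline.
# All names (GenomicQuery, DNAEncoder, ReasoningLLM) are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class GenomicQuery:
    dna_sequence: str   # raw sequence, e.g. "ACGTACGT"
    question: str       # natural-language question about the variant


class DNAEncoder:
    """Stand-in for a DNA foundation model that maps a raw sequence
    to a dense embedding (here, a trivial one-number-per-base placeholder)."""

    def encode(self, sequence: str) -> List[List[float]]:
        # A real model would tokenize into nucleotides or k-mers and
        # return learned embedding vectors for each token.
        return [[float(ord(base))] for base in sequence]


class ReasoningLLM:
    """Stand-in for an LLM whose input accepts projected DNA embeddings
    alongside text, and which emits a step-by-step rationale."""

    def generate(self, dna_embeddings: List[List[float]], question: str) -> str:
        # Placeholder reasoning trace mimicking the step-by-step output style.
        steps = [
            "Step 1: locate the variant within the encoded sequence.",
            "Step 2: relate the affected gene to known biological pathways.",
            "Step 3: state the predicted outcome with a rationale.",
        ]
        return "\n".join(steps)


def explain_variant(query: GenomicQuery) -> str:
    """End-to-end flow: DNA model -> embeddings -> LLM reasoning."""
    embeddings = DNAEncoder().encode(query.dna_sequence)
    return ReasoningLLM().generate(embeddings, query.question)


if __name__ == "__main__":
    print(explain_variant(GenomicQuery("ACGTACGT", "Is this variant pathogenic?")))
```

The point of the sketch is the hand-off: the sequence never reaches the language model as plain text, but as embeddings produced by a model trained on genomic data, which the reasoning component then explains in ordinary language.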

A leap in performance and understanding

The team tested BIOREASON on several complex biological tasks, and the results are striking. On a key benchmark for predicting disease pathways from genetic variants, BIOREASON’s accuracy jumped from 88% to an incredible 97%. Across the board, the model demonstrated an average 15% performance gain over previous “single-modality” models.

But the most exciting part isn’t just the accuracy; it’s the how.

In one case study, the researchers asked BIOREASON about a specific genetic mutation and its effect. The model didn’t just spit out a one-word answer. Instead, it correctly predicted the disease—Amyotrophic Lateral Sclerosis (ALS)—and then articulated a plausible, 10-step biological rationale. It identified the specific gene, explained how the mutation disrupted a key cellular process (actin dynamics), and traced the downstream consequences to the motor neuron degeneration that characterizes ALS.

This is the “interpretable reasoning trace” that makes BIOREASON so powerful. It moves beyond a simple prediction to offer a testable hypothesis that researchers can take back to the lab.

The paper’s authors are clear that this is just the beginning. While there are limitations to address—such as biases in the training data and the computational cost—the potential is immense.

“BIOREASON offers a robust tool for gaining deeper, mechanistic insights from genomic data, aiding in understanding complex disease pathways and the formulation of novel research questions,” the researchers conclude.

