Dataconomy

This AI explains your genes the way a doctor would

Unlike typical black-box DNA models, BIOREASON offers transparent explanations researchers can evaluate.

by Kerem Gülen
June 10, 2025
in Research

For years, artificial intelligence has been a powerful tool in genomics, capable of sifting through mountains of DNA data at incredible speeds. These “DNA foundation models” are fantastic at recognizing patterns, but they have a major limitation: they operate as “black boxes.” They can often predict what might happen—like whether a genetic variant is harmful—but they can’t explain why. This leaves scientists with answers but no understanding of the underlying biological story.

On the other hand, large language models (LLMs), the technology behind tools like ChatGPT, have become masters of reasoning and explanation. They can write essays, solve logic puzzles, and explain complex topics. However, they can’t natively read the intricate language of a DNA sequence.

This is the gap a new paper from researchers at the University of Toronto, the Vector Institute, and other leading institutions aims to bridge. They’ve developed a pioneering new architecture called BIOREASON, the first model to deeply integrate a DNA foundation model with an LLM.


Think of it as creating a new kind of AI expert: one that is not only fluent in the A’s, C’s, G’s, and T’s of our genetic code but can also reason about what it’s reading and explain its conclusions step-by-step, just like a human biologist.

From “black box” to clear explanations

“Unlocking deep, interpretable biological reasoning from complex genomic data is a major AI challenge hindering scientific discovery,” state the authors, led by Adibvafa Fallahpour, Andrew Magnuson, and Purav Gupta. Current DNA models can’t provide the “mechanistic insights and falsifiable hypotheses” that are the cornerstone of scientific progress.

BIOREASON changes the game. It doesn’t just treat DNA as a long string of text. Instead, it uses a specialized DNA model to first translate the raw genetic sequence into a rich, meaningful representation. This “embedding” is then fed directly into the reasoning engine of an LLM.

The result is a hybrid AI that can:

  1. Directly process raw DNA sequences.
  2. Connect genomic information to a vast database of biological knowledge.
  3. Perform multi-step logical reasoning.
  4. Generate clear, step-by-step explanations for its predictions.
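The first step of this pipeline can be sketched in code. The snippet below is a toy illustration, not the paper's actual architecture: a stand-in DNA encoder turns a raw sequence into an embedding, which a projection layer maps into the LLM's hidden-dimension space so it can be consumed like a "soft token" alongside the text prompt. All dimensions, weights, and function names here are illustrative placeholders.

```python
# Toy sketch of a DNA-embedding-to-LLM pipeline (illustrative only;
# dimensions and weights are placeholders, not BIOREASON's real ones).
import numpy as np

DNA_DIM = 64      # hypothetical DNA-encoder output size
LLM_DIM = 128     # hypothetical LLM hidden size
BASES = "ACGT"

def encode_dna(seq: str) -> np.ndarray:
    """Stand-in DNA encoder: one-hot each base, apply a fixed
    random linear map, then mean-pool over the sequence."""
    rng = np.random.default_rng(0)          # fixed weights for the demo
    W = rng.standard_normal((4, DNA_DIM))
    onehot = np.array([[b == base for base in BASES] for b in seq], dtype=float)
    return (onehot @ W).mean(axis=0)        # shape: (DNA_DIM,)

def project_to_llm_space(dna_emb: np.ndarray) -> np.ndarray:
    """In the real model this projection is learned; here it is
    just a fixed random matrix to show the shape transformation."""
    rng = np.random.default_rng(1)
    P = rng.standard_normal((DNA_DIM, LLM_DIM))
    return dna_emb @ P                      # shape: (LLM_DIM,)

# The DNA sequence becomes a vector the LLM's attention layers
# can attend to together with the ordinary text tokens.
soft_token = project_to_llm_space(encode_dna("ACGTACGTTTGCA"))
print(soft_token.shape)   # (128,)
```

The key design point is that the LLM never sees raw A/C/G/T text; it receives a dense representation already shaped like its own token embeddings, which is what lets the reasoning engine operate on genomic content directly.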

A leap in performance and understanding

The team tested BIOREASON on several complex biological tasks, and the results are striking. On a key benchmark for predicting disease pathways from genetic variants, BIOREASON’s accuracy jumped from 88% to an incredible 97%. Across the board, the model demonstrated an average 15% performance gain over previous “single-modality” models.

But the most exciting part isn’t just the accuracy; it’s the how.

In one case study, the researchers asked BIOREASON about a specific genetic mutation and its effect. The model didn’t just spit out a one-word answer. Instead, it correctly predicted the disease—Amyotrophic Lateral Sclerosis (ALS)—and then articulated a plausible, 10-step biological rationale. It identified the specific gene, explained how the mutation disrupted a key cellular process (actin dynamics), and traced the downstream consequences to the motor neuron degeneration that characterizes ALS.

This is the “interpretable reasoning trace” that makes BIOREASON so powerful. It moves beyond simple prediction to offer a testable hypothesis that researchers can take back to the lab.

The paper’s authors are clear that this is just the beginning. While there are limitations to address—such as biases in the training data and the computational cost—the potential is immense.

“BIOREASON offers a robust tool for gaining deeper, mechanistic insights from genomic data, aiding in understanding complex disease pathways and the formulation of novel research questions,” the researchers conclude.


Tags: AI
