This AI explains your genes the way a doctor would

Unlike typical black-box DNA models, BIOREASON offers transparent explanations researchers can evaluate.

by Kerem Gülen
June 10, 2025
in Research

For years, artificial intelligence has been a powerful tool in genomics, capable of sifting through mountains of DNA data at incredible speeds. These “DNA foundation models” are fantastic at recognizing patterns, but they have a major limitation: they operate as “black boxes.” They can often predict what might happen—like whether a genetic variant is harmful—but they can’t explain why. This leaves scientists with answers but no understanding of the underlying biological story.

On the other hand, large language models (LLMs), the technology behind tools like ChatGPT, have become masters of reasoning and explanation. They can write essays, solve logic puzzles, and explain complex topics. However, they can't natively read the intricate language of a DNA sequence.

This is the gap a new paper from researchers at the University of Toronto, the Vector Institute, and other leading institutions aims to bridge. They’ve developed a pioneering new architecture called BIOREASON, the first model to deeply integrate a DNA foundation model with an LLM.

Think of it as creating a new kind of AI expert: one that is not only fluent in the A’s, C’s, G’s, and T’s of our genetic code but can also reason about what it’s reading and explain its conclusions step-by-step, just like a human biologist.

From “black box” to clear explanations

“Unlocking deep, interpretable biological reasoning from complex genomic data is a major AI challenge hindering scientific discovery,” state the authors, led by Adibvafa Fallahpour, Andrew Magnuson, and Purav Gupta. Current DNA models can’t provide the “mechanistic insights and falsifiable hypotheses” that are the cornerstone of scientific progress.

BIOREASON changes the game. It doesn’t just treat DNA as a long string of text. Instead, it uses a specialized DNA model to first translate the raw genetic sequence into a rich, meaningful representation. This “embedding” is then fed directly into the reasoning engine of an LLM.

The result is a hybrid AI that can:

  1. Directly process raw DNA sequences.
  2. Connect genomic information to a vast database of biological knowledge.
  3. Perform multi-step logical reasoning.
  4. Generate clear, step-by-step explanations for its predictions.
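The paper spells out the full architecture, but the core idea of feeding a DNA model's embeddings into an LLM's reasoning engine can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' code: the module names, dimensions, and the inputs_embeds interface are assumptions standing in for whichever DNA encoder and LLM BIOREASON actually combines.

```python
import torch
import torch.nn as nn

class DNAToLLMBridge(nn.Module):
    """Sketch: project DNA-encoder embeddings into an LLM's input space."""

    def __init__(self, dna_encoder: nn.Module, llm: nn.Module,
                 dna_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.dna_encoder = dna_encoder           # pretrained DNA foundation model (hypothetical)
        self.llm = llm                           # pretrained reasoning LLM (hypothetical)
        self.proj = nn.Linear(dna_dim, llm_dim)  # maps DNA embeddings into the LLM's token space

    def forward(self, dna_tokens: torch.Tensor, prompt_embeds: torch.Tensor):
        # 1. Encode the raw DNA sequence into per-base embeddings.
        dna_embeds = self.dna_encoder(dna_tokens)      # (batch, dna_len, dna_dim)
        # 2. Project them so they look like "tokens" the LLM can attend to.
        dna_as_tokens = self.proj(dna_embeds)          # (batch, dna_len, llm_dim)
        # 3. Prepend to the embedded text prompt and let the LLM reason over both,
        #    producing its step-by-step explanation as ordinary generated text.
        joint_input = torch.cat([dna_as_tokens, prompt_embeds], dim=1)
        return self.llm(inputs_embeds=joint_input)
```

The design choice that matters here is that the DNA sequence enters the LLM as projected embeddings rather than as plain text, so the language model's reasoning operates directly on the genomic representation instead of on a string of letters.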

A leap in performance and understanding

The team tested BIOREASON on several complex biological tasks, and the results are striking. On a key benchmark for predicting disease pathways from genetic variants, BIOREASON’s accuracy jumped from 88% to an incredible 97%. Across the board, the model demonstrated an average 15% performance gain over previous “single-modality” models.

But the most exciting part isn’t just the accuracy; it’s the how.

In one case study, the researchers asked BIOREASON about a specific genetic mutation and its effect. The model didn’t just spit out a one-word answer. Instead, it correctly predicted the disease—Amyotrophic Lateral Sclerosis (ALS)—and then articulated a plausible, 10-step biological rationale. It identified the specific gene, explained how the mutation disrupted a key cellular process (actin dynamics), and traced the downstream consequences to the motor neuron degeneration that characterizes ALS.

This is the "interpretable reasoning trace" that makes BIOREASON so powerful. It moves beyond a simple prediction to offer a testable hypothesis that researchers can take back to the lab.

The paper’s authors are clear that this is just the beginning. While there are limitations to address—such as biases in the training data and the computational cost—the potential is immense.

“BIOREASON offers a robust tool for gaining deeper, mechanistic insights from genomic data, aiding in understanding complex disease pathways and the formulation of novel research questions,” the researchers conclude.

