Dataconomy

This AI explains your genes the way a doctor would

Unlike typical black-box DNA models, BIOREASON offers transparent explanations researchers can evaluate.

by Kerem Gülen
June 10, 2025
in Research

For years, artificial intelligence has been a powerful tool in genomics, capable of sifting through mountains of DNA data at incredible speeds. These “DNA foundation models” are fantastic at recognizing patterns, but they have a major limitation: they operate as “black boxes.” They can often predict what might happen—like whether a genetic variant is harmful—but they can’t explain why. This leaves scientists with answers but no understanding of the underlying biological story.

On the other hand, large language models (LLMs), the technology behind tools like ChatGPT, have become masters of reasoning and explanation. They can write essays, solve logic puzzles, and explain complex topics. However, they can’t natively read the intricate language of a DNA sequence.

This is the gap a new paper from researchers at the University of Toronto, the Vector Institute, and other leading institutions aims to bridge. They’ve developed a pioneering new architecture called BIOREASON, the first model to deeply integrate a DNA foundation model with an LLM.


Think of it as creating a new kind of AI expert: one that is not only fluent in the A’s, C’s, G’s, and T’s of our genetic code but can also reason about what it’s reading and explain its conclusions step-by-step, just like a human biologist.

From “black box” to clear explanations

“Unlocking deep, interpretable biological reasoning from complex genomic data is a major AI challenge hindering scientific discovery,” state the authors, led by Adibvafa Fallahpour, Andrew Magnuson, and Purav Gupta. Current DNA models can’t provide the “mechanistic insights and falsifiable hypotheses” that are the cornerstone of scientific progress.

BIOREASON changes the game. It doesn’t just treat DNA as a long string of text. Instead, it uses a specialized DNA model to first translate the raw genetic sequence into a rich, meaningful representation. This “embedding” is then fed directly into the reasoning engine of an LLM.

The result is a hybrid AI that can:

  1. Directly process raw DNA sequences.
  2. Connect genomic information to a vast database of biological knowledge.
  3. Perform multi-step logical reasoning.
  4. Generate clear, step-by-step explanations for its predictions.
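The pipeline described above can be sketched at a high level. The toy example below (all names and dimensions are illustrative, not the authors' code) shows the general pattern: a DNA encoder turns a raw sequence into dense embeddings, a learned projection maps those embeddings into the LLM's token-embedding space, and the LLM would then consume them alongside a text prompt to produce a reasoning trace.

```python
import numpy as np

# Toy illustration of a BIOREASON-style hybrid pipeline (hypothetical):
# DNA foundation model -> projection -> LLM embedding space.

VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3}
DNA_DIM = 8    # embedding size of the DNA model (illustrative)
LLM_DIM = 16   # embedding size of the LLM (illustrative)

rng = np.random.default_rng(0)
dna_embed_table = rng.normal(size=(4, DNA_DIM))   # stand-in for a trained DNA encoder
projection = rng.normal(size=(DNA_DIM, LLM_DIM))  # learned adapter in the real system

def encode_dna(sequence: str) -> np.ndarray:
    """Map each nucleotide to a dense vector (stand-in for the DNA model)."""
    idx = np.array([VOCAB[base] for base in sequence])
    return dna_embed_table[idx]            # shape: (len(sequence), DNA_DIM)

def project_to_llm_space(dna_embeddings: np.ndarray) -> np.ndarray:
    """Project DNA embeddings into the LLM's token-embedding space."""
    return dna_embeddings @ projection     # shape: (len(sequence), LLM_DIM)

seq = "ACGTAC"
dna_emb = encode_dna(seq)
llm_tokens = project_to_llm_space(dna_emb)
# In the real architecture, these projected vectors would be interleaved with
# text-prompt embeddings before being fed to the LLM, which then generates a
# step-by-step explanation rather than a bare label.
print(dna_emb.shape, llm_tokens.shape)   # (6, 8) (6, 16)
```

The key design point is the projection step: rather than forcing the LLM to read raw A/C/G/T text, the DNA model's learned representation is injected directly into the LLM's input space.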

A leap in performance and understanding

The team tested BIOREASON on several complex biological tasks, and the results are striking. On a key benchmark for predicting disease pathways from genetic variants, BIOREASON’s accuracy jumped from 88% to an incredible 97%. Across the board, the model demonstrated an average 15% performance gain over previous “single-modality” models.

But the most exciting part isn’t just the accuracy; it’s the how.

In one case study, the researchers asked BIOREASON about a specific genetic mutation and its effect. The model didn’t just spit out a one-word answer. Instead, it correctly predicted the disease—Amyotrophic Lateral Sclerosis (ALS)—and then articulated a plausible, 10-step biological rationale. It identified the specific gene, explained how the mutation disrupted a key cellular process (actin dynamics), and traced the downstream consequences to the motor neuron degeneration that characterizes ALS.

This is the “interpretable reasoning trace” that makes BIOREASON so powerful. It moves beyond a simple prediction to offer a testable hypothesis that researchers can take back to the lab.

The paper’s authors are clear that this is just the beginning. While there are limitations to address—such as biases in the training data and the computational cost—the potential is immense.

“BIOREASON offers a robust tool for gaining deeper, mechanistic insights from genomic data, aiding in understanding complex disease pathways and the formulation of novel research questions,” the researchers conclude.


