The Pande Lab at Stanford University, in collaboration with Google, released a paper earlier this week on how neural networks and deep learning could improve the accuracy of predicting which chemical compounds will be effective drug treatments for a variety of diseases.

A Google Research blog post explains that, in recent years, computational methods using deep neural networks have attempted to ‘replace or augment the high-throughput screening process.’

So far, virtual drug screening has relied on existing data about well-studied diseases, but the volume of experimental drug screening data across many diseases continues to grow.

The paper, titled “Massively Multitask Networks for Drug Discovery,” quantifies, among other things, how the amount and diversity of screening data from diseases with very different biological processes can be used to improve virtual drug screening predictions, the blog explains.

Working with a total of 37.8M data points across 259 distinct biological processes, and using a large-scale neural network training system to train at a scale 18x larger than previously attempted, the researchers were able to “probe the sensitivity of these models to a variety of changes in model structure and input data.”
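The multitask setup the paper describes, in which many prediction tasks share one learned representation, can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the layer sizes, weight names, and the use of a single shared hidden layer are all assumptions made for clarity, with only the task count (259) taken from the article.

```python
import numpy as np

# Minimal sketch of a massively multitask network: a shared hidden
# layer feeds one small output head per task, where a "task" stands
# in for one of the 259 biological assays mentioned above.
# All sizes and variable names here are illustrative.

rng = np.random.default_rng(0)

n_features = 1024   # e.g. the length of a molecular fingerprint
n_hidden = 128      # invented hidden-layer width
n_tasks = 259       # the paper trains across 259 distinct processes

# Shared representation weights, learned jointly across all tasks.
W_shared = rng.normal(scale=0.01, size=(n_features, n_hidden))

# One binary (active/inactive) output head per task.
W_heads = rng.normal(scale=0.01, size=(n_tasks, n_hidden))

def forward(x):
    """Return one activity score per task for a single compound x."""
    h = np.maximum(0.0, x @ W_shared)        # shared ReLU layer
    logits = W_heads @ h                     # per-task logits
    return 1.0 / (1.0 + np.exp(-logits))     # per-task probabilities

x = rng.random(n_features)                   # a fake fingerprint vector
scores = forward(x)
print(scores.shape)                          # one prediction per task
```

The point of the shared layer is that screening data from one disease can improve the representation used for every other task, which is why adding more tasks and more data helps overall accuracy.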

“In the paper, we examine not just the performance of the model but why it performs well and what we can expect for similar models in the future. The data in the paper represents more than 50M total CPU hours.”

The entire effort, although it claims no single milestone, is a step toward an accurate and time-saving method of drug discovery, something that has traditionally been all but impossible.


(Image credit: Erich Ferdinand, via Flickr)

