In an unprecedented move, Facebook has open-sourced some of its machine learning tools for Torch, the scientific computing framework.
The announcement came last Friday on the Facebook AI Research (FAIR) blog.
“Today, we’re open sourcing optimized deep-learning modules for Torch. These modules are significantly faster than the default ones in Torch and have accelerated our research projects by allowing us to train larger neural nets in less time,” wrote Soumith Chintala for Facebook.
The blog post outlines that the release includes a number of CUDA-based modules and containers:
- Containers that allow the user to parallelize the training on multiple GPUs.
- An optimized Lookup Table, often used when learning embeddings of discrete objects (e.g. words) and in neural language models (a minimal sketch of the idea follows this list).
- A Hierarchical SoftMax module to speed up training over an extremely large number of classes.
- Cross-map pooling, often used for certain types of visual and text models.
- A GPU implementation of 1-bit SGD based on the paper by Frank Seide, et al.
- A significantly faster Temporal Convolution layer.
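To give a sense of what the Lookup Table module computes, here is a minimal NumPy sketch of the underlying idea: each discrete object (e.g. a word) is an integer index, and its embedding is simply the corresponding row of a trainable weight matrix. The sizes and names below are hypothetical; this is an illustration of the concept, not Facebook's optimized CUDA module.

```python
# Illustrative sketch only: the idea behind an embedding lookup table.
# Sizes and names are hypothetical; this is not the optimized Torch module.
import numpy as np

vocab_size, embedding_dim = 10000, 128            # hypothetical sizes
table = np.random.randn(vocab_size, embedding_dim).astype(np.float32)

def lookup(word_indices):
    """Return the embedding vector for each word index (a row selection)."""
    return table[word_indices]

batch = np.array([12, 7, 4051])                   # three hypothetical word ids
vectors = lookup(batch)
print(vectors.shape)                              # -> (3, 128)
```

The released module performs this same row selection (and its gradient update) on the GPU, which is presumably where the speedup over the default Torch layer comes from.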
Speculating on the social networking giant’s long-term strategy behind the move, Serdar Yegulalp of InfoWorld suggests that the machine learning and AI advances that come out of the open sourcing may ultimately feed back into Facebook’s larger plans, “even while the algorithms and tools themselves are released as open source projects.”
(Image credit: Pixabay)