Google and Stanford have combined the best neural network models from two independent research efforts to create systems that can accurately describe images.
Automatically describing the content of an image is a fundamental problem in AI that connects computer vision and natural language processing. In a recent paper, Google presented a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation, and that can be used to generate natural sentences describing an image.
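To make that encoder-decoder idea concrete, here is a minimal sketch of a captioner that feeds a CNN image feature into a recurrent decoder, which is trained to emit a sentence word by word. It assumes a PyTorch environment; the class names, layer sizes, and toy inputs are illustrative assumptions, not Google's actual implementation.

```python
import torch
import torch.nn as nn

# Illustrative CNN-encoder / RNN-decoder captioner, in the spirit of
# the architecture described above. All names and sizes are assumptions.

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, feat_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Project a precomputed CNN image feature into the LSTM's input space.
        self.img_proj = nn.Linear(feat_dim, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feats, captions):
        # Feed the image as the first "token", then the caption words;
        # training teaches the model to predict each next word.
        img = self.img_proj(img_feats).unsqueeze(1)   # (B, 1, E)
        words = self.embed(captions)                  # (B, T, E)
        seq = torch.cat([img, words], dim=1)          # (B, T+1, E)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                       # next-word logits

# Toy usage: a batch of 2 image features and 5-word captions.
model = CaptionModel(vocab_size=1000)
feats = torch.randn(2, 2048)             # e.g. from a pretrained CNN
caps = torch.randint(0, 1000, (2, 5))
logits = model(feats, caps)              # shape: (2, 6, 1000)
```

At inference time, the same decoder would be run one step at a time, feeding each predicted word back in until an end-of-sentence token is produced.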
Researchers at Stanford write in their abstract: “We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding.”
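The “structured objective” in that abstract can be sketched simply: embed image regions and sentence words in a shared space, score an image-sentence pair by letting each word pick its best-matching region, and train with a margin ranking loss so that true pairs outscore mismatched ones. The sketch below assumes PyTorch; the dimensions and hinge-loss details are assumptions for illustration, not the paper's exact formulation.

```python
import torch

# Sketch of the multimodal alignment idea: score a pairing by how well
# each word matches its best image region in a shared embedding space.

def pair_score(regions, words):
    # regions: (R, D) region embeddings; words: (T, D) word embeddings.
    sims = words @ regions.T                 # (T, R) word-region similarities
    return sims.max(dim=1).values.sum()      # each word picks its best region

def ranking_loss(region_sets, word_sets, margin=1.0):
    # Structured objective: a true image-sentence pair should outscore
    # mismatched pairs by at least `margin`.
    n = len(region_sets)
    loss = 0.0
    for i in range(n):
        true = pair_score(region_sets[i], word_sets[i])
        for j in range(n):
            if j != i:
                wrong_sent = pair_score(region_sets[i], word_sets[j])
                wrong_img = pair_score(region_sets[j], word_sets[i])
                loss = loss + torch.clamp(margin - true + wrong_sent, min=0)
                loss = loss + torch.clamp(margin - true + wrong_img, min=0)
    return loss / n

# Toy usage: 3 images with 10 regions each, 6-word sentences, dim 128.
regions = [torch.randn(10, 128) for _ in range(3)]
words = [torch.randn(6, 128) for _ in range(3)]
print(ranking_loss(regions, words))
```

In the full system, the region embeddings would come from a CNN and the word embeddings from a bidirectional RNN, with both trained jointly under this objective.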
Distinguished Scholar Geoff Hinton was asked in a recent Reddit Ask Me Anything session about how deep learning models might account for the various elements and objects present in a single image. The closing lines of his response: “I guess we should just train [a recurrent neural network] to output a caption so that it can tell us what it thinks is there. Then maybe the philosophers and cognitive scientists will stop telling us what our nets cannot do.”
“I consider the pixel data in images and video to be the dark matter of the Internet,” said Fei-Fei Li, director of the Stanford Artificial Intelligence Laboratory, who led the research with Andrej Karpathy, a graduate student. “We are now starting to illuminate it.”
This parallel work by Stanford and Google could lead to more advanced object recognition systems with human-like understanding and prediction capabilities. It also holds promise for applications that can assess entire scenes and deliver more accurate image search results and content libraries.
The machine translation behind Skype Translate and Google’s word2vec library are among the other advances in language understanding driven by neural network models.
(Image Credit: Franco Folini)