Dataconomy

Google and Stanford Collaborate to Build Neural Image Caption Generator

by admin
November 19, 2014
in Articles, News

Google and Stanford have combined the best of the neural network models from two independent research efforts to create systems that can accurately describe images.

Automatically describing the content of an image is a fundamental problem in AI that connects computer vision and natural language processing. In a recent paper, Google presented a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation to generate natural sentences describing an image.
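
To make that architecture concrete, here is a minimal sketch of the encoder-decoder idea, not Google's actual implementation: a convolutional network encodes the image into a feature vector, and a recurrent network "translates" that vector into a sentence one word at a time. The PyTorch backbone, vocabulary size, and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionGenerator(nn.Module):
    """Toy CNN-encoder / LSTM-decoder captioner (illustrative only)."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # CNN encoder: a standard backbone with its classifier head removed.
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.img_proj = nn.Linear(backbone.fc.in_features, embed_dim)
        # RNN decoder: embeds previous words and scores the next word.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)         # (B, 512) image features
        img_emb = self.img_proj(feats).unsqueeze(1)     # (B, 1, E)
        word_emb = self.embed(captions)                 # (B, T, E)
        # Feed the image first, then the caption prefix, into the LSTM.
        inputs = torch.cat([img_emb, word_emb], dim=1)  # (B, T+1, E)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                         # (B, T+1, vocab) word scores

# Toy forward pass with random data.
model = CaptionGenerator(vocab_size=1000)
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 1000, (2, 12))
print(model(images, captions).shape)  # torch.Size([2, 13, 1000])
```

During training, the word scores at each step would be matched against the ground-truth caption; at inference time, the decoder generates a sentence by picking (or beam-searching) one word at a time.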

The Stanford researchers write in their abstract: “We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding.”
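
The alignment piece of the Stanford approach can be sketched in a few lines as well: region features from a CNN and word representations (for example, from a bidirectional RNN) are projected into one shared embedding space, and an image-sentence pair is scored by how well each word matches some region. The dimensions and the scoring rule below are illustrative assumptions rather than the paper's exact structured objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_sentence_score(region_feats, word_feats, W_img, W_txt):
    """Score how well a sentence describes an image (toy version).

    region_feats: (R, Di) CNN features for R image regions
    word_feats:   (T, Dt) vectors for T words, e.g. from a bidirectional RNN
    W_img, W_txt: projections into a shared D-dimensional embedding space
    """
    regions = region_feats @ W_img      # (R, D)
    words = word_feats @ W_txt          # (T, D)
    sims = words @ regions.T            # (T, R) word-region similarities
    # Credit each word with its best-matching image region, then sum.
    return sims.max(axis=1).sum()

# Toy example: 5 regions, 8 words, shared 64-dimensional space.
W_img = 0.01 * rng.normal(size=(4096, 64))
W_txt = 0.01 * rng.normal(size=(300, 64))
regions = rng.normal(size=(5, 4096))
words = rng.normal(size=(8, 300))
print(image_sentence_score(regions, words, W_img, W_txt))
```

A training objective built around such a score would push matching image-sentence pairs above mismatched ones, which is what ties the visual and textual modalities together in a common embedding.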

Distinguished scholar Geoff Hinton was asked in a recent Reddit Ask Me Anything session how deep learning models might account for the various elements and objects present in a single image. The closing lines of his response: “I guess we should just train [a recurrent neural network] to output a caption so that it can tell us what it thinks is there. Then maybe the philosophers and cognitive scientists will stop telling us what our nets cannot do.”

“I consider the pixel data in images and video to be the dark matter of the Internet,” said Fei-Fei Li, director of the Stanford Artificial Intelligence Laboratory, who led the research with Andrej Karpathy, a graduate student. “We are now starting to illuminate it.”

This collaboration between Stanford and Google could lead to more advanced object recognition systems with human-like understanding and prediction capabilities. It also holds promise for applications that can interpret entire scenes, improving image search results and the organization of content libraries.

The machine translation that powers Skype Translate and Google’s word2vec library are among other advances in language understanding propelled by recurrent neural networks.

Read more here

(Image Credit: Franco Folini)

Tags: Fei-Fei Li, Skype Translate, Stanford, surveillance
