Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have devised an algorithm that, given only an image or two of a location, can infer facts about it — such as which of two scenes has a higher crime rate, or which is closer to a McDonald’s restaurant.

MIT reports that although “humans are generally better at this specific task than the algorithm”, what remains intriguing is that “the computer consistently outperformed humans at a variation of the task in which users are shown two photos and asked which scene is closer to a McDonald’s.”

The team of researchers included PhD students Aditya Khosla, Byoungkwon An, and Joseph Lim, and CSAIL principal investigator Antonio Torralba. They essentially fed a computer a set of 8 million Google images from eight U.S. cities, tagged with GPS data that linked each image to local crime rates and McDonald’s locations.

Using deep-learning techniques, they enabled the computer to teach itself how different visual qualities of the photos correlate with attributes such as crime rates and proximity to McDonald’s.
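To make the idea concrete, here is a minimal sketch of the kind of learning problem described above — not the team’s actual pipeline. It assumes the task reduces to regressing from image features to a distance label and then comparing two scenes; the feature vectors below are synthetic stand-ins for features a deep network would extract, and the hidden linear rule generating the labels is an invented assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for learned image features: each row is one street
# scene. Assumption for this sketch (not the paper's data): a hidden linear
# rule maps features to log-distance-to-the-nearest-McDonald's, plus noise.
n_scenes, n_features = 1200, 16
X = rng.normal(size=(n_scenes, n_features))
true_w = rng.normal(size=n_features)
log_dist = X @ true_w + 0.1 * rng.normal(size=n_scenes)

# Train/test split.
X_train, X_test = X[:1000], X[1000:]
y_train, y_test = log_dist[:1000], log_dist[1000:]

# Fit a linear regressor from features to log-distance (least squares).
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def closer_scene(feat_a, feat_b):
    """Return 0 if scene A is predicted closer to a McDonald's, else 1."""
    return 0 if feat_a @ w < feat_b @ w else 1

# Score the pairwise "which scene is closer?" task on held-out pairs.
pairs = [(i, j) for i in range(100) for j in range(i + 1, 100)]
correct = sum(
    closer_scene(X_test[i], X_test[j]) == (0 if y_test[i] < y_test[j] else 1)
    for i, j in pairs
)
accuracy = correct / len(pairs)
print(f"pairwise accuracy on synthetic data: {accuracy:.3f}")
```

Framing the evaluation as a pairwise comparison mirrors the “which scene is closer to a McDonald’s” game from the study: the regressor never needs an exact distance, only a consistent ordering of the two scenes.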

“These sorts of algorithms have been applied to all sorts of content, like inferring the memorability of faces from headshots,” said Khosla. “But before this, there hadn’t really been research that’s taken such a large set of photos and used it to predict qualities of the specific locations the photos represent.”

A live demo posted online challenges the user to navigate to the nearest McDonald’s in the fewest possible steps using Google Street View.

Read more here.

(Image credit: MIT News)
