Researchers from the University of Maryland, U.S., and NICTA, Australia’s Information and Communications Technology Research Centre of Excellence, have published a paper that shows how deep learning can drive a newly devised system ‘that learns manipulation action plans by processing unconstrained videos from the World Wide Web.’

“In order to advance action generation and creation in robots beyond simple learned schemas we need computational tools that allow us to automatically interpret and represent human actions. Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by ‘watching’ unconstrained videos with high accuracy,” the abstract of the paper explains.

VentureBeat reports that the researchers selected data from 88 YouTube videos of people cooking to generate commands for a robot to carry out.

The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation.
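The two-level design described above can be sketched in code. This is only an illustrative outline, not the authors’ implementation: the two CNN recognition modules and the probabilistic grammar parser are replaced with hypothetical stubs that read pre-annotated frame data, just to show how per-frame visual predictions could be combined into an executable command.

```python
# Hypothetical sketch of the paper's two-level pipeline (not the
# authors' code): stubs stand in for the trained CNNs and the
# grammar-based parsing module.

def classify_grasp(frame):
    # Lower level, module 1: grasp-type CNN (stub).
    return frame["grasp"]

def recognize_objects(frame):
    # Lower level, module 2: object-recognition CNN (stub).
    return frame["tool"], frame["target"]

def parse_sentence(grasp, tool, action, target):
    # Higher level: grammar-based parsing module (stub) that turns
    # the visual predictions into a "visual sentence" the robot
    # could execute.
    return f"{grasp}-grasp({tool}); {action}({target})"

# Hypothetical annotations for one cooking-video segment.
frame = {"grasp": "power", "tool": "knife",
         "target": "tomato", "action": "cut"}

tool, target = recognize_objects(frame)
command = parse_sentence(classify_grasp(frame), tool,
                         frame["action"], target)
print(command)  # power-grasp(knife); cut(tomato)
```

In the actual system, the lower-level modules operate on raw video frames and the higher level is probabilistic rather than rule-based, but the data flow, visual recognition feeding a grammar that emits action commands, is the same.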

Companies such as Facebook, Google, Microsoft, Baidu, IBM, NEC and AT&T have used CNNs in products and services for image and video processing, document recognition, and related tasks.



(Image credit: Erik Charlton)

 
