Dataconomy

The Grammar of Movement: MIT Presents Video Recognition Programme

by Eileen McNulty
May 16, 2014
in Articles, Artificial Intelligence, News

Hamed Pirsiavash, a postdoctoral scholar from MIT, is developing a new activity-recognition algorithm to identify what’s happening in video files. Pirsiavash and his former thesis advisor, Deva Ramanan of the University of California at Irvine, will present the video recognition programme at the Conference on Computer Vision and Pattern Recognition in June. In a similar fashion to the AlchemyVision Image-Processing software, Pirsiavash’s programme also draws on natural language processing techniques, in that it analyses small parts of a sequence to uncover what is happening in the larger context.

“One of the challenging problems they [NLP researchers] try to solve is, if you have a sentence, you want to basically parse the sentence, saying what is the subject, what is the verb, what is the adverb,” Pirsiavash says. “We see an analogy here, which is, if you have a complex action — like making tea or making coffee — that has some subactions, we can basically stitch together these subactions and look at each one as something like verb, adjective, and adverb.”
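The analogy can be made concrete with a toy sketch (purely illustrative, not the authors' model): treat an action as a "sentence" whose "words" are subactions, and try to parse a stream of per-frame labels against that grammar.

```python
# Toy illustration of parsing an action the way NLP parses a sentence.
# This is a hypothetical sketch, not the MIT algorithm itself.

def parse_action(frame_labels, grammar):
    """Greedily segment frame_labels into the ordered subactions in `grammar`.
    Returns a list of (subaction, length) segments, or None if the
    sequence cannot be parsed as that action."""
    segments = []
    i = 0
    for subaction in grammar:
        start = i
        while i < len(frame_labels) and frame_labels[i] == subaction:
            i += 1
        if i == start:  # expected subaction never appeared here
            return None
        segments.append((subaction, i - start))
    return segments if i == len(frame_labels) else None

# "Making tea" as an ordered grammar of subactions
tea_grammar = ["boil_water", "add_teabag", "pour", "stir"]
frames = ["boil_water"] * 3 + ["add_teabag"] + ["pour"] * 2 + ["stir"]
print(parse_action(frames, tea_grammar))
# [('boil_water', 3), ('add_teabag', 1), ('pour', 2), ('stir', 1)]
```

A real system would of course work from pixel features rather than clean labels, but the parse structure, subactions playing the role of words, is the same idea.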

For each new action, Pirsiavash and Ramanan’s algorithm must learn a new ‘grammar’: the set of subactions that make up the whole. The algorithm is not wholly unsupervised; the researchers feed it a set of videos depicting the same action and specify how many subactions it should identify, but not what those subactions are.
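That weak-supervision setup, knowing only the number of subactions, can be sketched with a simple least-squares dynamic program (a hypothetical stand-in for the authors' learned models): split a per-frame feature sequence into K contiguous segments that each fit a single mean well.

```python
# Hypothetical sketch: given only K (the number of subactions), split a
# 1-D per-frame feature sequence into K contiguous segments minimising
# within-segment squared error. Not the authors' actual training method.

def segment(features, k):
    n = len(features)
    # Prefix sums make each segment's squared-error cost O(1) to evaluate.
    prefix, prefix_sq = [0.0], [0.0]
    for x in features:
        prefix.append(prefix[-1] + x)
        prefix_sq.append(prefix_sq[-1] + x * x)

    def cost(i, j):  # squared error of fitting one mean to features[i:j]
        s = prefix[j] - prefix[i]
        sq = prefix_sq[j] - prefix_sq[i]
        return sq - s * s / (j - i)

    INF = float("inf")
    # dp[m][j]: best cost splitting the first j frames into m segments
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = dp[m - 1][i] + cost(i, j)
                if c < dp[m][j]:
                    dp[m][j], back[m][j] = c, i
    # Recover the end index of each segment.
    bounds, j = [], n
    for m in range(k, 0, -1):
        bounds.append(j)
        j = back[m][j]
    return sorted(bounds)

print(segment([0, 0, 0, 5, 5, 9, 9, 9], 3))  # [3, 5, 8]
```

Given videos of the same action, repeatedly segmenting them this way would let consistent subaction boundaries emerge without ever naming the subactions.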


Although several companies are working on video-processing programmes (including Dropcam, which is particularly interested in distinguishing normal from anomalous actions), Pirsiavash and Ramanan’s algorithm has several advantages. First, processing time scales linearly: a video 10 times as long takes 10 times as long to process (rather than 1,000 times longer, as was the case with previous algorithms). Second, because it understands subactions, it can identify partially completed actions and doesn’t have to wait until the end of a clip to deliver results. Third, the amount of memory the algorithm requires is fixed; it needs no more space to process lengthier clips.
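All three properties, linear time, fixed memory, and partial recognition, are characteristic of a streaming dynamic program. A hypothetical sketch (not the published algorithm): keep one score per subaction, update the whole table once per frame, and report the most plausible current subaction at any point.

```python
# Hypothetical streaming sketch (not the published algorithm): one score
# per subaction, updated per frame. Memory is O(#subactions), independent
# of video length, and total work is linear in the number of frames.

def make_recognizer(num_subactions, frame_score):
    NEG = float("-inf")
    # best[s]: best score of any labelling currently in subaction s
    best = [0.0] + [NEG] * (num_subactions - 1)

    def step(frame):
        nonlocal best
        new = [NEG] * num_subactions
        for s in range(num_subactions):
            # Either stay in subaction s, or advance from subaction s-1.
            prev = best[s] if s == 0 else max(best[s], best[s - 1])
            if prev > NEG:
                new[s] = prev + frame_score(frame, s)
        best = new
        # Partial recognition: most plausible subaction after this frame.
        return best.index(max(best))

    return step

# Toy scores: 0 if the frame label matches the expected subaction, else -1.
labels = ["boil_water", "add_teabag", "pour"]
score = lambda f, s: 0.0 if f == labels[s] else -1.0
step = make_recognizer(3, score)
print([step(f) for f in ["boil_water", "boil_water", "add_teabag", "pour"]])
# [0, 0, 1, 2]
```

Because each frame only touches the fixed-size score table, a clip ten times longer costs exactly ten times the work, and the recognizer can report how far through the action it thinks it is after every frame.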

Looking forward, Pirsiavash is particularly excited about possible medical applications of the programme. For instance, the researchers might be able to teach it the grammar of properly and improperly executed physical therapy exercises, or to detect whether a patient has remembered or forgotten to take their medicine.

(Image source: MIT website)


Tags: Machine Learning, MIT, Natural Language Processing, Surveillance, Visual Computing
