Dataconomy

The Visual Microphone: Recovering Sound from Vibration in Objects

by Eileen McNulty
August 5, 2014
in Artificial Intelligence, News


Researchers from MIT, Microsoft Research, and Adobe Research have been collaborating on an intriguing project: recovering sound from high-speed video of the minute vibrations of objects. The project is known as The Visual Microphone, and according to the project website, this is how it works:

When sound hits an object, it causes small vibrations of the object’s surface. We show how, using only high-speed video of the object, we can extract those minute vibrations and partially recover the sound that produced them, allowing us to turn everyday objects—a glass of water, a potted plant, a box of tissues, or a bag of chips—into visual microphones. We recover sounds from high-speed footage of a variety of objects with different properties, and use both real and simulated data to examine some of the factors that affect our ability to visually recover sound. We evaluate the quality of recovered sounds using intelligibility and SNR metrics and provide input and recovered audio samples for direct comparison. We also explore how to leverage the rolling shutter in regular consumer cameras to recover audio from standard frame-rate videos, and use the spatial resolution of our method to visualize how sound-related vibrations vary over an object’s surface, which we can use to recover the vibration modes of an object.
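The core idea can be illustrated with a toy simulation. The sketch below is not the paper's actual technique (which uses phase variations in a complex steerable pyramid); it simply shows, under simplified assumptions, how sub-pixel motion of an edge in high-speed footage carries a recoverable audio signal. The frame rate, tone frequency, and rendering of the edge are all illustrative choices, not values from the research:

```python
# Toy illustration of the visual-microphone idea (NOT the paper's
# phase-based steerable-pyramid method): sound displaces a surface
# by a sub-pixel amount each frame, and the mean pixel intensity of
# a patch containing an edge varies in step with that displacement.
import numpy as np

fps = 2200          # assumed high-speed camera frame rate
n_frames = 2200     # one second of "video"
tone_hz = 440       # the tone driving the object

t = np.arange(n_frames) / fps
displacement = 0.1 * np.sin(2 * np.pi * tone_hz * t)  # sub-pixel motion

# Render a blurry vertical edge whose position shifts with the sound.
x = np.arange(64)
frames = np.array([1.0 / (1.0 + np.exp(-(x - 32 - d) / 2.0))
                   for d in displacement])

# "Visual microphone": average intensity tracks the edge position.
signal = frames.mean(axis=1)
signal -= signal.mean()

# The dominant frequency of the recovered signal matches the input tone.
spectrum = np.abs(np.fft.rfft(signal))
recovered_hz = np.fft.rfftfreq(n_frames, 1.0 / fps)[spectrum.argmax()]
print(recovered_hz)
```

With one second of footage the FFT bins land on whole hertz, so the recovered peak sits exactly at the 440 Hz driving tone; on real footage the paper's phase-based analysis is needed because motions are far smaller and noisier than this idealized edge.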


You can see the (surprisingly accurate) results in the video above for yourself; the experiment using a normal DSLR is particularly extraordinary.

You can find more example experiments here, or read the research paper for yourself here.


(Video and featured image credit: The Visual Microphone)


