The Visual Microphone: Recovering Sound from Vibration in Objects

by Eileen McNulty
August 5, 2014
in Artificial Intelligence, News


Researchers from MIT, Microsoft Research, and Adobe Research have been collaborating on an intriguing project: recovering sound from high-speed video of the minute vibrations that sound produces in everyday objects. The project is known as The Visual Microphone, and according to the project website, this is how it works:

When sound hits an object, it causes small vibrations of the object’s surface. We show how, using only high-speed video of the object, we can extract those minute vibrations and partially recover the sound that produced them, allowing us to turn everyday objects—a glass of water, a potted plant, a box of tissues, or a bag of chips—into visual microphones. We recover sounds from high-speed footage of a variety of objects with different properties, and use both real and simulated data to examine some of the factors that affect our ability to visually recover sound. We evaluate the quality of recovered sounds using intelligibility and SNR metrics and provide input and recovered audio samples for direct comparison. We also explore how to leverage the rolling shutter in regular consumer cameras to recover audio from standard frame-rate videos, and use the spatial resolution of our method to visualize how sound-related vibrations vary over an object’s surface, which we can use to recover the vibration modes of an object.
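To make the idea more concrete, here is a minimal, hypothetical sketch in Python (using OpenCV, NumPy, and SciPy). It is not the researchers' pipeline, which measures local phase variations in a complex steerable pyramid decomposition of every frame; it simply averages the brightness of a small patch in each frame, treats those per-frame values as audio samples at the camera's frame rate, and filters the result. The file name, region of interest, and frame rate below are illustrative assumptions.

# Toy sketch (not the authors' method): turn per-frame motion measurements
# from high-speed video into a crude audio signal.
import numpy as np
import cv2                                   # pip install opencv-python
from scipy.signal import butter, filtfilt
from scipy.io import wavfile

def recover_audio(video_path, roi=None, highpass_hz=20.0):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)          # frame rate = audio sample rate
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        if roi is not None:                   # (x, y, w, h) patch on the object
            x, y, w, h = roi
            gray = gray[y:y + h, x:x + w]
        samples.append(gray.mean())           # one "audio sample" per frame
    cap.release()

    signal = np.asarray(samples)
    signal -= signal.mean()                   # remove the DC offset
    # High-pass filter to suppress slow lighting drift and keep the audible band.
    b, a = butter(4, highpass_hz / (fps / 2.0), btype="high")
    signal = filtfilt(b, a, signal)
    signal /= (np.abs(signal).max() + 1e-12)  # normalize to [-1, 1]
    wavfile.write("recovered.wav", int(fps), signal.astype(np.float32))
    return signal

# Hypothetical usage: a 2,200 fps clip of a bag of chips yields audio sampled
# at 2,200 Hz, enough to capture low-frequency speech content.
# recover_audio("chips_2200fps.mp4", roi=(100, 100, 200, 200))

The camera's frame rate caps the recoverable audio bandwidth, which is why the project relies on high-speed footage. For ordinary cameras, the paper's rolling-shutter idea exploits the fact that each sensor row is exposed at a slightly different moment, so a standard frame-rate video effectively samples the object's vibration far faster than its nominal frames per second.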


You can see the (surprisingly accurate) results for yourself in the video above; the experiment using a normal DSLR is particularly extraordinary.

You can find more example experiments here, or read the research paper for yourself here.


(Video and featured image credit: The Visual Microphone)


Interested in more content like this? Sign up to our newsletter, and you won't miss a thing!

