Google taught your voice assistant to understand what you mean

After years of “screen” vs. “scream” mix-ups, Google’s new Speech-to-Retrieval system cuts out the text transcription middleman to make voice search faster and more accurate.

by Kerem Gülen
October 14, 2025
in Research

Let’s be honest, we’ve all been there. You ask your phone for information on the famous painting “The Scream,” and it cheerfully offers you tutorials on screen painting. This kind of frustrating mix-up has been a stubborn bug in voice search for years. Now, in a recent post on the Google Research blog, scientists Ehsan Variani and Michael Riley have unveiled a new system called Speech-to-Retrieval (S2R) that gets to the heart of the problem.

The single most important finding is that by skipping the flawed step of turning speech into text, S2R provides faster, more accurate results. This matters because it marks a shift from simply hearing our words to actually understanding our intent, making voice assistants significantly less aggravating and a lot more useful.

https://storage.googleapis.com/gweb-research2023-media/media/SpeechToRetrieval2_Cascade.mp4

Video: Google


The problem with playing telephone

So, why do voice assistants get things so wrong? Traditionally, they use a two-step process called a cascade model. First, an Automatic Speech Recognition (ASR) system listens to your voice and transcribes it into text. Second, that text is fed into a standard search engine. The catch is that this process is like a game of telephone; if the ASR makes a tiny mistake at the beginning—mistaking an “m” for an “n”—that error gets passed down the line, and the final search result is completely wrong.
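
To make that failure mode concrete, here is a toy sketch of a cascade pipeline (the function names and the tiny index are invented for illustration; Google's actual systems are vastly more complex). The point is structural: the search step only ever sees the transcript, so an error introduced by the ASR stage is baked in before retrieval even begins.

```python
# Toy cascade model (hypothetical functions, not Google's implementation).
# The retrieval step sees only the transcript, so ASR errors propagate.

def asr_transcribe(audio: bytes) -> str:
    """Stand-in ASR: imagine it mishears a single phoneme."""
    return "screen painting"  # the user actually said "The Scream painting"

def text_search(query: str) -> list[str]:
    """Stand-in search engine keyed purely on the transcript text."""
    index = {
        "screen painting": ["How to paint a window screen"],
        "the scream painting": ["Edvard Munch's 'The Scream' (1893)"],
    }
    return index.get(query.lower(), [])

audio = b"..."                       # the spoken query
transcript = asr_transcribe(audio)   # error introduced here...
print(text_search(transcript))       # ...and faithfully passed downstream
```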

To figure out just how big this problem was, the Google team ran a clever experiment. They compared a typical ASR-powered search system with a “perfect” version that used flawless, human-verified text transcripts. They measured the quality of the results using a metric called Mean Reciprocal Rank (MRR), which is basically a score for how high up the correct answer appears in the search list. Unsurprisingly, they found a significant performance gap between the real-world system and the perfect one across numerous languages. This gap proved that the text-first approach was the main bottleneck, creating a clear opportunity for a smarter system.
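
For reference, MRR has a simple definition: for each query, take the reciprocal of the rank at which the correct answer first appears in the results (zero if it never appears), then average over all queries. A minimal sketch:

```python
def mean_reciprocal_rank(ranked_results: list[list[str]],
                         correct_answers: list[str]) -> float:
    """MRR: average of 1/rank of the first correct result per query.

    A query whose correct answer never appears contributes 0.
    """
    total = 0.0
    for results, answer in zip(ranked_results, correct_answers):
        for rank, doc in enumerate(results, start=1):
            if doc == answer:
                total += 1.0 / rank
                break
    return total / len(ranked_results)

# Correct answer ranked 1st, 3rd, and missing -> (1 + 1/3 + 0) / 3 ≈ 0.444
print(mean_reciprocal_rank(
    [["a", "b"], ["x", "y", "a"], ["p", "q"]],
    ["a", "a", "a"],
))
```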

From sound to meaning directly

Enter Speech-to-Retrieval, or S2R. Instead of translating your voice into text, S2R translates the sound itself directly into meaning. Okay, let’s pause. What does that really mean?

At its core, S2R uses a sophisticated setup called a dual-encoder architecture. Think of it like a universal matchmaking service for information.

  • One part, the audio encoder, listens to your spoken query and creates a rich numerical profile—a vector—that captures its essential meaning. This isn’t just about the words, but potentially the context and nuances in your voice.
  • In parallel, a document encoder has already created similar profiles for billions of web documents.

When you speak, the system doesn’t try to write down your words. Instead, it takes the “profile” of your voice query and instantly finds the document “profiles” that are the closest mathematical match. It’s a bit like a Shazam for search queries; it finds a match based on the underlying signature, not a clumsy transcription. This entire process bypasses the fragile text step, eliminating the chance for a “scream” versus “screen” type of error.
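
A rough sketch of that retrieval step, with random toy vectors standing in for the learned encoders (in reality these are large neural networks trained so that matching query and document vectors land close together in the shared space):

```python
import numpy as np

# Toy dual-encoder retrieval (illustrative only). Both queries and
# documents live in one vector space; retrieval is nearest-neighbor search.

rng = np.random.default_rng(0)

# Pretend the document encoder has pre-computed vectors for the corpus.
doc_vectors = rng.normal(size=(5, 8))          # 5 documents, 8-dim embeddings
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)

# Pretend the audio encoder maps the spoken query near document 3's vector.
query_vector = doc_vectors[3] + 0.05 * rng.normal(size=8)
query_vector /= np.linalg.norm(query_vector)

# Retrieval: cosine similarity (dot product of unit vectors), highest first.
scores = doc_vectors @ query_vector
ranking = np.argsort(-scores)
print("Best match: document", ranking[0])      # -> document 3
```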

So does it actually work in the real world?

Yes, and the results are impressive. When the researchers tested S2R on their dataset of voice questions, they found that it significantly outperforms the old cascade model. Even better, its performance gets remarkably close to the theoretical “perfect” system that used human transcribers. While there’s still a small gap to close, S2R has effectively solved the majority of the problem caused by transcription errors.

This isn’t just a lab experiment. Google has already rolled out S2R to power its voice search in multiple languages. The next time your voice assistant correctly understands a tricky query, you’re likely experiencing this new technology firsthand. To push the field forward, the team has also open-sourced their Simple Voice Questions (SVQ) dataset, inviting researchers everywhere to help build the next generation of voice interfaces. The upshot is a future where you can finally stop enunciating like a robot and just talk to your devices like a normal person.



Tags: Google, S2R
