Microsoft has disclosed a privacy flaw, dubbed Whisper Leak, that lets anyone monitoring an internet connection infer the topics of AI chatbot conversations even though the traffic is encrypted.
The vulnerability exploits the pattern of data flow between users and AI services. While actual content remains encrypted, the rhythm and size of data packets provide enough information for an educated guess about conversation topics.
This process is analogous to discerning activity through a frosted window by observing movement patterns. Whisper Leak analyzes both the size and timing of encrypted data packets to infer discussion subjects.
According to Microsoft security researchers Jonathan Bar Or and Geoff McDonald and the Microsoft Defender Security Research Team, the vulnerability stems from the way AI chatbots stream responses word by word. This streaming feature, designed to make conversations feel natural, inadvertently creates a privacy risk.
The attack operates by examining data packet size and timing. Entities capable of monitoring internet traffic, including government agencies at the ISP level, local network hackers, or individuals on shared Wi-Fi networks, could use this technique without ever decrypting the conversation content.
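As an illustration, a passive observer's view of such a session reduces to a sequence of packet arrival times and sizes. The sketch below, which uses an invented capture trace rather than real traffic or tooling, shows the kind of size-and-timing features that could be derived without any decryption.

```python
# Illustrative only: derive size/timing features from an encrypted-traffic
# capture, represented here as (timestamp_seconds, payload_bytes) pairs.
# The trace values are invented for demonstration.

def extract_features(trace):
    """Turn a packet trace into (inter-arrival delay, size) pairs.

    The observer never sees plaintext -- only when packets arrive and
    how large they are, which is what Whisper Leak exploits.
    """
    features = []
    for (t_prev, _), (t_curr, size) in zip(trace, trace[1:]):
        features.append((t_curr - t_prev, size))
    return features

# A hypothetical streamed chatbot reply: many small packets, each roughly
# the size of one encrypted token plus fixed record overhead.
sample_trace = [
    (0.000, 512),   # outgoing request
    (0.410, 93),    # first streamed token of the reply
    (0.445, 88),
    (0.502, 101),
    (0.530, 95),
]

print(extract_features(sample_trace))
```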
To demonstrate the vulnerability, Microsoft researchers trained machine-learning classifiers to recognize the traffic patterns associated with particular topics. Tested against AI chatbots from Mistral, xAI, DeepSeek, and OpenAI, the classifiers identified specific conversation topics with over 98% accuracy.
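Microsoft has not published its training pipeline here, but the general approach can be sketched: collapse each conversation's packet trace into a fixed-length feature vector and fit an off-the-shelf classifier that flags traces resembling a target topic. The synthetic data and model choice below are placeholders, not the researchers' actual models or dataset.

```python
# Sketch of a traffic-pattern topic classifier (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def summarize(delays, sizes):
    """Collapse a variable-length trace into a fixed feature vector."""
    return [
        np.mean(sizes), np.std(sizes), np.sum(sizes), len(sizes),
        np.mean(delays), np.std(delays),
    ]

def fake_trace(is_target_topic):
    # Fabricated assumption: replies on the target topic tend to be longer
    # and burstier than background chatter. Purely for demonstration.
    n = rng.integers(40, 120) if is_target_topic else rng.integers(10, 60)
    sizes = rng.normal(110 if is_target_topic else 90, 15, n)
    delays = rng.exponential(0.05 if is_target_topic else 0.08, n)
    return summarize(delays, sizes)

X = np.array([fake_trace(t) for t in [True] * 200 + [False] * 200])
y = np.array([1] * 200 + [0] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```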
Whisper Leak’s effectiveness increases with prolonged use. As an attacker gathers more topic-specific conversation examples, their detection software improves. Monitoring multiple conversations from a single individual further enhances accuracy.
Microsoft stated that patient adversaries with sufficient resources could achieve success rates higher than the initial 98% figure.
Major AI providers are addressing this issue. Following Microsoft’s report, OpenAI, Microsoft, and Mistral implemented a solution: adding random gibberish of varying lengths to responses. This padding disrupts the patterns attackers rely on, neutralizing the attack.
This countermeasure is comparable to adding random static to a radio signal. The message remains clear to the recipient, but analysis of the transmission pattern becomes difficult due to the introduced noise.
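Conceptually, the countermeasure works by making each streamed chunk's observable size independent of the real text it carries, for example by attaching a random-length filler field that the client simply discards. The sketch below is a simplified model of that idea; the field names and padding limits are placeholders, not any provider's production implementation.

```python
# Simplified model of the padding countermeasure: each streamed chunk is
# bulked out with a random-length filler before encryption, so its
# on-the-wire size no longer tracks the length of the real text.
import secrets
import string

def pad_chunk(text, max_padding=256):
    filler_len = secrets.randbelow(max_padding + 1)
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(filler_len))
    # The client strips "padding"; a network observer sees only total size.
    return {"content": text, "padding": filler}

for token in ["The", " answer", " is", " 42."]:
    chunk = pad_chunk(token)
    print(len(chunk["content"]) + len(chunk["padding"]))
```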
For users concerned about AI chatbot privacy, Microsoft recommends several precautions:
- Avoid discussing sensitive topics on public or untrusted Wi-Fi networks.
- Utilize a virtual private network (VPN), which routes traffic through an additional encrypted tunnel.
- Verify if your AI service has protections against Whisper Leak; OpenAI, Microsoft, and Mistral have deployed fixes.
- Consider alternatives to AI assistance for extremely sensitive matters, or defer discussions until a more secure network is available.