Dataconomy

How AI platforms rank on data privacy in 2025

Incogni says Le Chat, ChatGPT, and Grok offer the best privacy. Meta, Gemini, and Microsoft rank lowest. Most platforms still collect user data and make opt-outs difficult.

by Kerem Gülen
July 9, 2025
in Research

A new report from Incogni evaluates the data privacy practices of today’s most widely used AI platforms. As generative AI and large language models (LLMs) become deeply embedded in everyday tools and services, the risk of unauthorized data collection and sharing has surged. Incogni’s researchers analyzed nine leading platforms against 11 criteria to determine which systems offer the most privacy-friendly experience. Their findings reveal significant gaps in transparency, data control, and user protection across the industry.

Why privacy in Gen AI is a growing concern

While Gen AI platforms offer clear productivity benefits, they often expose users to complex data privacy risks that are hard to detect. These risks stem from two sources: the data used to train the models and the personal information exposed during user interactions. Most platforms do not clearly communicate what data is collected, how it is used, or whether users can opt out.

With LLMs being deployed in products for content creation, search, code generation, and digital assistants, users frequently share sensitive information without realizing it may be retained or used to train future models. Incogni’s report addresses this gap by offering a standardized framework to score and rank AI platforms according to their privacy practices.
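Incogni has not published its exact weighting, but the general shape of such a criteria-based ranking — score each platform on a fixed set of criteria, then order platforms by aggregate score — can be sketched in a few lines. The criterion names and numbers below are illustrative placeholders, not Incogni’s actual data:

```python
# Hypothetical sketch of a criteria-based privacy ranking in the style of
# Incogni's framework: each platform receives a score per criterion from
# 0 (best) to 1 (most invasive), and platforms are ranked by their average.
# All names and values here are made up for illustration.

CRITERIA = ["opt_out_of_training", "prompt_sharing", "policy_readability"]

scores = {
    "Platform A": {"opt_out_of_training": 0.0, "prompt_sharing": 0.2, "policy_readability": 0.1},
    "Platform B": {"opt_out_of_training": 1.0, "prompt_sharing": 0.8, "policy_readability": 0.9},
}

def rank(scores: dict) -> list:
    """Order platforms from least to most privacy-invasive (lowest average first)."""
    averages = {
        name: sum(per_criterion[c] for c in CRITERIA) / len(CRITERIA)
        for name, per_criterion in scores.items()
    }
    return sorted(averages, key=averages.get)

print(rank(scores))  # least invasive platform listed first
```

A real evaluation would also weight criteria by severity (for example, counting the absence of a training opt-out more heavily than a hard-to-read policy), but the aggregation step is the same.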


Image: Incogni

The least and most privacy-invasive platforms

According to Incogni’s ranking, Le Chat (Mistral AI) is the least invasive AI platform in terms of data privacy. It limits data collection and performed well across most of the 11 measured criteria. ChatGPT (OpenAI) ranked second, followed by Grok (xAI). These platforms offer relatively clear privacy policies and provide users with a way to opt out of having their data used in model training.

At the bottom of the ranking are Meta AI, Gemini (Google), and Copilot (Microsoft). These platforms were found to be the most aggressive in data collection and the least transparent about their practices. DeepSeek also performed poorly, offering no way to opt out of model training and relying on vague policy language.

Training data practices

The report delves into several key questions regarding how user data is utilized for model training.

Are prompts used to train the models?

Incogni found that some platforms explicitly allow users to opt out of training: ChatGPT, Copilot, Le Chat, and Grok fall into this group. Others, such as Gemini, DeepSeek, Pi AI, and Meta AI, do not appear to provide a way to opt out. Claude (Anthropic) was the only platform that claims never to use user inputs for training.

Are prompts shared with external parties?

Most platforms share prompts with a defined set of third parties, including service providers, legal authorities, and affiliated companies. However, Microsoft and Meta allow sharing with advertisers or affiliates under broader terms. Anthropic and Meta also disclose sharing with research collaborators. These policies raise questions about the limits of data control once prompts leave the platform.

What kind of training data is used?

All platforms train their models on publicly accessible data. Many also use user feedback or prompts to improve performance. OpenAI, Meta, and Anthropic provided the most detailed explanations about training data sources, although even these were limited in scope. No platform offered a way for users to remove their personal data from existing training sets.

Transparency scores

Beyond the policies themselves, Incogni also evaluated how transparent platforms are about their data practices.

How clearly do platforms explain prompt usage?

OpenAI, Mistral, Anthropic, and xAI made it easy to determine how prompts are used for training. These platforms offered searchable support content or detailed FAQ sections. Meta and Microsoft, on the other hand, required users to search through unrelated documentation. DeepSeek, Pi AI, and Google’s Gemini offered the least clarity.

Can users find information about model training?

Platforms were grouped into three levels of transparency. OpenAI, Mistral, Anthropic, and xAI provided accessible documentation. Microsoft and Meta made this information somewhat difficult to find. Gemini, DeepSeek, and Inflection offered limited or fragmented disclosures, requiring users to parse multiple documents to get answers.

Are privacy policies readable?

Incogni used the Dale-Chall formula to assess readability. All policies required at least a college-level reading ability. Meta, Microsoft, and Google provided long and complex privacy documents that covered multiple products. Inflection and DeepSeek offered very short policies that lacked clarity and depth. OpenAI and xAI were noted for offering helpful support articles, though these must be maintained over time to remain accurate.
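The Dale-Chall formula combines the share of “difficult” words — those outside a fixed list of familiar words — with average sentence length. A minimal Python sketch follows; the official formula uses a roughly 3,000-word familiar-words list, for which `EASY_WORDS` below is a tiny illustrative stand-in:

```python
# Sketch of the Dale-Chall readability formula used to score privacy
# policies. EASY_WORDS is a small placeholder for the official
# ~3,000-word Dale-Chall familiar-words list, for illustration only.

EASY_WORDS = {
    "the", "a", "is", "are", "we", "you", "your", "data", "use", "may",
    "and", "to", "of", "not", "this", "share", "can", "with", "it",
}

def dale_chall_score(text: str) -> float:
    """Return the Dale-Chall raw readability score for `text`."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.lower().replace(".", " ").replace(",", " ").split()
    if not sentences or not words:
        return 0.0
    difficult = [w for w in words if w not in EASY_WORDS]
    pct_difficult = 100 * len(difficult) / len(words)
    avg_sentence_len = len(words) / len(sentences)
    score = 0.1579 * pct_difficult + 0.0496 * avg_sentence_len
    if pct_difficult > 5:  # standard adjustment when many words are unfamiliar
        score += 3.6365
    return score

print(round(dale_chall_score("We may share your data with affiliates."), 2))
```

Adjusted scores of roughly 9.0 and above correspond to college-level reading difficulty, which is the threshold Incogni reports every policy exceeded.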

Data collection and sharing practices

The investigation also uncovered details about what specific data is collected and with whom it might be shared.

What data can be shared with third parties?

Meta and DeepSeek share personal information across corporate entities. Meta and Anthropic share information with research partners. In several cases, vague terms like “affiliates” were used, making it unclear who exactly receives user data. Microsoft’s policy also permits sharing with advertisers under specific conditions.

Where does user data come from?

Most platforms collect data during account setup or user interaction. However, Incogni found evidence that some platforms also gather data from additional sources:

  • Security partners: ChatGPT, Gemini, DeepSeek
  • Marketing partners: Gemini, Meta AI
  • Financial institutions: Copilot
  • Commercial datasets: Claude (Anthropic)

Pi AI appears to use the fewest external sources, focusing mainly on direct input and public data. Microsoft stated that it may use data from brokers as well.

Mobile app data collection and sharing

Incogni also examined how iOS and Android apps collect and share user data. Le Chat had the lowest privacy risk, followed by Pi AI and ChatGPT. Meta AI was the most aggressive, collecting data like usernames, emails, phone numbers, and sharing much of it with third parties.

Gemini and Meta AI collect exact user locations. Pi AI, Gemini, and DeepSeek collect phone numbers. Grok shares photos and app interaction data, while Claude shares app usage and email addresses.

Interestingly, Microsoft’s Copilot Android app claimed not to collect or share any user data. Because this was inconsistent with its iOS app disclosures, Incogni scored both apps based on the iOS version.

Privacy risks vary widely between Gen AI platforms. The best performers offered clear privacy policies, opt-out controls, and minimal data collection. The worst offenders lacked transparency and shared user data broadly without clear justification.

Incogni concludes that AI platforms must make privacy documentation easier to read, provide modular privacy policies for each product, and avoid relying on broad umbrella policies. Companies should also maintain up-to-date support resources that clearly answer data handling questions in plain language.



Tags: AI, data privacy
