Dataconomy

How AI platforms rank on data privacy in 2025

Incogni says Le Chat, ChatGPT, and Grok offer the best privacy. Meta, Gemini, and Microsoft rank lowest. Most platforms still collect user data and make opt-outs difficult.

by Kerem Gülen
July 9, 2025
in Research

A new report from Incogni evaluates the data privacy practices of today’s most widely used AI platforms. As generative AI and large language models (LLMs) become deeply embedded in everyday tools and services, the risk of unauthorized data collection and sharing has surged. Incogni’s researchers analyzed nine leading platforms against 11 criteria to determine which systems offer the most privacy-friendly experience. Their findings reveal significant differences in transparency, data control, and user protection across the industry.

Why privacy in Gen AI is a growing concern

While Gen AI platforms offer clear productivity benefits, they often expose users to complex data privacy risks that are hard to detect. These risks stem from two sources: the data used to train the models and the personal information exposed during user interactions. Most platforms do not clearly communicate what data is collected, how it is used, or whether users can opt out.

With LLMs being deployed in products for content creation, search, code generation, and digital assistants, users frequently share sensitive information without realizing it may be retained or used to train future models. Incogni’s report addresses this gap by offering a standardized framework to score and rank AI platforms according to their privacy practices.
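Incogni's full scoring method is not reproduced in this article, but the idea of a criteria-based ranking can be illustrated with a short, hypothetical sketch: each platform receives a per-criterion risk score (lower meaning more private), the scores are averaged, and platforms are sorted. The criterion names and numbers below are illustrative, not Incogni's actual data.

```python
# Hypothetical sketch of a criteria-based privacy ranking.
# Scores are per-criterion risk values in [0, 1]; lower = more private.
from statistics import mean

def rank_platforms(scores: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Return (platform, average risk) pairs, least invasive first."""
    return sorted(
        ((name, round(mean(criteria.values()), 3)) for name, criteria in scores.items()),
        key=lambda pair: pair[1],
    )

# Illustrative example with made-up platforms and criteria:
example = {
    "Platform A": {"training_optout": 0.0, "data_sharing": 0.2, "transparency": 0.1},
    "Platform B": {"training_optout": 1.0, "data_sharing": 0.8, "transparency": 0.9},
}
```

A real methodology would likely weight criteria unequally (for example, penalizing the absence of a training opt-out more heavily than unclear documentation), but the aggregate-then-sort structure is the same.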


Image: Incogni

The least and most privacy-invasive platforms

According to Incogni’s ranking, Le Chat (Mistral AI) is the least invasive AI platform in terms of data privacy. It limited data collection and performed well across most of the 11 measured criteria. ChatGPT (OpenAI) ranked second, followed by Grok (xAI). These platforms offer relatively clear privacy policies and provide users with a way to opt out of having their data used in model training.

At the bottom of the ranking are Meta AI, Gemini (Google), and Copilot (Microsoft). These platforms were found to be the most aggressive in data collection and the least transparent about their practices. DeepSeek also performed poorly, largely because of its vague policy language and the lack of any way to opt out of model training.

Training data practices

The report examines several key questions about how user data is used for model training.

Are prompts used to train the models?

Incogni found that some platforms explicitly allow users to opt out of training: ChatGPT, Copilot, Le Chat, and Grok fall into this group. Others, such as Gemini, DeepSeek, Pi AI, and Meta AI, do not appear to provide a way to opt out. Claude (Anthropic) is the only platform that claims never to use user inputs for training.

Are prompts shared with external parties?

Most platforms share prompts with a defined set of third parties, including service providers, legal authorities, and affiliated companies. However, Microsoft and Meta allow sharing with advertisers or affiliates under broader terms. Anthropic and Meta also disclose sharing with research collaborators. These policies raise questions about the limits of data control once prompts leave the platform.

What kind of training data is used?

All platforms train their models on publicly accessible data. Many also use user feedback or prompts to improve performance. OpenAI, Meta, and Anthropic provided the most detailed explanations about training data sources, although even these were limited in scope. No platform offered a way for users to remove their personal data from existing training sets.

Transparency scores

Beyond the policies themselves, Incogni also evaluated how transparent platforms are about their data practices.

How clearly do platforms explain prompt usage?

OpenAI, Mistral, Anthropic, and xAI made it easy to determine how prompts are used for training. These platforms offered searchable support content or detailed FAQ sections. Meta and Microsoft, on the other hand, required users to search through unrelated documentation. DeepSeek, Pi AI, and Google’s Gemini offered the least clarity.

Can users find information about model training?

Platforms were grouped into three levels of transparency. OpenAI, Mistral, Anthropic, and xAI provided accessible documentation. Microsoft and Meta made this information somewhat difficult to find. Gemini, DeepSeek, and Inflection offered limited or fragmented disclosures, requiring users to parse multiple documents to get answers.

Are privacy policies readable?

Incogni used the Dale-Chall formula to assess readability. All policies required at least a college-level reading ability. Meta, Microsoft, and Google provided long and complex privacy documents that covered multiple products. Inflection and DeepSeek offered very short policies that lacked clarity and depth. OpenAI and xAI were noted for offering helpful support articles, though these must be maintained over time to remain accurate.
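The Dale-Chall formula combines the share of unfamiliar words with average sentence length. A minimal sketch of the calculation is below; the real formula checks words against a fixed list of roughly 3,000 familiar words, which the caller would need to supply here.

```python
# Sketch of the Dale-Chall readability formula:
#   score = 0.1579 * (% difficult words) + 0.0496 * (avg sentence length),
#   plus a 3.6365 adjustment when more than 5% of words are unfamiliar.
import re

def dale_chall_score(text: str, familiar_words: set[str]) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    if not words or not sentences:
        return 0.0
    difficult = [w for w in words if w not in familiar_words]
    pct_difficult = 100 * len(difficult) / len(words)
    avg_sentence_len = len(words) / len(sentences)
    score = 0.1579 * pct_difficult + 0.0496 * avg_sentence_len
    if pct_difficult > 5:
        score += 3.6365
    return round(score, 2)
```

On the standard Dale-Chall scale, scores of roughly 9.0 and above correspond to college-level reading, which is consistent with Incogni's finding that every policy it assessed demands at least that level.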

Data collection and sharing practices

The investigation also uncovered details about what specific data is collected and with whom it might be shared.

What data can be shared with third parties?

Meta and DeepSeek share personal information across corporate entities. Meta and Anthropic share information with research partners. In several cases, vague terms like “affiliates” were used, making it unclear who exactly receives user data. Microsoft’s policy also permits sharing with advertisers under specific conditions.

Where does user data come from?

Most platforms collect data during account setup or user interaction. However, Incogni found evidence that some platforms also gather data from additional sources:

  • Security partners: ChatGPT, Gemini, DeepSeek
  • Marketing partners: Gemini, Meta AI
  • Financial institutions: Copilot
  • Commercial datasets: Claude (Anthropic)

Pi AI appears to use the fewest external sources, focusing mainly on direct input and public data. Microsoft also stated that it may use data from data brokers.

Mobile app data collection and sharing

Incogni also examined how iOS and Android apps collect and share user data. Le Chat had the lowest privacy risk, followed by Pi AI and ChatGPT. Meta AI was the most aggressive, collecting data like usernames, emails, phone numbers, and sharing much of it with third parties.

Gemini and Meta AI collect exact user locations. Pi AI, Gemini, and DeepSeek collect phone numbers. Grok shares photos and app interaction data, while Claude shares app usage and email addresses.

Interestingly, Microsoft’s Copilot Android app claimed not to collect or share any user data. Because this was inconsistent with its iOS app disclosures, Incogni scored both apps based on the iOS version.

Privacy risks vary widely between Gen AI platforms. The best performers offered clear privacy policies, opt-out controls, and minimal data collection. The worst offenders lacked transparency and shared user data broadly without clear justification.

Incogni concludes that AI platforms must make privacy documentation easier to read, provide modular privacy policies for each product, and avoid relying on broad umbrella policies. Companies should also maintain up-to-date support resources that clearly answer data handling questions in plain language.



Tags: AI, data privacy
