Dataconomy

Experts say Gemini 2.5 safety report is too thin

Gemini 2.5 Pro’s technical report omits safety test results, raising fresh doubts about Google’s commitment to responsible AI.

by Kerem Gülen
April 18, 2025
in Artificial Intelligence, News

Google published a technical report on its latest AI model, Gemini 2.5 Pro, weeks after its launch, but experts say the report lacks key safety details, making it difficult to assess the model’s risks.

The report is part of Google’s effort to provide transparency about its AI models, but the company’s approach differs from that of its rivals: it publishes technical reports only for models it considers to have moved beyond the experimental stage, and it reserves some safety evaluation findings for a separate audit.

Experts, including Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, and Thomas Woodside, co-founder of the Secure AI Project, expressed disappointment with the report’s sparsity, noting that it doesn’t mention Google’s Frontier Safety Framework (FSF), introduced last year to identify potential AI risks.

Wildeford said the report’s minimal information, released weeks after the model’s public launch, makes it impossible to verify Google’s public commitments to safety and security. Woodside also questioned Google’s commitment to timely safety evaluations, pointing out that the company’s last report on dangerous capability tests was in June 2024, for a model announced in February 2024.

Moreover, Google hasn’t released a report for Gemini 2.5 Flash, a smaller model announced last week, although a spokesperson said one is “coming soon.” Woodside hopes this indicates Google will start publishing more frequent updates, including evaluations for models not yet publicly deployed.

Other AI labs, such as Meta and OpenAI, have also faced criticism for lacking transparency in their safety evaluations. Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, described the trend of sporadic and vague reports as a “race to the bottom” on AI safety.

Google has stated that it conducts safety testing and “adversarial red teaming” for its models before release, even if not detailed in its technical reports.


Tags: AI, Gemini, Google

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.
