
Infosys launches open-source AI toolkit to strengthen responsible AI

Infosys has strengthened its Responsible AI initiatives through the launch of its Responsible AI Office and participation in global AI safety efforts

by Kerem Gülen
February 26, 2025
in Industry

Infosys has introduced an open-source Responsible AI Toolkit as part of its Infosys Topaz Responsible AI Suite, aiming to enhance transparency, security, and trust in artificial intelligence systems. The toolkit provides enterprises with defensive technical guardrails to address AI-related risks, including privacy breaches, security threats, biased outputs, hallucinations, and deepfakes.

The Responsible AI Toolkit is based on Infosys’ AI3S framework (Scan, Shield, and Steer), which helps businesses detect and mitigate risks associated with AI adoption. The solution enhances model transparency by providing explanations for AI-generated outputs while maintaining performance efficiency. The open-source nature of the toolkit allows customization and seamless integration across cloud and on-premise environments, making it adaptable for various industries.
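The article does not show any of the toolkit's actual code or API, but the Scan, Shield, and Steer pattern it describes can be illustrated with a minimal, hypothetical sketch: scan a prompt or model output for risk signals, shield by withholding or sanitizing flagged content, and steer by returning the result together with an explanation of the checks applied. Every name below (ScanResult, scan, shield, steer, guarded_generate) is an illustrative assumption, not the Infosys API.

```python
# Hypothetical sketch of a Scan/Shield/Steer guardrail pipeline.
# None of these names come from the Infosys Responsible AI Toolkit;
# they only illustrate the AI3S idea described in the article.

from dataclasses import dataclass, field


@dataclass
class ScanResult:
    """Findings from a risk scan of a prompt or a model response."""
    flags: list[str] = field(default_factory=list)

    @property
    def risky(self) -> bool:
        return bool(self.flags)


def scan(text: str) -> ScanResult:
    """Scan step: detect simple risk signals (placeholder heuristics)."""
    result = ScanResult()
    lowered = text.lower()
    if "ssn" in lowered or "password" in lowered:
        result.flags.append("possible-privacy-leak")
    if "ignore previous instructions" in lowered:
        result.flags.append("possible-prompt-injection")
    return result


def shield(text: str, result: ScanResult) -> str:
    """Shield step: withhold or sanitize content that the scan flagged."""
    if result.risky:
        return "[response withheld: " + ", ".join(result.flags) + "]"
    return text


def steer(text: str, result: ScanResult) -> dict:
    """Steer step: return the output with an explanation of the checks run."""
    return {"output": text, "explanation": {"flags": result.flags}}


def guarded_generate(prompt: str, model) -> dict:
    """Wrap an arbitrary `model(prompt) -> str` callable with the pipeline."""
    prompt_scan = scan(prompt)
    if prompt_scan.risky:
        return steer(shield(prompt, prompt_scan), prompt_scan)
    response = model(prompt)
    response_scan = scan(response)
    return steer(shield(response, response_scan), response_scan)


if __name__ == "__main__":
    # A stand-in model for demonstration; any callable returning text works.
    def echo_model(prompt: str) -> str:
        return f"Echo: {prompt}"

    print(guarded_generate("What is responsible AI?", echo_model))
```

In a real deployment the scan step would be backed by trained detectors for bias, hallucination, privacy leakage, and deepfake content rather than keyword heuristics, but the control flow, in which every input and output passes through scan, shield, and steer before reaching the user, is the part the framework's name describes.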

“As AI becomes central to enterprise growth, ethical adoption is no longer optional,” said Balakrishna D. R., executive vice president and global services head, AI and industry verticals at Infosys. “By making the Responsible AI Toolkit open source, we are fostering a collaborative ecosystem to address AI bias, security, and transparency challenges.”

Industry leaders have acknowledged Infosys’ initiative as a step toward safer and more accountable AI practices.

  • Joshua Bamford, head of science, technology, and innovation at the British High Commission, praised Infosys’ decision to go open source, calling it a benchmark for responsible AI development.
  • Sunil Abraham, public policy director for data economy and emerging tech at Meta, highlighted the importance of open-source tools in ensuring AI safety and accessibility for a broader spectrum of developers.
  • Abhishek Singh, additional secretary at India’s Ministry of Electronics and Information Technology (MeitY), noted that the toolkit would be instrumental in enhancing security, privacy, and fairness in AI models, particularly for startups and AI developers.

Infosys has strengthened its Responsible AI initiatives through the launch of its Responsible AI Office and participation in global AI safety efforts. The company is among the first to achieve ISO 42001:2023 certification for AI management systems and is actively involved in industry coalitions such as the NIST AI Safety Institute Consortium, AI Alliance, and Stanford HAI.


Featured image credit: Infosys

Tags: Infosys, Responsible AI
