82% of nonprofits use AI: Almost none are regulating it

When ethics is an afterthought, efficiency gains can come at the cost of equity, especially in the nonprofit sector.

By Kerem Gülen
April 8, 2025
in Research

The nonprofit sector is embracing artificial intelligence faster than it is ready for. More than half of nonprofits now use AI tools in some form—ChatGPT, automation systems, predictive analytics—but fewer than 10 percent have written policies on how that AI should be used. That’s not just a procedural oversight. It’s a structural vulnerability.

These organizations, many of which serve historically marginalized communities, are stepping into a high-stakes technological landscape with few ethical guardrails and even fewer internal frameworks to guide them. This gap between adoption and governance poses real risks—algorithmic bias, privacy breaches, and unintended harm—particularly when off-the-shelf tools are deployed without deep understanding or oversight. The rush to efficiency may unintentionally erode trust, compromise values, and expose nonprofits to reputational and legal fallout.

Efficiency now, regret later

The numbers tell a striking story. According to BDO’s 2024 Nonprofit Benchmarking Survey, 82 percent of U.S. nonprofits now report using AI. Of those, the majority apply it to internal operations: 44 percent use AI for financial tasks like budgeting and payment automation, and 36 percent use it for program optimization and impact assessment. The focus, in other words, is administrative efficiency—not mission delivery.

That’s consistent with the Center for Effective Philanthropy’s 2024 State of Nonprofits survey, which also found that productivity gains were the most common reason for AI use. But that same survey reveals the ethical lag: fewer than one in ten organizations have formal policies in place. And the organizations that do use AI are often working with limited infrastructure, little in-house expertise, and constrained budgets that prevent them from building customized, domain-aware systems. Instead, they lean on commercial tools not designed for their unique contexts, increasing the likelihood of bias, misuse, or mission misalignment.

At a time when trust is central to nonprofit credibility, this governance vacuum is alarming. AI is not neutral. It reflects, magnifies, and operationalizes the data it is trained on—and that data is often riddled with historical inequities. Without policies to guide use, nonprofits risk reinforcing the very structural inequalities they aim to dismantle. They also risk falling short of their own values. As Addie Achan, director of AI programs at Fast Forward, put it: “It’s better for an organization to define the rules and expectations around that use rather than have people use it and inadvertently cause more harm.” In this context, “harm” could mean anything from discriminatory decision-making in service provision to unintentional leaks of sensitive beneficiary data. The need for ethical AI policies isn’t a theoretical concern—it’s a practical one.
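
To make that concrete, here is a minimal sketch in plain Python (all data hypothetical, not drawn from the surveys above): a system that simply learns approval rates from past decisions will hand any historical disparity straight back as an automated rule.

```python
# Minimal illustration of "AI operationalizes its training data."
# The history below is hypothetical: group A was approved 80% of the
# time, group B only 40%, for reasons baked into past human decisions.
from collections import defaultdict

history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 40 + [("B", False)] * 60
)

# "Training": learn each group's historical approval rate.
outcomes = defaultdict(list)
for group, approved in history:
    outcomes[group].append(approved)
model = {g: sum(v) / len(v) for g, v in outcomes.items()}

# "Inference": score future applicants by their group's past rate.
for group in ("A", "B"):
    print(f"Group {group}: predicted approval likelihood = {model[group]:.0%}")
# Group A: 80% / Group B: 40% -- the old inequity, now automated.
```

Nothing in the sketch is malicious; the skew comes entirely from the record it was trained on, which is exactly the trap described above.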

The cost of caution and the price of action

BDO’s survey points to a trifecta of resistance: lack of knowledge, insufficient infrastructure, and funding constraints. But about one-third of respondents also cited employee resistance and ethical concerns. While managers fear risk, employees may fear replacement. The skepticism, then, is both practical and existential. And it plays out unevenly. Most AI deployments are limited to back-office functions, where the tech can quietly improve accuracy and efficiency. But the more transformative applications—AI-powered energy tracking, real-time data synthesis for global education programs—remain largely aspirational. These mission-aligned uses demand both financial muscle and ethical clarity. Right now, most nonprofits have one or the other. Few have both.

The financial balancing act

Ironically, the sector’s financial position is more stable than it has been in years. According to BDO, 52 percent of nonprofits saw revenue growth in the past fiscal year, up from 44 percent in 2023. Meanwhile, 62 percent now hold seven or more months of operating reserves—the strongest cushion since 2018. That’s a significant shift from the lean years of the pandemic. And it’s giving leaders the confidence to consider more ambitious operational shifts.

Nearly three-quarters of nonprofits say they plan to expand or shift the scope of their missions in the next 12 months. But caution remains the dominant financial posture. Most organizations are spending less across the board in 2024 compared to 2023, especially in advocacy, fundraising, and donor relations. The exceptions are new program development and talent acquisition—areas that saw modest spending increases. In other words, nonprofits are saving, hiring, and testing new directions, but they’re doing so with one eye on the political calendar and the other on macroeconomic instability.

A policy vacuum with real consequences

So where does this leave the sector? It’s in a moment of quiet contradiction. On one hand, nonprofits are building reserves, hiring talent, and expanding missions—clear signs of institutional confidence. On the other, they’re rapidly adopting a powerful, unpredictable technology without the governance structures to manage it. The sector is entering the AI era in the same way it entered the digital era—through improvisation and adaptation rather than strategic design. That may be fine for a while. But without policies to ensure transparency, accountability, and alignment with mission, the risks will only grow. The tools may be new, but the ethical dilemmas—who benefits, who’s left out, and who decides—are old and unresolved.


What needs to happen next

Creating ethical AI policies for nonprofits isn’t about slowing innovation; it’s about directing it. That means establishing guidelines that reflect each organization’s mission and values, investing in internal education on how AI systems work, and implementing oversight processes for evaluating both benefits and harms. Policies should clarify not just what AI can be used for, but what it should not be used for. They should identify decision points where human review is mandatory, outline data privacy expectations, and provide procedures for redress if harm occurs.
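
One way to turn those components into something enforceable is "policy as code." The sketch below is a hypothetical Python outline (every task name, data class, and field is invented for illustration, not taken from any survey or vendor) showing how prohibited uses, mandatory human-review points, and data restrictions might be checked before an AI task runs.

```python
# Hypothetical AI-use policy encoded as data, plus a gate that checks
# a proposed task against it. All names here are illustrative.
AI_USE_POLICY = {
    "permitted_uses": {"drafting", "budget_forecasting", "translation"},
    "prohibited_uses": {"eligibility_decisions", "beneficiary_profiling"},
    # Decision points where a human must review output before action.
    "human_review_required": {"grant_scoring", "donor_communications"},
    # Data classes that must never reach a third-party AI service.
    "restricted_data": {"beneficiary_pii", "health_records", "case_notes"},
}

def check_use(task: str, data_classes: set) -> str:
    """Return 'blocked', 'needs_human_review', or 'allowed' for a task."""
    if task in AI_USE_POLICY["prohibited_uses"]:
        return "blocked"
    if data_classes & AI_USE_POLICY["restricted_data"]:
        return "blocked"  # sensitive data trumps any permitted use
    if task in AI_USE_POLICY["human_review_required"]:
        return "needs_human_review"
    if task in AI_USE_POLICY["permitted_uses"]:
        return "allowed"
    return "needs_human_review"  # default to caution for unlisted tasks

print(check_use("budget_forecasting", set()))       # allowed
print(check_use("grant_scoring", set()))            # needs_human_review
print(check_use("drafting", {"beneficiary_pii"}))   # blocked
```

The value is less in the code than in the discipline it forces: prohibited uses, review gates, and privacy boundaries become explicit, auditable commitments rather than informal understandings.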

Nonprofits have a narrow window to lead by example. They can show that it’s possible to use AI not just efficiently, but ethically.

