AI & Change, Part II: The Equity Gap

[Image: A low-angle view of a historic ornate brick building with arched windows standing in the foreground, with a towering modern glass skyscraper rising dramatically behind it against an overcast sky, framed by green trees on either side.]

Part I of the AI and Change series distinguished between AI adoption and absorption and highlighted the costs AI adoption imposes on the people inside organizations. To read Part I, click here.


The AI adoption gap is already here, and it’s already widening. According to TechSoup and Tapp Network’s 2025 State of AI in Nonprofits report, nonprofits with annual budgets above $1 million are adopting AI at nearly double the rate of smaller organizations - 66 percent versus 34 percent. That gap maps precisely onto the resource conditions that make meaningful AI adoption possible in the first place. Small and mid-sized nonprofits, along with lean public agencies and community-anchored businesses, are absorbing AI’s demands while simultaneously managing funding volatility, workforce shortages, and strain from years of under-resourcing. The resulting asymmetry is growing, and it is growing at the direct expense of the communities these institutions were built to serve.

The Invisible Starting Line

AI adoption has carried the broad assumption that enterprises adopting new technology already have the financial, technical, and human conditions in place to ensure that adoption is meaningful. To be sure, well-capitalized institutions typically have the IT infrastructure capable of handling new tools, staff capacity available for training, and enough budgetary cushion to absorb the costs that accumulate over time. For institutions like these, AI presents itself as an operational innovation - full of efficiency gains and competitive advantages - that they are best positioned to act on immediately and effectively.

For community-based organizations and under-resourced public agencies, AI arrives with a resource-heavy appetite that can feel too bottomless to feasibly address. Dedicated technical staff to configure new tools are rarely available. Professional development budgets to bring people up to speed are frequently insufficient or entirely absent. Change management infrastructure to guide the transition is, for many of these entities, still under construction for far simpler organizational challenges. Budget flexibility to absorb unexpected costs is similarly scarce. The generative AI adoption mandate assumes these conditions exist. For the overwhelming majority of mission-driven enterprises, those conditions are, at best, insufficient.


According to TechSoup and Tapp Network’s 2025 State of AI in Nonprofits report, nonprofits with annual budgets above $1 million are adopting AI at nearly double the rate of smaller organizations - 66 percent versus 34 percent.


The data makes the resulting disparity difficult to minimize. Larger nonprofits are adopting AI at nearly twice the rate of smaller ones, and the organizations falling furthest behind are those already operating under the greatest resource constraints. These are enterprises being left behind in a race that has barely started, and the distance grows as the AI development cycle churns out new models and features at breakneck speed. At its core, this is a readiness problem, and readiness costs money that most of these institutions simply do not have ready access to.


The Full Price

The true cost of AI adoption accumulates in installments, each one landing after the initial commitment has already been made, and each one drawing on organizational resources that many nonprofit enterprises do not hold in sufficient supply. Most adoption timelines budget for the tool. Few budget for everything the tool requires to function.

Software licensing fees scale with team size in ways that compound quickly across even modestly sized staff. Enterprise AI platforms charge per user per month, and those charges become significant recurring line-item costs for organizations accustomed to operating on margins that leave little room for new fixed costs. Implementation and configuration add further expense, typically requiring either external technical support or internal staff time that the organization had likely committed to other priorities. Quite simply, the cost of entry is rarely just what the invoice suggests.

Beyond the invoice lies a second tier of costs that adoption timelines seldom surface. Retrofitting existing data systems to work with new AI tools demands technical expertise that many under-resourced agencies do not have on staff. Building data governance frameworks - specifically, the internal policies that determine who accesses what data, how that data is stored, and what legal and ethical protections are in place to manage it all - calls on organizational capacity that many institutions are still developing for far less complex operational challenges. Strengthening cybersecurity infrastructure to manage the vulnerabilities that AI-processed data introduces is similarly non-negotiable, expensive, and absent from most adoption plans.


Key Concept: Scope Insensitivity

Behavioral psychologists have documented a cognitive pattern called scope insensitivity - the tendency for judgments of cost, risk, or impact to fail to scale with the true size of a problem, especially when that cost is diffuse, cumulative, or distributed across time rather than arriving as a single visible demand. In practice, it looks like this: an organization estimates that AI adoption will cost $10,000, but once governance frameworks, cybersecurity upgrades, and staff training are factored in, the actual cost is closer to four times that figure. The invoice is visible. Everything the invoice omits is not.
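The scope-insensitivity trap can be made concrete with a back-of-envelope calculation. Every figure below is a hypothetical illustration, not a sector benchmark; the point is only how quickly the omitted line items dwarf the visible invoice.

```python
# Back-of-envelope first-year cost sketch for the scope-insensitivity example.
# All dollar figures are hypothetical illustrations, not sector benchmarks.

def total_first_year_cost(users, monthly_license_per_user,
                          implementation=0, governance=0,
                          security=0, training=0):
    """Return (visible invoice, hidden costs, true total) for year one."""
    invoice = users * monthly_license_per_user * 12  # the line the budget captures
    hidden = implementation + governance + security + training  # everything it omits
    return invoice, hidden, invoice + hidden

invoice, hidden, total = total_first_year_cost(
    users=28, monthly_license_per_user=30,    # per-seat licensing
    implementation=8_000, governance=9_000,   # configuration, data policies
    security=6_000, training=7_000)           # cybersecurity upgrades, staff hours
print(invoice, hidden, total)  # 10080 30000 40080
```

Under these assumed inputs, a roughly $10,000 licensing estimate becomes a true first-year cost near $40,000 - the four-fold gap the example above describes.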


The capacity burden compounds the financial one in ways that are equally underestimated. How many staff hours does it take to reach a functional baseline of AI competency? What does professional development cost, and how quickly do those skills depreciate in a domain where capabilities shift faster than any curriculum can track? Who provides the training? None of these questions have inexpensive answers, and most resist quick fixes.


Among nonprofits not yet using AI, nearly half cite lack of training as the primary obstacle. For those already using AI regularly, concerns shift to privacy, security, and time - revealing that clearing the first barrier does not resolve the resource gap. Instead, it deepens it.


Every hour redirected toward AI training and implementation is an hour pulled from direct service delivery, community engagement, and fundraising. For lean operations with minimal reserves, that reallocation materializes in grant reports, program outputs, and staff capacity data - concrete, measurable, and rarely acknowledged in adoption planning as a cost the sector is collectively bearing.

The Trust Factor

Organizations that serve communities - particularly communities that have endured sustained institutional neglect or harm - depend on relational trust. Unlike transactional trust built through consistent exchange, or institutional trust conferred through credentials and authority, relational trust is built person by person, interaction by interaction, and through the accumulated weight of showing up in ways that resist systematization. It is fragile in proportion to its intimacy. Once an organization’s presence begins to feel overly automated to the people it serves, the relational capital spent building that presence begins to erode.


Once an organization’s presence begins to feel overly automated to the people it serves, the relational capital spent building that presence begins to erode.


Research on digital communication and remote interaction has consistently shown that reduced face-to-face contact weakens the informal relationship-building that sustains community trust over time. For institutions whose operational logic depends on the quality of human connection - whose effectiveness is measured, however imperfectly, in the degree to which people feel genuinely seen and served - the efficiency gains of AI adoption must be weighed honestly against what gets displaced in the process. That weighing too often happens only after the damage is visible.


Key Question

What does this AI tool do to the relationships that make our work possible? Are we making that trade-off carefully?


Organizational Values

Most organizations have made explicit commitments related to environmental sustainability, racial equity, and the well-being of the communities they serve. Fewer have examined whether the infrastructure behind the AI tools they are now being encouraged to adopt honors those commitments or outright contradicts them.

Data centers - the physical facilities where the large language models underlying AI tools are trained and run - rank among the most energy-intensive structures ever built. According to the Congressional Research Service, domestic data center electricity demand has already tripled over the past decade and is projected to double or triple again by 2028, potentially accounting for up to 12 percent of total U.S. electricity use. Recent research documents their role in creating heat islands that warm surrounding land by up to sixteen degrees Fahrenheit, a thermal burden affecting more than three hundred million people - disproportionately concentrated in communities already contending with the consequences of environmental degradation.

In 2025, U.S. greenhouse gas emissions rose for the first time in two years, and data centers were a primary driver of that increase. Even as tech companies operating those centers continue to issue sustainability pledges, they are - by their own reporting - no longer on trajectory to fulfill them. Put plainly, the tools an organization adopts in the name of efficiency may be powered by infrastructure that directly contradicts its environmental commitments.

The disparity between what an organization publicly professes and what the supply chain of its technology produces is a governance gap. Governance gaps, left unexamined, surface at the worst possible moments - in community settings where trust is already strained, in funder relationships where alignment is assumed, and in the internal culture of organizations whose staff are watching to see whether stated values translate into operational decisions. The more consequential question is whether the organization has done the work to understand what using AI costs across all dimensions that matter - financial, operational, relational, and ethical.


Is your organization adopting AI because it is genuinely ready, or is your decision influenced by the desire not to feel left behind?


This is not cause for alarm. Instead, it is an invitation toward a different kind of strategic seriousness - one that treats organizational clarity as a tool for equity and long-term sustainability rather than as a luxury available only to well-resourced institutions.

Organizations that will serve their communities well over the next several years are not necessarily those that were best funded or best staffed going into this period. They are the ones that are most clear about why they exist, most disciplined about what they choose to do, and most honest about whether their work is producing the change they intend. That kind of clarity is not accidental; it is built intentionally, iteratively, and with the people doing the work. And it can be built now, at this very moment.

A Place to Begin

Organizations navigating this well are distinguished less by their resources than by their willingness to ask questions the sector may find inconvenient.

  • What would responsible adoption cost in staff capacity, community relationships, and organizational values?

  • What conditions need to be in place before your organization can genuinely integrate AI?

  • What do the communities you serve need from AI? What do they need that no platform, however sophisticated, can substitute for?

These questions are diagnostic. Organizations that answer them honestly tend to find that the path forward is clearer and more manageable than the pressure of the moment suggests. The catch is timing. These questions become harder to ask and easier to dismiss once adoption is already underway and slowing down feels like falling behind.

If that process surfaces more complexity than your organization can work through alone, that is itself useful information. The most productive next step is sometimes a conversation with someone who has sat with these questions and faced comparable constraints.

In the next two entries of this series, we will explore what it means to build conditions for AI adoption that simplify processes for people and serve them.



Sources & Further Reading

  1. TechSoup & Tapp Network. (2025). State of AI in nonprofits: 2025 benchmark report on adoption, impact and trends. https://page.techsoup.org/ai-benchmark-report-2025

  2. Virtuous & Fundraising

  3. Social Current. (2026). The growing AI gap between social sector organizations. https://www.social-current.org/2026/01/the-growing-ai-gap-between-social-sector-organizations/

  4. Das, M. (2025, November 6). What AI equity for nonprofits really looks like. Candid Insights. https://candid.org/blogs/what-ai-equity-for-nonprofits-means-looks-like-in-practice/

  5. Offutt, M. C., & Lawson, A. J. (2026, January 23). Data centers and their energy consumption: Frequently asked questions (Report No. R48646). Congressional Research Service. https://www.congress.gov/crs-product/R48646

  6. Webber, T., & The Associated Press. (2026, March 29). Big tech was embracing clean energy and turning a corner on climate change. Then AI data centers arrived. Fortune. https://fortune.com/2026/03/29/big-tech-climate-change-goals-data-center-ai-fossil-fuels/

  7. Paddison, L. (2026, March 30). Scientists have found an alarming environmental impact of vast data centers. CNN Climate. https://www.cnn.com/2026/03/30/climate/data-centers-are-having-an-underrported

  8. Drasin, J. & Holliday, T. (2024, November 14). Navigating change fatigue: The energy-commitment model for organizational change. Educause Review. https://er.educause.edu/articles/2024/11/navigating-change-fatigue-the-energy-commitment-model-for-organizational-change
