AI & Change, Part I: Speed, People, and the Cost of Getting It Wrong

The organizational pressure to integrate AI is not subtle, and it does not ask for permission. It arrives in board presentations, strategic plans, and leadership retreats. It surfaces in grant applications that ask how applicants are using technology, in peer conversations where the question is no longer whether to adopt AI but how quickly that adoption is unfolding. Across a variety of sectors navigating a maelstrom of disruption, the message on AI integration is consistent: get moving and get moving now.

What’s less consistent is any serious organizational reckoning with what moving so rapidly may cost.

Speed has its own costs, and in organizational change those costs tend to be paid by people, not technology. The tools don't feel the transition. The people implementing them do. The pace at which organizations are currently demanding AI integration is outpacing the human conditions necessary to do it well.

Whether your organization is still weighing the decision to integrate AI, navigating midway through piloting it, or revisiting an integration that did not land the way you hoped, these gaps are worth identifying and understanding before the next move is made.

The Pace Problem

What happens to people inside organizations that move faster than their capacity to absorb change? Researchers who study organizational change call it change fatigue - a state of emotional and physical depletion that sets in when people are asked to adapt faster than they can recover from previous changes. It isn't burnout exactly, though it can certainly lead there. Rather, it's the specific exhaustion of people who are willing to change but have not been given enough time between changes to find their footing before the next wave arrives.


AI integration does not arrive in a vacuum. For most organizations, it arrives on top of everything else already in motion - funding uncertainty, workforce transitions, leadership changes, the accumulated weight of years of sustained disruption. The AI directive is not the first wave many teams are absorbing. It is the latest one, carrying the weight of everything that came before it.


What makes this moment distinct from earlier waves of technological change isn't the technology itself, but the speed of its arrival compared to organizational readiness. Earlier technology transitions unfolded over years, giving organizations time to build competency incrementally. Generative AI, on the other hand, has arrived at a pace that organizational change approaches are not fully designed to absorb. The gap between what the technology can do and what most organizations are ready to do with it is wide. Closing it requires more than access to tools.

What the Integration Conversation Misses

Most organizational conversations about AI integration focus on two questions: (a) what tools to adopt and (b) how quickly to deploy them. These are reasonable questions, but they aren't the only ones worth asking first. In fact, the questions being systematically skipped are often the ones most likely to determine whether integration works at all.


Missing Question #1: The Full Stack of Human Experience

AI integration doesn’t land the same way at every level of an organization. Senior leaders navigating this strategic decision are having a fundamentally different experience from mid-level managers asked to champion tools they didn’t choose, and from frontline staff wondering what the change means for the security of their roles. The people in the middle - directors, operations managers, team leads - are often the least supported and the most relied on. How often do you identify the needs of the staff responsible for AI implementation?

Missing Question #2: The Legitimacy of Uncertainty

Organizational conversations about AI tend to perform confidence in the tools, the timeline, and the organization’s readiness. That performed confidence creates a condition in which the people most responsible for implementation feel least able to name what they’re experiencing. The questions don’t disappear. They go underground, take up residence in after-meeting silences, and resurface later as organizational risk. Is there constructive and welcoming space in your organization for someone to say they aren’t ready?

Missing Question #3: What Preparedness Means in Motion

The credentials being developed to certify AI competency are themselves struggling to keep pace with a technology whose evolution doesn’t slow down for anyone. The pressure organizations are placing on their people to show AI readiness is pressure to perform a mastery that no one - including the people issuing the pressure - has fully achieved. The pace of AI improvement risks making AI credentials obsolete within a year of earning them, perhaps even sooner. That reality is worth naming directly rather than managing around. How are readiness and iterative competency defined internally, and who is part of that conversation?


Adoption Is Different from Absorption

There is a distinction worth keeping in mind in AI integration conversations: the difference between adoption and absorption.

Adoption is the institutional act of introducing a tool or practice into an organization’s operations. It moves on schedule, meets its milestones, and reports back to the board on time. A 90-day pilot program is essentially an adoption plan.

Absorption is something much slower and far less obedient. It’s the human process of making sense of what’s new - building trust and competence, developing the confidence to ask questions without the sting of embarrassment, and finding genuine usefulness in something that felt foreign at the beginning. Absorption doesn’t respond to timelines. It requires psychological safety - the condition in which people feel it’s acceptable not to know something yet, to make mistakes, and to learn. Tools are a necessary ingredient, but the time and space to practice with them is the true secret sauce giving flavor to the productivity banquet.

When organizations move at the pace of adoption without accounting for the pace of absorption, the gap shows up in very human ways before it shows up in cold performance data. People engage with new tools superficially, often in ways that satisfy the directive without changing how work gets done. The implementation moves ahead. The people inside that implementation effort end up managing something privately that the broader organization hasn’t made space to acknowledge publicly. It’s that misalignment that reveals the human cost of moving too quickly: unacknowledged pressure takes hold in the people an organization depends on most, and gradually rises.


Is your organization moving at the pace of adoption or the pace of absorption? Does leadership know the difference?


A Place to Begin

Organizations navigating AI integration well are not necessarily moving slowly; they’re sequencing differently - building the human conditions for absorption before demanding adoption, creating genuine space for questions before declaring readiness, and treating the people responsible for implementation as participants in a process rather than recipients of a decision.


Before launching a tool, ask the people who will use it what they need to feel genuinely prepared, rather than relying solely on a completed training. To supplement trainings, explicitly build checkpoints where uncertainty can be named without consequence, rather than assuming silence means confidence. Distinguishing whether a tool is available from whether the organization is ready to use it well is vital to success.


That sequencing requires honest organizational conversations about what integration is asking of people at every level, what support those people need to do it well, and what the cost of skipping that conversation is likely to be. For organizations already mid-integration, those conversations are not too late to have; they’re simply more urgent.

A practical place to begin, regardless of where your organization is in the process, is with a single internal question worth bringing to your next leadership conversation: Are the people responsible for implementing this change the same people who had a voice in shaping it? If the answer is no or not yet, then that is the gap worth closing before the 90-day timeline advances further.

If that conversation surfaces more complexity than your team can navigate alone, that is a signal worth honoring. The most useful next step is not always more reading or even strategizing. Sometimes, it’s a simple conversation with someone who has sat with these questions alongside organizations like yours. There is also more to this conversation than any single organization’s experience, which is where this three-part series on AI and change goes next.

In the next entry of this series, I’ll examine the equity dimension of AI integration - what it means when pressure to adopt new technology assumes a level of readiness that not every organization has, and what is at stake when that assumption goes unexamined.



Sources & Further Reading

  1. Drasin, J., & Holliday, T. (2024). Navigating change fatigue: The energy-commitment model for organizational change. EDUCAUSE Review. https://er.educause.edu/articles/2024/11/navigating-change-fatigue-the-energy-commitment-model-for-organizational-change

  2. Westover, J. (2025). Organizational change fatigue: Building adaptive capacity in an era of permanent disruption. Human Capital Leadership Review. https://www.innovativehumancapital.com/article/organizational-change-fatigue-building-adaptive-capacity-in-an-era-of-permanent-disruption

  3. Schmidt, J., Villeneuve, L., Alvord, D., & Barberio, J. (2024). Change fatigue is pervasive: Mitigate its risks in your service org. Gartner. https://www.gartner.com/en/documents/5300863

  4. McKinsey & Company. (2023). The state of AI in 2023: Generative AI’s breakout year. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
