The 1990s change management industry collapsed under its own vagueness, then got eaten by the cost-cutters. Now agentic AI is forcing the hardest organizational question in a generation, and neither playbook works. The answer is uncomfortably simple: tell your people the truth.
In 1994, a partner at a large consulting firm could bill $300 an hour to stand in a hotel ballroom in Scottsdale and ask a room full of senior vice presidents to close their eyes and visualize the organization they wanted to become. This was considered serious work. The partner had a framework. The framework had steps. The steps had names like "establish a sense of urgency" and "build a guiding coalition" and "communicate the vision." There were workbooks. There was butcher paper. Someone always wrote "INTEGRITY" in block letters and underlined it twice.
Then everyone flew home and nothing changed.
This was the golden age of change management, a period roughly spanning the early 1990s to the early 2000s when the idea that organizations needed professional help navigating transformation became, briefly, a multibillion-dollar industry. The intellectual architecture was genuinely impressive. John Kotter at Harvard published "Leading Change" in 1996 and gave the world an eight-step model that became gospel. Peter Senge published "The Fifth Discipline" and introduced the concept of the learning organization. William Bridges wrote about the psychology of transitions. Prosci developed the ADKAR model. The frameworks multiplied. The consulting fees multiplied faster.
Running parallel to all of this, and eventually consuming it, was the reengineering movement. Michael Hammer and James Champy's 1993 bestseller "Reengineering the Corporation" promised radical process redesign and dramatic performance breakthroughs. It electrified executive suites. It also, in practice, became the most sophisticated justification for mass layoffs that American business had ever produced. By 1994, a $50 billion consulting industry had formed around it. By 1995, studies showed that roughly 70 percent of reengineering initiatives had failed to deliver the promised results. Hammer, to his credit, eventually admitted the error publicly: "I was reflecting my engineering background and was insufficiently appreciative of the human dimension."
The human dimension. That's a clinical way of saying that millions of people were restructured out of their jobs by 25-year-old MBAs with new frameworks and no institutional memory, and the psychological residue of that experience shaped how an entire generation of workers relates to the word "transformation."
If you've seen Office Space, you remember the Bobs. Two consultants with matching clipboards who interview every employee to determine their value to the organization. "What would you say... you do here?" It was a comedy. For the people who lived through the late 1990s, it was barely an exaggeration. The change consultants had been too soft, all vision and no specifics. The Bobs were too brutal, all spreadsheets and no humanity. And the workforce learned a lesson that proved remarkably durable: when someone shows up talking about transformation, the smart move is to keep your head down and update your resume.
That scar tissue never fully healed. And it's about to be tested in ways the 1990s couldn't have imagined.
The Pace Problem
Here is the situation in early 2026. Generative AI reached 56 percent adoption among U.S. adults within three years of ChatGPT's launch, according to the Project on Workforce, nearly double where the internet stood at the same point in its rollout. The internet took five years to reach that level. The personal computer took twelve. And those numbers describe generative AI, the technology that responds to prompts. The next wave, agentic AI, initiates actions on its own. New models arrive every few months. Capabilities that didn't exist last quarter are in production this quarter.
The organizational implications are compounding faster than any planning cycle can absorb. We've written about how agentic commerce will reshape B2B industrial companies and why most AI strategies are pointed at the wrong target. Those pieces address markets and business models. This one addresses the humans inside the companies trying to respond.
Fifty-one percent of U.S. workers say they are worried about losing their jobs to AI this year. Two-thirds believe AI will eventually threaten their role. One in five already knows someone who lost a job to AI in the past twelve months. And here's the number that should concern every executive more than any of those: only 4 percent believe AI will create more jobs than it eliminates.
Read that again. Ninety-six percent of the workforce either believes AI is net-destructive to employment or isn't sure. These are the people you need to adopt new tools, redesign workflows, surface risks, and drive innovation through the most disorienting organizational transition since industrialization. And almost none of them believe the story ends well for them.
This is a change management problem. A profound one. And almost nobody is treating it that way.
The Two Playbooks That Don't Work
The default corporate response to AI-driven anxiety falls into two inherited playbooks, both of which should be familiar by now.
The first is the vision quest. Leadership announces an AI transformation initiative. There's a town hall. The CEO talks about "embracing the future" and "empowering our people." A chief AI officer is hired. The mission statement is updated to include words like "intelligent" or "adaptive." Training programs are announced. Everyone gets a copilot login. The implicit message, decoded effortlessly by anyone who has survived a previous transformation cycle: We don't know what's happening either, but we need you to keep working while we figure it out. This is the 1990s change consultant playbook, minus the butcher paper. Vision without specifics. Urgency without honesty. It fails for the same reason it failed thirty years ago: you cannot ask people to trust a process that doesn't trust them with information.
The second is the Bobs, updated. Leadership quietly deploys AI to automate processes, measures the productivity gains, restructures teams around the results, and announces the outcome as "organizational optimization." The Bobs are back; they just carry laptops with AI dashboards now instead of clipboards. The workforce recognizes this playbook instantly because their parents lived through it, and the trust destruction is immediate and total.
Both approaches fail for the same reason, which is also the reason the 1990s versions failed. They treat employees as objects of change rather than participants in it. The vision quest asks for trust without providing information. The efficiency play provides information (you're being restructured) without trust. Neither gives people what they actually need to navigate genuine uncertainty: an honest accounting of what the company knows, what it doesn't know, and what the deal is.
Psychological Safety Isn't Enough
Amy Edmondson has spent more than two decades at Harvard Business School studying what she calls psychological safety, the shared belief that a team is safe for interpersonal risk-taking. Her research, validated at scale by Google's Project Aristotle, shows that it's the single most important factor in team effectiveness. Teams where people feel safe to speak up, flag problems, and admit mistakes consistently outperform teams where they don't.
Psychological safety is necessary but not sufficient for what's coming. It describes the interpersonal climate within a team. The AI transition requires something harder: institutional honesty about uncertainty itself.
Here's the distinction: Most companies approaching AI transformation know a few things and are genuinely uncertain about many more. They know AI will change how work gets done. They know some roles will be automated. They know new roles will emerge. They know they need to move fast. What they don't know (and what they almost never say out loud) is which roles, on what timeline, with what support, and whether the company itself will navigate the transition successfully.
The instinct is to project confidence. Executives believe, not unreasonably, that uncertainty paralyzes organizations. So they build narratives that sound certain: "AI will augment our workforce, not replace it." "We're investing in our people." "No one will be left behind." Employees hear these and compare them to what they can see: the layoffs at peer companies, the automation of tasks they used to perform, the AI tools that are measurably better at pieces of their job. The gap between the narrative and the observable reality is where trust goes to die.
The 1990s change consultants made exactly this mistake. They manufactured certainty. Eight steps to successful change. Five disciplines for the learning organization. Three phases of transition. The frameworks implied a knowable destination and a manageable path. That was always somewhat fictitious, and in an environment where foundational AI capabilities shift quarterly, it's completely fictitious.
The alternative is radical, and also obvious: tell people the truth.
What the Truth Actually Sounds Like
What does that mean in practice? It means three things that most companies are not doing.
First, specificity about what you know.
Not "we're exploring AI across the enterprise" but which workflows are being evaluated, on what timeline, and what criteria will determine whether a role persists, evolves, or is eliminated. Employees can see the pilots. They know which tools are being tested. They're already having the conversations you think you're managing. The only question is whether those conversations include accurate information from leadership or speculation fueled by anxiety.
Forrester's recent research on psychological safety during AI transformation makes this point directly: when leading major transformations, the job is to explain what's changing and why, while acknowledging the uncertainties. The credibility comes from the acknowledgment, not from pretending the uncertainties don't exist.
Second, honesty about what you don't know.
This is the conversation executives resist most. Admitting that you don't know whether a particular function will exist in two years feels like a failure of leadership. It's actually the opposite. It's the precondition for building the adaptive capacity the organization will need, because adaptive capacity requires people who are engaged with reality rather than managing their own anxiety about a reality leadership won't confirm.
And third, the one that matters most: telling people what's in it for them.
Not what's in it for the company. For them. The typical AI transformation narrative is entirely institutional: we'll be more competitive, more efficient, more innovative. The employee sitting in the audience hears: The company will be fine. What about me?
An honest answer to that question has three components:
The investment: not a training catalog, but a specific, resourced commitment to developing capabilities the person will need, with timelines and accountability on both sides.
The risks: an acknowledgment that some roles will change substantially and some may not survive, delivered early enough that people can make informed decisions about their own careers rather than finding out through a restructuring announcement.
And the safety net: what happens if, despite everyone's best efforts, a role is eliminated? What is the severance framework, the transition support, the timeline? Companies that answer this question proactively, before anyone has to ask, signal a fundamentally different relationship with their people than companies that wait for the lawyers.
None of this is easy. All of it is uncomfortable. It requires executives to sit with the tension of admitting uncertainty to the very people they're asking to follow them into it. It requires HR teams to abandon the instinct to manage the narrative and instead share the reality. It requires a bet that informed, trusted employees will outperform anxious, suspicious ones.
That shouldn't actually be a hard bet. It's what three decades of organizational research, from reengineering's postmortems to Edmondson's work on psychological safety, consistently show. The data is clear. The execution is just uncomfortable.
What the 1990s Got Right
Here is what the 1990s got right: organizational transformation is fundamentally a people problem. Vision matters. Alignment matters. You cannot restructure your way to excellence by treating people as interchangeable components, a lesson the reengineering movement learned at enormous human cost, and which Hammer himself spent his later career trying to correct.
What the 1990s got wrong was the belief that you could script the process. That there were eight steps, or five disciplines, or three phases. That belief was always partially fictional; in a world where foundational AI models are released quarterly and agentic systems are rewriting competitive dynamics in real time, it's entirely fictional.
The useful inheritance isn't the frameworks. It's the underlying insight that people need to understand why things are changing, believe they have a role in the change, and see a credible path for themselves through it. Those needs are permanent. They're human. The 1990s consultants just didn't have the tools or the honesty to meet them.
The change consultant should come back. Not with eight steps and a burning platform. Not with a vision statement drafted on butcher paper in Scottsdale. With a harder, simpler message: tell your people the truth. All of it. Including the parts you haven't figured out yet. Especially those parts.
The companies that do this will build trust that compounds into organizational velocity: the ability to move, adapt, and learn faster than companies where every initiative is filtered through a workforce that has learned, over decades, to assume the worst. The companies that don't will discover, again, that the most expensive thing in organizational life isn't the wrong strategy. It's the scar tissue from broken promises.
We know how the 1990s version of this story ended. Everybody remembers the Bobs. Nobody remembers the mission statement.
This time, the pace won't allow a thirty-year recovery. Get it right now, or don't get it at all.