How I think about AI transformation
I did not set out to create a framework. I set out to understand why AI adoption keeps failing in ways that have nothing to do with the technology. The more organizations I worked with and the more leaders I spoke to, the clearer the answer became.
The pattern I kept seeing
Through my own implementation work and hundreds of conversations with leaders navigating AI adoption, I identified a pattern that repeats with remarkable consistency. It shows up in organizations of every size, across every industry, regardless of how sophisticated their technology stack is.
The pattern has three failure modes. Every struggling AI initiative I have observed falls into at least one of them. Most fall into all three:
A capability gap: the workforce can't operate in an AI-augmented environment.
An authority gap: AI systems make consequential decisions without clear boundaries, explainability, or defensibility.
A leadership gap: leaders are destroying the trust that adoption requires.
Understanding these failure modes led me to a conclusion that shaped everything I’ve built since: successful AI transformation requires three dimensions evolving together. Not sequentially. Not independently. Together.
The CAL Framework
Capability, Authority, and Leadership. Three dimensions that must evolve together for AI transformation to succeed. When any dimension is neglected, AI deployments stall, fail, or create unintended harm.
Capability
Not a training program. The fundamental question of whether the people inside an organization can actually operate in an AI-augmented environment, and whether the organization has designed the workforce architecture to support the new way of working. AI literacy does not exist by default; it must be built deliberately at every level: front-line employees who use AI tools daily, middle managers interpreting AI recommendations, and senior leaders making governance decisions. The organizations seeing the weakest ROI are the ones that eliminated the very people who would have made the technology work. Capability isn't about upskilling for the sake of it. It's about building and preserving the human infrastructure that makes AI investments pay off.
Authority
How decision authority is shared between humans and AI: what AI decides autonomously, what requires human approval, and what must never be delegated. Most organizations have not drawn these lines. They let the vendor's default settings make the decision for them. That is not a strategy. That is an abdication. When an AI system makes a decision about someone's career and that decision cannot be explained, it is not just a regulatory risk. It is a trust violation. Who governs the tool? Who audits it? Who has the authority to override it? These are not IT questions. They are organizational design questions. Traceability. Explainability. Defensibility. That's what accountability looks like when AI touches human lives.
Leadership
Not executive sponsorship. Actively creating the organizational conditions where AI transformation can actually succeed. That requires trust, psychological safety, governance, and organizational learning, none of which happen by default. Right now, the way most organizations approach AI is destroying all four. They announce adoption alongside layoffs. They frame efficiency as the only metric. The research on survivor syndrome is consistent: after significant AI-driven reductions, the remaining employees are measurably less productive, less engaged, and more likely to leave. AI improvement is structurally dependent on human feedback, and that feedback loop only functions when people feel safe enough to surface what the AI is getting wrong. Leaders who get this right compound AI capability over time. Leaders who get it wrong spend more rebuilding trust than they ever saved.
What happens when a dimension is missing
Every one of these scenarios is playing out inside real organizations right now. The question is not whether your organization will encounter one of them. The question is which one — and whether you’ll recognize it in time to change course.
Authority + Leadership, without Capability
The AI works. Leadership is aligned. But the workforce cannot use or evaluate the tools. AI literacy gaps compound. Employees who know the work best go silent rather than surface errors. Workforce architecture gaps emerge with no transition plan. The AI never improves because the humans who would have improved it are unprepared, disengaged, or gone.
Capability + Leadership, without Authority
People are ready. Leaders are on board. But the AI system is a black box — no explainability, no human-in-the-loop boundaries, no traceability. Consequential decisions about people’s careers cannot be defended. Disparate impact accumulates undetected. Organizations discover they are liable for the vendor’s defaults.
Capability + Authority, without Leadership
The workforce is skilled. The AI is well-governed. But leadership hasn't created the conditions for trust. Employees disengage. Middle managers stall implementation. The best talent leaves. The result is technically sound AI that nobody will adopt, because the psychological safety and organizational learning conditions were never built.
What I believe
AI is the most significant capability shift of our generation, and the organizations that approach it thoughtfully will create extraordinary value. The ones that approach it recklessly will create extraordinary damage — to their workforces, their cultures, their brands, and ultimately their competitive position. The data on AI adoption failure rates is not a prediction. It is a current measurement of what reckless looks like at scale.
Jobs will be lost and new jobs will be created, and the transition between those two realities deserves more care, more planning, and more imagination than most organizations are currently willing to invest. I have watched the consequences when it does not receive that investment — in productivity declines, in trust destruction, in the silent disengagement of employees who have concluded that the organization sees them as interchangeable with the tool.
Every AI system that affects a person’s career, compensation, or livelihood should be explainable, traceable, and defensible — and should be examined for disparate impact before it is ever deployed. Not because a regulation requires it, but because basic accountability demands it. AI systems built on historical data will replicate and amplify historical inequities unless someone is actively examining the outputs for patterns that would be unacceptable in any other context.
Change management and adoption are not line items on a project plan. They are the result of trust, psychological safety, and leadership that earns the right to ask people to change. You cannot automate your way to either one.
Workforce architecture and capability building are not downstream consequences of AI adoption. They are upstream requirements. The organizations that plan for both before deployment will outperform the ones that discover the gap after the fact.
The organizations that get Capability, Authority, and Leadership right — that evolve all three together — will be the ones that define what responsible, intelligent, human-centered AI transformation looks like for the next decade.
See the philosophy in action
Every tool in the AI Decision Lab was built on CAL. Try them yourself and see what happens when you start with the human problem instead of the technology.
Explore the AI Decision Lab
Follow the Builds
What I’m building, what I’m learning, and what’s happening at the intersection of AI, workforce strategy, and the future of work. No hype. Just the signal.