How I think about AI transformation
I did not set out to create a framework. I set out to understand why AI adoption keeps failing in ways that have nothing to do with the technology. The more organizations I worked with and the more leaders I spoke to, the clearer the answer became.
The pattern I kept seeing
Through my own implementation work and hundreds of conversations with leaders navigating AI adoption, I identified a pattern that repeats with remarkable consistency. It shows up in organizations of every size, across every industry, regardless of how sophisticated their technology stack is.
The pattern has three failure modes. Every struggling AI initiative I have observed falls into at least one of them, and most fall into all three: a capability gap, where the workforce cannot operate in an AI-augmented environment; a design gap, where AI systems interact with human work without boundaries or explainability; and a leadership gap, where leaders erode the trust that adoption requires.
Understanding these failure modes led me to a conclusion that shaped everything I’ve built since: successful AI transformation requires three dimensions evolving together. Not sequentially. Not independently. Together.
The CAL Framework
Capability, Agentic AI, and Leadership. Three dimensions that must evolve together for AI transformation to succeed. When any dimension is neglected, AI deployments stall, fail, or create unintended harm.
Capability
Not a training program. The fundamental question of whether the people inside an organization can actually operate in an AI-augmented environment. Someone has to validate outputs, catch errors, provide the feedback loops that improve the model over time, and translate AI reasoning into something humans can trust and act on. Those are human capabilities — judgment, empathy, context awareness, communication — that AI does not have. The organizations seeing the weakest ROI eliminated the very people who would have made the technology work. Capability isn’t about upskilling for the sake of it. It’s about building and preserving the human infrastructure that makes AI investments pay off.
Agentic AI
How AI systems are designed to interact with human work. Where must humans remain in control? What decisions should AI make autonomously, what should AI recommend with human approval, and what should never be delegated to AI at all? Most organizations haven’t drawn these lines — they let the vendor’s default settings decide for them. That is not a strategy. That is an abdication. When an AI system makes a decision about someone’s career and it cannot be explained, it is not just a regulatory risk — it is a trust violation. Traceability. Explainability. Defensibility. That’s what accountability looks like when AI touches human lives.
Leadership
Not executive sponsorship. Creating the organizational conditions where AI transformation can actually succeed. That starts with trust and psychological safety — the understanding that adoption is not a technology challenge but a human one. Right now, the way most organizations approach AI is destroying both. They announce adoption alongside layoffs. They frame efficiency as the only metric. The research on survivor syndrome is consistent: after significant AI-driven reductions, the remaining employees are measurably less productive, less engaged, and more likely to leave. Leaders who get this right compound AI capability over time. Leaders who get it wrong spend more rebuilding trust than they ever saved.
What happens when a dimension is missing
Every one of these scenarios is playing out inside real organizations right now. The question is which one yours will encounter.
Agentic AI + Leadership
The AI works. Leadership is aligned. But the workforce cannot use it effectively. Adoption stalls. ROI flatlines. The organization invested in strategy and technology but forgot about the humans who have to operate it every day.
Capability + Leadership
People are ready. Leaders are on board. But the AI system is a black box — no explainability, no human-in-the-loop boundaries, no traceability. Decisions affecting people’s careers cannot be defended. Trust erodes from within.
Capability + Agentic AI
The workforce is skilled. The AI is well-designed. But leadership hasn’t created the conditions for trust. No psychological safety. No change management. Technically sound AI that nobody will adopt because the organizational culture is too fractured to absorb it.
What I believe
AI is the most significant capability shift of our generation, and the organizations that approach it thoughtfully will create extraordinary value. The ones that approach it recklessly will create extraordinary damage — to their workforces, their cultures, their brands, and ultimately their competitive position.
Jobs will be lost and new jobs will be created, and the transition between those two realities deserves more care, more planning, and more imagination than most organizations are currently willing to invest.
Every AI system that affects a person’s career, compensation, or livelihood should be explainable, traceable, and defensible. Not because a regulation requires it, but because basic accountability demands it.
Change management and adoption are not line items on a project plan. They are the result of trust, psychological safety, and leadership that earns the right to ask people to change.
The organizations that get Capability, Agentic AI, and Leadership right — that evolve all three together — will be the ones that define what responsible, intelligent, human-centered AI transformation looks like for the next decade.
See the philosophy in action
Every tool in The AXIS Lab was built on CAL. Try them yourself and see what happens when you start with the human problem instead of the technology.
Explore The AXIS Lab
Follow the Builds
What I’m building, what I’m learning, and what’s happening at the intersection of AI, workforce strategy, and the future of work. No hype. Just the signal.