Executives Love Talking About AI.
The Numbers Say They're Faking It.
This is the final installment of a 5-part series expanding on my paper, Five Myths About AI Transformation, where I unpack the patterns that keep repeating every time a new technology wave arrives. Part 1 covered why you need to know how your business works before you need an AI strategy. Part 2 covered why boring technology gets better results than AI. Part 3 covered why profitability breeds complacency. Part 4 covered how snow melts from the edges. This week: the myth that ties them all together.
Stephen Andriole was blunt about this in 2017: the number of executives who really want to transform their companies is relatively small. He described a wide gap between what executives say about transformation and what they do.
Nothing has changed.
Today, executives talk about AI in their strategic plans. That talk creates pressure. Subordinates are forced to do something performative, something the executives can point to on quarterly earnings calls. The result is a lot of motion and very little movement.
I've seen this in company after company. I've watched leadership teams announce AI initiatives in January and quietly shelve them by September. I've watched executives demand "AI strategies" from their teams while refusing to invest in the foundational work that would make those strategies possible. I've watched the same pattern repeat across mobile, cloud, digital transformation, and now AI.
The myths we've covered in this series, every one of them, trace back to this one. Companies skip the foundation because executives won't fund it. They chase the shiny tool because executives want a headline. They coast on profits because executives aren't uncomfortable enough. They miss the signals from their own employees because executives aren't listening.
Myth 5 is the root cause.
The Myth:
Executives are hungry for AI transformation.
The reality: they're only hungry to talk about it.
The Optimism Gap
BCG's 2026 AI Radar report surveyed 640 CEOs and 2,360 senior leaders. The headline: 82% of CEOs are more optimistic about AI than they were a year ago. AI is now a top-3 strategic priority for 2 out of 3 CEOs. Half believe their job stability depends on getting AI right in 2026.
That sounds like commitment. Until you look at what they're actually doing.
60% of those same CEOs admitted they've intentionally slowed AI implementation due to concerns over errors and malfunctions. Only 6% plan to scale back investments if AI fails to deliver. So they're slowing down the work while refusing to cut the budget. Motion without movement.
EY's 2025 AI Pulse Survey of 500 US senior leaders found the same gap from a different angle. 96% of AI-investing organizations report AI-driven productivity gains. Sounds great. But 65% admit they struggle to tie those productivity gains directly to AI adoption. They're reporting gains they can't measure.
Here's the number that exposes the pattern most clearly. EY asked executives in 2024 how much they planned to spend on AI in 2025. 65% said at least $1 million. When 2025 arrived, only 58% actually did. 34% predicted they'd spend $10 million or more. Only 23% hit that number. The gap between what executives promise and what they deliver runs through everything.
An HBR piece from March 2026 by Thomas Davenport at Babson College and MIT put the finest point on it: 71% of global CIOs said their AI budgets would be frozen or cut if value can't be demonstrated within 2 years. The clock is running. And most companies still can't show what they've got for the money.
BCG's own "Widening AI Gap" report from September 2025 found that C-level executives deeply engaged with AI are 12x more likely to be in the top 5% of companies winning with AI. Which means 95% of companies aren't winning. And the difference isn't technology. It's leadership engagement.
The Information Flow Problem
This is a systems problem, not a people problem. Donella Meadows identified information flow as one of the most important characteristics of a healthy system. When information moves accurately from the edges to the center, the system adapts. When it doesn't, the system stagnates or makes bad decisions based on bad data.
In most organizations, the information flow around AI is broken. The signals from frontline employees, the canaries who know what's actually working, never reach the people making strategy decisions. The system punishes honest feedback and rewards telling leadership what they want to hear.
Executives manage up to shareholders the same way individual contributors manage up to their bosses. Saying the words they think people want to hear. Forgetting their duty to dissent. Forgetting to call out when something doesn't make sense, because that feels risky. This behavior has been taught and reinforced for years.
Kim Scott, a former Google and Apple executive, gave this problem a name in her 2017 book Radical Candor. People are afraid of radical candor. And without it, the information that would drive real transformation gets filtered, softened, and sanitized until it's useless.
The result: executives announce AI initiatives based on what sounds good, not what the data says. Subordinates build pilots based on what leadership approved, not what the business needs. The pilots stall because nobody addresses the real problems. And the cycle repeats.
Vijay Govindarajan's Three Box framework explains why this persists structurally. Box 1, managing the present, is where all the reporting happens. It's what gets measured on quarterly calls. It's what executives are evaluated on. Box 3, creating the future, requires honest assessment of what's not working, which means Box 2, forgetting the past. But executives who've built their careers inside Box 1 have every incentive to protect it and no incentive to admit that the current approach isn't working.
The most dangerous version of this: executives who fund Box 3 initiatives (AI pilots, innovation labs, vendor partnerships) without doing the Box 2 work (process documentation, data cleanup, organizational redesign). Those Box 3 investments get strangled by Box 1 operating logic. Then leadership blames execution. "The team couldn't deliver." The team was never set up to succeed.
Performative Transformation
George Westerman of MIT captured it perfectly: "When digital transformation is done right, it's like a caterpillar turning into a butterfly, but when done wrong, all you have is a really fast caterpillar."
That's what most AI "transformation" looks like right now. Fast caterpillars. And companies are calling it progress.
Westerman and others identified 3 dangerous downsides to this pattern.
Companies become so busy creating fast caterpillars that they stand still in the real transformation stakes. They're heads-down implementing widgets and chatbots and copilots, and they can't see the larger shifts they're not preparing for.
They devote all their limited time, energy, and resources to faster caterpillars because those "change" initiatives have become the priority. There's no bandwidth left for actual transformation. The busy work of incremental improvement crowds out the hard work of rethinking how the business operates.
And the most dangerous: companies are lulled into a false sense of security. They look at all the AI projects on their roadmap and feel good about themselves. Multiple initiatives in flight. But there's no vision of a butterfly in sight.
John Hagel III at Deloitte observed that most executives he spoke with were still focused on digital as a way to do the same things, just faster and cheaper. He saw little evidence of leaders stepping back to rethink, at a basic level, what business they were actually in.
That's Substitution, the lowest rung on the SAMR scale (Substitution, Augmentation, Modification, Redefinition). That's performative transformation. And that's the script AI is following today, almost perfectly.
The Duty to Dissent
A former leader of mine had a phrase I've carried ever since: "Execute your duty to dissent." Call out when something doesn't make sense. Not because it's comfortable. Because it's your job.
Most organizations have lost this. The information flow runs one direction: up. And it gets filtered at every level. By the time a signal reaches the C-suite, it's been softened, repackaged, and stripped of the uncomfortable parts.
External consultants can help surface what's really happening. But ultimately, the work has to be done in partnership with the people inside the organization. If you go to one of the big consulting firms, you're going to get an 800-page report that tells you all the things you could be doing. That's nice. But the real power is giving the gift of exploration and experimentation to the people within your organization who can actually implement the opportunities, not just talk about or theorize on them.
The Forrester data from Myth 3 connects here directly. 28% of your workforce is coasting. They've decided you're not serious. The shadow AI data from Myth 4 connects here too. 93% of your executives are using unapproved tools. They're already experimenting. They just haven't told anyone, including their own teams.
The information is everywhere. It's just not flowing to the right places.
What To Do Instead
Meadows argued that one of the most powerful places to intervene in a system is at the level of its goals. If the real, unstated goal of your AI strategy is "say the right words on the quarterly call," the system will produce exactly that. Words. Not results.
If you want different behavior, change the goal.
1. Make the goal "build the capacity to adapt" instead of "implement AI." The tool is secondary. The muscle is primary. Can your organization describe its processes? Can it clean its data? Can it train its people? Can it listen to the edges? Can it run a small experiment and measure the result? If yes, you have the capacity to adapt to whatever comes next, AI or otherwise. If no, buying more AI tools won't change that.
2. Measure behavior, not announcements. The gap between what executives say and what they do is measurable. Track it. How many AI initiatives went from pilot to production? How many were shelved? What happened to the budget that was announced vs. the budget that was spent? What was the ROI on the last 3 technology investments? If leadership can't answer these questions with specifics, the strategy is performative. (A sketch of what tracking this could look like follows this list.)
3. Fix the information flow. Kim Scott's Radical Candor isn't just a management philosophy. It's an operating requirement for transformation. If the only information reaching the C-suite has been filtered through 4 layers of people who are incentivized to deliver good news, you're making decisions on fiction. Create channels for unfiltered signal. Anonymous feedback. Skip-level meetings. Direct contact between leadership and the people doing the work. The discomfort is the point.
4. Do the Box 2 work that nobody wants to do. Govindarajan's framework is clear: you can't create the future (Box 3) without selectively forgetting the past (Box 2). That means documenting which processes no longer serve the business. Which approval chains exist because they always have, not because they need to. Which tools are kept because switching feels hard. Which organizational structures protect status instead of producing results. This is the work executives avoid because it means admitting that decisions they made, and structures they built, need to change.
5. Stop managing up. Start managing out. The executive's job is not to tell the board what they want to hear about AI. It's to build an organization that can adapt. That means being honest about what's working and what isn't. It means protecting the people who dissent when something doesn't make sense. It means creating the conditions for the canaries to sing, not just tolerating them.
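To make point 2 concrete, here's a minimal sketch of what tracking the say/do gap could look like, in Python. Everything in it is a hypothetical illustration: the AIInitiative fields and the example portfolio are assumptions for the sake of the sketch, not data from any survey cited above. The point is that all three output numbers are computable from records your PMO and finance teams already keep.

```python
# A minimal sketch of a "say/do" scorecard. All field names and figures
# here are hypothetical illustrations, not a standard or a vendor API.
from dataclasses import dataclass

@dataclass
class AIInitiative:
    name: str
    budget_announced: float   # what leadership committed publicly
    budget_spent: float       # what finance actually disbursed
    reached_production: bool  # pilot graduated into real operations

def say_do_gap(initiatives: list[AIInitiative]) -> dict:
    """Compare announced commitments against observed behavior."""
    announced = sum(i.budget_announced for i in initiatives)
    spent = sum(i.budget_spent for i in initiatives)
    shipped = sum(1 for i in initiatives if i.reached_production)
    return {
        "initiatives": len(initiatives),
        "pilot_to_production_rate": shipped / len(initiatives),
        "budget_follow_through": spent / announced,  # 1.0 = did what they said
    }

if __name__ == "__main__":
    # Hypothetical portfolio for illustration only.
    portfolio = [
        AIInitiative("support-chatbot", 2_000_000, 1_400_000, True),
        AIInitiative("forecasting-pilot", 1_500_000, 300_000, False),
        AIInitiative("doc-summarizer", 500_000, 450_000, False),
    ]
    for metric, value in say_do_gap(portfolio).items():
        print(f"{metric}: {value:.2f}" if isinstance(value, float) else f"{metric}: {value}")
```

If leadership won't fund even this level of bookkeeping, that tells you how serious the strategy is.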
The Conversation Starts Here
I've watched this pattern repeat across mobile, cloud, digital transformation, and now AI. Same myths. Same mistakes. Same gap between what leaders say and what they do.
The reason the pattern persists is systemic. Organizations are systems. They have feedback loops that resist change, information flows that filter honest signals, and goals that are revealed by behavior, not press releases. Until you address the system, no amount of AI spending will produce transformation. Just faster caterpillars.
The environment has already changed. Your customers know it. Your employees know it. Your competitors know it, even if they're no better at responding than you are.
The work is not easy. It's not fast. It won't fit in a slide deck.
Because it's hard, it's worth doing.
I'd love to hear where you think your organization sits in this picture. What myth are you living inside? Where are your canaries, and are you listening to them?
The conversation doesn't end with this series. It starts.
This concludes the 5-part series on Five Myths About AI Transformation. If you've been reading along from the beginning, thank you. If you jumped in here, go back and start with Myth 1: You don't need an AI strategy.
I break down frameworks like this in From Signal to Scale, my weekly newsletter. Three signals from AI, automation, and tech. No hype. No buzzwords. Just the stuff that actually matters if you're running or building a business.
If this was useful, you'll like what shows up on Fridays.
[Subscribe here]
Sources:
- Andriole, S.J. (2017). "Five Myths About Digital Transformation." MIT Sloan Management Review.
- BCG (2026). "AI Radar: CEO Perspectives on AI."
- BCG (2025). "The Widening AI Gap."
- Davenport, T.H. and Srinivasan, L. (2026). "7 Factors That Drive Returns on AI Investments." Harvard Business Review.
- EY (2025). "AI Pulse Survey: 500 US Senior Leaders."
- Govindarajan, V. (2016). The Three Box Solution. Harvard Business Review Press.
- Hagel III, J. Referenced in "Transformation Illusions." MIT Sloan Management Review.
- Meadows, D.H. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.
- Puentedura, R. (2006). SAMR Model.
- Scott, K. (2017). Radical Candor. St. Martin's Press.
- Westerman, G. "The Transformation Illusion." MIT Sloan / Digital Business Transformation.