Your AI Strategy Has a People Problem
And no amount of new tooling will fix it.
Last week I wrote about the infrastructure gap: the disconnect between what AI can do and what your business is actually built to support. This week, three new product launches made an even harder problem impossible to ignore.
Anthropic shipped Cowork. OpenAI upgraded Deep Research and quietly launched Frontier. All three are genuinely useful. All three assume your people are ready to use them.
Most aren't.
A 2025 Kyndryl survey spanning 25 industries and eight countries found that 45% of CEOs believe their employees are either resistant or openly hostile to gen AI in the workplace. A BCG study from the same year found that while 85% of leaders and 78% of managers regularly use gen AI, only 51% of frontline workers do. And only 36% of employees said they felt properly trained.
So we have a split. Leadership is bought in. The tools are ready. And the people who actually need to use them every day are somewhere between skeptical and terrified.
That gap is where AI projects go to die.
Why This Matters Now
The AI adoption conversation has shifted. Two years ago, the question was "Does this technology work?" Today, it clearly does. The bottleneck moved from capability to readiness, and most companies haven't noticed.
Harvard Business Review's March/April 2026 issue drives this home from multiple angles. Researchers Hermann, Puntoni, and Morewedge found that workers' resistance to gen AI isn't about laziness or technophobia. It comes down to three psychological needs being threatened: competence (feeling effective at your job), autonomy (feeling in control of your work), and relatedness (feeling connected to other people).
When AI gets introduced poorly, it threatens all three at once.
Think about it from the worker's perspective. You've spent years getting good at something. Now a tool does a version of it in seconds. Your boss mandates you use it. The collaborative parts of your job, the conversations with colleagues where you used to figure things out together, get replaced by a chat interface. You feel less skilled, less in control, and more isolated. All at the same time.
No wonder 31% of employees in a recent cross-industry survey admitted to actively working against their company's AI initiatives. Among Gen Z workers, it was 41%.
This isn't a training problem. It's a trust problem dressed up as a skills gap.
The Infrastructure Was Step One. People Are Step Two.
In a separate HBR piece from January 2026, Bouquet, Wright, and Nolan lay out a framework for matching AI strategy to organizational reality. Their core argument is that AI fails not because the technology is weak, but because there's a mismatch between what leaders want to achieve and what their value chains, operating models, and teams can actually support.
The numbers back this up. S&P Global Market Intelligence reported that 42% of companies abandoned the majority of their AI initiatives in 2025, up from 17% in 2024. On average, 46% of proof-of-concepts were scrapped before reaching production. And among companies spending over $1 million annually on AI, only a third report significant ROI.
That's a lot of money being lit on fire. And the pattern is consistent: the technology works in the lab, then hits organizational reality and stalls.
GM designed an AI-generated seat bracket that was 40% lighter and 20% stronger than the original. It never made it to production. The manufacturing system couldn't handle the geometry. Retooling would have taken years.
Zillow built an AI pricing model and bought 27,000 homes based on its recommendations. The model was off by as much as 6.9% on off-market listings. Result: a $304 million write-down, 2,000 layoffs, and the entire business unit shut down.
These aren't technology failures. They're organizational readiness failures. And for companies in the $2M-$35M range, the same pattern plays out on a smaller scale every day. You buy the tool. Your team doesn't use it. Six months later, you're back to spreadsheets.
What Actually Works: Build From the Edges
The companies getting this right share a common approach. They don't mandate AI from the top down. They build from the edges, one team at a time, with the people who actually do the work involved in every decision.
Here's what that looks like in practice:
Start with the willing. Pick one recurring workflow where the person currently doing it manually actually wants to try something new. Don't force AI on your most resistant team members first. That's like teaching someone to swim by pushing them off a dock. Find the person who's already curious, run a side-by-side pilot, and let them define what "good" looks like. Their enthusiasm becomes contagious. Their results become proof.
Address the psychology before the technology. The AWARE framework from HBR offers a useful structure here. Acknowledge how AI might affect people's sense of competence, control, and connection. Watch for signs of resistance: not just vocal pushback, but quiet withdrawal, task avoidance, or people secretly using unsanctioned tools (54% of workers in the BCG study said they'd use AI tools without formal approval). Match your support systems to what people actually need, not what the vendor's onboarding guide says. Redesign workflows around human-AI collaboration rather than just layering tools onto broken processes. And give workers real say in how AI affects their work.
Redesign the workflow, not just the tool. BCG found that companies focusing on end-to-end workflow redesign rather than just deploying tools reported better training effectiveness, more leadership support, more time saved, and higher worker engagement. Dell, before introducing gen AI tools, first simplified its sales processes, consolidated content and systems, and removed redundancies. Adding AI after that cleanup made the gains stick. Moderna merged its technology and HR departments into a single unit specifically to design AI workflows collaboratively and decide what stays human-led versus what gets handled by AI.
Run real experiments, not informal pilots. Berndt, Englmaier, Sadun, Tamayo, and von Hesler make a strong case in HBR for treating AI adoption as a portfolio of organizational experiments rather than one big bet. The difference between a pilot and an experiment matters. Pilots are informal tests with handpicked teams and anecdotal feedback. Experiments have control groups, clear hypotheses, and measurable outcomes. GitHub and Google ran controlled trials where developers were randomly assigned to code with or without AI assistants. Those using AI completed tasks 21% to 55% faster and reported greater job satisfaction. A Fortune 500 company staggered the rollout of a gen AI assistant to 5,000+ customer support agents and measured a 14% productivity increase overall, with a 34% increase for less-experienced agents. That kind of evidence gives you something real to scale with. (For a concrete picture of what that measurement looks like, see the sketch after this list.)
Let people see every AI action. When Siemens tested its gen AI shop-floor assistant, maintenance technicians were initially skeptical about their job security. Within a few weeks of using the tool, they reported feeling more secure. The tool cut the time it took to find information, which freed them to spend more time on the work only they could do. They started using the assistant to expand their knowledge of machines and recurring incidents. Less dependence on senior colleagues' availability. More autonomy. More competence. That's the opposite of what they feared.
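To make "measurable outcomes" concrete, here's a minimal sketch in Python of the comparison a real experiment gives you. It assumes you've logged task completion times for a randomly assigned AI group and a control group; the numbers, group sizes, and variable names are invented for illustration, not taken from any of the studies above.

```python
from statistics import mean, stdev

from scipy.stats import ttest_ind  # standard two-sample t-test

# Hypothetical task-completion times in minutes over a two-week pilot.
# The "ai_group" was randomly assigned the AI assistant; "control" worked as usual.
ai_group = [34, 41, 29, 38, 33, 36, 31, 40, 28, 35]
control = [47, 52, 44, 49, 55, 43, 50, 46, 48, 51]

# Hypothesis: AI-assisted workers complete the task faster than the control group.
t_stat, p_value = ttest_ind(ai_group, control)

time_saved = 1 - mean(ai_group) / mean(control)
print(f"AI group:      {mean(ai_group):.1f} min avg (sd {stdev(ai_group):.1f})")
print(f"Control group: {mean(control):.1f} min avg (sd {stdev(control):.1f})")
print(f"Time saved:    {time_saved:.0%}")
print(f"p-value:       {p_value:.4f}")  # a small value means the gap is unlikely to be chance
```

The statistics aren't the point. The point is that random assignment and a control group turn "the team seems to like it" into a number you can defend when it's time to scale.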
The Mistakes That Kill AI Adoption
Mandating AI use without addressing why people resist it. Microsoft and Shopify both mandated gen AI use. The research shows this creates an "algorithmic cage," a set of standardized procedures that strips workers of the ability to tailor tasks to their own needs. When you hold people responsible for AI-generated output they don't trust, you're setting up a fight.
Treating training as a checkbox. Only 36% of employees in the BCG survey felt properly trained on gen AI tools. Many described their training as too short or superficial. And 52% of IT decision-makers said they don't even know what they need to do to train employees on gen AI. You can't run a two-hour webinar on prompt engineering and call it done.
Ignoring the generational divide. Experienced workers think younger colleagues are misusing AI. Younger workers think senior ones are stuck in the past. Both sides believe their own use of AI is legitimate while questioning the other's. This resentment weakens teams if nobody addresses it directly.
Scaling before you have evidence. The "productivity J-curve" is real. Performance typically dips when organizations first adopt a new technology, then rises once complementary investments pay off. A 2025 McKinsey survey found that while many firms had rapidly adopted gen AI, more than 80% reported no significant impact on earnings yet. That doesn't mean AI doesn't work. It means you haven't reorganized tasks, skills, and workflows around it yet. Scaling a tool that hasn't been properly woven into your operations just multiplies the dysfunction.
Assuming the technology is the strategy. AI is a tool that brings strategy to life. It is not the strategy itself. The difference between GM's failed seat bracket and Apple's successful metalens project wasn't the AI. It was the system around it. Apple controlled the value chain from design through manufacturing. GM's supply chain couldn't support the output. For your business: if your processes are broken, AI won't fix them. It'll just break them faster.
Getting Started This Week
If you read last week's Deep Dive and audited your infrastructure, good. Now do the people audit.
One conversation. Sit down with one team lead whose work involves repetitive, time-consuming tasks. Ask them: "If you could hand off one piece of your weekly workload to a tool that did it 80% as well as you, what would it be?" Listen to what they say. Also listen to what they're afraid of.
One experiment. Pick that task. Run a side-by-side test for two weeks. The team member does it their way and also tries it with an AI tool. They compare the outputs. They decide if it's worth continuing. Nobody mandates anything. The person doing the work has veto power.
One measurement. Track three things: time saved, output quality (how many edits the AI version needs), and how the person feels about it. That last one matters more than you think. If the tool saves time but makes someone miserable, you haven't solved anything.
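To make those three measurements concrete, here's one way you might log them during the two-week test. This is a hypothetical sketch; the structure, field names, and numbers are mine, and a spreadsheet with the same three columns works just as well.

```python
from dataclasses import dataclass

@dataclass
class TrialRun:
    task: str
    minutes_manual: float    # time to do it the usual way
    minutes_with_ai: float   # time with the AI tool, including fixing its output
    edits_needed: int        # corrections the AI draft required (quality proxy)
    satisfaction: int        # 1-5: how the person felt about the AI-assisted run

# Example entries from a hypothetical two-week side-by-side test.
runs = [
    TrialRun("weekly sales report", 90, 35, 4, 4),
    TrialRun("weekly sales report", 85, 30, 2, 5),
    TrialRun("weekly sales report", 95, 50, 7, 2),
]

time_saved = sum(r.minutes_manual - r.minutes_with_ai for r in runs)
avg_edits = sum(r.edits_needed for r in runs) / len(runs)
avg_satisfaction = sum(r.satisfaction for r in runs) / len(runs)

print(f"Time saved across the pilot: {time_saved:.0f} minutes")
print(f"Average edits per AI draft:  {avg_edits:.1f}")
print(f"Average satisfaction (1-5):  {avg_satisfaction:.1f}")
```

Whatever you use, record all three numbers for every run. A tool that saves 40 minutes but scores a 2 on satisfaction is telling you something the time-saved column never will.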
The companies that figure out the people side of AI will pull ahead. Not because they have better tools; everyone has access to the same tools. They'll win because they built the organizational muscle to actually use them.
That's the difference between buying a gym membership and getting in shape.
This is what I cover in detail in The Modern Digital Business Blueprint. If you want a structured approach to leveraging AI and frontier technologies across your business, with hands-on guidance and a group of peers doing the same work, the next cohort opens soon.
If you're not already reading Signal to Scale, that's where I share tools and approaches like this every Friday. [Subscribe here]