80% of Firms Report AI Has Zero Impact on Productivity. Here Is What That Data Actually Means.
Eighty percent of firms across the US, UK, Germany, and Australia report that AI has had zero impact on their productivity over the past three years. Not negative. Not marginal. Zero. That's the headline finding from an NBER working paper published in February 2026, based on a survey of nearly 6,000 executives.
At the same time, 4% of all public GitHub commits are now authored by Claude Code, according to SemiAnalysis. Anthropic says Claude Code users average 20 hours per week with the product. A subset of power users at Anthropic self-reports more than doubling their productivity on certain tasks.
Both of these things are true, simultaneously. And the gap between them tells you everything about where AI adoption actually stands.
The Survey Data Is Worse Than the Headlines
The NBER study didn't just find that 80% of firms saw no productivity impact. It found that over 90% reported no change in employment levels, and 89% reported no change in labor productivity (measured as sales volume per employee). These aren't self-selected tech companies. They're stratified samples across four major economies.
Here's the detail that makes the numbers sting: 70% of these firms are actively using AI in some capacity. Two-thirds of their top executives use AI personally. But the average executive spends just 1.5 hours per week on it.
That's the gap. Organizations are buying AI tools. They're deploying them. And then almost nobody is using them enough to move any metric that matters.
The Gallup workplace survey tells the same story from a different angle. In Q4 2025, 38% of employees said their organization had integrated AI to improve productivity, essentially flat from Q3. Overall adoption stalled. But among those already using AI, frequent use jumped. Daily users ticked up. The pattern is clear: the converted are going deeper while the majority hasn't started.
Why the Skepticism Runs So Deep
Financial historian William Quinn, co-author of "Boom and Bust: A Global History of Financial Bubbles," put it bluntly in the New York Times: "I can't really remember a boom with such active hostility to it."
He's right, and the data backs him up. Over one-third of Americans believe AI could end human life, per a YouGov survey. Eighty percent support regulation even if it slows development, according to Gallup. Employee fears of AI-driven job loss jumped from 28% in 2024 to 40% in 2026, per Mercer's Global Talent Trends survey of 12,000 workers.
This isn't just abstract anxiety. Edelman's Trust Barometer from January 2026 found that 54% of low-income respondents and 44% of middle-income respondents fear being "left behind" by generative AI. People aren't skeptical because they don't understand AI. They're skeptical because they understand exactly what's being promised, and the last three hype cycles (crypto, metaverse, Web3) burned them.
Even the industry's biggest boosters acknowledge the problem. Sam Altman told the New York Times that AI diffusion "feels surprisingly slow." Jensen Huang described 2025 as defined by a "battle of narratives" dominated by critics and called AI doomerism "extremely hurtful."
Where Value Is Actually Landing
So if 80% of firms see nothing, where's the other 20%? The answer is narrow and specific: coding, and a handful of well-scoped automation tasks.
The Claude Code numbers are real. SemiAnalysis reports that 4% of public GitHub commits are now Claude Code-authored, with projections hitting 20% by the end of 2026. Spotify says two-thirds of its engineering staff adopted Claude Code, beating out competing tools. These aren't vanity metrics. They represent actual code shipping to production.
But coding assistance is a best-case scenario for AI adoption. The task is well-defined. The output is verifiable (it compiles or it doesn't, the tests pass or they don't). The users are technically sophisticated enough to prompt well and catch errors. Most work doesn't look like this.
The NBER study found that firms do expect change going forward: a 1.4% productivity boost and 0.7% headcount reduction over the next three years, translating to roughly 1.75 million jobs across the four surveyed countries. But those are predictions, not measurements. And the predictions come from the same executives who've spent three years deploying AI to zero measurable effect.
What Practitioners Should Actually Do With This
If you're trying to get an AI project resourced, funded, or adopted inside an organization that looks like the 80%, here's what the data actually tells you.
Stop selling transformation. Start selling the task. The places where AI works are places where someone identified a specific, repetitive task with a verifiable output and gave a technically capable person time to integrate it. (Part of that work is picking the right model for the workload; the cost spread across 10 capable models is over 100x.) The places where AI doesn't work are places where someone bought an enterprise license, sent a company-wide email, and waited for productivity to materialize.
The 1.5-hour problem is your real enemy. Executives are using AI for 90 minutes a week. That's not enough to build fluency with any tool, let alone one that requires prompt engineering skills most people haven't developed. If your team's AI usage looks like "opened ChatGPT once to summarize meeting notes," you're in the 80%. The path to the 20% is sustained, daily use on specific workflows.
Lead with the fear data, not against it. Forty percent of employees fear AI will cost them their jobs. Pretending that fear doesn't exist, or dismissing it as irrational, guarantees resistance. The organizations in the productive 20% are the ones that reframed AI as a tool that makes existing employees more capable, not a replacement pipeline.
Pick your beachhead based on verifiability. Code generation works because you can see if it works. Customer service automation works because you can measure resolution rates. Content summarization works because you can spot-check the output. If you can't define a clear metric for "this AI integration is working," you're building toward the 80%.
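To make "define a clear metric" concrete, here is a minimal sketch of what tracking a verifiable outcome could look like. All names, thresholds, and data here are illustrative assumptions, not from the surveys above; the only point is that every AI-assisted task gets a binary pass/fail check against a concrete criterion (tests passed, ticket resolved, summary spot-checked).

```python
from dataclasses import dataclass, field

@dataclass
class AdoptionMetric:
    """Hypothetical tracker: one verified pass/fail outcome per AI-assisted task."""
    outcomes: list[bool] = field(default_factory=list)  # True = verified success

    def record(self, verified_success: bool) -> None:
        # Each entry answers a concrete question, e.g. "did the generated code pass CI?"
        self.outcomes.append(verified_success)

    def success_rate(self) -> float:
        # Share of tasks where the AI output passed its verification check
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

# Illustrative usage: four AI-assisted tasks, each checked against its criterion
metric = AdoptionMetric()
for passed in [True, True, False, True]:
    metric.record(passed)

print(f"Verified success rate: {metric.success_rate():.0%}")  # 75%
```

If you can't fill in the `record()` call with an objective check for your workflow, that's the signal you're building toward the 80%.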
The Productivity Paradox Has a Timer on It
This pattern has a name. Economist Robert Solow observed in 1987 that "you can see the computer age everywhere but in the productivity statistics." It took another decade before IT investments showed up in macroeconomic data. The AI version of this paradox is playing out faster, but it's still playing out.
The difference this time is the hostility Quinn identified. The dot-com boom had irrational enthusiasm pushing adoption forward. The AI boom has genuine fear pulling adoption back. That means the gap between the 20% who figure it out and the 80% who don't will widen before it narrows.
If you're a practitioner, that gap is your opportunity. Not because AI is magic, but because the bar for "doing AI well" is so low right now that basic competence is a competitive advantage. Ninety minutes a week isn't a strategy. Twenty hours a week, on the right tasks, is.