Has the productivity impact of AI gone missing in action? We are now well over three years on from the launch of ChatGPT in November 2022, since when there has been huge investment in AI across the global economy. No less than 61% of all venture capital ($259 billion) went into AI companies in 2025, according to the OECD. Corporate investment in AI has been estimated at $252 billion in 2024, while one analyst firm estimated that overall worldwide spending on AI was nearly $1.5 trillion in 2025. Given this colossal investment, it is reasonable to ask what impact it is having on productivity. A number of recent reports suggest that any impact is either small or hard to detect at all.
A February 2026 US study by the National Bureau of Economic Research interviewed 6,000 executives (mostly CEOs and CFOs) about the impact of AI. Two-thirds of executives reported that their companies use AI, but 90% said that AI has had no impact on productivity (or employment) over the last three years. The same executives nonetheless hope that productivity may increase by 1.4% due to AI within three more years. A separate 2026 Accenture survey of UK executives told a similar story. More than a quarter of UK executives said that pulling the plug on AI today would have “little immediate impact”, while 46% said that AI had so far delivered little or no impact on profitability. Additionally, 58% said that they are not ready to integrate AI agents with core enterprise IT systems due to the complexity of their existing IT landscape.
It is fair to say that economic impact takes a long time to filter through to official government statistics, but these surveys capture executive assessments right now. A January 2026 UK government report said that “there is currently limited robust statistical evidence that higher AI adoption at the firm level is linked to higher overall productivity.” These reports back up earlier studies that show very high failure rates in AI projects, the most widely reported being the 95% failure rate found by an in-depth MIT study released in August 2025. Other reports, from RAND, Boston Consulting Group and others, found similarly high AI project failure rates.
There appears to be a disconnect between the faith in AI of company executives, who continue to pour money into the technology, and the people on the ground actually using it. One late 2025 study found a “pretty shocking disconnect” between executive vision and employee experience of AI. This disconnect has been reported by survey after survey. A Wall Street Journal article in January 2026 found that 40% of workers report no time saved whatsoever by using AI, with a further 47% reporting a saving of less than four hours a week. Some 19% of executives reckoned that their companies were saving more than 12 hours a week per employee, but only 2% of employees agreed. In a 2026 Stanford study, just 29% of US survey participants felt that AI would create new jobs and new ways of working. One in six US employees lie about using AI to please their bosses. A 2026 survey by WalkMe of 3,750 employees across 14 countries found that 80% were either avoiding or actively rejecting AI at work.
The fact that this disconnect between executive belief and employee reality with regard to AI is now showing up in study after study suggests that this phenomenon is not just noise. There appears to be a worrying gap between the perceptions of AI among those leading enterprises and those employees on the ground actually tasked with using it. There are many reasons why this may be the case. Large language models (LLMs) hallucinate at a high rate, meaning that output is frequently fabricated and needs careful checking. LLMs are probabilistic in nature, whereas enterprises are used to dealing with deterministic systems that behave reliably and consistently. There are also issues with integrating AI into legacy applications, problems with data quality, gaps in employee skills and more, all of which may be hindering successful AI adoption.
LLMs have many uses, such as pattern recognition, creative text generation, image and video creation, and software engineering. There are areas where AI is already delivering value: software development, customer support, and content production have all seen measurable efficiency gains from AI-assisted workflows. An April 2026 research paper from Stanford examined 51 successful AI projects. However, these gains appear to be uneven, often concentrated in specific roles or tasks, and may not yet be large enough to shift overall productivity metrics at the firm or economy level. Even in the Stanford report, 61% of companies reporting a successful project had previously had at least one prior failure.
On the other hand, most business processes require highly reliable computer systems. Industrial processes require failure rates far below 1%. Six Sigma compliant organisations (such as Motorola, Boeing or General Electric) aim for 99.99966% accuracy, or 3.4 defects per million opportunities. LLMs regularly hallucinate at a 20% rate or higher, depending on a number of factors. To be more precise, they usually fall in a range of 20%-27%, according to one April 2026 study. Even in the most constrained of cases (where the “temperature” of the LLM is artificially set to zero), hallucination rates are around 5%. These numbers are simply too high for a lot of tasks in industry. You don’t want one in five of your deliveries going to the wrong address, or one in five of your bank payments going to random bank accounts. This is especially the case in regulated sectors such as medicine or finance, or where safety critical systems are involved. If humans need to be “in the loop” to check LLM output, then that extra work eats into any productivity savings that the AI may have made.
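The scale of this reliability gap is easy to see with some back-of-the-envelope arithmetic. The sketch below is purely illustrative (the step counts and the independence assumption are mine, not from any cited study): it compares the chance of completing a short multi-step workflow without a single error at the Six Sigma defect rate versus the hallucination rates quoted above.

```python
def error_free_rate(per_step_error: float, steps: int) -> float:
    """Probability that every step of a workflow completes without error,
    assuming independent steps with the same per-step error rate."""
    return (1 - per_step_error) ** steps

# Error rates from the discussion above
six_sigma = 3.4 / 1_000_000   # Six Sigma target: 3.4 defects per million
llm_temp0 = 0.05              # constrained, temperature-zero case
llm_typical = 0.20            # low end of the 20%-27% range

# Chance of getting five consecutive steps right at each error rate
for label, p in [("Six Sigma", six_sigma),
                 ("LLM @ temp 0", llm_temp0),
                 ("LLM typical", llm_typical)]:
    print(f"{label:>12}: {error_free_rate(p, 5):.5f}")
```

At a 20% per-step error rate, roughly two-thirds of five-step workflows contain at least one error, whereas a Six Sigma process remains essentially error-free. Real hallucinations are unlikely to be independent across steps, so this is a rough bound rather than a prediction.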
As time passes there will doubtless be more research into this area, but it would seem that enterprises need to carefully scrutinise the productivity claims of AI vendors and those who gain from selling the technology. There are certainly some good use cases for AI, but we need to be collectively more careful about finding the best use cases, and selective about where it is applied. Above all, we need to treat AI like any other technology investment, and carefully measure its impact and return on investment. The honeymoon period for AI is over.