Most articles about artificial intelligence (AI) today discuss its capabilities and its potential for innovative applications, from chatbots to predictive maintenance in engineering. Far less well explored is the cost of delivering these projects, and less well addressed still is the cost of supporting an AI model once it is in production. We know that at least half of existing corporate IT budgets are spent on support and maintenance: exact estimates vary, but a 2018 Deloitte survey found that the average enterprise spent 57% of its IT budget on operations, while a 2019 U.S. Government Accountability Office report found that the federal government spends about 80% of its IT budget on the operations and maintenance of existing investments.

With support and operations consuming the majority of IT budgets, it would be useful to know how AI applications compare, especially since AI projects are an increasingly important part of new project investment. McKinsey estimates that 5% of digital budgets are being spent on generative AI projects, and an April 2025 survey of 800 enterprises found that almost half of digital innovation and modernisation budgets were going to AI. The same survey found that 99% of enterprises had encountered issues that disrupted AI projects or prevented them outright, including problems accessing or managing the required data, a perception that the risk of failure had become too high, and an inability to stay on budget.
So, how do AI models compare with traditional applications in terms of supportability and maintenance costs? There are several components to support costs: people, software licences and processing (whether in-house or bills for compute and storage from public cloud services like AWS and Azure). There are also costs unique to AI: data preparation, model updating, prompt tuning, model monitoring and AI-specific security measures (defending against prompt injection attacks, for example). All of these are needed to keep an AI application running. Executing large language models (LLMs) in response to user prompts is processor-intensive. Popular LLMs like ChatGPT are typically accessed through APIs and charged based on usage: the more user traffic, the higher the costs, while longer or more complex prompts consume more tokens, which in turn increases cost. Traditional computer systems run on CPUs, but AI usually runs on GPUs, which are far more expensive, and that fundamental cost difference is built into the charges made by cloud service vendors.
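To make that usage-based pricing concrete, here is a minimal sketch of how a monthly API bill might be estimated from traffic and token counts. The per-token prices below are illustrative assumptions for the sake of the example, not any vendor's actual rates:

```python
# Rough sketch of estimating monthly spend on a token-metered LLM API.
# The per-1k-token prices are assumed figures, not real vendor rates.

def monthly_llm_cost(requests_per_day: int,
                     avg_input_tokens: int,
                     avg_output_tokens: int,
                     price_in_per_1k: float = 0.005,   # assumed $ per 1k input tokens
                     price_out_per_1k: float = 0.015,  # assumed $ per 1k output tokens
                     days: int = 30) -> float:
    """Estimate a monthly API bill in dollars."""
    daily = (requests_per_day * avg_input_tokens / 1000) * price_in_per_1k \
          + (requests_per_day * avg_output_tokens / 1000) * price_out_per_1k
    return daily * days

# Example: 10,000 requests a day, averaging 800 input and 400 output tokens.
# With the assumed prices this works out at roughly $3,000 a month.
print(monthly_llm_cost(10_000, 800, 400))
```

The point of the sketch is that the bill scales with both traffic and prompt length, which is exactly why longer or more complex prompts push costs up.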
Beyond computing costs, AI models require ongoing care and attention: monitoring for model drift, fine-tuning, updating and so on, all carried out by premium-priced AI engineers, who typically cost perhaps 25% to 40% more than staff supporting traditional applications. LLMs are also a relatively new technology compared with traditional applications that have been around for decades. Traditional databases and applications are relatively stable and well understood, with predictable running costs; newer LLM-based applications are still bedding in, with unpredictable running costs that may result in unexpected bills. Moreover, at present the major LLM vendors, such as OpenAI and Anthropic, are running their models below cost. OpenAI licenses ChatGPT Enterprise at around $60+ per user per month, with a minimum of 150 users and a one-year contract. Yet in the first half of 2025, OpenAI generated $4.3 billion in revenue while losing around three times that amount. The venture capital industry is currently subsidising the true costs of ChatGPT, so ChatGPT pricing is likely to rise over time, as eventually the company will need to make a profit. Every query run on ChatGPT today actually loses money for OpenAI, even those run on an enterprise licence. Corporate customers need to factor possible future vendor price rises into the long-term costs of their LLM projects.
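As a quick sanity check on the licensing figures quoted above, the minimum annual commitment implied by those terms is simple arithmetic:

```python
# Minimum annual spend implied by the quoted ChatGPT Enterprise terms:
# roughly $60 per user per month, a 150-user minimum, one-year contract.
per_user_per_month = 60
minimum_users = 150
minimum_annual_spend = per_user_per_month * minimum_users * 12
print(minimum_annual_spend)  # 108000, i.e. at least ~$108k a year
```

In other words, even the entry point for an enterprise LLM licence is a six-figure annual commitment before any of the surrounding support costs are counted.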
Because LLMs are new, there is limited hard data available so far on the true support and operations costs of running LLMs. However, anecdotal evidence suggests that LLMs are substantially costlier to run than traditional applications, for the reasons given.
This matters because annual support costs, as well as initial project costs, need to be factored into the return on investment of any IT application. An application that costs twice as much to maintain needs substantially higher cost savings or other benefits to set against that in order to deliver a positive return. With 95% of current AI projects showing no return on investment whatsoever (according to MIT), this is a further hurdle for AI projects to overcome when companies decide whether to invest in them. This dawning realisation may account for the decline in AI adoption across corporate America in August 2025, as measured by the US Census Bureau's large survey of 1.2 million businesses.
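The effect of higher support costs on returns can be illustrated with a simple multi-year comparison. All figures here are hypothetical, chosen only to show the shape of the problem rather than to reflect any real project:

```python
# Illustrative only: hypothetical project figures, not measured data.

def simple_roi(build_cost: float, annual_benefit: float,
               annual_support: float, years: int = 3) -> float:
    """Net return over the period divided by total cost over the period."""
    total_cost = build_cost + annual_support * years
    total_benefit = annual_benefit * years
    return (total_benefit - total_cost) / total_cost

# Traditional app: $500k build, $150k/yr support, $400k/yr benefit.
trad = simple_roi(500_000, 400_000, 150_000)
# Same build cost and benefit, but support costs doubled, as the anecdotal
# evidence suggests for LLM applications.
llm = simple_roi(500_000, 400_000, 300_000)

# With these numbers the traditional project returns about 26% over three
# years, while the otherwise identical LLM project is negative (about -14%).
print(f"traditional: {trad:.0%}, LLM: {llm:.0%}")
```

Doubling the annual support bill, with everything else held constant, is enough to flip a respectable three-year return into a loss, which is the core of the argument for scrutinising operating costs before approving an AI project.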
For all the excitement about AI, it is ultimately a technology investment like any other, and it needs to be assessed in terms of its return on investment, just like any corporate project. More research is needed into the support and operating costs of LLM-based projects so that the true return on investment of AI projects can be measured. If, as seems likely, LLM applications cost substantially more to support than traditional ones, then these projects need to deliver a matching increase in hard monetary benefits to offset this. In the headlong rush to implement AI, corporate executives should take a cool, calm and collected view of overall operating costs as well as project costs, and assess AI projects like any other investment. In the end, do they deliver a return? Until AI projects prove they can deliver value beyond their ever-rising support bills, they remain experiments, not true investments.







